

How to integrate live transcription API with Twilio to transcribe calls in real time

Published on Sep 28, 2023

Twilio, used by hundreds of thousands of businesses and more than ten million developers worldwide, can now integrate with our live transcription API. The integration lets users natively transcribe any phone call in real time while using Twilio. With transcribed text at your disposal, you'll be able to analyze, archive, and act upon voice data more effectively.

Below, you’ll find a step-by-step guide on setting up the Twilio integration with Gladia API in JavaScript for free.

What can you do with the Twilio integration?

Any developer can use this integration to transcribe phone calls in real time.

How to implement Twilio + Gladia real-time transcription integration

Step 1: Set up your Gladia account

If you haven't already, sign up for our Speech-to-Text API at app.gladia.io and obtain your API key.

Step 2: Create and parametrize your Twilio account

  • Create an account at https://www.twilio.com/try-twilio
  • Get a phone number by following the first step on the main page of your Twilio account.
  • In the left panel, go to Develop > United States (US1) > Phone Numbers > Manage > Active numbers.
  • Click the phone number you just created.
  • In the 'Configure' panel, under the 'Voice Configuration' section, set the 'A call comes in' field to 'Webhook', with URL = 'http://[your-ip-address]:[your-app-port-number]' and HTTP = 'HTTP POST'

Step 3: Configure your server and install dependencies

  • In the .env file, add a GLADIA_API_KEY variable with the API key obtained from Gladia's website, and a PORT variable set to the port you used when configuring your phone number in the section above (default is 8080)
  • Install dependencies:

npm i
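For reference, the resulting .env file might look like this (values shown are placeholders, not real credentials):

```shell
# .env -- placeholder values
GLADIA_API_KEY=your_gladia_api_key_here
PORT=8080
```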

Step 4: Make it work

  • Launch the websocket server:

npm run start

Voilà! The transcription should now appear in the server logs.
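Under the hood, the server bridges the two websockets: Twilio sends JSON frames whose "media" events carry base64-encoded 8 kHz mu-law audio, and the server forwards the decoded chunks to Gladia. A sketch of the parsing step (the Gladia-side message shapes in the comments are assumptions; consult the live API reference and the source repository for the exact protocol):

```javascript
// Parses a frame from the Twilio media-stream websocket.
// Twilio's "media" events carry base64-encoded mu-law audio in
// msg.media.payload; other events ("start", "stop", ...) carry none.
function extractAudioChunk(rawFrame) {
  const msg = JSON.parse(rawFrame);
  if (msg.event !== 'media') return null; // ignore non-audio events
  return Buffer.from(msg.media.payload, 'base64');
}

// Forwarding sketch (hypothetical names; the Gladia websocket message
// format is an assumption -- check the current API documentation):
//
//   twilioWs.on('message', (raw) => {
//     const chunk = extractAudioChunk(raw);
//     if (chunk) gladiaWs.send(chunk);            // stream audio out
//   });
//   gladiaWs.on('message', (msg) => {
//     console.log(JSON.parse(msg).transcription); // log transcripts
//   });
```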

🔗 Source GitHub repository is available here.

Feel free to check out the video version of the tutorial for a step-by-step walkthrough with one of our software engineers, Antoine.

We hope you enjoyed this how-to tutorial! Given how much audio data still goes to waste, we're always curious to explore the many ways in which transcription tech can be used to remedy that. Let us know if you end up using our API with Twilio, Discord, or other platforms — we'd love to hear from you.

About Gladia

At Gladia, we built an optimized version of Whisper in the form of an API, adapted to real-life professional use cases and distinguished by exceptional accuracy, speed, extended multilingual capabilities and state-of-the-art features, including speaker diarization and word-level timestamps.
