Recall and Gladia join forces to power online meetings transcription
Published on Mar 2024

Today, we are thrilled to announce a partnership aimed at empowering businesses and developers worldwide to fully leverage data from online meetings.

Recall, a pioneering provider of developer tooling, APIs, and infrastructure best known for its plug-and-play meeting bots, has teamed up with Gladia to deliver accurate, real-time transcription with code-switching support to over 100 clients worldwide.

Recall: Capturing the essence of meetings

As the world grappled with the COVID-19 pandemic, demand for video conferencing solutions skyrocketed, increasing the number of Zoom calls alone by an astonishing 100-fold.

Founded in February 2022, Recall’s mission was to provide companies worldwide with the best possible infrastructure powered by LLMs to extract valuable data from virtual meetings.

Recall allows developers to build products on top of meeting data captured from key platforms like Zoom, Google Meet, and others. Its comprehensive API enables video and audio recording, transcription, and metadata extraction (participant names, timestamps, etc.).
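
To make this concrete, here is a minimal sketch of what driving a meeting bot through such an API can look like: send a bot into a call, then fetch the recording and participant metadata it captured. The base URL, endpoint paths, and field names below are illustrative assumptions, not Recall's documented API; see Recall's documentation for the actual interface.

```python
import requests

API_BASE = "https://api.example-meetingbots.test/v1"  # placeholder base URL, not Recall's real endpoint
API_KEY = "YOUR_API_KEY"
HEADERS = {"Authorization": f"Token {API_KEY}"}

# Ask the service to send a bot into an ongoing meeting (illustrative payload).
create_resp = requests.post(
    f"{API_BASE}/bots",
    headers=HEADERS,
    json={"meeting_url": "https://zoom.us/j/1234567890", "bot_name": "Notetaker"},
)
create_resp.raise_for_status()
bot_id = create_resp.json()["id"]

# Later, fetch what the bot captured: recording link, transcript, participant metadata.
bot_resp = requests.get(f"{API_BASE}/bots/{bot_id}", headers=HEADERS)
bot_resp.raise_for_status()
bot = bot_resp.json()
print(bot.get("recording_url"))
print(bot.get("participants"))  # e.g. names and join/leave timestamps
```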

While it takes at least six months on average to build meeting bots in-house, companies using Recall can integrate these capabilities in a matter of days.

Owing to its versatility and ease of use, Recall has exhibited spectacular growth and now caters to a wide range of enterprise clients across various industries and use cases, including sales enablement tools, note-taking solutions, productivity-enhancing applications, and more. 

Gladia x Recall: Advancing meeting data transcription

Transcription is a critical component of video recording and conferencing tools provided by Recall.

At Gladia, we built an enterprise version of OpenAI’s Whisper ASR in the form of an API, distinguished by exceptional accuracy and speed, extended language support, and a variety of additional features.

Virtual meetings and note-taking have been among the most important use cases for Gladia, making our API a perfect fit for the challenges of virtual meeting transcription.

[Quote from Amanda Zhu, CEO of Recall, on the value of Gladia's live transcription for online meetings]

With Gladia's API integration, Recall's clients can now directly enjoy the benefits of instantaneous and accurate meeting transcription, including extended language support, speaker diarization, and word-level timestamps.
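
To illustrate the kind of output this unlocks, the sketch below submits a meeting recording to a speech-to-text endpoint with diarization and word-level timestamps enabled. The URL, header, and parameter names are assumptions for illustration only; consult Gladia's API documentation for the real request format.

```python
import requests

API_KEY = "YOUR_GLADIA_API_KEY"

# Illustrative request: endpoint and field names are assumptions, not the documented API.
resp = requests.post(
    "https://api.example-transcription.test/transcription",
    headers={"x-api-key": API_KEY},
    json={
        "audio_url": "https://example.com/meeting-recording.mp3",
        "diarization": True,                # attribute each utterance to a speaker
        "word_timestamps": True,            # start/end time for every word
        "language_behaviour": "automatic",  # detect and switch languages on the fly
    },
)
resp.raise_for_status()
result = resp.json()

# Print each utterance with its speaker label and start time.
for utterance in result.get("utterances", []):
    print(f"[{utterance['speaker']}] {utterance['start']:.1f}s: {utterance['text']}")
```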

We’re grateful for the trust and thrilled to partner with a company like Recall, whose ambition to help companies improve the way they work by leveraging data from meetings aligns perfectly with Gladia’s vision and objectives.

For a more detailed practical tutorial on using Gladia API with Recall’s meeting bots, head to the tutorial on Recall’s website.

About Gladia

At Gladia, we built an optimized version of Whisper in the form of an API, adapted to real-life professional use cases and distinguished by exceptional accuracy, speed, extended multilingual capabilities and state-of-the-art features.

Contact us

