
Introducing Whisper-Zero
Published on Apr 2024

Today, we're thrilled to release a breakthrough ASR system, Whisper-Zero — a complete rework of Whisper combined with multiple state-of-the-art models, trained on over 1.5 million hours of diverse audio, including phone-quality and noisy recordings from real-life environments.

The biggest product milestone for Gladia to date, Whisper-Zero removes virtually all hallucinations from transcription, providing better accuracy, faster speed, enhanced language support, and more features to our users. All in a single production-ready transcription and audio intelligence API.

Our story with optimizing Whisper

Gladia’s core product has been based on the Whisper architecture since the company's inception. Released by OpenAI in 2022, the transformer-based Whisper model set a new standard for automatic speech recognition (ASR) in both accuracy and multilingual capabilities. Despite its many advantages, the model came with usage limitations and hardware requirements that made it impractical for enterprise needs and scale.

In the months following Whisper's release, Gladia transformed the open-source version of the model into a production-grade transcription API for companies. Compared to the original, Gladia delivered better accuracy, extended multilingual support, and additional high-value features like live streaming transcription, translation, speaker diarization, word-level timestamps, and code-switching (i.e., detecting a language change within an audio recording).

There was one pain point we had yet to solve — hallucinations, a phenomenon where an ASR system produces transcriptions containing words or phrases that were not present in the original audio.

Towards hallucination-free audio transcription

Powered by a predecessor of GPT-3 at the decoding phase, Whisper is notoriously prone to hallucinations, which stem from internal factors — such as training data and model architecture — and external factors like complex input audio. It has even been reported that the latest version of the model, Whisper large-v3, released a few weeks back by OpenAI, is in fact more likely to hallucinate than the most accurate of the 'Whispers', large-v2.

Despite being described by the CEO of OpenAI as the "magic of AI", hallucinations are in reality a major pain point for any company that relies on transcription to improve its operations and deliver a better user experience. By reducing the overall accuracy of transcription, they make it harder for companies to build ASR-powered apps on top of transcripts. This is especially true in use cases where data extracted from transcriptions feeds a database directly, as in automated CRM enrichment, or where the transcript is shown to the end user in real time via live captions.

Gladia has committed to fixing this issue once and for all. In addition to upgrading the existing feature set, we have improved the model’s architecture to mitigate Whisper’s hallucination flaw. The resulting word error rate (WER) — a metric used to assess the accuracy of speech recognition systems — is 10-15% better than that of both Whisper large-v2 and large-v3.
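As a refresher on the metric: WER is the word-level edit distance (insertions, deletions, and substitutions) between a reference transcript and the system's output, divided by the number of words in the reference. A minimal, illustrative sketch — not Gladia's evaluation pipeline, which would also normalize casing, punctuation, and numbers before scoring:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit-distance table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions to match an empty hypothesis
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions to match an empty reference
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match / substitution
    return d[len(ref)][len(hyp)] / len(ref)

# A hallucinated phrase counts as insertions and inflates WER:
print(wer("thanks for calling", "thanks for calling please subscribe"))
```

Note how hallucinations hurt this metric directly: every fabricated word is an insertion error, even when the genuinely spoken words were transcribed perfectly.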

Delivering the best version of enterprise Whisper

Moreover, Whisper-Zero has been optimized specifically for complex environments to address another of Whisper's limitations: the base model was trained on large volumes of data collected from the internet. That makes it a versatile yet generalist audio model, statistically biased towards phrasing common on the web rather than in professional audio data.

With the fine-tuning and prompt engineering done by Gladia, our customers in online meetings, media, call centers, and other domains can now enjoy better precision in real-life, non-sterile scenarios.

In addition, for this release we have put special emphasis on enhancing transcription accuracy in multilingual environments, fine-tuning Whisper-Zero to recognize a wide variety of accents.

In a nutshell, today we’re offering the market the best enterprise-grade version of Whisper: one that removes its biggest limitations, boosts performance, and extends its capabilities with more features. You can now enjoy the best version of Whisper in the cloud, without limitations, at enterprise scale.

Find out more about the release on the dedicated landing page.

