How to build a Google Meet transcription bot with Python, React and Gladia API

Published on Jul 25, 2023

In today's fast-paced world, effective communication and collaboration are essential. Tools like Google Meet have revolutionized how we connect and conduct meetings remotely. However, it can be very challenging to keep track of all action items and key insights shared during long meetings.

Anyone who's used Google Meet's native transcription knows that relying on Google alone is not an option - the quality is poor, and processing takes around 30 minutes on average.

One possible solution is building a custom Google Meet transcription bot that will transcribe and summarize the meetings for you automatically!

In this tutorial, we explain how to build a smart summary bot for Google Meet using Python, React, and the Gladia speech-to-text API. The bot records your virtual meetings and transcribes them with top-quality speech-to-text AI, so you can easily summarize them afterwards with a tool like ChatGPT.

Here's what you'll do

  1. Build the backend with Python
  2. Create the frontend with React
  3. Integrate Python, React, and Gladia Speech-to-Text API

Prerequisites

Before we dive into the implementation, let's ensure we have the necessary tools and knowledge.

  1. Install Python on your system to create the bot's backend.
  2. Familiarize yourself with React, which you'll use to create the bot's user interface.

Step 1: Build the backend with Python

1. Set up a virtual environment

python3 -m venv bot-env 
source bot-env/bin/activate
2. Install necessary packages

pip install flask requests pyaudio
3. Connect to Google Meet

To interact with Google Meet, you can use the Selenium library. Install it with:


pip install selenium
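
Joining the call itself depends on Google Meet's web UI, which changes often and usually requires a signed-in Google account. The sketch below is only an illustration of the Selenium approach: the XPath selectors and button text are assumptions, not tested values, and will almost certainly need adjusting to the current UI. (Selenium also needs Chrome installed; recent Selenium versions download a matching ChromeDriver automatically.)

# Illustrative sketch of joining a Google Meet with Selenium.
# The selectors and button text below are assumptions; inspect the
# Meet page and adjust them to the current UI.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://meet.google.com/meeting-url")  # your meeting link

wait = WebDriverWait(driver, 30)

# Hypothetical: turn off the microphone and camera before joining.
for label in ("microphone", "camera"):
    try:
        toggle = wait.until(EC.element_to_be_clickable(
            (By.XPATH, f"//div[contains(@aria-label, 'Turn off {label}')]")))
        toggle.click()
    except Exception:
        pass  # the control may be named differently or already off

# Hypothetical: click the join button, identified here by its visible text.
join_button = wait.until(EC.element_to_be_clickable(
    (By.XPATH, "//span[text()='Join now']/ancestor::button")))
join_button.click()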
4. Record the meeting audio

from selenium import webdriver
import time
import pyaudio
import wave

# Set up Selenium WebDriver
driver = webdriver.Chrome()
driver.get("https://meet.google.com/meeting-url")

# Start recording the meeting audio
def record_audio():
    CHUNK = 1024
    FORMAT = pyaudio.paInt16
    CHANNELS = 1
    RATE = 16000
    RECORD_SECONDS = 600  # Adjust as per your meeting duration
    WAVE_OUTPUT_FILENAME = "meeting-recording.wav"

    audio = pyaudio.PyAudio()

    stream = audio.open(format=FORMAT, channels=CHANNELS,
                        rate=RATE, input=True,
                        frames_per_buffer=CHUNK)

    frames = []

    print("Recording started...")

    for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
        data = stream.read(CHUNK)
        frames.append(data)

    print("Recording finished.")

    stream.stop_stream()
    stream.close()
    audio.terminate()

    wave_file = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
    wave_file.setnchannels(CHANNELS)
    wave_file.setsampwidth(audio.get_sample_size(FORMAT))
    wave_file.setframerate(RATE)
    wave_file.writeframes(b''.join(frames))
    wave_file.close()

record_audio()
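
Note that PyAudio captures from an input device, so to record what you hear in the meeting you generally need to route the meeting audio through a loopback or virtual audio device and select it explicitly via input_device_index. A quick sketch to find the right device index:

# List the available input devices so you can pass the right index to
# audio.open(..., input_device_index=...) in record_audio() above.
import pyaudio

audio = pyaudio.PyAudio()
for i in range(audio.get_device_count()):
    info = audio.get_device_info_by_index(i)
    if info.get('maxInputChannels', 0) > 0:
        print(i, info['name'])
audio.terminate()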

5. Transcribe the audio

Create a free Gladia account (10 hours of transcription per month) at app.gladia.io, get your API key, and paste it into the code below.


import os
import requests

headers = {
    # Paste your Gladia API key here
    'x-gladia-key': 'YOUR_GLADIA_API_KEY',
}

file_path = "meeting-recording.wav"
file_extension = os.path.splitext(file_path)[1]  # ".wav"

with open(file_path, 'rb') as f:  # Open the recorded audio file
    files = {
        # Sending a local audio file. The tuple is: (filename, file object, MIME type)
        'audio': (file_path, f, 'audio/' + file_extension[1:]),
        # You can also send a URL to your audio file instead. Make sure it's a direct, publicly accessible link.
        # 'audio_url': (None, 'http://files.gladia.io/example/audio-transcription/split_infinity.wav'),
        # You can pass any other parameters listed at https://docs.gladia.io/reference/pre-recorded
        'toggle_diarization': (None, 'true'),
    }
    print('- Sending request to Gladia API...')
    response = requests.post('https://api.gladia.io/audio/text/audio-transcription/',
                             headers=headers, files=files)
    if response.status_code == 200:
        print('- Request successful')
        result = response.json()
        print(result)
    else:
        print('- Request failed')
        print(response.json())
    print('- End of work')


Finally, you can feed the resulting transcript to a summarization tool such as ChatGPT.
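
As a minimal sketch of that step, assuming you use the official openai Python package with an OPENAI_API_KEY environment variable, and that the Gladia response stores utterances under a prediction key (adjust the parsing if your response shape differs), the transcript could be summarized like this:

# Minimal sketch: summarize the transcript with OpenAI (pip install openai).
# Assumes the OPENAI_API_KEY environment variable is set and that `result`
# is the Gladia JSON response from the previous step.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Join the transcription segments into a single block of text.
# Adjust the keys below if your Gladia response is shaped differently.
transcript = " ".join(
    segment.get("transcription", "") for segment in result.get("prediction", [])
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "system",
         "content": "Summarize this meeting transcript into key points and action items."},
        {"role": "user", "content": transcript},
    ],
)

summary = completion.choices[0].message.content
print(summary)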

Step 2: Create the frontend with React

1. Set up a React project

npx create-react-app bot-ui

cd bot-ui
2. Design the user interface

Create your desired UI components and layout using React.

(For UI inspiration, see the "Build a React Chatbot Component" example from Ordinary Coders.)
3. Display the summary

import React, { useState, useEffect } from 'react';
import axios from 'axios';

const MeetingSummary = () => {
  const [summary, setSummary] = useState('');

  useEffect(() => {
    axios.get('/api/meeting-summary')
      .then(response => {
        setSummary(response.data.summary);
      })
      .catch(error => {
        console.error(error);
      });
  }, []);

  return (
    <div>
      <h2>Meeting Summary</h2>
      <p>{summary}</p>
    </div>
  );
};

export default MeetingSummary;
4. Implement the meeting recording functionality

import React from 'react';
import axios from 'axios';

const MeetingRecorder = () => {
  const startRecording = async () => {
    try {
      await axios.post('/api/start-recording');
      console.log('Recording started successfully!');
    } catch (error) {
      console.error('Failed to start recording:', error);
    }
  };

  const stopRecording = async () => {
    try {
      await axios.post('/api/stop-recording');
      console.log('Recording stopped successfully!');
    } catch (error) {
      console.error('Failed to stop recording:', error);
    }
  };

  return (
    <div>
      <button onClick={startRecording}>Start recording</button>
      <button onClick={stopRecording}>Stop recording</button>
    </div>
  );
};

export default MeetingRecorder;
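
Note that the relative /api/... URLs used by these components assume the React dev server forwards API requests to the Flask backend. With create-react-app, one common way to do this is to add "proxy": "http://localhost:5000" to package.json (assuming Flask runs on its default port 5000); otherwise, point axios at the backend's full URL and enable CORS on the Flask side.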

Step 3: Integrate Python, React, and Gladia Speech-to-Text API

1. Set up communication between the backend and frontend

In your Python Flask backend, create the following API endpoints:


from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/meeting-summary', methods=['GET'])
def get_meeting_summary():
    # Retrieve the summary from the database or file
    summary = retrieve_summary_from_database()
    return jsonify(summary=summary)

@app.route('/api/start-recording', methods=['POST'])
def start_recording():
    # Implement the code to start recording the meeting audio (see the sketch below)
    return jsonify(message='Recording started')

@app.route('/api/stop-recording', methods=['POST'])
def stop_recording():
    # Implement the code to stop recording the meeting audio (see the sketch below)
    return jsonify(message='Recording stopped')

if __name__ == '__main__':
    app.run()
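
The start/stop endpoints above are placeholders. As an illustrative sketch of one way to wire them up, you could run the PyAudio capture from Step 1 in a background thread and signal it with an event. The route handlers below would replace the placeholder ones above; names like stop_event and record_until_stopped are assumptions introduced for this example.

# Illustrative sketch: run the recorder in a background thread so the Flask
# endpoints can start and stop it. These handlers would replace the
# placeholder ones above.
import threading
import wave
import pyaudio

stop_event = threading.Event()
recorder_thread = None

def record_until_stopped(filename="meeting-recording.wav", rate=16000, chunk=1024):
    audio = pyaudio.PyAudio()
    stream = audio.open(format=pyaudio.paInt16, channels=1, rate=rate,
                        input=True, frames_per_buffer=chunk)
    frames = []
    while not stop_event.is_set():  # keep reading until /api/stop-recording is called
        frames.append(stream.read(chunk))
    sample_width = audio.get_sample_size(pyaudio.paInt16)
    stream.stop_stream()
    stream.close()
    audio.terminate()

    with wave.open(filename, 'wb') as wave_file:
        wave_file.setnchannels(1)
        wave_file.setsampwidth(sample_width)
        wave_file.setframerate(rate)
        wave_file.writeframes(b''.join(frames))

@app.route('/api/start-recording', methods=['POST'])
def start_recording():
    global recorder_thread
    stop_event.clear()
    recorder_thread = threading.Thread(target=record_until_stopped, daemon=True)
    recorder_thread.start()
    return jsonify(message='Recording started')

@app.route('/api/stop-recording', methods=['POST'])
def stop_recording():
    stop_event.set()
    if recorder_thread is not None:
        recorder_thread.join()
    return jsonify(message='Recording stopped')
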
2. Trigger the recording functionality

In your React frontend, use the MeetingRecorder component to initiate and stop the meeting recording.


import MeetingRecorder from './MeetingRecorder';

const App = () => {
  return (
    <div>
      <h1>Google Meet Smart Summary Recording Bot</h1>
      <MeetingRecorder />
    </div>
  );
};

export default App;

Conclusion

Building a custom Google Meet bot can significantly streamline the analysis of virtual meetings and improve productivity.

By automating meeting recording and speech-to-text transcription, this bot lets participants focus on the meeting content without worrying about extensive note-taking, and helps them access the most relevant takeaways and action points faster by summarizing the transcript with ChatGPT or similar tools.

With Gladia's blazing-fast and accurate transcription capabilities, combined with the flexibility of Python and React, you can create a highly efficient and intelligent bot that saves you time on recording, transcribing, and summarizing virtual meetings.
