
Accessibility by the count of Three: Publishing without language barriers

Writer: Jan Kersling

In this interview, we speak with Akshat Prakash, CEO of CAMB.AI, about using AI technology to overcome language barriers. The use case we look at specifically is the translation of the movie Three, a psychological thriller by Nayla Al Khaja, while preserving its original tone and emotion. With this, Three became the first Emirati-produced movie to be dubbed in languages beyond Arabic and English.


Showcase of the AI technology used for the movie "Three" at the Red Sea Film Festival

“Most people in the world don’t speak English – and they still deserve access to great content.”


What were the ideas/motivations behind the project?


"We partnered with Nayla Al Khaja, the director and screenwriter of the movie. The mission: to bring an Emirati-produced movie to global audiences.


As an AI-based speech-to-speech translation company, we can take any long-form audio or video content, whether it's an audiobook, a YouTube video, a movie, or even live sports, and hyper-realistically transform it into multiple languages while preserving the original tone, emotion, speaker identity, and all other aspects.

 

This is crucial because it democratizes access to content, ensuring that many more people can understand it. Most people in the world do not speak English as their native language, and with our technology, this movie became the first Emirati-produced movie to be made available in languages other than Arabic and English."


Take us through the process of how you created it. How and at which points during the development process did you use AI?

 

"How did we build the software? It is fundamentally hardcore generative AI. We have been conducting research in generative AI, specifically speech AI, for the last six years and have built our own models that specialize in understanding speech and then recreating it in alternative languages.

 

We developed two primary models, and on top of those, we engineered our platform, which functions as an editor studio. Think of it as an Adobe Premiere Pro, but for AI-driven dubbing. Users can upload a video, generate a transcript, edit it, create the voice-over, tweak it, and then download the final version.


The entire technology is based on generative AI. When training AI models, you need access to GPUs, large amounts of data, and the right human expertise.

 

We had to hire the right talent, acquire the necessary GPUs, and collaborate with partners to obtain the required data. It’s a highly iterative process—it takes multiple experiments to get a model right. Our 12th or 15th experiment was the first one that worked for one of our models.

 

Even after getting these models into production, there are numerous iterations. Our current models are on their sixth version, and in the coming years, we will likely reach versions 10, 20, or beyond."
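The workflow described above (upload → transcript → edit → voice-over → download) can be pictured as a chain of model calls. The following is a minimal illustrative sketch with toy stand-in functions; the function names and the hard-coded "models" are hypothetical and are not CAMB.AI's actual models or API:

```python
# Hypothetical sketch of a speech-to-speech dubbing pipeline.
# Each function is a toy stand-in for a real AI model.

def transcribe(audio_file: str) -> str:
    """Stand-in for a speech-recognition model: audio -> source-language text."""
    return {"three_scene1.wav": "Where is my daughter?"}.get(audio_file, "")

def translate(text: str, target_lang: str) -> str:
    """Stand-in for a translation model: source text -> target-language text."""
    toy_dictionary = {("Where is my daughter?", "fr"): "Où est ma fille ?"}
    return toy_dictionary.get((text, target_lang), text)

def synthesize(text: str, voice_profile: str) -> str:
    """Stand-in for a voice-cloning TTS model that keeps the original
    speaker's identity and tone."""
    return f"<audio voice={voice_profile!r}>{text}</audio>"

def dub(audio_file: str, target_lang: str, voice_profile: str) -> str:
    """Upload -> transcript -> translation -> voice-over, as in the editor studio."""
    transcript = transcribe(audio_file)            # step 1: generate transcript
    translated = translate(transcript, target_lang)  # step 2: (editable) translation
    return synthesize(translated, voice_profile)     # step 3: voice-over in the original voice

print(dub("three_scene1.wav", "fr", "original_speaker"))
```

In a real platform, each stage would be an editable checkpoint, which is what makes the "Adobe Premiere Pro, but for AI-driven dubbing" comparison apt: users can correct the transcript or translation before the voice-over is generated.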


“It’s not just dubbing. It’s generative AI trained to understand and recreate speech across languages.”


What was the feedback, and how did you react to it?


"The feedback has been overwhelmingly positive, especially due to our AI’s ability to preserve emotions and tone in translations.


Our system is used in sports, films, and content creation. Besides "Three", we also partnered with the Australian Open, among others, where live AI commentary was generated using our technology. This resulted in a fourfold increase in viewership."


What are your plans for the future regarding content creation with AI?

 

"While we are best known for dubbing, our platform offers a full solution for translation and localization. Companies using our platform can not only translate content into different languages but also manage multilingual fan engagement, PR, blogs, and internal training.

 

One exciting product we are working on is Chatterbox—a real-time translation solution for conversations. With this tool, you could speak in French, and I could speak in English, or you could speak in German, and I could speak in Hindi, and we would be able to hear each other in real-time in our respective languages.

 

Now that we have proven success in the B2B space, we are looking to expand further into the B2C market."



 

This interview is part of PANTA SPOTLIGHT, where we publish interesting and innovative cases emerging in the AI space and interview the creators about the backgrounds of their projects.
