3 min read | Saved February 14, 2026
Do you care about this?
ElevenLabs CEO Mati Staniszewski argues that voice will become the primary way people interact with AI, moving beyond screens and text. He highlights advancements in voice technology and its integration with large language models, suggesting a future where devices respond to voice commands more naturally. However, this shift raises concerns about privacy and data security.
If you do, here's more
Mati Staniszewski, CEO of ElevenLabs, believes voice technology is on the brink of transforming how we interact with AI. Speaking at Web Summit in Doha, he highlighted the evolution of voice models that now go beyond mimicking human speech to incorporate emotional nuance and reasoning capabilities. This shift, he argues, will change how we interact with technology, potentially letting users control devices by voice alone and making screens less central to the experience.
ElevenLabs recently secured $500 million in funding, valuing the company at $11 billion. The investment reflects a broader industry trend, with major players like OpenAI and Google prioritizing voice in their next-generation AI models. Staniszewski envisions voice interfaces replacing traditional controls as AI becomes more integrated into wearables, vehicles, and other hardware, enabling more seamless interaction with technology.
However, the rise of voice technology raises significant privacy concerns. As these systems become more embedded in daily life, the potential for surveillance and data misuse grows. Staniszewski acknowledged these risks, especially as companies like Google have faced scrutiny over data handling practices. ElevenLabs is exploring a hybrid processing approach to balance performance and privacy, and is already collaborating with Meta to integrate voice tech into platforms like Instagram and Horizon Worlds.