You could see this coming from a mile away.
In today's episode of What Could Go Wrong: AI Edition, a tech company that uses artificial intelligence to mimic voices is adding more "safeguards" to its tech after it was used to generate clips of celebrities reading offensive content. ElevenLabs is a research company specializing in AI speech software that generates realistic-sounding voices to voice-over audiobooks, games, and articles in any language.
ElevenLabs says it has been taking steps to keep Voice Lab from being used for "malicious purposes," outlining in a lengthy post how it plans to keep its tech out of the wrong hands. ElevenLabs claims it has "always had the ability to trace any generated audio clip back to a specific user." Next week it will release a tool that will allow anyone to confirm that a clip was generated using its technology and report it.
The company says that the malicious content was created by "free anonymous accounts," so it will add a new layer of identity verification. Voice Lab will be made available only on paid tiers, and the free version will be removed from the site immediately. ElevenLabs is currently tracking and banning any account that creates harmful content in violation of its policies.