
Microsoft Unveils VALL-E, Audio AI Version of DALL-E: All Details

Microsoft researchers recently announced VALL-E, a new text-to-speech AI model that can accurately mimic a person’s voice when given a three-second audio sample. Once it has learned a specific voice, VALL-E can synthesise audio of that person saying anything, while attempting to retain the speaker’s emotional tone. VALL-E’s creators believe that, combined with other generative AI models like GPT-3, it could be used for high-quality text-to-speech applications; for speech editing, in which a recording of a person could be altered from a text transcript (making them appear to say something they did not actually say); and for audio content creation.

According to Microsoft, VALL-E is primarily a “neural codec language model,” and is based on EnCodec, which Meta revealed in October 2022. VALL-E generates discrete audio codec codes from text and acoustic prompts, as opposed to other text-to-speech methods that typically synthesise speech by manipulating waveforms. It analyses how a person sounds, breaks that information down into discrete components (referred to as “tokens”) using EnCodec, and then uses its training data to predict how that voice would sound speaking phrases beyond the three-second sample.
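The core idea of “discrete audio codec codes” can be illustrated with a toy example. The sketch below is not EnCodec (which is a learned neural codec) and not Microsoft’s code; it is a minimal, hypothetical scalar quantiser that only demonstrates the concept of converting a continuous waveform into discrete token indices that a language model could then predict, with the codebook values chosen arbitrarily for illustration.

```python
# Toy illustration of codec-style tokenisation, NOT the real EnCodec:
# each audio sample is mapped to the index of its nearest codebook entry,
# turning a continuous waveform into a sequence of discrete "tokens".

CODEBOOK = [-0.75, -0.25, 0.0, 0.25, 0.75]  # hypothetical, tiny codebook


def tokenize(waveform):
    """Map each sample to the index of the nearest codebook entry."""
    return [
        min(range(len(CODEBOOK)), key=lambda i: abs(CODEBOOK[i] - x))
        for x in waveform
    ]


def detokenize(tokens):
    """Reconstruct an approximate waveform from token indices."""
    return [CODEBOOK[t] for t in tokens]


samples = [0.8, -0.3, 0.1, -0.9]
tokens = tokenize(samples)
print(tokens)              # discrete codes a language model could model
print(detokenize(tokens))  # lossy reconstruction from those codes
```

A real codec like EnCodec learns its codebooks from data and operates on short frames rather than individual samples, but the token-in, token-out structure is what lets VALL-E treat speech generation as a language-modelling problem.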

Microsoft trained VALL-E’s speech synthesis capabilities using Meta’s LibriLight audio library. It includes 60,000 hours of English-language speech from over 7,000 speakers, sourced primarily from LibriVox public domain audiobooks. The voice in the three-second sample should closely resemble a voice in the training data for VALL-E to produce a good result.

The American technology giant offers dozens of audio examples of the AI model in action on the VALL-E example website. The “Speaker Prompt” data set is the three-second audio given to VALL-E that it must try to emulate. The “Ground Truth” is a previously recorded version of that same speaker saying a specific phrase for comparative purposes (sort of like the “control” in the experiment). The “Baseline” sample is generated by a traditional text-to-speech synthesis method, and the “VALL-E” sample is generated by the VALL-E model.

A block diagram of VALL-E, as shown on the example website by Microsoft researchers
Photo Credit: Microsoft

To get those results, the researchers supplied VALL-E with only the three-second “Speaker Prompt” sample and a text string (what they wanted the voice to say). Some VALL-E results sound computer-generated, but others could be mistaken for human speech, which is the model’s goal. Because of VALL-E’s potential to fuel deception and misuse, Microsoft has not made the code available for others to explore. The researchers appear to be aware of the potential social harm this technology may cause.
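Since Microsoft has not released any code, the interface below is entirely hypothetical: a stub sketching the input/output contract the article describes (a roughly three-second speaker prompt plus a text string in, synthesised audio out). The function name, sample rate, and placeholder output are all assumptions for illustration, not VALL-E’s actual API.

```python
# Hypothetical sketch of a VALL-E-style zero-shot TTS interface.
# No public VALL-E code exists; every name here is assumed.

SAMPLE_RATE = 16_000  # assumed sample rate, in Hz


def synthesize(speaker_prompt, text):
    """Stub for zero-shot voice cloning as described in the article.

    speaker_prompt: list of audio samples, roughly 3 seconds long.
    text: the phrase the cloned voice should say.
    """
    duration = len(speaker_prompt) / SAMPLE_RATE
    if not 2.0 <= duration <= 4.0:
        raise ValueError("speaker prompt should be about 3 seconds long")
    # A real model would encode the prompt into codec tokens, condition a
    # language model on the text plus those tokens, then decode the newly
    # generated tokens back to audio. This stub just returns silence of a
    # rough per-word length as a placeholder.
    return [0.0] * int(SAMPLE_RATE * 0.05 * len(text.split()))
```

The two arguments mirror the “Speaker Prompt” and text string the researchers supplied; everything inside the function stands in for the encode-predict-decode pipeline described above.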

They write in the paper’s conclusion: “Since VALL-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker. To mitigate such risks, it is possible to build a detection model to discriminate whether an audio clip was synthesized by VALL-E. We will also put Microsoft AI Principles into practice when further developing the models.”

