Does deepfake voice carry any benefits? The next level of artificial intelligence

Phone sits on a silver laptop with deepfake voice technology enabled

Artificial intelligence has been one of the hottest topics lately. Its new capabilities and achievements make headlines regularly, and they do not always have a positive effect. The already well-known deepfake now has an offspring: AI voice. Is it a new dimension of scams or a tool that makes life easier? Read on to learn how to use fake voice generators safely.

What is AI voice?

Deepfake technology appeared on the Internet a few years ago. We have already explained what a deepfake is in a previous article. Since then, artificial intelligence has gone even further: in addition to creating realistic video footage, it can now clone someone’s voice. All it takes is about a minute of sample audio, and an AI voice generator can produce any speech based on it.

On January 5, 2023, Microsoft unveiled its new VALL-E algorithm. The technologies it combines, a text-to-speech (TTS) model, the EnCodec neural audio codec, and advanced artificial intelligence, can generate any utterance from just a 3-second voice sample.

A database of 60,000 hours of English speech from as many as 7,000 speakers was used to train it.

The recordings created by the algorithm preserve the speaker’s intonation and style, and even the emotional tone. All this makes the resulting audio deepfake material almost indistinguishable from a real voice.
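The pipeline described above, compressing speech into discrete codec tokens that a language model can then predict, can be illustrated with a toy sketch. Everything below (the tiny random codebook, the function names, the 4-dimensional "frames") is a hypothetical stand-in for illustration only, not Microsoft's actual model or the real EnCodec codec.

```python
import numpy as np

# Toy illustration of a VALL-E-style pipeline:
# 1) a neural codec compresses audio frames into discrete tokens;
# 2) a language model would then predict new token sequences
#    conditioned on text plus a short enrollment sample;
# 3) the codec decoder turns predicted tokens back into audio.

def quantize_audio(frames: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each audio frame to its nearest codebook entry (toy 'encoder')."""
    # distance from every frame to every codebook vector
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)  # one discrete token per frame

def dequantize(tokens: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Reconstruct approximate frames from tokens (toy 'decoder')."""
    return codebook[tokens]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))  # 16 codes, 4-dim frames (toy scale)
# "recorded" frames: noisy versions of codebook entries 3, 7, 7, 1
frames = codebook[[3, 7, 7, 1]] + 0.01 * rng.normal(size=(4, 4))

tokens = quantize_audio(frames, codebook)
print(tokens.tolist())  # → [3, 7, 7, 1]
```

The key idea this sketch shows is that once speech becomes a sequence of discrete tokens, generating a voice reduces to sequence prediction, the same kind of task text language models already solve, which is why a 3-second sample can be enough to condition the output.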

A fraud tool served on a platter

A man in red speaks into a microphone

Deepfake voice has not yet reached peak popularity, yet the first mentions of the technology appeared as early as 2019. At the time, Forbes described a case in which a scammer, still unidentified today, used the cloned voice of a company’s CEO to defraud the firm of hundreds of thousands of pounds. The scam took place in the UK. A similar incident occurred in 2021 in the United Arab Emirates.

These examples show that the most common use of deepfake audio is in scams. This raises the question: does the technology have any advantages at all?

The tool has great potential for illegal activities. Beyond phone phishing, it is a means of breaking voice-based security. Microsoft, aware of the threat posed by VALL-E, has blocked public access to it; that access will have to wait until the system is properly secured. But many similar voice-forging programs exist, and anyone can open the gateway to them.

The positive side of deepfake voice

Microphone against the backdrop of a voice recording

In addition to its dark side, deepfake voice also has positive uses. The tool is used for telemarketing work. It is also proving helpful in the creative industry, for example, to recreate an actor’s voice.

Perhaps AI voice generators will one day replace voiceover artists or automate the laborious process of recording dubbing. Today, GPS navigation devices still rely on recorded human voices. However, each of these applications means putting someone out of a job.

Don’t be fooled

Security researchers at NISOS warn against phone calls from supposed friends asking to borrow money. Audio deepfakes make it possible to imitate the voice of a close friend, which lulls our vigilance. This is vishing taken to a higher level.

But AI voice generators carry other risks too. From the perspective of an ordinary Internet user, such technology may seem like harmless fun. Before using any application of this type, however, read the terms and conditions carefully. Check who will hold the rights to the generated voice and whether the uploaded voice sample can be reused. In the wrong hands, it can become a tool for defrauding your loved ones!

Ethical problem

A woman holds a phone in her hand and records her voice

The balance between the threats posed by deepfake audio and its positive uses is not encouraging. Fraud, manipulation, and the spread of disinformation can have serious social consequences. With such methods, reputations can easily be damaged and even democratic processes disrupted.

Deepfake allows the illegal use of someone’s image. Creating fake recordings or impersonating someone violates privacy. These actions can lead to blackmail and various other forms of exploitation. The victim of such a violation may suffer psychological damage as a result.

Every action has consequences that ultimately fall on people. Malicious use of AI voice technology greatly erodes trust in society. People grow ever more suspicious when listening to recordings, aware in the back of their minds that what they hear could be a product of artificial intelligence.

Is Ivona an AI voice?

Ivona speech synthesizer on a white background

The widely known Ivona speech synthesizer is, so to speak, a distant ancestor of deepfake audio. Perhaps it even inspired the invention of AI voice generators. However, the two tools should not be confused.

This automatic voiceover merely converts written text into speech. Ivona’s voice does not maintain proper accentuation or intonation; it sounds mechanical and is easy to distinguish from real human speech. Moreover, the synthesizer offers only a limited number of voices.

Be careful

The possibilities of artificial intelligence remain largely unexplored, and it is unclear what direction its development will take. What is certain is that in the wrong hands, tools such as deepfake voice are very dangerous. They also pose serious risks to people working in fields such as telemarketing. When using them, keep your own safety in mind, so that your voice is never turned against your loved ones.
