CyberSecurity Article – 7 (Beware - Your Voice Messages On Social Media & Artificial intelligence (AI) Manipulation Techniques)

Artificial intelligence (AI) has become increasingly prominent in recent years. It is now used in a wide range of applications across industries and sectors, and there is great excitement about its potential to transform the way we live and work.

However, there are also concerns about the impact of AI on privacy and security, and these concerns need to be carefully managed: as AI grows more advanced every day, it could be misused to commit cybercrimes.

In this article, I would like to emphasize how voice messages shared on social media platforms and other online channels could be misused by cybercriminals to carry out various forms of cybercrime, including identity theft, fraud, and other malicious activities.

AI voice misuse refers to the malicious use of AI techniques to create or alter audio content in a way that misrepresents the original speaker or message.

Using AI techniques such as deep learning and natural language processing, voice messages shared on social media platforms can easily be manipulated. Cybercriminals could use these techniques to create deepfake audio, alter the content of a voice message, or manipulate existing audio to spread false information, defame someone, or impersonate someone's voice.

Cybercriminals could create realistic-sounding audio of people saying things they never actually said, producing convincing fake voice messages or altering real ones to say something different.

AI voice manipulation can also be used to change the tone or context of a message, making it sound more aggressive, friendly, or emotional. This can be used to deceive people into believing false information or to manipulate their emotions.

Notable examples of AI voice manipulation in recent years:

  • In 2021: Tom Cruise deepfake videos began circulating on social media. The videos, created using AI, featured someone impersonating Cruise's voice and likeness and were used to spread false and potentially harmful information.
  • In 2019: A deepfake audio clip of a CEO was used to scam a UK-based energy firm out of €220,000. The attacker used AI voice manipulation to create a fake audio clip of the CEO giving instructions to transfer funds to a Hungarian supplier.
  • In 2019: A popular Chinese mobile app called "Zao" was found to be using AI technology to create deepfake videos of people's faces. The app allowed users to upload a photo and superimpose it onto the face of a celebrity or another person in a video clip.
  • In 2018: Researchers at the University of California, Berkeley created an AI model that could generate convincing fake audio clips of former President Barack Obama. The researchers used a technique called "voice cloning" to create the clips, which could make Obama appear to say things he never actually said.
  • Political manipulation: AI voice manipulation has also been used for political purposes, such as altering the speech of politicians to make them appear to say things they did not actually say, or creating fake endorsements or speeches to influence public opinion.

The examples above highlight the potential dangers of AI voice manipulation and the need for increased awareness and regulation of this technology.

It is important for individuals and organizations to be vigilant when receiving or sharing audio messages on social media platforms and other online channels, and for security practitioners and policymakers to consider measures to prevent the malicious use of AI voice manipulation techniques.
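One preventive measure organizations can consider is verifying the integrity of an audio message before trusting it. The sketch below is purely illustrative and not any platform's actual API: it assumes the sender and receiver share a secret key, and uses an HMAC tag (Python's standard `hmac` module) so that any alteration of the audio bytes after recording can be detected. Function names like `sign_audio` and `verify_audio` are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared secret, exchanged securely out of band.
SECRET_KEY = b"replace-with-a-securely-exchanged-key"

def sign_audio(audio_bytes: bytes) -> str:
    """Produce an HMAC-SHA256 tag for a voice-message payload."""
    return hmac.new(SECRET_KEY, audio_bytes, hashlib.sha256).hexdigest()

def verify_audio(audio_bytes: bytes, tag: str) -> bool:
    """Return True only if the audio matches its tag (i.e., was not altered)."""
    expected = sign_audio(audio_bytes)
    # compare_digest avoids timing side channels when comparing tags.
    return hmac.compare_digest(expected, tag)

# Example: a recorded message and a tampered copy of it.
original = b"\x00\x01sample-voice-message-bytes"
tag = sign_audio(original)

print(verify_audio(original, tag))            # True  - message intact
print(verify_audio(original + b"edit", tag))  # False - audio was altered
```

This only proves the message was not modified after signing; it does not prove who recorded it, so it would complement, not replace, deepfake-detection and caller-verification controls.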

Thank you.

Regards

Sunil Kumar

Member - EC- Council - International Advisory Board

Visit My Blog:
