
Rise of AI deepfakes: ‘We need to rethink what we share,’ Kaspersky data scientist says

The UAE is advanced in AI, but responsible and ethical use of the technology is crucial, especially as AI hallucinations, privacy risks and cybersecurity threats lurk worldwide, Kaspersky’s Lead Data Scientist said


With deepfakes claiming the identities of well-known celebrities like Taylor Swift, Tom Hanks, Scarlett Johansson and Rashmika Mandanna to generate explicit and false content, it is important for all internet users to be careful with what they share online, a Kaspersky spokesperson told Arabian Business in an exclusive interview.

“We need to rethink what we share,” Kaspersky’s Lead Data Scientist Vladislav Tushkanov explained, adding that, as far as ethics are concerned, individuals and businesses must not use a person’s identity to create a deepfake without consent.

Tushkanov’s comments came on the sidelines of the 2024 edition of Kaspersky’s Cybersecurity Weekend, held in Malaysia.

Taylor Swift and more celebrities are victims of deepfakes

Deepfakes, a type of synthetic media created using AI techniques, particularly deep learning algorithms, are rapidly proliferating.

They are most commonly associated with manipulated videos where the faces of individuals in existing videos are replaced or modified to make them appear to say or do things they didn’t.

The technology can also be applied to create realistic voice impersonations or generate synthetic images of people that don’t exist. However, this can be used both positively and negatively, Tushkanov said.

“There are different uses for this technology, some of which are benign and beneficial. It can be used, for example, in education, to lip-sync a translated video with the translation, to make it feel more natural, more seamless. It can be used in entertainment, but it can also be used for malicious purposes, and the most obvious of them are scams and phishing attacks.

“For example, you can take Elon Musk’s face and generate his voice into a video asking people to follow a link to get free cryptocurrency,” he said.

Tushkanov explained that when an individual’s messaging or social network account is compromised, there is a potential “threat” wherein the perpetrator gains access to voice recordings stored in that account.

They can exploit this access by creating fraudulent voice messages and sending them to the account owner’s friends and family.

By impersonating the account owner, the attacker may deceive recipients into transferring funds to the fraudster’s bank account, resulting in financial loss for the victims.

Deepfake technology can be applied to create realistic voice impersonations or generate synthetic images of people that don’t exist. Image: Shutterstock

“It is especially dangerous when cybercriminals hack accounts and send messages to other users asking them to follow a malicious link or transfer money. It is generally not okay to make a deepfake, even as a joke. As you probably know, the first use of deepfakes was to create pornographic materials featuring famous people. Deepfakes can be used for blackmail, they can be used to humiliate people, for cyberbullying, etc. So, there are deep ethical complications arising from their use,” he said.

There is no ‘100 percent protection’ from deepfakes

Kaspersky’s Tushkanov added that, given the algorithms involved, one can never be 100 percent immune from a deepfake.

“Sadly, you need just one photo of a person to create a deepfake of them. However, there are both good and bad deepfakes. Good ones are those created by professional VFX artists, with resources, with a team, and also with a professional impersonator to add more to the deepfake. But this is not what a low-level cybercriminal would do, because there would be tell-tale signs in their output – like artefacts around the hair, the face, etc. – that would expose it as a scam,” he concluded.

Should we be using generative AI in our everyday lives?

Tushkanov said yes, but stressed that you must know where to apply these technologies.

“For example, if I need to brainstorm some idea, I can go to a chatbot and say, ‘now, you have to critique all my ideas.’ Or you can ask a chatbot to provide you with expertise as an interviewer, if you want to practise for an interview to get a new job. It can also stress-test you if you want to pass some exams, because this is what these models are good at – they have ingested a lot of professional literature, a lot of academic literature. They can grade your essays, they can proofread your texts. So, they can basically do everything connected to creating text, creating images for you.

“However, it is important to note that this only applies to fields where there is not a big price to pay if the information is incorrect. These models can make up information which is not true. These are called hallucinations,” he said, adding that if you use these tools for medical, legal or financial advice, they could “invent” incorrect information.

AI models are built using deep learning techniques and are commonly used in various applications, from chatbots to virtual assistants. Image: Shutterstock

What are AI hallucinations?

AI technology is not perfect, Tushkanov said, explaining that hallucinations, also known as generative hallucinations or model hallucinations, refer to the phenomenon where AI models, such as generative language models, produce outputs that are unrealistic, nonsensical, or unrelated to the input or the desired task.

In other words, AI models can generate content that appears to be valid and coherent but lacks factual accuracy or context. This is largely due to a range of factors, including biases in the training data, limitations in the model architecture, or insufficient fine-tuning.

He also warned that privacy can be compromised.

“[AI applications] have a privacy policy, which clearly outlines what can happen to the inputs. It usually says that the inputs will be used to retrain the model, and that actual real people – assessors, specialists in content for training machine learning models – can see everything that you write, unless you opt out of data collection,” he explained.

Do not download software from the first link on Google

People who try to use these technologies could fall prey to cybercriminals looking to capitalise on the popularity of these tools.

“We have seen examples of malware masquerading as ChatGPT clients. They say you will get a lot of free credits to use ChatGPT – just download this client. However, when people download this client, their systems get infected with malware. Businesses have to understand these risks – privacy risks, confidentiality risks, intellectual property risks and more – and also the ‘do not download software from the first link on Google’ risk,” he said.

Most importantly, he said that businesses must “educate employees.”

“Do not put in confidential data, do not put in private customer data or any sensitive information, because this might be a breach of privacy regulations. Also, companies can implement some gateways, some filtering solutions, so that they detect when some data may leak. However, I do not think that forbidding the use, or absolutely banning the workplace use, of LLMs is effective, because if this actually helps people do their job, they will use it on their phones, they will use it on their personal PCs, they will do whatever, because it helps to save money, save time and save effort.”

Companies are advised to implement gateways, filtering solutions, and data protection to keep cybercriminals at bay. Image: Shutterstock
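
To make the gateway idea concrete, below is a minimal sketch of a filter that screens outgoing prompts for likely-sensitive patterns before they are forwarded to an external LLM. The patterns and the block-or-allow policy are illustrative assumptions for this article, not a description of any Kaspersky product.

```python
# A minimal sketch of a prompt-filtering gateway: prompts are screened
# for likely-sensitive patterns before being forwarded to an external LLM.
# The patterns and policy below are illustrative assumptions only.
import re

# Hypothetical patterns for data that should never leave the company.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                        # 16-digit card numbers
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),  # IBAN-like strings
    re.compile(r"confidential", re.IGNORECASE),       # documents marked confidential
]

def gateway_allows(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward to an external LLM."""
    return not any(pattern.search(prompt) for pattern in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    for prompt in ["Summarise this press release.",
                   "Draft a reply quoting card 4111111111111111"]:
        verdict = "forwarded" if gateway_allows(prompt) else "blocked"
        print(f"{verdict}: {prompt}")
```

Enterprise data loss prevention tools use far richer detection than these toy regular expressions, but the flow – inspect the prompt, then forward or block – is the same.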

Are jobs under threat because of the advent of AI?

Some professions are more affected by these tools than others, Tushkanov explained, adding that jobs involving repetitive tasks or text work are the most affected. “You can outsource some mundane, boring tasks in order to boost creativity,” he said.

“Many executives that we talked with in our surveys said that they know their employees use ChatGPT-related technologies. And they basically try to forbid it altogether; however, this affects productivity,” Tushkanov added.

Businesses cannot simply replace their workforce with AI, he explained, as there is still a need for human responsibility.

In the case of journalism, he believes that it will be among the least affected careers because journalists themselves decide on what questions to ask.

“Artists could be affected due to image generation algorithms,” he added, explaining that it is very important to look at AI as a tool rather than an option to replace people.

The more internet we use, the more cybersecurity we need

With emerging technologies like AI and the Metaverse being buzzwords today, there is a dire need for every user and business to invest in cybersecurity.

“Cybersecurity is very important and the more devices we have, or the more ways to access internet content we have, the more important it is to have good, solid cybersecurity foundations,” Tushkanov said.

“Trends from 2023 will continue, especially because we have very powerful and capable foundation models – all of which are currently being improved. However, these also create a need for cybersecurity. For example, if we have another device which we use to access content, such as a VR headset, maybe it also needs to be secured,” he explained.

UAE ‘very advanced’ in artificial intelligence

When asked if governments should adopt AI models and Metaverse strategies to ensure maximum benefit, Tushkanov explained that governments, in particular, hold a significant responsibility, as their decisions can have far-reaching consequences for people. It is “crucial” for governments to prioritise the ethical aspects of AI implementation, he said.

Moreover, Tushkanov added that another important aspect is transparency, where governments should inform the public when decisions are made automatically by AI systems.

If a decision is automated, individuals should have the opportunity to contest it because “machine learning systems are not infallible.”

“They make mistakes, and there should be an authority over them, with a person in a position to be accountable for their potential mistakes,” Tushkanov said.

However, the UAE is a “very advanced” country when it comes to artificial intelligence, he explained. “The UAE is one of the few countries that have their own Large Language Model (LLM), Falcon, which is quite an impressive feat as it is extremely capable, comprehensive and practical.”

An LLM is an advanced AI system, trained on vast amounts of text data, that can understand and generate human-like text. These models are built using deep learning techniques and are commonly used in various applications, from chatbots to virtual assistants. One of the most popular LLM-based applications is ChatGPT by OpenAI.
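
As a rough illustration of what generating human-like text with an LLM looks like in practice, the sketch below queries the openly released Falcon instruct model through the Hugging Face transformers library. The model choice and generation settings are illustrative assumptions for this article, not anything Tushkanov prescribed, and running the seven-billion-parameter model locally requires a capable GPU.

```python
# A minimal sketch of text generation with the UAE's open Falcon LLM,
# via the Hugging Face transformers pipeline API. The prompt and
# sampling settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",  # openly released Falcon instruct model
)

prompt = "Explain in two sentences what a deepfake is."
result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])  # the prompt plus the model's continuation
```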


Sharon Benjamin
