
Cyber criminals cash in as deepfake demand spikes

Deepfakes are sold by the minute on the darknet, with the price of a single video ranging from $300 (R5 510) to $20 000 (R367 374).

This was revealed by Kaspersky lead data scientist Vladislav Tushkanov, speaking at the firm’s Cyber Security Weekend – META 2023, which took place in Almaty, Kazakhstan.

Now in its eighth year, the Kaspersky Cyber Security Weekend sees the firm’s experts highlight the biggest cyber threats targeting governments, enterprises, businesses and industrial organisations. The event also forecasts future cyber security trends.

The cyber security firm analysed various darknet marketplaces and underground forums offering the creation of deepfake videos and audio for different malicious purposes.

Based on Kaspersky’s research, Tushkanov said deepfakes are sold on the darknet for many use cases, ranging from advertising crypto scams to bypassing verification in payment services.

For example, advertisements on the darknet offer to create deepfakes, specifically for crypto-currency and video streaming service scams.

“$300 per minute is the starting price for a deepfake on the darknet,” he stated. “The higher limit was discovered to be about $20 000.

“It’s important to remember that deepfakes are a threat not only to businesses, but also to individual users: they can spread misinformation, be used for scams, or to impersonate someone without consent.”

Deepfakes first gained popularity in 2017, emerging on web forums. Since then, various high-profile celebrities and political figures have been targeted, including former US president Barack Obama and actor Tom Cruise. More recently, billionaire Elon Musk’s image was used in a video to promote a new crypto-currency scam.

Closer to home, anti-crime activist and TV presenter Yusuf Abramjee was a victim of a deepfake revenge video in 2020.

At first, deepfakes were used mostly to create non-consensual pornography as a form of harassment, said Tushkanov. Increasingly, they have been used in attempts at blackmail and fraud.

The first recorded case of deepfakes being used to target a company in a cyber attack was in September 2019, when a UK-based energy company was attacked using a voice deepfake and scammed out of money, he revealed.

In June 2022, US law enforcement agency the FBI issued an official warning that deepfakes are being used to apply for remote jobs. This, explained Tushkanov, means the technology has evolved to such a degree that it can be used online, not as a pre-recorded video, but live, as on a Zoom call.

Tushkanov said that, as with any emerging technology, numerous opportunities abound. However, it also poses risks for people and businesses alike, and may enable cyber criminals.

In the case of the viral artificial intelligence (AI) language model ChatGPT, Tushkanov said it can empower new products with language-AI capabilities, such as customer support and internal automation. “A lot of stuff can be made better with a natural language interface.”
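To make the idea concrete, the snippet below is a minimal, hypothetical sketch of such a natural-language customer support interface, assuming the (pre-1.0) OpenAI Python client; the model choice, prompt and answer_support_query helper are illustrative and not something Kaspersky described.

```python
# Hypothetical sketch: wiring a customer support flow to a language model.
# Assumes the pre-1.0 OpenAI Python client; model and prompts are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

def answer_support_query(question: str) -> str:
    """Send a customer question to the model and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a helpful customer support assistant."},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(answer_support_query("How do I reset my password?"))
```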

On the downside, if used widely enough, it can create additional attack surfaces, because knowledge of how to protect AI applications is still limited, he stated.

According to Tushkanov, cyber defenders, such as cyber security companies, can benefit from various language-AI-based services. For example, analysts can use them to improve their workflows, and reverse engineers can use them to aid their tasks.

However, these technologies introduce new vulnerabilities, he said. Referencing a case in which this happened, he explained that a bot on Twitter was used to look for particular cue phrases, such as ‘remote jobs’ and ‘remote work’.

“When it detected a tweet with those words, the tweet was fed into a network like ChatGPT to provide a response that would support the idea of remote work and remote jobs, but answered in a threatening manner like ‘we will overthrow the president if he does not support remote work’.”
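In outline, the pipeline he describes, scanning posts for cue phrases and handing matches to a text generator, might look like the hypothetical sketch below; generate_reply is a stub standing in for a language-model call, and the sample feed is invented for illustration.

```python
# Hypothetical sketch of the keyword-triggered bot pipeline described above:
# scan incoming posts for cue phrases and pass matches to a text generator.

CUE_PHRASES = ("remote jobs", "remote work")

def matches_cue(text: str) -> bool:
    """Return True if the post contains any of the cue phrases."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in CUE_PHRASES)

def generate_reply(post: str) -> str:
    """Stub standing in for a call to a ChatGPT-like model."""
    return f"[model-generated reply to: {post!r}]"

def run_bot(stream) -> None:
    """Process a stream of posts, replying only to those with cue phrases."""
    for post in stream:
        if matches_cue(post):
            print(generate_reply(post))

# Invented sample feed standing in for a live Twitter stream.
run_bot([
    "Looking for remote jobs in data science",
    "Nice weather today",
])
```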

While the implications of this are not yet known, the lead data scientist indicated it could cause reputational damage for companies.

There is also the potential for cyber criminals to employ models like ChatGPT in offensive applications, in which case the models would provide advice on hacking and other malicious activities.

“As we know, a lot of spam and phishing e-mails are written in English, but English is not the mother tongue for many cyber criminals. Often spam and phishing e-mails are so badly written that you see straight away that something is suspicious and off.

“These technologies can help create good text for spam, as well as phishing, and also be used to adapt phishing e-mails to make them unique and more personalised.”

AI creates new opportunities for businesses, cyber security included, but awareness is key as risks also exist, he concluded. “Understanding how AI changes the world and educating the public about AI is of utmost importance.”


Written by C.L. Martin

