ChatGPT provides a boost to cyberscams
Since its launch in November last year, ChatGPT has taken the media by storm. From fears of job losses to a temporary ban in Italy over privacy concerns, barely a week has gone by without news of OpenAI’s chatbot. Although much focus has been placed on the advantages of innovative AI technology, the matter of how it can aid cybercriminals has been largely overlooked.
Last month the cybersecurity firm Darktrace reported an increase in the use of AI in cyberattacks since the introduction of ChatGPT, especially in phishing. While the overall number of email scams targeting Darktrace customers has not risen, the techniques used have become more complex and convincing.
Darktrace reports that emails attempting to trick users into clicking malicious links have decreased in number, but the messages have improved in linguistic complexity, including text volume, sentence length, and punctuation. The company believes ChatGPT has helped to make phishing emails more sophisticated and, therefore, more effective.
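To illustrate what "linguistic complexity" can mean in practice, here is a minimal sketch of the kind of surface-level signals a defender might compute over an email body. The metric names and the sample message are illustrative assumptions, not Darktrace's actual features:

```python
import re

def complexity_metrics(text: str) -> dict:
    """Compute simple linguistic-complexity signals of the kind
    mentioned above: text volume, sentence length, punctuation.
    These are illustrative heuristics only."""
    # Split into rough sentences on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    # Count common punctuation characters in the raw text.
    punctuation = sum(1 for ch in text if ch in ".,;:!?-()\"'")
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "punctuation_density": punctuation / max(len(words), 1),
    }

print(complexity_metrics(
    "Dear customer, your account was locked. Please verify now!"
))
```

A sudden rise in such metrics across incoming mail, relative to a historical baseline, is one plausible way a security tool could flag AI-polished phishing, though real products rely on far richer models.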
But the ChatGPT-powered scams are not just limited to email attacks.
ZDNet reports that scams that attempt to extort money by impersonating loved ones, often over the phone, are a growing problem in the U.S. In 2022, more than 36,000 Americans fell victim to imposter scams, making this the second most common type of cyberscam, according to the Federal Trade Commission (FTC). More than 5,100 of these scams took place over the phone, accounting for over $11 million in losses. And scammers increasingly use AI to assist their activities: one approach is to use generative AI voice-cloning tools to imitate the voices of famous people or even family members.
ChatGPT can also be used for writing malicious code, according to Business Standard. A report by Check Point Research found that cybercriminals have taken to ChatGPT as a means of scaling and teaching malicious techniques online. It revealed that hackers on underground forums are using the chatbot to create information-stealing malware known as "infostealers", as well as encryption tools and other instruments of cyber fraud.
Last December, a thread on a dark web forum entitled "ChatGPT – Benefits of Malware" featured discussions and reviews of different malware and techniques. The Check Point report suggested that the thread showed budding cybercriminals how the AI software could best be used for malicious purposes. Users of the forum have also given accounts of the ways ChatGPT has helped them to do things like creating basic malware. Although these tools are fairly insignificant in the context of cybercrime, Check Point believes such capabilities could be refined by more sophisticated threat actors in the not-too-distant future.
The report also suggests that even those with limited coding experience could potentially build the main components of malicious programs, possibly including ransomware.
Last month Reuters reported on a warning from Europol over the potential misuse of ChatGPT in disinformation, phishing, and other forms of cybercrime. The law enforcement agency said that “ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes.”
Europol also warned that the speed and scale at which the AI tool can produce text and messages with a particular narrative make it a potentially dangerous tool when used for the wrong purposes.
Perhaps the world has yet to understand the full capabilities of ChatGPT and all the industries and fields it will affect. But we should certainly stay alert and guard against these threats before they become a reality.