ChatGPT, the OpenAI language model that has been taking the world by storm, could be used in cyberattacks within a year, according to the latest global research from BlackBerry Limited.
The research also found that 71 percent of IT professionals believe foreign states may already be using the technology for malicious purposes against other nations.
IT professionals globally have expressed concerns about ChatGPT’s potential threat to cybersecurity.
The survey polled more than 1,500 IT decision makers across North America, the UK, and Australia. While the general perception of the platform was largely positive, respondents also voiced concerns about potential threats.
Over 53 percent believe ChatGPT will enable hackers to craft more legitimate-sounding phishing emails, and that it will help attackers gain technical knowledge and develop their skills.
ChatGPT and cybercrime
“Poorly constructed sentences or grammatical errors are one of the few tell-tale signs of phishing emails and dating app profiles. With the prevalence of scams across social networks and messaging apps, ChatGPT could help fill the gap when it comes to writing more convincing profile bios for fake profiles,” Satnam Narang, senior staff research engineer at Tenable told Arabian Business.
“ChatGPT has reportedly been used by low-skilled cybercriminals to develop basic malware and may be used by scammers to develop phishing scripts to be used as part of both phishing emails and dating and romance scams,” Narang added.

ChatGPT has been trained on a massive amount of text data and can generate human-like text with remarkable coherence and fluency. With its advanced natural language processing capabilities, the platform is capable of answering questions, generating creative writing, and performing a wide range of language-related tasks.
“I believe these concerns are valid, based on what we’re already seeing. It’s been well documented that people with malicious intent are testing the waters and over the course of this year, we expect to see hackers get a much better handle on how to use AI-enabled chatbots successfully for nefarious purposes,” said Shishir Singh, Executive Vice President and Chief Technology Officer at BlackBerry Cybersecurity.

Considered to be one of the most advanced language models available, the potential applications for ChatGPT are vast, including customer service, language translation, and content creation.
“With the advancement of AI technology and growing interest in the expanding capabilities of ChatGPT, it’s easy to see why security leaders have concerns over the tool’s use cases and the implications posed from a security standpoint. We could see more successful phishing attacks where common language mistakes from non-native speaking groups harness the power of ChatGPT to develop well-written and formatted messages, catering to specific targets in their native language,” Scott Caveza, Senior Manager, Research at Tenable told Arabian Business.
“Given that large language models like ChatGPT excel not only in generating natural language texts, but code as well, we anticipate that malicious parties might use them to generate malware at scale, potentially outpacing traditional methods for classifying and defending against malware. While these dangers are probable, ChatGPT can also be used to bolster our security posture,” he further explained.
Narang said, “We’re still in the early stages of seeing ChatGPT’s impact on a broader level, but it’s clear that, as with any new technology, cybercriminals will seek to find a way to abuse it for their own financial gain.”