
Analysis: Will ChatGPT’s Perfect English Change the Game For Phishing Attacks?

Mar 24, 2023

By John E. Dunn

Nobody predicted how rapidly AI chatbots would change perceptions of what is possible. Some worry about how they might improve phishing attacks. More likely, experts think, is their effect on targeting.

Much has been said about the game-changing abilities of ChatGPT since it was launched in November 2022. One of the most interesting claims is that the chatbot will prime a new generation of sophisticated phishing attacks, still the most important technique cybercriminals use to harvest user credentials and personally identifiable information (PII).

ChatGPT, of course, is not the only chatbot built on a large language model (LLM) that could be abused through its web interface or API. There are at least half a dozen plausible rivals, starting with Google’s Bard, and that’s before considering the possibility that people with bad intentions might develop their own private LLMs.

This type of AI looks like a big opportunity for attackers. In theory, abuse should be constrained by security guardrails, which restrict an AI’s responses to certain prompts. These are no guarantee, however: researchers have successfully bypassed ChatGPT’s GPT-3.5 controls (GPT-4 is much harder to game, but it’s early days).

Security researchers across the industry have spent several months playing with ChatGPT – what have they found? 

Better Phishing Grammar 

Consider the following unremarkable phishing email, which has probably been sent in multiple variations to a million inboxes:

Windows user alert 

Unusual sign-in activity 

We detected something unusual to use an application to sign in your Windows computer. We have found suspicious login attempt on your Windows computer through an unknown source. When our security officers investigated it was found that someone from foreign IP address was trying to make a prohibited connection on your network. 

The person who composed this email could probably speak conversational English, but not well enough to handle the grammatical nuances that written English quickly exposes. Run the same email through ChatGPT on GPT-4 and you not only get flawless, official-sounding prose out the other end, but the chatbot adds its own helpful advice:

We urge you to take immediate action and secure your computer by changing your password, running a virus scan, and enabling two-factor authentication. Please do not hesitate to contact us if you need any assistance or have any questions. 

Clearly, writing a phishing email with a chatbot like ChatGPT is a breeze. For attackers, this capability could be the biggest upgrade since phishing and spam became a global problem 20 years ago. 
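
Nor does an attacker even need the chat window. As a minimal sketch of how low the bar is (the prompt is illustrative, and OpenAI’s Python client shown here is just one of several equivalent routes), polishing a clumsy draft through the API takes a few lines:

```python
# Minimal sketch: polishing rough text via the OpenAI API.
# The prompt and draft are illustrative; the same one-shot call
# works on any text, which is exactly the dual-use problem.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

draft = (
    "We detected something unusual to use an application to sign in "
    "your Windows computer."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Rewrite this in fluent, professional English:\n\n" + draft,
    }],
)

print(response["choices"][0]["message"]["content"])
```

No jailbreak is involved, because proofreading is a legitimate use of the model; the request only becomes malicious in context.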

Better Business Email Compromise (BEC) 

For Etay Maor, who works for Cato Networks when he’s not lecturing on cybersecurity as an adjunct professor at Boston College, BEC is a bigger worry than phishing, which might anyway be countered with defensive AI.

He can see a scenario where attackers have access to a genuine email sent by a CEO. “An attacker can ask the AI to write an email in the style of a CEO. And what happens if you complement this with voice synthesis and deep fakes?” 

What this adds up to is a dramatic improvement in targeting. Today’s phishing emails are, for the most part, generic. Now, suddenly, they are infinitely customizable. For example, Maor wonders aloud how easy it would be to mimic the email writing style of someone’s boss or colleague after researching the target through open-source data.

“You just point the AI at a target, let it do all the research, and it can answer any question about this person,” said Maor. All LLMs have guardrails designed to stop this, but these can, to some degree, be bypassed to spit out the desired responses.

“Are we in a new era of phishing? No, it’s the same stuff only better,” argued Maor. “I don’t think it’s going to change the threat landscape so much as expedite and make it more professional. It lowers the entry bar.” 

Long-Game Phishing Attacks 

“The thing about the emails you can write with ChatGPT is that each one is unique,” said Gavin Watson, technical director for U.K. penetration testing company Pentest People. “They are not sending the same email a million times. That is incredibly powerful.”

He can see a phishing scenario in which AI systems engage targets in a lengthy back and forth, slowly building trust before sending a malicious attachment or link at the moment people are most likely to accept it.

“The phishing email is not asking you to click on a link or download an attachment. ChatGPT gets people into a convincing conversation they believe is with a human and then sends them a résumé or work example. That kind of phishing attack would be incredibly hard to defend against.” 

Gathering Threat Intelligence 

The threat here is that a chatbot could be used to automate the normally laborious process of collecting public-domain intelligence on targets, including their systems and the people who manage them. ChatGPT has guardrails around researching individuals, but that assumes attackers aren’t using their own private LLMs. This is an area that clearly needs more research. In principle, however, a lot of the information considered difficult to guess – a mother’s maiden name, for instance – might turn out not to be. “The amount of information you can gather on a person or company is incredible,” observed Watson.

Cybersecurity Awareness 2.0 

Over the last decade, it’s become orthodoxy that organizations should train employees to recognize phishing attacks and other scams, equipping them to resist these attacks. This has never been foolproof – everyone has to click on something eventually – but there’s modest evidence that it works.

If phishing composition and targeting improve, the task of distinguishing good from bad will become much more difficult very quickly. Already, security awareness training is becoming more targeted, customizing the training needed for sysadmins as opposed to HR team members, application managers or general employees, for instance. Chatbots might yet force companies to revise a lot of this. 

“If people are drawn into a conversation and the attackers start to gain trust, even a small amount, that could have a big impact,” Watson added.
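
If defensive AI is part of the answer, as Maor suggests, it runs through the same interface the attackers use. As a purely illustrative sketch (the prompt, labels and helper below are assumptions, not a production detector, which would rely on dedicated classifiers and many more signals), a security team could ask a model for a first-pass judgment on reported emails:

```python
# Illustrative sketch only: LLM-assisted first-pass triage of a
# reported email. The prompt and labels are assumptions; production
# detection relies on dedicated classifiers and other signals.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def triage(email_text: str) -> str:
    """Ask the model for a coarse phishing judgment on one email."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                "Answer with one word, PHISHING or BENIGN, then one "
                "sentence of reasoning:\n\n" + email_text
            ),
        }],
    )
    return response["choices"][0]["message"]["content"]

print(triage("Unusual sign-in activity. Verify your account here."))
```

This is speculative rather than a recommendation; the point is simply that the same capability cuts both ways.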