Is ChatGPT A New Security Threat?

Utilities face many evolving security threats. Is the AI-driven chatbot ChatGPT one of them? A recent test of AI-created "phishing" emails suggests that, for now, humans are still better at suckering humans than AI is, although this may change.

Cybersecurity training company Hoxhunt compared phishing campaigns generated by ChatGPT with those created by people to determine which stood a better chance of deceiving an unsuspecting victim. According to Hoxhunt, 90% of data breaches start with phishing.

To conduct this experiment, the company sent 53,127 users in 100 countries phishing simulations designed either by human social engineers or by ChatGPT. The simulated phishing messages were delivered to users' inboxes just like any other email. The test was set up to elicit three possible responses:

Success: The user reports the phishing simulation as dangerous via the Hoxhunt threat reporting button.

Miss: The user doesn’t interact with the phishing simulation at all.

Failure: The user takes the bait and clicks the malicious link in the email (since this is a simulation, nothing dangerous actually happens).


Figure: The results of the phishing simulation created by Hoxhunt

In the end, human-generated phishing emails suckered more victims than those created by ChatGPT. Users fell for the human-generated messages at a rate of 4.2%, while the rate for the AI-generated ones was lower, at 2.9%. By that measure, human bad actors outperformed ChatGPT by roughly 45%.
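As a quick sanity check, the "outperformed by" figure can be computed directly from the two reported failure rates; the sketch below is illustrative arithmetic only, not part of the Hoxhunt study itself:

```python
# Failure rates reported in the article, as percentages of users
# who clicked the simulated phishing link.
human_rate = 4.2  # human-crafted phishing emails
ai_rate = 2.9     # ChatGPT-generated phishing emails

# Relative difference: how much more effective the human campaigns were
# than the AI-generated ones.
relative_gain = (human_rate - ai_rate) / ai_rate * 100
print(f"Human campaigns were ~{relative_gain:.0f}% more effective")  # ~45%
```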

The learning element of the study showed that security training is effective at reducing users' vulnerability to phishing attacks. Staff with greater awareness of cybersecurity were less likely to fall for these phishing emails, whether they were generated by AI or by humans. The percentage of users who clicked on the malicious link dropped from over 14% among the least-trained individuals to 2-4% for those with more cybersecurity training.