Will Phishing Emails Be Harder to Spot in the Age of AI?


By Emily Newton, Editor-in-Chief at Revolutionized Magazine

AI phishing emails are more realistic, personalized and varied than conventional spam content. Hackers are leveraging generative AI to create huge volumes of next-gen phishing emails. Are these messages harder to identify? What are the risks and red flags of AI-generated phishing content?

Heightened Risks of Phishing in the Age of AI

OpenAI’s ChatGPT has over 100 million users today and holds the record for the fastest-growing consumer application in history. Most people using ChatGPT and similar generative AI tools are simply doing research or automating harmless tasks. Threat actors have discovered the technology’s benefits as well, even though directly asking ChatGPT to create a “phishing email” or “malware” triggers a canned refusal explaining that harmful content is prohibited.

AI phishing emails are now a severe security threat. They augment the risks of social engineering by making it more challenging to spot fake content. There are a few ways hackers accomplish this.

Greater Volume and Variety of Phishing Content 

Most people are familiar with repetitive phishing attacks, such as the infamous fake phone calls about “your car’s extended warranty” or emails from strangers riddled with grammatical errors. Identifying these messages as spam is easy because they all use the same format, language and strategy.

In the age of AI, phishing emails are getting much more varied. Hackers can rapidly create new messages using generative AI, leading to less repetitive narratives and formats. More variety also increases the likelihood of tricking a victim because hackers can create fake content for numerous brands or companies.

For example, someone might know to ignore a message from a bank they have no account with. What if the phishing email appears to come from the victim’s actual bank, though? Generative AI makes creating phishing emails so efficient that hackers can send out a far greater volume and variety of them, raising the odds of fooling victims.

More Personalization 

One of the most obvious signs that a message is phishing is irrelevance. For example, an email from someone the reader has never met or a business they have never visited is unlikely to trick them. With traditional phishing methods, it’s difficult for hackers to create customized phishing content on a large scale.

AI phishing emails are far easier to personalize. Creating content with generative AI is highly efficient, removing the time barrier that conventionally limits personalization. Additionally, AI can help hackers find information on potential targets.

Many people aren’t aware that generative AI services can retain user conversations and use them for further training. Anything users type in or ask may effectively be remembered. As a result, there is a real risk of sensitive information exposure when using AI tools.

Hackers can then ask chatbots about various businesses, employers or even individuals, drawing on information other users previously shared with the AI. For example, a scammer could ask a chatbot to create an email in the same style as Apple’s customer service team, address the email to a specific person and mention a particular product. The whole process takes only seconds, so it’s effortless for hackers to generate hundreds or thousands of personalized messages.

Fewer Grammar and Spelling Errors 

Spotting phishing emails traditionally relies on looking for red flags like demands for money, suspicious attachments or unusual grammar. AI phishing emails are changing that by making fake content more realistic.

Large language models like ChatGPT are extremely effective at mimicking natural grammar, spelling and writing style. They can even translate content. In fact, generative AI is so good at imitating human writing that it can trick people with fake but plausible research studies and news articles. In 2023, for example, the UK news outlet The Guardian warned its readers after discovering ChatGPT had fabricated articles attributed to its reporters.

Hackers are leveraging this technology to make highly convincing phishing emails. In the age of AI, the old methods of spotting a clumsily written spam email no longer work. More realistic language also increases the likelihood of a phishing email getting around automated spam filters, making users more likely to see phishing content in the first place. There are still red flags, but obvious hints like incorrect grammar are disappearing.
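
To see why fluent language matters so much, consider how a classic keyword-based filter scores a message. The Python sketch below is a toy illustration, not a real filter; the phrases, weights and threshold are invented for demonstration only.

```python
# Toy illustration of a keyword-weighted spam score, the kind of rule
# early filters relied on. Phrases and weights here are invented for
# demonstration -- real filters are far more sophisticated.

SUSPICIOUS_PHRASES = {
    "act now": 2.0,
    "verify you account": 4.0,   # a telltale grammar error
    "click here immediately": 3.0,
    "kindly do the needful": 3.0,
}

SPAM_THRESHOLD = 3.0  # arbitrary cutoff for this example

def spam_score(message: str) -> float:
    """Sum the weights of any suspicious phrases found in the message."""
    text = message.lower()
    return sum(w for phrase, w in SUSPICIOUS_PHRASES.items() if phrase in text)

clumsy = "Dear customer, kindly verify you account, click here immediately."
fluent = "Hi Sam, your statement is ready. Please sign in to review it."

print(spam_score(clumsy) >= SPAM_THRESHOLD)  # True  -> flagged as spam
print(spam_score(fluent) >= SPAM_THRESHOLD)  # False -> slips through
```

A well-written AI phishing email reads like the second message, which is exactly why word-level heuristics alone no longer catch it.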

Defense Against AI Phishing Emails

AI makes it easier for bad actors to generate realistic, personalized phishing content in huge volumes. It is fanning the flames of a cybersecurity threat that was already on the rise. In 2022 alone, reported phishing attacks rose 61% compared to 2021. Phishing rates are likely to keep climbing over the next few years, thanks in part to generative AI.

Users can protect themselves from AI phishing emails, though. As realistic as the content is, it can’t perfectly match real human-generated writing. Several signs indicate a piece of content might be AI generated.

For example, AI models take a long time to train, so they usually work from outdated information. Even ChatGPT launched with training data roughly two years out of date. As a result, AI phishing emails might refer to stale data or events that happened years ago.

Repetition and simplicity are also common signs of AI-generated content. Generative models are essentially advanced pattern-recognition algorithms that produce content by recombining patterns from their training data. As a result, AI-generated text tends to reuse words, phrases and sentence structures, and it often lacks depth and emotion.
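
One rough way to quantify that repetitiveness is to measure lexical diversity and repeated phrases. The Python sketch below is a crude heuristic for illustration, not a reliable detector.

```python
import re
from collections import Counter

def repetition_stats(text: str) -> dict:
    """Compute crude repetitiveness signals for a piece of text."""
    words = re.findall(r"[a-z']+", text.lower())
    bigrams = list(zip(words, words[1:]))
    return {
        # Type-token ratio: unique words / total words (lower = more repetitive)
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        # How many times the single most common two-word phrase appears
        "top_bigram_count": max(Counter(bigrams).values(), default=0),
    }

email_body = (
    "Your account needs attention. Please review your account today. "
    "Reviewing your account keeps your account secure."
)

print(repetition_stats(email_body))
# A low ratio plus a frequently repeated phrase is a hint, not proof,
# that the text was machine-generated.
```

No single number is conclusive on its own; signals like these are only useful alongside the other red flags described above.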

Organizations and individuals may also be able to use analysis tools to identify AI phishing emails. Many AI detection apps and websites are available today, some of which are free to use. Some of these tools can even spot AI-generated images and deepfakes. They aren’t foolproof, but they tend to produce more false positives than false negatives.
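
For readers who want to experiment programmatically, one option is an open classifier such as OpenAI’s GPT-2 output detector, published on Hugging Face. The sketch below assumes the transformers library is installed and that the openai-community/roberta-base-openai-detector checkpoint is still hosted under that name; because it was trained on older GPT-2 output, newer models may evade it.

```python
# Minimal sketch, assuming the Hugging Face transformers library and the
# openai-community/roberta-base-openai-detector checkpoint are available.
# The detector was trained on GPT-2 output, so treat its verdicts as hints.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

suspect_email = "Dear valued customer, we detected unusual activity..."
result = detector(suspect_email)[0]
print(result)  # e.g. {'label': 'Fake', 'score': 0.97} -- 'Fake' = likely AI
```

As noted above, such tools lean toward false positives, so a “Fake” verdict should prompt closer scrutiny rather than automatic action.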

Finally, users can double-check whether a message is legitimate by directly contacting whoever it appears to be from. For example, someone who receives an unusual email from their boss can message the boss using verified contact information and ask about it. Double-checking takes some extra time, but it’s a surefire way to spot even the most convincing phishing emails.

Advancing Cybersecurity for AI Threats

AI phishing emails are definitely more challenging to spot compared to most conventional spam content. However, there are still red flags users can keep an eye out for. Generated messages are often outdated, repetitive and simplistic. There are also plenty of tools available today for detecting AI-generated content.


Emily Newton is the Editor-in-Chief at Revolutionized Magazine. A regular contributor to Brilliance Security Magazine, she has over four years of experience writing articles in the industrial sector.



Follow Brilliance Security Magazine on Twitter and LinkedIn to ensure you receive alerts for the most up-to-date security and cybersecurity news and information.