Artificial intelligence has changed how people interact with technology. Internet security professionals warn that AI-powered cyberattacks will become more prevalent, necessitating effective countermeasures to mitigate their effects. How can individuals and organizations prepare and strengthen their defenses?
1. Keep Phishing Awareness Skills Sharp
Tools such as ChatGPT have made it easier for scammers to draft phishing emails much faster than their formerly manual methods allowed. As awareness of this emerging threat grows, some people are turning to AI tools to help them distinguish genuine messages from deceptive ones. However, a 2024 study from the United Kingdom showed these products vary in their effectiveness.
Researchers fed 40 genuine and scam emails into a pair of detection tools that used two types of artificial intelligence. The results showed the products were wrong 10% of the time and less successful at recognizing legitimate messages.
Although the programs still showed a decent success rate, this outcome shows people must not become overconfident that their AI products will reach the correct conclusion. Cybersecurity experts say AI will make phishing emails harder to spot, largely because these tools create content without the spelling and grammar errors that previously tipped people off to untrustworthy messages.
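The study's findings can be framed in familiar evaluation terms. Below is a minimal sketch of how a team might score an email-screening tool against a labeled test set, separating the false-positive rate (legitimate mail flagged as phishing, the weakness noted above) from the false-negative rate. The labels and helper name are illustrative, not taken from the study.

```python
def error_rates(labels, predictions):
    """Score a detector against labeled emails.

    Returns (overall error rate,
             false-positive rate: legitimate mail flagged as phishing,
             false-negative rate: phishing mail passed as legitimate).
    Labels and predictions are "legit" or "phish".
    """
    pairs = list(zip(labels, predictions))
    errors = sum(1 for y, p in pairs if y != p)
    legit = [(y, p) for y, p in pairs if y == "legit"]
    phish = [(y, p) for y, p in pairs if y == "phish"]
    fp = sum(1 for _, p in legit if p == "phish") / len(legit)
    fn = sum(1 for _, p in phish if p == "legit") / len(phish)
    return errors / len(pairs), fp, fn
```

Tracking the two rates separately matters here: a tool can post a respectable overall accuracy while still misclassifying a disproportionate share of legitimate messages, which is exactly the imbalance the researchers observed.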
Preventive Measures
Delayed action is one of the most effective countermeasures against these types of AI-powered cyberattacks. Scammers try to get people to act immediately, often by warning that they must respond to avoid dire consequences, so pausing before responding disrupts that pressure. Additionally, cybersecurity teams should thoroughly vet AI email-screening tools, understanding their potential advantages while recognizing that all solutions have some imperfections.
Providing regular updates to employees or other relevant parties who use corporate networks will create the necessary awareness to help them understand that although AI has some helpful capabilities, bad actors may also use it to their advantage to launch attacks.
2. Use Technology to Counter AI-Powered Cyberattacks
Estimates suggest AI adoption rates will increase by 37.3% between 2023 and 2030. Fortunately, even as cybersecurity experts face a more challenging environment due to the technology’s capabilities and how bad actors use it, they can also use artificial intelligence to detect and minimize risks.
Staying ahead of adversaries’ methods may mean replicating them for defense purposes. Some artificial intelligence-based monitoring tools flag instances of abnormal activity, helping teams respond to them faster.
Since AI algorithms process vast amounts of data quickly, these tools establish detailed baselines and track potential deviations. Similarly, digital twins can help cybersecurity teams see the potential effects of cyberattacks on their systems, giving them the necessary insights to prepare for them.
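The baseline-and-deviation approach described above can be illustrated with a deliberately simple statistical version. This sketch flags observations that stray too far from a learned baseline using a z-score test; real AI-based monitoring tools build far richer models, and the metric values here are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observations, threshold=3.0):
    """Flag observations deviating from the baseline by more than
    `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observations if abs(x - mu) > threshold * sigma]

# Example: hourly login counts. A sudden spike stands out against
# the established baseline and would be surfaced for review.
baseline = [10, 12, 11, 13, 12, 11, 10, 12]
suspicious = flag_anomalies(baseline, [12, 50, 11])
```

The same shape scales up: establish what "normal" looks like from historical data, then surface whatever falls outside it fast enough for a human or automated response.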
Preventive Measures
Cybersecurity professionals should remain open-minded about how they can use technologies to reduce attack risks. An important thing to remember is best practices still apply, regardless of whether adversaries use AI.
Decision-makers should also test AI products before committing to them and note how peers in the same or other industries apply technologies. In one example, a bank decreased false positives by 60% after deploying an artificial intelligence tool.
A balanced perspective is an excellent countermeasure against the sales promises associated with AI-driven cybersecurity products. Prospective buyers should watch for flashy language or unsupported claims. Ideally, they can review case studies or other concrete examples of comparable clients that have deployed a particular tool and achieved notable results.
3. Stay Wary of Current-Events-Driven Scams
Cybercriminals love orchestrating their attacks to affect as many victims as possible. That often means they choose buzzworthy themes. Anyone who has seen a celebrity supposedly endorsing a new investment opportunity knows this tactic well.
AI-generated images are central to these scams, with the cybercriminals behind them hoping people will fall for them due to the associated name value. A more recent and timely example relates to initial coin offering (ICO) scams mentioning the upcoming Paris Olympic Games.
Cybersecurity researchers said the websites advertising the ICO investments often feature AI-generated images. They speculate that scammers use them because they are more cost-effective and time-efficient than producing photos through traditional means. Additionally, if the scammers opt to steal pictures rather than generate them with AI, the tactic may become more evident to some targets due to duplication.
Centering these newer scams around the Olympic Games shows how cybercriminals often capitalize on newsworthy events or buzzworthy topics to drive interest. For example, research teams found some ICO scams containing AI-generated images that lured people by advertising Olympics ticket giveaways.
Preventive Measures
Cybercriminals use social media platforms as the main advertising channels for their scams. Individuals can counter this by remaining vigilant against any offers that seem too good to be true. Cybersecurity teams should circulate warnings explaining that some AI-generated content looks real at first glance, and people may not look too closely when they are excited about an offer found online.
Corporate cybersecurity teams should also block domains known to host scammy content, whether related to ICO scams or not. That preventive measure will reduce the chances of workers or anyone else using the network coming across the content and believing it is real.
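Domain blocking of the kind described above usually happens at the DNS resolver or secure web gateway, but the core check is simple to sketch. This is a minimal illustration, assuming a hand-maintained blocklist; the domain names are placeholders, and real deployments pull from commercial or open threat-intelligence feeds. Note that the check matches parent domains too, so subdomains of a blocked site are also caught.

```python
from urllib.parse import urlparse

# Placeholder entries; production blocklists come from threat-intel feeds.
BLOCKED_DOMAINS = {"example-ico-scam.test", "fake-olympics-tickets.test"}

def is_blocked(url):
    """Return True if the URL's host, or any parent domain of it,
    appears on the blocklist."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the full host and each parent suffix, e.g. for
    # "promo.a.test": "promo.a.test", "a.test", "test".
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS
               for i in range(len(parts)))
```

Layering this with the awareness training above covers both sides: the filter stops most traffic to known-bad domains, while trained users remain the backstop for domains the list has not caught yet.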
AI-Powered Cyberattacks Are Increasing
Since mounting evidence shows AI-powered cyberattacks are becoming more common, cybersecurity professionals must remain vigilant for these new, more advanced tactics. Awareness of expert perspectives and peer insights can help them spot emerging trends and prepare accordingly.
However, the main thing to remember is that many criminals use AI to accelerate familiar methods, such as phishing attacks or crypto scams. This reality reinforces why sector professionals must understand that although they may deal with attempted attacks more often, the fundamental mechanisms perpetrators use may not change as much as they imagine.
Emily Newton is the Editor-in-Chief at Revolutionized Magazine. A regular contributor to Brilliance Security Magazine, she has over four years of experience writing articles in the industrial sector.
Follow Brilliance Security Magazine on Twitter and LinkedIn to ensure you receive alerts for the most up-to-date security and cybersecurity news and information.