ChatGPT is Going to Change Cybersecurity. Here’s How

By Ramil Khantimirov, CEO and co-founder of StormWall

Artificial intelligence is taking the world by storm. ChatGPT, which by now everyone has probably heard of, debuted at the end of November 2022 and, in just two months, became a worldwide phenomenon, reaching over 100 million users and setting the record for the fastest-growing consumer application in history.

Seemingly overnight, people all over the world could talk to a computer program as if it were a real person, something that only a short while ago was possible only in science fiction.

Also straight out of science fiction, ChatGPT has a content moderation policy designed to ensure it does no harm.

Unfortunately, it didn’t take long for the Internet community to find a way around it. It turns out the chatbot is more than willing to break the First Law of Robotics, the rule that a robot may not cause harm to a human being, if you insist hard enough.

AI is about to change cybercrime forever

Cybercrime has long been attractive to those seeking to make a quick profit, but because of the technological complexity, getting good at hacking simply wasn’t realistic for most people.

This is no longer the case. With ChatGPT (and, presumably, other AI chatbots like the new Bing and the upcoming Bard), it doesn’t matter if you’re technologically illiterate. You can tell the chatbot what you want in plain language, and it will do it, code and all: design a phishing website, craft a fraudulent email in another language, or build functional malware.

This unprecedented accessibility of cybercrime could cause a massive shift in cybersecurity. Here are three major ways the threat landscape might change in the coming years:

1. Outbreaks of primitive ransomware will target small and medium businesses

Ransomware is one of the most devastating types of malware. It works by encrypting a victim’s files and then demanding a ransom payment in exchange for the key that decrypts them. Such attacks are often difficult to recover from because once the infection takes place, there is little the victim can do.

ChatGPT is not yet advanced enough to write ransomware as complex as that used by prominent cyber gangs — like REvil/Sodinokibi or ALPHV. These strains are destructive and sophisticated. Their targets are often large corporations, and their ransom amounts reach tens of millions of dollars per attack.

But ChatGPT is more than capable of creating simple ransomware. A report by the Israeli security firm Check Point describes an underground forum thread in which one user brags about creating a Python encryptor with ChatGPT. It’s the first script they’ve ever written, the user says.

A script that encrypts files isn’t fundamentally malicious, but with some modifications, it can be used offensively. It won’t be sufficient to penetrate the defenses of a company with a competent cybersecurity team. But most SMBs don’t have one.
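To see why such a script is dual-use, here is a minimal, benign sketch of what a file encryptor looks like (the Check Point report does not include the actual script; this is an illustrative toy cipher built from Python’s standard library, with hypothetical function names, the kind of thing one might write to protect a backup). The only thing separating a backup tool from ransomware is who holds the key:

```python
import hashlib
import os


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode.

    Illustrative only -- real tools use vetted primitives
    such as AES-GCM or the `cryptography` library's Fernet.
    """
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = key + nonce + counter.to_bytes(8, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return bytes(out[:length])


def encrypt_file(path: str, key: bytes) -> None:
    """Read a file, XOR it with the keystream, write <path>.enc."""
    nonce = os.urandom(16)  # fresh nonce so the same key can be reused
    data = open(path, "rb").read()
    cipher = bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))
    with open(path + ".enc", "wb") as f:
        f.write(nonce + cipher)


def decrypt_file(path: str, key: bytes) -> bytes:
    """Recover the plaintext -- but only if you hold the key."""
    blob = open(path, "rb").read()
    nonce, cipher = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(cipher, keystream(key, nonce, len(cipher))))
```

Run against a backup directory, this is a legitimate utility. Point it at someone else’s documents and withhold the key, and the very same two dozen lines become extortion. That asymmetry, not the code itself, is what makes AI-assisted scripting a threat.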

Small and medium businesses typically lack the funds to run a Security Operations Center and often have no incident response plan. Many still use outdated or end-of-life software, which leaves them vulnerable to attack.

As more cybercriminals catch on to the fact that they can easily target millions of unprepared victims, ransomware attacks directed at small companies could increase dramatically.

2. Phishing attacks will get more common, sophisticated, and dangerous

A study by the security firm McAfee found that 97% of users can’t tell a well-crafted phishing email from an authentic one. Phishing is already the most common type of attack: roughly 1 in every 99 emails sent is fraudulent, and about 25% of those make it past spam filters.

The only reason we aren’t falling for phishing left and right is that most attempts don’t get much more sophisticated than the famous Nigerian prince scam. In fact, the most common advice on spotting a phishing attack is to look out for poor grammar and spelling errors.

Well, this won’t work anymore. AI chatbots don’t make typos, and their grammar is nearly perfect.

In fact, ChatGPT excels at writing emails; that’s what most legitimate guides suggest using it for, and it is only natural that cybercriminals are following the same advice. It can also translate text into multiple languages, making it ideal for crafting sophisticated phishing emails that target victims anywhere in the world.

To make things worse, while one can argue that better content moderation will stop the misuse of AI for writing malicious code, there is no reliable way to identify an attempt to write a fraudulent email.

Well-made phishing attacks already mimic communication from a real business. So if someone tells ChatGPT to write an email on behalf of MetaMask asking the user to update their secret phrase, who’s to say it’s not an employee looking for inspiration?

3. Cyberattacks will become more widespread as people learn to use offensive tools

One area where AI can be particularly useful is teaching people computer fundamentals, like using the command line. Unlike googling a topic, talking to ChatGPT is like having a personal tutor: it patiently explains what you don’t understand and will solve a task for you if you ask. This opens many doors for cybercriminals. For example, what if someone wants to launch a DDoS attack but doesn’t know how?

This situation is common in industries such as retail, where DDoS attacks often come from competing businesses. These are not technically proficient users, and they frequently lack the required knowledge, such as Linux fundamentals. ChatGPT can help fill in those gaps.

Let’s perform a harmless experiment to demonstrate the point by asking ChatGPT how to test a personal server for DDoS resilience:

The bot readily educates us about the basics of DDoS, and this is without using a jailbreak. It then goes on to give step-by-step instructions:

Hping3 is a popular DDoS tool used by threat actors and penetration testers alike, but it’s not the most impactful. As a result, it may not be enough to take down a well-hardened website.

But small and medium businesses that lack safeguards against DDoS remain exposed, which positions them to be the most affected group if AI-assisted attacks become more widespread.

Wrapping up

I usually like to finish these articles on a positive note, but this time, it is difficult to find one.

There is an argument that since this technology is equally accessible to the good guys and the bad guys, every advantage it gives adversaries is matched by an equal advantage for defenders. I don’t think this is true.

The average owner of a small business isn’t interested in technology. They don’t understand how application and system security work, nor should they have to. They haven’t heard of YARA rules, don’t know what a WAF does, or how a UDP flood brings down a website.

There isn’t a magic button that hardens a security perimeter. But a magic button that breaks one down? That, we now have. How it will impact us remains to be seen.

Ramil Khantimirov is the CEO and co-founder of StormWall, an international cybersecurity company. He holds a Ph.D. in Computer Science. Before co-founding StormWall in 2013, Ramil gained extensive experience in both IT architecture and management. A recognized expert in the field of cybersecurity, he has authored many articles on protection from DDoS attacks and has spoken at numerous professional conferences, where he was among the first to research how well organizations can be protected from DDoS attacks and how that protection can be improved. He aims to apply the full extent of his knowledge and skills to improving the safety and security of the information society by creating technology that protects against hackers and other malefactors.

Follow Brilliance Security Magazine on Twitter and LinkedIn to ensure you receive alerts for the most up-to-date security and cybersecurity news and information.