How Can We Apply Deep Learning Algorithms for Cybersecurity?

By Emily Newton

Cybersecurity practitioners are continually concerned with staying ahead of online criminals. However, as such attacks become more prevalent, it’s getting harder for them to succeed in that already-daunting task. Deep learning may help.

Deep learning is a subset of artificial intelligence that uses algorithms inspired by the human brain. These artificial neural networks learn from examples contained in massive amounts of data. Here’s an eye-opening look at how deep learning can deliver better cybersecurity outcomes.

Enabling Near Real-Time Threat Detection and Prediction

Cybersecurity solutions that do not use deep learning typically safeguard only against known attacks and raise alerts only after systems are infected. Such protections are better than none, but they fall short. Jonathan Kaftzan, vice president of marketing at Deep Instinct, a company that applies deep learning to threat detection, says the approach delivers speedy results. The company’s tool can also predict an organization’s likelihood of encountering previously unseen threats.

Kaftzan explained, “The time it takes us to analyze a file we’ve never seen before to assess whether it is malicious or not, before you’ve even clicked it, is 20 milliseconds. In another 50 milliseconds, we’ll be able to tell you where the attack has come from and what it is autonomously, without any human being involved in the process. The time it takes to remediate and contain the attack is under a minute.”

Cybercriminals continually tweak their attack methods to cause the most damage. That means it’s not always sufficient to deploy a system that recognizes known threats. Deep learning could help companies become more proactive with their cybersecurity while responding substantially faster to possible attacks.

Detecting Deepfake Content

Deepfake content uses advanced artificial intelligence to create fabricated media that looks real. It’s also an often-overlooked cybersecurity threat. In one example, a deepfaked tweet about an injured Barack Obama caused a drop of more than $130 billion in stock value within minutes. If a criminal uses deepfake content to impersonate a brand, the victimized company could quickly get caught up in misinformation that spreads rampantly and causes reputational damage.

However, researchers at MIT’s Media Lab relied on deep learning to spot deepfake material. They trained the neural network on 100,000 fabricated videos and 19,154 genuine ones. The results helped the team realize that the fakes often have shared defining characteristics.

They learned that high-end deepfaked material almost always consists of manipulated images, such as putting someone else’s face on a person’s body in a video. They also pointed out that faked content may have inconsistencies related to facial hair, the glare on a person’s glasses or the wrinkles of their skin. The group acknowledged that spotting deepfakes is not easy. However, they used their deep-learning experiment to create a website that helps people learn to recognize them.

Improving Threat Responses Through Managed Exposures

Deep learning algorithm performance improves through various training methods. Supervised learning uses a labeled dataset to teach the algorithm how to respond to previously unseen information. Interest is also growing in reinforcement learning, which is increasingly combined with deep neural networks. In that approach, the algorithm tries to solve a problem without specific instructions or details. It learns through trial and error, earning rewards for making progress and receiving penalties for mistakes.
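The trial-and-error loop can be sketched with tabular Q-learning on a toy problem. Everything here — the corridor environment, the reward values, the hyperparameters — is invented for illustration; real deep reinforcement learning replaces the lookup table with a neural network:

```python
import random

# Toy Q-learning sketch: an agent in a five-state corridor learns, by
# trial and error, to walk right toward a reward at the far end.
GOAL, N_STATES, ACTIONS = 4, 5, (-1, +1)   # step left / step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise exploit the best-known action
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        # Reward for reaching the goal, small penalty for every other step
        reward = 1.0 if nxt == GOAL else -0.01
        q[(state, action)] += alpha * (
            reward + gamma * max(q[(nxt, a)] for a in ACTIONS) - q[(state, action)]
        )
        state = nxt

# The greedy policy after training: the preferred action in each state
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)
```

After 200 episodes, the learned policy steps right (+1) in every state — the rewards and penalties alone, with no explicit instructions, shaped that behavior.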

The authors of a 2019 research paper pointed out that malicious parties could launch adversarial attacks that reduce the performance of algorithms trained by reinforcement learning. Fortunately, the people who develop and improve neural networks are not powerless against those damaging efforts.

The authors noted that adversarial training is one of the most common strategies for making neural networks more robust. The algorithms are deliberately exposed to simulated malicious attacks during training, so that if similar attacks occur in the real world, the network is better prepared to resist them. This research may prove crucial as deep learning becomes more widespread in cybersecurity. After all, the people who deploy it want assurance that cybercriminals cannot easily fool the algorithms into facilitating attacks.
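As a rough illustration of the idea — not the paper’s actual method — the sketch below trains a simple perceptron twice on invented two-feature data: once normally, and once with an FGSM-style perturbed copy of each example added during training. The robust model holds up better when the test points are nudged toward the wrong side:

```python
import random

def sign(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def train(points, epochs=50, eps=0.0, lr=0.1):
    """Perceptron; if eps > 0, also train on worst-case perturbed copies."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in points:
            variants = [x]
            if eps > 0:  # adversarial copy: nudge each feature against the label
                variants.append([xi - eps * y * sign(wi) for xi, wi in zip(x, w)])
            for v in variants:
                if y * (w[0] * v[0] + w[1] * v[1] + b) <= 0:  # misclassified
                    w = [wi + lr * y * vi for wi, vi in zip(w, v)]
                    b += lr * y
    return w, b

def accuracy(w, b, points, eps):
    """Accuracy when each point is perturbed eps toward the wrong side."""
    ok = 0
    for x, y in points:
        v = [xi - eps * y * sign(wi) for xi, wi in zip(x, w)]
        ok += y * (w[0] * v[0] + w[1] * v[1] + b) > 0
    return ok / len(points)

# Invented, linearly separable data with a wide margin around x0 + x1 = 0
random.seed(1)
data = []
while len(data) < 100:
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    s = x[0] + x[1]
    if abs(s) > 0.5:
        data.append((x, 1 if s > 0 else -1))

plain = train(data, eps=0.0)
robust = train(data, eps=0.1)
print(accuracy(*plain, data, 0.1), accuracy(*robust, data, 0.1))
```

The adversarially trained model keeps essentially perfect accuracy under the same perturbation, because it has already seen — and corrected for — those attacks during training.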

Verifying the Source of Anonymously Authored Fake News Content

Fake news has become a prominent issue in today’s society, especially when it spreads rampantly across social media. Researchers concerned that misinformation could undermine people’s willingness to take the COVID-19 vaccine recently created a game that teaches players three techniques that often make fake news seem real. That’s a good start in helping people recognize fake news, but more progress is needed in tracing it to its source.

Some analysts see the fake news problem as a data-integrity issue, and thus a cybersecurity problem. For example, there has been a surge in websites and emails promising COVID-19 cures, often featuring genuine health authorities’ logos. Many require people to download a file that supposedly contains the desired information about curing the virus. The trouble is that the download delivers nothing but malware that infects their computers.

A trio of researchers from Baidu Security applied deep learning algorithms to identify the people responsible for authoring online misinformation. They trained the neural network on 130,000 articles written by more than 3,600 people, pulled from eight websites. The algorithm correctly identified an anonymous author of fake news as one of five possible writers 93% of the time. It needs further development to become more precise, but it shows the potential for fighting this facet of the problem.
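The underlying task — matching an anonymous text to the most similar known author — can be illustrated with a much simpler classical baseline than Baidu’s neural approach: comparing character-trigram profiles with cosine similarity. All texts and author names below are invented:

```python
import math
from collections import Counter

def trigrams(text):
    """Count character trigrams, a crude fingerprint of writing style."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a, b):
    """Cosine similarity between two trigram count vectors."""
    dot = sum(a[g] * b[g] for g in a if g in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def attribute(anonymous_text, profiles):
    """Return the known author whose trigram profile is closest."""
    probe = trigrams(anonymous_text)
    return max(profiles, key=lambda author: cosine(probe, profiles[author]))

# Hypothetical authors with distinctive writing tics
profiles = {
    "author_a": trigrams("Frankly, the cure is miraculous. Frankly, buy it now."),
    "author_b": trigrams("Data shows otherwise; data rarely lies, readers."),
}
print(attribute("Frankly, this miraculous cure works.", profiles))
```

The probe text shares many trigrams (“fra”, “mir”, “cur”, and so on) with author_a’s profile, so the baseline attributes it correctly — a neural model learns far subtler stylistic signals, but the comparison step it performs is conceptually similar.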

Identifying Malicious Files

People download files daily, rarely considering that some might contain a virus that makes a computer inoperable and contaminates the network. However, a collaboration between Intel and Microsoft’s security teams led to a project that uses deep learning to identify malware.

The approach converts a malware binary into a grayscale image, which a pattern-recognition algorithm then scans. After training, the researchers found the algorithm could correctly identify malware with 99.07% accuracy and a false-positive rate of less than 3%.
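The byte-to-image step might look roughly like the sketch below — an assumption about the general technique, not the teams’ actual pipeline. Each byte of the file (0–255) becomes one grayscale pixel in a roughly square grid:

```python
import math

def bytes_to_grayscale(blob: bytes):
    """Map each byte to one grayscale pixel, laid out row by row."""
    width = max(1, int(math.sqrt(len(blob))))
    rows = [list(blob[i:i + width]) for i in range(0, len(blob), width)]
    if len(rows) > 1 and len(rows[-1]) < width:   # zero-pad a short last row
        rows[-1] += [0] * (width - len(rows[-1]))
    return rows

# A 256-byte test "file" containing every byte value once -> a 16x16 image
image = bytes_to_grayscale(bytes(range(256)))
print(len(image), len(image[0]))  # 16 16
```

Once the bytes are pixels, structural patterns shared by malware families — repeated code sections, packed regions, padding — become textures an image classifier can learn to recognize.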

One downside is that the algorithm struggles with large images, so the researchers compressed them into JPEG format. They recognized that shrinking billions of pixels into resized JPEGs makes the algorithm less effective, but stressed that the method is still worthwhile because malware files are not typically very big. Moreover, the team clarified that the algorithm could route large files to a metadata-based model that is better equipped to handle them.
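The resizing trade-off can be illustrated with simple block averaging — an illustrative stand-in for JPEG compression, which works differently. Shrinking the image necessarily discards pixel-level detail the classifier might otherwise have used:

```python
def downsample(image, factor):
    """Average each factor x factor block of pixels into one pixel."""
    out = []
    for r in range(0, len(image) - factor + 1, factor):
        row = []
        for c in range(0, len(image[0]) - factor + 1, factor):
            block = [image[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) // len(block))  # fine detail is lost here
        out.append(row)
    return out

# An 8x8 synthetic gradient image shrinks to 4x4
big = [[(r + c) % 256 for c in range(8)] for r in range(8)]
small = downsample(big, 2)
print(len(small), len(small[0]))  # 4 4
```

Each output pixel now stands in for four originals, which is why the researchers reserved this step for cases where the lost detail matters least.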

A Promising Future for Deep Learning in Cybersecurity

Using deep learning in cybersecurity is still not a mainstream practice. However, these examples show why people may get more on board with it soon. Even though some of the use cases mentioned above are in research phases, they highlight why deep learning could result in tremendous cybersecurity progress.

Emily Newton is the Editor-in-Chief of Revolutionized Magazine. She has over three years of experience writing articles in the industrial sector.

Follow Brilliance Security Magazine on Twitter and LinkedIn to ensure you receive alerts for the most up-to-date security and cybersecurity news and information.