HYAS AI-Generated Malware Research Contributes to EU’s AI Act

Contributors to and framers of the European Union’s AI Act are tapping into research from HYAS Labs, the research arm of HYAS Infosec, creators of the first publicly announced “white hat” sample of AI-generated malware, BlackMamba, and its more sophisticated and fully autonomous cousin, EyeSpy.

The research is understood to assist both in the development of proposed policies and in building a greater understanding of the real-world challenges posed by fully autonomous, intelligent malware, challenges that cannot be solved by policy alone.

HYAS research codifies and provides deep insight into the potential harms of fully autonomous and intelligent malware and helps advance cybersecurity protections against AI-driven threats. The AI Act is widely viewed as a cornerstone initiative that is helping shape the trajectory of AI governance, with the United States’ policies and considerations soon to follow. 

AI Act researchers and framers assert that the Act reflects a specific conception of AI systems: it views them as non-autonomous statistical software whose potential harms stem primarily from datasets. The researchers view the concept of "intended purpose," which draws inspiration from product safety principles, as a fitting paradigm, one that has significantly influenced the initial provisions and regulatory approach of the AI Act.

HYAS contributions are helping advance the understanding of AI systems that are devoid of intended purpose, a category that encompasses General-Purpose AI Systems (GPAIS) and foundation models. HYAS contributions specifically help shed new light on the unique challenges posed by GPAIS to cybersecurity.

With the introduction this summer of its BlackMamba proof of concept, HYAS provided the first tangible example of GPAIS "gone rogue." The proof of concept, cited in the research paper "General Purpose AI systems in the AI Act: trying to fit a square peg into a round hole" by Claire Boine and David Rolnick, exploited a large language model to synthesize polymorphic keylogger functionality on the fly, dynamically modifying otherwise benign code at runtime, all without any command-and-control infrastructure to deliver or verify the malicious keylogger functionality.

EyeSpy, the more advanced (and more dangerous) proof of concept from HYAS Labs, is fully autonomous, AI-synthesized malware that uses artificial intelligence to make informed decisions, conduct cyberattacks, and continuously morph to avoid detection. The challenges posed by an entity such as EyeSpy, which can autonomously assess its environment, select its targets and tactics, strategize, and self-correct until successful, all while dynamically evading detection, were highlighted at the recent Cyber Security Expo 2023 in presentations such as "The Red Queen's Gambit: Cybersecurity Challenges in the Age of AI."

In response to the nuanced challenges posed by GPAIS, the EU Parliament has proactively proposed provisions within the AI Act to regulate these complex models. These proposed measures are significant: they will help further refine the AI Act and sustain its usefulness in the dynamic landscape of AI technologies.

HYAS CEO David Ratner said: "The industry as a whole must prepare for a new generation of threats. Cybersecurity and cyber defense must have the appropriate visibility into the digital exhaust and meta information thrown off by fully autonomous and dynamic malware to ensure operational resiliency and business continuity."

HYAS has won several awards for its work on AI-generated malware and for HYAS Protect, its protective DNS solution. HYAS was just named a 2023 Digital Innovator by Intellyx.

Additional Resources:

"General Purpose AI systems in the AI Act: trying to fit a square peg into a round hole" https://www.bu.edu/law/files/2023/09/General-Purpose-AI-systems-in-the-AI-Act.pdf. Paper submitted to WeRobot 2023 by Claire Boine, Research Associate at the Artificial and Natural Intelligence Toulouse Institute, researcher in AI law in the Accountable AI in a Global Context Research Chair at the University of Ottawa, and CEO of Successif, and David Rolnick, Assistant Professor of Computer Science at McGill University and Co-Founder of Climate Change AI.

News – European Parliament – The European Union’s AI Act: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Future of Life Institute – "General Purpose AI and the AI Act": What are general purpose AI systems? Why regulate general purpose AI systems? https://artificialintelligenceact.eu/wp-content/uploads/2022/05/General-Purpose-AI-and-the-AI-Act.pdf

Towards Data Science – "AI-powered Monopolies and the New World Order: How AI's reliance on data will empower tech giants and reshape the global order" https://towardsdatascience.com/ai-powered-monopolies-and-the-new-world-order-1c56cfc76e7d

“The Red Queen’s Gambit: Cybersecurity Challenges in the Age of AI” presented by Lindsay Thorburn at Cyber Security Expo 2023 https://www.youtube.com/watch?v=Z2GsZHCXc_c

Follow Brilliance Security Magazine on Twitter and LinkedIn to ensure you receive alerts for the most up-to-date security and cybersecurity news and information.