AI Security Risk Management: Red Teaming’s Role


By Jeff Broth

Microsoft recently released an AI security risk assessment framework to help organizations audit, track, and improve the security of their AI systems more reliably. The new framework, Microsoft says, drew significant input from the company’s experience in building and red teaming models.

“We recognize that securing AI systems is a team sport. AI researchers design model architectures. Machine learning engineers build data ingestion, model training, and deployment pipelines. Security architects establish appropriate security policies. Security analysts respond to threats. To that end, we envisioned a framework that would involve participation from each of these stakeholders,” Microsoft wrote in a blog post.

Purple teaming may be emerging as the new favorite approach in security validation, but this has not made red teaming irrelevant or passé. Organizations continue to benefit from continuous red teaming in particular as they battle increasingly aggressive and complex cyber threats.

Red teaming in modern cybersecurity

Most organizations today know red teaming as an automated capability integrated into their cybersecurity platforms. It simplifies the mitigation of cyber risk by efficiently discovering potential attack paths into an organization’s IT or digital assets and assessing the impact and consequences of those attacks. The “continuous” descriptor is worth emphasizing: validation that runs without pause leaves cybercriminals no window of opportunity to attack.

Fewer organizations now rely on manual red teaming alone, and many are abandoning purely periodic security validation. Manual red teaming is costly and prone to error, while periodic testing through red teaming and other methods does not guarantee adequate protection, especially for organizations that are attacked frequently.

Red teaming has evolved significantly to provide security that matches present-day needs. It is still mainly aimed at helping blue teams focus their efforts on the exploitable vulnerabilities and exposures that threaten an organization. However, it now incorporates attack surface management for enhanced threat detection at the reconnaissance stage, risk-based exposure management, and continuous evaluation of security controls so that potential attacks are blocked before they can happen.
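To make “risk-based exposure management” concrete, the sketch below ranks exposures by exploitability and business impact, discounting paths already blocked by a validated control. The scoring scale, weights, and asset names are purely illustrative assumptions, not taken from any particular platform or framework.

```python
from dataclasses import dataclass


@dataclass
class Exposure:
    asset: str
    description: str
    exploitability: float       # 0.0-1.0: how easy the path is to exploit today
    impact: float               # 0.0-1.0: business impact if the path is exploited
    compensating_control: bool  # True if a validated control already blocks the path

    @property
    def risk(self) -> float:
        # Assumed scoring: multiply likelihood by impact, then heavily discount
        # (but do not ignore) paths already covered by a validated control.
        score = self.exploitability * self.impact
        return score * 0.3 if self.compensating_control else score


exposures = [
    Exposure("model-api-gateway", "Unauthenticated inference endpoint", 0.9, 0.8, False),
    Exposure("feature-store", "Over-permissive service account", 0.5, 0.9, True),
    Exposure("build-runner", "Outdated dependency in the training pipeline", 0.7, 0.6, False),
]

# Rank the exposures so the blue team sees the most pressing attack paths first.
for e in sorted(exposures, key=lambda e: e.risk, reverse=True):
    print(f"{e.risk:.2f}  {e.asset}: {e.description}")
```

The exact weights matter less than the discipline: every discovered exposure gets a likelihood, an impact, and a note on whether an existing control already neutralizes it.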

The need for AI security risk management

More and more organizations are using artificial intelligence technology. According to a McKinsey global survey, around 50 percent of organizations already use AI in at least one business function. Adoption is most common in product and service development, manufacturing, service operations, risk modeling and analytics, marketing and sales, human resource management, supply-chain management, and corporate finance.

The same survey, however, indicates that cybersecurity is a top concern in embracing AI. “Cybersecurity remains the only risk that a majority of respondents say their organizations consider relevant,” reads a portion of the McKinsey report. Some 62 percent of respondents say cybersecurity is a relevant risk in adopting AI, followed by regulatory compliance (50 percent), personal/individual privacy (45 percent), explainability (39 percent), labor displacement (35 percent), organizational reputation (34 percent), and equity/fairness (26 percent).

Most organizations understand that utilizing AI can result in the emergence of new attack surfaces and vulnerabilities that may not be sufficiently covered by existing security protocols and controls. Current security validation measures may also fail to catch weaknesses and issues that creep in as an organization starts relying on AI and machine learning. These realities are affirmed by a Gartner research paper entitled “Market Guide for AI Trust, Risk and Security Management.”

“AI poses new trust, risk and security management requirements that conventional controls do not address,” the Gartner research avers. Autonomous actions taken by an AI system may conflict with an organization’s security policies and controls. There is also a major risk in relying on pre-formulated AI security models, something many companies resort to because they lack resources or face pressure to meet tight deadlines and output targets.

Red teaming in AI security risk management

As mentioned, Microsoft developed its new AI security risk assessment framework with the knowledge gained from building and red teaming its own models. The result is a framework with the following core characteristics:

  • Comprehensive AI system security perspective – Taking both defensive and adversarial perspectives, security officers examine everything in the AI system, from data collection and processing to model deployment, to identify potential attack points and vulnerabilities. AI supply chains, controls, and policies are also inspected meticulously.
  • Outlining machine learning threats and formulating solutions to mitigate them – After the comprehensive examination, all potential threats are accounted for and strategies to resolve them are introduced. Again, this invokes the spirit of red teaming: solutions should be effective not only against a specific attack but should also anticipate how attacks may be tweaked or evolve.
  • Enabling organizations to conduct risk assessments – Microsoft’s AI security risk management framework guides the collection of the information needed to assess the current state of AI system security. It also enables gap analysis and ongoing tracking of the organization’s security posture as it deals with emerging threats (a minimal sketch of such gap tracking follows this list).
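The framework itself is a questionnaire-style document, but teams typically track the results in tooling of their own. The sketch below shows one hypothetical way to record per-control maturity and rank the largest gaps; the control names, pipeline stages, and maturity scale are illustrative assumptions, not part of Microsoft’s framework.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ControlAssessment:
    control: str           # e.g., "Training data provenance logging" (illustrative)
    pipeline_stage: str    # e.g., "data ingestion", "training", "deployment"
    current_maturity: int  # assumed scale: 0 = absent .. 3 = implemented and monitored
    target_maturity: int
    owner: str
    assessed_on: date = field(default_factory=date.today)

    @property
    def gap(self) -> int:
        # Size of the gap between where the control is and where it should be.
        return max(self.target_maturity - self.current_maturity, 0)


assessments = [
    ControlAssessment("Training data provenance logging", "data ingestion", 1, 3, "ML platform team"),
    ControlAssessment("Model artifact signing", "deployment", 0, 2, "Security architecture"),
    ControlAssessment("Adversarial input monitoring", "inference", 1, 3, "SOC"),
]

# Surface the largest gaps first so remediation effort goes where it matters most.
for a in sorted(assessments, key=lambda a: a.gap, reverse=True):
    print(f"gap={a.gap}  [{a.pipeline_stage}] {a.control} -> owner: {a.owner}")
```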

Microsoft likens the process of securing AI systems to a team sport. AI researchers design the model architectures, while machine learning engineers build the data ingestion, model training, and deployment pipelines. Security architects, meanwhile, are tasked with establishing suitable security policies, and security analysts are responsible for responding to threats.

The security teams are not only trying to reinforce the infrastructure the design and development teams have created. They also view the entire AI system architecture through an attacker’s lens so they can anticipate attacks that may not have been considered during design and development. Rather than merely backing up the system, they explore the many other paths that could lead to vulnerabilities exploitable by malicious actors.

It also helps when security teams integrate existing frameworks such as MITRE ATT&CK to take advantage of authoritative threat intelligence about the adversary techniques that may be turned against AI systems and the infrastructure around them.
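As a rough illustration of that integration, the sketch below pulls the publicly hosted ATT&CK Enterprise STIX bundle (from MITRE’s cti repository on GitHub) and searches technique names and descriptions for a keyword, so findings from an AI-focused assessment can be mapped to ATT&CK technique IDs. The URL and the keyword are assumptions for the example; any ATT&CK-aware tooling could serve the same purpose.

```python
import requests

# Publicly hosted STIX bundle of ATT&CK Enterprise content (MITRE's cti repository).
ATTACK_URL = (
    "https://raw.githubusercontent.com/mitre/cti/master/"
    "enterprise-attack/enterprise-attack.json"
)


def load_techniques():
    """Download the ATT&CK bundle and return active (non-deprecated) techniques."""
    bundle = requests.get(ATTACK_URL, timeout=60).json()
    return [
        obj
        for obj in bundle["objects"]
        if obj.get("type") == "attack-pattern"
        and not obj.get("x_mitre_deprecated", False)
        and not obj.get("revoked", False)
    ]


def search(techniques, keyword):
    """Return (technique ID, name) pairs whose name or description mentions keyword."""
    keyword = keyword.lower()
    hits = []
    for t in techniques:
        text = (t.get("name", "") + " " + t.get("description", "")).lower()
        if keyword in text:
            ext_id = next(
                (
                    ref["external_id"]
                    for ref in t.get("external_references", [])
                    if ref.get("source_name") == "mitre-attack"
                ),
                "?",
            )
            hits.append((ext_id, t["name"]))
    return hits


if __name__ == "__main__":
    techniques = load_techniques()
    # Example keyword: surface techniques relevant to the pipelines feeding a model.
    for ext_id, name in search(techniques, "supply chain"):
        print(ext_id, name)
```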

AI security risk management calls for an approach that remains open to any input that can meaningfully improve threat detection, attack response, and mitigation.

In summary

AI security risk management is relatively new for many organizations, so it is understandable that many are not yet familiar with it. It is a big plus, however, that the majority of organizations are aware of the cybersecurity risks involved in using AI technology. Even better, there are existing frameworks and cybersecurity platforms that address AI system security needs, and they can be expected to be reliable because they were built on lessons from red teaming and other cybersecurity experience.


Jeff Broth is a business writer and advisor, covering finance, cyber, and emerging fintech trends. He has consulted for SMB owners and entrepreneurs for eight years.


Follow Brilliance Security Magazine on Twitter and LinkedIn to ensure you receive alerts for the most up-to-date security and cybersecurity news and information.