AI Agent Ownership – An Underlying NIST AI Risk Management Framework Control


Artificial intelligence (AI) is becoming basic table stakes for nearly every organization. Whether a software company embeds it into its product or a website uses it to answer questions, AI agents are changing the world. In a May 2025 survey, 75% of senior executives agreed or strongly agreed that AI agents will reshape the workplace more than the internet did. Further, 52% of the executives said their organizations have either broadly or fully adopted AI agents. 

While 66% of the surveyed executives said that this adoption already delivers measurable value through increased productivity, 34% worry about the cybersecurity implications and 25% have compliance and legal concerns. 

Compliance offers a set of baseline best practices for organizations seeking to integrate AI agents into their systems. However, even these guardrails only hint at the important role that assigning ownership of AI agents plays. As organizations look to compliance frameworks, the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) offers guidelines that they can implement. 

NIST AI RMF: Definition and Limitations

As early as 2021, NIST began working with stakeholders to understand AI risks. Over the course of two years, NIST held a series of workshops to build a risk management framework that establishes outcomes and provides suggestions for achieving them. 

The NIST AI RMF, published in January 2023, set out these four main functions:

  • Govern: Creating a culture of risk management
  • Map: Recognizing contexts and relating risks to the identified context
  • Measure: Assessing, analyzing, and tracking the identified risks
  • Manage: Prioritizing risks and acting based on their projected impact

Similar to the NIST Cybersecurity Framework (CSF), the AI RMF then divides each Function into Categories and Subcategories. 

As part of identifying AI risks, the AI RMF briefly notes that the CSF and the NIST Privacy Framework act as the underpinnings of AI system and ecosystem resilience, security, and privacy. In section “3.3 Secure and Resilient,” the AI RMF lists the following security concerns:

  • Data poisoning
  • Exfiltration of models
  • Theft of training data
  • Lost intellectual property through AI system endpoints

Despite this overlap, the AI RMF specifically identifies several ways that AI risks differ from traditional software risks. While privacy and cybersecurity risk management considerations apply to an AI system’s design, development, deployment, evaluation, and use, existing frameworks may not adequately address certain security concerns, including:

  • Evasion
  • Model extraction
  • Membership inference
  • Availability
  • Machine learning attacks
  • The AI system attack surface
  • Third-party AI risks where model training occurs outside the organization’s security controls

A Deeper Dive into NIST AI RMF Subcategories

Digging a little deeper into the NIST AI RMF, several subcategories hint at the identity and access risks that controls under the NIST CSF may not adequately manage. 

Govern 1.6

The NIST AI RMF defines this subcategory as follows:

Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.

Under the transparency and documentation section, the AI RMF Playbook suggests that organizations consider designating an individual or team responsible for maintaining the inventory. 
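
To make that concrete, here is a minimal, hypothetical sketch of what a single AI agent inventory entry with a named owner might look like. The AIAgentRecord structure, its field names, and the example agent are illustrative assumptions, not anything the RMF or Playbook prescribes.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIAgentRecord:
        """One entry in a hypothetical AI agent inventory."""
        agent_id: str      # internal identifier for the agent
        description: str   # what the agent does
        owner: str         # named individual accountable for the agent
        owning_team: str   # team that backs up the individual owner
        environments: list[str] = field(default_factory=list)  # where the agent runs
        last_reviewed: date | None = None                       # last ownership review

    # Example: registering a support-triage agent with a named owner
    inventory = [
        AIAgentRecord(
            agent_id="support-triage-bot",
            description="Summarizes and routes inbound support tickets",
            owner="jdoe",
            owning_team="IT Service Desk",
            environments=["prod"],
            last_reviewed=date(2025, 5, 1),
        )
    ]

    # A simple inventory check: every agent must have a named owner
    unowned = [a.agent_id for a in inventory if not a.owner]
    assert not unowned, f"AI agents missing an owner: {unowned}"

Even a lightweight record like this gives the organization something it can resource and review according to its risk priorities.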

Govern 2.1

The NIST AI RMF defines this subcategory as follows:

Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.

Under the suggested actions, the AI RMF lists a range of positions directly and indirectly related to AI systems. Beyond that, it also notes that organizations may want to consider:

  • Defining the AI risk management roles and responsibilities for positions directly and indirectly related to AI systems. 
  • Identifying the person or team accountable for the AI’s decisions and ensuring awareness of the system’s intended users and limitations. 
  • Identifying the personnel roles, responsibilities, and delegation of authorities involved across the AI system’s design, development, deployment, assessment, and monitoring. 
  • Implementing accountability-based data management and protection practices. 

Measure 2.10

The NIST AI RMF defines this subcategory as follows:

Privacy risk of the AI system – as identified in the MAP function – is examined and documented.

To help organizations manage these risks, the AI RMF notes that organizations should consider the following (a brief sketch of the access-control point appears after the list):

  • Protocols and access controls for training sets or production data containing personally sensitive information, like authorization mechanisms, duration of access, and type of access. 
  • Collaboration across privacy experts, AI end users and operators, and other domain experts to determine differential privacy metrics within use cases. 
  • Implementation of accountability-based data management and protection practices. 
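
As one way to picture those protocols, here is a minimal sketch of a time-bound, read-only access grant for an AI agent that touches sensitive data. The grant structure, field names, agent name, and the is_access_allowed check are assumptions made for illustration; they are not controls the AI RMF prescribes.

    from datetime import datetime, timedelta, timezone

    # Hypothetical access grant for a dataset containing sensitive information
    grant = {
        "principal": "support-triage-bot",      # the AI agent identity
        "resource": "hr/tickets-training-set",  # dataset being accessed
        "access_type": "read-only",             # type of access
        "expires_at": datetime.now(timezone.utc) + timedelta(days=30),  # duration of access
        "approved_by": "jdoe",                  # named owner who authorized the grant
    }

    def is_access_allowed(grant: dict, principal: str, resource: str, action: str) -> bool:
        """Check the principal, resource, access type, and expiry before allowing access."""
        return (
            grant["principal"] == principal
            and grant["resource"] == resource
            and grant["access_type"] == action
            and datetime.now(timezone.utc) < grant["expires_at"]
        )

    # The agent may read the training set until the grant expires; anything else is denied
    print(is_access_allowed(grant, "support-triage-bot", "hr/tickets-training-set", "read-only"))  # True
    print(is_access_allowed(grant, "support-triage-bot", "hr/tickets-training-set", "write"))      # False

Tying each grant to a named approver and an expiration date keeps the access both accountable and reviewable.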

Leveraging Identity Hygiene to Manage AI Risks

Functionally, these subcategories are the compliance equivalent of “tell me you need to implement identity hygiene without telling me you need to implement identity hygiene.” Like many non-human identities, AI agents often require elevated privileges to perform their tasks. For example, they may engage in any of the following behaviors:

  • Running with service-level access. 
  • Acting as administrators on key platforms. 
  • Simultaneously acting across multiple environments. 

However, unlike other non-human accounts, AI agents are dynamic, adapting and reconfiguring themselves as they interact with APIs, applications, users, or other agents. Even when an organization implements initial identity controls, these technologies can fall out of compliance. 

Even more challenging, AI agents exist across multiple domains, leading to ambiguity over ownership. For example, the NIST AI RMF suggests that organizations assign responsibility to an individual or team, yet the teams could be any of the following:

  • Engineering team that developed the model. 
  • Third-party Software-as-a-Service (SaaS) provider that deploys the technology. 
  • Human resources, security, or IT team that controls the data. 

To establish and enforce the appropriate accountability, transparency, and human oversight, organizations need identity hygiene more than ever. Identity hygiene encompasses the processes, rules, and policies that manage access to digital assets. 

The key components of identity hygiene map directly to the controls outlined in the NIST AI RMF. As part of managing the AI agent lifecycle, organizations need to do the following (a simplified sketch of the monitoring and review steps appears after the list):

  • Systematically discover AI agent-related identities and assign ownership: Inventorying every AI identity, including those embedded in SaaS platforms or third-party tools, and assigning each a named, individual owner. 
  • Tailor access to assets: Limiting initial access based on specific use cases, such as provisioning accounts, triaging requests, or summarizing data. 
  • Implement access accountability: Ensuring strong authentication and scoping permissions to specific resources, such as datasets or applications. 
  • Identify protocol deviations: Continuously monitoring access for drift, privilege accumulation, or anomalous decision patterns. 
  • Periodically review access: Validating ownership at key control points to confirm current access levels or deprovision unnecessary access. 
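
For the last two steps, a minimal sketch of what drift detection and an overdue ownership review check might look like follows. The permission sets, the 180-day review interval, and the agent name are illustrative assumptions, not values taken from the NIST AI RMF.

    from datetime import date

    # Hypothetical records: the approved baseline versus what the agent can do today
    approved_permissions = {"support-triage-bot": {"tickets:read", "tickets:summarize"}}
    observed_permissions = {"support-triage-bot": {"tickets:read", "tickets:summarize", "hr:export"}}
    ownership_reviews = {"support-triage-bot": date(2024, 11, 1)}  # last validated ownership

    REVIEW_INTERVAL_DAYS = 180  # assumed review cadence

    def audit_agent(agent_id: str, today: date) -> list[str]:
        """Flag privilege drift and overdue ownership reviews for one AI agent."""
        findings = []
        drift = observed_permissions[agent_id] - approved_permissions[agent_id]
        if drift:
            findings.append(f"{agent_id}: unapproved permissions accumulated: {sorted(drift)}")
        if (today - ownership_reviews[agent_id]).days > REVIEW_INTERVAL_DAYS:
            findings.append(f"{agent_id}: ownership review overdue")
        return findings

    # Running the audit surfaces both the accumulated hr:export permission and the stale review
    for finding in audit_agent("support-triage-bot", date(2025, 6, 1)):
        print(finding)

The point is less the specific tooling than the pattern: a known baseline, a named owner, and a recurring check that catches the agent when it drifts away from both.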

AI Agent Ownership: Say the Quiet Part Out Loud

While the NIST AI RMF may be quiet about managing AI agent identities, organizations need to say this quiet part out loud: these dynamic non-human identities must be known and managed. AI agents represent the next frontier of identity governance. 

Without monitoring access decisions, organizations create risks that may be viewed as negligence if, or more likely when, a security or privacy incident occurs. As AI becomes integrated into every business function, organizations need to connect each AI agent to a human steward who understands what it does, how it operates, what credentials it uses, and how its behavior should be evaluated. 


Rosario Mastrogiacomo is the Chief Strategy Officer at SPHERE. With extensive experience in identity security, privileged access management, and identity governance, his role involves strategizing and guiding enterprises toward robust cybersecurity postures. 

He specializes in identity hygiene, leveraging AI-driven technologies to automate and secure identities at scale. His professional career has included leadership roles at prominent financial institutions, such as Barclays, Lehman Brothers, and Neuberger Berman, where he honed his skills in complex, highly regulated environments.

