News / March 2026

Researchers Submit Response to U.S. Government Request on Security Considerations for AI Agents

On March 9, a team of researchers from CLTC’s AI Security Initiative (AISI) submitted comments to the Center for AI Standards and Innovation (CAISI) — an agency housed within the National Institute of Standards and Technology (NIST) at the Department of Commerce — in response to a request for information (RFI) regarding security considerations for AI agents.

AI agents, or “agentic AI,” are artificial intelligence-based systems that can autonomously pursue goals and take actions with little to no human oversight, often through interaction with external environments and tools. CAISI’s “Request for Information Regarding Security Considerations for Artificial Intelligence Agents” called for “information and insights from stakeholders on practices and methodologies for measuring and improving the secure development and deployment of artificial intelligence (AI) agent systems.”

Learn more about the Agentic AI Risk-Management Standards Profile

The team of AISI researchers — including Nada Madkour, Interim Director of the AISI; Deepika Raman, Non-Resident Research Fellow; Krystal Jackson, Non-Resident Research Fellow; and Charlotte Yuan, Graduate Student Researcher — drew largely on recommendations from the recently published Agentic AI Risk-Management Standards Profile, which provides an overview of practices and controls for identifying, analyzing, and mitigating risks specific to agentic AI. 

In their response, the AISI team highlighted several key recommendations from the report:

  • Scale Governance Mechanisms with Degrees of Autonomy: Given the different configurations of agentic AI systems, governance mechanisms should scale with degrees of agency, rather than treating autonomy as a binary attribute. Agentic AI ranges from narrowly scoped, single-agent systems to highly autonomous, multi-agent architectures operating in complex environments, requiring risk controls that are proportionate to these characteristics.
  • Support Human Control and Accountability: It is important to develop effective human-agentic AI management hierarchies that preserve human authority while leveraging AI as a supportive tool, and to establish hierarchical oversight and escalation pathways that provide a clear, tiered system of oversight, directing human attention where it is most needed. Real-time monitoring systems should be equipped with emergency automated shutdowns triggered by certain activities…. In addition to automatic emergency shutdown, manual shutdown methods should be available as a last-resort control measure.
  • Implement Continuous Monitoring and Post-Deployment Oversight: Recognizing that agentic behavior may evolve over time and across contexts, it is necessary to develop and implement continuous monitoring and rapid-response infrastructures that accommodate the speed of progress and help prepare for potential emerging risks and misuse. This includes investment in continuous monitoring mechanisms to track and trace agent behavior in complex deployment environments… and investment in rapid-response infrastructure that can help disable agents or limit their authority when significant evidence of unforeseen or emerging risks is observed.
  • Employ Defense-in-Depth and Containment: Given the many unknown and/or emergent risks from agentic systems and the lack of robust evaluation regimes, security measures must involve layered technical, organizational, and societal safeguards across agentic AI development and deployment stages to ensure redundancy against failures. Treating sufficiently capable agents as untrusted entities, given the limitations of current evaluation techniques, can help mitigate risks from accidents, malfunctions, and malicious use.
  • Implement System-Level Risk Assessment: Move beyond model-centric approaches to evaluate risks across the agentic AI ecosystem, considering autonomy, authority, tool access, operating environment, and multi-agent interactions in order to address emergent threats like cascading failures or tool misuse.

The researchers also provided answers to the specific questions posed on security considerations for AI agents, highlighting a range of concerns related to privacy and security, hallucinations (including cascading misinformation as agents interact with each other), and malicious use (such as the use of agents to develop biological, chemical, or cyber attacks). They highlighted the importance of cross-sector collaboration in addressing challenges related to agentic AI.

“Government collaboration with academia, industry, civil society, and the AI ecosystem is most urgent in standardization, incident reporting, talent pipelines, and adaptive governance to secure agentic AI systems,” they wrote. “For example: an anonymized agent incident database can combine the knowledge from industry reports with the research and analysis of academia and the oversight of civil society, enabling shared threat intelligence that aids in identifying elements such as tool misuse patterns or scaffold exploits. An approach like this would help prevent siloed learning and is modeled after CISA’s cybersecurity reports.”

Read the “Response to the Request for Information Regarding Security Considerations for Artificial Intelligence Agents”