Announcement / February 2026

Dr. Nada Madkour to Serve as Interim Director of CLTC’s AI Security Initiative (AISI)

Nada Madkour

The UC Berkeley Center for Long-Term Cybersecurity (CLTC) is pleased to announce that Nada Madkour, Ph.D., will serve as Interim Director for its AI Security Initiative (AISI), a premier academic program dedicated to shaping standards and guardrails to prevent the harmful impacts of AI technologies.

Dr. Madkour has been a Non-Resident Research Fellow at AISI since early 2024, in parallel with her appointment as a Senior AI Standards Development Researcher at the Berkeley Existential Risk Initiative (BERI). Her expertise spans AI risk management for agentic AI systems and transparency for general-purpose AI, and her work translates technical safety research into practical, implementable governance frameworks for developers and policymakers.

Dr. Madkour has co-authored several key reports published by AISI.

She serves on a range of critical AI governance bodies, including the NIST AI Consortium, ISO AI standards committees, the World Economic Forum Safe Systems and Technologies working group, and the working groups for the EU AI Act Code of Practice on marking and labelling of AI-generated content. She holds a Ph.D. in Technology (Information Assurance, AI Risk Assessment) from Eastern Michigan University.

“As the risks of AI are growing in scale, scope, and urgency, the work of the AI Security Initiative is more important than ever,” says Ann Cleaveland, Executive Director of CLTC. “With her deep expertise in AI standards, AI risk management, and cybersecurity, and her active involvement in global efforts to establish AI safety standards, Nada is well-positioned to lead AISI at this critical moment.”

“I am grateful to step into this role at a moment when the need for AI safety and security has never been greater, and to continue working with the talented AISI/CLTC team to ensure our research drives actionable recommendations for standards, practices, and policy,” Madkour says. 

CLTC is grateful for the service and vision of Jessica Newman, who launched the AI Security Initiative as its founding director in 2019. Madkour will assume the position previously held by Newman, building on the Initiative’s groundbreaking work.

On February 11, Dr. Madkour will moderate a panel discussion as part of a webinar launch event for the “AISI Agentic AI Risk Management Standards Profile,” a paper she co-authored with Jessica Newman, Deepika Raman, Krystal Jackson, Evan R. Murphy, and Charlotte Yuan. Register to attend.

About the AI Security Initiative (AISI)

Dr. Nada Madkour (far right) poses in front of the UC Berkeley Campanile with AI Security Initiative team members Deepika Raman, Tony Barrett, Evan R. Murphy, and Jessica Newman (AISI Director from 2019-2026). Photo taken in 2024.

The AI Security Initiative is a leading center for the research and development of AI risk management standards, helping developers and policymakers stay a step ahead of emerging threats by conducting actionable research on AI risk analysis and measurement, developing technical guidance for safety thresholds, and serving as a neutral convening platform for multidisciplinary experts.

This work directly shapes policy and standards at the highest levels. AISI’s flagship publication, the GPAI Profile, builds on the NIST AI Risk Management Framework (RMF) and is the only non-government resource listed on the NIST AI RMF page. AISI’s work also drives global AI deliberations; its researchers have provided their expertise to groups such as the EU GPAI Code of Practice working groups and the OECD Expert Group.

Since its founding in 2019, AISI has operated as a multidisciplinary research group, working in partnership with world-renowned researchers and with AI and tech policy leaders and centers at Berkeley and beyond.

In addition to helping shape international and federal policy recommendations, AISI currently works with both California and Washington state leaders on the development of effective AI governance. 

Through its affiliated AI Policy Hub, the program has successfully trained and deployed 18 graduate fellows as ‘policy accelerators,’ embedding sociotechnical expertise into high-leverage roles across government, industry, and civil society.


February 11, 11am: Webinar on Agentic AI Risk

Register for the upcoming webinar launch event for the “AISI Agentic AI Risk Management Standards Profile,” a new paper co-authored by Nada Madkour, Jessica Newman, Deepika Raman, Krystal Jackson, Evan R. Murphy, and Charlotte Yuan.
