The Center for Long-Term Cybersecurity is thrilled to welcome a new cohort of researchers for Summer 2023. This multi-disciplinary group of researchers brings additional capacity to CLTC’s original and future-focused cybersecurity research, advancing our mission to help decision-makers act on foresight and expand who gets to participate in — and has access to — cybersecurity. Please help us welcome this talented cohort to CLTC! Read on to learn more about these researchers and their respective projects.
- First, the question of unknown-unknown cyber operations and the problem of missing data in cyber research. My aim is to explore how the research and policy communities can effectively tackle these challenges.
- Second, the changing role of intelligence agencies in offensive cyber operations. This project seeks to uncover the shifting landscape of intelligence agencies and their involvement in cyber operations.
- Third, my book project on governmental decision-making during cyber conflict. I am investigating the circumstances under which governments opt for public disclosure of cyberattacks versus maintaining secrecy.
Master of Information and Cybersecurity, School of Information
Research Area: Board governance of cybersecurity
Bruce Duong is a graduate student in the Master of Information and Cybersecurity program at the UC Berkeley School of Information, where he has assisted Laura Georg Schaffner, CLTC Visiting Scholar and Associate Professor in Security Governance and Digitalization at the University of Strasbourg, on a project on cybersecurity governance and the development of quantifiable metrics for boards of directors.
In his work, Bruce combines knowledge of business management, data analysis, and cybersecurity to assess the nuances of board decisions surrounding cybersecurity metrics and the inherent challenges of identifying and quantifying useful cybersecurity metrics. Initially, Bruce conducted an in-depth literature review, surveying the current landscape of cybersecurity disclosure patterns and costs. More recently, he used data science techniques to perform textual analyses of cybersecurity disclosure documents for complexity, sentiment, and tone.
J.D. Candidate, UC Berkeley School of Law
Research Area: Cybersecurity textbook project
Gaurav Lalsinghani is currently pursuing their J.D. at the UC Berkeley School of Law. They worked with Professor Chris Hoofnagle, CLTC Faculty Director and UC Berkeley Professor of Law in Residence, on the development of an introductory textbook for the field of cybersecurity, Cybersecurity in Context (forthcoming). Gaurav helped to develop exercises and grading metrics, identify regulatory trends and synthesize them into high-level principles, and interview practitioners to ensure the text covers essential topics.
Gaurav explored philosophical and open-ended questions on a wide array of crucial topics, including consumer protection law, the role of the private sector, cyber insurance, and the ever-evolving landscape of cyberwarfare. Their work primarily focused on brainstorming innovative solutions and researching anticipated legal challenges across these topics. By working through the textbook’s practical cybersecurity exercises, Gaurav deepened their understanding of the intersection of law and technology and the ongoing discourse within the field.
Master of Information Management and Systems, School of Information
Research Area: Algorithmic fairness and opacity
Nyah is a graduate student in the Master of Information Management and Systems (MIMS) program at the UC Berkeley School of Information. During her undergraduate studies, Nyah conducted research on digital minstrelsy and blackface on social media platforms such as Twitter and TikTok, looking specifically at the history of racial bias encoded into our web-based systems and the potential for users to dismantle that bias.
She is currently working with CLTC’s AI Security Initiative and the Algorithmic Fairness and Opacity Group (AFOG), coordinating new events and speakers for the academic year ahead. Nyah helped organize a panel with CLTC on responsible AI licensing, connected prospective students with AFOG to organize presentations, and planned a future speaker series centered on generative AI technology, race, and power.
As a personal project, Nyah also penned an op-ed (forthcoming) connecting the potential dangers of new breakthroughs in generative AI to the legacy of bell hooks’ “Eating the Other.”
Master of Information and Cybersecurity, School of Information
Research Area: Comparative analysis of the interdisciplinary cybersecurity education landscape
Sahar Rabiei is a graduate student in the Master of Information and Cybersecurity program at the UC Berkeley School of Information, where her interests lie at the nexus of cybersecurity and policy. In an ever-changing landscape, where threats continually evolve and complexities increase, an interdisciplinary approach is imperative to equip cybersecurity professionals with the skills to navigate all facets of the field, including legal, technical, ethical, and national security challenges.
This drove Sahar’s collaboration with Lisa Ho, MICS Academic Director, on the Comparative Study of Interdisciplinary Cybersecurity Education (forthcoming). This comprehensive analysis of the interdisciplinary cybersecurity education landscape offers guidance to educational institutions interested in creating new multidisciplinary cybersecurity programs or improving existing ones. Sahar has found it an honor to contribute to enhancing cybersecurity education to meet needs across industries and around the globe.
Muhammad Rusyadi Ramli
Visiting Student Researcher
Research Area: Development of trustworthy AI systems
Muhammad Rusyadi Ramli is a Visiting Student Researcher and a Ph.D. student in Engineering Design at the KTH Royal Institute of Technology. His research interests fall at the intersection of systems engineering, engineering design, and engineering management. Currently, Muhammad’s research investigates the role of engineering artifacts in bridging knowledge gaps and promoting compromise among different communities of practice (e.g., cybersecurity engineers, safety engineers, and software engineers) in developing trustworthy systems. His research draws on theories including communities of practice, boundary objects, and knowledge boundaries.
While at CLTC, Muhammad will focus on how a framework such as the NIST AI Risk Management Framework can help engineers develop trustworthy systems. Specifically, he is examining contextual factors, such as cultural and organizational influences, and how these factors can affect the framework’s effectiveness. To achieve the goal of his study, Muhammad will conduct empirical research by interviewing cybersecurity, safety, and ML engineers.