Event Recap / November 2020

AI Race(s) to the Bottom? A Panel Discussion

https://youtu.be/2HPeCN32mR4

Countries and corporations around the world are vying for leadership in AI development and use, prompting widespread discussions of an “AI arms race” or “race to the bottom” in AI safety. But the competitive development of AI will take place across multiple industries and among very different sets of actors, some of which appear to be bucking the conventional wisdom by slowly and steadily “racing to the top.”

How might competitive dynamics differ from one domain to the next, and what are the likely social consequences in terms of safety, security, surveillance, and other governance issues? These questions were at the heart of a recent online discussion presented by CLTC and the UC Berkeley AI Security Initiative (AISI), which convened a group of AI policy experts to explore when and where AI races to the bottom might be more or less harmful — and the surprising ways that specific industries are approaching AI development more cautiously and cooperatively.

The AI Security Initiative is a growing hub for interdisciplinary research on the global security impacts of artificial intelligence, explained Ann Cleaveland, Executive Director of CLTC. “We work across the technical, institutional, and policy domains to support trustworthy development of AI systems today and into the future,” Cleaveland said. “We’re delighted to have three experts… to discuss each of their work on the competitive dynamics of AI development, and how that impacts issues like safety, security, surveillance, and other aspects of AI governance.”

Kicking off the presentations was Will Hunt, a Research Analyst at Georgetown’s Center for Security and Emerging Technology (CSET) and a PhD Candidate in Political Science at UC Berkeley. Hunt presented findings from his report, The Flight to Safety-Critical AI: Lessons in AI Safety from the Aviation Industry, which he authored as a graduate researcher at the AISI. Based largely on interviews with experts — including regulators, academics, safety engineers, and AI developers at startups serving the aviation industry — Hunt’s report analyzed the aviation industry as a case study for understanding AI “races” broadly.

“There are really good reasons to be worried about a race to the bottom,” Hunt said. “With traditional software, you can test the entire range of possible outputs in advance of deployment, but that’s very difficult to do with AI systems, especially deep learning systems. There’s also the fact that the costs of failing to bring our capabilities to market quickly are really high. The goal of my research was to work toward a theory of when and why AI races to the bottom might be more severe in some domains than others, in terms of safety.”

Hunt’s research revealed that the aviation industry is in fact not seeing a race to the bottom, but rather has seen AI development characterized by high levels of caution. “To my surprise, almost everyone was unanimous in this view that the aviation industry is moving extremely slowly and cautiously in adopting AI, at least in safety-critical contexts,” Hunt said. “It’s not the case that aviation firms don’t care about beating the competition or moving quickly. The difference is that, in the case of aviation, when you push too hard or compromise too much on safety, you take a very serious hit to your bottom line.”

In her talk, Elsa B. Kania, Adjunct Senior Fellow with the Technology and National Security Program at the Center for a New American Security, explored AI innovation in the context of global militaries. A PhD candidate in Harvard University’s Department of Government, Kania specifically centers her research on Chinese military strategy, defense innovation, and emerging technologies.

“I don’t think ‘race’ is the best way to frame the dynamics we’re seeing play out globally,” Kania said. “In artificial intelligence today, certainly there is intense rivalry, including in the domain of military affairs. And certainly, we’ve seen the US and Chinese governments similarly emphasize the importance of leadership in artificial intelligence and aim to establish prominent positions in this field. But at the same time, clearly, the reality is also one of a field of research that is very open and collaborative, as evidenced by the sizable amount of co-authorship that has continued, despite some of the frictions geopolitically.”

Acknowledging that AI is a “double-edged sword,” a general-purpose technology with myriad associated risks and benefits, Kania said that AI competition between the US and China is more of a classic security dilemma than an arms race. “The metaphor of an arms race in many respects is inappropriate, given that we’re not talking about a single weapon system, but rather something that will have near-term and long-term implications across a range of military applications, including supporting functionalities, such as logistics and general management, as well as intelligence.”

“When we are talking about military organizations and the culture and practices of professional militaries, there are some reasons to be encouraged that there is a tradition of testing weapon systems extensively prior to actual deployment,” Kania said. “I don’t think we are in an AI arms race, and I don’t think a ‘race to the bottom’ is inevitable, either in general or in the military domain in particular. I hope that there are ways to create some pragmatic parameters and more positive incentives for this rivalry in ways that could lead to dynamics more akin to a race to the top, where there is a premium placed upon security, reliability, and resilience.”

The third talk was presented by Tim Hwang, a Research Fellow at Georgetown’s Center for Security and Emerging Technology (CSET) and former Director of the Harvard-MIT Ethics and Governance of AI Initiative, a philanthropic project working to ensure that machine learning and autonomous technologies are researched, developed, and deployed in the public interest.

Like Kania, Hwang was skeptical that AI competition should be characterized as a “race,” given the unique nature of the field. “The fundamental problem I have with the race frame is that it assumes that all of the players have the same object in mind,” Hwang said. “When you are racing toward something, there’s ideally some objective that you’re all chasing after. The more you dig into it, the more it becomes clear that it’s frequently not about AI. A lot of these so-called races in fact refer to all sorts of competition happening at many different levels.”

Hwang proposed instead thinking about AI in terms of what he called “utilization capacity.” “It’s really a kind of eating contest,” he said. “We’re asking the question: out of all of the players that are competing over AI, who has the organizational, monetary, cultural, and technical ability to utilize these breakthroughs that are happening in an academic field, and who can do that faster than everybody else?”

Watch the full panel discussion — including Q&A — above or on YouTube.