Corporations face difficult ethical, security, operational, and competitive decisions when it comes to artificial intelligence. The security and privacy issues are complex, and the solutions may be even more so.
On October 2, CLTC presented a talk by John deCraen, Associate Managing Director of Cyber Risk at Kroll, who has two decades of experience working with Global Fortune 500 businesses and AmLaw 100 law firms, specializing in digital forensics, incident response, information security risk, and compliance assessment matters.
The talk was presented as part of our Fall 2023 CyberMētis Speaker Series, which connects cybersecurity experts and practitioners with UC Berkeley students to discuss the wide range of practical skills and acquired intelligence required to operate in a constantly changing environment.
deCraen’s talk examined emerging risks related to artificial intelligence. “Many corporations may use artificial intelligence today, and as a consultant, I spend a great deal of time with a wide variety of clients,” he said. “They have some very unique, and in some cases very dangerous, perspectives for how AI can be used in the organization. It’s my job to help illuminate the various risks and threats of artificial intelligence.”
“[CEOs] are randomly using AI without any testing, or building their own models without any model protection.”
Rather than focus on AI as an existential threat, deCraen examined the potential risks of how AI is being implemented within corporations. “This is not a speech about Skynet,” he said, alluding to the AI that sets out to destroy humanity in the “Terminator” movie series. “I don’t believe AI is taking over the world. I think it’s too nascent to even have an opinion about whether AI itself is an existential threat. We’re going to talk about the various things that corporations take on as risks when they adopt various forms of AI.”
deCraen talked about how AI-based tools are becoming increasingly ubiquitous, with many firms developing their own AI-based tools. “There are thousands of new tools on the market today, and that brings a huge number of risks,” he said. “These risks aren’t all security. They aren’t all a threat to human life. But they’re quantifiable in the corporate world. You don’t want to be the organization that three years from now is discovered to have had hiring practices that were inequitable because you relied on something you didn’t vet.”
Many companies are rushing to acquire AI products without fully understanding what they are getting into, deCraen said. “There are not any tool stacks today to help them deal with the risks of artificial intelligence,” he said. “As these companies go in, they’re wowed by the term ‘AI,’ and they’re buying these tools and they don’t understand, where’s the data coming from? What is the model based upon?”
He also noted that many organizational leaders lack a nuanced understanding of the potential risks associated with AI. “People’s sense of AI is that it is either Skynet and the world is falling, or it is the best thing since sliced bread,” he said. “The real danger is that most CEOs are in the latter camp, and they’re randomly using AI without any testing, or building their own models without any model protection.”
“Bad guys 100% are using AI.”
Many security risks emerge from the process of training AI models, deCraen said. “If you go out to the world today, you can go download the tools, build your own large language model, and have it running in about two weeks,” he explained. “What are you bringing into your ecosystem when you do that? Bad guys are out there looking for training to occur. This stuff can be noticed. Seeding can be done. Bad code can be returned. Malware can be injected, and there can be man-in-the-middle attacks. The list is very long. It isn’t just about biases.”
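One of the attacks deCraen lists, tampering with model artifacts in transit, can be partially mitigated with a basic integrity check. The sketch below, in Python, verifies a downloaded weights file against a publisher-supplied SHA-256 digest before loading it; the file name and digest are hypothetical placeholders, not real artifacts, and a real deployment would pair this with signature and provenance checks.

```python
"""Minimal sketch of an integrity check for downloaded model weights.

The file name and digest below are hypothetical placeholders; a real
workflow would obtain the digest from the model publisher over a
trusted channel.
"""

import hashlib
from pathlib import Path

# Hypothetical digest published by the model provider (placeholder value).
EXPECTED_SHA256 = "0" * 64


def verify_weights(path: Path, expected_digest: str) -> bool:
    """Hash the file in 1 MiB chunks and compare against the published digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_digest


if __name__ == "__main__":
    weights = Path("llm-weights.bin")  # hypothetical downloaded artifact
    if verify_weights(weights, EXPECTED_SHA256):
        print("Checksum verified; weights match the published digest.")
    else:
        raise SystemExit("Checksum mismatch: do not load these weights.")
```

A checksum only catches tampering between publisher and consumer; it does nothing against a poisoned or seeded model that was published with a matching digest, which is why broader vetting of training data and model provenance still matters.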
He described his experience working with a CEO who wanted to implement AI tools at his company, but had not sufficiently weighed the potential security risks. “We said, what are you going to do about the security? [The CEO] said, why is it not secure? We had an evolving conversation with this particular CEO about all the things that he didn’t contemplate, and now he’s going the other way. He wants to ban AI completely. This happened within one week.”
deCraen added that cybercriminals are also using AI to develop more sophisticated methods of attack, for example by refining phishing lures through trial and error. “Bad guys 100% are using AI,” he said.
Addressing what companies and others can do to manage and mitigate the risks of AI, deCraen suggested relying on existing tools and frameworks for the responsible use of AI, such as the NIST AI Risk Management Framework and guidance from InfoTech. “Bring the frameworks and the guidance with you,” he said. “Let’s do it right. Let’s be thoughtful as we go through this.”