News / January 2020

2020 Cal Cybersecurity Fellowship Awarded to Nathan Malkin, PhD Student


The Center for Long-Term Cybersecurity is pleased to announce that Nathan Malkin, a PhD student in the Department of Electrical Engineering and Computer Science, is the 2020 recipient of the Cal Cybersecurity Fellowship, which was established following a gift from a Cal alum. The award will support Malkin’s research on privacy controls for “always-on” listening devices.

“Intelligent voice assistants and other microphone-equipped Internet of Things devices offer great convenience at the cost of very high privacy risks,” Malkin wrote in an abstract for his CLTC grant proposal. “The goal of our research is to develop privacy controls for devices that listen all the time — beyond a few specific keywords. More specifically, our goal is for users to be able to specify restrictions on these devices — what the devices can hear and what is off limits — and for our system to be able to enforce their preferences. During the first phase of our research, we investigated people’s expectations for these devices and how they varied across different individuals and situations. We also developed potential privacy-preserving approaches. For the next phase, we propose implementing these techniques for enforcing user preferences and evaluating their effectiveness and usability across several different dimensions and criteria.”

Malkin is the second recipient of the Cal Cybersecurity Fellowship, which includes an award of up to $15,000 and is given to students or postdoctoral scholars pursuing cybersecurity research. “Of course, like everyone, my career (in cybersecurity) has had its ups and downs, but overall I’ve had a very good run,” the donor explained. “It’s now time to give back to the youth beginning their careers — our collective future. I could not be more proud than to assist the very talented graduate students at my undergraduate alma mater, UC Berkeley.” (Learn more about the 2019 recipient of the Cal Cybersecurity Fellowship.)

We interviewed Nathan Malkin via email to learn more about his project. (Note that responses have been lightly edited.)

Your research focuses on IoT devices with “always-on” microphones. What’s the core research question you’re planning to address?

In this project, we’re trying to imagine what privacy controls for devices with always-on microphones would look like — and how they would work. With today’s devices — for example, smart speakers like Amazon Echo and Google Home — you essentially have two modes: the device is listening, or it isn’t. What would it be like to have more fine-grained control over what the device is allowed to hear at any given moment? We think this is especially important as these devices develop more “passive listening” capabilities, analyzing conversations and other sounds that aren’t necessarily directed at them.
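To make that idea concrete, here is a minimal sketch in Python of what fine-grained listening control might look like. This is purely illustrative, not the project’s actual system: the names (ListeningRule, ListeningPolicy) are hypothetical, and it assumes some upstream component has already transcribed the audio and labeled each utterance with a topic.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ListeningRule:
    """One user preference: allow or deny utterances about a topic."""
    topic: str     # e.g. "cooking", "health", "finances"
    allowed: bool

class ListeningPolicy:
    """Hypothetical filter sitting between the microphone and apps.

    Assumes an upstream component has already transcribed the audio
    and tagged each utterance with a topic label.
    """
    def __init__(self, rules: List[ListeningRule], default_allow: bool = False):
        self.rules = rules
        self.default_allow = default_allow  # deny-by-default is the safer choice

    def permits(self, topic: str) -> bool:
        """Decide whether an utterance on this topic may reach an app."""
        for rule in self.rules:
            if rule.topic == topic:
                return rule.allowed
        return self.default_allow

# The user allows cooking talk but puts health conversations off limits.
policy = ListeningPolicy([
    ListeningRule("cooking", allowed=True),
    ListeningRule("health", allowed=False),
])
print(policy.permits("cooking"))   # True
print(policy.permits("health"))    # False
print(policy.permits("finances"))  # False: no rule, so denied by default
```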

What are some of the privacy issues related to these devices?

An always-listening device is a window into your home, making it a potent tool for surveillance. Without proper safeguards, hackers, governments, and unscrupulous companies could all listen to your conversations whenever they wanted to. Already, there have been reports of malicious apps phishing for passwords, police subpoenaing voice interaction records, and pretty much every voice assistant failing to disclose that humans listened to people’s interactions with the device.

You wrote in your proposal that, in the first phase of your project, you “investigated people’s expectations for these devices and how they varied across different individuals and situations.” What did you learn from that research?

Given the privacy risks associated with always-listening gadgets, many people are understandably averse to adopting these devices. However, we also found that a significant percentage of people could see themselves getting one, assuming it provided sufficient utility. These results may seem obvious, but they provide concrete evidence that counters two perspectives that might also seem like common sense: that no one would adopt always-listening devices because they’re too creepy, or that everybody would, because “privacy is dead.”

In reality, as with most choices involving privacy, people make nuanced trade-offs involving their desire to control their information, their need to get things done, and the options that are available to them. The upshot is that there’s a market for always-listening devices, but also a clear demand for them to be responsible about privacy. We hope that our research can help explore what the solutions in this space might look like.

You also wrote that you’ve already started to develop potential privacy-preserving approaches. What might these look like?

One direction we’ve been thinking about is the notion of “transparent” always-listening applications. In software, one basic thing a developer can do to engender trust is to make their application open source; others can then review the source code to make sure the program is doing what it’s supposed to. However, modern systems that use natural language processing rely on machine learning. It’s a lot less clear what methodology you’d use to review an ML model and what information you could surface that would be actionable for a user. We’ve been working on a way for users to explore and ask questions about a given model so that they better understand its behavior.
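As one very rough illustration of what “asking questions about a model” could mean, here is a toy Python sketch. It is not the project’s actual tooling — the tiny training set and the probe function are invented for illustration — but it shows the general shape of the idea: a user feeds a text classifier hypothetical phrases and sees how the model would categorize them.

```python
# Toy sketch: probing a text classifier's behavior with hypothetical inputs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny, invented training set: utterances an app claims to care about.
utterances = [
    "set a timer for ten minutes", "add eggs to my shopping list",
    "what's my bank balance", "transfer money to savings",
]
labels = ["kitchen", "kitchen", "finance", "finance"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(utterances, labels)

def probe(model, phrase):
    """Let a user ask: what would the model do with this phrase?"""
    probs = model.predict_proba([phrase])[0]
    return dict(zip(model.classes_, probs.round(2)))

# The user explores how the model treats sensitive-sounding inputs.
print(probe(model, "remind me to pay the mortgage"))
print(probe(model, "set a timer for the pasta"))
```

A real system would need far more than this — explanations a non-expert can act on, not raw probabilities — but interactive probing of this kind is one way a user could build a mental model of what an application listens for.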

As part of your study, you’re going to be using a methodology called “vignette studies.” What does that entail?

One of the challenges is that today’s always-listening systems are relatively rudimentary: their microphone might always be on, but it’s typically only listening for the assistant’s wake-word (e.g., “Hey Siri”). However, we’re interested in developing privacy controls for systems that don’t have this constraint, since we believe the constraint will be gone within a few years. In the meantime, this leaves us studying systems that are largely hypothetical. But we still want to know how users would interact with them.

Vignette studies are one way of dealing with this. In this approach, we present short descriptions of hypothetical situations — meant to simulate real events — and ask participants about their opinions, choices, or judgments in order to understand how they’d behave in real life. While surveys and vignettes can’t fully replace field studies or lab experiments, they are a proven and effective methodology that can teach us a lot about how people think and behave.
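For a flavor of how such a study might be assembled, here is a small hypothetical Python sketch in the style of a factorial survey design. The factors and wording are invented, not taken from the actual study; the point is simply that every combination of contextual factors becomes one short scenario shown to a participant.

```python
# Hypothetical sketch: generating vignette variants for a factorial survey.
from itertools import product

factors = {
    "device": ["smart speaker", "smart TV"],
    "listener": ["a cooking app", "an ad network"],
    "content": ["a recipe question", "a private medical conversation"],
}

template = ("Imagine your {device} passes {content} it overheard "
            "to {listener}. How comfortable would you be? (1-5)")

vignettes = [template.format(device=d, listener=l, content=c)
             for d, l, c in product(*factors.values())]

for v in vignettes[:3]:  # preview a few of the 8 combinations
    print(v)
```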

Is there anything else you’d like to say about this project?

The security field is often reactive, rather than proactive, so I am grateful to be able to work on a problem that — true to the name of the Center for Long-Term Cybersecurity — is still somewhat in the future. I’m hopeful that, by raising these issues early, we’ll be better prepared than if we waited to start thinking about them until passive-listening devices hit the market.