Explainer Videos

Consistent with CLTC’s mission to help expand who participates in cybersecurity, we produce occasional videos that make key cyber topics more accessible and easier to understand.

In each, we endeavor to answer three simple questions (What? So What? Now What?) so viewers can quickly grasp the topic at a high level, why it matters, and what can or should be done about it.

Enjoy the videos below, and feel free to suggest new topics by contacting us at cltc (at) berkeley (dot) edu.


What is a Cybersecurity Clinic?

This video provides an overview of cybersecurity clinics, university-based programs that train students to help public interest organizations build the capabilities they need to defend themselves online. The video was produced by the University of California, Berkeley’s Center for Long-Term Cybersecurity on behalf of the Consortium of Cybersecurity Clinics, an international network of university-based cybersecurity clinics and allies working to advance cybersecurity education for public good and grow the number of cybersecurity clinics around the nation and the world. Learn more at https://cybersecurityclinics.org/.


Public Interest Cybersecurity

This video provides an introduction to “public interest cybersecurity,” a growing field focused on improving the digital defenses of non-profits, hospitals, local governments, and other organizations working for the public good.

Zero Trust

The fourth video in the series focuses on “Zero Trust,” an approach to digital security that is quickly becoming an industry standard because it is well-suited for the era of cloud computing, when users, devices, and servers are not in the same location. “Zero Trust shifts the focus of threat detection from a location-centric model, based on the network perimeter, toward validating the identity and need for access of individual devices and users, regardless of their location,” the video explains.

In Zero Trust, devices must constantly prove their trustworthiness to the rest of the organization. Networks are divided into segments, and each segment is like a safe, with its own special security restrictions, allowing organizations to isolate their most important data and applications. This model helps ensure that even if someone has access to one piece of private information, like a password, they can’t do damage to the whole network.
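For readers who want a more concrete picture, the sketch below illustrates how a Zero Trust access decision might work in practice. It is not from the video, and the function names, policies, and data are hypothetical placeholders; the point is only that identity, device health, and per-segment policy are re-checked on every request, with no implicit trust granted based on network location.

```python
# Minimal illustrative sketch of a Zero Trust access decision.
# All helpers and policies below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_id: str
    segment: str   # e.g., "finance-db" or "hr-apps"
    action: str    # e.g., "read" or "write"

def verify_identity(user_id: str) -> bool:
    # Placeholder: in practice, check credentials plus MFA with an identity provider.
    return user_id in {"alice", "bob"}

def device_is_healthy(device_id: str) -> bool:
    # Placeholder: in practice, check enrollment, patch level, disk encryption, etc.
    return device_id.startswith("managed-")

def segment_allows(user_id: str, segment: str, action: str) -> bool:
    # Placeholder: each segment carries its own least-privilege policy,
    # so access to one segment says nothing about the others.
    policy = {("alice", "finance-db", "read")}
    return (user_id, segment, action) in policy

def authorize(req: AccessRequest) -> bool:
    # No implicit trust: every request is evaluated on its own merits,
    # regardless of where on the network it originates.
    return (
        verify_identity(req.user_id)
        and device_is_healthy(req.device_id)
        and segment_allows(req.user_id, req.segment, req.action)
    )

print(authorize(AccessRequest("alice", "managed-laptop-01", "finance-db", "read")))  # True
print(authorize(AccessRequest("alice", "byod-phone-07", "finance-db", "read")))      # False: unverified device
```

Because each check is independent, a stolen password alone is not enough to reach a sensitive segment, which is the “limited blast radius” the video describes.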

As the video explains, Zero Trust is an operating philosophy, not a one-size-fits-all solution. It is important for organizations that implement Zero Trust to be mindful of privacy and explain to users that the name of this approach does not mean they are not trusted, but rather that no device on the network is intrinsically trusted.

Read more


Deepfakes

The third installment in CLTC’s “What? So What? Now What?” series focuses on deepfakes and misinformation, featuring perspectives from Dr. Hany Farid, Associate Dean of the UC Berkeley School of Information and a Senior Faculty Advisor for the Center for Long-Term Cybersecurity.

Produced as part of the “What? So What? Now What?” explainer video series, this short video provides an overview of what deepfakes are, why they matter, and what can be done to mitigate potential risks associated with fake content.

“Deepfake is a general term that encompasses synthesized content,” Professor Farid explains. “That content can be text, it can be images, it can be audio, or it could be video. And it is synthesized by an AI or machine learning algorithm to, for example, create an article by a computer, just given a headline. Create an image of a person who doesn’t exist. Synthesize audio of another person’s speech. Or make somebody say and do something in a video that they never said.”

As Farid notes, deepfakes are potentially dangerous in part because they make way for the so-called “liar’s dividend.” In a world in which everything can be faked, nothing has to be accepted as real anymore, giving plausible deniability to anything caught on video.

“What happens when we enter a world where we can’t believe anything?” Farid says. “Anything can be faked. The news story, the image, the audio, the video. In that world, nothing has to be real. Everybody has plausible deniability. This is a new type of security problem, which is sort of information security. How do we trust the information that we are seeing, reading, and listening to on a daily basis?”

Read more


Differential Privacy

The Center for Long-Term Cybersecurity has produced an animated “explainer” video about differential privacy, a promising new approach to privacy-preserving data analysis that allows researchers to unearth the patterns within a data set — and derive observations about a population as a whole — while obscuring the information about each individual’s records.

As explained in more detail in a post on the CLTC Bulletin — and on Brookings TechStream — differential privacy works by adding a pre-determined amount of randomness, or “noise,” into a computation performed on a data set. The amount of privacy loss associated with the release of data from a data set is defined mathematically by a Greek symbol ε, or epsilon: The lower the value of epsilon, the more each individual’s privacy is protected. The higher the epsilon, the more accurate the data analysis — but the less privacy is preserved.
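A simple way to see the role of epsilon is the Laplace mechanism for a counting query, sketched below. This is an illustrative example rather than any particular deployment: the numbers are made up, and a counting query is used because adding or removing one person’s record changes the count by at most one, giving a sensitivity of 1.

```python
# Minimal sketch of the Laplace mechanism for a counting query.
# Values are illustrative only.

import numpy as np

def noisy_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # Noise is drawn from a Laplace distribution with scale sensitivity / epsilon:
    # a smaller epsilon means more noise and stronger privacy, at the cost of accuracy.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

true_count = 1234  # e.g., number of respondents with some attribute

print(noisy_count(true_count, epsilon=0.1))  # very noisy: strong privacy, low accuracy
print(noisy_count(true_count, epsilon=5.0))  # close to the truth: weaker privacy, high accuracy
```

The released value still supports population-level conclusions, while the added randomness masks whether any single individual’s record was in the data set.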

Differential privacy has already gained widespread adoption by governments, firms, and researchers. It is already being used for “disclosure avoidance” by the U.S. Census Bureau, for example, and Apple uses differential privacy to analyze user data ranging from emoji suggestions to Safari crashes. Google has even released an open-source version of a differential privacy library used in many of the company’s core products.

Read more


Adversarial Machine Learning

CLTC has launched a new series of “explainer videos” to break down complex cybersecurity-related topics for a lay audience. The first of these videos focuses on “adversarial machine learning,” in which AI systems are deceived (by attackers, or “adversaries”) into making incorrect assessments. An adversarial attack might entail presenting a machine-learning model with inaccurate or misrepresentative data as it is training, or introducing maliciously designed data to deceive an already trained model into making errors.
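To make the second kind of attack concrete, the sketch below shows a fast-gradient-sign-style perturbation against a toy logistic-regression “model.” It is not drawn from the video or the overview article: the weights and inputs are invented for illustration, and real attacks of this kind typically target trained neural networks such as image classifiers.

```python
# Minimal sketch of an evasion-style adversarial example against a toy
# logistic-regression model, using only numpy. All values are illustrative.

import numpy as np

w = np.array([0.8, -1.2, 0.5])   # fixed "trained" weights
b = 0.1

def predict(x: np.ndarray) -> float:
    # Probability that x belongs to class 1.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([1.0, 0.5, -0.3])   # a benign input, correctly classified as class 1
y = 1.0                          # its true label

# Fast Gradient Sign Method: nudge each feature slightly in the direction
# that most increases the model's loss, bounded by a small budget epsilon.
grad_wrt_x = (predict(x) - y) * w       # gradient of the cross-entropy loss w.r.t. x
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_wrt_x)

print(predict(x))      # ~0.54 -> class 1
print(predict(x_adv))  # ~0.35 -> now misclassified as class 0
```

A small, targeted change to the input flips the model’s decision, even though the perturbed input would look nearly unchanged to a human reviewer.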

“Machine learning has great power and promise to make our lives better in a lot of ways, but it introduces a new risk that wasn’t previously present, and we don’t have a handle on that,” says David Wagner, Professor of Computer Science at the University of California, Berkeley.

CLTC has written a brief overview of adversarial machine learning for policymakers, business leaders, and other stakeholders who may be involved in the development of machine learning systems, but who may not be aware of the potential for these systems to be manipulated or corrupted. The article also includes a list of additional resources.

Read more