Cornell Bowers College of Computing and Information Science
A color photo showing an aerial view of NYC with security icons overlaying the image

New projects and SETS educational initiative announced in Google Cyber NYC program

June 24, 2024

By Patricia Waldron

The Google Cyber NYC Institutional Research Program has awarded funding to nine new Cornell projects aimed at improving online privacy, safety, and security.

Additionally, as part of this broader program, Cornell Tech has launched the Security, Trust, and Safety (SETS) Initiative to advance education and research on cybersecurity, privacy, and trust and safety.

Cornell is one of four New York institutions participating in the Google Cyber NYC program, which is designed to provide solutions to cybersecurity issues in society, while also developing New York City as a worldwide hub for cybersecurity research. 

"The threats to our digital safety are big and complex," said Greg Morrisett, the Jack and Rilla Neafsey Dean and Vice Provost of Cornell Tech and principal investigator on the program. "We need pioneering, cross-disciplinary methods, a pipeline of new talent, and novel technologies to safeguard our digital infrastructure now and for the future. This collaboration will yield new directions to ensure the development of safer, more trustworthy systems."

The nine newly selected research projects from Cornell are:

  • Protecting Embeddings, Vitaly Shmatikov, professor of computer science at Cornell Tech. 

Embeddings are numerical representations of inputs, such as words and images, fed into modern machine learning (ML) models. They are a fundamental building block of generative ML and knowledge retrieval systems, such as vector databases. Shmatikov aims to study security and privacy issues in embeddings, including their vulnerability to malicious inputs and unintended leakage of sensitive information, and to develop new solutions to protect embeddings from attacks.
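To make the idea concrete, here is a minimal, self-contained sketch of embeddings as vectors of numbers, using tiny hand-written values rather than a real model; the words, dimensions, and similarity scores are illustrative assumptions, not part of Shmatikov's project.

    # Toy illustration of embeddings: inputs mapped to numeric vectors.
    # Values are hand-written for illustration, not produced by a real model.
    import numpy as np

    embeddings = {
        "password": np.array([0.9, 0.1, 0.3, 0.7]),
        "passcode": np.array([0.8, 0.2, 0.4, 0.6]),
        "banana":   np.array([0.1, 0.9, 0.8, 0.2]),
    }

    def cosine_similarity(a, b):
        """Similarity of two embedding vectors (1.0 = same direction)."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Related inputs land near each other; unrelated inputs do not.
    print(cosine_similarity(embeddings["password"], embeddings["passcode"]))  # high
    print(cosine_similarity(embeddings["password"], embeddings["banana"]))    # low

Because related inputs map to nearby vectors, anyone who can observe or query a system's embeddings may be able to infer properties of the original, possibly sensitive, inputs; that is one reason leakage from embeddings is a concern.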

  • Improving Account Security for At-Risk Users (renewal), Thomas Ristenpart, professor of computer science at Cornell Tech, with co-PI Nicola Dell, associate professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech. 

Online services often employ account security interfaces (ASIs) to communicate security information to users, such as recent logins and connected devices. ASIs can be useful for survivors of intimate partner violence, journalists, and others whose accounts are more likely to be attacked, but bad actors can spoof devices on many ASIs. Through this project, the researchers will build new cryptographic protocols for identifying devices securely and privately, preventing spoofing attacks on ASIs, and will investigate how to make ASIs more effective through improved user interfaces.

  • From Blind Faith to Cryptographic Certification in ML, Michael P. Kim, assistant professor of computer science. 

Generative language models, like ChatGPT and Gemini, demonstrate great promise, but also pose new risks to users by producing misinformation and abusive content. In existing AI frameworks, individuals must blindly trust that platforms implement their models responsibly to address such risks. Kim proposes to borrow tools from cryptography to build a new framework for trust in modern prediction systems. He will explore techniques to enable platforms to earn users' trust by proving that their models mitigate serious risks.

  • Making Hardware Comprehensively Secure Against Spectre — by Construction (renewal), Andrew Myers, professor of computer science. 

In this renewed project, Myers will continue his work to design secure and efficient hardware systems that are safe from Spectre and other "timing attacks." This type of attack can steal sensitive information, such as passwords, from hardware by analyzing the time required to perform computations. Myers is developing new hardware description languages (programming languages that describe the behavior or structure of digital circuits) designed to prevent such timing attacks.
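As a rough software analogy of the timing side channel described above (not the hardware-level, Spectre-focused defenses Myers is building), the sketch below contrasts a password check that exits at the first mismatched byte, whose running time reveals how much of a guess is correct, with a constant-time check; the secret value and function names are hypothetical.

    # Software analogy of a timing side channel (illustrative only).
    import hmac

    SECRET = b"hunter2"  # hypothetical secret

    def naive_check(guess: bytes) -> bool:
        # Early exit: runtime grows with the number of matching prefix bytes,
        # so an attacker who measures timing can recover the secret byte by byte.
        if len(guess) != len(SECRET):
            return False
        for g, s in zip(guess, SECRET):
            if g != s:
                return False
        return True

    def constant_time_check(guess: bytes) -> bool:
        # hmac.compare_digest takes time independent of where the bytes differ.
        return hmac.compare_digest(guess, SECRET)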

  • Safe and Trustworthy AI in Home Health Care Work, Nicola Dell, with co-PIs Deborah Estrin, professor of computer science at Cornell Tech; Madeline Sterling, associate professor of medicine at Weill Cornell Medicine; and Ariel Avgar, the David M. Cohen Professor of Labor Relations at the ILR School.

This team will investigate the trust, safety, and privacy challenges related to implementing artificial intelligence (AI) in home health care. AI has the potential to automate many aspects of home health services, such as patient–care worker matching, shift scheduling, and tracking of care worker performance, but the technology carries risks for both patients and care workers. Researchers will identify areas where the use of AI may require new oversight or regulation, and explore how AI systems can be designed, implemented, and regulated to ensure they are safe, trustworthy, and privacy-preserving for patients, care workers, and other stakeholders.

  • AI for Online Safety of Disabled People, Aditya Vashistha, assistant professor of information science.

Vashistha will evaluate how AI technologies can be leveraged to protect people with disabilities from ableist hate online. In particular, he will analyze the effectiveness of platform-mediated moderation, which primarily uses toxicity classifiers and language models to filter out hate speech.

  • DEFNET: Defending Networks With Reinforcement Learning, Nate Foster, professor of computer science, with co-PI Wen Sun, assistant professor of computer science. 

Traditionally, security has been seen as a cat-and-mouse game, where attackers exploit vulnerabilities in computer networks and defenders respond by shoring up weaknesses. Instead, Foster and Sun propose new, automated approaches that will use reinforcement learning – an ML technique in which an agent learns, through trial and error and reward feedback, which decisions lead to the best outcomes – to continuously defend the network. They will focus their work at the network level, training and deploying defensive agents that can monitor network events and configure devices such as routers and firewalls to protect data and prevent disruptions in essential services.
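As an illustration of the reinforcement learning loop described above – a toy sketch under simplified assumptions, not the DEFNET system itself – the snippet below trains a tabular Q-learning agent on a two-state, two-action "network defense" environment with a hand-written reward signal.

    # Toy Q-learning sketch: a defensive agent learns to block attack traffic
    # and allow benign traffic. States, actions, and rewards are invented.
    import random

    states = ["benign", "attack"]
    actions = ["allow", "block"]
    Q = {(s, a): 0.0 for s in states for a in actions}
    alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

    def reward(state, action):
        # Blocking attacks and allowing benign traffic is rewarded; mistakes are penalized.
        if state == "attack":
            return 1.0 if action == "block" else -1.0
        return 1.0 if action == "allow" else -0.5

    for _ in range(5000):
        s = random.choice(states)                      # observe a network event
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda x: Q[(s, x)]))
        r = reward(s, a)
        s_next = random.choice(states)
        best_next = max(Q[(s_next, x)] for x in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    # The learned policy should block attacks and allow benign traffic.
    print({s: max(actions, key=lambda a: Q[(s, a)]) for s in states})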

  • The Institutional Context of Generative AI Media Authentication and Provenance Systems, Gili Vidan, assistant professor of information science.

As AI-generated digital media proliferates online, we need new authentication tools, such as watermarking and content provenance systems, to address rising mistrust of digital media. In this work, Vidan proposes to examine how different institutions, including news organizations, online communities, and Trust & Safety teams at large social media platforms, create, tag, and trace AI-generated media. Ultimately, the findings may lead to more nuanced and practical responses to AI-generated content and provide a deeper understanding of the social and human aspects of developing trustworthy AI.

  • Flexible Authenticated Encryption with Associated Data, Ristenpart.      

Authenticated encryption with associated data (AEAD) schemes, which are message encryption schemes that ensure both data confidentiality and authenticity, are widely used but carry multiple disadvantages and security vulnerabilities. In this work, Ristenpart proposes to design new, formally analyzed schemes that fix these deficiencies, yielding more secure, next-generation AEAD schemes for a variety of uses.
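For readers unfamiliar with AEAD, the snippet below shows one existing, widely deployed AEAD scheme (AES-GCM) through the third-party Python cryptography package. It illustrates what any AEAD scheme provides – confidentiality for the message plus integrity for both the message and its unencrypted associated data – and is not one of the new schemes Ristenpart is proposing.

    # Example use of an existing AEAD scheme (AES-GCM).
    # Requires the third-party "cryptography" package.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)  # must never repeat for the same key

    plaintext = b"wire $100 to account 42"
    associated_data = b"header: payment-request"  # authenticated, not encrypted

    ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)

    # Decryption verifies integrity: tampering with the ciphertext or the
    # associated data makes this call raise an authentication error.
    assert aesgcm.decrypt(nonce, ciphertext, associated_data) == plaintext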

Under director Alexios Mantzarlis, formerly a principal at Google’s Trust and Safety Intelligence team, the newly formed SETS Initiative at Cornell Tech will focus on threats ranging from ransomware and phishing of government officials to breaches of personal information and digital harassment. 

"There are new vectors of abuse every day," said Mantzarlis. He emphasizes that the same vulnerabilities exploited by state actors that threaten national security can also be used by small-time scammers. “If a system is unsafe and your data is leaky, that same system will be a locus of harassment for users.” 

Additionally, SETS will serve as a physical and virtual hub for academia, government, and industry to tackle emerging online threats.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.