Cornell Bowers College of Computing and Information Science

Eight scholars awarded Cornell Bowers CIS-LinkedIn Grants

July 17, 2024

By Louis DiPietro

Four faculty members and four doctoral students from the Cornell Ann S. Bowers College of Computing and Information Science are the latest recipients of annual grants from the college’s five-year partnership with LinkedIn.

This year’s award winners – the third cohort from the Cornell Bowers CIS-LinkedIn strategic partnership – will advance research in areas including algorithmic fairness, reinforcement learning, and large language models.

Launched in 2022 with a multimillion-dollar grant from LinkedIn, the Cornell Bowers CIS-LinkedIn strategic partnership provides funding to faculty and doctoral students advancing research in artificial intelligence. Awards to doctoral students include academic-year funding and discretionary funds. The five-year partnership also supports initiatives and student groups that promote diversity, equity, inclusion, and belonging.

Faculty award winners

Sarah Dean, assistant professor of computer science, believes the algorithms that power social network platforms are too short-sighted. The models predict short-term engagement, like clicks, but fail to capture longer-term effects, like a user's growing distaste for clickbait headlines or educational content that no longer serves their skillset. In “User Behavior Models for Anticipating and Optimizing Long-term Impacts,” Dean seeks to develop models that can anticipate long-term user dynamics and algorithms that can optimize long-term impacts.

Michael P. Kim, assistant professor of computer science, will explore fairness in algorithmic predictive models in his project, “Prediction as Intervention: Promoting Fairness when Predictions have Consequences.” Today's predictive algorithms can influence the very outcomes they are meant to predict. For instance, algorithms may help job seekers connect with relevant companies, making it more likely that they will be hired. Kim's project aims to understand the potential for such algorithms to cause harm by overlooking individuals from marginalized groups, but also to promote new opportunities through deliberate predictions.

Jennifer J. Sun, assistant professor of computer science, aims to leverage large language models (LLMs) to process text data from veterinarians at the Cornell University College of Veterinary Medicine. The goal of her project, “Learning and Reasoning Reliably from Unstructured Text,” is to use LLMs to develop a system that synthesizes the text data into actionable insights for improving animal care, such as predicting surgical complications. Sun also aims to develop algorithms that could scale to industry-level applications, such as skills matching and career recommendations.

Daniel Susser will explore misalignments between the ways different actors conceptualize and reason about privacy-enhancing technologies (PETs) – statistical and computational tools designed to help data collectors process and learn from personal information while simultaneously protecting individual privacy. In “Navigating Ethics and Policy Uncertainty with Privacy-Enhancing Technologies,” Susser will develop shared frameworks for data subjects, researchers, companies, and regulators to better reason, deliberate, and communicate about the use of PETs in real-world contexts. 

Doctoral student award winners

Zhaolin Gao, a doctoral student in the field of computer science advised by Wen Sun and Thorsten Joachims, aims to improve methods used in reinforcement learning from human feedback (RLHF), a key technique for training large language models to align with human preferences. Gao’s project is called “Aligning Language Model with Direct Natural Policy Optimization.”

Kowe Kadoma, a doctoral student in the field of information science advised by Mor Naaman, studies how feelings of inclusion and agency impact user trust in artificial intelligence. In her project, “The Effects of Personalized LLMs on Users’ Trust,” Kadoma will expand on existing research that finds LLMs often produce language with limited variety, which may frustrate or alienate users. The goal is to improve LLMs so that they produce more personalized language that matches users’ language style.

Abhishek Vijaya Kumar, a doctoral student in the field of computer science advised by Rachee Singh, will develop systems and algorithms to efficiently share the memory and compute resources of multi-GPU clusters. The goal of the project, called “Responsive Offloaded Tensors for Faster Generative Inference,” is to improve the performance of memory- and compute-bound generative models.

Linda Lu, a doctoral student in the field of computer science advised by Karthik Sridharan, will explore privacy through “machine unlearning,” a paradigm that gives users the ability to have their personal data removed from large language models trained on it. Lu’s project is called “A New Algorithmic Design Principle for Privacy in Machine Learning.”

Louis DiPietro is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.