Cornell Bowers College of Computing and Information Science
A color graphic showing the Cisco Research Outshift logo and the Cornell Bowers CIS logo


Cisco Research, Cornell Bowers CIS announce partnership

September 25, 2023

Cisco Research has funded multiple research awards to the Cornell Ann S. Bowers College of Computing and Information Science to support projects related to cybersecurity, sustainability, edge computing, and artificial intelligence (AI). 

Five faculty projects and one graduate student will receive funding through this partnership. The resulting research will further the college's leadership in AI and point the way toward innovative solutions to challenges surrounding the use and development of AI models. 

Cisco Research is within Outshift, which serves as Cisco's incubation engine. Outshift is dedicated to pioneering new businesses and new markets in cutting-edge technology domains, including cloud native application security, edge native, quantum, and AI.

"Cisco Research has been pushing the frontiers of technology through innovative, cutting-edge research in areas of emerging technologies such as AI/ML, edge computing, and quantum. We are super excited to partner with several leading researchers in their fields at Cornell who are doing amazing research in these areas," said Ramana Kompella, head of Cisco Research. "In addition, Cisco Research encourages and promotes a culture of open innovation, and researchers are free to make all the research – publications and software – funded through these awards completely open to benefit everyone, not just Cisco."

The following Cornell Bowers CIS faculty will receive Cisco Research grants:

Allison Koenecke, assistant professor of information science, is investigating Demographic Biases in Generative AI Hallucinations. People from different backgrounds and countries have differences in how they speak and write. Those differences are likely to affect the output from generative AI programs that create text or images in response to written prompts. In her project, Koenecke will investigate how a person's demographics affect the accuracy of – or amount of "hallucination" in – text generated by three large language models (LLMs), including ChatGPT. She will also determine whether these models can be fine-tuned to reach more consistent levels of hallucination across demographics.
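The kind of disparity measurement this project involves can be sketched as a per-group comparison of hallucination rates. The groups, records, and function below are hypothetical illustrations, not data or methods from Koenecke's study.

```python
# Toy sketch: compare hallucination rates of LLM outputs across groups.
# The records below are hypothetical examples, not real study data.
records = [
    # (demographic group of prompt author, output judged a hallucination?)
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True),  ("group_b", False), ("group_b", False),
]

def hallucination_rates(records):
    """Fraction of outputs flagged as hallucinations, per group."""
    totals, flagged = {}, {}
    for group, is_hallucination in records:
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + int(is_hallucination)
    return {g: flagged[g] / totals[g] for g in totals}

rates = hallucination_rates(records)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # {'group_a': 0.25, 'group_b': 0.5}
print(disparity)  # 0.25
```

A fine-tuning goal of the sort described would then correspond to driving the disparity gap toward zero across groups.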

Ken Birman, professor of computer science, is developing faster data transfer methods so that machine learning (ML) technologies can be applied using edge computing. In his open-source project, Edge Framework for Ultra-Low Latency Computing, Birman will build on a system he previously developed called Derecho, which creates building blocks for fault-tolerant distributed computing. His new project, Cascade, uses Derecho to store ML software, such as for image processing or text generation, and allows the software to run very efficiently on standard high-speed networks. 

Kevin Ellis, assistant professor of computer science, proposes to engineer algorithms for guiding neural networks that generate code. In this project, Nonstandard Generation Strategies for Better LLM Reasoning with Code, Ellis aims to emulate how humans write code – through iteration, trial and error, and divide-and-conquer strategies that break a task into smaller subtasks – instead of having a neural language model write the code all at once. He expects this approach will yield more lightweight models that generate better code.
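The trial-and-error style of generation described above can be illustrated with a minimal generate-and-test loop. This is only a toy sketch: in a real system the candidate programs would be sampled from a neural language model, not drawn from the hardcoded, hypothetical pool used here.

```python
# Toy sketch of iterative, trial-and-error program generation:
# propose candidate programs, test each one against input-output
# examples, and keep the first candidate that passes every test.
examples = [(2, 4), (3, 9), (5, 25)]  # input -> expected output

# Hypothetical candidate pool standing in for model proposals.
candidates = ["x + 2", "x * 2", "x ** 2"]

def passes(expr, examples):
    """Check a candidate expression against all input-output examples."""
    try:
        return all(eval(expr, {"x": x}) == y for x, y in examples)
    except Exception:
        return False  # a crashing candidate simply fails the tests

solution = next(expr for expr in candidates if passes(expr, examples))
print(solution)  # x ** 2
```

Divide-and-conquer strategies extend this idea by searching for small programs that solve subtasks and then composing them, rather than asking the model to emit a full solution in one pass.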

Rachee Singh, assistant professor of computer science, aims to leverage programmable optical interconnects to make distributed training of machine learning models more efficient. In her work, she develops systems and algorithms for programming photonic interconnects at server and rack scales such that distributed computation does not get bottlenecked by communication between GPUs.

Volodymyr Kuleshov, assistant professor at the Jacobs Technion-Cornell Institute at Cornell Tech, and Christopher De Sa, assistant professor of computer science, have a vision to enable some of the largest LLMs to operate on consumer computers. They will take a step toward that goal with their project, Scaling Large Language Models to Consumer GPUs via 2-Bit Quantization. They propose to develop a new method called quantization with incoherence processing (QuIP), which will allow LLMs to function using only two bits of memory per parameter. This work will improve the cost and accessibility of generative AI models and bring miniaturized LLMs closer to running on edge devices.
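The two-bits-per-parameter idea can be sketched with plain uniform round-to-nearest quantization: each 32-bit float weight is replaced by one of four levels (a 2-bit code) plus a shared scale and offset. QuIP itself is far more sophisticated (its incoherence processing is what preserves accuracy at 2 bits); the code below only illustrates the storage arithmetic.

```python
# Toy sketch: uniform 2-bit quantization of weights (4 levels per value).
# This illustrates storing ~2 bits per parameter instead of 32; it is
# NOT the QuIP method, which adds incoherence processing.
def quantize_2bit(weights):
    """Map each float weight to a 2-bit code (0..3) plus a scale/offset."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 3  # 4 levels -> 3 steps between them
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Recover approximate float weights from the 2-bit codes."""
    return [lo + c * scale for c in codes]

weights = [-1.0, -0.4, 0.1, 0.9]
codes, scale, lo = quantize_2bit(weights)
print(codes)  # [0, 1, 2, 3] -- each code fits in 2 bits
approx = dequantize(codes, scale, lo)
```

At 2 bits per parameter, a model's weight memory shrinks by roughly 16x relative to 32-bit floats, which is what makes running very large LLMs on consumer GPUs plausible.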

The Cisco partnership will also support Trishita Tiwari, a doctoral student in the field of computer science, working with Edward Suh. Her doctoral research focuses on preventing LLMs from leaking sensitive information contained within their training data. She proposes to probe the weaknesses of LLMs by investigating possible attacks, and to develop solutions for existing security issues by modifying LLM architectures, training, and inference.