
Microaggressions have big impacts on disabled users online

October 27, 2022

By Patricia Waldron

In person, people with disabilities often experience microaggressions – comments or subtle insults based on stereotypes – and similar interactions, along with new types of microaggressions, play out online as well.

A new study by researchers at Cornell and Michigan State University finds that those constant online slights add up. Interviews revealed that microaggressions affect the self-esteem of people with disabilities and change how they use social media. Ideally, digital tools will one day reduce this burden for marginalized users, but because microaggressions are subtle, they can be hard for algorithms to detect.

“This paper brings a new perspective on how social interactions shape what equitable access means online and in the digital world,” said Sharon Heung, a doctoral student in the field of information science. Heung presented the study, “Nothing Micro About It: Examining Ableist Microaggressions on Social Media,” at ASSETS 2022, the Association for Computing Machinery SIGACCESS Conference on Computers and Accessibility, on Oct. 26.

Previously, little was known about online microaggressions. “If you look at the discourse around harms emanating from social media use by communities that are vulnerable, there is almost no work that focuses on people with disabilities,” said co-author Aditya Vashistha, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science. “It’s surprising because about one in seven people in the world has a disability.”

When microaggressions occur in live settings, they are often ephemeral, with few bystanders. In contrast, “when they happen on social media platforms, it’s happening in front of a large audience – the scale is completely different,” said Vashistha, “and then they live on, for people to see forever.”

Additionally, social media platforms can amplify microaggressions, potentially spreading misinformation. “Online microaggressions have the ability to shape the understandings of disability for a lot of people who are not even involved in the situation,” said co-author Megh Marathe, assistant professor of media, information, bioethics, and social justice at Michigan State. “We're very concerned about how it's shaping the way the broader audience thinks about disability and disabled people.”

Heung and co-author Mahika Phutane, a doctoral student in the field of computer science, interviewed 20 volunteers who self-identified as having various disabilities and who were active on social media platforms. They asked the participants to describe subtle discrimination and microaggressions they had experienced and the impact on their lives.

Patronizing comments like “You’re so inspiring” were the most common, along with infantilizing posts like “Oh, you live by yourself?” People also asked inappropriate questions about users’ personal lives and made assumptions about what the person could do or wear based on their disability. Some users were told they were lying about their disability, or that they didn’t have one, especially if that disability was invisible, such as a mental health condition.

The researchers categorized the responses into 12 types of microaggressions. Most fit into categories previously recognized in offline interactions, but two were unique to social media. The first was “ghosting,” or ignored posts. The second involved the platforms themselves: accessibility gaps that can make people with various disabilities feel excluded. For example, some users said they felt unwelcome when people did not add alt text to photos or used text colors they couldn’t discern. One person with dwarfism said her posts were continually removed because she kept getting flagged as a minor.

After experiencing a microaggression, users had to make the tough choice of how to respond. Regardless of whether they ignored the comment, reported it, or used the opportunity to educate the other person, participants said it took an emotional toll and damaged their self-esteem. Many took breaks from social media or limited the information they shared online to protect themselves.

“Addressing this problem is really hard,” said Phutane. “Social media is driven to promote engagement, right? If they educate the perpetrator, then that original post will just get more and more promoted.”

Most social media platforms already have moderation tools, but reporting systems are sometimes flawed, lack transparency, and can misidentify harassment. The participants proposed that platforms automatically detect and delete microaggressions, or deploy a bot that pops up with information about disabilities.

However, microaggressions can be hard for automated systems to detect. Unlike hate speech, where algorithms can search for specific words, microaggressions are more nuanced and context-dependent.
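To see why, consider a minimal, hypothetical sketch in Python. Nothing here is from the study: the blocklist and example posts are invented. A keyword filter of the kind used for overt hate speech catches a post containing a blocklisted term, but a patronizing comment like “You’re so inspiring” contains no flagged word and passes through untouched.

```python
# Hypothetical sketch: a toy keyword filter of the kind used for overt
# hate speech. The blocklist tokens and example posts are invented.

BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens standing in for abusive terms

def keyword_flag(post: str) -> bool:
    """Flag a post if any of its words appears on the blocklist."""
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

posts = [
    "You are a slur1",            # overt abuse: caught by keyword matching
    "You're so inspiring!",       # patronizing microaggression: not caught
    "Oh, you live by yourself?",  # infantilizing microaggression: not caught
]

for post in posts:
    print(f"{post!r} -> flagged: {keyword_flag(post)}")
```

Flagging the second and third posts would require modeling who is speaking, to whom, and in what context – exactly the nuance that simple keyword matching lacks.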

Once the scope and types of microaggressions experienced by people from various marginalized groups are better understood, the researchers think tools can be developed to limit the burden of dealing with them. These issues are important to address, especially with the potential expansion of virtual reality and the “metaverse.”

“We need to be especially vigilant and conscious of how these real-world interactions get transferred over to online settings,” said co-author Shiri Azenkot, associate professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech and Cornell Bowers CIS. “It's not just social media interactions – we're also going to see more interactions in virtual spaces.”

This work was partially supported by the National Science Foundation Graduate Research Fellowship and the University of California President’s Postdoctoral Fellowship.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.