Research


Online Hate Language

Hate speech poses significant challenges for NLP, in part because it is difficult to detect accurately. This difficulty stems from several factors, including the noise in short social media messages and the variety of implicit, rational, and stance-dependent discourse surrounding it. Our work focuses on detecting and characterizing hate speech on social media through a variety of computational methods.

Relevant Publications: Latent Hatred, Hate Lingo, Peer-to-Peer Hate, Hate Speech on News Websites, User Representation, Fine-grained Classification, Lifelong Learning


Misinformation Surrounding Opioid Use Disorder (OUD)

Every day, more than 130 people in the U.S. die after overdosing on opioids. The misuse of and addiction to opioids, including prescription pain relievers, heroin, and synthetic opioids such as fentanyl, is a serious national crisis that affects public health as well as social and economic welfare. While many individuals seek recovery through formal and informal means, their attempts are often thwarted by pervasive and potentially harmful misinformation online. In collaboration with public health experts at the CDC, our work leverages human-machine mixed-initiative methods to investigate the landscape of online language around Opioid Use Disorder.

Relevant Publications: Medication-Assisted Treatment, Fentanyl Risk Indicators, Fentanyl Misuse Themes


Emerging Social Media Platforms

Voice-based and VR social networking platforms introduce fundamental changes to social media: the ephemeral nature of interactions, the additional modality of voice, and the spatial constraints of VR. In turn, these changes may usher in new challenges for facilitation and moderation. Our work sheds light on ephemeral social spaces and how moderation tools and practices must evolve to meet these challenges.

Relevant Publications: Social VR Moderation


Responsible AI

Although NLP models have shown success across many applications, they propagate, and may even amplify, gender bias found in text corpora. Bias appears in multiple parts of an NLP system, including the training data, resources, pre-trained models (e.g., word embeddings), and the algorithms themselves. Our work aims to identify and mitigate these biases.

Relevant Publications: Gender Bias in NRE, Gender Bias Review

Let's create together
