

HCDE researchers studying equitable automated speech recognition among African American English speakers

Leah Pistorius
December 1, 2022

HCDE researchers Jay Cunningham, Daniela Rosner, and Julie Kientz have received a grant from Google Research to develop equitable, community-collaborative design methods to mitigate racial disparities in automated speech recognition technologies.

Pictured: HCDE PhD Candidate Jay Cunningham, Associate Professor Daniela Rosner, and Professor Julie Kientz

Automated speech recognition (ASR) systems that rely on natural language processing techniques are becoming increasingly prevalent in people’s everyday lives. From virtual assistants integrated into mobile devices, smart home assistants, and vehicles, to software features such as automatic translation, captioning, subtitling, and hands-free computing, ASR systems are core components of new devices and applications. However, recent research has shown that with this broadening access come new fairness-related harms and racial disparities that negatively impact African American speakers of African American Vernacular English (AAVE), whose speech is recognized and processed less accurately.


HCDE PhD candidate Jay Cunningham, with Professor Julie Kientz and Associate Professor Daniela Rosner, seeks to address this challenge by developing and validating collaborative, culturally sensitive methods for creating more inclusive and equitable automated speech recognition technologies for African American speakers of AAVE.

The research team seeks to understand the norms and pitfalls in how automated speech recognition systems are designed, including the decisions machine learning technologists make that contribute to disparities among users of underrepresented language varieties. The team will partner with local community organizations to conduct community-based participatory research that informs best practices in language technology design, and will co-design automated speech recognition and natural language processing prototypes and probes with African American speakers of AAVE to inform design techniques for mitigating racial disparities in those systems.

This project is supported by a $60,000 grant from the Google Award for Inclusion Research Program. This award recognizes and supports academic research in computing and technology that addresses the needs of historically marginalized groups globally. It funds topics including accessibility, AI for social good, algorithmic fairness, education, gender bias, and many other areas that aim to have a positive impact on underrepresented groups.

"Through this project, we hope to further inform how Google and the tech industry can democratically collaborate with communities to create artificial intelligence and machine learning systems, practices, and policies that enable fair, equitable, and sustainable solutions that ultimately liberate and empower historically marginalized groups," said Cunningham.