Why AI fairness conversations must include disabled people
Tech offers promise to help yet too often perpetuates ableism, say researchers. It doesn’t have to be this way.
Third in a four-part series on non-apparent disabilities.
AI researcher Naomi Saphra faced “a programmer’s worst nightmare” in 2015. After a decade of coding and just as she was about to start a Ph.D. program in Scotland, neuropathy in her hands rendered typing too painful.
Seeking a solution, Saphra turned to the very technology she was studying. She began the long process of teaching herself how to code using voice-to-text dictation technologies. Today, a system called Talon, which Saphra has heavily customized to complete specific tasks, allows her to code and to write papers for her research on language models.
“I rely on it completely,” said the research fellow at the Kempner Institute for the Study of Natural and Artificial Intelligence. “I would absolutely not have a career if we didn’t have AI for speech-to-text. These days it’s pretty hard to exist in the world if you’re not able to use a computer much. And as things have advanced, it’s been very important that the word error rate has gotten lower over the years.”
AI technology can be a powerful assistive tool for people like Saphra with non-apparent disabilities — physical or mental conditions that are not immediately obvious to others. But disability advocates say these tools have a long way to go to become truly accessible. Experts say including disabled people in conversations on AI fairness and the development process is key.
Lawrence Weru, an associate in biomedical informatics at Harvard Medical School, was initially excited when voice-activated AI tools such as Siri and Alexa were released in the early 2010s. As someone who learned to code from a young age on public library computers before personal computers were common, he has long been fascinated by advances in digital technology. But Weru, who has a stutter, quickly found voice-activated technology more frustrating than helpful. When asking Siri for directions, the digital assistant would not understand the question if he stuttered.
While this was hardly a new experience — before AI, Weru remembers the frustration of trying to contact his bank and not being able to get past the automated phone system — it was disappointing to realize that the AI likely had not been trained on data from people with disabilities like his.
“People create things and people always have a vision in mind of who is going to be using their thing,” Weru said. “Sometimes not everybody is included in those personas.”
His experience with Siri leaves Weru concerned about the future of voice-activated AI technology. He envisions a world in which critical tasks — making doctor appointments, applying for jobs, accessing education — are handled not by humans, but by technologies that can’t be used by everyone.
“If we’re creating tools that we know are fed with information that can bias against certain groups, and we integrate those into very crucial aspects of our lives, what’s going to be the impact of that?” Weru said. “That’s a concern that I hope people would be having enough foresight to try to address in advance, but historically accessibility is usually something that’s treated as an afterthought.”
Maitreya Shah, a fellow at the Berkman Klein Center for Internet and Society, recently launched a research project analyzing different schools of thought on “AI fairness,” or movements seeking to mitigate AI bias against people in marginalized groups. Shah, a blind lawyer and researcher, wants to go beyond conversations about accessibility and examine what he believes is the root of the issue: People with disabilities are not being included in conversations about AI, even in conversations about AI fairness.
“A lot of research so far has focused on how AI technologies discriminate against people with disabilities, how algorithms harm people with disabilities,” Shah said. “My aim for this project is to talk about how even the conversation on AI fairness, which was purportedly commenced to fix AI systems and to mitigate harms, also does not adequately account for the rights, challenges, and lived experiences of people with disabilities.”
For his research, he is interviewing scholars who have studied the issue and evaluating AI fairness frameworks proposed by governments and the AI industry.
Shah said developers often consider disability data to be “outlier data,” or data that differs greatly from the overall pattern and is sometimes excluded. But even when it is included, some disabilities, such as non-apparent disabilities, are overlooked more than others. If an AI is trained on a narrow definition of disability (for example, if data from people who stutter is not used to train a voice-activated tool), the result is a tool that is not accessible to them.
“There is a paradox,” Shah said. “If you don’t incorporate disability data, your algorithms would be open to discriminating against people with disabilities because they don’t fit the normative ideas of your algorithms. If you incorporate the data, a lot of people with disabilities would still be missed out because inherently, the way you incorporate datasets, you divide data on the axes of identity.”
In his own life, Shah uses some AI technologies as assistive tools, including “Be My AI,” which describes images, and “Seeing AI,” which provides users with visual information such as text, color, light, and scenery. Blind people were closely involved in the development and testing of both tools.
But Shah said too often people with disabilities are not included in the high-level decision-making and development processes for AI that is purported to benefit them. He cited, as an example, technology designed to diagnose autism or address learning disabilities.
“The question is: Do people with autism or other disabilities even want these technologies? No one asks them,” Shah said.
In his research, Shah proposes adopting perspectives from disability justice principles, such as participation.
“Let people with disabilities participate in the development and the deployment of technologies,” he said. “Let them decide what is good for them, let them decide how they want to define or shape their own identities.”
Saphra agrees, which is why she believes any developer creating assistive AI should make it easily customizable, not just by AI experts or coders, but by people who may not be tech experts. That way, users can set up the system to perform specific, essential tasks like Saphra did for writing code.
“It’s very important to make sure that everything you release is hackable, everything is open-source, and that everything has an accessible interface to start with,” Saphra said. “These things are going to make it more useful for the greatest number of disabled people.”
- University Disability Resources serves as the central resource for disability-related information, procedures, and services for the Harvard community.
- Students who wish to request accommodations should contact their School’s Local Disability Coordinator.
- The 24/7 mental health support line for students is 617-495-2042. Deaf or hard-of-hearing students can dial 711 to reach a Telecommunications Relay Service in their local area.
- Harvard Law School Project on Disability
Also in this series
- 4 students with conditions ranging from diabetes to narcolepsy describe daily challenges that may not be obvious to their classmates and professors
- How to ensure students with disabilities have an equal chance to succeed?
- Harvard lab’s research suggests at-risk kids can be identified before they ever struggle in school