Disability-centered AI in education and policy
by Yonah Welker
About the author: Yonah Welker is a technologist and public expert on algorithms and policies, former tech envoy (EU/MENA), advisor to ministries and authorities on AI and public innovation, visiting lecturer, and evaluator. Welker's contributions and work have been featured in and added to acts, reports and frameworks by the White House PCAST, the World Economic Forum, the OECD and UNESCO, and have supported AI and digital acts, treaties, and ontologies and taxonomies of assistive systems, AI, robotics, health, education and accessibility programs.
Introduction
Historically, individuals with disabilities were excluded from the workplace, the educational system, and sufficient medical support. For instance, around 50-80% of the population with disabilities are not employed full time, 50% of children with disabilities in low- and middle-income countries are still not enrolled in school, public spaces meet only 41.28% to 95% of the expectations of people with disabilities, and only 10% of the population have access to assistive technologies. For cognitive disabilities, the level of discrimination is even higher. The unemployment rate among those with autism may reach 85%, depending on the country, while among people with severe mental health disorders it can be between 68% and 83%, and for those with Down's syndrome, 43%.
Along with exclusion, individuals with disabilities are disproportionately affected by unjust law enforcement, violence and brutality. Persons with disabilities were victims of 26% of all nonfatal violent crimes. Between 30% and 50% of individuals subjected to the use of force or killed by police have a disability. People with intellectual disabilities are seven times more likely to be sexually assaulted than members of the general population. About one-third of young children and teenagers with disabilities face emotional and physical abuse.
As for conflicts and crises, people with disabilities are also recognized as among the most marginalized and at-risk populations. An estimated 9.7 million people with disabilities are forcibly displaced as a result of conflict and persecution and are victims of human rights violations and conflict-related violence. As a result, these groups are also more affected by post-traumatic disorders and conditions.
Finally, there is a strong component of intersectionality behind disabilities that may amplify this exclusion and discrimination, including aspects of gender, ethnicity, co-occurring conditions and socioeconomic factors. For instance, many individuals with learning disabilities also experience mental health problems, with estimates suggesting that between 25% and 40% fall into this category. Girls are often diagnosed at a much lower rate than boys, with a ratio of around 4:1, and may also be misdiagnosed due to different manifestations. Certain ethnic and social groups have been historically excluded from research data and resources. For instance, it was found that Caucasian parents of autistic children were 2.61 times more likely to report any social concerns to their child's paediatrician than African-American parents (Georgia State University, 2017).
AI, Data Sets and Disability Support
It’s important to highlight that ethically developed and implemented assistive technologies can eliminate particular social barriers and create more accessible workplaces, hiring and learning experiences, and accommodation practices.
For instance, to support physical impairments, AI algorithms can be used to augment smart wheelchairs, walking sticks, geolocation and city tools, and bionic and rehabilitation technologies. For sensory impairments, this includes facial and sign recognition for sign-language identification and support of deaf individuals, and computer vision algorithms that can interpret images and videos and then translate that information into braille or audio output to help individuals with visual impairments.
In the area of cognitive impairments, this includes social robotics and algorithms for emotional training for students with autism, wearables and devices that improve emotion recognition, and adaptive platforms that support dyslexia and attention deficit and hyperactivity disorders. Such technologies can serve the general population as well, including further advancement of healthcare, education, labour and city systems, and support of elders, neurodisabled groups and individuals with psycho-emotional disorders.
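As an illustrative sketch of the kind of pipeline described above, the example below chains an off-the-shelf image-captioning model with a local text-to-speech engine to turn an image into an audio description for a blind or low-vision user. It is a hypothetical, minimal example, assuming the open-source transformers and pyttsx3 packages and the publicly available Salesforce/blip-image-captioning-base model; it is not a description of any specific product referenced in this article.

```python
# Hypothetical sketch: image-to-audio description for visually impaired users.
# Assumes `pip install transformers pillow pyttsx3` and access to download
# the captioning model on first run.
from transformers import pipeline
import pyttsx3


def describe_image_aloud(image_path: str) -> str:
    # Step 1 - computer vision: generate a short textual description of the image.
    captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
    caption = captioner(image_path)[0]["generated_text"]

    # Step 2 - accessible output: speak the description aloud; the same text
    # could instead be routed to a refreshable braille display or screen reader.
    engine = pyttsx3.init()
    engine.say(caption)
    engine.runAndWait()
    return caption


if __name__ == "__main__":
    # "street_scene.jpg" is a placeholder path for any local image.
    print(describe_image_aloud("street_scene.jpg"))
```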
Disability, bias and autonomous systems
Algorithms do not create biases themselves but perpetuate societal inequities and cultural prejudices. The reasons behind this include a lack of data on target populations due to historical exclusion from research and statistics, simplification and generalization of the target group's parameters (proxies), and unconscious and conscious bias within society.
For instance, AI systems are known to discriminate against individuals with facial differences or asymmetry, different gestures and gesticulation, speech impairments, or different communication patterns. This especially affects groups with physical disabilities, cognitive and sensory impairments, and autism spectrum disorders. There are examples of directly life-threatening scenarios in which police and autonomous security systems, or military AI, may falsely recognize assistive devices as weapons or dangerous objects, or misidentify facial or speech patterns. These concerns have been raised by the UN Special Rapporteur on the Rights of Persons with Disabilities and by disability organizations such as the EU Disability Forum.
There are a variety of physical, cognitive and social parameters that may lead to discrimination against individuals with disabilities:
- Assistive tools and devices – individuals with disabilities may use a wheelchair, walking stick, rehabilitation or assistive devices, bionic hands or legs, or other tools and devices of different shapes, forms and patterns that may not be properly recognized by autonomous systems;
- Assistance and users – solutions addressing individuals with disabilities frequently involve not just one end-user but an "ecosystem" of users, such as family members and caregivers. For instance, specialized solutions for autism frequently involve two interfaces, one for the parent and one for the child. Public and city systems may not take this into consideration;
- Physical impairments – a person with a disability may lack particular limbs or have a different body shape, posture or movement pattern, making proper recognition more difficult;
- Visual impairments – blind persons and those with a visual impairment may not properly perceive visual cues given by automated systems;
- Hearing impairments – individuals with hearing impairments may not hear and comply with audible commands or warnings, which demands particular caution from police and law-enforcement systems;
- Speech impairments – neurological conditions may affect speech and the ability to communicate, thus not matching "typical" speech patterns;
- Cognitive impairments – individuals with cognitive disabilities may communicate differently or lack emotional recognition or social skills;
- Behavioural and psychomotor patterns – individuals with disabilities may exhibit different patterns of user behaviour related to attention span, activities and cognitive parameters;
- Facial recognition – systems may fail to identify persons with eye deviation or facial neuropathy;
- Tactile recognition – systems built on the assumption that everyone has hands, fingers and fingerprints and similar tactile parameters exclude many individuals with disabilities;
- Semantic, intersectional, age and other bias – systems may add negative connotations to disability keywords for individuals of particular ethnicities. In addition, algorithms may perpetuate existing ageism.
Each parameter alone or in combination with others may lead to greater risks presented by autonomous systems. In order to better categorize risks for individuals with disabilities, we should refer to existing methodologies and policies.
Other challenges and limitations include the problem of "insufficient research evidence", historical and statistical distortions, the tendency of AI models to generalize, higher errors and inaccuracies for smaller groups, and technical limitations (e.g. facial recognition and craniofacial syndromes).
Generative AI – opportunities and risks for disabilities
Generative AI-based systems can support people with disabilities by fueling existing assistive technology ecosystems and robotics, learning, accommodation and accessibility solutions. Ultimately, Generative AI can empower broader health and assistive solutions. However, Generative AI also poses unique risks associated with transparency, understanding system outcomes, cognitive silos, potential misinformation and manipulation, privacy and ownership.
How Generative AI may support disabilities
AI algorithms and systems play a significant role in supporting and accommodating disabilities, from augmenting assistive technologies and robotics to creating personalized learning and healthcare solutions. Generative AI and language-based models further expand this impact and the R&D behind it. In particular, such systems may fuel existing assistive ecosystems and health, work, learning and accommodation solutions that require communication and interaction with the patient or student, social and emotional intelligence, and feedback. Such solutions are frequently used in areas involving cognitive impairments, mental health, autism, dyslexia, attention deficit disorder and emotion recognition impairment, which largely rely on language models and interaction.
With the growing importance of web and workplace accessibility (including the dedicated European Accessibility Act), Generative AI-based approaches can be used to create digital accessibility solutions associated with speech-to-text or image-to-speech conversion. They may also fuel accessible design and interfaces involving adaptive texts, fonts and colours benefitting reading, visual or cognitive impairments. Similar algorithms can be used to create libraries, knowledge and education platforms that may serve the purpose of assistive accommodation, social protection and micro-learning, equality training and policing. Finally, approaches explored through building such accessible and assistive ecosystems may help to fuel the assistive pretext – when technologies created for groups with disabilities are later adapted for a broader population – including 'neurofuturism': fueling new forms of interaction, learning and creativity involving biofeedback, languages and different forms of media.
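To make the idea of adaptive, accessible design more concrete, the sketch below adjusts how a passage of text is presented (font, size, spacing and contrast) according to a simple user profile, in the spirit of the adaptive texts, fonts and colours mentioned above. The profile fields and style values are hypothetical illustrations for this article, not recommendations drawn from any accessibility standard.

```python
# Hypothetical sketch: adapt text presentation to a reader's declared needs.
# Profile fields and CSS values below are illustrative only.
from dataclasses import dataclass


@dataclass
class AccessibilityProfile:
    low_vision: bool = False          # prefers larger text and higher contrast
    dyslexia_friendly: bool = False   # prefers wider spacing and a sans-serif face


def render_accessible_html(text: str, profile: AccessibilityProfile) -> str:
    # Start from neutral defaults and layer adjustments per profile flag.
    font_family = "serif"
    font_size = "1.0rem"
    line_height = "1.5"
    letter_spacing = "normal"
    colours = "color: #222222; background: #ffffff;"

    if profile.low_vision:
        font_size = "1.5rem"
        colours = "color: #000000; background: #ffffe0;"  # higher-contrast pairing
    if profile.dyslexia_friendly:
        font_family = "Arial, sans-serif"
        line_height = "1.8"
        letter_spacing = "0.05em"

    style = (f"font-family: {font_family}; font-size: {font_size}; "
             f"line-height: {line_height}; letter-spacing: {letter_spacing}; {colours}")
    return f'<p style="{style}">{text}</p>'


if __name__ == "__main__":
    profile = AccessibilityProfile(dyslexia_friendly=True)
    print(render_accessible_html("Reading settings can be adjusted per reader.", profile))
```

In practice, the same profile could also drive a generative model's choice of simplified wording or summaries, in addition to the purely visual adjustments shown here.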
When compared to existing AI systems, however, language-based platforms require even more attention and ethical guidance. In particular, they can imitate human behaviour and interaction, involve more autonomy and pose challenges in delegating decision-making. They also rely on significant volumes of data, a combination of machine-learning techniques and the social and technical literacy behind them.
There are different ways in which Generative AI-associated systems may pose risks for individuals with disabilities. In particular:
- They may fuel bias in existing systems, such as automated screening and interviews, public services involving different types of physical and digital recognition and contextual and sentiment bias.
- They may lead to manipulative scenarios, cognitive silos and echo chambers. For instance, algorithms were used to spread misinformation among patients during the COVID-19 pandemic.
- Language-based systems may add negative connotations to disability-related keywords and phrases or produce wrong outcomes because public data sets contain statistical distortions or wrong entries.
- Privacy – in some countries, governmental agencies have been accused of using data from social media without consent to confirm patients' disability status for pension programmes.
High- and increased-risk systems and scenarios
There are different approaches to the categorization of risks associated with AI systems. For instance, the European AI Act introduces four risk levels for such systems (unacceptable, high, limited and minimal) and related compliance practices. The category of unacceptable risk relates to public scoring and biometric systems, which in most cases are prohibited. High-risk systems include transportation, public and private services, hiring, screening and education platforms. Such solutions require conformity assessment and an audit of the system's safety, privacy, robustness and impact.
The limited risk category relates to chatbots, conversational AI and emotion recognition and requires transparency assessment. In this case, users should be aware that they are interacting with a machine so they can make an informed decision to continue or step back. Finally, the category of minimal risk includes AI-enabled video games, internet filters and systems that do not involve sensitive data collection. Many countries, including the United States, Japan and China, are working to update their vision of the categorization of AI risks and related regulations.
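Purely as an illustration of the four-tier structure summarized above (and not an implementation or restatement of the Act itself), the categorization could be expressed as a simple lookup table; the field names and wording below are this sketch's own.

```python
# Illustrative only: the four risk tiers described above as a lookup table.
# Tier names and examples follow the summary in the text; field names and
# phrasing are invented for this sketch, not taken from the AI Act.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


RISK_TIERS = {
    RiskTier.UNACCEPTABLE: {
        "examples": ["public scoring", "certain biometric systems"],
        "obligation": "prohibited in most cases",
    },
    RiskTier.HIGH: {
        "examples": ["transportation", "public and private services",
                     "hiring and screening", "education platforms"],
        "obligation": "conformity assessment and audit of safety, privacy, robustness and impact",
    },
    RiskTier.LIMITED: {
        "examples": ["chatbots", "conversational AI", "emotion recognition"],
        "obligation": "transparency: users must know they are interacting with a machine",
    },
    RiskTier.MINIMAL: {
        "examples": ["AI-enabled video games", "internet filters"],
        "obligation": "no additional obligations highlighted in the summary above",
    },
}


def obligation_for(tier: RiskTier) -> str:
    """Return the example obligation recorded for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]


if __name__ == "__main__":
    print(obligation_for(RiskTier.HIGH))
```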
It's important to highlight that existing AI policies typically present a generalized vision of risks. They do not specify the risks for particular groups such as marginalized and low-income communities, individuals with disabilities or people on the autism spectrum. Due to historical exclusion and different physical or cognitive patterns, the same algorithms may pose much greater risks for these groups, specifically in cases of law enforcement, public systems, and scenarios of misuse, manipulation and silos.
In particular, such disability-specific high-risk categories include:
Police and law enforcement, autonomous weapons. Between 30% and 50% of individuals subjected to the use of force or killed by police have a disability. AI systems are rapidly emerging in the area of law enforcement, policing and public security and may perpetuate these social challenges. In particular, autonomous systems may falsely recognize assistive devices or atypical psychomotor conditions and behavioural patterns, and individuals with visual or hearing impairments may not properly recognize cues given by such systems.
Public biometrics systems. Emerging public policies are starting to highlight these systems as an "unacceptable risk" category. This is true for both the general population and individuals with disabilities. In particular, co-occurring impairments, such as facial, voice or tactile differences, may lead to errors or an inability to properly recognize a person with disabilities. As a result, such individuals may be falsely rejected, flagged or discriminated against by public systems, services or police;
Public and private systems perpetuating discrimination. Around 50-80% of the population with disabilities are not employed full time. These existing social biases and distortions shape how hiring, educational or screening systems may work toward individuals with disabilities. For instance, several hiring and job-search platforms have been alleged to discriminate against older people and individuals with disabilities, and automated systems may add negative sentiments to "disability" keywords in resumes, exams or personal information;
Systems prone to misuse and manipulative scenarios. Individuals with disabilities were victims of 26% of all nonfatal violent crimes, and people with intellectual disabilities are seven times more likely to be assaulted. Such individuals are also nearly 2.2 times more likely to become victims of violence, disinformation, social attacks, abuse or manipulation. Social network algorithms, chatbots, messaging apps and similar tools, despite their positive impact, can be intentionally or unintentionally misused by people. This includes scenarios of manipulation, abuse or digital attacks;
Systems prone to silos and/or evolving omissions. Typically not sufficiently covered by risk categorizations, non-actions (omissions) may present harm to people with disabilities. Many individuals with cognitive or physical impairments increasingly use digital tools, assistive devices, and remote working and learning platforms. Without proper human involvement and curricula, such scenarios may lead to social isolation, silos, and reliance on tools trained on inauthentic or inconsistent data or sources;
Systems involving emotion recognition. Sometimes identified as a "limited risk" category, such systems are widely used as the cornerstone of some assistive technologies, such as social and emotional AI and robotics, and educational and health-related solutions. Despite their positive role, systems interpreting emotions are known to be prone to bias. In particular, such systems may fail to work properly for individuals with multiple impairments or particular behavioural and psychomotor patterns.
Policy, impact assessment and compliance
With more developments in the area of public AI policies, disability organizations such as the EU Disability Forum, communities and public entities have been vocal about the necessity to bring more focus to disability-specific cases, vocabulary and legal frameworks, to ensure fairness, transparency and explainability for these groups, and to address the prohibition of particular high-risk systems such as law enforcement and biometric solutions.
Taking into account that historical discrimination against individuals with disabilities is first of all a problem posed by institutional and social structures and biases, it's important to develop solutions and policies accordingly. In particular, we should develop criteria and tools that both support the development of assistive technologies and ensure their human-centricity, safety and proper guidance.
In order to support the development of such technologies, policy frameworks should address the complex nature of disability support, the spectrum of conditions and the range of age groups. With the growing complexity of the adoption cycle, there is a need for guidelines and curricula involving a constant feedback loop between developers and patients, and disability-specific impact assessment. With the World Health Organization raising the necessity to evolve its digital health competence framework, it's important to bring attention to disability- and accessibility-specific terminology, vocabulary and knowledge frameworks. For instance, cognitive disabilities and autism-related conditions drive specific terminology related to neurodivergent individuals and the autism spectrum. Vocabulary becomes more complex due to the convergence of technical and social studies, the more active involvement of bioethics-related professionals, and aspects of gender, age and ethnic groups (e.g. UNICEF's Accessible and Inclusive Digital Solutions for Girls with Disabilities, or AI for Children).
Another group of tools and criteria aims to ensure safety and avoid silos and misuse. It includes disability-specific categories of high- and unacceptable-risk systems aligned with different scenarios, types of impairments and co-occurring conditions. It also involves approaches to compliance that take into account human involvement and the level of autonomy, and the categorization of actions and non-actions (omissions). Finally, it means taking into account the specifics of vocabulary, diverse stakeholders and data inputs when approaching aspects of fairness, transparency, explainability and accountability.
Finally, as raised by the International Red Cross, even discussions of such complex issues as conflict displacement and the threats posed by autonomous military systems and weapons do not sufficiently ensure the representation and actual participation of groups with disabilities, which are especially affected by such systems. With more international conflicts, there is an immediate call to action to address these challenges with the involvement of all vulnerable groups.
Way forward: disability-centred risk and impact assessment
Disability is not a monolith but a spectrum, affected by layers of health conditions, gender, demographic, socio-economic and historical criteria. This complexity is an important reminder that disability exclusion is a social issue first and only then an algorithmic one. Existing AI policies and acts attempt to categorize and describe systems primarily through generalized visions of technologies, scenarios and posed risks. These categories do not address specific groups, physical or cognitive differences, unequal access to medical support or education, or economic status.
With growing risks of data silos and the monopolization of AI development by corporate agents or governments, there is an urgent need for collective action to ensure disability representation in every conversation and policy development, and proper risk and impact assessment categorization.
Recent works, reports and contributions (disability-centered AI in education and policy):
Course and OECD Repository:
- OECD MOOC (author) – https://oecd.ai/en/catalogue/tools/disability-centered-ai-and-ethics-mooc
Reports (additions through disability lens):
- EU Commission, AI in Science/Research – https://scientificadvice.eu/advice/artificial-intelligence-in-science/
- OECD – https://www.oecd.org/social/using-ai-to-support-people-with-disability-in-the-labour-market-008b32b7-en.htm
- WHO – https://iris.who.int/bitstream/handle/10665/375579/9789240084759-eng.pdf?sequence=1&isAllowed=y
- UNESCO – https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research
- White House PCAST – Generative AI – https://www.whitehouse.gov/wp-content/uploads/2023/07/PCAST-Written-Public-Comments-July-2023.pdf
Publications and letters:
- https://oecd.ai/en/wonk/eu-ai-act-disabilities (AI Act)
- https://oecd.ai/en/wonk/disabilities-designated-groups-digital-services-market-acts (DSA)
- https://www.euronews.com/next/2023/11/22/can-emerging-ai-strategies-protect-people-with-disabilities-and-other-vulnerable-groups (AI policy and treaties)
- https://www.euronews.com/my-europe/2024/01/29/can-the-eu-ai-act-embrace-peoples-needs-while-redefining-algorithms (AI policy and treaties)
- https://www.weforum.org/agenda/2023/08/sovereign-funds-future-assistive-technology-disability-ai/ (Sovereign funds)
- https://www.weforum.org/agenda/2023/11/generative-ai-holds-potential-disabilities/ (Gen AI)
- https://www.weforum.org/agenda/2023/04/how-cognitive-diversity-and-disability-centred-ai-can-improve-social-inclusion (Cognitive spectrums)
- https://www.forbes.com/sites/abigaildubiniecki/2024/01/25/trustworthy-ai-string-of-ai-fails-show-self-regulation-doesnt-work/?sh=5c722d4a105c (AI Safety Declaration)
- www.privacylaws.com/int186usai (Privacy)
- https://horasis.org/disability-peace-center-ai-policy/ (Humanitarian context)
- https://www.forbes.com/sites/forbestechcouncil/2023/05/09/algorithmic-diversity-mitigating-ai-bias-and-disability-exclusion/?sh=423428b8417d (Audit)