AI Will Reshape Society — But Only If Disabled People Are at the Table

Artificial intelligence is rapidly becoming the infrastructure of modern life. It is shaping how governments allocate welfare, how employers screen candidates, how courts assess risk, and how healthcare systems decide who receives care. Yet amid the global rush to regulate and deploy AI, one truth remains dangerously overlooked: AI will reshape society unjustly unless disabled people are meaningfully involved in building, governing, and auditing it.

For more than a billion disabled people worldwide, AI is not a speculative debate about the future. It is already determining who gets a job interview, who is flagged as “high risk” by automated systems, who loses access to social protection, and who is deemed eligible for life-saving healthcare. These systems are often trained on datasets that erase disabled bodies, disabled speech, disabled behavior, and disabled lives. When disability is absent from data, discrimination does not disappear — it becomes automated.

We have seen this pattern before. When digital infrastructure is designed without disabled people, exclusion becomes systemic rather than incidental. In India, biometric authentication systems have repeatedly failed disabled citizens who cannot provide fingerprints or iris scans, cutting them off from essential services. In the United States, algorithmic hiring tools have screened out candidates with speech differences or atypical motor patterns. Across Europe, welfare fraud detection systems have disproportionately targeted disabled people who rely on state support.

These are not technical glitches or unfortunate side effects. They are the predictable outcomes of designing technology around a mythical “average” human — someone who walks, speaks, sees, hears, processes information, and behaves in narrowly normative ways. AI systems built on this assumption will inevitably treat disabled people as outliers, anomalies, or errors to be corrected rather than as legitimate users to be served.

The irony is that disabled people have long been pioneers of technological innovation. Long before Silicon Valley began marketing “assistive AI,” disabled communities were adapting, hacking, and repurposing tools to navigate inaccessible environments. Screen readers, speech-to-text systems, curb cuts, and the principles of universal design all emerged from disability-led innovation. Yet the very people who shaped modern accessibility are routinely excluded from AI labs, regulatory bodies, and ethics committees.

This exclusion is not only unjust — it is strategically foolish. AI systems become more accurate, more humane, and more resilient when disabled people help design them. Training voice models on diverse speech patterns improves accuracy for everyone. Designing interfaces for people with limited mobility leads to better usability across devices and contexts. Disability-inclusive design is not charity or accommodation; it is sound engineering.

But inclusion cannot be symbolic. It must be structural.

First, governments must mandate disability representation in AI governance — not as token advisory roles, but as decision-making positions with real authority. Policymakers frequently invoke fairness, transparency, and accountability, yet disability expertise is rarely treated as essential. Without it, AI laws intended to protect marginalized communities will systematically fail one of the largest minority populations in the world.

Second, companies must treat accessibility as a core engineering requirement, not a post-launch fix. Disabled users should be involved from the earliest stages of product design, and accessibility audits should be as routine and rigorous as security or privacy audits. Retrofitting inclusion after harm has occurred is neither ethical nor efficient.

Third, AI datasets must be rebuilt to reflect the full diversity of human bodies and minds. This includes people with speech differences, limb differences, neurodivergent communication patterns, chronic illnesses, and non-standard mobility. Fairness cannot be engineered without representative data; bias cannot be mitigated when entire populations are missing from the training process.

Finally, disabled people must be compensated for their expertise. Too often, companies solicit unpaid “feedback” from disabled users while paying consultants, engineers, and ethicists for comparable labor. Inclusion without compensation is not participation — it is exploitation.

Disabled expertise is not anecdotal; it is professional, technical, and indispensable.

The stakes could not be higher. AI is rapidly hardening into the invisible architecture that determines who thrives, who struggles, and who is left behind. If disabled people are excluded from shaping these systems, AI will not merely reflect existing inequalities — it will entrench them into code, scale them globally, and render them harder to challenge.

But the alternative is equally real. If disabled people are included as co-architects of this technological transformation, AI could become one of the most powerful tools for equity humanity has ever built. It could expand access to education, employment, healthcare, and independent living in ways that law and policy alone have never fully achieved.

The question is not whether AI will reshape society. It will.

The question is who gets to shape AI.

And unless disabled people are at the table — not as afterthoughts, but as leaders — the future being built will not be a future for all.

By Vikas Gupta

About the Author: Vikas Gupta is an entrepreneur and disability rights advocate based in India, working at the intersection of governance, technology, and inclusion. He is the founder of EquiCore Advisory, where he advises institutions, companies, and public bodies on disability-inclusive policy design, ethical technology deployment, and systemic equity in decision-making. Drawing on lived experience and active engagement with constitutional and international human rights frameworks, his work focuses on how laws, digital systems, and institutional processes impact marginalized communities at scale. He writes and speaks regularly on disability rights, AI governance, and access to justice in India and globally. He can be reached at [email protected]