What Can Tech Learn from Crip Futurity?

From the Series: Technology and Anthropological Ways of Knowing

Photo by Talin Wadsworth, 2018.

“The disability to come . . . will and always should belong to the time of promise.”
—Robert McRuer

Technology and disability have a long, sometimes uncomfortable relationship. Many disabled people rely on various technologies to access their worlds. For instance, one of us, Molly, relies on biomedical technologies, from her wheelchair to her pain medications, to navigate her physical and social worlds. Email, voice assistants, and captions emerged from technologies developed with and for disabled people to broaden their participation in life. Within the tech industry, accessibility refers to an entire field that ensures that disabled people have access to websites, apps, and digital experiences. Underlying these technologies is the goal of an equitable future for disabled and nondisabled folks alike. On the other hand, many technological interventions into disability are inspired by imaginations of a future in which disability is expunged. For instance, Eli Clare (2017)—one of many scholars to explore this topic—writes about his experiences with rehabilitation as a person with cerebral palsy. He notes that the rehabilitation technologies to which he has been exposed, aimed at minimizing his disability, have meant removing an important part of his embodied experience.

Crip theory considers disability to be an important aspect of humanity’s diversity. Alison Kafer (2013) builds on crip theory to imagine crip futurity: a longing for a future in which disability is welcome and in which the collective knowledge and practices of disabled people shape that future’s structures. Technology production, too, is steeped in a future-oriented mindset. There is much at stake in whether technology like artificial intelligence (AI) builds a collective future or a continuation of the inequitable world in which we reside.

AI Futures

AI systems collect, construct, and act upon knowledge provided to them by their context (often, datasets). They chew through vast amounts of data at record pace to make decisions. We believe that this data, and the actions to which it contributes, must incorporate the intricacies of disabled life as well as the needs of disabled people.

AI can reduce cognitive load: it can relieve the burdens of difficult decisions (proponents argue that streets filled with autonomous vehicles would be far safer than today’s human-driven cars) or tedious tasks. There is also rising interest in “Artificial Intelligence for Social Good” (AI4SG) initiatives, which range from online reinforcement learning that targets HIV education for homeless youths to probabilistic models that aim to prevent harmful policing and support student retention. AI4SG can create socially good outcomes that were previously unachievable, unfeasible, or unaffordable.

For disabled people specifically, AI can help remove barriers to access. For instance, AI-driven computer vision might translate the visual world for people who are blind, while AI-powered speech recognition and auto-translation systems may assist people who are deaf or hard of hearing. Autonomous robots, cars, and machines may also be able to serve the needs of people with mobility or sensory impairments.

However, these positive AI-based outcomes come with a bevy of risks; the challenges include threats to human self-determination, the replacement of human job opportunities, profiling algorithms that discriminate against historically marginalized populations (see Birrer 2005; Barocas and Selbst 2016), and the erosion of ownership over one’s data. AI systems inevitably make biased decisions. Bias typically enters in one of two ways: (1) the algorithm encodes its designers’ personal biases, or (2) the dataset teaches the system society-wide biases. Though the discourse surrounding AI imagines a utopian, equitable future, many marginalized populations already experience the harmful impacts of bias in AI.
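To make the second failure mode concrete, here is a minimal sketch in Python. Everything in it is hypothetical: a toy “bot detector” is fit on typing speeds drawn from a sample that contains almost no disabled users, and the threshold rule, field names, and numbers are all invented for illustration.

```python
# A minimal, hypothetical sketch of dataset-driven bias: a toy "bot
# detector" is fit on typing speeds from a sample with almost no disabled
# users, then rejects the slower speeds typical of assistive input.
import statistics

# Training samples: (characters_per_minute, is_human); all values invented.
train = [(320, True), (290, True), (310, True), (305, True), (45, False)]

# "Training": accept any speed above the midpoint of the two class means.
human_mean = statistics.mean(s for s, h in train if h)
bot_mean = statistics.mean(s for s, h in train if not h)
threshold = (human_mean + bot_mean) / 2

def looks_human(chars_per_minute: float) -> bool:
    return chars_per_minute > threshold

print([looks_human(s) for s in (300, 315)])  # nondisabled users: [True, True]
print([looks_human(s) for s in (90, 110)])   # switch/eye-gaze users: [False, False]
```

Nothing in the rule mentions disability; the skewed sample alone is enough to make the system treat users of slower assistive input as nonhuman.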

AI can both usurp self-determination and heighten discrimination for disabled people. As technology companies rush to develop AI systems, those systems are often not constructed with an inclusive approach. Even systems with enormous potential advantages for the disability community have not been designed with disabled users in mind; speech-to-text systems, for example, routinely fail for people with speech impairments. Furthermore, the few AI systems that have been developed for disabled users often miss their mark because they were not developed with disabled users, and so lack consideration of key aspects of the embodied disabled experience.
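One concrete practice that inclusive development implies is disaggregated evaluation: reporting a system’s accuracy per user group rather than as a single aggregate, so that failures for underrepresented users stay visible. The sketch below is illustrative only; the imagined speech-recognition results, group names, and numbers are our assumptions, not measurements of any real system.

```python
# A minimal sketch of disaggregated evaluation over hypothetical results
# from an imagined speech-recognition system. The aggregate score looks
# acceptable while the per-group breakdown exposes the failure.
from collections import defaultdict

# (speaker_group, recognized_correctly); all labels invented.
results = [
    ("typical_speech", True), ("typical_speech", True),
    ("typical_speech", True), ("typical_speech", True),
    ("atypical_speech", False), ("atypical_speech", False),
    ("atypical_speech", True),
]

correct, total = defaultdict(int), defaultdict(int)
for group, ok in results:
    total[group] += 1
    correct[group] += ok

print(f"aggregate: {sum(correct.values()) / len(results):.0%}")  # 71%
for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%}")
# typical_speech: 100%; atypical_speech: 33%
```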

AI can also amplify disability-based discrimination (Morris 2020). Meredith Ringel Morris et al. (2016) demonstrated that a user’s disability can be reconstructed from their interactions with an AI-based system. Algorithms could then be used to target disabled users with harmful information, limited opportunities, or biased choices. For example, an insurance company’s algorithms could quietly restructure the options (or lack thereof) presented to users it identifies as disabled.
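To illustrate how such reconstruction can happen, here is a minimal sketch over hypothetical interaction telemetry. The field names, thresholds, and sessions are invented; real systems would draw on far richer signals, which is precisely the concern.

```python
# A minimal sketch: a naive heuristic over hypothetical interaction logs
# infers probable screen-reader use, a sensitive attribute the user never
# disclosed. Field names and thresholds are illustrative assumptions.
def likely_screen_reader_user(session: dict) -> bool:
    # No pointer activity plus heavy sequential keyboard traversal is
    # treated here as a proxy for screen-reader or keyboard-only use.
    return session["pointer_events"] == 0 and session["tab_presses"] > 30

sessions = [
    {"user": "a", "pointer_events": 212, "tab_presses": 4},
    {"user": "b", "pointer_events": 0, "tab_presses": 57},
]

print([s["user"] for s in sessions if likely_screen_reader_user(s)])  # ['b']
```

Once inferred, such a label can silently feed the differential treatment described above.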

Finally, the privacy of disabled people is at particular risk. Users with rare disabilities are far more exposed than the general population when AI systems collect, store, and analyze large datasets. Anonymizing these datasets, as required by current regulation, is not sufficient protection, as data can often be easily reidentified. The revelation of this data can lead to a further spiral of bias, exclusion, and unequal opportunity.
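The re-identification risk can be shown with a minimal linkage sketch on synthetic records. Names have been stripped, yet a rare combination of quasi-identifiers (here ZIP code and age, both invented) ties an “anonymous” medical record back to a known person.

```python
# A minimal sketch of a linkage attack on synthetic, name-free records:
# a rare combination of quasi-identifiers (ZIP code + age) matches exactly
# one record, revealing the sensitive attribute. All values are invented.
anonymized = [
    {"zip": "98052", "age": 34, "condition": "rare neuromuscular disorder"},
    {"zip": "98052", "age": 58, "condition": "seasonal allergies"},
    {"zip": "10001", "age": 34, "condition": "arthritis"},
]

# Publicly known facts about one individual (e.g., from a public profile):
public = {"name": "Jane Doe", "zip": "98052", "age": 34}

matches = [r for r in anonymized
           if (r["zip"], r["age"]) == (public["zip"], public["age"])]

if len(matches) == 1:
    print(f"{public['name']} -> {matches[0]['condition']}")
    # Jane Doe -> rare neuromuscular disorder
```

The rarer the disability, the fewer records share those quasi-identifiers, which is why disabled people bear disproportionate re-identification risk.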

Guiding the Tech Future with Disability Wisdom

In this essay, we’ve explored how technology can imagine a utopian future that is either inclusive or exclusive of disabled people. AI systems that embrace “crip futurity” would embed disabled perspectives into algorithm design and development. Additionally, it is imperative that disabled people be represented within the datasets on which AI technologies are trained. These datasets directly shape the learned habits and ultimate outcomes of these ever-evolving technologies; by shaping the data these systems are served, we shape the reality within which machine intelligence operates and, thus, the future it builds. Crip futurity can help ensure that this data-driven context better represents the human reality from which it will soon be inseparable.

In our work at Adobe, we have developed practices to include disabled voices in our product design and research processes, such that our programs are created both with and for disabled people. Conducting research that helps build crip futurity has been more involved than simply inviting disabled people into our studies. It has required us to restructure the way we think about “typical users,” build a more robust recruitment infrastructure, deepen our relationships with disability communities, and continue expanding awareness of inclusive design at Adobe. We believe it is particularly important to consider these perspectives when designing tools that enable creativity and expressivity; it is imperative that all users be able to express themselves effectively. After all, enabling more equitable expressivity will help us build the “time of promise” that crip futurity offers.

References

Barocas, Solon, and Andrew D. Selbst. 2016. “Big Data’s Disparate Impact.” California Law Review 104, no. 3: 671–732.

Birrer, Frans A. J. 2005. “Data Mining to Combat Terrorism and the Roots of Privacy Concerns.” Ethics and Information Technology 7, no. 4: 211–20.

Clare, Eli. 2017. Brilliant Imperfection: Grappling with Cure. Durham, N.C.: Duke University Press.

Kafer, Alison. 2013. Feminist, Queer, Crip. Bloomington: Indiana University Press.

McRuer, Robert. 2006. Crip Theory: Cultural Signs of Queerness and Disability. New York: NYU Press.

Morris, Meredith Ringel. 2020. “AI and Accessibility.” Communications of the ACM 63, no. 6: 35–37.

Morris, Meredith Ringel, Annuska Zolyomi, Catherine Yao, Sina Bahram, Jeffrey P. Bigham, and Shaun K. Kane. 2016. “‘With most of it being pictures now, I rarely use it’: Understanding Twitter’s Evolving Accessibility to Blind Users.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems: 5506–16.