Caring for Images: Computational Anonymization as Refusal?

From the Series: Relating, Refusing, and Archiving Otherwise

Anonymized image taken from https://nobordercamps.eu. No Border Camp 2022, Rotterdam, Netherlands.

Introduction

Surveillance has been normalized into every aspect of our lives, “from the level of state practices all the way down to interpersonal exchanges among family members and friends” (Monahan and Murakami Wood 2018, xxxi). States and police forces increasingly expand their budgets and capabilities in hopes of rendering people hypervisible, while simultaneously protecting themselves behind regimes of opacity and secrecy (see Cinkevich and Engelhardt, in this series). While these tools are framed as being meant to identify dissidents and others labeled suspicious or potentially dangerous, state-corporate surveillance projects actually reach most people in one way or another—albeit with drastically different consequences. Those demographics who are subjected to the most violent forms of surveillance may be least likely to be in a position to reject or protect themselves from such scrutiny and targeting.

Activists, journalists, scholars, and others demanding investigations into police/border/state violence and human rights violations increasingly draw on some surveillance technologies in collective attempts to create a countervisuality (Monahan 2015) that will bring about some form of accountability or justice. In turn, states fight back with discrediting tactics, such as claiming that the audiovisual evidence is fake, or that it has been digitally modified and/or staged.

For example, when activist groups and the global press confronted the Assad regime with visual evidence of police injuring and murdering people in the streets of Syria in 2011, the regime denied the abuse through a disinformation campaign that mobilized a tactic Hamad calls “absenting images” (2020, 66). This consisted of producing an alternative version of the events—such as claiming that protesters’ harm was self-inflicted—through mere invocation and an absence of visual evidence that could support their claims.

Similarly, politicians and border police continue to minimize and/or outright deny pushbacks and drift-backs in the Mediterranean, despite the vast evidence collected by journalists, NGOs, researchers, and people on the move. On June 14, 2023, the day a ship carrying an estimated seven hundred people sank off the coast of Pylos, I spoke with a Frontex officer while doing fieldwork at a security conference. When I asked him to comment on the past decade’s estimated twenty thousand deaths in the Mediterranean, he responded that “if push backs were the case…” and that while he “wasn't denying that they might occur,” the fault resided with “NGOs who rent the big ships” and “those who take people on ships.”

Stop Pushbacks, No Border Camp 2022, Rotterdam. Image by author.

Opacities

As a response to states’ disinformation campaigns and increasingly difficult evidentiary landscapes concerning what counts as legitimate evidence of violations, some activist initiatives have responded through acts of refusal, appropriation, and subversion. Especially in legal contexts and documentary or film-based practices involving testimony, those expressing or lodging a grievance must make themselves legible by disclosing not only their face—which is often equated with one’s identity—but also key biographical information. In order to bypass or subvert these conventions, some activist groups and practitioners working with moving images have proposed methods such as “absenting images” (Hamad 2020). In the film “Testimony,” the Wujoud Collective simultaneously appropriates “the absent image”—weaponized by the Assad regime—and rejects the disclosure conventions expected of testimony by using only a black screen to accompany a detainee’s account of torture (Hamad 2020).

Others have turned toward the glitch, the blur, and various other tools in order to reach some form of opacity. For example, the 2020 film Welcome to Chechnya, by David France, uses animation and computational anonymization to provide persecuted lesbian, gay, bisexual, transgender, and queer (LGBTQ) people from Chechnya with what some described as “digital shields” created from composites based on portraits of volunteers.

Refusals

In a time when facial recognition technologies abound and become increasingly normalized, it has become imperative for activists, journalists, anthropologists, and others involved in the production and circulation of media to find and develop sensitive ways of documenting politically urgent issues.

As an anthropologist documenting illegalized groups and their campaigns for alternative identity documents, it became especially important for me to experiment with a methodological approach that responded both to activists’ desires for recognition and to my concerns about exposing interlocutors to further surveillance and/or targeting through my work. Between 2019 and 2021, I documented the Massachusetts Driver’s License for All campaign, which sought to grant driver’s licenses to all residents regardless of legal status. Dilemmas related to visibility, surveillance, and recognition were at the heart of the campaign and my documentation of it. Indeed, the U.S. driver’s license is regarded by many as the most prominent identity technology through which local governments simultaneously govern mobility and grant residents political and everyday recognition. This possible partial recognition, however, also poses risks, as Department of Motor Vehicles databases contain the largest records of personal information concerning U.S. residents. These are accessible to law enforcement and have even been used by tech companies to train facial recognition technologies.

Drawing inspiration from the different approaches to visual opacity discussed earlier, I anonymized my ethnographic archive with DeepPrivacy—an open-source computational anonymization software used in Welcome to Chechnya and created by Hukkelås et al. (2019). In short, this software identified seven points on each face and swapped them for attributes from publicly available images within a dataset of 1.47 million human faces (Hukkelås et al. 2019). It then created anonymized composites while mostly retaining each image's background information.
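For readers curious about the mechanics, the pipeline just described can be sketched in miniature. The following is a conceptual toy, not DeepPrivacy's actual GAN: the detector, the seven-keypoint stub, and the random-value "generator" are all stand-ins I have invented for illustration. What the sketch preserves is the structure of the method: locate the face and its keypoints, synthesize replacement content for the face region only, and leave the surrounding background pixels untouched.

```python
# Toy sketch of keypoint-guided face anonymization. All names and the
# list-of-lists "image" are hypothetical stand-ins, NOT DeepPrivacy's API.
import random

def detect_face(image):
    """Stand-in for a face detector: pretend the face occupies the
    center quarter of the image; return (top, left, height, width)."""
    h, w = len(image), len(image[0])
    return (h // 4, w // 4, h // 2, w // 2)

def detect_keypoints(top, left, fh, fw):
    """Stand-in for the seven facial keypoints (eyes, ears, nose, and
    shoulders in DeepPrivacy), here just seven points inside the box."""
    return [(top + i * fh // 7, left + i * fw // 7) for i in range(7)]

def synthesize_face(fh, fw, keypoints, seed):
    """Stand-in for the generator: the real system is a GAN trained on
    ~1.47 million faces, conditioned on the box and keypoints. Here we
    emit deterministic pseudo-random pixel values."""
    rng = random.Random(seed)
    return [[rng.randint(0, 255) for _ in range(fw)] for _ in range(fh)]

def anonymize(image, seed=0):
    top, left, fh, fw = detect_face(image)
    kps = detect_keypoints(top, left, fh, fw)
    new_face = synthesize_face(fh, fw, kps, seed)
    out = [row[:] for row in image]  # copy, so background is preserved
    for i in range(fh):
        for j in range(fw):
            out[top + i][left + j] = new_face[i][j]
    return out

# Toy 8x8 grayscale "image": background value 10, face region value 200.
img = [[200 if 2 <= r < 6 and 2 <= c < 6 else 10 for c in range(8)]
       for r in range(8)]
anon = anonymize(img)
```

The point of the sketch is the division of labor: only the detected face region is rewritten, which is why the anonymized photographs can retain the march, the banner, and the street while withholding the individual face.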

Woman demonstrating outside Massachusetts State House in support of the passage of An Act Relative to Work and Family Mobility (Driver's License Bill). Boston, 2020. Image by author.

I had two primary motivations for incorporating opacity into my ethnographic practice. First, I wanted to create alternative representations of the political struggles of the illegalized that would allow me to depart from essentializing photographic legacies related to the documentation of “immigrants” or “migration.” Second, and most importantly, I wanted to protect my interlocutors from facing additional digital surveillance.

Departures

In both bureaucratic and ethnographic contexts, illegalized immigrants and other non-citizens must disclose their life experiences in ways that will make their suffering and/or plight legible to multiple audiences. In the case of government entities, it is expected that full disclosure will enable authorities to create “accurate” and comprehensive portraits of individuals, thus facilitating their tracking or, in a minority of circumstances, their legal case. Researchers, including anthropologists, photographers, and filmmakers, may often reproduce the same kinds of testimonial standards and demands for disclosure that border regimes and state agencies require of people crossing borders. The ethnographer is expected to obtain detailed (and often personal) written and visual accounts that will presumably lead, first, to better analysis and ethnography and, second, to more empathy from readers. Anonymizing photographs departs from these norms because it challenges both ethnographic portraiture and the status of the face as an expectation of disclosure and a target of surveillance.

While ethnography can have many aims, the goal of ethnography that is motivated by a feminist and attunement-driven ethos is “not to create empathy with the ‘cultural other’...” (Guzman and Hong 2023, 197). Instead, its aim is to challenge indifference, build “sensory accompaniment,” and create relations of proximity on which solidarity can be built (199).

Members of Driving Families Forward coalition demonstrating outside Massachusetts State House in support of Massachusetts Driver's License Bill. Boston, 2020. Image by author.
Members of Movimiento Cosecha walking during march in support of Massachusetts Driver's License Bill. Boston, 2019. Image by author.

(Re)presentations

In her essay “The Scene of the Screen: Envisioning Photographic, Cinematic, and Electronic ‘Presence,’” Vivian Sobchack draws our attention to the ways that expressive and perceptive technologies of representation—such as the photographic, cinematic, and electronic—have radically transformed our comprehension of ourselves and others (2004, 135). I take Sobchack’s observation as an invitation to interrogate how our representational choices have cemented particular ways of framing and looking at illegalized and other people whose movements are criminalized. Can such digital technologies reshape or be put in the service of a critical, abolitionist, feminist, and decolonial ethnographic practice?

Understandably, using or appropriating tools that rely on machine learning for tracking, inscribing, and enclosing our physical and digital selves may generate skepticism as to whether we can embrace them without endorsing them or aggrandizing their liberatory potential. However, I argue that in limited circumstances these tools can be mobilized to suspend—or potentially reorient—how we conceive of and represent marginalized and/or persecuted interlocutors. Such computational tools can alter (re)presentations of illegalized people and their various struggles for autonomy and push against a common ethnographic motivation or aspiration to depict or present our interlocutors in “unmediated” or transparent ways.

As a “technology of representation” (Sobchack 2004, 138), computational anonymization allows for a temporary refusal of the surveillance conditions under which an illegalized body must protest, offering a reprieve from conditions of digital and physical hypervisibility. Political systems force activists to take highly prominent and visible roles in order to request—and infrequently secure—political recognition. Understood as an instance of the glitch (Russell 2020), the anonymized photo disrupts the usual conditions of looking and allows the illegalized person to access anonymity.

The anonymization of images can also be read as a muting that can facilitate audiences’ alternative engagements with political struggles, marking a departure from regimes of both looking and recognition that center visibility, transparency, and disclosure. Demanding a political change or speaking out against oppressive regimes of surveillance and racial injustice requires exposure and subjection to scrutiny, since “neoliberal expectations of constant presence, physical exposure and public identification” (Korporaal 2020, 33) demand performances of vulnerability.

As part of the Massachusetts driver’s license campaign, illegalized residents organized encampments, hunger strikes, public hearings, and marches throughout the state. The very success of the driver’s license campaign depended on their online and offline public presence over the course of two decades. Rather than serving as “evidence” of immigrant struggle or resilience, the anonymized photographs challenge expectations that the images must contain subjects who are ready to submit themselves to scrutiny in order to have a “truth” or their presence recognized.

Reorientations

Computationally anonymized photographs also extend illegalized people’s offline engagements spatially and temporally, allowing the marches, actions, and activities to live on through the archival record. Given the uncertain afterlives of these images and potential future developments in biometric recognition technologies, it seems imperative for the images to contain this layer of anonymization. Hence, anonymization is a practice of care toward the present and future of these interlocutors and images.

Korporaal (2020) proposes a practice of “intensional activism” (36) to challenge artistic public encounters that reproduce “an aesthetic of poverty and precarity” (37). “In-tensional” methodologies implicate artists and the public in relationships of proximity and complicity that generate solidarity through embodied collective experiences of injury, alienation, or enclosure, creating hybrid modalities of visuality which “re-imagine the visual as a passage for diverse traces, reorientations, absences, opacities and resistances” (43).

Representations of resistance can live through opacity, and such a visual practice can be one that, in the Wujoud Collective’s words, “creates presence through a conscious decision to be absent” (Hamad 2020). By rendering the people in the photos illegible—turning presumably recognizable individuals into composites—I seek to expand the range of presence available to illegalized and other targeted collaborators. In this manner, illegalized people take up space from a position of refusal: refusal to be read, to be identified, and to be terrorized through surveillance and unknown repercussions for demanding basic rights.

Demonstration outside Massachusetts State House in support of Massachusetts Driver's License Bill. Boston, 2020. Image by author.

References

Fallon, Kathy, Giorgos Christides, Julian Busch, and Lydia Emmanouilidou. 2023. “Greek Shipwreck: Hi-tech Investigation Suggests Coastguard Responsible for Sinking.” The Guardian. July 10.

Forensic Architecture. 2024. “Drift-backs in the Aegean Sea.” Forensic Architecture. January 20.

France, David, director. 2020. “Welcome to Chechnya.” Public Square Films. 107 minutes.

Guzman, Elena H., and Emily Hong. 2023. “Feminist Sensory Ethnography: Embodied Filmmaking as a Politic of Necessity.” Visual Anthropology Review 38, no. 2: 184–210.

Hamad, Mario. 2020. “The Absent Image: Resisting the Erosion of Public Trust in Syrian Activists' Evidential Visuality.” In Feminist Visual Activism and the Body, edited by Basia Sliwinska, 62–74. New York: Routledge.

Harwell, Drew. 2019. “FBI, ICE find state driver’s license photos are a gold mine for facial-recognition searches.” The Washington Post. July 7.

Hukkelås, Håkon, Rudolf Mester, and Frank Lindseth. 2019. “DeepPrivacy: A Generative Adversarial Network for Face Anonymization.” In Advances in Visual Computing, edited by George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Daniela Ushizima, Sek Chai, Shinjiro Sueda, et al., 565–78. Cham: Springer International Publishing.

Korporaal, Astrid N. 2020. “Activist Intension: Mona Hatoum and Morehshin Allahyari's Disruptive Bodies.” In Feminist Visual Activism and the Body, edited by Basia Sliwinska, 32–45. New York: Routledge.

Monahan, Torin. 2015. “The Right to Hide? Anti-Surveillance Camouflage and the Aestheticization of Resistance.” Communication and Critical/Cultural Studies 12, no. 2: 159–78.

Monahan, Torin, and David Murakami Wood, eds. 2018. Surveillance Studies: A Reader. New York: Oxford University Press.

Russell, Legacy. 2020. Glitch Feminism: A Manifesto. New York: Verso.

Sobchack, Vivian. 2004. Carnal Thoughts: Embodiment and Moving Image Culture. Berkeley: University of California Press.

Witness. 2020. “Identity Protection with Deepfakes: ‘Welcome to Chechnya’.” September 14. Digital, 90 minutes.