The Tensions of Tech: An Interview with Nick Seaver

Photo by Possessed Photography.

This post builds on the research article “Care and Scale: Decorrelative Ethics in Algorithmic Recommendation” by Nick Seaver, which was published in the August 2021 issue of the Society’s peer-reviewed journal, Cultural Anthropology.

In this conversation, Kristin Gupta sits down with Nick Seaver to discuss his ethnographic engagements with music recommender systems and the people who make them. They discuss decorrelative ethics, gendered labor practices, the temporalities of scaling, and how engineers and scientists understand algorithms to be “human all the way down.”

Kristin Gupta: You begin the article with a vignette about your experience receiving a playlist personally curated by David, a worker at a music recommendation company that intentionally centered a smaller-scale, human touch (but has since shuttered). Did tech workers at other companies, who did not personally make music suggestions, see their creations—data, algorithms, or other objects—as no longer human? If so, at what point? Beyond caring for the platforms they ran, how did they understand the tensions around the human/nonhuman divide as their work was transformed into something machinic?

Nick Seaver: One of the most interesting findings from my ethnographic research (to my mind, at least) was that many of the people building music recommender systems have come to understand their products as very “human” (that is, not as humans, but as informed by humans or evincing a human touch). In contrast to common depictions of algorithms as not-human—coming from critics and boosters of algorithmic systems alike—I found a rather lively local theory of sociotechnicality and an insistence on the idea that humans were essential to algorithmic functioning. It does sort of make sense, if you consider the fact that my interlocutors generally were the people working inside of algorithmic systems. They didn’t need an anthropologist to tell them that their technology was sociocultural—that was pretty obvious. One person even described a recommender system they had built as “human all the way down” because of its reliance on data generated by users!


KG: The case of music recommendation systems and the passion of the self-identified “music lovers” who work on them leads to a fascinating discussion of how people reshape ethical plateaus in a business setting through decorrelation. To what extent does the emphasis on solutions or product deliverables also affect ethical decision-making in these spaces?

NS: I think one limit of my research on this topic is that I focused primarily on engineers and research scientists, and there are, of course, many other people involved in making commercial recommender systems and the platforms in which they’re typically embedded. So, some of the people I spoke with were nominally insulated from product deliverables; but over time, they would often end up in positions that brought them more in contact with those dynamics. In theory, they imagined that recommender technologies could appease many groups at once: helping listeners find new music, helping artists find new listeners, and helping their bosses acquire new users. So in one sense, recommendation itself was a kind of decorrelative technology that promised to mediate between tensions of care and scale. But not everyone in these companies thinks that way.

KG: Throughout the article, you articulate the ways decorrelative ethics are theorized and experienced in different settings (at conferences, in the workplace, in conversations about startups and how to succeed in the tech sector). However, I’m wondering if you ever saw contestations over these ideas, particularly in regard to creating a decorrelated world in which “people could choose freely among their values, with no choices impinging on any others” (520). Did any of your interlocutors ever take issue with these framings or see things differently?

NS: Absolutely! Many of my interlocutors were quite conflicted, especially those who had ended up working for large music streaming services. In general, the way they tried to resolve these issues was to rededicate themselves to helping artists find audiences—designing recommender products that focused on less popular artists. If you’ve ever used Spotify’s “Fresh Finds” feature, that is the sort of thing I’m talking about. But by the time I was doing fieldwork, the idea that large-catalog on-demand music streaming was the telos of music distribution had become a matter of common sense; many of the economic and political issues around that kind of infrastructure were thus rendered unthinkable: how could it be otherwise?

KG: Gender haunts many of these discussions: what labor is considered masculine and therefore centered, versus the feminized (and racialized) work often done invisibly by precarious workers. This is also true regarding scale and startup culture, which seem to rely on ideas of mastery, heroism, and individualism. How does masculinity figure into the relationships between care and scale that you so thoughtfully parse, especially in a field where gender disparity is very real?

NS: In the article, this comes up most centrally in the figure of Ellie, who headed a quality assurance team and described herself to me as a “data gardener,” performing maintenance work that made the products of her mostly male colleagues continue to function. Stereotypically, of course, scale is masculinized and care is feminized; the domineering narratives of startup heroism and expansion are just the most recent iterations of some very old stories, as is the idea that maintenance work is both insignificant and the proper domain of women. Feminist organization studies has helped us demystify this story in business contexts: maintenance work is extremely important to the functioning of organizations, no matter how invisible they try to make it. And the feminist work on care I cite in the paper makes a similar case more broadly: without care work, there is no shared world.

So, reimagining the relationship between care and scale is, as you suggest, going to have some ramifications for the gender politics of labor within the corporation. When we see a company embrace a concept like “care,” then we should immediately look to see who is doing it. In the article, we see it in two places: First, in the work of people like Ellie, leading feminized and racialized maintenance teams that, even if they are not outsourced, remain low in the corporate hierarchy. Second, we may see it embraced by company founders; but when they do it, the work is reframed not as maintenance but as heroism—a kind of strenuous labor to get a company going.

As the dynamics I identify in the article continue to develop, it will be important to see what happens around gendered roles in these companies, and I’m glad that there are more ethnographers taking on this topic!

KG: So much of the tech industry is oriented toward “designing the future.” I’d love to hear your thoughts on this concerning scale. Scaling seems to be so focused on short-term temporalities; on what companies can do in the here and now to become global giants. How does this kind of thinking interact with the broader futures imagined by engineers, designers, product managers, and so on? Are the long-term horizons and impacts of scaling considered?

NS: This is an interesting framing, because I tend to think of scalability as a type of long-term future-orientation, rather than something short-term. But, you’re right that actually existing scaling does not happen in the future (which, after all, is never here yet), but rather in the present. And one thing you see in short-term efforts to be “scalable” are business practices that seem quite irrational in the present, and which only make sense when the right kind of future is invoked: so, for instance, companies like Uber famously lose money while they grow astronomical valuations. To many critics, this seems like evidence of perverse misvaluation; to the people spending the money, it looks like investment in a company that will pay off much later. Of course, that payoff doesn’t always happen (hence the venturesomeness of venture capital), and when it does, it commonly happens in a less “tech” way than you might expect (e.g., an end-run around labor laws, rather than some novel software platform).


I think Anna Tsing’s (2015) arguments about scalability’s externalities are right: as I say in the article, companies achieve the appearance of scalability largely by drawing their corporate boundaries in strategic ways. Unfortunately, this means that when we ask whether they consider the long-term impacts of scaling, the answer is “yes”—but not in terms that would meet the concerns of most outside critics. One of my ongoing research interests in this area is what technical actors do when they try to do the right thing; so for me, the question is less “do they consider impacts?” than “how do they consider impacts?” And, as you suggest, we shouldn’t expect the same answer to this question from all the various people involved in making something.

KG: Ethics are often about creating better worlds, although there are constraints to what is possible in this redefining of values, especially in a capitalist system. For example, you mention Google’s controversial firing of two AI ethicists whose research explored problems in the company’s models. How do you think about these kinds of limits in relation to the ethical plateaus currently being formed (and unformed) in the music recommendation sphere and tech more broadly?

NS: One of the reasons I find the “ethical plateau” concept so compelling is that it captures the simultaneity of freedom and constraint: our sense of what is possible, desirable, and obligatory is necessarily shaped by these conditions, even as we try to be open to what María Puig de la Bellacasa (2017) calls a “speculative ethics.” For people working in software companies, capitalism is of course part of that ground, and it can be hard (perhaps impossible) to reimagine from within the corporate context. You can certainly make the case that a company like Google only started investing in “ethics” once it became clear that ethical problems posed a threat to its public image (and, downstream from that, its bottom line). As an anthropologist, I’m really interested in the people within these organizations who don’t experience their lives as avatars of corporations, but rather try to reconcile what they’re doing with some kind of personal value system. They often encounter limits, and often those limits are framed in terms of profit: Why doesn’t Spotify just hire people to make personalized playlists for all their users? Because it would be unconscionably expensive! People in different positions will play with this limit differently: a company like The Yams disavowed scaling and embraced the idea of just doing things “by hand”; my interlocutors making algorithmic recommenders saw their work as a way to make it easier (and cheaper) to benefit from the judgments of other people; other critics may suggest that we just shouldn’t have personalized playlists in their current form at all.


So yes, while the theory of decorrelative ethics I lay out in this article is primarily descriptive—an effort to help other ethnographers who may encounter similar dynamics in their own field sites—I do think that examining how people try to work on their values, not just within them, can help us develop our own ethical stances as critics and scholars.

References

Puig de la Bellacasa, María. 2017. Matters of Care: Speculative Ethics in More Than Human Worlds. Minneapolis: University of Minnesota Press.

Tsing, Anna Lowenhaupt. 2015. The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins. Princeton, N.J.: Princeton University Press.