Anthropology and Algorithms

In recent years, developments around algorithms and artificial intelligence (AI) have sparked widespread interest and debate. But anthropologists have been exploring this topic for much longer. What can anthropology bring to the table in understanding this buzz? How can we use anthropological concepts and methods to make sense of algorithms and AI? And what implications do these technologies have for our everyday lives, for ethics, regulation, and social justice? In this episode, we explore these questions with Dr. Nick Seaver, Dr. Veronica Barassi, and Alex Moltzau.

Our interviews were conducted in February and March of 2023.

Anthropology and Algorithms via SoundCloud

Guest Bios

Nick Seaver is an assistant professor in the Department of Anthropology at Tufts University, where he also directs the program in Science, Technology, and Society. He has conducted extensive ethnographic research on developers of algorithmic music recommender systems and how they think about their work. This research has been published in Cultural Anthropology, Cultural Studies, and Big Data & Society. He is co-editor of Towards an Anthropology of Data (2021) and author of Computing Taste: Algorithms and the Makers of Music Recommendation (2022). Nick Seaver currently studies the rise of the idea of attention in the context of machine learning systems.

Veronica Barassi is an anthropologist and professor in Media and Communications at the School of Humanities and Social Sciences at the University of St. Gallen in Switzerland, as well as the Chair of Media and Culture in the Institute of Media and Communication Management. Her ethnographic research focuses on the impact of data technologies and artificial intelligence on civic rights and democracy. Her research has been published in New Media & Society, Sociological Research Online, and in The Routledge Companion to Media Anthropology. Veronica Barassi is the author of Activism on the Web: Everyday Struggles against Digital Capitalism (2015) and Child Data Citizen: How Tech Companies Are Profiling Us from before Birth (2020). In 2021, she launched The Human Error Project: AI, Human Nature and the Conflict over Algorithmic Profiling.

Alex Moltzau is a Seconded National Expert in the European AI Office where he works on AI regulation and compliance. A graduate of the Copenhagen Center for Social Data Science (SODAS), he has an interdisciplinary background that includes anthropology, data science, programming, and design. In addition, he holds a Master's degree in Artificial Intelligence for Public Services. At the time of our interview, Alex Moltzau was a Senior AI Policy Advisor at NORA.ai, the Norwegian Artificial Intelligence Research Consortium, where he worked with AI policy in the public sector, governance, ethics, and international partnerships. He also serves as a member of the OECD.AI Expert Group on AI Compute and Climate.

Credits

This episode was created and produced by Contributing Editor Steffen Hornemann, with review provided by Marie Melody Vidal.

Theme Song: All the Colors in the World by Podington Bear

References

Appadurai, Arjun. 1993. “Number in the Colonial Imagination.” In Orientalism and the Postcolonial Predicament: Perspectives on South Asia, edited by Carol A. Breckenridge and Peter van der Veer, 314–39. Philadelphia: University of Pennsylvania Press.

Barassi, Veronica. 2019. “Datafied Citizens in the Age of Coerced Digital Participation.” Sociological Research Online 24, no. 3: 414–29.

——. 2020. Child Data Citizen: How Tech Companies Are Profiling Us from before Birth. Cambridge, Mass.: MIT Press.

——. 2021. “David Graeber, Bureaucratic Violence, and the Critique of Surveillance Capitalism.” Annals of the Fondazione Luigi Einaudi 55: 237–54.

——. 2022. “Algorithmic Violence in Everyday Life and the Role of Media Anthropology.” In The Routledge Companion to Media Anthropology, edited by Elisabetta Costa, Patricia G. Lange, Nell Haynes, and Jolynna Sinanan, 481–91. London: Routledge.

Graeber, David. 2015. The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy. Brooklyn, N.Y.: Melville House.

Gupta, Akhil. 2012. Red Tape: Bureaucracy, Structural Violence, and Poverty in India. Durham, N.C.: Duke University Press.

Mayer-Schönberger, Viktor, and Kenneth Cukier. 2013. Big Data: A Revolution That Will Transform How We Live, Work, and Think. London: John Murray.

Seaver, Nick. 2017. “Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems.” Big Data & Society 4, no. 2.

——. 2019. “Captivating Algorithms: Recommender Systems as Traps.” Journal of Material Culture 24, no. 4: 421–36.

——. 2022. “(Artificial) Intelligence.” Talk presented at The Conference, Malmö, 29 August.

——. 2022. Computing Taste: Algorithms and the Makers of Music Recommendation. Chicago: University of Chicago Press.

Transcript

[00:00 Podington Bear—All the Colors in the World plays]

Steffen Hornemann [00:06]: Welcome to AnthroPod. I’m Steffen Hornemann. Today on our podcast, we’ll be talking about anthropology and algorithms. We’re a bit late to the game, considering the hype around algorithms and artificial intelligence (AI) in recent years. But here's the thing: Anthropologists have been exploring the topic of algorithms and AI for quite some time now. Early last year, I had the privilege of sitting down with three remarkable guests for a series of interviews, each offering a unique perspective from both inside and outside of academia. I talked to Professor Nick Seaver, Professor Veronica Barassi, and Alex Moltzau about the intersection of anthropology and algorithms.

What exactly can anthropology bring to the table in understanding them? How can we use anthropological concepts and methods to make sense of algorithms? And how does this research translate into practice? As we look at these questions, it becomes clear that the development of algorithmic systems comes with big implications for our everyday lives, for ethics, regulation, and social justice. Join us as we discuss these topics.

Let's start with our first guest. Nick Seaver is an Assistant Professor of Anthropology at Tufts University, where he’s also the Director of the Program in Science, Technology, and Society. To get us started, I asked him: What are algorithms actually?

Nick Seaver [01:35]: That is a surprisingly tricky question to answer. It's something that I've written about a little bit, and I can give you a couple of different answers. One is that algorithms are, broadly speaking, a kind of computational operation that turns inputs into outputs, right? There's a technical definition of algorithms that we get from computer science, that is more or less saying, you know—say you've got a deck of cards that's been shuffled up: what's the procedure that you could follow to unshuffle that deck of cards? That would be an algorithm.
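
To make the card-deck example concrete, here is a minimal sketch in Python (purely illustrative, not from the interview): an insertion sort that "unshuffles" a deck. It is an algorithm in the narrow, computer-science sense Seaver describes: a fixed procedure that turns an input (a shuffled deck) into an output (an ordered one).

```python
import random

def unshuffle(deck):
    """Insertion sort: a fixed, step-by-step procedure that turns
    a shuffled deck (input) into an ordered deck (output)."""
    deck = list(deck)
    for i in range(1, len(deck)):
        card = deck[i]
        j = i - 1
        # Shift larger cards right until this card's slot is found.
        while j >= 0 and deck[j] > card:
            deck[j + 1] = deck[j]
            j -= 1
        deck[j + 1] = card
    return deck

shuffled = random.sample(range(1, 53), 52)  # a shuffled 52-card deck
assert unshuffle(shuffled) == list(range(1, 53))
```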

Usually, however, when people use the term algorithm, these days, they're not referring to something simple and straightforward like that; they're usually referring to something like a machine learning system that's used to make recommendations, or personalization—the stuff that you see on social media, on Netflix, and Spotify, and so on. And that's the kind of algorithm that I've been most interested in in my own research. And that's the kind of algorithm that I think most people mean, outside of computer science departments, when they talk about algorithms today.

SH [02:38]: So I was just curious how you landed on algorithms as what you study?

NS [02:43]: Yeah, it's a good question. I think that this is a sector where the terms that people use for basically the same thing sort of come and go in waves. And so I started working on music recommender systems, which have been my primary focus, over ten years ago. And back then, the big term that people were using was big data; that was very soon going to be replaced by data science, which was going to sound better; was very soon going to be replaced by machine learning, which was going to sound better; and then eventually was going to be replaced by AI. Obviously, all of those terms are older than exactly then. But there's this kind of cycling through terms where for some people, the differences between those terms are really significant. It's not significant for me. I don't think it makes a big difference. And I've seen the exact same kinds of systems described as data science, machine learning, AI, algorithms—it doesn't really matter. I will say that my approach to definitions is very anthropological in that I'm not so much interested in finding the right term, like, for me, but I am interested in how people out in the world make sense of the world, what they use to talk about what they're doing.

SH [03:54]: You mentioned that different people use different terms, maybe also different people in different disciplines. So what would you say the role of anthropology is in these debates? What can anthropology bring to the table when it comes to studying algorithms?

NS [04:09]: Sure. So my argument has been for a while now, this thing I've sort of made my name on in this critical algorithms studies field, is this idea that, well, we have this concept of algorithms as being a kind of computational program, something very simple, a simple kind of procedure. That's really not adequate for talking about algorithms in the broader sense of the kind of thing people mean when they say, you know, the Facebook algorithm or something. If you want to look at those big systems it's really important to recognize that those systems are updated constantly. They're being revised all the time. They're being experimented on, they're being changed according to all sorts of business logics, ideas that people have about, you know, what might work, what do we want? And all of that stuff, all of those things that are, you know, why you decide that you're going to build your algorithm this way and not that way, those are all human, cultural decisions. And so I've long made this argument that we're better off thinking about the algorithm as a kind of sociotechnical algorithmic system, which is a fancy way of saying there are people in it, right? It's not just a computer, you don't want to just say, well, how does the algorithm work? Because if you looked at how the algorithm worked by itself, you would miss all of this other stuff about why it works the way that it does.

One example I like to give, which is kind of a thought experiment, is that on Facebook, for instance, people are familiar with the idea that your newsfeed is sorted in some kind of algorithmic way, right, according to some model of what's going to be interesting to you. That newsfeed could change tomorrow: if Mark Zuckerberg, who is still a majority shareholder of Facebook, wanted to, he could say, “You know what, we're going to put the newsfeed in alphabetical order tomorrow, we're going to do it alphabetical by person.” And that would be an algorithmic newsfeed. It would obviously be very different from what we have now. It would almost equally obviously not happen, for some reason. But the difference between those, the difference between the alphabetic newsfeed and the newsfeed that we actually have, is not a difference between algorithms and not algorithms. It's a difference between two ideas about what algorithms ought to do. [06:11] And so my argument has been if you don't understand that, if you don't understand the reason why, or the procedures that would result in us having the newsfeed we have, instead of the alphabet one, then you don't really understand much about why these systems work the way they do. So I think it's really essential for understanding how algorithms work, and more importantly, why they work, to have a kind of anthropological approach to the people who are involved in designing and developing and maintaining these systems.
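
Seaver's thought experiment can be rendered in a few lines of code. In the hypothetical sketch below, both feeds are equally "algorithmic"; the difference between them is two ideas about what a feed ought to do. (The Post fields and the predicted_interest score are invented for illustration; this is not Facebook's actual system.)

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_interest: float  # stand-in for a learned engagement model

def alphabetical_feed(posts):
    """The hypothetical feed: sorted alphabetically by author."""
    return sorted(posts, key=lambda p: p.author.lower())

def personalized_feed(posts):
    """The feed we actually have: sorted by a model's guess at interest."""
    return sorted(posts, key=lambda p: p.predicted_interest, reverse=True)

posts = [
    Post("Zadie", "new album out", 0.9),
    Post("Ana", "lunch photo", 0.2),
    Post("Marcus", "job news", 0.7),
]
# Same inputs, two algorithms, two ideas about what a feed is for.
print([p.author for p in alphabetical_feed(posts)])  # ['Ana', 'Marcus', 'Zadie']
print([p.author for p in personalized_feed(posts)])  # ['Zadie', 'Marcus', 'Ana']
```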

I should say the other option here, right. One other thing anthropologists might do. And this is what some, I think, critical humanistic researchers tried to do, is to carve out a space kind of opposed to algorithms, right? To say that, okay, algorithms are a technical object, and they're doing bad things to culture, or they're doing sort of weird, uncanny things to culture. And what should we do as cultural experts? We should kind of defend culture, we should stick up for culture, figure out what it is and kind of define it against these systems. And I don't find that a very useful thing for anthropology to do, because I think it gives up too much, right? It imagines that these technical systems are only technical, that there's no culture there as well. Not to mention the fact that our own culture is technical as well. So I like embracing this idea that culture and technology are always entangled with each other. And that's true, whether we are in a domain that seems cultural, or technical, right? On both sides, there's this kind of entanglement. And it doesn't serve us well to imagine that we should be defending, you know, humanness against technology, when what it means to be human is, in large part, to be entangled with technology, among other things.

SH [07:49]: I find that interesting, because a lot of what we see in the public debate on algorithms and AI is that it's all new and revolutionary. ChatGPT is maybe one of the recent examples. So what do we make of this hype? Especially because anthropologists have been studying these sociotechnical systems for a very long time, for decades. So how is the study of algorithms maybe similar to, or different from, studies of other sociotechnical systems?

NS [08:18]: I like to think about the study of algorithmic systems as really continuous with these older things. I don't think everybody has to do that. I think that there are certainly things to gain from seeing, you know, what's new here, what's unlike these previous iterations, but there is a lot of energy out in the world dedicated to defining these things as brand new, never before seen, like, you know, they're going to change everything, we got to change every way we think. And so I think it's useful for some anthropologists at least to say, “Okay, well, how are these similar to things that we've done before?” It's not like, all of a sudden we got some sort of, you know, delivery of technology from the future and it's a clean break with the past, right? There's clearly things that are going on here, that we've seen before. So the idea, for instance, that these are sociotechnical systems and not just autonomous technology, it's not just like a machine going off on its own—that's not particularly novel. That's not even for, I think, people working in the anthropology of technology, it's not even a very interesting claim to make, to say that these are social and technical. We want to say, “Okay, they are social and technical. So what?” Right? What do we get out of that? What do we learn? What can we pay attention to?

And I think one of the things we can pay attention to is the kinds of ideas that people have in these systems and how those ideas inform what they do, and how what they do informs their ideas, right? So we can see in these systems, for instance, ideas about what it means to be creative, or ideas about like, what we want these systems to be able to produce, getting built in technical form, right? It was never a necessity or like an obvious thing, that we were going to have AI systems that could generate images, the way that we have them now. The reason that exists is because people thought it was a reasonable thing to do, right, that it was kind of a goal that we could orient towards. We could have oriented towards other goals—not we, them. They could have oriented towards other goals, but they didn't. And that's not technology, that's culture, right? [10:16] That's sort of a kind of set of values, a set of ideas about what's an appropriate behavior. So I guess this is a long-winded way of saying that, I don't think that it's necessary for us to go along with the hype and say, “Oh, we've got to develop brand new kinds of anthropology to deal with this brand new kind of human behavior.” Because people are always doing new stuff.

SH [10:42]: You explicitly draw on quite old anthropological work, I would say. And what I'm thinking of is trapping and animal traps.

NS [10:50]: Yeah!

SH [10:51]: So maybe you can talk a bit about how the anthropology of traps and animal trapping helps you think about algorithms?

NS [10:57]: Yeah, so there's a few ways to think about this right. One is in terms of who the audience is. So you're referring to a talk that I gave at a sort of design and computery conference, where I'm kind of presenting anthropology to non-anthropologists and trying to say, you know: here's a way to think about what you're doing. Here's a way to think about this phenomenon in the world that draws on some of the disciplinary resources of anthropology. And is that going to be useful for anthropologists themselves? Maybe.

But largely, for these non-anthropologists, I think the argument is basically, that there's this long and interesting history of the anthropology of trapping. It's not as big a field as it might be, but the idea is that when people build traps, like animal traps, a mouse trap, or something like that, embedded in the trap that they're building is a kind of model of the prey that they're trying to capture, right? Embedded in the design of a mousetrap is an idea about the physical characteristics of a mouse, about the behavioral qualities of a mouse, about the psychology of mice, right? Like, what they want, what they don't want, what's going to scare them away, and so on. And in order to function, those systems need to sort of adequately reflect this psychological existence of this other creature. People make the argument, for instance, that traps are in some sense, the first robots. They're the first automatic technologies, right? They work in a weird sort of automatic way, but they require animal agency to work, right, you need the mouse to press on the thing in order for the trap to close, and all of that.

And so I have an article that's called “Captivating Algorithms: Recommender Systems as Traps,” which also ended up in edited form as a chapter in my book [Computing Taste: Algorithms and the Makers of Music Recommendation], which basically looks at recommender systems and thinks: Okay, well, how can we think about recommender systems by analogy with these animal traps? Because they are designed to attract and retain users for platforms. [12:54] How can we look at the sort of design of recommender systems and think of them as these kinds of technologies that have embedded in them ideas about the psychology of prey, these kinds of automatic appearances that actually require the agency of the entities that they're trying to capture?
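
The trap analogy can also be made concrete: in a recommender, the "model of the prey" lives in the scoring rule. The toy sketch below is hypothetical (not from any system Seaver studied); its weights encode explicit assumptions about what a listener wants and what will keep them listening.

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    artist: str
    tags: frozenset  # e.g. frozenset({"ambient", "piano"})

def recommend(history, candidates, top_n=3):
    """A toy recommender whose scoring rule embeds a model of the listener,
    the 'prey': shared tags are assumed to predict enjoyment, and a familiar
    artist is assumed to lower the barrier to pressing play. Like a trap,
    it only 'works' if the listener behaves as modeled."""
    liked_tags = set().union(*(t.tags for t in history))
    known_artists = {t.artist for t in history}

    def score(track):
        tag_match = len(track.tags & liked_tags) / max(len(track.tags), 1)
        familiar = 1.0 if track.artist in known_artists else 0.0
        return 0.7 * tag_match + 0.3 * familiar  # the weights are assumptions

    return sorted(candidates, key=score, reverse=True)[:top_n]

history = [Track("Nightfall", "Eno", frozenset({"ambient", "electronic"}))]
candidates = [
    Track("Dawn", "Eno", frozenset({"ambient", "piano"})),
    Track("Thrash Anthem", "Volt", frozenset({"metal"})),
]
print([t.title for t in recommend(history, candidates)])  # ['Dawn', 'Thrash Anthem']
```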

SH [13:11]: And you mentioned that that's something you wrote about in your book that came out last year with the University of Chicago Press. It's called Computing Taste: Algorithms and the Makers of Music Recommendation (2022). So maybe you can tell us a bit about this book and how algorithms figure in your research on music recommender systems?

NS [13:31]: Sure. So yeah, so the book is an ethnography of the developers of music recommender systems. So I did a bunch of multi-sited field work primarily in the United States, with people working largely on the research teams at organizations that were building recommender systems. So these are companies, you know, that offer music streaming, working on sort of data analysis of people listening. And the idea behind the book is that, like I said before, if you think of algorithmic systems as having people in them, in an essential way, then you want to understand how those people think about the objects of their work. And so basically, each chapter of that book moves through a different aspect of the work, you know, how do people building these systems think about musical sound? How do they think about context? The context in which people listen. How do they think about musical genre? How do they think about their own responsibility as people who might be able to shape the way that the music industry works? Or to shape what music becomes popular, to shape people's perceptions? How do they justify the need for their work, right? Why do they say it's important to make music recommender systems in the first place?

And so in some sense, it’s a very traditional anthropological kind of approach, despite being about a fairly untraditional object, because I'm really interested in drawing out how the people working on this stuff, how their work makes sense to them. So it's a little bit different from other books that have come out lately about recommender systems about big data and profiling, in that it is critical, but it's not primarily aimed at talking about how these systems are bad, or how they have harmful consequences. I think they can have harmful consequences. But my main goal here is to sort of understand how they make sense to the people who make them. [15:25] The idea being that if we do that, and we want to sort of call them out for harms or to make more serious critiques of them, it's gonna be very useful for us to have a very strong grasp on the way that the people building these systems understand their work, because that's going to affect how they receive critique, what they do in response to critique.

I think we're in a moment now where a lot of these systems have been critiqued in public for a long time, and people are trying to do, quote, unquote, “the right thing.” And now the new things that they're doing are also posing interesting problems. And we might have been able to anticipate some of the new things that they're doing if we understood better how they think about their work, right? How are they likely to respond to critique, how are they going to operationalize something like, the critique that, you know, your data doesn't have, doesn't take context into account, for instance? What is that going to look like if people are going to try to add context then to meet our critique?

SH [16:18]: And like you said, you focused on the people who make these music recommender systems, who work on algorithms actively. So I'm curious what some of the challenges of studying this ethnographically are in terms of access, and so on?

NS [16:34]: Yeah, so access is the big problem. I think everyone working in this field has experienced this issue, because, you know, it's very easy for a company to say, “No, thank you,” right? “I would like you to not come in here.” And so basically, it takes a long time. I think that's sort of the only takeaway that I really have, is that I spent a lot of time going to conferences when I started. I went to regular conferences in recommender systems design, whether it was in music or not, and also in music and computing conferences. And so after a while, I kept seeing the same subset of people who would go to both of those conferences. And those sort of became my people, those sort of became the people that I was interested in studying, right, those who were building recommender systems, and also just trying to study music computationally. And eventually, one of them invited me into their company, which was nice, because I had asked that company formally earlier whether I could study them, and they had said no. But you know, as many ethnographers of organizations will recognize, sometimes just having the right person vouch for you is all you really need. But it took several years, for me to actually get in to one of these places, to do any kind of ethnography sort of on site.

But I have argued in other places, that I think it's important for us to not mistake secrets for things that we want to know. Because, for instance, it's hard to get into a company, it's hard to get into a meeting of people who are working in a company and designing something. And that might lead you to think that, oh, that's it, then. That's the thing I want to find out—I want to find out whatever that secret is. And it puts the anthropologist in particular in a weird spot, because it kind of suggests that all you're there for is to be basically a spy, right? To go in and to find out something that somebody else already knows, right? Like, if all I want to tell you is, like, the stuff that people talk about in some meeting somewhere, then all my job is is to tell you, like, what that guy knows. Right? What is Brad doing? What does he know? But I think anthropology has more to offer than that. [18:34] I don't think anthropology is just about sort of shuttling out secrets from inside of obscure places.

So one of the things I try to do in my own work, is to kind of draw out some of these ways of thinking that I encountered in the field and bring them into conversation with stuff from the history of anthropology. So one of those, for instance, is animal trapping, the anthropology of trapping, like we just said. Obviously, the people building these systems are not sitting around thinking about the anthropology of animal trapping, but they are talking about trapping a lot. They're talking about capture, they're talking about catching their users’ attention, and so on. And so one thing I can do as an anthropologist is sort of draw what they say into conversations that they were not originally a part of. Other things I could do is think about the kind of mythological register of the way that they talked about the way the world is, right? They tell all sorts of stories about what it's like to be a person in the world experiencing, you know, information overload and how recommender systems might come and save you from that. And so it's important to understand how they think about a world full of information overload, because that's kind of why they make these systems, at least in their own regular telling. So studying these groups of people is very challenging. And unfortunately, I have to say, it's probably only gotten more challenging since I started to do this, because I think a lot of these companies have very little interest in the kind of exposure to public critique that letting an ethnographer in brings.

SH [20:02]: You said that you've been working on this for years now. So to wrap up, I'm curious where you see the study of algorithms going in anthropology? Is there maybe something you're working on right now?

NS [20:15]: Yeah. Well, so I think there's two, there's sort of two questions there. One is what's going on with anthropology of algorithmic systems. And I think one thing that we're seeing that is great is a diversifying of the kinds of objects that people are studying. I mean, it's becoming more of an acceptable thing for people to work on, partly because these systems are just going everywhere, right? They're entering into all sorts of field sites where people might not have thought that there were anthropologists of algorithmic systems before. But now they kind of have to be, because even in your conventional field site you're going to find people doing software work on stuff that you're interested in. So I think there's a lot more ethnographic work happening on the user side of these things, on the effects of them on the world—people will often ask me things like “How are music recommender systems changing how people listen? How are they changing how music is made?” And I don't know, because I'm not an anthropologist of that, right? The ethnography that I did was of the developers. And I think there's, in that question, sometimes an idea that, like, the developers will know for sure what the effects of their technology are going to be, right? That it's just kind of a given, given how they designed the system that that's going to make some kind of effect. And I don't think that's true. I think we need to have more ethnographic work on communities that are affected by these systems, not just on the people who develop them, to see what actually happened. Because I can definitely tell you what people building music recommender systems think about, you know, what's happening in the world, but we don't need to believe them, right? We don't have to believe them about how it's changing listening or whatever. So there's more work happening on that kind of stuff, which is great.

There's more work happening on ethnographic systems, sorry, algorithmic systems being developed in non-Western settings. I think one of the things that we see in work like mine is a view of, definitely like, a hegemonic tech center. And it's very influential. It's, you know, lots of people are trying to emulate it, whether or not they're located in the U.S. or in Western Europe. [22:10] But there are alternatives out there, there are people, companies trying to develop sort of separate new systems that are, you know, distinguishing themselves from these big ones. And people are studying those. And I think that's going to be exciting, too.

For myself, I am now working on a project about attention, and about the sort of meaning of attention in and around technical systems. And part of that is about machine learning. So the generative models that you mentioned earlier, you know, DALL-E, and [Chat]GPT, and all of that. The T in GPT stands for transformer, which is a kind of neural net architecture, which is built around what is nowadays a sort of fundamental building block of deep learning and machine learning, that's called an attention mechanism. And so I'm very interested in the way that this concept of attention has been mobilized in all sorts of different domains, including in the middle of, you know, inside of, software architectures, to make sense of problems that we're facing today as a kind of thing that people are worried about in their personal lives, in the political sphere, online, in relation to recommender systems, and so on. So I’m now working on a project about the various ways that attention is operationalized, in and around computers.
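
For readers curious about the technical object named here: scaled dot-product attention, the building block Seaver refers to, is a short formula, softmax(QK^T / sqrt(d_k))V, in which every position in a sequence computes how much to "attend" to every other position. A minimal NumPy sketch (illustrative, not any production system):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    Each row of the softmax output says how much one position
    'attends' to every other position in the sequence."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over positions
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))   # 5 tokens, 8-dimensional embeddings
out = attention(x, x, x)      # self-attention: Q, K, V from the same sequence
print(out.shape)              # (5, 8)
```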

SH [23:27]: That was Professor Nick Seaver on algorithms as sociotechnical systems and on what we can learn by studying how these systems are developed. We're looking forward to seeing more of his upcoming work on attention and technologies.

When it comes to new avenues for research around algorithms, Professor Seaver mentioned growing work around the communities that are affected by algorithmic systems. That brings us to our second guest. Veronica Barassi is an anthropologist and professor in Media and Communication Studies at the University of St. Gallen in Switzerland, as well as the Chair of Media and Culture in the Institute of Media and Communication Management.

Her work is concerned with how our everyday lives are turned into data that feeds algorithmic systems. In particular, she has explored the datafication of kids' and families' lives. During our conversation, we discuss why anthropology is so important in addressing issues of cultural representation and social justice in debates around algorithms and AI.

Here’s Professor Barassi on how she came to study algorithms and artificial intelligence.

Veronica Barassi [24:37]: I am a trained anthropologist, but I've always worked at the intersection between anthropology and media and communication studies. And I've developed most of my career in departments of media and communication studies, or media and communication sciences. I've been working—I've always been interested in the digital transformations and how digital transformations were being negotiated on the ground by people. I've always also engaged with questions about social inequality, and citizenship in relation to digital and new digital forms. I started off researching how social media were transforming political participation on the ground, working ethnographically with different social movements. And then I became interested in the question of datafication, and how we were all becoming datafied citizens in some ways. And so I launched a project in 2015, which was called “Child, Data, Citizen: How Tech Companies are Profiling Our Children.” And the idea was to look at how, with the rise of big data, deep learning, and the datafication of society, our data traces were being collected from before birth, and how people were negotiating with this datafication of everyday life. And then over the last few years, instead, I have been working more on errors and artificial intelligence, and how well does artificial intelligence read humans? And what are we doing as a society when we realize that, actually, the profiling of humans by AI systems is often inaccurate and biased, and unaccountable? And that is a new project that I launched in 2020, which is called “The Human Error Project.”

SH [26:21]: So I'm curious—you talk about datafication, datafication of family life, of children. So can you tell us a little bit more about what you mean by datafication?

VB [26:31]: So basically, over the last twenty years we have seen technological, social, cultural transformations in the ways in which we collected the data of people, the ways in which we made sense of this data, the machines that we had to make sense of this data. And we've restructured a lot of society around these new cultural values, right? And so at the moment, I can't go to the zoo without downloading an app, because my walk with my daughters needs to be tracked. We have very little choice on when and how we share our data, because the system around us, our schools, our airports, our zoos, our everyday life, has been reorganized to collect as much data as possible, data that can actually then be analyzed and used to produce value. So when we talk about the datafication of society: the very first to talk about it were Cukier and Mayer-Schönberger, in their 2013 book on the big data revolution, where they started noticing that with the rise of big data, we were seeing an increased datafication of every little aspect of everyday life. And between 2013 and 2023, that datafication has dramatically amplified and expanded.

SH [27:53]: And how do algorithms fit in this equation? And maybe—I mean, I talked about algorithms with two other people on the show and I asked them what an algorithm is. So I would like to ask you the same question, maybe not expecting a perfect answer. Is that even a question that's helpful?

VB [28:09]: It's interesting that you ask this question because I've been thinking about this very much. I've been thinking about why we have started to talk so much about algorithms. I think when we use the term algorithm, effectively, what we’re actually doing is trying to make sense of the fact that, over the last twenty years, the ways in which we relate to knowledge have changed. So we have prioritized statistical forms of knowledge, we have prioritized computer calculations of everyday practices over other forms of knowledge, right? I think it's not only the algorithm that we need to take into account, we need to take into account also the system of values, the cultural practices, the infrastructures, that we are creating around these new technologies, right?

SH [29:05]: I mean, some of the things you just talked about in your answer are knowledge, how we make sense of the world. And these are things that anthropologists have been interested in for ages. And in your work you write—I mean, you said that you work yourself across boundaries, but you also write that a lot of approaches to datafication and algorithmic profiling fail to take into account anthropological perspectives and anthropological theory. So can you tell us a bit about what anthropology can bring to the table in these debates?

VB [29:34]: Well, anthropology can bring so much to the table, that is a matter of fact. I think that maybe the article that you're referring to is the one on algorithmic violence and automated systems, right? And it's a chapter that was basically inspired by frustration, to a certain degree, because I was in media and communication studies, in data and critical data studies, in internet studies—in all those social sciences that are connected with the digital transformations I was seeing—and relating to much literature on the topic of automated systems and structural or symbolic violence, and, of course, very important literature on algorithmic bias, on data inequality, and so on. And the chapter that I wrote on algorithmic violence came from, again, the frustration, because I noticed that all this literature was not taking into consideration the fact that anthropologists have been studying automated systems, maybe not as machine learning systems, but automated systems in terms of, like, bureaucratic systems, or the violence of bureaucracy, the violence of structural hierarchies. And the violence of data, you know. I'm referring to seminal works like Appadurai’s work on number and the colonial imagination, Gupta’s work on bureaucracy and structural violence in India, and so on. Graeber[’s] work on violence and bureaucracy. I mean, there was so much out there in anthropology that was not even being cited or considered in all these debates about algorithmic violence. And so that's where I think anthropology is essential.

But there are many other ways in which I think anthropology has a very important role to play. One, for instance: with generative AI, increasingly we're going to be confronted with large language models that are going to be mostly trained on Anglo-Saxon languages and Western cultural values and norms. [31:41] And this is going to be a big issue, because it means that there is going to be, perhaps, an amplification of the digital divide, in terms of what artificial intelligence will be available in different countries. And not only an amplification of digital divides, but also the fact that we're going to have technologies, with whom we are going to interact more, that might be completely offensive in terms of cultural representations or cultural understandings. So I think that not only has anthropology had a lot to say about the debates around AI systems of the last ten years, but also that anthropology has a lot to say about the next steps in AI developments.

SH [32:24]: And I mean, when it comes to these questions, we can also see in your work how you connect this to larger issues of social justice, of inequality, of privilege. And many times this governance or ethics of AI and algorithms is framed in terms of transparency, accountability—do you think that this is enough? Or do we need more for, maybe, a social justice-minded approach to algorithms and AI?

VB [32:49]: [We] definitely need a social justice-minded approach. And I think, you know, we have been heading towards that, you know, like all the debates around data justice that have emerged. Here, I'm referring specifically to the work of the Data Justice Lab in Cardiff, but also other institutes around the world. What we are seeing is that we cannot talk about data and AI development without looking at how it's intersecting with, and amplifying, social injustices. This is something that is really crucial that we understand, and I think that increasingly we are seeing much more research done in this field. And that's very important.

Having said that, a lot of this research still remains a bit ethnographically thin. And part of the problem of ethnographic thinness, when it comes to understanding our data cultures and our AI systems, is that we might have very good principles, but those principles might not apply in everyday life. And so in some of the work that I've done with families, I developed the concept of coerced digital participation to show that even if we have transparent terms and conditions, even after the GDPR, right, the General Data Protection Regulation in Europe, where you get more choice in terms of what happens to your cookies, or you have more transparency in general. And you can give your consent. Your consent is never really informed. And your consent is never really voluntary and meaningful consent, right? Because a lot of the situations that families encounter in everyday life are situations whereby the society around them is datafied, so if they say no, they're going to be completely excluded from important aspects of everyday life. So for instance, if the school of my daughter has a Facebook group or, I don’t know, a particular app, I cannot say, “Oh, no, I'm not gonna join.” [34:51] I could try to persuade them to go somewhere else, and I tried it, but I won’t say no. Or during the pandemic, I had to do a Google subscription for my daughter’s Google Classroom, even if I had been studying the datafication of children for many years. And I think that this is the richness of anthropology: anthropology can actually look at all this tension, this process of negotiation, and the broader picture, by really zooming in on the paradoxes of everyday life.

SH [35:23]: To me, it seems like this is also where anthropology can contribute to this research by looking at how these issues play out in people's everyday lives. And through your ethnographic approach, you can see a lot of nuance and detail about how it plays out in families' lives. I mean, you just described to me how it plays out in your own family life, right.

VB [35:41]: Yes!

SH [35:41]: So in this regard, you also wrote in this chapter that we mentioned before, that you've become a participant observer in your own life. Can you talk a bit about the role of autoethnography in studying datafication and algorithmic profiling?

VB [35:54]: So, autoethnography is a term that I always shied away from. I never really embraced it that much, because I prefer “anthropology at home” and other terms like that: the idea of the anthropologist not going to the field, but actually doing fieldwork in their own cities, in their own home, right? But still, anthropology at home somehow looks to other people[‘s] experiences, and I think that that's very important for anthropologists, you know. That's fundamental, you know. However much ethnography has been appropriated by other disciplines, sometimes we do confront ourselves with people using new ethnographic methodologies that are just self-ethnography, like, really paradoxical uses of the method. And to me, ethnography should be about the encounter with others. It's by looking at others that you really learn things about yourself and about the world, and by participating in places that are not your comfort zone. That's very rich.

Having said that, when I was doing my research, I was working with other families, families from very different backgrounds, very different cultural heritages, and different cities. And when I was working with others, I actually realized that a lot of datafication was very much connected to the sensory, multimodal aspects of ethnographic practice, right? And so I decided to turn the lens also, together with others, on myself and my family, and how we were being datafied, and what I felt when I had to sign terms and conditions that I really didn't want to, and things like that. So that is an element in my research that I learned a lot from, too, because it was basically, yeah, doing ethnography in my own life, but it was not necessarily only about me and myself, right, it was actually about what was happening.

SH [37:45]: To round up: You blur disciplinary boundaries; you work between anthropology and media and communication studies. But you also kind of blur the boundary between academia and the outside world, so to speak. Because you have this TED talk, where you talk about these issues. You've discussed AI ethics with the Irish government, and you consult for companies and nonprofits. Can you tell us a bit about that work, how it ties to your academic scholarship, and why this is something you do?

VB [38:14]: I mean, public work is something that I do but it's also something that has happened to me in some ways. I suppose I see a lot of richness in academic exchange, in conferences, in journals, and in all the fabric of the academic field. And that's why I am doing the job I’m doing and I love it, and I feel very privileged in it. But I also kind of feel that there is a moment, especially for specific issues, especially where we have a perspective that others don't have, that you need to engage with broader public debates. It makes sense, it just makes sense to try to exchange specific findings and specific reflections with the broader public. But it can't be only that. Public work is very good, it's very important. But often, especially at the moment, there is a tendency to push social scientists to have impact, to be part of society. And some of our work is about reflection, it's about reading, it's about exchanging, it's about not being sure, and it's about problematizing. Right? Whilst sometimes we are kind of pushed to find solutions, to summarize our research in one sentence, and things like that.

SH [39:27]: That was Professor Veronica Barassi on the role of social science and of anthropologists in the public debate around algorithms and AI. As we wrap up our conversation on anthropology and algorithms, we also want to try and bridge the gap between academic discourse and reflection, on the one hand, and practical application and solutions, on the other hand. Our final guest, Alex Moltzau, is a Senior AI Policy Advisor at NORA.ai, the Norwegian Artificial Intelligence Research Consortium. Coming from an interdisciplinary background that includes anthropology, data science, programming, and design, he works with AI policy in the public sector, with governance, ethics, and international partnerships. In our conversation, we discuss bringing social science research into policy and tech communities.

Here's Alex Moltzau on the role of anthropology in his approach to artificial intelligence:

Alex Moltzau [40:30]: So, I am a bit mixed in terms of my background. I studied some management, and then I went on to work, building a company. And it was only later that I realized I wanted to go into anthropology. So, I was working for a few years before going back into university, and I had a wish to explore artificial intelligence from day one at university. I think, then, going into sort of more like social data science, my background is—I feel a lot like an anthropologist. But I'm a bit fluid in terms of categorization. And that's good. I really enjoy sort of navigating the field of artificial intelligence with the perspective that anthropology as a field [has].

I work on understanding how artificial intelligence is used within the Norwegian state. And that sounds very abstract. But as a vantage point, the Norwegian state did not have an overview of where they applied artificial intelligence, or where their AI projects were. So what I was working on was to just try to get some overview, and we actually found, in December [2022] or January [2023], upwards of between one hundred and one hundred and fifty AI-related projects in [the] Norwegian public sector. So these types of techniques and technologies are being used for our citizens, and I think a lot of this algorithmic decision-making is being challenged. If you're going to make decisions about welfare, which has been done in the Netherlands, where bad modeling, you know, really caused some people to lose the rights to their children, you know, to be put in jail, to have these really serious consequences for people. Because it is about how we make decisions, and that's highly about people. So from an anthropologist’s perspective, you know, how do the people within these different places think and conceive of the decisions that are being made? [42:31] And how do they work with people, and to what extent are they being pushed into a framework that doesn't necessarily create any sort of, like, equitable situation?

SH [42:46]: So as I understand it, governance and ethics is also a big part of what you do? Can you tell us a bit more about that?

AM [42:53]: Yeah. So for me, it's very much like ethics from a systems approach to some extent, you know, because I could go into like, probably individual cases, but also, like, how is this being governed?

So when the state is, like, “Oh, yeah, AI, that's great,” ChatGPT, and you have, like, politicians even asking questions to Parliament with ChatGPT, you know, maybe for fun, maybe for attention, who knows? But I mean, at the same time, these technologies are being rolled out in a lot of departments, in a lot of municipalities, and also between nations, also from different nations. Something that's being bought in the U.S. and then used in Norway—is that something that we want to use here? Does it function according to the kind of values that we think we have in Norway, you know? So there's a bit of globalization involved, as well.

I think of my work a lot when it comes to fieldwork. And, you know, I often say to some of my colleagues, this is a bit of detective work. Because they almost expect the information to be out there, and I'm not really finding much information about it online. Is that transparent enough, you know? But those decisions are already being made in these different units, right. So, even making them more transparent, is that a good thing? What comes with that transparency?

As an anthropologist, you stumble across a lot of things. And then you're like, why is this not made available? And that's also about transparency, because people don't share the AI-related projects in the public sector. Is that a democratic issue? My approach was to kind of find the people that wanted to share, and just think about how we could get this ball rolling, of, like, let's share these algorithms, you know, let's find out how these decisions are made, and that hasn't really come to fruition yet. I kind of have an overview of algorithms in the Norwegian public sector, of where to locate them, so to speak, but it's not like all of them are shared. So like with the National Library of Norway, they do use Hugging Face, which is a platform to share algorithms. And so you can kind of go and you can download their model. [44:54] But I mean, of course, that requires some kind of proficiency, as well. So when we say transparent, when we say explainable, there are kind of different levels, or different layers, of that. Like, are we talking about explainability to the general population, or to, kind of, technical experts, or to people that want to, kind of, do due diligence or audit algorithms?
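
As an illustration of the kind of sharing Moltzau describes: models published on Hugging Face can be fetched in a few lines. The sketch below assumes the transformers library (with PyTorch) is installed and uses a model published under the National Library of Norway's NbAiLab organization; the exact model name is an example, so check the Hub for current ones.

```python
# A minimal sketch using the Hugging Face transformers library.
# NbAiLab is the National Library of Norway's AI-lab organization on the Hub;
# the model name below is one published example.
from transformers import AutoModel, AutoTokenizer

model_name = "NbAiLab/nb-bert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Nasjonalbiblioteket deler sine modeller.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden size)
```

Even this "transparent" access presupposes Python, a machine learning toolchain, and the ability to interpret the output, which is one of the layers of explainability Moltzau points to.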

So, my other colleagues, they don't always think that I'm doing the right thing. And that's kind of fine. There are a lot of technical and, like, math-oriented discussions around this. But the social aspect of algorithms, the social aspect of artificial intelligence: there's some urgency in taking those discussions. And I think anthropology as a field can contribute in a vast way to understanding how the outcomes of artificial intelligence affect people.

SH [45:57]: So just from a very on-the-ground perspective, you say that you have kind of a strange role, and sometimes not every one of your colleagues agrees. So what is it like to be an anthropologist or social scientist among computer science or more technical people?

AM [46:02]: So my Master's is in Social Data Science, and I also have a separate Master’s in Artificial Intelligence in Public Services. So I think that is fairly common for people with an anthropology background, too: that they're a bit of chameleons. You know, people think that we are certain things, and then we just act in unexpected ways. I'm not always a programmer, but I like to program. I'm not always, like, an anthropologist; sometimes I like data. I fall in between a lot of chairs. And I think that's difficult. And I think, like, being an anthropologist in organizations is sometimes [a] very difficult thing, because a lot of the time people are like, “What are you doing? Why are you doing this?” You know, this is like a strange thing. Because people think like, “Oh, we just have to do this project, you know, with this amount of people and just get it out.” And then, you know, you have someone asking, like, “Why are we doing this in this specific way?” For example, governance, that's extremely abstract, but it's also very specific, right? So if you just put it into these shapes, and you just press, you know, go, that's one thing. But looking at how the people act within the given context, and what are they asking for? What do they want to help them in their role? And what do they want to understand? That's a different question. And it's not necessarily so simple.

SH [47:21]: Do you have any examples of concrete anthropological work or research that you find interesting or helpful in your job?

AM [47:32]: I think the movement in Copenhagen, in computational anthropology, to me is extremely exciting. And I think, you know, working with developers, working with programmers, working interdisciplinary, as an anthropologist, you know, crossing divides. I think there are a lot of really cool examples of that happening in the past. And I think we need sort of anthropologists to float into those communities. So that's kind of what I like about anthropology. You know, you have the strangest, weirdest people coming into some situations and then just asking questions that you're not used to.

SH [48:03]: So in that regard, maybe you can help us understand a little better what computational anthropology actually is, and how it complements a more traditional, more general training in anthropology.

AM [48:15]: I don't have a complete answer to that. But I mean, to me, it's like, having this interest in programming and having this interest in computing, too. I mean, some people study these things from an anthropological perspective, and I think that's extremely valuable. And they don't have to, like, dive into the code or, like, think about that in any sort of, like, concrete way. They can just look at the outcomes. And that's what I said earlier, it's like, I do believe, like, that artificial intelligence would have a lot of adverse outcomes. And I think anthropologists are incredible at studying that.

But some people in the subfield might be interested in also learning how to program. And anthropologists are used to learning new languages. So programming languages, why not? Is that something we should be interested in learning, too—the way people talk, or, like, the way that machines now, kind of, are considered to be talking? We talk to each other, and we interact with machines. And like, you know, you can think about human-machine interaction—sorry, human-computer interaction—and how that works. But I think anthropology has this different vibe, this different approach to it, that I think is fruitful.

SH [49:19]: And I guess the adoption of new technologies is a question anthropologists have been interested in, and not only since algorithms and artificial intelligence.

AM [49:27]: Yeah, I mean, I think a lot of anthropologists remember the discussion of the introduction of stone axes, like, versus steel axes, you know, in Australia. And so this is nothing new for sure. I don't think we should be so extreme about being so buzzworthy. But on the flip side of that, some anthropologists are afraid of being too much into whatever is buzzing, technology-wise. So there's, like, a balance there, I think.

[49:55 Podington Bear—All the Colors in the World begins playing]

SH [49:58]: That was Alex Moltzau on interdisciplinary collaboration around artificial intelligence.

In this episode, we have thought through the role of anthropology in making sense of algorithms and artificial intelligence. A special thank you to our guests, Professor Nick Seaver, Professor Veronica Barassi, and Alex Moltzau, for sharing their expertise and perspectives on algorithms and AI.

You’ve been listening to AnthroPod, the podcast of the Society for Cultural Anthropology. This episode was produced by me, Steffen Hornemann, with review provided by Marie Melody Vidal.

To learn more about our guests and explore additional resources, be sure to check out the show notes for this episode. You can also find us at culanth.org (that’s c-u-l-a-n-t-h.org).

Thanks for listening. Until next time!

[50:57 Podington Bear—All the Colors in the World continues]