AI is an extraction machine. But resistance is possible.

In this Q&A, author James Muldoon sheds light on the real face of AI capitalism and suggests how to fight back.


Artificial intelligence is being sold to the public either as a wonderful technology that will solve humanity’s most pressing problems or as an unprecedented threat that will lead to its destruction. It is neither, argues expert James Muldoon.

James Muldoon is Associate Professor in Management at the Essex Business School, Research Associate at the Oxford Internet Institute and Head of Digital Research at the Autonomy Institute.

In Feeding the Machine: The Hidden Human Labour Powering AI (written with Mark Graham and Callum Cant), he argues that AI is actually ‘an extraction machine that feeds off humanity's collective effort and intelligence, churning through ever-larger datasets to power its algorithms’. The book is the first comprehensive attempt to tell the stories of the army of underpaid and exploited workers who power AI. All to the benefit of a shielded, untouchable elite.

Read our conversation, abridged for clarity.

💡
Citizen Common is our series of honest and open conversations between citizen contributors and potential change-makers. 

Is there a particular Big Tech issue you want us to host discussions and debates on? Mail us at editorial@the-citizens.com to submit your suggestion.

AI is widely perceived, and covered, as an extraordinary technological advancement that could either unlock an era of unprecedented progress or bring about the end of humanity. Why did you choose to tell this story from a human angle instead?

We wanted to tell the story of AI from the perspective of the workers who build it. We found that a lot of these workers, particularly those based in the Global South, were facing horrendous working conditions and were building AI under really appalling circumstances. They were getting low pay, working extremely long hours, and often dealing with very toxic content, which produced a really unsafe workplace. And we thought this was a story that needed to be told.

The book opens with a specific job, that of data annotators, which is extremely common but not well-known. What do they do, and why are they so important? How deeply are they actually intertwined with our lives?

We tell the story of Anita, a data annotator who lives in Gulu, a town in northern Uganda, where she works at what's called an outsourcing center. Leading tech companies like Meta and Tesla, and other big household names, outsource a lot of their more menial, low-skilled work to companies based in the Global South, where data annotation is undertaken by millions of workers.

Data annotation is the process of labeling datasets to be used by AI software. Think of the AI in a self-driving car: for that vehicle to see and understand a street scene, it has to develop an understanding of what different types of objects are on the road. How do you tell the difference between a child, a street sign, and a tree? Someone has to go into the footage and manually draw bounding boxes around each of these objects. One hour of annotated footage that can be fed into autonomous-vehicle software requires around 800 human hours of manual labeling. And every example of AI has a similar story.
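To make the task concrete, here is a minimal sketch in Python of what one frame of an annotator's output might look like. The schema and field names are hypothetical, loosely modeled on common formats such as COCO, rather than taken from any specific annotation tool.

```python
# A minimal, illustrative sketch of bounding-box annotation for one video frame.
# The schema is hypothetical, loosely modeled on common formats such as COCO;
# real annotation tools define their own fields.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    label: str   # object class assigned by the annotator, e.g. "child"
    x: int       # top-left corner of the box, in pixels
    y: int
    width: int
    height: int

# One frame's annotations: every relevant object gets its own hand-drawn box.
frame_annotations = {
    "frame_id": 1042,
    "boxes": [
        BoundingBox(label="child",       x=312, y=188, width=45,  height=120),
        BoundingBox(label="street_sign", x=590, y=95,  width=38,  height=70),
        BoundingBox(label="tree",        x=20,  y=40,  width=160, height=310),
    ],
}

# At 30 frames per second, an hour of footage is 108,000 frames to review,
# which is why a single annotated hour can take on the order of 800 human hours.
frames_per_hour = 30 * 60 * 60
print(f"{frames_per_hour:,} frames in one hour of footage")
```

Multiply a few hand-drawn boxes per frame by a hundred thousand frames per hour of footage, and the scale of hidden labor behind each model becomes clear.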

The prevailing narrative is that artificial intelligence is so clever and so autonomous, while actually it's really heavily based on the exploitation of people…

What is really exemplary of this is the story of an automatic chess-playing machine called the Mechanical Turk, constructed in the late eighteenth century by Wolfgang von Kempelen. Many people will have heard of Amazon Mechanical Turk, the online crowdsourcing platform owned by Amazon on which small tasks can be posted and distributed to a workforce around the world.

The platform Amazon Mechanical Turk was actually named after this eighteenth-century machine: a wooden replica of a human that appeared to play chess automatically. It toured all through Europe playing against the best players in the world, and people were absolutely convinced that this wooden machine could defeat these grandmasters. But, actually, it had a small compartment in which a human chess master sat, moving the pieces through a series of mirrors and levers.

What we're trying to argue is that actually AI works on a very similar principle: we assume that all of this is automatic, but actually, at the end of the day, AI is people.

You also introduced the concept of ‘the extraction machine’ as a way of understanding what AI does.

When we think about AI, we often see it as a mirror of human intelligence, replicating some kind of pattern that occurs in our brain or creating a synthetic version of intelligence. But what does AI really do?

First, it takes all of these things from the world: natural resources like water and electricity in enormous amounts; the physical work of data annotators and engineers; and our intellectual work as well, from the articles, books, and paintings in the datasets, right? Because AI is actually trained on all these datasets, which are, in essence, the collected knowledge of humanity.

The training dataset for generative AI includes almost every book that's ever been written: articles, songs, paintings... all kinds of creative work that have been taken, usually unremunerated and unacknowledged, by these AI companies and fed into datasets as training material for their algorithms.

What we wanted to show in the book was this process of “extraction”: AI is understood as an extraction machine that churns all of this physical and intellectual material into predictions and statistical outputs, which are then used to generate profits for the AI companies.

In 2019 Shoshana Zuboff published her monumental work on how surveillance capitalism lies at the core of platforms, but things seem to have moved on since its publication. What's the difference between Big Tech and Big AI?

There are differences between Zuboff's position and ours, not only in how we diagnose the problem but also in what we think the possible solution might be. Surveillance capitalism really describes a particular business model of technology companies, one embodied mostly by Google and Facebook. These companies give you a free service in exchange for collecting your data and then essentially showing you ads; something like 98% of Facebook's revenue comes through ads. It's an ad business.

AI really expands on this business model because it's not just about ad revenue and it's not just about data. First, AI is a voracious consumer of water and electricity and has an enormous impact on the environment. A simple ChatGPT query consumes ten times as much energy as a Google search, for example. So generative AI is going to be an absolute disaster for the environment.

Another thing Zuboff was saying in The Age of Surveillance Capitalism is that this model of Big Tech was a kind of capitalism gone mad, a Frankenstein-like version of capitalism that had drifted away from the core tenets of capitalism, which she saw in largely positive terms of satisfying consumer demand. I think what we're saying is there's much more wrong with what Big Tech is doing. It's not simply surveillance. It's also this extractive logic of exploiting workers, exploiting consumers, and doing whatever is necessary to increase engagement and turn a profit. Whereas Zuboff thinks it's really consumers who have to push back and fight back, we are saying that it's workers who need to organize together to build collective power, to put pressure on companies to create better working conditions and to make better products.

What we are seeing seems to be a different stage of capitalism and of the relation between workers and employers: we went from user networks to dematerialized services, and now we're back to ownership of resources. Also, many of these companies are, and have long been, "transnational powers". Someone like Musk is not only effectively the editor of a huge platform that he can use to advance his agenda; he also has contracts in defense. This seems to call for a different kind of “resistance”.

I think one of the big shifts with the rise of what I call Big AI is that network effects are not as important as ownership of infrastructure and hardware. We've gone from a discourse where the assets don't really matter - think of Airbnb or Uber, which own no houses or taxis - to one where owning large data centers and digital infrastructure is a necessity. This is expensive. Companies like Meta and Amazon are investing tens of billions a year in new data centers, expanding capacity, and securing access to the energy infrastructure needed to run them. That is a really big shift, because they know that the people who will own and be able to sell the most cutting-edge AI models will need the hardware and the capacity to run them.

Feeding the Machine: The Hidden Human Labour Powering AI by James Muldoon, Mark Graham and Callum Cant, published by Canongate Books.

The relationship between Big Tech companies and governments, particularly the US, has always been a bit ambiguous: "we regulate you / we don't regulate you" versus "we influence politics / we decide not to influence policy directly". And now the ownership and control of AI's resources and potential have become a big geopolitical issue, one of the reasons why AI is the new frontier for confrontation between great powers. It's a private oligopoly, completely beyond any democratic control. How do you see the future of this confrontation among big powers, above our heads?

AI absolutely does raise the stakes of this technological competition between great powers. Looking back, we can see that ever since the detonation of the first atomic bomb, technology has been intimately entwined with geopolitics and great-power competition. But more recently there has been a shift. When you get to the platform era, which largely coincides with the Obama presidency, you see this really intimate connection between Obama staffers and Big Tech.

During that era, America was still very invested in being at the forefront of tech development, including the race to develop emerging green technology, particularly around things like solar and wind. But what you see with AI is a heightened sense of competition with China and a greater sense of AI's potential in military, strategic, and economic terms.

There's certainly a growing antagonistic relationship between the US and China on this topic. China is increasingly viewed in the US as a threat and AI is seen as something that could aid military technology. What really changes from about 2021 onwards is this sense that AI is going to shape the future of geopolitics, military capabilities, advances in the sciences, and other factors that will give countries an economic advantage. The US doesn't want to fall behind China, but it also needs to marshal other middle powers into the service of its hegemonic project. So for US policymakers, countries like the United Arab Emirates, France, and Germany need to come on board as well.

But a lot of it is private money and private ownership.

Yeah, the AI infrastructure is definitely private money. But there's an ambiguous relationship between large AI companies, AI startups, and the US state: sometimes they're marching to the same tune, and other times there's tension between the priorities and strategies of the big companies and those of the state.

You talk about the Californian ideology and the early utopian dreams around Silicon Valley. Now tech executives are convinced they are building products for the common good, but they do not even consider the need for democratic oversight or control. How is their approach changing our societies?

The Californian ideology really grew out of the development of the personal computer: a strange blend of right-wing economics (anti-big-government, anti-taxation) with left-wing hippie counterculture (individual freedom, free expression, and so on).

And really, that dream is dead. The idea that technology and computers will enable individuals to be liberated from the big structures of the state and corporations - I don't think anyone believes that anymore.

But we do have a new ideology that has taken its place, and people like Sam Altman and other evangelists of AI are essentially trying to sell us a new set of beliefs about how technology is going to save us in the future. This is the idea that AI is not just a replica of human intelligence but the start of superhuman intelligence: AI will be so powerful and so beneficial to humanity that it is going to magically enable us to solve myriad problems.

At the same time, there's been a lot of fear-mongering along the same lines: it's so powerful that it’s going to be the end of the world - a kind of AI doomerism. The two go hand in hand, because both are trying to convince us how powerful and how intelligent the software is or could become. But what we really should be doing is focusing on the problems we see in the here and now, and extrapolating from exactly what's happening now to what could plausibly happen in the next few years, rather than projecting decades or hundreds of years into the future. Because as sci-fi has shown us time and time again, the future is never what we think it will be.

When Rishi Sunak organized the AI Safety Summit, we set up an alternative called the People's AI Summit, where we said: actually, the problems are here, and they are about copyright, about resources, about the exploitation of workers. How do you see the future of work now?

One of the main dangers at the moment, if you are just a normal worker in any of a range of industries, is that AI will have an increasingly important role in managing your work. Not that you will be replaced by a robot, but rather that the company will bring in AI software to try to squeeze more work out of you and increase its productivity.

What we're seeing mostly is just surveillance tools: a wide range of what you could call Bossware. Software installed on your computer checks how hard you're working, how many hours, minutes, and seconds you're putting in, and what screens you have up. Increasingly, it also tries to track more ephemeral things like your emotional state, and there are language models whose makers tell bosses that if they feed in all of their workers' conversations, they can predict the mood at the company or produce emotional intelligence scores.

AI is also deployed widely in hiring and firing, in screening CVs, and so on. There are a lot of risks and dangers in this. Workers are not consulted; they often don't even know that this software is being used to monitor their work and to inform company decisions about who is most productive and who gets to stay. We should be focusing on these everyday concerns, rather than on the idea of some autonomous AI system launching nuclear weapons at humanity. This issue is happening to us right here and now.

Another very interesting aspect is the pattern of new colonialism at play.

Something that was striking throughout our fieldwork for this project was how technology was being used to mirror older colonial patterns of power, not only bringing them back but also strengthening them in many key ways. To start with, all of this work has been outsourced to various locations in the Global South: East Africa, Latin America, the Philippines, India, Pakistan, and so on.

What we saw was a bunch of really highly paid, usually white executives at offices in San Francisco or elsewhere, who sent out the work to these outsourcing or distribution centers, primarily to brown and black workers.

It is essentially recreating the conditions of a digital sweatshop, where extremely poorly paid people work extremely long hours doing grueling, mind-numbing work with no future and no career progression. The executives are getting paid around $300K a year, and they're fighting tooth and nail to pay the workers at the bottom of the pyramid as little as possible.

So, yes, you're part of the empire; yes, you're part of this new technological regime. But the kinds of work available to you, and the terms on which you can do it, completely deny your agency and dehumanize you. What really struck us was hearing the stories of the workers at the end of these networks, who have their own dreams, which are crushed by the system. They want to be entrepreneurs; they want a say in how the technology is developed; they've been to university; they've done degrees in computer science or business and management; they want to follow their dreams, to start something new, to do something cool and interesting with their friends. And all of those opportunities were denied to them, because the only thing they were good for, according to the most powerful people in these networks, was labeling datasets - day in, day out, with no end in sight.

The book gives those workers their humanity back by connecting them as parts of the same assembly line, each with doubts and moral questions, each struggling with what they're doing. Is this common humanity the root of a pushback?

Yes, every chapter deals with a different worker in the supply chain, and we do try to center their voices and experiences. You're right to point out that different workers are exploited in different ways. Some of them are quite privileged, like the machine learning engineer on a very high salary in London. But they're all connected: decisions they make about how their model is trained can affect people on the outskirts of a town in Uganda and can have very real implications for how their work is organized. And these tensions and antagonisms are the start of forms of resistance.

What can workers do to resist?

The main thing is transnational solidarity. We need people from different countries standing up for each other, participating in and sharing each other's campaigns. We also see a really important role for civil society actors: people in organizations that try to hold companies to account and get them to do better.

And we also see a potential role for worker-led organizations. It’s increasingly difficult with large-scale infrastructure projects like AI, but I think it's still a possibility. And of course, we need to see all of these struggles as connected: the rise of ethno-nationalism, fragile masculinity, and capitalism on steroids is connected to how AI is getting built and deployed across the world.

💌
Help us to reach new audiences by forwarding this to friends and family.