Standing in the dense heat of a Kenyan Business Process Outsourcing (BPO) centre, Mark Graham witnesses a harrowing reality. Rows upon rows of data annotators sit hunched over their screens, their backs stiff and their eyes reddened from 14-hour shifts. The air is thick with the hum of old computers as the workers battle sleep deprivation. This is the human labour behind the seemingly innocent interface of artificial intelligence (AI).
Graham, Professor of Internet Geography at the Oxford Internet Institute and a Senior Research Fellow at Green Templeton College, has spent years studying how digital technologies reshape global labour and inequalities. As a policy adviser to bodies such as the Global Partnership on AI and the G20 Task Force on Digital Technologies and Decent Work, he has seen firsthand how these systems are built and sanctioned.
Drawing on thousands of hours of fieldwork, Graham distils the experiences of the humans fuelling AI in his book Feeding the Machine: The Hidden Human Labour Powering AI, co-authored with James Muldoon and Callum Cant. His work exposes how global AI systems depend on an invisible workforce to satisfy the industry’s insatiable demand for data.
In his conversation with NU-Q Views, Graham reflected on this exploitative production system and how he personally grapples with the ethics of using AI in academic spaces.

- What brought you to write a book about this topic?
For the last seven years, I’ve been running a project called Fairwork. We score companies against minimum standards of decent work out of 10 to pressure them to make changes. Initially, the project focused on digital labour platforms. Later, I thought we could apply it to the business process outsourcing sector I’d been studying, which had been integrated into the production network of AI.
We piloted it with one company in Kenya that marketed its services as ethical AI. They gave us full access, and what we found was shocking and distressing. Workers were in tears, and you could feel their stress and anxiety. At that moment, my two colleagues and I realized we needed to do more than just publish an academic report. We needed to tell the bigger story: that when you use AI, you’re interacting with a whole production network of people being extracted from in different ways.
- Were the Business Process Outsourcing centres receptive to the Fairwork rating system?
An average company in the sector is hesitant to allow the kind of access we need. That’s why, when we started the Fairwork rating project, we focused on platforms. A platform has a simple operational model: There’s a consumer, a platform, and a worker. If you’re a consumer using Uber, the worker is also using Uber, so there’s nothing opaque in that system.
But with AI, you have no idea what the production network looks like. So, instead of just scoring companies, we’ve created a certification scheme for the consumer-facing company in the supply chain. We certify them if they give us access to their suppliers. That’s how we reach the firms that actually employ the workers.
- How do you make sure that the rating system is both accurate and broad enough to encompass multiple countries?
That’s one of the most challenging things we’ve had to grapple with in the project. When we started seven years ago, we piloted it in India and South Africa. Then we expanded to Germany, Indonesia, and Chile, which had extremely different environments.
However, we didn’t want to set different standards. There’s no reason why a worker in Germany deserves a higher standard than one in South Africa or India. We’ve maintained from the beginning that these should be universal standards. That said, universal standards can be contextualised in different ways. For example, the living wage differs from country to country, but conceptually, every worker deserves one.
- Existing technology, such as the internet, is reliant on the extraction machine. Why does the Fairwork project only rate the production networks of AI?
We used AI as the entry point to address much broader digital production networks that have low-paid data workers somewhere in the supply chain. In fact, the evaluation framework we’re using can apply to any sector of the economy. We could do the same study to score hotels based on the conditions they offer to cleaners or security staff. It’s simply a way of holding companies accountable against universal standards. We chose to do it in the AI data work sector because it’s so opaque, hidden, and hard to understand.
- Why is there such a dearth of information accessible to the public on the realities of the production network powering AI?
With a large language model (LLM), only the digital interface is visible. There is no way of understanding what’s behind that screen. That’s why it’s so difficult to even know what questions to ask.
That’s compounded by how the sector runs. There are extreme levels of opacity: throughout the supply chains, companies have nondisclosure agreements with one another. Many data annotation and labelling companies aren’t even told what their work is for. They’re given instructions like, “Label 10 million images this way by this date,” without being told what the labels will be used for; maybe it’s for a new Apple product, for example.
- Last week, Amazon Web Services (AWS) suffered a major outage, disrupting a large part of the internet. How does this illustrate the monopolisation you address in your book?
When something like AWS goes down, it sheds light on the critical position of that company in the digital economy. For a government, or for a big university, it’s very worrying that one big American company has the power to take down so many essential services and parts of the economy. There are moves in different parts of the world to develop local sovereign infrastructure and sovereign AI, but that’s very difficult because of the opacity of the chain and the vast amount of resources needed.
- Do you think AI can reach human levels of thinking?
I would never say it can’t reach that level of thinking, because advancements in technology and AI will take us to unpredictable, fascinating, and terrifying places in the next decade or two.
Right now, I think it’s misguided to compare artificial intelligence to human intelligence. AI, as we know it, is basically systems that try to predict the most likely thing they should be saying.
One might say human brains are also, in some ways, trying to predict the next thing we should be saying. So I’m not in the camp saying it is just a smokescreen and there’s nothing intelligent to it. It’s not human intelligence, but that doesn’t mean it’s not a different form of intelligence.
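(To make Graham’s point about prediction concrete, here is a purely illustrative sketch, not drawn from the interview. The candidate tokens and their scores are invented; a real large language model derives such scores from billions of trained parameters.)

```python
import math

# Illustrative only: a language model scores candidate next tokens and
# emits the most likely one. The scores below are made up for the example.
def most_likely_next_token(scores: dict) -> str:
    # Turn raw scores into probabilities with a softmax.
    total = sum(math.exp(s) for s in scores.values())
    probs = {token: math.exp(s) / total for token, s in scores.items()}
    # "Prediction" here just means picking the highest-probability token.
    return max(probs, key=probs.get)

# Toy example: what might follow "The cat sat on the ..."
print(most_likely_next_token({"mat": 3.1, "roof": 1.4, "moon": 0.2}))  # prints "mat"
```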
- In your book, you mention that the general consensus is that AI cannot reach human levels of creativity because it does not have a subversive role in politics. AI has recently been used for political campaigns. Do you think this is representative of AI as an expressive agent?
I think AI can be creative, insofar as it can create genuinely new things that are unexpected and unanticipated. But the difference between calling that creativity and when a human being creates something genuinely new is intentionality.
When a human creates, there’s a level of sentience, direction, and purpose embedded in that creativity that isn’t there when an AI creates something new.
One of the things that drives humans to be creative is the sense of pleasure and wonder we get when we hear a new piece of music that makes our hair stand up and makes us happy. AI won’t experience that. It can create new things and be creative, but that creativity comes from a very different place.
- What advice do you have for students and teachers who are contending with the development of AI in the context of higher education?
We’re grappling with this in my department as well. As a professor, I don’t want to spend a few days reading a 15,000-word thesis when I know that ChatGPT created it rather than the student I’m supposed to be evaluating.
We can drill into students that there’s no point in getting an AI to write all their essays for them, but there will always be some who still do it. That’s why we are going to have to radically rethink the model we’ve used in academia: we have to move to forms of learning and assessment that don’t allow for that, such as in-person learning.
- In a system as opaque and complex as you claim, how can students, as primary consumers of AI, use it mindfully and contribute to changing the system?
I think it’s very difficult for consumers to make a change at an individual level. That’s why we started the Fairwork project: to create certifications that empower consumers. Collectively, consumers can make interventions through student groups, classrooms, associations, and unions. On a higher level, students can put conditions on purchasing by demanding improvement. Companies might not listen to one group, but if enough groups around the world start doing this, companies absolutely will. It has to start somewhere. As individuals, there’s very little power, but collectively, we have all the power in the world.
Editor’s note: The text has been edited for clarity.
