New technologies always come with trade-offs and unanticipated consequences. The more powerful the technology, the greater the potential for disruption. We’re still in the early stages of accessible AI tools, but we’re already seeing profound rippling effects. In this eye-opening and important book, Madhumita Murgia investigates some of those effects in a global tour of AI’s invisible side.
Murgia is an experienced tech journalist, currently the Financial Times’ first-ever Artificial Intelligence Editor and previously an editor at The Telegraph and Wired. That level of seniority brings access to the high-profile names in AI, but they are not the focus of the book. Instead, the opening chapter finds us in Kenya, where thousands of people are employed in the “ghost work” of training AI systems. Cheap labour provides the millions of data points that AI needs to spot patterns. Workers might watch dashcam footage, for example, tagging and classifying everything – we don’t want our self-driving cars slamming on the brakes for a passing cloud. Machine learning is not autonomous; it relies on the painstaking and repetitive work of a vast global precariat.
This wave of new jobs in AI has lifted people out of poverty and given them a steady income – for now. The catch is that the whole point of the work is to train the computer to identify things by itself. Every click brings the workers’ own obsolescence closer, and then what?
Nor is the work itself always as benign as watching traffic. Meta recruited young Kenyans who found out on their first day that their “job in AI” was to watch flagged content on Facebook: they would be spending eight hours a day tagging child abuse and violence so that the computer could learn to recognise it. Breaks were insufficient and there was no mental health support. Workers who tried to unionise to improve matters were fired, and then the whole facility packed up and left as quickly as it had appeared.
Not all of this work is unskilled. The book mentions out-of-work doctors who left Syria as refugees, who put their skills to use making diagnoses remotely. Medical AI offers one of the most positive stories in the book, as the author talks to doctors in her native India who have seen the benefit with their patients. Interpreting x-ray imagery is a highly specialist skill, and x-ray technicians are in short supply. Even if a remote clinic or hospital has an x-ray machine, they may have to wait for days for someone to interpret the results and make a diagnosis. Today doctors in India are making those diagnoses through an app, a life-saving innovation that has improved health outcomes for those at the margins.
This is, unfortunately, not the norm. AI generally isn’t deployed for the benefit of the disadvantaged. Chapters describe how facial recognition fails ethnic minorities, or how Amsterdam’s city authorities used an algorithm to identify the young people most likely to commit crimes, then targeted innocent teenagers with police interventions so intrusive that it became something close to a self-fulfilling prophecy.
Another system, in Argentina this time, tasked AI with identifying the girls most likely to become pregnant as teenagers. This was for social workers rather than police, though it was still fraught with complications. It was sold to conservative politicians as a way of reducing abortions, but it ostracised girls in a manner reminiscent of Nathaniel Hawthorne’s novel The Scarlet Letter – while, as usual, ignoring the male half of responsibility for underage pregnancy. This kind of system is useless without caring and compassionate people on the ground, building trust and valuing girls as individuals, not as data points on an index of vulnerability, interpreted by a computer with no grasp of lived reality. “There is an abyss between human experience and artificial intelligence,” as one interviewee says.
The firm behind this particular scheme was Microsoft, buffing its reputation with a programme aimed at vulnerable girls in slum communities while also using them and their data to train social work software that it can sell to richer governments elsewhere. This too is a recurring theme – harvesting masses of data where it is cheap and where data laws are more relaxed. It’s a form of extraction, and a new kind of “digital colonialism.”
In all of these stories, Murgia talks to the people affected, not the engineers and coders in their offices in Silicon Valley. It’s an impressive feat of reportage – just finding some of the case studies must have taken a heroic effort. It’s impressively global, with chapters on a wide range of issues. There are women affected by deepfake porn, activists fighting China’s surveillance state, artists teaching people how to use dazzle camouflage make-up to thwart facial recognition. We meet a man who was tasked with using ChatGPT to do as much theoretical harm as possible, so that the developers could put better safeguards in place. (He developed a new nerve agent and found companies willing to produce it, which met the brief.) There’s a coder who was working on self-driving cars until he realised he was making things easier for drivers, but not making cities any more liveable. He switched to developing an app that allowed Uber drivers to claim what they’re owed, as the platform systematically underpays them.
One of the takeaway messages of my own book on climate change and race is that events unfold in a historical, geographical and political setting. The world is unequal, and climate change reflects and exacerbates those inequalities. The same is true of AI – released into an unequal world, it will by default reflect and entrench existing racism, sexism, and inequalities of income and power. Unless it is specifically designed to do otherwise, it will serve the wealthy and powerful.
Murgia confesses that she was less optimistic about AI on finishing the book, though her answer isn’t to slam the door on one of the most powerful technologies ever developed. She sets out a series of questions at the end for ensuring that AI is deployed with fairness and transparency. Her list includes decent wages for global data workers, clearly alerting people to the role of AI in products or services, and the right to a human review. There’s an urgent need for the law to catch up to AI, as shown by the recent Grok scandal. How can AI be used to reduce inequalities, and how will it be made accessible to those who need it most? Who is accountable when it goes wrong? The answers to these sorts of questions are likely to determine whether AI is broadly positive, or whether it becomes – like nuclear weapons – a human invention we can’t undo.
- Code Dependent is available from Earthbound Books UK