Our Voices

Atlas of AI: Examining the human and environmental costs of artificial intelligence

According to author and scholar Kate Crawford, the term “artificial intelligence” is a misnomer.

“AI is neither artificial nor intelligent,” Crawford said. “[There is an] enormous environmental footprint – the minerals, the energy, the water – that drives AI. This is the opposite of artificiality. It’s profound materiality.”

During Robert F. Kennedy Human Rights’ virtual book club on November 7, Crawford highlighted the tangible consequences of AI, including environmental ramifications, exploitation of underpaid laborers, and discrimination within the criminal justice system.

A longtime AI researcher, Crawford has been studying the social and political implications of data systems, machine learning, and AI for two decades. Her most recent book, the award-winning Atlas of AI, describes how artificial intelligence systems are made and maintained – often at significant human and environmental cost.

Joining RFKHR’s Chief Operating Officer Michael Schreiber for the conversation about human rights and technology, Crawford outlined the inspiration behind her book and its relevance today amid the explosive growth of generative AI.

“Many people think of AI as being the stuff of science fiction,” Crawford said. “But actually…AI has a really far-reaching impact on human beings and on the planet. So, what I wanted to do was go and figure out what AI is ‘made of,’ in the fullest sense.”

That mission took Crawford on a journey around the world, from mines where minerals are extracted for the construction of data centers to sites where human laborers work to prepare data. Explaining the creation of AI as a “three-part taxonomy,” Crawford identified data, labor, and natural resources as the key ingredients in building AI systems.

Each of these areas is rife with human rights concerns, as well as environmental costs. Crawford pointed to the increased water consumption and carbon footprint required to create AI models.

“Depending on the study, it’s anywhere from 14 to 50 times more computationally intense to run a generative AI large language model,” Crawford said. “We’ve seen that all of the major tech companies building generative AI have increased their water consumption almost up to 40% in a year, threatening the groundwater and drinking supplies of entire towns.”

Artificial intelligence has also created new avenues for the exploitation of low-wage workers. Crawford discussed a new category of human labor built around reinforcement learning with human feedback (RLHF), a technique in which people review and correct a model's outputs.

“RLHF is the ‘secret sauce’ behind ChatGPT,” she said. “You have people who are essentially prompting these models, seeing if the answers are problematic or incorrect, and then trying to fix them. That is an enormous amount of workers, generally in the global South, often being paid well below poverty levels.”

While these click workers are subject to low wages and workplace abuses, the individuals actually creating AI algorithms represent another concern: codifying human biases into supposedly objective systems.

Drawing on examples within criminal justice, Crawford explained how flawed AI can lead to racist and discriminatory outcomes in sentencing, interactions with police, and life post-incarceration.

“We can think about ways that AI systems have been used inside the courtroom to try and give a single number of risk, which was the objective of the COMPAS system, to defendants… which was shown to be based on a biased algorithm,” Crawford said.

Despite its “patina of neutrality,” AI is in large part formed by the training data that fuels it and the institutions that implement it, Crawford said. And the convergence of those factors can have horrific consequences if we aren’t careful.

“At every step of the way we have to ask, whose civic space is being defended?” Crawford said. “Whose rights are being recognized? What forms of discrimination are being calcified into technical systems?”

Ending the conversation with a call to action, Crawford reinforced the need for further study and collaboration.

“As someone who has been focusing on questions around human rights, equity, and AI for a long time, I still think it’s important that we drive research and ask those hard questions. Now it’s about all of us working together, because we’re in this, and we’re at an extraordinary inflection point, and I think the implications will be very far-reaching.”

Learn more about the social justice implications of AI at the 2023 RFK Compass Summit on AI, Ethics, and Investments.