Book Review: Is Artificial Intelligence Racist?  The Ethics of AI and the Future of Humanity by Arshin Adib-Moghaddam

[1] Though a self-proclaimed friend of Artificial Intelligence (pp.31,119), SOAS University of London Professor of Global Thought Arshin Adib-Moghaddam is concerned about racism and sexism creeping into the algorithms governing emerging AI technology (pp.3-4,48-49).  This is an issue that is getting some attention among experts (notably in another new book by Calvin Lawrence, Hidden in White Sight, which I’ve reviewed for the Newsletter of the AI and Faith network), but it demands more attention among church leaders and ethicists.  Adib-Moghaddam stresses the need to develop shared ethical standards with which to supervise AI systems.  Indeed, he advocates the creation of independent ethics boards to supervise AI technology (p.14).

[2] Adib-Moghaddam points out problematic algorithms such as COMPAS, which is used to predict which convicted criminals are most likely to re-offend.  Studies indicate disparate results for Black offenders and white offenders (p.5).  The author notes that Apple’s Face ID feature, designed to allow one’s phone to be unlocked by facial recognition, regularly misidentifies users with darker complexions (p.15).  Similarly, Microsoft’s Face API, used by Uber to verify the identity of its drivers, also had difficulty discerning the identity of individuals with darker skin tones (p.62).  Adib-Moghaddam also discusses a 2016 international contest which was judged solely by an algorithm and produced 44 winners, almost all of them white (p.57).

[3] The use of algorithms by many U.S. police departments in their “predictive policing” practices results in Black people being much more likely to be stopped or arrested than white people.  In health-care services, Black people are more likely to be denied life-saving care.  For example, while fair-skinned people are considered to be more at risk for skin cancer, the five-year survival rate is much lower for Black patients than for white patients (pp.62-63).

[4] Racism in the algorithms used for banking and job applications is also noted.  For example, those marginalized in society are subject to more data collection than economically secure groups.  Consequently, the poor receive more scrutiny and surveillance from AI systems and the institutions those systems serve (pp.37-38).

[5] The author does a good job explaining the difficulty of discerning who is accountable for biases in the algorithms (p.23).  And he correctly notes that tech giants use this ambiguity as an excuse to avoid ethical regulation of their businesses (p.132).

[6] When changing his company’s name from Facebook to Meta and creating a metaverse platform, Zuckerberg asserted that the physical world and the digital world are increasingly overlaid.  In Adib-Moghaddam’s view, Zuckerberg has effectively divorced himself from the real world, using technology in quest of a “perfect world” not unlike the one the Enlightenment sought (pp.42-43,64-65,121-122).  Adib-Moghaddam contends that this vision is Hegelian (p.81).  In this perfect world, sameness is a virtue (p.43).  But undermining diversity makes those in the minority an “other.”  It also places barriers on creativity (p.44).

[7] Adib-Moghaddam further explores this stress on sameness, coupling it with analogies between superintelligent AI and Nietzsche’s Übermensch, both of which take the place of God in modern and contemporary thought.  But as it was for Aryan Germany, so it is for our AI-ethos – God is white!  This, the author argues, is what led Enlightenment thinkers to posit the “otherness” of non-Europeans (pp.44-48,86).

[8] Adib-Moghaddam notes that the metaverse can also be interpreted as the logical evolution of the Industrial Revolution, creating new needs and desires among the population (pp.40,67).  In expanding capitalism from the material world to the virtual world, AI technology buttresses and furthers the stratification of society into rich and poor (pp.67,70).  He contends that the algorithms of AI serve capitalist aims well, effectively stimulating consumption desires and speeding up the transactions required to make consumption possible (p.68).  As long as AI research yields an ideology dedicated to maximum productivity, it will not contribute to the quality of life (p.124).  As such, the digital economy seems to be imperialistic (pp.77,91).  It undermines not only individual sovereignty but also national sovereignty (as illustrated by the need for the U.S. to negotiate with Elon Musk on some issues) (pp.98-99).  In that spirit, Adib-Moghaddam compares AI technology’s imperialism to the dynamics of European colonialism (p.100), which was a key component in the origins of racism and slavery.

[9] In the closing pages, the author calls for a ban on AI-automated weapons (pp.113-114).  Other concerns are raised about AI and its use, including how it invades privacy and how it might be used to implement psychological strategies of social control (pp.115-117).  To tackle all these problems, the author adds that we will need to develop AI algorithms charged with poetic love and empathy (p.124).  Here I join with neuroscientist and tech entrepreneur Jeff Hawkins, who contends that AI machines need to be patterned more after the fashion of the human brain’s neocortex, which includes the prefrontal cortex in the frontal lobe (“What Intelligent Machines Need to Learn From the Neocortex,” IEEE Spectrum, June 2017).  For the prefrontal cortex is the part of the brain which participates in directing our ethics, making us empathetic.  It is also the seat of our spirituality.  Were this technology to become widespread, it would contribute to the present debate noted by Adib-Moghaddam on whether AI can consciously act (pp.25-27) and whether moral status should be attributed to it (pp.27-28).  An AI governed by something like the frontal lobe of Homo sapiens would be less of a threat to human well-being.

[10] Personally, I wish Adib-Moghaddam had provided more data and specific examples of the racism embedded in AI algorithms, as well as more detail on specific solutions.  The previously noted book on racism and AI by Calvin Lawrence offered more details along with specific proposals for remedying abuses, and so I commend that volume to your consideration as well.  But the philosophical and economic analysis in this book of how present trends in AI exacerbate racism is so thought-provoking that Lutheran ethicists and the ELCA as a whole should be prompted to begin grappling with the issue of racism and AI.

Mark Ellingsen 

Mark Ellingsen is Professor of Church History at the Interdenominational Theological Center.  He is the author of over 400 published articles (several on the abortion controversy) and 26 books, most recently a book he co-authored with Civil Rights leader James Woodall, titled Wired for Racism? How Evolution and Faith Move Us to Challenge Racial Idolatry (New City Press).