Editor’s Introduction: Ethical Considerations on Artificial Intelligence.

[1] This issue of JLE showcases three excellent articles about Artificial Intelligence. These articles explain how generative Artificial Intelligence works, describe some of the current ethical challenges it raises, and offer frameworks for building guidelines for our use of this new technology.

[2] The first essay, written by William Rodriguez, begins with a calming introduction that reminds readers that we need not be fearful of Artificial Intelligence, but neither ought we be ignorant about it. He explains that as human beings, as tool makers, we are created by God and are also creators in our world. He also reminds the reader that patriarchy, racism, and greed are present in the way we move and act in the world. The ethical issues he brings to the fore in his article concern how Artificial Intelligence is being used in ways that harm women and minorities and exploit workers.

[3] The examples Rodriguez presents are chilling. The creation of artificial girlfriends who are totally deferential to the male user and the creation of deepfake pornography that uses the images of real women without their consent may be news to many readers. The current use of generative AI to profit from deceased musicians and to eliminate the positions of creative writers and actors may also be shocking. That said, the solution is not to ban or ignore the tools of generative AI but to think deeply about how we use them while also seeking to be more fair and just in our daily lives. Readers will come to see that the issues involved are issues with our society and our thinking generally. It is to the philosophy of technology that we must then turn.

[4] Trevor Sutton’s article thus asks readers to think deeply about the philosophy of technology. He explains that looking at our technology is like looking in a mirror: the issues that come to the fore are our own. When we examine our technology and how we use it, we are examining ourselves and our values.

[5] Sutton compares optimistic and pessimistic philosophies of technology. This article requires a slow, close read. But I am hopeful that the concerns raised in Rodriguez’s article will compel readers to take the time to think deeply about the philosophy of technology so that we can go forward in our use of AI with the values we hold dear. Sutton encourages a theo-centric philosophy, a ‘philosophy fed by the springs of faith.’ The key here is to think of humans as co-workers with God. We are not self-sufficient, and the world we live in is not simply a world of resources for us to use to amass wealth for ourselves. Remembering our call as stewards of creation, and as co-creators with God, will help us as we move forward to create rules and guidelines concerning artificial intelligence.

[6] The third essay, written by Marcus Schwarting, explains how Generative Artificial Intelligence works. This article requires slow, careful reading for those who are not in the industry, but this is essential knowledge. Some fears of generative AI involve the concern that these bots are conscious and malicious. Schwarting explains how they actually work in a way that is understandable to the lay reader. Importantly, the ethical concern is not that these bots are conscious, but rather that they are unconsciously generating content from the data we give them. Their racism and misogyny come from our data, not their “minds.” This is a wake-up call to human beings to consider how we are behaving, how the text we write and the images we put on the internet lead to the generation of new racist and misogynist material. It is also a call to work to create better code so that the pattern does not continue to spiral into ever more problematic generation that feeds off of itself.

[7] These three articles are excellent. They are not quick reads; they require slow reading and thinking. The reader may need to look up an unfamiliar word or two, and may need to consult the many articles in the references to understand a point. But all three articles point to the need for each of us to do this work, to understand the technology that we use and that is being used increasingly by corporations and government agencies. We must take on our role as co-creators and understand both our power and the limits of our power.

Jennifer Hockenbery

Jennifer Hockenbery serves as Editor of the Journal of Lutheran Ethics. She is Professor of Philosophy and Dean of Humanities at St Norbert College. She attends Grace Lutheran Church in Green Bay, WI.