Editor’s Introduction: Artificial Intelligence, Spirituality, and the Church

[1] There is a lot of talk about artificial intelligence. Amid the rush to bring AI to market, there is a need for guardrails and guidelines that protect intellectual property, safeguard personal data, and rein in AI's energy consumption. Here at the Journal of Lutheran Ethics, for example, we have recently created a new guideline requiring authors to certify that the essays published in JLE were not written with the help of generative AI. This fall the ELCA Church Council approved an issue paper on artificial intelligence.

[2] Just last year, JLE published an issue on "Ethical Considerations on Artificial Intelligence" (August/September 2024). The three essays in that issue discussed the ways our own systemic sins, especially racism and misogyny, are being replicated by AI.

[3] This current issue takes up the ethics of artificial intelligence, spirituality, and the church. All the essays in this issue discuss the way AI can appear to be superhuman, almost divine, in its knowledge. The essays carefully explain how AI works, and the limits of our knowledge of how it works, in order to show the disconnect between what we imagine AI is able to do and what it is actually likely doing. These authors also consider how our own minds and wills work.

[4] The first essay, by Jordan Baker, is a deep dive into human agency. Using Luther's understanding of the will as bound either to God or to the devil, Baker discusses the way AI shapes the heart's imagination and binds the will. Baker explains, "The image-making power of the heart, which orientates a will towards what is worth doing, is a fundamental aspect of our agency. By giving these evaluative determinations over to an AI assistant, and then following whatever that system gives me, I have made myself into an instrument for the AI agent. We might call this a 'diabolical exchange,' where human agents conform to the structure of merely functional AI agents, and thereby give up agential control allowing their own evaluative judgements to align with the AI's judgement." In a chilling example, Baker notes that AI can construct a sermon but requires a human to preach it: the AI uses the human preacher as its tool.

[5] The second essay delves into the question of AI bots that serve as chaplains. Aaron Fuller discusses both the mechanics of AI and the purpose of pastoral care in order to show the disconnect between AI-generated responses to questions arising from moral injury and human pastoral responses to the same questions. In part the difference lies in different understandings of pastoral care. The AI algorithm is built to provide comforting answers based on the machine's prediction of what the user wants to hear. A human pastor, by contrast, should respond to the questions themselves in relation to the individual, not simply providing comfort but also promoting honesty, discernment, and healing.

[6] However, Fuller demands that we wrestle with the fact that AI pastoral care is already happening, in great part because many clergy, already overworked, are reluctant to offer spiritual care to the mentally ill, thinking that clinicians are better equipped. Desperate for spiritual answers, vulnerable people dealing with moral injury are turning to bots they can access on their phones. Fuller asks readers to consider the duty of the church and its clergy at a time when spiritual need is high, pastors are in short supply, and corporations are producing chaplain bots.

[7] Jose Marichal’s essay explains how AI works and the limits of our understanding of how AI “knows.” Because AI is not transparent about where it gets its data or how it processes it (in other words, it fails to cite its sources or explain its reasoning), its answers can seem mystical or enchanted to users, who often grant AI’s answers full authority even though they have no way to fact-check them. This creates a double problem. Users may believe false information when AI hallucinates, generating data it predicts the user wants to believe. But even when the information is not false, a user with no way to verify it struggles to use her own empirical and rational methods of abstraction to do her own intellectual work. We risk entering a post-scientific age in which we trust an enchanted force that is biased toward profiting those who created it.

[8] As the final piece in this issue, we offer Kaari Reiertson’s reflections on the ELCA issue paper on AI. Her reflections contain a link to the paper.

[9] We are in the midst of a revolution. Let us take the time to understand the technology at play, its limits and its uses.

 

Jennifer Hockenbery

Jennifer Hockenbery serves as Editor of the Journal of Lutheran Ethics. She is Professor of Philosophy and Dean of Humanities at St. Norbert College. She attends Grace Lutheran Church in Green Bay, WI.