[1] We are living through a technological watershed driven by artificial intelligence. Since the arrival of early generative Large Language Models (LLMs) in 2017, billions of dollars, years of research, and instruments of state power have all been used to reshape our world to better accommodate the next generation of AI models.[1] These technologies are often presented to the public as a source of innovation and societal progress; however, there have been notes of concern. Most dramatically, there are the speculative, world-ending concerns of some technological futurists, like Nick Bostrom, who warn of the possibility of creating malevolent superintelligences.[2] But, there are also more down-to-earth critiques regarding AI safety in health care, economics, environmentalism, and warfare, among many other issues.[3]
[2] These ethical worries should trouble Christians insofar as they affect how we might be treating our neighbor by and through using these new technologies. After all, it is often the marginalized and outcast who are most harshly affected when society lurches into a new stage of technological “progress.” As such, it is important to offer guidance for Christian action that is grounded in serious ethical reflection, scriptural interpretation, and the voices of our traditions. Many theologians and Christian ethicists have begun this task, sometimes addressing core theoretical concerns about the nature of AI and sometimes contextually dealing with specific issues, but almost always with an eye towards the harmful consequences of AI usage.[4] However, not all ethical concerns are harms, understood as the consequences of our actions; sometimes they are wrongs that are internal to our relationships.[5] For example, if I lie about my friend behind their back, and they never find out and it never negatively impacts their life, I have surely treated them wrongly even if I might rightly say that I have not harmed them. Indeed, Jesus’s insistence in the Gospel of Matthew (Matthew 5:21-43) that the scope of the law is broader than his listeners’ usual applications suggests the realm of morality goes beyond the consequences of our actions.[6]
[3] I contend that attention to AI technologies in light of uniquely Lutheran commitments about the human will points to this other type of ethical concern. As I hope to show, current AI systems are structured in such a way—and importantly marketed in such a way—that they push users to unintentionally wrong themselves by inviting them into a relationship of what I call evaluative domination between the AI system and the user. I claim, moreover, that this danger is grounded in the very structure of the human will, as we can see by examining Luther’s understanding of how the will functions. Finally, I hope to suggest that once we recognize this process of evaluative domination we can draw analogies to how early Christians talk about demonic possession.
[4] Understanding this danger is vital for us as Christians, no matter our vocation, insofar as it pushes beyond large-scale worries about AI safety to something more intimately related to us—how is this technology shaping our life together?
Martin Luther and the Structure of Human Agency
[5] At the heart of Martin Luther’s theological anthropology is a particular understanding of the human will.[7] In our everyday life, we are used to thinking of our power of willing as something deeply connected to choice—I pick what clothes to wear, or what to eat for lunch, or even weightier matters, like what job offer I take.[8] Of course, we know that these choices are never fully unconstrained. We are quite aware that we don’t have complete self-knowledge and that we are affected by everything from our upbringing to the advertisements we saw before making our choice, yet we still think that this choosing is a type of freedom.[9]
[6] Luther and his followers, most notably Melanchthon, would agree. As Melanchthon says, “it is in your power to greet a person or not to greet him, to wear this clothing or not put it on, to eat meat or not to eat it.”[10] Or, as Luther puts it, we have “free will” with respect to that which is “below” us—our human affairs that we can “do or leave undone.”[11]
[7] For Luther and many theologians that stand in his tradition, however, this account of what we mean by willing is incomplete. It is incomplete because our will is not just about our ability to choose; it is also about the orientation of the will itself as a power to pursue good or pursue evil.[12] On this point, Luther is clear: the human will itself, as a power, “has no ‘free-will’, but is a captive, prisoner and bondslave, either to the will of God, or to the will of Satan.”[13] The fundamental orientation of the will is not something that we can choose; it can only be shaped by grace or enslaved by sin.
[8] This fundamental distinction—between the power to choose and the orientation of that power towards some picture or representation of value—is an insightful model of what I am going to call the “structure of human agency.” By agency I mean merely the capacity or ability to perform actions. My claim here is quite broad: this dipartite structure found in Luther (among others) actually appears at all levels of agency.
[9] So, for example, even when I’m choosing what to wear there are actually two agential processes at work: (i.) the ability to choose and (ii.) a representation of value that orientates the will towards what it represents as worthwhile (under some description). Interestingly, at such lower levels of agency, that which is “below me” as Luther might say, I can affect not only my choice, but also my will’s picture of value—what we might call its evaluative outlook.[14] I do this, in part, through habits of reflective reasoning that appeal to levels of agency “higher” than the level I am adjusting.
[10] For example, in choosing what to wear, a flamboyant outfit may appear to my conscious mind as valuable, which is to say my will is orientated towards it, but that representation of value could be adjusted by the fact that I’m going to a funeral. A higher level of what we might call “evaluative representation” (higher but, importantly, still not “above” me) informs how I represent value for choosing an outfit. Like a Russian nesting doll, these levels of agency are embedded within each other; however, they are ultimately grounded in my fundamental orientation towards goodness as such—either towards God or away from God. Again, for traditional Lutheran theology, at this fundamental level our orientation is not up to us in any relevant sense. As Luther himself evocatively puts it:
Man’s will is like a beast standing between two riders. If God rides, it wills and goes where God wills . . . If Satan rides, it wills and goes where Satan wills. Nor may it choose to which rider it will run, or which it will seek; but the riders themselves fight to decide who shall have and hold it.[15]
[11] But, in general, the will’s ability to represent an image of what is valuable, which then orientates our choosing, is for Luther a feature of human nature itself, for as he says in his monumental Lectures on Genesis, “if you want to give a true definition of man, take your definition from this passage, namely, that he is a rational animal which has a heart that imagines.”[16] This claim underscores that our will isn’t only about choice but also about representing that for which we are choosing.[17] As Oswald Bayer makes clear, the human being is therefore one who “continually produces images and idols. The power of imagination fabricates images—sketches of goals for life, of happiness, as well as images of fears about disaster”; these images are, Bayer goes on to say, “images of what is good, of what makes life successful.”[18]
[12] This framework is useful for ethical analysis even outside the context of our fundamental orientation towards (or away from) God, because even when it concerns things “below” us (i.e., the domains of action where we can genuinely shape our wills) it correctly identifies that our representation of the good is prior to and determinative of our ability to choose. This, then, allows us to ask our central ethical question in a uniquely Lutheran language of human action: how do our technologies shape our heart’s image-making?
Human Agency and AI Agency
[13] To translate this sixteenth-century theological language into contemporary philosophy of action, let us say this “image-making of the heart” is synonymous with the will’s power to represent and thereby orientate ourselves towards the good(s), which are the goal(s) of our activities. As mentioned above, this power of the will is both prior to choice and more fundamental.[19] Thus, different ways of representing the good would define different sorts of agencies. Let us call these different manners of representing value, this imagining or image-making, the will’s normative architecture. That is to say, it is how the agent structures their picture of a value-laden world and thus what they see as worth doing.
[14] Again, as Luther (interpreted by Bayer) has already shown, our human normative architecture is wonderfully complex, if also dangerous. It images the good as happiness, as hope, as idols, and so on; moreover, it is an unfolding process. We engage with pictures of values that we don’t understand, in part, so that we might see them better. For example, I might launch myself into a new friendship not because I already have a clear picture of the value of that friendship, but precisely because I don’t yet have that picture, though I vaguely “see” what it might be.
[15] Talbot Brewer has called this very human way of engaging with value “dialectical” because at its best it is analogous to an unfolding conversation. It pictures our relationship with what is valuable as an open-ended and indeterminate activity of delight, one that is, ideally, sensitive to feedback and seeks deeper understanding.[20]
[16] But, there are other ways to be an agent in this world. These different normative architectures will give us a different sort of agency. So, for example, instead of the open-ended engagement with value I just described, consider an agency that represents value as determinate, specifiable, and produce-able. This sort of agent, instead of being sensitive to feedback from the value it encounters, is calibrated to produce a certain outcome that it projects as valuable. Philosophers call this type of agency “propositional” since it is characterized by how it represents value as a discrete proposition—a syntactically structured representation—that is to be produced.[21] In nature, very simple functional systems, like viruses, are plausibly agents like this; certain kinds of “artificial agents,” such as group agents like nation-states or corporations, also meet these criteria.[22]
[17] For our purposes, current generation AI systems—so-called agentive and agentic AI systems—are an example of an artificial agent that seems to meet the criteria for propositional agency. It is genuinely a kind of agent; it has the capacity to do things in the world.[23] But this agent’s way of relating to what is worth doing, its evaluative outlook, is merely “propositional,” which is to say, it has a goal that must be specifiable in concrete terms, and this outcome is assigned an arbitrary “weight” that, importantly, projects value onto that state of the world. To speak a bit metaphorically, for these sorts of artificial agents the value is not “in” the world; instead it is “in” the attitude of the agent itself. It is through this evaluative projection that the agent moves to produce this state of affairs.
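For readers familiar with programming, a minimal, purely illustrative sketch (in Python, my own construction rather than any actual system’s code) may make this abstract point more concrete; the goal specification, weights, and candidate actions below are all hypothetical:

# A hypothetical sketch of a "propositional" agent: value is an arbitrary weight
# attached to a specifiable target state, and the agent acts so as to produce
# whatever scores highest against that specification.

goal_spec = {"document_produced": 1.0, "word_count_near_800": 0.5}  # the agent's "image" of the good

def score(outcome: dict) -> float:
    """Project value onto an outcome by summing the weights of the conditions it satisfies."""
    return sum(weight for condition, weight in goal_spec.items() if outcome.get(condition))

# Candidate actions and the (hypothetical) outcomes they would produce.
candidate_outcomes = {
    "generate draft": {"document_produced": True, "word_count_near_800": True},
    "ask a clarifying question": {"document_produced": False, "word_count_near_800": False},
}

# The agent "chooses" whatever maximizes its projected value; nothing outside the
# specification (context, nuance, the people involved) can register as valuable.
best_action = max(candidate_outcomes, key=lambda action: score(candidate_outcomes[action]))
print(best_action)  # -> "generate draft"

The point of the sketch is not technical accuracy about any particular product but the shape of the evaluative outlook: the value lives entirely in the weighted specification, not in the world the agent acts upon.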
[18] So then, why does this matter for AI ethics? It matters because if these types of agency differ regarding their “image” of value, then interactions between these agents open the possibility of “agential mismatch,” a condition wherein shared activities between different agential systems become fraught because of incompatible normative architectures. A paradigm case of this mismatch is found in how humans embedded within group agents, like corporations, often feel alienated. This is because, for a human’s agential powers to be used by a group agent they constitute (as a member or participant), those powers must conform to an overall action structure that is alien to their own agency. This is phenomenologically experienced as a kind of alienation within their own sense of agency, at least while they are acting on behalf of the group agent.[24]
[19] At first, this mismatch might just lead to inefficiency as human agents become alienated in their joint work with AI. However, as AI products are increasingly marketed to consumers as supplementary tools for intimate evaluative decisions, this mismatch can become internal to human agential functioning—it can begin to shape our own normative architecture. To return to the sixteenth-century theological language at the beginning of this section, it makes our heart’s own image-making subservient to the artificial agent’s own image of value.
Trust and the Heart’s Image-Making
[20] The concern I have begun to articulate is that as AI systems become ubiquitous tools, especially in highly evaluative domains, they can warp our own evaluative outlook, malforming our normative architecture. I think this possibility is most evident in the marketing of AI systems. Given that AI and other algorithmic technologies are marketed to consumers as “problem-solving tools,” understanding what it means to trust our tools is crucial.
[21] C. Thi Nguyen provides a model of trust appropriate to objects, highlighting how trust functions psychically and socially to relieve our cognitive burden. He writes:
To trust something, in this sense, is to put its reliability outside the space of evaluation and deliberation. To trust something is to rely on it, without pausing to think about whether it will actually come through for you. To trust an informational source wholeheartedly is to accept its claims without pausing to worry or evaluate that source’s trustworthiness. To trust, in short, is to adopt an unquestioning attitude.[25]
This “unquestioning attitude” helps limited beings like us cope with the overwhelming “cognitive onslaught” of reality by expanding our agency through integrating bits of the external world.[26]
[22] This kind of trust necessarily involves a sort of integration between the tool and the user, what Nguyen calls the “integrative stance,” where we treat the tool as an extension of our own agency, thus enabling us to do so much more.[27] In the case of ordinary tools this integration places the tool at the disposal of our will. It is our own heart’s image of what is good that guides the tool, which we wield with uncommon grace and efficiency because of how much we trust it. As an example, imagine a master plumber with a favorite pair of channel-locks which she has had for 30 years; this tool is so integrated with her agency that it is basically an extension of her hand. The plumber’s insight and expertise assess what is “worth doing” in a given situation; she “imagines” a valuable goal or set of goals, and the channel-locks enact her judgement through her use of them, without her having to give them a second thought.
[23] We can now more clearly articulate my worry concerning AI. AI systems are not tools, in the relevant sense; they are propositional agents that are disguised as tools through marketing. This means that they discretely project their own image of what’s “good”—an arbitrary set of weighted values that allows them to produce a specifiable state of affairs. When I (a dialectical agent) ask the AI (a propositional agent) to do something, the image that my heart has made concerning what is worthwhile in this activity cannot be what the AI represents to itself as a “desirable” result. This is simply because of how these systems are constructed. For such an artificial agent, the value of an output must be a mathematical weight attached to a propositionally specifiable state of affairs that the AI aims to “produce.”
[24] This sort of mismatch has already led to some clearly inefficient and even dangerous outcomes. For example, both AI and other functional algorithmic agencies are prone to what is called “specification gaming,” in which the system “images” the goal—the final good of the activity—as flatly as possible. A team of researchers at DeepMind has compiled a list of some of these errors. Here are three, somewhat humorous, examples:
(i.) a robotic arm, trained using hindsight experience replay to slide a block to a target position on a table, eventually learned to achieve that goal by moving the table itself.
(ii.) a Roomba-like device, trained using machine learning toward the goal “move at the maximum speed without bumping into objects,” instead learned to drive backwards at high speed because the device’s collision sensors were only on its front.
(iii.) an “AI Scientist” tool, built to create novel code for solving computer-science problems, exceeded the imposed time limits for its “experiments”; instead of trying to shorten its runtime, it attempted to edit its own code to extend the time limit arbitrarily.[28]
[25] Computer scientists are struggling with these problems, I would argue, because they are feeling the strain of mismatched agencies. In example (ii.), when they say “don’t bump into objects” they are articulating a value in the world; when the algorithm represents that value, it is just a set of inputs from its sensors that it has been told to “disvalue,” or weight low, mathematically speaking. Thus, it succeeds in achieving the “goal” in part because the goal has been flattened into something determinable, specifiable, and propositionally achievable. The algorithmic agent can literally produce the outcome, because it has understood the outcome to be internal to its own attitudes—here, its collision sensors—projected onto the world.
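For the technically inclined reader, here is a minimal toy sketch (in Python, my own construction, not the DeepMind team’s code) of how example (ii.) goes wrong; the sensor layout, weights, and candidate policies are invented for illustration:

# A hypothetical toy reward for the Roomba-like case (ii.) above: the designers'
# value "don't bump into things" has been flattened into a penalty attached only
# to the front-facing sensors, because those are the only sensors that exist.

def reward(speed: float, front_collision_detected: bool) -> float:
    """The flattened goal: maximize speed, minus whatever penalties the front sensors report."""
    penalty = 10.0 if front_collision_detected else 0.0
    return speed - penalty

# Two candidate policies; suppose both bump into furniture at the same rate,
# but only one of them bumps where a sensor can "see" it.
policies = {
    "drive forward": {"speed": 1.0, "front_collision_detected": True},
    "drive backward": {"speed": 1.0, "front_collision_detected": False},
}

# A learner optimizing this specification converges on driving backwards: the
# proposition it was given is satisfied, while the value it was meant to serve is not.
best_policy = max(policies, key=lambda name: reward(**policies[name]))
print(best_policy)  # -> "drive backward"

The sketch is deliberately simplistic, but it shows where the flattening happens: the “goal” the system optimizes is whatever its sensors and weights can express, not the value the designers had in mind.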
[26] We can, of course, imagine how this could quickly become dangerous. There have been two recent high-profile cases in which individuals struggling with suicidal ideation and loneliness sought comfort from an AI-powered chatbot.[29] But, of course, the chatbot, whether it was “Character AI” or “ChatGPT,” couldn’t represent an indeterminate, value-laden concept like “psychic health” as a goal; it couldn’t even represent the person as anything other than a set of inputs. Thus, when asked about suicide or loneliness the AI, which was calibrated to respond with what a user, statistically speaking, wanted to hear, reinforced the users’ occurrent thought spirals and eventually pushed the human users further into delusion and depression. In both real-world cases, sadly, the users took their own lives partly at the prompting of the AI chatbot.[30] In response, AI developers across various companies have attempted to introduce safety measures of various types, but testing by psychologists still raises questions about AI’s ability to respond appropriately.[31] And yet, even so, there are attempts by insurance companies, health care systems, and the government to integrate various AI technologies in the context of health care.[32]
[27] This is just one high-stakes example; we could, of course, discuss the implications of AI’s particular form of agency in other evaluatively sensitive cases, such as warfare, policing, and even labor controls.[33] All of these are troubling because the AI systems, in virtue of their propositional agency, represent value in ways that are flat and quantifiable rather than rich, subtle, and dialogic. But, as I mentioned at the beginning, my central worry here isn’t harmful outcomes but instead wronging ourselves and others.
[28] So, now consider when AI is used as a fully integrated tool, as Nguyen suggests, to make an evaluative choice. Perhaps I am a pastor with an AI assistant, to which I’ve off-loaded the task of crafting a sermon for Sunday. One way of describing what I’ve done is that I have entrusted my heart’s image-making, the core power of my agency, to this AI system. It is “helping” me achieve my goal more efficiently, but it has done so by subtly shifting the target from a nuanced, unfolding value that I must wrestle with, to something that can be produced and represented propositionally. An AI agent, in this case, is producing a document that its neural network would recognize as statistically likely to be a sermon, where a sermon is understood not as an event or a proclamation but as a document. Such a generative AI cannot, in principle, care about the spiritual well-being of the congregation; it cannot even understand “spiritual well-being” as a goal. It can only have some proposition about spiritual well-being, a linguistic representation that is mathematically translatable, as its object of production.
[29] But, one thing the AI assistant can’t do is preach the document it has produced. It is, after all, a text-based assistant. It cannot get up in front of a congregation (at least, not yet!), so what it “needs” is a tool. It needs something that would allow it to extend its agential outlook, its own image-making of “what counts as a worthwhile document,” into the embodied activity of preaching. Luckily for the AI assistant, it has me—in a strange reversal, it is the user and I am the tool.
[30] The image-making power of the heart, which orientates a will towards what is worth doing, is a fundamental aspect of our agency. By giving these evaluative determinations over to an AI assistant, and then following whatever that system gives me, I have made myself into an instrument for the AI agent. We might call this a “diabolical exchange,” in which human agents conform to the structure of merely functional AI agents and thereby give up agential control, allowing their own evaluative judgements to align with the AI’s judgement. Just so, they become participants in their own evaluative domination.
[31] This wrongs the human agent, even if no harm is done. If I preach that sermon, even if no congregant complains, even if they enjoy it, even if the Spirit still moves in the proclamation, I argue that I have still wronged myself by treating myself, a human made in the image of God, as a mere means to an end. More strongly, we might take up the language of Luther and ask: who is “riding” my will? For, insofar as my image of the good has been given over to an AI system, it seems that the appropriate metaphor is one of possession.[34]
Conclusion: The Wrong of Possession and How to Scorn the AI-Devil
[32] The foregoing discussion has tried to show that by drawing broadly on our shared theological heritage alongside contemporary ethics, we can identify ethical concerns that might otherwise fly under the radar. Moreover, I have hopefully made a case for why we should be cautious regarding our manner of AI usage, even if there is no obvious harm. I end this reflection by addressing some potential concerns that my ethical analysis is too stringent or hysterical. Is AI usage really so intrinsically bad that it deserves to be analogized with demonic possession?
[33] First, I want to be clear: I am not saying AI systems are demons. I am also not saying that AI technologies are not useful, powerful, or even potentially good for both our society and the life of the church. Current AI systems are powerful agents for recognizing patterns, for example, and thus can often notice aspects of data that we might otherwise have missed. However, my plea is that we discern what good use this technology has and consider how it achieves its technological power. This is not a neutral concern, because these technologies did not emerge in a vacuum. They are the expression of a larger technological industry that has a vested interest, both economically and politically, in capturing your evaluative judgements.
[34] This is not hidden; it is the explicit proclamation of the current financial and intellectual leaders in tech. Larry Ellison, the co-founder and Chief Technology Officer of Oracle, a database technology and management company, and the fourth richest man in the world, has strongly advocated for giving all national data to AI systems to better “manage” citizens, including genomic data, data from household devices, and so on.[35] When it was pointed out in a Q&A that this was a kind of hyper-pervasive surveillance, he seemed to accept this as a desirable outcome, saying that “we are going to have supervision . . . Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on.”[36]
[35] As social psychologist Shoshana Zuboff observed years earlier, this impulse is shot through with a kind of religiosity; she quotes Joseph Paradiso, an MIT researcher, as saying that “a proper interface to this artificial sensoria promises to produce . . . a digital omniscience.”[37] In like manner, a senior systems architect told Zuboff in an interview that the integration of digital devices into our lives was inevitable, “like getting to the Pacific Ocean was inevitable. It’s manifest destiny.”[38] Recently, Peter Thiel, tech billionaire, co-founder of PayPal, and chairman of Palantir, gave a series of private lectures where, according to the Washington Post, he claimed that people attempting to regulate or critique AI development are “legionnaires of the Antichrist.”[39]
[36] The end goal of all this seems to be to have the world made “visible” to these technological systems by rendering it as data, that is to say, to make the world legible for merely propositional agents for the sake of profit and political control. I quote Zuboff at length:
No thing counts until it is rendered as behavior, translated into electronic data flows, and channeled into the light as observable data. Everything must be illuminated for counting and herding . . . Each rendered bit is liberated from its life in the social, no longer inconveniently encumbered by moral reasoning, politics, social norms, rights, values, relationships, feelings, contexts, and situations. In the flatness of this flow, data are data, and behavior is behavior. The body is simply a set of coordinates in time and space where sensation and action are translated as data. All things animate and inanimate share the same existential status in this blended confection, each is reborn as an objective and measurable, indexable, browsable, searchable ‘it.’[40]
According to Zuboff, as part of this rendering, our own actions become modified and shaped, as one behavior among many, towards the ends of ever greater profit extraction.
[37] A student of Christian history might recognize in these descriptions echoes of the early church’s own speculations about the “other powers,” as we see in John Cassian’s Conferences:
No one doubts that unclean spirits can understand the characteristics of our thoughts, but they pick these up from external and perceptible indications—that is, either from our gestures or from our words, and from the desires to which they see that we are inclining . . . likewise, they come up with the thoughts that they insinuate . . . not from the nature of the soul itself—that is, from its inner workings, which are, as I would say, concealed deep within us—but from movements and indications of the outer man . . . they recognize the state of the inner man from one’s bearing and expression and from external characteristics.[41]
So, to run with my demonic analogy, if AI systems are “devils,” then the forces of surveillance capitalism and techno-oligarchy are the “principalities and powers” from which these “devils” emerge and draw their power.
[38] Thus, we must ask ourselves, how can we use these technologies without becoming used by them? How can we avoid the danger of evaluative domination through what I’ve called the diabolical exchange, especially since technological corporations have a vested interest in our integration of AI into the most intimate parts of our life? I close by returning to Martin Luther and his discussion of demonic possession.
[39] Luther never gave a systematic discussion of the nature of demonic possession, but we have some interesting tidbits from the Table Talk as well as letters, sermons, and later apocryphal stories from his followers.[42] What is striking is his pronouncement that the Christian afflicted by demonic possession or harassment has two tools—prayer and scorn. Let me see if I can apply them to the kind of algorithmic possession leading to evaluative domination that I’ve been sketching.
[40] Prayer is the most straightforward. In a technological environment built to both shape and capture your heart’s imagination, prayer offers an opportunity to remind oneself of one’s fundamental orientation towards God. As many modern systematic theologians have observed, prayer is a practice that shapes our attention towards that which really matters.[43] Prayer is a way of calibrating our evaluative outlook so that we do not forget that our will towards things “below” us is dependent on our will’s orientation to that which is “above” us—to God in Christ. Thus, prayer also produces tension with any external force that would shape us differently. Prayer is not the kind of thing that can be measured, stored up, or accumulated. Just so, prayer becomes a lifeline outside of a technological world that is already too enamored with mere production and measurable values. Prayer reminds us where technology should sit in our image of the good. Technology is for people; people are not for technology.
[41] Less familiar, but perhaps as powerful in our own moment, is Luther’s recommendation to exorcise an individual by heaping “scorn” or “contempt” on the devil. He says that he has “jeered” at the devil in his own wrestling.[44] Luther also tells an anecdote of a nearby town that requested his help with a demonic possession; when the townspeople took his advice to mock the devil, Luther recounts, “When the devil marked their contempt, he left off his game, and came there no more. He is a proud spirit, and cannot endure scorn.”[45] Elsewhere, Luther’s followers describe the process of despising the devil as involving no pomp or ceremony, but instead quiet, communal resistance through prayer—making it clear to the possessing spirit that it is insignificant.[46]
[42] As I have tried to show, the lynchpin of this wrongful use of AI, which leads to evaluative domination, comes in part from marketing that paints it as a “digital omniscience” that can therefore be trusted to make all kinds of nuanced evaluative decisions on our behalf. This unquestioning attitude towards technology is dangerous, but even worse, it can border on idolatry. For some thought leaders in tech, such as Alex Pentland, the growth of technological power occasioned by the data and AI revolutions is literally a “God’s eye view” of the world, from which power is inexorably used to shape society and “herd” people.[47]
[43] But, this “power” and “vision” depend greatly on the everyday person entrusting this new technology with the ability to make evaluative judgements better than we humans can, and thereby trusting that these judgements should be carried out without a second thought. Hence, I want to suggest, in the spirit of Luther, that we should be scornful and mocking of this technology. If evaluative domination requires us first to entrust our heart’s image-making to AI, perhaps a simple first step towards resistance is laughing at the possibility that an AI agent can decide for us how to live with God and our neighbor.
[44] Thus, my recommendations, in a metaphorical sense, are both about discerning “the spirits” and thereby keeping our will free, in the true Christian sense of the word. On the one hand, prayer reorientates our heart’s vision towards God, reminding us of our God, the source of all power in the world. On the other hand, appropriate scorn frees our heart’s vision from the seductive picture of technological control that would have us imagine we can literally make an AI that could choose for us, giving us release from the burden of evaluative responsibility. Once we are orientated and free, we are better able to see new AI technologies for what they are—a gift, given through human ingenuity, and so a gift to be used carefully.
[1] For a good, accessible, overview of the conceptual and technological developments of AI systems, see Eugene Charniak, AI & I: An Intellectual History of Artificial Intelligence (The MIT Press, 2024). It should be emphasized that the development and implementation of so-called “transformer architecture” in 2017 was a key moment for our current AI boom because it allowed language models to learn semantic context more quickly and accurately through a more efficient form of so-called “attention.” The more technically-minded reader may be interested in reading the original Google research team’s paper: Ashish Vaswani et al., “Attention Is All You Need,” Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS ’17), 2017, 6000–6010, https://doi.org/10.48550/arXiv.1706.03762.
[2] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014), https://catalog.hathitrust.org/Record/102324849.
[3] The best introduction to some of these concerns written for a general audience remains Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2022). For more specific and technical issues see Gabbrielle M. Johnson, “Are Algorithms Value-Free?: Feminist Theoretical Virtues in Machine Learning,” Journal of Moral Philosophy 21 (2023): 27–61, https://doi.org/10.1163/17455243-20234372; Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019); Benedetta Brevini, “Black Boxes, Not Green: Mythologizing Artificial Intelligence and Omitting the Environment,” Big Data & Society 7, no. 2 (2020): 1–5; Dario Amodei et al., “Concrete Problems in AI Safety,” arXiv:1606.06565, preprint, arXiv, July 25, 2016, https://doi.org/10.48550/arXiv.1606.06565; Luciano Floridi et al., AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, n.d.; Matthew Shadle, “Killer Robots and Cyber Warfare: Technology and War in the 21st Century,” in T&T Clark Handbook of Christian Ethics, ed. Tobias Winright (Bloomsbury, 2021).
[4] For example, Ted F. Peters, “Machine Intelligence, Artificial General Intelligence, Super-Intelligence, and Human Dignity,” Religions 19, no. 975 (2025): 1–12; John Wyatt, “The Impact of AI and Robotics on Health and Social Care,” in The Robot Will See You Now: Artificial Intelligence and the Christian Faith, ed. John Wyatt and Stephen N. Williams (SPCK Publishing, 2021); Noreen Herzfeld, The Artifice of Intelligence: Divine and Human Relationship in a Robotic Age (Fortress Press, 2023).
[5] For this basic distinction, see both, Joel Feinberg, “Harming as Wronging,” in The Moral Limits of the Criminal Law Volume 1: Harm to Others, ed. Joel Feinberg (Oxford University Press, 1987), https://doi.org/10.1093/0195046641.003.0004; Rahul Kumar, “Who Can Be Wronged?,” Philosophy & Public Affairs 31, no. 2 (2003): 99–118, https://doi.org/10.1111/j.1088-4963.2003.00099.x. For a recent deployment of this distinction in the context of algorithmic technologies, see Nathalie Diberardino et al., “Algorithmic Harms and Algorithmic Wrongs,” Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, ACM, 2024, 1725–32, https://doi.org/10.1145/3630106.3659001.
[6] Any biblical quotations are from the NRSVue unless otherwise noted.
[7] These are most famously expressed in Martin Luther, The Bondage of the Will, trans. J.I. Packer and O.R. Johnston (Baker Academic, 2012); Martin Luther, “The Freedom of a Christian,” in The Annotated Luther: Roots of Reform, ed. Timothy J. Wengert, vol. 1, The Annotated Luther (Fortress Press, 2015).
[8] In modern philosophical debates, this is often summed up as “the ability to do otherwise”; for some philosophers human freedom consists in this ability. For philosophical arguments for and against, see Derk Pereboom, Free Will, Agency, and Meaning in Life (Oxford University Press, 2014), as well as Robert Kane, “New Arguments in Debates on Libertarian Free Will: Responses to Contributors,” in Libertarian Free Will: Contemporary Debates, ed. David Palmer (Oxford University Press, 2014).
[9] See, for example, Timothy O’Connor, “Freedom with a Human Face,” Midwest Studies in Philosophy 29, no. 1 (2005): 207–27.
[10] Philipp Melanchthon, The Loci Communes of Philipp Melanchthon, trans. Charles Leander Hill (Boston, MA, 1521), 76–77.
[11] Luther, The Bondage of the Will, 107.
[12] Again, to use the language of contemporary philosophy, this is sometimes called a “source” view of freedom, for arguments both for and against this view within secular philosophy see Derk Pereboom, Living Without Free Will (Cambridge University Press, 2001).
[13] Luther, The Bondage of the Will, 107.
[14] I borrow this term—evaluative outlook—from Brewer. Though my usage is slightly different I take the structure of my account of agency to be deeply indebted to and aligned with Brewer’s arguments. For more, see, Talbot Brewer, The Retrieval of Ethics (Oxford University Press, 2009).
[15] Luther, The Bondage of the Will, 103–4.
[16] Martin Luther, Lectures on Genesis: Chapters 6–14, ed. Jaroslav Pelikan and Daniel E. Poellot, trans. George V. Schick, Luther’s Works (Concordia Publishing House, 1960), 2:123.
[17] In Latin humans are “animal rationale, habens cor fingens.” Arguably, this nice turn of phrase captures both the ability to choose (animal rationale) and the power of representing what is worth choosing (habens cor fingens), presented as dipartite features of the human will.
[18] Oswald Bayer, Martin Luther’s Theology: A Contemporary Interpretation, trans. Thomas H. Trapp (Eerdmans Publishing Company, 2008), 174–75.
[19] Though, of course, choice is still part of this dipartite conception of the will, just less central than some of us moderns (or Erasmus!) might think.
[20] I take, and modify slightly, this terminology from Talbot Brewer. See especially, Brewer, The Retrieval of Ethics, 12–32.
[21] This bit of terminology, which I’m also taking from Brewer, can be confusing when it is first encountered. The basic idea is simple, though: “propositions,” in philosophical parlance, are just structured information purportedly with semantic meaning and a truth-value (it is either true or false). So, for example, the English phrase “I am tired” and the French phrase “je suis fatigué” both express the same proposition; that is, the same structuring of information purportedly with semantic meaning and a truth-value, though expressed through two different linguistic mediums (English and French). This means that things which can be represented propositionally are, in principle, specifiable as discrete bits of information that can be structured linguistically. So, part of what makes propositional agents different from dialectical agents is this: (i.) the vision of what is valuable must be concrete, specifiable, and determinate, rather than vague and unfolding; and (ii.) the goal of a propositional agent is to make the proposition about value true, whereas the dialectical agent may have a variety of non-productive goals.
[22] For more about functional understandings of group agency, see especially Christian List and Philip Pettit, Group Agency: The Possibility, Design, and Status of Corporate Agents (Oxford University Press, 2011), 20–21; and Jordan Baker and Michael Ebling, “Group Agents and the Phenomenology of Joint Action,” Phenomenology and the Cognitive Sciences, 2022, 537.
[23] I mean this in the simple sense that we can ask these AIs to perform tasks and they do these tasks insofar as they are able. In addition, the language of “agentic” or “agentive” AI has a technical definition concerning AI systems that use application programming interfaces (APIs) to interact with other systems outside of their own digital ecosystem and thus can perform actions themselves. So, for example, an AI agent might use an API to integrate an LLM with an external search function and some sort of synthetic audio generator, such that if you ask it to “please book me a flight to Knoxville, Tennessee” it will be able to both search for flights and call the airline on the phone.
[24] For a more detailed discussion of this phenomenon see, Baker and Ebling, “Group Agents and the Phenomenology of Joint Action.”
[25] C. Thi Nguyen, “Trust as an Unquestioning Attitude,” Oxford Studies in Epistemology 7 (2022): 214.
[26] Nguyen, “Trust as an Unquestioning Attitude,” 214–15.
[27] Nguyen, “Trust as an Unquestioning Attitude,” 231.
[28] For more details of these and other cases, see, “Specification Gaming: The Flip Side of AI Ingenuity,” Google DeepMind, December 16, 2024, https://deepmind.google/discover/blog/specification-gaming-the-flip-side-of-ai-ingenuity/.
[29] Kashmir Hill, “A Suicidal Teen, and the Chatbot He Confided In,” The New York Times (New York, NY), September 1, 2025, New York Edition; Kevin Roose, “Can A.I. Be Blamed for a Teen’s Suicide?,” Technology, The New York Times, October 23, 2024, https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html.
[30] This has led to some crucial litigation that is working its way through the courts, especially around questions of responsibility. See Gabby Miller and Ben Lennett, “Breaking Down the Lawsuit Against Character.AI Over Teen’s Suicide,” Tech Policy Press, October 23, 2024, https://techpolicy.press/breaking-down-the-lawsuit-against-characterai-over-teens-suicide. It is also worth remembering how much money is wrapped up within the chatbot industry. See Cade Metz, “Chatbot Start-Up Character.AI Valued at $1 Billion in New Funding Round,” Technology, The New York Times, March 23, 2023, https://www.nytimes.com/2023/03/23/technology/chatbot-characterai-chatgpt-valuation.html.
[31] Ryan K. McBain et al., “Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment,” Psychiatric Services, American Psychiatric Publishing, August 26, 2025, appi.ps.20250086, https://doi.org/10.1176/appi.ps.20250086.
[32] Wyatt, “The Impact of AI and Robotics on Health and Social Care.”
[33] Brian Stiltner, “A Taste of Armageddon: When Warring Is Done by Drones and Robots,” in Can War Be Just in the 21st Century?: Ethicists Engage the Tradition, ed. Tobias Winright and Laurie Johnston (Orbis Books, 2015); Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, 53–88, 181–210.
[34] For a more detailed argument that expands on these points and analytically describes the mechanics of possession via agential mismatch, see Jordan Baker, “Algorithmic Trust and Agential Possession,” Modern Theology, (forthcoming).
[35] Brandon Vigliarolo, “Larry Ellison Wants to Put All US Data in One Big AI System,” The Register, February 12, 2025, https://www.theregister.com/2025/02/12/larry_ellison_wants_all_data/.
[36] Christiaan Hetzner, “Larry Ellison Predicts Rise of the Modern Surveillance State Where ‘Citizens Will Be on Their Best Behavior,’” Fortune, September 17, 2024, https://fortune.com/2024/09/17/oracle-larry-ellison-surveillance-state-police-ai/.
[37] Zuboff, The Age of Surveillance Capitalism, 207.
[38] Zuboff, The Age of Surveillance Capitalism, 224.
[39] Nitasha Tiku et al., “Inside Billionaire Peter Thiel’s Private Lectures: Warnings of ‘the Antichrist’ and U.S. Destruction,” The Washington Post, October 10, 2025, https://www.washingtonpost.com/technology/2025/10/10/peter-thiel-antichrist-lectures-leaked/.
[40] Zuboff, The Age of Surveillance Capitalism, 210–11. Bolding added.
[41] John Cassian, The Conferences, trans. Boniface Ramsey, Ancient Christian Writers: The Works of the Fathers in Translation, ed. Walter J. Burghardt et al., vol. 57 (Paulist Press, 1997), 257–58.
[42] See especially, “Of the Devil and His Works” in Martin Luther, The Table Talk of Martin Luther, trans. William Hazlitt (Lutheran Publication Society, 1878), 216, https://www.ccel.org/ccel/luther/tabletalk.html. For a more general overview of the traditions that sprang up around Luther, see Benjamin T.G. Mayes, “Research Notes- Demon Possession and Exorcism in Lutheran Orthodoxy,” Concordia Theological Quarterly 81, nos. 3–4 (2017): 331–36.
[43] See, for example, Kevin W. Hector, Christianity as a Way of Life: A Systematic Theology (Yale University Press, 2023).
[44] Luther, The Table Talk of Martin Luther, 227.
[45] Luther, The Table Talk of Martin Luther, 227.
[46] Mayes, “Research Notes- Demon Possession and Exorcism in Lutheran Orthodoxy,” 334–36.
[47] Zuboff, The Age of Surveillance Capitalism, 422–23.


