{"id":6859,"date":"2025-11-21T01:11:25","date_gmt":"2025-11-21T01:11:25","guid":{"rendered":"https:\/\/learn.elca.org\/jle\/?p=6859"},"modified":"2025-12-01T21:32:04","modified_gmt":"2025-12-01T21:32:04","slug":"ai-agency-and-the-human-will","status":"publish","type":"post","link":"https:\/\/learn.elca.org\/jle\/ai-agency-and-the-human-will\/","title":{"rendered":"AI, Agency, and the Human Will"},"content":{"rendered":"<p>[1] We are living through a technological watershed driven by artificial intelligence. Since the arrival of early generative Large Language Models (LLMs) in 2017, billions of dollars, years of research, and instruments of state power have all been used to reshape our world to better accommodate the next generation of AI models.<a href=\"#_edn1\" name=\"_ednref1\">[1]<\/a> These technologies are often presented to the public as a source of innovation and societal progress, however, there have been notes of concern. Most dramatically, there are the speculative world-ending concerns of some technological futurists, like Nick Bostrom, who warn of the possibility of the creation of malevolent superintelligences.<a href=\"#_edn2\" name=\"_ednref2\">[2]<\/a> But, there are also more down-to-earth critiques regarding AI safety in health care, economics, environmentalism, and warfare, among many other issues.<a href=\"#_edn3\" name=\"_ednref3\">[3]<\/a><\/p>\n<p>[2] These ethical worries should trouble Christians insofar as they affect how we might be treating our neighbor <em>by<\/em> and <em>through<\/em> using these new technologies. After all, it is often the marginalized and outcast who are most harshly affected when society lurches into a new stage of technological \u201cprogress.\u201d\u00a0 As such, it is important to offer guidance for Christian action that is grounded in serious ethical reflection, scriptural interpretation, and the voices of our traditions. 
Many theologians and Christian ethicists have begun this task, sometimes addressing core theoretical concerns about the nature of AI and sometimes contextually dealing with specific issues, but almost always with an eye towards the <em>harmful consequences<\/em> of AI usage.<a href=\"#_edn4\" name=\"_ednref4\">[4]<\/a> However, not all ethical concerns are <em>harms<\/em>, understood as the consequences of our actions; sometimes they are <em>wrongs <\/em>that are internal <em>to our relationships<\/em>.<a href=\"#_edn5\" name=\"_ednref5\">[5]<\/a> For example, if I lie about my friend behind their back, and they never find out, and it never negatively impacts their life, I\u2019ve surely treated them wrongly, even if I might rightly say that I have not harmed them. Indeed, Jesus\u2019s insistence in the Gospel of Matthew (Matthew 5:21-43) that the scope of the law is broader than his listeners\u2019 usual applications suggests the realm of morality goes beyond the consequences of our actions.<a href=\"#_edn6\" name=\"_ednref6\">[6]<\/a><\/p>\n<p>[3] I contend that attention to AI technologies in light of uniquely Lutheran commitments about the human will points to this other type of ethical concern. As I hope to show, current AI systems are structured in such a way\u2014and importantly<em> marketed <\/em>in such a way\u2014that they push users to unintentionally <em>wrong themselves<\/em> by inviting them into a relationship of what I call <em>evaluative domination<\/em> between the AI system and the user. I claim, moreover, that this danger is grounded in the very structure of the human will, as we can see by examining Luther\u2019s understanding of how the will functions. 
Finally, I hope to suggest that once we recognize this process of evaluative domination we can draw analogies to how early Christians talk about demonic possession.<\/p>\n<p>[4] Understanding this danger is vital for us as Christians, no matter our vocation, insofar as it pushes beyond large-scale worries about AI safety to something more intimately related to us\u2014how is this technology shaping our life together?<\/p>\n<p><strong>Martin Luther and the Structure of Human Agency<\/strong><\/p>\n<p>[5] At the heart of Martin Luther\u2019s theological anthropology is a particular understanding of the human will.<a href=\"#_edn7\" name=\"_ednref7\">[7]<\/a> In our everyday life, we are used to thinking of our power of <em>willing <\/em>as something deeply connected to <em>choice<\/em>\u2014I pick what clothes to wear, or what to eat for lunch, or even weightier matters, like what job offer I take.<a href=\"#_edn8\" name=\"_ednref8\">[8]<\/a> Of course, we <em>know <\/em>that these choices are never fully unconstrained. We are quite aware that we don\u2019t have complete self-knowledge and that we are affected by everything from our upbringing to what advertisements we saw before making our choice, yet we still think that this <em>choosing <\/em>is a type of freedom.<a href=\"#_edn9\" name=\"_ednref9\">[9]<\/a><\/p>\n<p>[6] Luther and his followers, most notably Melanchthon, would agree. As Melanchthon says, \u201cit is in your power to greet a person or not to greet him, to wear this clothing or not put it on, to eat meat or not to eat it.\u201d<a href=\"#_edn10\" name=\"_ednref10\">[10]<\/a> Or, as Luther puts it, we have \u201cfree will\u201d with respect to that which is \u201cbelow\u201d us\u2014our human affairs that we can \u201cdo or leave undone.\u201d<a href=\"#_edn11\" name=\"_ednref11\">[11]<\/a><\/p>\n<p>[7] For Luther and many theologians who stand in his tradition, however, this account of what we mean by <em>willing <\/em>is incomplete. 
It is incomplete because our <em>will <\/em>is not just about our ability to choose; it is also about the orientation of the will <em>itself <\/em>as a power to pursue good or pursue evil.<a href=\"#_edn12\" name=\"_ednref12\">[12]<\/a> On this point, Luther is clear that the human will <em>itself<\/em>, as a power, \u201chas no \u2018free-will\u2019, but is a captive, prisoner and bondslave, either to the will of God, or to the will of Satan.\u201d<a href=\"#_edn13\" name=\"_ednref13\">[13]<\/a> The fundamental orientation of the will is not something that <em>we <\/em>can choose between; it can only be shaped by grace or enslaved by sin.<\/p>\n<p>[8] This fundamental distinction\u2014between the <em>power to choose <\/em>and the <em>orientation of that power<\/em> towards some picture or representation of value\u2014is an <em>insightful <\/em>model of what I am going to call the \u201cstructure of human agency.\u201d By agency I mean merely the <em>capacity <\/em>or <em>ability <\/em>to perform actions. My claim here is quite broad: this bipartite structure found in Luther (among others) is actually found at all <em>levels <\/em>of agency.<\/p>\n<p>[9] So, for example, even when I\u2019m choosing what to wear there are actually two agential processes at work: (i.) the ability to choose and (ii.) a representation of value that <em>orientates <\/em>the will towards what it represents as worthwhile (under some description). 
Interestingly, at such <em>lower levels <\/em>of agency, that which is \u201cbelow me,\u201d as Luther might say, I can affect not only my choice, but also my will\u2019s picture of value\u2014what we might call its <em>evaluative outlook<\/em>.<a href=\"#_edn14\" name=\"_ednref14\">[14]<\/a> I do this, in part, through habits of reflective reasoning that appeal to levels of agency \u201chigher\u201d than the level I am adjusting.<\/p>\n<p>[10] For example, in choosing what to wear, a flamboyant outfit may <em>appear <\/em>to my conscious mind as valuable, which is to say my will is orientated towards it, but that representation of value could be <em>adjusted <\/em>by the fact that I\u2019m going to a funeral. A higher level of what we might call \u201cevaluative representation\u201d (higher but, importantly, still not \u201cabove\u201d me) informs how I represent value <em>for choosing <\/em>an outfit. Like a Russian nesting doll, these levels of agency are embedded within each other; however, they are ultimately grounded in my fundamental orientation towards goodness <em>as such<\/em>\u2014either towards God or away from God. Again, for traditional Lutheran theology, at this fundamental level our orientation is not <em>up to us<\/em> in any relevant sense. As Luther himself evocatively puts it:<\/p>\n<p style=\"padding-left: 40px;\">Man\u2019s will is like a beast standing between two riders. If God rides, it wills and goes where God wills . . . If Satan rides, it wills and goes where Satan wills. 
Nor may it choose to which rider it will run, or which it will seek; but the riders themselves fight to decide who shall have and hold it.<a href=\"#_edn15\" name=\"_ednref15\">[15]<\/a><\/p>\n<p>[11] But, in general, the will\u2019s ability to represent an <em>image <\/em>of what is valuable, which then orientates our choosing, is for Luther a feature of human nature itself, for as he says in his monumental <em>Lectures on Genesis<\/em>, \u201cif you want to give a true definition of man, take your definition from this passage, namely, that he is a rational animal which has a heart that imagines.\u201d<a href=\"#_edn16\" name=\"_ednref16\">[16]<\/a> This claim underscores that our will isn\u2019t only about choice but also about representing <em>that for which we are choosing<\/em>.<a href=\"#_edn17\" name=\"_ednref17\">[17]<\/a> As Oswald Bayer makes clear, the human being is therefore one who \u201ccontinually produces images and idols. The power of imagination fabricates images\u2014sketches of goals for life, of happiness, as well as images of fears about disaster,\u201d and these images are, Bayer goes on to say, \u201cimages of what is good, of what makes life successful.\u201d<a href=\"#_edn18\" name=\"_ednref18\">[18]<\/a><\/p>\n<p>[12] This framework is useful for ethical analysis even <em>outside <\/em>the context of our fundamental orientation towards (or away from) God, because even when it concerns things \u201cbelow\u201d us (i.e., the domains of action where we can genuinely shape our wills) it correctly identifies that our representation of the good is <em>prior to<\/em> and <em>determinative of<\/em> our ability to choose. This, then, allows us to now ask our central ethical question in a uniquely Lutheran language of human action: how do our technologies shape our heart\u2019s image-making?<\/p>\n<p><strong>Human Agency and A.I. 
Agency<\/strong><\/p>\n<p>[13] To translate this 16<sup>th<\/sup>-century theological language into contemporary philosophy of action, let us say this \u201cimage-making of the heart\u201d is synonymous with the will\u2019s power to represent and thereby orientate ourselves towards the good(s), which are the goal(s) of our activities. As mentioned above, this power of the will is both prior to choice and more fundamental.<a href=\"#_edn19\" name=\"_ednref19\">[19]<\/a> Thus, different <em>ways of representing <\/em>the good would <em>define <\/em>different sorts of agencies. Let us call these different manners of representing value, this imagining or image-making, the will\u2019s <em>normative architecture<\/em>. That is to say, it is how the agent <em>structures <\/em>their picture of a value-laden world and thus what they see as worth doing.<\/p>\n<p>[14] Again, as Luther (interpreted by Bayer) has already shown, our human normative architecture is wonderfully complex, if also dangerous. It images the good as happiness, as hope, as idols, and so on; moreover, it is an unfolding process. We engage with pictures of values that we don\u2019t understand, in part, so that we might see them better. For example, I might launch myself into a new friendship <em>not<\/em> because I already have a clear picture of the value of that friendship, but instead precisely because I <em>don\u2019t <\/em>have that picture yet, but I vaguely \u201csee\u201d what it might be.<\/p>\n<p>[15] This very human way of engaging with value has been called by Talbot Brewer \u201cdialectical\u201d because at its best it is analogous to an unfolding conversation. It pictures our relationship with what is valuable as an open-ended and indeterminate activity of delight, which is, ideally, sensitive to feedback and seeking deeper understanding.<a href=\"#_edn20\" name=\"_ednref20\">[20]<\/a><\/p>\n<p>[16] But, there are other ways to be an agent in this world. 
These different normative architectures will give us a different sort of agency. So, for example, instead of an open-ended engagement with value like the one I just described, consider an agency that represents value as determinate, specifiable, and producible. For this sort of agent, instead of having sensitivity to feedback from the value it encounters, it is calibrated to produce a certain outcome that it <em>projects <\/em>as valuable. Philosophers call this type of agency \u201cpropositional\u201d since it is characterized by how it represents value as a discrete <em>proposition<\/em>\u2014a syntactically structured representation\u2014that is to be produced.<a href=\"#_edn21\" name=\"_ednref21\">[21]<\/a> In nature, very simple functional systems, like viruses, are plausibly agents like this; but also, certain kinds of \u201cartificial agents\u201d meet these criteria, like group agents, such as nation states or corporations.<a href=\"#_edn22\" name=\"_ednref22\">[22]<\/a><\/p>\n<p>[17] For our purposes, current-generation AI systems\u2014so-called agentive and agentic AI systems\u2014are an example of an artificial agent that seems to meet the criteria for propositional agency. It is genuinely a kind of <em>agent<\/em>; it has the capacity to do things in the world.<a href=\"#_edn23\" name=\"_ednref23\">[23]<\/a> But, this agent\u2019s way of relating to what is worth doing, its evaluative outlook, is merely \u201cpropositional,\u201d which is to say, it has a goal that must be specifiable in concrete terms, and this outcome is assigned an arbitrary \u201cweight\u201d that importantly <em>projects value <\/em>onto that state of the world. To speak a bit metaphorically, for these sorts of artificial agents the value is not \u201cin\u201d the world; instead it is \u201cin\u201d the attitude of the agent itself. 
It is through this evaluative projection that the agent moves to <em>produce <\/em>this state of affairs.<\/p>\n<p>[18] So then, why does this matter for AI ethics? It matters because if these types of agency differ regarding their \u201cimage\u201d of value, then interactions between these agents open the possibility of &#8220;agential mismatch,&#8221; a condition wherein shared activities between different agential systems become fraught because of incompatible normative architectures. A paradigm case of this mismatch is found in how humans embedded within group agents, like corporations, often feel alienated. This is because for a human\u2019s agential powers to be used by a group agent they constitute (as a member or participant), they must conform to an overall action structure that is alien to their own agency. This is phenomenologically experienced as a kind of alienation within their own sense of agency, at least while they are acting on behalf of the group agent.<a href=\"#_edn24\" name=\"_ednref24\">[24]<\/a><\/p>\n<p>[19] At first, this mismatch might just lead to inefficiency as human agents become alienated in their joint work with AI. However, as AI products are increasingly marketed to consumers as supplementary tools for intimate evaluative decisions, this mismatch can become internal to human agential functioning\u2014it can begin to <em>shape <\/em>our own normative architecture. To return to the 16th-century theological language at the beginning of this section, it makes our heart\u2019s own image-making <em>subservient<\/em> to the artificial agent\u2019s own image of value.<\/p>\n<p><strong>Trust and the Heart\u2019s Image-Making<\/strong><\/p>\n<p>[20] The concern I have begun to articulate is that as AI systems become ubiquitous tools, especially in highly evaluative domains, they can warp our own evaluative outlook, malforming our normative architecture. I think this possibility is most evident in the <em>marketing <\/em>of AI systems. 
Given that AI and other algorithmic technologies are marketed to consumers as \u201cproblem-solving tools,\u201d understanding what it means to trust our tools is crucial.<\/p>\n<p>[21] C. Thi Nguyen provides a model of trust appropriate to objects, highlighting how trust functions psychically and socially to relieve our cognitive burden. He writes:<\/p>\n<p style=\"padding-left: 40px;\">To trust something, in this sense, is to put its reliability outside the space of evaluation and deliberation. To trust something is to rely on it, without pausing to think about whether it will actually come through for you. To trust an informational source wholeheartedly is to accept its claims without pausing to worry or evaluate that source\u2019s trustworthiness. To trust, in short, is to adopt an unquestioning attitude.<a href=\"#_edn25\" name=\"_ednref25\">[25]<\/a><\/p>\n<p>This &#8220;unquestioning attitude&#8221; helps limited beings like us cope with the overwhelming &#8220;cognitive onslaught&#8221; of reality by expanding our agency through integrating bits of the external world.<a href=\"#_edn26\" name=\"_ednref26\">[26]<\/a><\/p>\n<p>[22] This kind of trust necessarily involves a sort of <em>integration <\/em>between the tool and the user, what Nguyen calls the \u201cintegrative stance,\u201d where we treat the tool as an <em>extension <\/em>of our own agency, thus enabling us to do so much more.<a href=\"#_edn27\" name=\"_ednref27\">[27]<\/a> In the case of ordinary tools, this integration places the tool <em>at the disposal <\/em>of our will. It is our<em> own <\/em>heart\u2019s image of what is good that guides the tool, which we wield with uncommon grace and efficiency because of how much we trust it. As an example, imagine a master plumber with a favorite pair of channel-locks, which she has had for 30 years; this tool is so integrated with her agency that it is basically an extension of her hand. 
The plumber\u2019s insight and expertise assess what is \u201cworth doing\u201d in a given situation; she \u201cimagines\u201d a valuable goal or set of goals, and the channel-locks enact her judgement as she uses them, without her having to give them a second thought.<\/p>\n<p>[23] We can now more clearly articulate my worry concerning AI. AI systems are not tools in the relevant sense; they are <em>propositional agents <\/em>that are <em>disguised <\/em>as tools through marketing. This means that they discreetly project their own image of what\u2019s \u201cgood\u201d\u2014an arbitrary set of weighted values that allows them to produce a specifiable state of affairs. When I (a dialectical agent) ask the AI (a propositional agent) to do something, the <em>image <\/em>that my heart has made concerning what is worthwhile in this activity <em>cannot be <\/em>what the AI represents to itself as a \u201cdesirable\u201d result. This is simply because of how these systems are constructed. For such an artificial agent, the value of an output must be a mathematical weight attached to a propositionally specifiable state of affairs that the AI aims to \u201cproduce.\u201d<\/p>\n<p>[24] This sort of mismatch has already led to some clearly inefficient and even dangerous outcomes. For example, both AI and other functional algorithmic agencies are prone to what is called \u201cspecification gaming,\u201d which is where the system \u201cimages\u201d the goal\u2014the final good of the activity\u2014as flatly as possible. A team of researchers at DeepMind compiled a list of some of these errors. Here are three, somewhat humorous, examples:<\/p>\n<p style=\"padding-left: 40px;\">(i.) a robotic arm trained using hindsight experience replay to slide a block to a target position on a table eventually learned to achieve that goal by moving the table itself.<br \/>\n(ii.) 
a Roomba-like device, trained using machine learning directed at the goal &#8220;move at the maximum speed without bumping into objects&#8221;, instead learned to drive backwards at high speed because there were collision sensors only on the front of the device.<br \/>\n(iii.) an \u201cAI Scientist\u201d tool, made to create novel code for solving computer-science problems, when it exceeded the imposed time limits for its \u201cexperiments,\u201d attempted to edit its own code to extend the time limit arbitrarily instead of trying to shorten its runtime.<a href=\"#_edn28\" name=\"_ednref28\">[28]<\/a><\/p>\n<p>[25] Computer scientists are struggling with these problems, I would argue, because they are <em>feeling <\/em>the struggle of mismatched agencies. In example (ii.), when they say \u201cdon\u2019t bump into objects\u201d they are articulating a value <em>in the world; <\/em>when the algorithm represents that value, it is just a set of inputs from its sensors that it has been told to \u201cdisvalue\u201d or weigh low, mathematically speaking. Thus, it succeeds in achieving the \u201cgoal\u201d in part because the goal has been <em>flattened <\/em>into something determinable, specifiable, and propositionally achievable. The algorithmic agent, in this case, can literally <em>produce <\/em>the outcome, because it has understood the outcome to be internal to <em>its own <\/em>attitudes\u2014in this case, its collision sensors\u2014projected on the world.<\/p>\n<p>[26] We can, of course, imagine how this could quickly become dangerous. 
There have been two recent high-profile cases where individuals struggling with suicidal ideation and loneliness sought comfort from an AI-powered chatbot.<a href=\"#_edn29\" name=\"_ednref29\">[29]<\/a> But, of course, the chatbot, whether it was \u201cCharacter AI\u201d or \u201cChatGPT,\u201d <em>couldn\u2019t <\/em>represent an indeterminate value-laden concept like \u201cpsychic-health\u201d as a goal; it can\u2019t even represent <em>the person<\/em> as anything other than a set of inputs. Thus, when asked about suicide or loneliness, the AI, which was calibrated to respond with what a user, statistically speaking, <em>wanted to hear, <\/em>reinforced the users\u2019 occurrent thought spirals and eventually pushed the human users further into delusion and depression. In both real-world cases, sadly, the users took their own lives partly <em>at the prompting <\/em>of the AI chatbot.<a href=\"#_edn30\" name=\"_ednref30\">[30]<\/a> In response, AI developers across various companies have attempted to introduce safety measures of various types, but testing by psychologists still raises questions about AI\u2019s ability to respond appropriately.<a href=\"#_edn31\" name=\"_ednref31\">[31]<\/a> And yet, even so, there are attempts by insurance companies, health care systems, and the government to integrate various AI technologies in the context of health care.<a href=\"#_edn32\" name=\"_ednref32\">[32]<\/a><\/p>\n<p>[27] This is just one high-stakes example; we could, of course, discuss the implications of AI\u2019s particular form of agency in other evaluatively sensitive cases, such as warfare, policing, and even labor controls.<a href=\"#_edn33\" name=\"_ednref33\">[33]<\/a> All of these are troubling because the AI systems, in virtue of their propositional agency, represent value in ways that are flat and quantifiable instead of rich, subtle, and dialogic. 
But, as I mentioned at the beginning, my central worry here isn\u2019t <em>harmful outcomes <\/em>but instead <em>wronging <\/em>ourselves and others.<\/p>\n<p>[28] So, now consider when AI is used as a <em>fully integrated <\/em>tool, as Nguyen suggests, to make an evaluative choice. Perhaps I am a pastor with an AI assistant, to which I\u2019ve off-loaded the task of crafting a sermon for Sunday. One way of describing what I\u2019ve done is that I have <em>entrusted <\/em>my heart\u2019s image-making, the core power of my agency, to this AI system. It is \u201chelping\u201d me achieve my goal more efficiently, but it has done so by subtly shifting the target from a nuanced, unfolding value that I must wrestle with, to something that can be produced and represented propositionally. An AI agent, in this case, is <em>producing a document <\/em>that its neural network would recognize as <em>statistically likely <\/em>to be a sermon, understood not as an event or proclamation but <em>as a document<\/em>. Such a generative AI <em>cannot<\/em>, in principle, care about the spiritual well-being of the congregation; it can\u2019t even understand \u201cspiritual well-being\u201d as a goal. It can only have some proposition<em> about<\/em> spiritual well-being, a linguistic representation that is mathematically translatable, as its object of production.<\/p>\n<p>[29] But, one thing the AI assistant <em>can\u2019t <\/em>do is preach the document it has produced. It is, after all, a text-based assistant. It cannot get up in front of a congregation (at least, not yet!), so what it \u201cneeds\u201d is a <em>tool<\/em>. It needs something that would allow it to extend its agential outlook, its own image-making, of \u201c<em>what counts as a worthwhile document\u201d <\/em>into the embodied activity of preaching. 
Luckily for the AI assistant, it has <em>me<\/em>\u2014in a strange reversal, it is the user; I am the tool.<\/p>\n<p>[30] The image-making power of the heart, which orientates a will towards what is worth doing, is a fundamental aspect of our agency. By giving these evaluative determinations over to an AI assistant, and then <em>following <\/em>whatever that system gives me, I have made myself into an instrument <em>for <\/em>the AI agent. We might call this a \u201cdiabolical exchange,\u201d where human agents conform to the structure of merely functional AI agents, and thereby give up agential control, allowing their own evaluative judgements to align with the AI\u2019s judgement. Just so, they become participants in their own evaluative domination.<\/p>\n<p>[31] This <em>wrongs <\/em>the human agent, even if no harm is done. If I preach that sermon, even if no congregant complains, even if they enjoy it, even if the Spirit still moves in the proclamation, I argue that I have still wronged myself by treating myself, a human made in the image of God, as a mere means to an end. More strongly, we might take up the language of Luther and ask: who is \u201criding\u201d my will? For, insofar as my image of the good has been given over to an AI system, it seems that the appropriate metaphor is one of <em>possession<\/em>.<a href=\"#_edn34\" name=\"_ednref34\">[34]<\/a><\/p>\n<p><strong>Conclusion: The Wrong of Possession and How to Scorn the AI-Devil<\/strong><\/p>\n<p>[32] The foregoing discussion has tried to show that by broadly drawing on our shared theological heritage alongside contemporary ethics, we can identify ethical concerns that might otherwise fly under the radar. Moreover, I have hopefully made a case for why we should be cautious regarding our <em>manner <\/em>of AI usage, even if there is no obvious harm. I end this reflection by addressing some potential concerns that my ethical analysis is too stringent or hysterical. 
Is AI usage <em>really <\/em>so intrinsically bad that it deserves to be analogized with demonic possession?<\/p>\n<p>[33] First, I want to be clear: I am <em>not <\/em>saying AI systems are demons. I am also not saying that AI technologies are not useful, powerful, or even potentially <em>good<\/em> for both our society and the life of the church. Current AI systems are powerful agents for recognizing patterns, for example, and thus can often recognize aspects of data that we might have otherwise missed. However, my plea is that we discern what good use this technology has and consider <em>how <\/em>it achieves its technological power. This is not a neutral concern, because these technologies did not emerge in a vacuum. They are the expression of a larger technological industry that has a <em>vested interest, <\/em>both economically and politically, in capturing your evaluative judgements.<\/p>\n<p>[34] This is not hidden; it is the explicit proclamation of the current financial and intellectual leaders in tech. Larry Ellison, the co-founder and chief technology officer of <em>Oracle<\/em>, a technological database and management company, and fourth richest man in the world, has strongly advocated for giving <em>all <\/em>national data to AI systems to better \u201cmanage\u201d citizens, including genomic data, data from household devices, and so on.<a href=\"#_edn35\" name=\"_ednref35\">[35]<\/a> When it was pointed out in a Q&amp;A that this was a kind of hyper-pervasive surveillance, he seemed to accept this as a desirable outcome, saying that \u201cwe are going to have supervision . . . 
Citizens will be on their best behavior because we are constantly recording and reporting everything that\u2019s going on.\u201d<a href=\"#_edn36\" name=\"_ednref36\">[36]<\/a><\/p>\n<p>[35] As social psychologist Shoshana Zuboff observed years earlier, this impulse is shot through with a kind of <em>religiosity<\/em>; she quotes Joseph Paradiso, an MIT researcher, as saying, \u201ca proper interface to this artificial sensoria promises to produce . . . a <em>digital omniscience<\/em>.\u201d<a href=\"#_edn37\" name=\"_ednref37\">[37]<\/a> In like manner, a senior systems architect told Zuboff in an interview that the integration of digital devices into our lives was inevitable \u201clike getting to the Pacific Ocean was inevitable. It\u2019s manifest destiny.\u201d<a href=\"#_edn38\" name=\"_ednref38\">[38]<\/a> Recently, Peter Thiel, tech billionaire, co-founder of PayPal, and chairman of Palantir, gave a series of private lectures where, according to the Washington Post, he claimed that people attempting to regulate or critique AI development are \u201clegionnaires of the Antichrist.\u201d<a href=\"#_edn39\" name=\"_ednref39\">[39]<\/a><\/p>\n<p>[36] The end goal of all this seems to be to have the world made \u201cvisible\u201d to these technological systems by rendering it <em>as data<\/em>, that is to say, <em>to make the world legible for merely propositional agents <\/em>for the sake of profit and political control. I quote Zuboff at length:<\/p>\n<p style=\"padding-left: 40px;\">No thing counts until it is <em>rendered <\/em>as behavior, translated into electronic data flows, and channeled into the light as observable data. <em>Everything <\/em>must be illuminated for counting and herding . . . Each rendered bit is liberated from its life in the social, no longer inconveniently encumbered by moral reasoning, politics, social norms, rights, values, relationships, feelings, contexts, and situations. 
In the <strong>flatness<\/strong> of this flow, data are data, and behavior is behavior. The body is simply a set of coordinates in time and space where sensation and action are translated as data. All things animate and inanimate share the same existential status in this blended confection, each is reborn as an objective and measurable, indexable, browsable, searchable \u2018it.\u2019<a href=\"#_edn40\" name=\"_ednref40\">[40]<\/a><\/p>\n<p>According to Zuboff, as part of this rendering, our own actions become modified and shaped, as one behavior among many, towards the ends of ever greater profit extraction.<\/p>\n<p>[37] A student of Christian history might recognize in these descriptions echoes of the early church\u2019s own speculations about the \u201cother powers,\u201d as we see in John Cassian\u2019s <em>Conferences<\/em>:<\/p>\n<p style=\"padding-left: 40px;\">No one doubts that unclean spirits can understand the characteristics of our thoughts, but they pick these up from external and perceptible indications\u2014that is, either from our gestures or from our words, and from the desires to which they see that we are inclining . . . likewise, they come up with the thoughts that they insinuate . . . not from the nature of the soul itself\u2014that is, from its inner workings, which are, as I would say, concealed deep within us\u2014but from movements and indications of the outer man . . . they recognize the state of the inner man from one\u2019s bearing and expression and from external characteristics.<a href=\"#_edn41\" name=\"_ednref41\">[41]<\/a><\/p>\n<p>So, to run with my demonic analogy, if AI are \u201cdevils,\u201d then the forces of surveillance capitalism and techno-oligarchy are the \u201cprincipalities and powers\u201d <em>from which <\/em>these \u201cdevils\u201d emerge and draw their power.<\/p>\n<p>[38] Thus, we must ask ourselves: how can we use these technologies without being used by them? 
How can we avoid the danger of evaluative domination through what I\u2019ve called the diabolical exchange, especially since technological corporations have a <em>vested interest <\/em>in our integration of AI into the most intimate parts of our lives? I close by returning to Martin Luther and his discussion of demonic possession.<\/p>\n<p>[39] Luther never gave a systematic account of the nature of demonic possession, but we have some interesting tidbits from the <em>Table Talks<\/em> as well as letters, sermons, and later apocryphal stories from his followers.<a href=\"#_edn42\" name=\"_ednref42\">[42]<\/a> What is striking is his pronouncement that the Christian afflicted by demonic possession or harassment has two tools\u2014prayer and scorn. Let me see if I can apply them to the kind of algorithmic possession leading to evaluative domination that I\u2019ve been sketching.<\/p>\n<p>[40] Prayer is the most straightforward. In a technological environment <em>built <\/em>to both shape and capture your heart\u2019s imagination, prayer offers an opportunity to remind oneself of one\u2019s <em>fundamental <\/em>orientation towards God. As many modern systematic theologians have observed, prayer is a practice that shapes our attention towards that which really matters.<a href=\"#_edn43\" name=\"_ednref43\">[43]<\/a> Prayer is a way of calibrating our evaluative outlook so that we do not forget that our will towards things \u201cbelow\u201d us is dependent on our will\u2019s orientation to that which is \u201cabove\u201d us\u2014to God in Christ. Thus, prayer also produces tension with <em>any <\/em>external force that would shape us differently. Prayer is not the kind of thing that can be measured, stored up, or accumulated. Just so, prayer becomes a lifeline outside a technological world that is already too enamored with mere production and measurable values. Prayer reminds us where technology should sit in our image of the good. 
Technology is for people; people are not <em>for <\/em>technology.<\/p>\n<p>[41] Less familiar, but perhaps as powerful in our own moment, is Luther\u2019s recommendation to exorcise an individual by heaping \u201cscorn\u201d or \u201ccontempt\u201d on the devil. He says that he has \u201cjeered\u201d at the devil in his own wrestling.<a href=\"#_edn44\" name=\"_ednref44\">[44]<\/a> Luther also tells an anecdote of a nearby town that requested his help with a demonic possession. When the townspeople took his advice to mock the devil, Luther recounts, \u201cWhen the devil marked their contempt, he left off his game, and came there no more. He is a proud spirit, and cannot endure scorn.\u201d<a href=\"#_edn45\" name=\"_ednref45\">[45]<\/a> Elsewhere, Luther\u2019s followers describe the process of <em>despising <\/em>the devil as unworthy of any pomp or ceremony, quietly and communally resisting with prayer and making it clear to the possessing spirit that it is <em>insignificant<\/em>.<a href=\"#_edn46\" name=\"_ednref46\">[46]<\/a><\/p>\n<p>[42] As I have tried to show, the lynchpin of this wrongful use of AI, which leads to evaluative domination, comes <em>in part <\/em>from marketing that paints it as a \u201cdigital omniscience\u201d that can therefore be <em>trusted <\/em>to make all kinds of nuanced evaluative decisions on our behalf. This unquestioning attitude towards technology is dangerous; even worse, it can border on idolatry. 
For some thought leaders in tech, such as Alex Pentland, the growth of technological power occasioned by the data and AI revolutions is literally a \u201cGod\u2019s eye view\u201d of the world, from which power is inexorably used to shape society and \u201cherd\u201d people.<a href=\"#_edn47\" name=\"_ednref47\">[47]<\/a><\/p>\n<p>[43] But this \u201cpower\u201d and \u201cvision\u201d depend greatly on everyday people <em>entrusting <\/em>this new technology with the ability to make evaluative judgements better than we humans can, and thereby trusting that its judgements should be carried out without a second thought. Hence, I want to suggest, in the spirit of Luther, that we should be <em>scornful <\/em>and <em>mocking <\/em>of this technology. If evaluative domination requires us first to <em>entrust <\/em>our heart\u2019s image making to AI, perhaps a simple first step towards resistance is <em>laughing <\/em>at the possibility that an AI agent can decide <em>for us <\/em>how to live with God and our neighbor.<\/p>\n<p>[44] Thus, my recommendations, in a metaphorical sense, are both about discerning \u201cthe spirits\u201d and thereby keeping our will <em>free, <\/em>in the true Christian sense of the word. On the one hand, <em>prayer<\/em> reorients our heart\u2019s vision towards God, reminding us of our God, the source of <em>all <\/em>power in the world. On the other hand, appropriate <em>scorn <\/em>frees our heart\u2019s vision from the seductive picture of <em>technological control <\/em>that would have us imagine we can literally <em>make <\/em>an AI that could choose <em>for us<\/em>, giving us <em>release <\/em>from the burden of evaluative responsibility. 
Once we are oriented and free, we are better able to see new AI technologies for what they are\u2014a gift, given through human ingenuity, and so a gift to be <em>used carefully<\/em>.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\">[1]<\/a> For a good, accessible overview of the conceptual and technological developments of AI systems, see Eugene Charniak, <em>AI &amp; I: An Intellectual History of Artificial Intelligence<\/em> (The MIT Press, 2024). It should be emphasized that the development and implementation of so-called \u201ctransformer architecture\u201d in 2017 was a key moment for our current AI boom because it allowed language models to learn semantic context more quickly and accurately through a more efficient form of so-called \u201cattention.\u201d The more technically-minded reader may be interested in reading the original Google research team\u2019s paper: Ashish Vaswani et al., \u201cAttention Is All You Need,\u201d <em>Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS \u201917)<\/em>, 2017, 6000\u20136010, https:\/\/doi.org\/10.48550\/arXiv.1706.03762.<\/p>\n<p><a href=\"#_ednref2\" name=\"_edn2\">[2]<\/a> Nick Bostrom, <em>Superintelligence: Paths, Dangers, Strategies<\/em> (Oxford University Press, 2014), https:\/\/catalog.hathitrust.org\/Record\/102324849.<\/p>\n<p><a href=\"#_ednref3\" name=\"_edn3\">[3]<\/a> The best introduction to some of these concerns written for a general audience remains Kate Crawford, <em>Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence<\/em> (Yale University Press, 2022). For more specific and technical issues, see Gabbrielle M. 
Johnson, \u201cAre Algorithms Value-Free?: Feminist Theoretical Virtues in Machine Learning,\u201d <em>Journal of Moral Philosophy<\/em> 21 (2023): 27\u201361, https:\/\/doi.org\/10.1163\/17455243-20234372; Shoshana Zuboff, <em>The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power<\/em> (PublicAffairs, 2019); Benedetta Brevini, \u201cBlack Boxes, Not Green: Mythologizing Artificial Intelligence and Omitting the Environment,\u201d <em>Big Data &amp; Society<\/em> 7, no. 2 (2020): 1\u20135; Dario Amodei et al., \u201cConcrete Problems in AI Safety,\u201d arXiv:1606.06565, preprint, arXiv, July 25, 2016, https:\/\/doi.org\/10.48550\/arXiv.1606.06565; Luciano Floridi et al., <em>AI4People\u2014An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations<\/em>, n.d.; Matthew Shadle, \u201cKiller Robots and Cyber Warfare: Technology and War in the 21st Century,\u201d in <em>T&amp;T Clark Handbook of Christian Ethics<\/em>, ed. Tobias Winright (Bloomsbury, 2021).<\/p>\n<p><a href=\"#_ednref4\" name=\"_edn4\">[4]<\/a> For example, Ted F. Peters, \u201cMachine Intelligence, Artificial General Intelligence, Super-Intelligence, and Human Dignity,\u201d <em>Religions<\/em> 19, no. 975 (2025): 1\u201312; John Wyatt, \u201cThe Impact of AI and Robotics on Health and Social Care,\u201d in <em>The Robot Will See You Now: Artificial Intelligence and the Christian Faith<\/em>, ed. John Wyatt and Stephen N. Williams (SPCK Publishing, 2021); Noreen Herzfeld, <em>The Artifice of Intelligence: Divine and Human Relationship in a Robotic Age<\/em> (Fortress Press, 2023).<\/p>\n<p><a href=\"#_ednref5\" name=\"_edn5\">[5]<\/a> For this basic distinction, see both Joel Feinberg, \u201cHarming as Wronging,\u201d in <em>The Moral Limits of the Criminal Law Volume 1: Harm to Others<\/em>, ed. 
Joel Feinberg (Oxford University Press, 1987), https:\/\/doi.org\/10.1093\/0195046641.003.0004; Rahul Kumar, \u201cWho Can Be Wronged?,\u201d <em>Philosophy &amp; Public Affairs<\/em> 31, no. 2 (2003): 99\u2013118, https:\/\/doi.org\/10.1111\/j.1088-4963.2003.00099.x. For a recent deployment of this distinction in the context of algorithmic technologies, see Nathalie Diberardino et al., \u201cAlgorithmic Harms and Algorithmic Wrongs,\u201d <em>Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency<\/em>, ACM, 2024, 1725\u201332, https:\/\/doi.org\/10.1145\/3630106.3659001.<\/p>\n<p><a href=\"#_ednref6\" name=\"_edn6\">[6]<\/a> Any biblical quotations are from the NRSVue unless otherwise noted.<\/p>\n<p><a href=\"#_ednref7\" name=\"_edn7\">[7]<\/a> These are most famously expressed in Martin Luther, <em>The Bondage of the Will<\/em>, trans. J.I. Packer and O.R. Johnston (Baker Academic, 2012); Martin Luther, \u201cThe Freedom of a Christian,\u201d in <em>The Annotated Luther: Roots of Reform<\/em>, ed. Timothy J. Wengert, vol. 1, The Annotated Luther (Fortress Press, 2015).<\/p>\n<p><a href=\"#_ednref8\" name=\"_edn8\">[8]<\/a> In modern philosophical debates, this is often summed up as \u201cthe ability to do otherwise,\u201d for some philosophers human freedom <em>consists <\/em>in this ability\u2014for philosophical arguments for and against see, Derk Pereboom, <em>Free Will, Agency, and Meaning in Life<\/em> (Oxford University Press, 2014). As well as, Robert Kane, \u201cNew Arguments in Debates on Libertarian Free Will: Responses to Contributors,\u201d in <em>Libertarian Free Will: Contemporary Debates<\/em>, ed. David Palmer (Oxford University Press, 2014).<\/p>\n<p><a href=\"#_ednref9\" name=\"_edn9\">[9]<\/a> See, for example, Timothy O\u2019Connor, \u201cFreedom with a Human Face,\u201d <em>Midwest Studies in Philosophy<\/em> 29, no. 
1 (2005): 207\u201327.<\/p>\n<p><a href=\"#_ednref10\" name=\"_edn10\">[10]<\/a> Philipp Melanchthon, <em>The Loci Communes of Philipp Melanchthon<\/em>, trans. Charles Leander Hill (Boston, MA, 1521), 76\u201377.<\/p>\n<p><a href=\"#_ednref11\" name=\"_edn11\">[11]<\/a> Luther, <em>The Bondage of the Will<\/em>, 107.<\/p>\n<p><a href=\"#_ednref12\" name=\"_edn12\">[12]<\/a> Again, to use the language of contemporary philosophy, this is sometimes called a \u201c<em>source\u201d <\/em>view of freedom, for arguments both for and against this view within secular philosophy see Derk Pereboom, <em>Living Without Free Will<\/em> (Cambridge University Press, 2001).<\/p>\n<p><a href=\"#_ednref13\" name=\"_edn13\">[13]<\/a> Luther, <em>The Bondage of the Will<\/em>, 107.<\/p>\n<p><a href=\"#_ednref14\" name=\"_edn14\">[14]<\/a> I borrow this term\u2014evaluative outlook\u2014from Brewer. Though my usage is slightly different I take the structure of my account of agency to be deeply indebted to and aligned with Brewer\u2019s arguments. For more, see, Talbot Brewer, <em>The Retrieval of Ethics<\/em> (Oxford University Press, 2009).<\/p>\n<p><a href=\"#_ednref15\" name=\"_edn15\">[15]<\/a> Luther, <em>The Bondage of the Will<\/em>, 103\u20134.<\/p>\n<p><a href=\"#_ednref16\" name=\"_edn16\">[16]<\/a> Martin Luther, <em>Lectures on Genesis: Chapters 6 -14<\/em>, ed. Jaroslav Pelikan and Daniel E. Poellot, trans. George V. 
Schick, Luther\u2019s Works (Concordia Publishing House, 1960), 2:123.<\/p>\n<p><a href=\"#_ednref17\" name=\"_edn17\">[17]<\/a> In Latin, humans are \u201c<em>animal rationale, habens cor fingens.<\/em>\u201d Arguably, in this nice turn of phrase you have both the ability to choose (<em>animal rationale<\/em>) and the power of representing what is worth choosing (<em>habens cor fingens<\/em>) presented as bipartite features of the human will.<\/p>\n<p><a href=\"#_ednref18\" name=\"_edn18\">[18]<\/a> Oswald Bayer, <em>Martin Luther\u2019s Theology: A Contemporary Interpretation<\/em>, trans. Thomas H. Trapp (Eerdmans Publishing Company, 2008), 174\u201375.<\/p>\n<p><a href=\"#_ednref19\" name=\"_edn19\">[19]<\/a> Though, of course, choice is still part of this bipartite conception of the will, just less central than some of us moderns (or Erasmus!) might think.<\/p>\n<p><a href=\"#_ednref20\" name=\"_edn20\">[20]<\/a> I take, and modify slightly, this terminology from Talbot Brewer. See especially Brewer, <em>The Retrieval of Ethics<\/em>, 12\u201332.<\/p>\n<p><a href=\"#_ednref21\" name=\"_edn21\">[21]<\/a> This bit of terminology, which I\u2019m also taking from Brewer, can be confusing when it is first encountered. The basic idea is simple, though: \u201cpropositions,\u201d in philosophical parlance, are just structured information purportedly with semantic meaning and a truth-value (it is either true or false). So, for example, the English phrase \u201cI am tired\u201d and the French phrase \u201c<em>je suis fatigu\u00e9<\/em>\u201d are both expressing <em>the same proposition<\/em>, that is, it is the same structuring of information purportedly with semantic meaning and a truth-value, though expressed through two different linguistic mediums (English and French). This means that things which can be represented propositionally are, in principle, specifiable as discrete bits of information that can be structured linguistically. 
So, part of what makes propositional agents different from dialectical agents is that: (i.) the vision of what is valuable must be concrete, specifiable, and determinate, rather than vague and unfolding; and (ii.) the goal of a propositional agent is to <em>make the proposition about value true<\/em>, whereas the dialectical agent may have a variety of non-productive goals.<\/p>\n<p><a href=\"#_ednref22\" name=\"_edn22\">[22]<\/a> For more about functional understandings of group agency, see especially Christian List and Philip Pettit, <em>Group Agency: The Possibility, Design, and Status of Corporate Agents<\/em> (Oxford University Press, 2011), 20\u201321; and Jordan Baker and Michael Ebling, \u201cGroup Agents and the Phenomenology of Joint Action,\u201d <em>Phenomenology and the Cognitive Sciences<\/em>, 2022, 537.<\/p>\n<p><a href=\"#_ednref23\" name=\"_edn23\">[23]<\/a> I mean this in the simple sense that we can ask these AIs to perform tasks and they do these tasks insofar as they are able. In addition, the language of \u201cagentic\u201d or \u201cagentive\u201d AI has a technical definition concerning AI systems that use application programming interfaces (APIs) to interact with other systems outside of their own digital ecosystem and thus can perform actions <em>themselves<\/em>. So, for example, an AI agent might use an API to integrate an LLM with an external search function and some sort of synthetic audio generator, such that if you ask it to \u201cplease book me a flight to Knoxville, Tennessee\u201d it will be able to both search for flights and call the airline on the phone.<\/p>\n<p><a href=\"#_ednref24\" name=\"_edn24\">[24]<\/a> For a more detailed discussion of this phenomenon, see Baker and Ebling, \u201cGroup Agents and the Phenomenology of Joint Action.\u201d<\/p>\n<p><a href=\"#_ednref25\" name=\"_edn25\">[25]<\/a> C. 
Thi Nguyen, \u201cTrust as an Unquestioning Attitude,\u201d <em>Oxford Studies in Epistemology<\/em> 7 (2022): 214.<\/p>\n<p><a href=\"#_ednref26\" name=\"_edn26\">[26]<\/a> Nguyen, \u201cTrust as an Unquestioning Attitude,\u201d 214\u201315.<\/p>\n<p><a href=\"#_ednref27\" name=\"_edn27\">[27]<\/a> Nguyen, \u201cTrust as an Unquestioning Attitude,\u201d 231.<\/p>\n<p><a href=\"#_ednref28\" name=\"_edn28\">[28]<\/a> For more details of these and other cases, see \u201cSpecification Gaming: The Flip Side of AI Ingenuity,\u201d Google DeepMind, December 16, 2024, https:\/\/deepmind.google\/discover\/blog\/specification-gaming-the-flip-side-of-ai-ingenuity\/.<\/p>\n<p><a href=\"#_ednref29\" name=\"_edn29\">[29]<\/a> Kashmir Hill, \u201cA Suicidal Teen, and the Chatbot He Confided In,\u201d <em>The New York Times<\/em> (New York, NY), September 1, 2025, New York Edition; Kevin Roose, \u201cCan A.I. Be Blamed for a Teen\u2019s Suicide?,\u201d Technology, <em>The New York Times<\/em>, October 23, 2024, https:\/\/www.nytimes.com\/2024\/10\/23\/technology\/characterai-lawsuit-teen-suicide.html.<\/p>\n<p><a href=\"#_ednref30\" name=\"_edn30\">[30]<\/a> This has led to some crucial litigation that is working its way through the courts, especially around questions of <em>responsibility<\/em>. See Gabby Miller and Ben Lennett, \u201cBreaking Down the Lawsuit Against Character.AI Over Teen\u2019s Suicide | TechPolicy.Press,\u201d Tech Policy Press, October 23, 2024, https:\/\/techpolicy.press\/breaking-down-the-lawsuit-against-characterai-over-teens-suicide. It is also worth remembering how much money is wrapped up within the chatbot industry. See Cade Metz, \u201cChatbot Start-Up Character.AI Valued at $1 Billion in New Funding Round,\u201d Technology, <em>The New York Times<\/em>, March 23, 2023, https:\/\/www.nytimes.com\/2023\/03\/23\/technology\/chatbot-characterai-chatgpt-valuation.html.<\/p>\n<p><a href=\"#_ednref31\" name=\"_edn31\">[31]<\/a> Ryan K. 
McBain et al., \u201cEvaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment,\u201d <em>Psychiatric Services<\/em>, American Psychiatric Publishing, August 26, 2025, appi.ps.20250086, https:\/\/doi.org\/10.1176\/appi.ps.20250086.<\/p>\n<p><a href=\"#_ednref32\" name=\"_edn32\">[32]<\/a> Wyatt, \u201cThe Impact of AI and Robotics on Health and Social Care.\u201d<\/p>\n<p><a href=\"#_ednref33\" name=\"_edn33\">[33]<\/a> Brian Stiltner, \u201cA Taste of Armageddon: When Warring Is Done by Drones and Robots,\u201d in <em>Can War Be Just in the 21st Century?: Ethicists Engage the Tradition<\/em>, ed. Tobias Winright and Laurie Johnston (Orbis Books, 2015); Crawford, <em>Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence<\/em>, 53\u201388, 181\u2013210.<\/p>\n<p><a href=\"#_ednref34\" name=\"_edn34\">[34]<\/a> For a more detailed argument that expands on these points and analytically describes the mechanics of possession via agential mismatch, see Jordan Baker, \u201cAlgorithmic Trust and Agential Possession,\u201d <em>Modern Theology<\/em> (forthcoming).<\/p>\n<p><a href=\"#_ednref35\" name=\"_edn35\">[35]<\/a> Brandon Vigliarolo, \u201cLarry Ellison Wants to Put All US Data in One Big AI System,\u201d The Register, February 12, 2025, https:\/\/www.theregister.com\/2025\/02\/12\/larry_ellison_wants_all_data\/.<\/p>\n<p><a href=\"#_ednref36\" name=\"_edn36\">[36]<\/a> Christiaan Hetzner, \u201cLarry Ellison Predicts Rise of the Modern Surveillance State Where \u2018Citizens Will Be on Their Best Behavior,\u2019\u201d Fortune, September 28, 2025, https:\/\/fortune.com\/2024\/09\/17\/oracle-larry-ellison-surveillance-state-police-ai\/.<\/p>\n<p><a href=\"#_ednref37\" name=\"_edn37\">[37]<\/a> Zuboff, <em>The Age of Surveillance Capitalism<\/em>, 207.<\/p>\n<p><a href=\"#_ednref38\" name=\"_edn38\">[38]<\/a> Zuboff, <em>The Age of Surveillance Capitalism<\/em>, 224.<\/p>\n<p><a 
href=\"#_ednref39\" name=\"_edn39\">[39]<\/a> Nitasha Tiku et al., \u201cInside Billionaire Peter Thiel\u2019s Private Lectures: Warnings of \u2018the Antichrist\u2019 and U.S. Destruction,\u201d <em>The Washington Post<\/em>, October 10, 2025, https:\/\/www.washingtonpost.com\/technology\/2025\/10\/10\/peter-thiel-antichrist-lectures-leaked\/.<\/p>\n<p><a href=\"#_ednref40\" name=\"_edn40\">[40]<\/a> Zuboff, <em>The Age of Surveillance Capitalism<\/em>, 210\u201311. <strong>Bolding <\/strong>added.<\/p>\n<p><a href=\"#_ednref41\" name=\"_edn41\">[41]<\/a> John Cassian, <em>The Conferences<\/em>, trans. Boniface Ramsey, Ancient Christian Writers: The Works of the Fathers in Translation, ed. Walter J. Burghardt et al., vol. 57 (Paulist Press, 1997), 257\u201358.<\/p>\n<p><a href=\"#_ednref42\" name=\"_edn42\">[42]<\/a> See especially, \u201cOf the Devil and His Works\u201d in Martin Luther, <em>The Table Talk of Martin Luther<\/em>, trans. William Hazlitt (Lutheran Publication Society, 1878), 216, https:\/\/www.ccel.org\/ccel\/luther\/tabletalk.html. For a more general overview of the traditions that sprang up around Luther, see Benjamin T.G. Mayes, \u201cResearch Notes- Demon Possession and Exorcism in Lutheran Orthodoxy,\u201d <em>Concordia Theological Quarterly<\/em> 81, nos. 3\u20134 (2017): 331\u201336.<\/p>\n<p><a href=\"#_ednref43\" name=\"_edn43\">[43]<\/a> See, for example, Kevin W. 
Hector, <em>Christianity as a Way of Life: A Systematic Theology<\/em> (Yale University Press, 2023).<\/p>\n<p><a href=\"#_ednref44\" name=\"_edn44\">[44]<\/a> Luther, <em>The Table Talk of Martin Luther<\/em>, 227.<\/p>\n<p><a href=\"#_ednref45\" name=\"_edn45\">[45]<\/a> Luther, <em>The Table Talk of Martin Luther<\/em>, 227.<\/p>\n<p><a href=\"#_ednref46\" name=\"_edn46\">[46]<\/a> Mayes, \u201cResearch Notes- Demon Possession and Exorcism in Lutheran Orthodoxy,\u201d 334\u201336.<\/p>\n<p><a href=\"#_ednref47\" name=\"_edn47\">[47]<\/a> Zuboff, <em>The Age of Surveillance Capitalism<\/em>, 422\u201323.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>[1] We are living through a technological watershed driven by artificial intelligence. Since the arrival of early generative Large Language Models (LLMs) in 2017, billions of dollars, years of research, and instruments of state power have all been used to reshape our world to better accommodate the next generation of AI models.[1] These technologies are [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[151,154],"tags":[],"class_list":["post-6859","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-artificial-intelligence-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/posts\/6859","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/comments?post=6859"}],"version-history":[{"count":4,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/posts\/6859\/revisions"}],"predecessor-version":[{"id":6874,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/posts\/6859\/revisions\/6874"}],"wp:attachment":[{"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/media?parent=6859"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/categories?post=6859"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/tags?post=6859"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}