{"id":6865,"date":"2025-11-21T01:25:06","date_gmt":"2025-11-21T01:25:06","guid":{"rendered":"https:\/\/learn.elca.org\/jle\/?p=6865"},"modified":"2025-12-01T21:32:12","modified_gmt":"2025-12-01T21:32:12","slug":"machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding","status":"publish","type":"post","link":"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/","title":{"rendered":"Machine Certainty: Ellul, AI and the Crisis of Democratic Understanding"},"content":{"rendered":"<p>[1] \u201cIf we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second, and we just have no idea what any of it means&#8230;.We built the computers, but then we just gave the faintest outline of a blueprint and kind of let these systems develop on their own. I think an analogy here might be that we\u2019re trying to grow a decorative topiary, a decorative hedge that we\u2019re trying to shape.&#8221; -Sam Bowman.<a href=\"#_edn1\" name=\"_ednref1\">[1]<\/a><\/p>\n<p>[2] AI scientist and NYU professor Sam Bowman offers a frank admission about the limits of our knowledge of how large language models (LLMs) produce their outputs. LLMs for the most part use highly sophisticated transformer models to engage in a form of autocomplete&#8211;corrected or affirmed millions of times through human trainers\u2014a process called reinforcement learning. The number of parameters (features) of the model that are manipulated by the algorithm are so vast (numbering in the tens of billions) that teasing out which feature was tweaked to produce which outcome is an impossible task. For his part, Bowman likens the development of transformer LLMs to a \u201cdecorative hedge\u201d that researchers can only shape but not truly know its biology.<\/p>\n<p>[3] The nature of deep learning is such that we are incapable of understanding how it arrives at answers. 
In October 2023, the Stanford Center for Research on Foundation Models scored ten large AI models against 100 transparency indicators covering a wide range of issues, including sources of training data, the labor used to train the models, and the amount of compute power and energy consumed; none of the ten received a passing grade. By May 2024, when the center re-examined the models, transparency had improved marginally (Google\u2019s model, for example, rose from a \u201c40\u201d to a \u201c47\u201d on the index), but the major AI platforms (OpenAI, Meta, Anthropic, Stability AI, Amazon, Google) all received scores of 60 or less. What this means in layperson&#8217;s terms is that we know very little about the data these models are trained on, who trains them, how the algorithms are applied, or how much energy they expend. They take on the characteristics of enchanted objects that appear to have mystical qualities.<\/p>\n<p>[4] Henry Farrell analogizes LLMs to a <em>planchette<\/em> on a Ouija board.<a href=\"#_edn2\" name=\"_ednref2\">[2]<\/a> In the same way that a planchette is designed to appear as if it is eerily moved by an external force, AI-generated text and art can bear an uncanny resemblance to human-generated outputs. This uncanniness is both compelling and repelling. We are fascinated by the ability of LLMs and other deep learning algorithms to mirror human creativity, yet we are also made uneasy by models patterned after the human brain that not only can mirror our thoughts but can also produce seemingly \u201cother-worldly\u201d outcomes.<\/p>\n<p>[5] This other-worldliness can produce bizarre reactions from users. A May 2025 article in <em>Rolling Stone<\/em> interviewed a number of people whose significant others were engaging with AI chat apps built on large language models as if the apps either had divine qualities or empowered the users to become divine. 
In one instance, a wife reported that her husband told her that the AI with which he was interacting \u201cwas teaching him how to talk to God, or sometimes that the bot was God \u2014 and then that he himself was God.\u201d<a href=\"#_edn3\" name=\"_ednref3\">[3]<\/a><\/p>\n<p>[6] What does it mean when the world becomes too complex for us to understand? When AI and algorithms detach us from understanding and knowing, upon what do we base our authority?<\/p>\n<p>[7] The more we use AI to dive into the world&#8217;s complexity, the more turbulent and confounding the world becomes. Paradoxically, as AI makes more scientific discoveries, the world feels increasingly incomprehensible to many. If we can\u2019t draw upon theories for how we think the world works, then what\u2019s the point of debate and discourse in a democratic process? If we think the algorithm will figure it out for us, then why would we deliberate, debate, and take part in decision-making? How can we evaluate whether our representatives are making good decisions? Increasingly, we rely upon information diets infused with recommendation algorithms that \u201cfeed us back to ourselves\u201d and become susceptible to what M. P. Lynch calls epistemic arrogance\u2014belief without empirical validation.<a href=\"#_edn4\" name=\"_ednref4\">[4]<\/a><\/p>\n<p>[8] AI models that can produce answers through pattern detection but have no sense of understanding signal a profound shift in scientific discovery. The concept of human understanding is arguably what links the scientific revolution to the Enlightenment. Scientific discovery of empirical facts can only help us better explain and understand the world if we are able to abstract from those facts a theory using reason. The scientific method, therefore, provides us with the basis for rational decision-making. 
Self-governance and liberal democracy rely upon this potential for human understanding.<\/p>\n<p>[9] Limits, however, have always been present in the Enlightenment project. Doubt has always been a central element of rigorous critical thinking. The world was too vast and complex to fully \u201cknow,\u201d but we could approximate this knowledge. Although we could not collect data on the entire world, we could use theory to abstract from the world just enough so we could conceptualize it. Abstraction was essential to science: any given theory will not explain every possible case, but, to take sand as an example, it can approximate something about the general characteristics of sand grains without examining every grain.<\/p>\n<p>[10] Abstraction requires a form of intellectual humility. It reflects the fact that we can never \u201cknow\u201d the world in its entirety at any given moment because it is too vast and complex. A theory cannot explain 100 percent of the cases to which it is applied. This plurality of conditions is why empiricists from David Hume onward cautioned against claiming that science can \u201cprove\u201d propositions to be true. However, the inability to refer to anything as \u201ctrue\u201d also causes doubt and uncertainty. If the world is ultimately too complex to fully understand, that leaves room for the mystery and wonder of the transcendent.<\/p>\n<p><strong>Disenchantment and Modernity<\/strong><\/p>\n<p>[11] A common theme for understanding the break between modern and pre-modern societies is the theory of disenchantment. Popularized by Max Weber, whose essays on the subject first appeared in English translation in 1946, the theory suggests that the world before the Protestant Reformation was bathed in mysticism and wonder. Spirits and deities were responsible for the mysteries of human existence. The capricious nature of these spirits presumably needed to be tamed by prayer or ritual. Beyond ritual, humans are a meaning-making species. Enchantment was a way to make sense of a dangerous and complex world. 
Before the language of scientific rationality, humans left the ultimate meaning of reality to the gods.<\/p>\n<p>[12] However, the idea of a clean break between an enchanted pre-modern and a disenchanted modern has been vigorously debated. For example, David Noble in his 1997 book <em>The Religion of Technology<\/em> claims that in modernity, technology serves as a main source of enchantment. For Noble, there are dual impulses in modernity: the inevitability of technological advance and the persistent vitality of religious fundamentalism. But rather than see these in opposition, Noble finds a complementarity. Noble claims technology is an &#8220;essentially religious endeavor&#8221;<a href=\"#_edn5\" name=\"_ednref5\">[5]<\/a> animated by the same impulses that drive religious faith. Technologists locate a redemptive potential within technological progress. But Noble cautions against drawing a simple analogy between religious faith and technological faith by noting that what connects the two is an assumption of an originally sinful humanity whose \u201cfallen nature\u201d can be redeemed. Both faith and technology are then seen as a means of redemption through an external agent.<\/p>\n<p>[13] Others have been more insistent on a disenchantment thesis. A particularly interesting view comes from Jacques Ellul\u2019s <em>The Technological Society<\/em>. Ellul differentiates between technology and what he calls <em>technique<\/em>. By <em>technique<\/em>, he refers to states of mind that structure our habits. This is different from our usual conceptualization of technique, which refers to a distinct method for approaching a task or craft; we can think of the Alexander technique of movement as an example. In <em>The Technological Society<\/em>, Ellul posits technique as an all-encompassing ideology:<\/p>\n<p style=\"padding-left: 40px;\">Technique has penetrated the deepest recesses of the human being. 
The machine tends not only to create a new human environment, but also to modify man&#8217;s very essence. The milieu in which he lives is no longer his. He must adapt himself, as though the world were new, to a universe for which he was not created.<a href=\"#_edn6\" name=\"_ednref6\">[6]<\/a><\/p>\n<p>[14] Ellul\u2019s concern was the widespread incorporation of technique at the expense of spontaneous action. Ellul argues that while technique existed in the pre-modern era (e.g., magic), it didn\u2019t have the patina of rationality. He inverts the common view that the <em>disenchantment<\/em> of modern life is reducible to capitalism or the scientific method. He suggests that machines themselves are drivers of a rationality-based technique. The proliferation of machines instills in us habits of efficiency and bureaucratic rationality such that a greater and greater number of our interactions become oriented towards a certain kind of efficiency.<\/p>\n<p>[15] Artificial intelligence is an efficiency machine par excellence. It provides ready-made answers to practically any question a user poses without the painstaking task of conducting an online search and sifting through websites. That practice, in turn, was more efficient than driving to the library and wandering through the stacks of books to synthesize knowledge. Yet this optimization of the technique of knowledge acquisition, while making the process more efficient, reduces our curiosity regarding the debates and nuances embedded within the knowledge discovery process.<\/p>\n<p>[16] This doesn\u2019t simply apply to knowledge production. Ellul saw the logic of rational technique overwhelming all institutions in society (community, education, kin networks, etc.). This is so because a technique of rational efficiency has optimization as its telos; hence, only one solution can be optimal (just as AI produces one answer, unless otherwise directed). 
This makes the technique of rationality a universal pursuit, which undermines the plurality and diversity of localized techniques emerging from place-bound communities. To survive, humans and social institutions are required to adapt to the technique. Higher education, for example, becomes an exercise in \u201ctraining for technique\u201d rather than training in critical inquiry. Perhaps most perniciously, technique can be applied to human behavior itself, uncovering \u201coptimal\u201d ways to control it through propaganda and advertising.<\/p>\n<p>[17] Much of our lives is managed by what F. Pasquale called black-box algorithms.<a href=\"#_edn7\" name=\"_ednref7\">[7]<\/a> Algorithms dictate what we watch, what we order, whom we date, whether we get a loan or a job, all on the basis of some AI-aided classification metric. The sociologists Marion Fourcade and Kieran Healy call this increased engagement with systems of classification that rank us through our data the <em>ordinal society<\/em>.<a href=\"#_edn8\" name=\"_ednref8\">[8]<\/a> Because these algorithms are proprietary intellectual property, we know little about how they are set upon our datafied lives to rank us or our choices. The political philosopher Colin Koopman argues that the hoard of data collected from us, by invitation via engagement algorithms, constitutes a datafied \u201csecond self\u201d: a self that is \u201cfrom us\u201d but not entirely constitutive \u201cof us.\u201d<a href=\"#_edn9\" name=\"_ednref9\">[9]<\/a> Unsupervised machine learning classification models allow individuals to be clustered into groups of which they may not even recognize themselves as members. 
If there are enough data points, this clustering may seem incomprehensible to us, and yet it constitutes an ordinal classification of us that is used to allocate resources and social rights\u2014and most of us submit to this classification system without reflection because we believe it to be optimal. In Ellul\u2019s language, it is our \u201ctechnique.\u201d<\/p>\n<p>[18] These habits move us away from caring about what the algorithm doesn\u2019t show us or the knowledge the AI doesn\u2019t include in its answer. This has serious implications for liberal democracy. If we trade understanding for optimization, we begin to lose interest in explanations for how the world works. If we lose this desire for understanding, we then don\u2019t have much to say about how the world should be governed, how the world ought to be, or what the good is. If AI provides us with instant answers and an unearned certainty about the world, we lose what Eran Fisher calls the emancipatory interest to defend liberal institutions.<a href=\"#_edn10\" name=\"_ednref10\">[10]<\/a> For him, if our primary goal becomes AI-derived optimized knowledge, we learn to see \u201cfreedom\u201d as coming from outside of us and not from an internal drive to self-determination or self-governance. We become indifferent to the fact that AI might obscure causality or nuance and hence make it difficult for us to understand how it gets to its results. We won\u2019t care because we won\u2019t need other humans to help us understand the world. If the algorithm identifies \u201cother\u201d citizens as \u201cthreats to the state,\u201d we might lose our desire to challenge the \u201calgorithm\u2019s logic\u201d since we don\u2019t understand it, nor are we interested in understanding its rationale. As we build more relationships with human-acting machines, we lose our investment in defending the rights of other humans. 
We instead hand over decision-making to an intelligence that we can\u2019t quite explain or wrap our heads around, much like leaving it up to \u201cGod\u2019s will.\u201d Megan O\u2019Gieblyn refers to the coming age of AI as \u201cthe new dark age\u201d for this reason.<a href=\"#_edn11\" name=\"_ednref11\">[11]<\/a><\/p>\n<p>[19] Paradoxically, AI retains a power of certitude by seeming to be a form of magic. Lucy Suchman makes a case that the solution to retaining human-centeredness in the face of AI is to emphasize its \u201cthingness.\u201d<a href=\"#_edn12\" name=\"_ednref12\">[12]<\/a> Suchman argues that AI is a floating signifier that, by remaining vague, serves the interest of AI promoters because it can attain an enchanted, magical quality. This has the effect of reifying AI as something more mystical and powerful than it actually is. By insisting that AI is energy-intensive complex math, we empty it of its mysticism.<\/p>\n<p>[20] While the Enlightenment project presumably disenchanted the world by making explanation possible, the algorithmic\/AI project &#8220;re-enchants&#8221; it by being so computationally obscure that it returns us to astonishment. This will either invite us to break away from the insistence on empirical understanding towards the inner world of emotion and experience, or it will further drive us towards desire-fulfillment technologies that separate us from humanity and make us less willing to defend institutions that protect individual rights. In recent years, a number of scholars have persuasively argued that we may be entering a post-secular age. This line of thought argues that the contract between the religious and the secular in society has been conducted on decidedly Western Protestant terms. The core of the complaint is that liberalism&#8217;s insistence on using reason as the foundation for deliberation in the public sphere is inherently biased. 
Liberals argue that empirical evidence is a <em>lingua franca<\/em> that we can all understand and through which we can arrive at a consensus. Religious arguments, by contrast, rest on claims that cannot be epistemologically proven and hence must be kept out of the public sphere. But some might argue, as Luther once said to the radical reformers from whom liberalism may have sprung, that reason always has a bias towards the highest bidder, making her the devil\u2019s most lovely whore.<a href=\"#_edn13\" name=\"_ednref13\">[13]<\/a><\/p>\n<p>[21] AI exacerbates this return to the non-rational, which can be either destructive or transformative. Although the algorithmic age includes no post-Lutheran Friedrich Nietzsche to tell us whether \u201creason is dead,\u201d we can still feel the seismic effects of the moment we are in.<\/p>\n<p>[22] As we navigate this new algorithmic age, we must grapple with its implications for faith, reason, and the very nature of human understanding. The challenge lies in finding a balance between the enchantment offered by algorithms and the critical thinking necessary for a functioning society. Perhaps, in this tension, we can find a new form of meaning that acknowledges both the rational and the mystical aspects of our existence without succumbing to seeing ourselves as desire-satisfiers regardless of the societal implications.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\">[1]<\/a> Sam Bowman quoted in N. 
Hassenfeld, \u201cEven the scientists who build AI can&#8217;t tell you how it works: \u2018We built it, we trained it, but we don&#8217;t know what it&#8217;s doing,\u2019\u201d <em>Unexplainable<\/em> [Podcast], Vox, (July 15, 2023). <a href=\"https:\/\/www.vox.com\/unexplainable\/2023\/7\/15\/23793840\/chat-gpt-ai-science-mystery-unexplainable-podcast\">https:\/\/www.vox.com\/unexplainable\/2023\/7\/15\/23793840\/chat-gpt-ai-science-mystery-unexplainable-podcast<\/a><\/p>\n<p><a href=\"#_ednref2\" name=\"_edn2\">[2]<\/a> Henry Farrell, \u201cLarge Language Models are Uncanny,\u201d <em>Programmable Mutter<\/em>, May 13, 2024, <a href=\"https:\/\/www.programmablemutter.com\/p\/large-language-models-are-uncanny\">https:\/\/www.programmablemutter.com\/p\/large-language-models-are-uncanny<\/a><\/p>\n<p><a href=\"#_ednref3\" name=\"_edn3\">[3]<\/a> Miles Klee, \u201cPeople Are Losing Loved Ones to AI-Fueled Spiritual Fantasies: Self-styled prophets are claiming they have \u2018awakened\u2019 chatbots and accessed the secrets of the universe through ChatGPT,\u201d <em>Rolling Stone<\/em>, Online, (May 4, 2025).<\/p>\n<p><a href=\"#_ednref4\" name=\"_edn4\">[4]<\/a> M. P. Lynch, \u201cEpistemic arrogance and the value of political dissent,\u201d in <em>Voicing Dissent<\/em>, ed. C. R. Johnson, (NY: Routledge, 2018).<\/p>\n<p><a href=\"#_ednref5\" name=\"_edn5\">[5]<\/a> David Noble, <em>The Religion of Technology: The Divinity of Man and the Spirit of Invention<\/em>, (Oxford: Oxford University Press, 1997), 7.<\/p>\n<p><a href=\"#_ednref6\" name=\"_edn6\">[6]<\/a> Jacques Ellul, <em>The Technological Society<\/em>, trans. J. Wilkinson, (New York: Random House, 1964), 325.<\/p>\n<p><a href=\"#_ednref7\" name=\"_edn7\">[7]<\/a> F. 
Pasquale, <em>The Black Box Society: The Secret Algorithms That Control Money and Information<\/em>, (Cambridge: Harvard University Press, 2017).<\/p>\n<p><a href=\"#_ednref8\" name=\"_edn8\">[8]<\/a> M. Fourcade and K. Healy, <em>The Ordinal Society<\/em>, (Cambridge: Harvard University Press, 2024).<\/p>\n<p><a href=\"#_ednref9\" name=\"_edn9\">[9]<\/a> C. Koopman, <em>How We Became Our Data: A Genealogy of the Informational Person<\/em>, (Chicago: University of Chicago Press, 2019).<\/p>\n<p><a href=\"#_ednref10\" name=\"_edn10\">[10]<\/a> Eran Fisher, <em>Algorithms and Subjectivity: The Subversion of Critical Knowledge<\/em>, (NY: Routledge, 2022).<\/p>\n<p><a href=\"#_ednref11\" name=\"_edn11\">[11]<\/a> M. O&#8217;Gieblyn, \u201cThe intelligence of machines has exceeded our own to the extent that programmers accept their decision-making with blind faith. Does that make AI our new god?\u201d <em>The Believer<\/em>, (June 1, 2019), <a href=\"https:\/\/www.thebeliever.net\/artificial-intelligence-god\/\">https:\/\/www.thebeliever.net\/artificial-intelligence-god\/<\/a><\/p>\n<p><a href=\"#_ednref12\" name=\"_edn12\">[12]<\/a> L. Suchman, \u201cThe uncontroversial &#8216;thingness&#8217; of AI,\u201d <em>Big Data &amp; Society<\/em> 10, no. 2 (2023): 20539517231206794.<\/p>\n<p><a href=\"#_ednref13\" name=\"_edn13\">[13]<\/a> Martin Luther, \u201cThe Last Sermon in Wittenberg, 1546,\u201d in <em>Luther\u2019s Works<\/em>, vol. 
51, (Minneapolis, Fortress Press, 1959), 374.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>[1] \u201cIf we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second, and we just have no idea what any of it means&#8230;.We built the computers, but then we just gave the faintest outline of a blueprint and kind [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[151,154,1],"tags":[],"class_list":["post-6865","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-artificial-intelligence-ai","category-uncategorized"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Machine Certainty: Ellul, AI and the Crisis of Democratic Understanding - Journal of Lutheran Ethics<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Machine Certainty: Ellul, AI and the Crisis of Democratic Understanding - Journal of Lutheran Ethics\" \/>\n<meta property=\"og:description\" content=\"[1] \u201cIf we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second, and we just have no idea what any of it means&#8230;.We built the computers, but then we just gave the faintest outline of a blueprint and kind [&hellip;]\" \/>\n<meta property=\"og:url\" 
content=\"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/\" \/>\n<meta property=\"og:site_name\" content=\"Journal of Lutheran Ethics\" \/>\n<meta property=\"article:published_time\" content=\"2025-11-21T01:25:06+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-01T21:32:12+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/learn.elca.org\/jle\/wp-content\/uploads\/sites\/3\/2021\/01\/Journal_of_Lutheran_Ethics_Logo.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"250\" \/>\n\t<meta property=\"og:image:height\" content=\"250\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"heatherdean\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"heatherdean\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"13 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/\"},\"author\":{\"name\":\"heatherdean\",\"@id\":\"https:\/\/learn.elca.org\/jle\/#\/schema\/person\/4493166c38ac3d4ed054c77e294df9fe\"},\"headline\":\"Machine Certainty: Ellul, AI and the Crisis of Democratic 
Understanding\",\"datePublished\":\"2025-11-21T01:25:06+00:00\",\"dateModified\":\"2025-12-01T21:32:12+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/\"},\"wordCount\":2927,\"publisher\":{\"@id\":\"https:\/\/learn.elca.org\/jle\/#organization\"},\"articleSection\":[\"Artificial Intelligence\",\"Artificial Intelligence (AI)\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/\",\"url\":\"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/\",\"name\":\"Machine Certainty: Ellul, AI and the Crisis of Democratic Understanding - Journal of Lutheran Ethics\",\"isPartOf\":{\"@id\":\"https:\/\/learn.elca.org\/jle\/#website\"},\"datePublished\":\"2025-11-21T01:25:06+00:00\",\"dateModified\":\"2025-12-01T21:32:12+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/learn.elca.org\/jle\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Machine Certainty: Ellul, AI and the Crisis of Democratic Understanding\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/learn.elca.org\/jle\/#website\",\"url\":\"https:\/\/learn.elca.org\/jle\/\",\"name\":\"Journal of Lutheran 
Ethics\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/learn.elca.org\/jle\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/learn.elca.org\/jle\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/learn.elca.org\/jle\/#organization\",\"name\":\"ELCA - Journal of Lutheran Ethics\",\"url\":\"https:\/\/learn.elca.org\/jle\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/learn.elca.org\/jle\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/learn.elca.org\/jle\/wp-content\/uploads\/sites\/3\/2021\/01\/Journal_of_Lutheran_Ethics_Logo.jpg\",\"contentUrl\":\"https:\/\/learn.elca.org\/jle\/wp-content\/uploads\/sites\/3\/2021\/01\/Journal_of_Lutheran_Ethics_Logo.jpg\",\"width\":250,\"height\":250,\"caption\":\"ELCA - Journal of Lutheran Ethics\"},\"image\":{\"@id\":\"https:\/\/learn.elca.org\/jle\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/learn.elca.org\/jle\/#\/schema\/person\/4493166c38ac3d4ed054c77e294df9fe\",\"name\":\"heatherdean\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/1d3e5eff554ddaea495a274433db560cd82b346d68d3aeeb680955be3e7aa504?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/1d3e5eff554ddaea495a274433db560cd82b346d68d3aeeb680955be3e7aa504?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/1d3e5eff554ddaea495a274433db560cd82b346d68d3aeeb680955be3e7aa504?s=96&d=mm&r=g\",\"caption\":\"heatherdean\"},\"url\":\"https:\/\/learn.elca.org\/jle\/author\/hdean\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Machine Certainty: Ellul, AI and the Crisis of Democratic Understanding - Journal of Lutheran Ethics","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/","og_locale":"en_US","og_type":"article","og_title":"Machine Certainty: Ellul, AI and the Crisis of Democratic Understanding - Journal of Lutheran Ethics","og_description":"[1] \u201cIf we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second, and we just have no idea what any of it means&#8230;.We built the computers, but then we just gave the faintest outline of a blueprint and kind [&hellip;]","og_url":"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/","og_site_name":"Journal of Lutheran Ethics","article_published_time":"2025-11-21T01:25:06+00:00","article_modified_time":"2025-12-01T21:32:12+00:00","og_image":[{"width":250,"height":250,"url":"https:\/\/learn.elca.org\/jle\/wp-content\/uploads\/sites\/3\/2021\/01\/Journal_of_Lutheran_Ethics_Logo.jpg","type":"image\/jpeg"}],"author":"heatherdean","twitter_card":"summary_large_image","twitter_misc":{"Written by":"heatherdean","Est. 
reading time":"13 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/#article","isPartOf":{"@id":"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/"},"author":{"name":"heatherdean","@id":"https:\/\/learn.elca.org\/jle\/#\/schema\/person\/4493166c38ac3d4ed054c77e294df9fe"},"headline":"Machine Certainty: Ellul, AI and the Crisis of Democratic Understanding","datePublished":"2025-11-21T01:25:06+00:00","dateModified":"2025-12-01T21:32:12+00:00","mainEntityOfPage":{"@id":"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/"},"wordCount":2927,"publisher":{"@id":"https:\/\/learn.elca.org\/jle\/#organization"},"articleSection":["Artificial Intelligence","Artificial Intelligence (AI)"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/","url":"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/","name":"Machine Certainty: Ellul, AI and the Crisis of Democratic Understanding - Journal of Lutheran 
Ethics","isPartOf":{"@id":"https:\/\/learn.elca.org\/jle\/#website"},"datePublished":"2025-11-21T01:25:06+00:00","dateModified":"2025-12-01T21:32:12+00:00","breadcrumb":{"@id":"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/learn.elca.org\/jle\/machine-certainty-ellul-ai-and-the-crisis-of-democratic-understanding\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/learn.elca.org\/jle\/"},{"@type":"ListItem","position":2,"name":"Machine Certainty: Ellul, AI and the Crisis of Democratic Understanding"}]},{"@type":"WebSite","@id":"https:\/\/learn.elca.org\/jle\/#website","url":"https:\/\/learn.elca.org\/jle\/","name":"Journal of Lutheran Ethics","description":"","publisher":{"@id":"https:\/\/learn.elca.org\/jle\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/learn.elca.org\/jle\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/learn.elca.org\/jle\/#organization","name":"ELCA - Journal of Lutheran Ethics","url":"https:\/\/learn.elca.org\/jle\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/learn.elca.org\/jle\/#\/schema\/logo\/image\/","url":"https:\/\/learn.elca.org\/jle\/wp-content\/uploads\/sites\/3\/2021\/01\/Journal_of_Lutheran_Ethics_Logo.jpg","contentUrl":"https:\/\/learn.elca.org\/jle\/wp-content\/uploads\/sites\/3\/2021\/01\/Journal_of_Lutheran_Ethics_Logo.jpg","width":250,"height":250,"caption":"ELCA - Journal of Lutheran 
Ethics"},"image":{"@id":"https:\/\/learn.elca.org\/jle\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/learn.elca.org\/jle\/#\/schema\/person\/4493166c38ac3d4ed054c77e294df9fe","name":"heatherdean","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/1d3e5eff554ddaea495a274433db560cd82b346d68d3aeeb680955be3e7aa504?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/1d3e5eff554ddaea495a274433db560cd82b346d68d3aeeb680955be3e7aa504?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/1d3e5eff554ddaea495a274433db560cd82b346d68d3aeeb680955be3e7aa504?s=96&d=mm&r=g","caption":"heatherdean"},"url":"https:\/\/learn.elca.org\/jle\/author\/hdean\/"}]}},"_links":{"self":[{"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/posts\/6865","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/comments?post=6865"}],"version-history":[{"count":3,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/posts\/6865\/revisions"}],"predecessor-version":[{"id":6878,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/posts\/6865\/revisions\/6878"}],"wp:attachment":[{"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/media?parent=6865"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/categories?post=6865"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/learn.elca.org\/jle\/wp-json\/wp\/v2\/tags?post=6865"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}