Machine Certainty: Ellul, AI and the Crisis of Democratic Understanding

[1] “If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second, and we just have no idea what any of it means….We built the computers, but then we just gave the faintest outline of a blueprint and kind of let these systems develop on their own. I think an analogy here might be that we’re trying to grow a decorative topiary, a decorative hedge that we’re trying to shape.” -Sam Bowman.[1]

[2] AI scientist and NYU professor Sam Bowman offers a frank admission about the limits of our knowledge of how large language models (LLMs) produce their outputs. At their core, LLMs use highly sophisticated transformer models to perform a form of autocomplete, predicting the next word in a sequence; their outputs are then corrected or affirmed millions of times by human trainers in a process called reinforcement learning from human feedback. The number of parameters (features) the algorithm manipulates is so vast (numbering in the tens of billions) that teasing out which feature was tweaked to produce which outcome is an impossible task. For his part, Bowman likens the development of transformer LLMs to a “decorative hedge” that researchers can shape but whose inner biology they cannot truly know.
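
To make the “autocomplete” intuition concrete, here is a deliberately toy sketch (an illustration for this essay, not Bowman’s example and nothing like a real transformer): it predicts the next word purely from counts in a tiny, invented corpus, whereas an LLM learns the same kind of next-word prediction through billions of tuned parameters.

```python
# A toy illustration of "autocomplete": pick the most likely next word given the
# current one, based on counts from a tiny made-up corpus. Real LLMs do not use a
# lookup table like this; they learn the prediction from billions of parameters.
from collections import Counter, defaultdict

corpus = "we shape the hedge and the hedge shapes us and we shape the model".split()

# Count which word tends to follow which (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def autocomplete(word, steps=5):
    """Greedily extend a prompt by repeatedly choosing the most frequent next word."""
    output = [word]
    for _ in range(steps):
        followers = next_word_counts.get(output[-1])
        if not followers:
            break
        output.append(followers.most_common(1)[0][0])
    return output

print(" ".join(autocomplete("we")))  # e.g., "we shape the hedge and the"
```

Even in this trivial case the “reasoning” is nothing but frequencies; scaled up to billions of parameters, the same predictive logic becomes the opaque hedge Bowman describes.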

[3] The nature of deep learning is such that we are incapable of understanding how it arrives at its answers. In October 2023, the Stanford Center for Research on Foundation Models examined ten large AI models against 100 transparency indicators covering a wide range of issues, including sources of training data, information about the labor used to train the models, and the amount of compute power and energy used, among others. None of the ten models received a passing grade. By May 2024, when the center re-examined the models, there had been marginal improvement in transparency (for example, Google’s model went from a 40 to a 47 on the index), but the major AI platforms (OpenAI, Meta, Anthropic, Stability AI, Amazon, Google) all received scores of 60 or less. What this means in layperson’s terms is that we know very little about the data these models are trained on, who trains them, how the algorithms are applied, or how much energy they expend. They take on the characteristics of enchanted objects that appear to have mystical qualities.

[4] Henry Farrell analogizes LLMs to a planchette on a Ouija board.[2] In the same way that a planchette is designed to appear as if it is eerily moved by an external force, AI-generated text and art can have an eerie resemblance to human-generated outputs. This sense of eeriness is both compelling and repelling. We are fascinated by the ability of LLMs and other deep learning algorithms to mirror human creativity, yet we are also made uneasy by models patterned after the human brain that not only can mirror our thoughts but can also produce seemingly “other-worldly” outcomes.

[5] This other-worldliness can produce bizarre reactions from users. A May 2025 article in Rolling Stone interviewed a number of people whose significant others were engaging with large language model chat apps as if the apps either had divine qualities or empowered the users to become divine. In one instance, a wife reported that her husband told her that the AI with which he was interacting “was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.”[3]

[6] What does it mean when the world becomes too complex for us to understand? When AI/algorithms detach us from understanding and knowing, what do we base our authority upon?

[7] The more we use AI to dive into the world’s complexity, the more turbulent and confounding the world becomes. Paradoxically, as AI makes more scientific discoveries, the world becomes increasingly incomprehensible to many of us. If we can’t draw upon theories for how we think the world works, then what’s the point of debate and discourse in a democratic process? If we think the algorithm will figure it out for us, then why would we deliberate, debate, and take part in decision-making? How can we evaluate whether our representatives are making good decisions? Increasingly, we rely upon information diets infused with recommendation algorithms that “feed us back to ourselves,” and we become susceptible to what M. P. Lynch calls epistemic arrogance: belief without empirical validation.[4]

[8] AI models that can produce answers through pattern detection but have no sense of understanding signal a profound shift in scientific discovery. The concept of human understanding is arguably what links the scientific revolution to the Enlightenment. Scientific discovery of empirical facts can only help us better explain and understand the world if we are able to abstract from those facts a theory using reason. The scientific method, therefore, provides us with the basis for rational decision-making. Self-governance and liberal democracy rely upon this potential for human understanding.

[9] Limits, however, have always been present in the Enlightenment project. Doubt has always been a central element of rigorous critical thinking. The world was too vast and complex to fully “know,” but we could approximate this knowledge. Although we could not collect data on the entire world, we could use theory to abstract from the world just enough to conceptualize it. Abstraction was essential to science: no theory explains every possible case, but a theory of, say, how sand behaves can still approximate something about the characteristics of sand grains without describing each one.

[10] Abstraction requires a form of intellectual humility. It reflects the fact that we can never “know” the world in its entirety at any given moment because it is too vast and complex. A theory cannot explain 100 percent of the cases to which it is applied. This plurality of conditions is why empiricists from David Hume onward cautioned against claiming that science can “prove” propositions to be true. However, the inability to refer to anything as “true” also causes doubt and uncertainty. If the world is ultimately too complex to fully understand, that leaves room for the mystery and wonder of the transcendent.

Disenchantment and Modernity

[11] A common theme for understanding the break between modern and pre-modern societies is the theory of disenchantment. Popularized in English through Max Weber’s essays translated in 1946, the theory suggests that the pre-Protestant Reformation world was bathed in mysticism and wonder. Spirits and deities were responsible for the mysteries of human existence. The capricious nature of these spirits presumably needed to be tamed by prayer or ritual. Beyond ritual, humans are a meaning-making species. Enchantment was a way to make sense of a dangerous and complex world. Before the language of scientific rationality, humans left the ultimate meaning of reality to the gods.

[12] However, the idea of a clean break between an enchanted pre-modern world and a disenchanted modern one has been vigorously debated. For example, David Noble, in his 1997 book The Religion of Technology, claims that in modernity technology serves as a main source of enchantment. For Noble, there are dual impulses in modernity: the inevitability of technological advance and the growing force of religious fundamentalism. But rather than see these in opposition, Noble finds a complementarity. Noble claims technology is an “essentially religious endeavor”[5] driven by the same impulses that drive religious faith. Technologists place a redemptive potential within technological progress. But Noble cautions against drawing a simple analogy between religious faith and technological faith by noting that what connects the two is an assumption of an originally sinful humanity whose “fallen nature” can be redeemed. Both faith and technology are then seen as means of redemption through an external agent.

[13] Others have been more insistent on a disenchantment thesis. A particularly interesting view comes from Jacques Ellul’s The Technological Society. Ellul differentiates between technology and what he calls technique. By technique, he refers to states of mind that structure our habits. This is different from our usual conceptualization of technique, which refers to a distinct method for approaching a task or craft; we can think of the Alexander technique of movement as an example. In The Technological Society, Ellul posits technique as an all-encompassing ideology:

Technique has penetrated the deepest recesses of the human being. The machine tends not only to create a new human environment, but also to modify man’s very essence. The milieu in which he lives is no longer his. He must adapt himself, as though the world were new, to a universe for which he was not created.[6]

[14] Ellul’s concern was the widespread incorporation of technique at the expense of spontaneous action. Ellul argues that while technique existed in the pre-modern era (e.g., magic), it didn’t have the patina of rationality. He inverts the common view that the disenchantment of modern life is reducible to capitalism or the scientific method. He suggests that machines themselves are drivers of a rationality-based technique. The proliferation of machines instills in us habits of efficiency and bureaucratic rationality, such that more and more of our interactions become oriented toward a certain kind of efficiency.

[15] Artificial intelligence is an efficiency machine par excellence. It provides ready-made answers to practically any question a user poses without the painstaking task of conducting an online search and sifting through websites. That practice, in turn, was more efficient than driving to the library and wandering through the stacks to synthesize knowledge. Yet this optimization of the technique of knowledge acquisition, while making the process more efficient, dulls our curiosity about the debates and nuances embedded within the process of discovery.

[16] This doesn’t simply apply to knowledge production. Ellul saw the logic of rational technique overwhelming all institutions in society (community, education, kin networks, etc.). This is so because a technique of rational efficiency has optimization as its telos; hence, only one solution can be optimal (just as AI produces one answer, unless otherwise directed). This makes the technique of rationality a universal pursuit, which undermines the plurality and diversity of localized techniques emerging from place-bound communities. To survive, humans and social institutions are required to adapt to the technique. Higher education, for example, becomes an exercise in “training for technique” rather than training in critical inquiry. Perhaps most perniciously, technique can be applied to human behavior itself, uncovering “optimal” ways to control it through propaganda and advertising.

[17] Much of our lives is managed by what Frank Pasquale calls black-box algorithms.[7] Algorithms dictate what we watch, what we order, whom we date, and whether we get a loan or a job, all on the basis of some AI-aided classification metric. The sociologists Marion Fourcade and Kieran Healy call the society produced by this ever-deepening engagement with systems of classification that rank us through our data the ordinal society.[8] Because these algorithms are proprietary intellectual property, we know little about how they are applied to our datafied lives to rank us or our choices. The political philosopher Colin Koopman argues that the hoard of data collected from us, by invitation via engagement algorithms, constitutes a datafied “second self”: a self that is “from us” but not entirely constitutive “of us.”[9] Unsupervised machine learning classification models allow for the clustering of individuals into groups of which they may not even recognize themselves as members. Given enough data points, this clustering may seem incomprehensible to us, and yet it constitutes an ordinal classification that is used to allocate resources and social rights, and most of us submit to it without reflection because we believe it to be optimal. In Ellul’s language, it is our “technique.”
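
As a rough illustration of the kind of unsupervised clustering described above (a minimal sketch with invented behavioral features, not any platform’s actual system), consider how a standard k-means routine sorts people into unnamed groups they never chose and cannot inspect:

```python
# A minimal, hypothetical sketch of unsupervised clustering on behavioral data.
# All feature names and numbers are invented for illustration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulated traces for 1,000 people: columns are (hypothetically) hours streamed
# per week, late-night purchases per month, news articles clicked, credit queries.
X = rng.normal(loc=[10, 2, 30, 1], scale=[5, 1.5, 12, 0.8], size=(1000, 4))

# Standardize so no single behavior dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)

# Ask for 5 clusters. The number is an arbitrary modeling choice; the resulting
# groups have no names or meanings until an analyst assigns them.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_scaled)

# Each person is now a member of a cluster they did not choose.
print("Person 0 is assigned to cluster", kmeans.labels_[0])
print("Cluster sizes:", np.bincount(kmeans.labels_))
```

The point of the sketch is not the math but the asymmetry: the classification is trivial to compute, consequential to be placed in, and opaque to the person being classified.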

[18] These habits move us away from caring about what the algorithm doesn’t show us or the knowledge the AI doesn’t include in its answer. This has serious implications for liberal democracy. If we trade understanding for optimization, we begin to lose interest in explanations for how the world works. If we lose this desire for understanding, we then don’t have much to say about how the world should be governed, how the world ought to be, or what the good is. If AI provides us with instant answers and an unearned certainty about the world, we lose what Eran Fisher calls the emancipatory interest to defend liberal institutions.[10] For him, if our primary goal becomes AI-derived optimized knowledge, we learn to see “freedom” as coming from outside of us rather than from an internal drive toward self-determination or self-governance. We become indifferent to the fact that AI might obscure causality or nuance and hence make it difficult for us to understand how it arrives at its results. We won’t care because we won’t need other humans to help us understand the world. If the algorithm identifies “other” citizens as “threats to the state,” we might lose our desire to challenge the “algorithm’s logic” since we don’t understand it, nor are we interested in understanding its rationale. As we build more relationships with human-acting machines, we lose our investment in defending the rights of other humans. We instead hand over decision-making to an intelligence that we can’t quite explain or wrap our heads around, much like leaving it up to “God’s will.” Megan O’Gieblyn refers to the coming age of AI as “the new dark age” for this reason.[11]

[19] Paradoxically, AI retains a power of certitude by seeming to be a form of magic. Lucy Suchman makes the case that the solution to retaining human-centeredness in the face of AI is to emphasize its “thingness.”[12] Suchman argues AI is a floating signifier that, by remaining vague, serves the interests of AI promoters because it can attain an enchanted, magical quality. This has the effect of reifying AI as something more mystical and powerful than it actually is. By insisting that AI is energy-intensive complex math, we empty it of its mysticism.

[20] While the Enlightenment project presumably disenchanted the world by making explanation possible, the algorithmic/AI project “re-enchants” it by being so computationally obscure that it returns us to astonishment. This will either invite us to break away from the insistence on empirical understanding toward the inner world of emotion and experience, or it will drive us further toward desire-fulfillment technologies that separate us from humanity and make us less willing to defend institutions that protect individual rights. In recent years, a number of scholars have persuasively argued that we may be entering a post-secular age. This line of thought holds that the contract between the religious and the secular in society has been conducted on decidedly Western Protestant terms. The core of the complaint is that liberalism’s insistence on using reason as the foundation for deliberation in the public sphere is inherently biased. Liberals argued that empirical evidence was a lingua franca we could all understand and through which we could arrive at consensus. Religious claims, by contrast, rested on premises that could not be epistemologically proven and hence had to be kept out of the public sphere. But some might argue, as Luther once said to the radical reformers from whom liberalism may have sprung, that reason always has a bias towards the highest bidder, making her the devil’s most lovely whore.[13]

[21] AI exacerbates this return to the non-rational, which can be either destructive or transformative. Although the algorithmic age includes no post-Lutheran Friedrich Nietzsche to tell us whether “reason is dead,” we can still feel the seismic effects of the moment we are in.

[22] As we navigate this new algorithmic age, we must grapple with its implications for faith, reason, and the very nature of human understanding. The challenge lies in finding a balance between the enchantment offered by algorithms and the critical thinking necessary for a functioning society. Perhaps, in this tension, we can find a new form of meaning that acknowledges both the rational and the mystical aspects of our existence without succumbing to seeing ourselves as desire-satisfiers regardless of the societal implications.


[1] Sam Bowman quoted in N. Hassenfeld, “Even the scientists who build AI can’t tell you how it works: ‘We built it, we trained it, but we don’t know what it’s doing,’” Unexplainable [Podcast], Vox, (July 15, 2023). https://www.vox.com/unexplainable/2023/7/15/23793840/chat-gpt-ai-science-mystery-unexplainable-podcast

[2] Henry Farrell, “Large Language Models are Uncanny,” Programmable Mutter, (May 13, 2024).

[3] Miles Klee, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies: Self-styled prophets are claiming they have ‘awakened’ chatbots and accessed the secrets of the universe through ChatGPT,” Rolling Stone, Online, (May 4, 2025).

[4] M. P. Lynch, “Epistemic arrogance and the value of political dissent,” in Voicing Dissent, ed. C. R. Johnson, (NY: Routledge, 2018).

[5] David Noble, The Religion of Technology: The Divinity of Man and the Spirit of Invention, (Oxford: Oxford University Press, 1997), 7.

[6] Jacques Ellul, The Technological Society, trans. J. Wilkinson, (New York: Random House, 1964), 325.

[7] F. Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information, (Cambridge: Harvard University Press, 2017).

[8] M. Fourcade and K. Healy, The Ordinal Society, (Cambridge: Harvard University Press, 2024).

[9] C. Koopman, How We Became Our Data: A Genealogy of the Informational Person, (Chicago: University of Chicago Press, 2019).

[10] Eran Fisher, Algorithms and Subjectivity: The Subversion of Critical Knowledge, (NY: Routledge, 2022).

[11] M. O’Gieblyn, “The intelligence of machines has exceeded our own to the extent that programmers accept their decision-making with blind faith. Does that make AI our new god?” The Believer, (June 1, 2019) https://www.thebeliever.net/artificial-intelligence-god/

[12] L. Suchman, “The uncontroversial ‘thingness’ of AI,” Big Data & Society 10, no. 2 (2023): 20539517231206794.

[13] Martin Luther, “The Last Sermon in Wittenberg, 1546,” in Luther’s Works, vol. 51, (Minneapolis: Fortress Press, 1959), 374.

Jose Marichal

Jose Marichal is Professor of Political Science at California Lutheran University. He is the author of You Must Become an Algorithmic Problem (Bristol University Press). He is also affiliate faculty at the Center for Information Technology and Public Life at the University of North Carolina at Chapel Hill.