The Critical Role of Lutheran Higher Education in the Age of Artificial Intelligence

[1] Artificial Intelligence is a highly contested topic. Many conversations in social, political, and academic contexts eventually turn to the implications of AI for job prospects, college success, and more.[i]

[2] While discussing this topic can feel overwhelming, the rise of synthetic thinking produced by AI requires us to analyze its broader implications for higher education. Lutheran higher education champions critical thinking as a fundamental tool in our development as cognitive and spiritual selves. It “lays the foundation for a kind of critical thinking that can still register awe. It exhibits a freedom of inquiry that challenges every assumption.”[ii] It is hard to observe the advances in artificial intelligence over the last year and not come away with a sense of awe and wonder.

[3] Put simply, the vast majority of AI models are highly complex deep-learning algorithms trained on millions of data points. Text-based AI like ChatGPT belongs to a family of large language models (LLMs) trained on billions of words (and other grammatical elements) drawn from all corners of the internet (social media, web pages, comment threads, etc.). The magic of AI emerges from the “tokenization” (i.e., conversion into numerical data) of these billions of words and their context. This numerical data is placed into a massive mathematical array and analyzed through deep-learning algorithms that uncover patterns in the structure of language. With generative AI models, these inexplicably complex, multi-layered hidden operations can approximate human speech with astonishing results.
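The tokenization step described above can be illustrated with a deliberately simplified sketch. The vocabulary-based tokenizer below is an invention for illustration only; production LLMs use far more sophisticated subword schemes such as byte-pair encoding:

```python
# Toy illustration of tokenization: converting words into numerical IDs.
# Real LLM tokenizers operate on subword units (e.g., byte-pair encoding);
# this sketch only shows the basic idea of text becoming numbers.

def build_vocab(corpus: str) -> dict:
    """Assign each unique word in the corpus a numeric ID."""
    vocab = {}
    for word in corpus.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def tokenize(text: str, vocab: dict) -> list:
    """Convert text into a list of token IDs, skipping unknown words."""
    return [vocab[w] for w in text.lower().split() if w in vocab]

corpus = "the model learns patterns in the structure of language"
vocab = build_vocab(corpus)
print(tokenize("patterns in language", vocab))  # -> [3, 4, 7]
```

It is these numerical sequences, not the words themselves, that the deep-learning layers analyze for statistical patterns.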

[4] Even more astonishing is the recent insight among AI researchers that anything can be “tokenized,” placed in an array, and analyzed using transformer-based encoder/decoder models (a method too complex to describe here). This means that the content medium is irrelevant: text can be mapped to images, audio can be mapped to speech, and so on. Even though the study of artificial intelligence dates back to the 1950s, the rate at which it has advanced in the last seven years has been breathtaking. Yet these advances did not enter the cultural zeitgeist until the release of ChatGPT in late 2022.
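The medium-independence claimed above can be made concrete with a minimal sketch (an illustration of the idea, not any particular model's pipeline): once pixels, like words, are flattened into a sequence of integers, the same kind of sequence model can in principle consume either.

```python
# Toy sketch: different media reduced to the same form -- a sequence of integers.
# (Real multimodal models use learned patch embeddings and codebooks; the
# numbers here are purely illustrative.)

text_tokens = [3, 4, 7]          # word IDs produced by a text tokenizer

image = [[0, 255],               # a tiny 2x2 grayscale "image"
         [128, 64]]
image_tokens = [px for row in image for px in row]  # flatten pixels to a sequence

# Both are now just integer sequences that one sequence model could consume.
print(text_tokens)    # -> [3, 4, 7]
print(image_tokens)   # -> [0, 255, 128, 64]
```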

[5] It’s difficult to know what impact this explosion of AI will have on society, but it will likely be seismic. As people and companies grow more adept at using AI, the need for humans to perform many rote tasks will likely decline, leading to an overall reduction in white-collar positions. These are the very positions that have traditionally supported many middle-class and upper-middle-class individuals and families, and they are the jobs that many of our graduates hoped to acquire after graduation. Currently, companies like OpenAI are developing “agents” that allow users without coding knowledge to apply AI models to specific tasks. For example, a model trained on billions of pieces of data can in turn be trained on a state’s legal code or tax code to produce an “AI lawyer” or an “AI accountant.”

[6] Currently, our discussions about artificial intelligence, like the rest of society, are polarized. On one side are people who caution against overhyping AI. AI scholars like Emily Bender and Timnit Gebru call generative AI applications “stochastic parrots” that are good at mimicking human expression but are incapable of human understanding.[iii] On the other side are people like a16z venture capitalist Marc Andreessen (whose firm is investing in many AI startups), who claims artificial intelligence will “save the world.”[iv]

[7] This is where Lutheran higher education has a crucial role to play, because it can stake out a middle position: holding together a sense of awe and wonder about scientific discovery and reason with a healthy skepticism about overestimating human capability. Lutheran higher education believes that reason and inquiry are intended to foster a “healthy sense of human limit.”[v] The expansion of human knowledge only deepens the awareness of human limitations, leading to a dual attitude toward learning that reaches for excellence yet registers suspicion about claims to complete understanding.[vi] It is incumbent upon us to engage in a serious conversation about how we turn the wonders of AI into something fruitful and productive while recognizing the limits of human understanding.

[8] Reckoning with the limits of human understanding is intuitive to any scholar who spends enough time engaged in scientific inquiry. Each scientific discovery opens up more questions than it answers. The Enlightenment is rooted in this balance between human limitation and human capability. The premise of the Enlightenment project, of which Lutheranism was a key driver, was that learning and understanding bring us closer to the divine but do not get us all the way there. Inherent in the scientific method is this sense of limit. The entire concept of theory as applied to the sciences rests on the premise that we cannot collect data on the entire world; as a result, we use “samples” to test hypotheses derived from theory. Because we can’t collect data on the entire world, theories are a necessary abstraction from reality, and no theory explains every possible case. To claim otherwise is to fall into the trap of tautology: theories that explain everything but, paradoxically, explain nothing. This understanding of limits has served to keep science “in its place” regarding deeper questions reserved for theology.

[8] We describe theories that explain a good deal with a simple causal mechanism as parsimonious. But to presume that any theory can explain everything contradicts the healthy sense of limit embraced in Lutheran higher education. The actual world is far too disordered and complex to explain with simple theories. Yet AI introduces for some the dangerous notion that science can explain everything: we now have the processor speed, the storage capacity, and the data availability to answer previously unanswerable questions.

[9] And here is where the challenge of a limits-to-human-understanding approach comes in. The more we use AI to dive into the world’s complexity, the more turbulent and confounding the world becomes. In this environment, people are prone to seek out simple answers. This is the challenge for Lutheran higher education, and our mission is more important than ever. We need to produce young people who are “called and empowered to transform the world, who go into that world with wisdom, humility, and hope.”[vii] Not to take up this task is to fail to address the confusion, frustration, and instability of our modern era. Paradoxically, as AI enables more scientific discovery, the world becomes increasingly incomprehensible to many. The philosopher David Weinberger, observing the rise of big data, noted that “with the new database-based science, there is often no moment when the complex becomes simple enough for us to understand it.”[viii]

[10] This leaves us vulnerable to demagogues who promise to make the complex simple. We are in the late stage of the “great tech man” theory, in which the platforms of formerly lionized figures like Elon Musk and Mark Zuckerberg are accused of fueling ethnic conflict, spreading conspiracies and misinformation, and moving too slowly to take down harmful content. But as writer Anna Della Subin’s wonderful recent book Accidental Gods highlights, the frequency with which our fellow humans have been deified is an unfortunate, persistent feature of human society.[ix] Modernity and rationality were supposed to be a “resistance to gods,” a rejection of irrational impulses. But when society (and AI) become too complex for humans to comprehend, we grow anxious and more susceptible to the vicissitudes of demagogues. A great danger of our time is the ability of bad-faith actors to use AI tools to spread misinformation and otherwise disrupt democratic societies. We must be clear-eyed about the challenges we face. Simple answers are appealing, especially when they have the force of religious authority and dogma behind them.

[11] In this effort to remain vigilant against false claims of clarity in an otherwise unclear age, Lutheran higher education reminds us that “The divine is present in ordinary life. Every person and every creature [are] potential vessels of grace, and the whole of life displays sacramental significance.”[x] By adopting a posture of gratitude, we can find inherent, unchanging beauty and knowledge, such as that of the natural world, as a foundation for mental grounding. Rather than turning to authoritative figures, manipulative messages, or avoidance entirely, devoting time to discovering beauty in the pure simplicity of creation is a critical pathway toward freedom of being. Lutheran theology prioritizes “radical freedom,” described “as a freedom from false ideas about earning one’s worthiness and a freedom for a life of service to and with the neighbor.”[xi]

[12] Seeing others as “neighbor also resists all that brands them as ‘enemies’ or ‘threats’ or ‘strangers.'”[xii] To be a neighbor means to seek to understand and serve people and communities. As the world becomes more complex, people are drawn to simpler answers that tell them whom to blame for their alienation or isolation. In The Origins of Totalitarianism, Hannah Arendt identifies social isolation as creating a vulnerability to authoritarianism: as individuals become detached from their society, they become vulnerable to alternative “unrealities” that appear to explain their condition.

[13] AI only adds to these challenges. If many people already feel isolated from the broader political community, how much more isolated will they feel as they increasingly encounter synthetic discourse on social media? One of the goals of Lutheran higher education is to impart to students “the essential relationality of Lutheran theology,” which holds that “individuals flourish only as they are embedded in larger communities.”[xiii] This is intimately connected to the ability to resist seeing the neighbor as enemy or threat.

[14] How does the seemingly inevitable widespread adoption of synthetic communication affect this ability? Increasingly, we forgo being in community with one another for the comforts of the phone, the screen, and the algorithm feeding endless curated content. We opt for the convenience of Amazon next-day delivery rather than an awkward conversation with a stranger in the checkout line. AI offers the possibility of forming more manageable, less contingent synthetic relationships; AI dating apps that provide virtual girlfriends are no longer the stuff of science fiction.

[15] Without regulation and guidelines, tech companies may use AI to create even more addictive technology, exacerbating current crises in youth and adult mental health and suicide. The use of AI-created and AI-curated push notifications in gambling and daily fantasy apps, designed to boost interaction and addictive gambling behaviors, is a prime example of AI’s potential harms. Those prone to gambling addiction can be manipulated through AI advertising that mines an individual’s betting history to curate the types of bets, and the times they are offered, that will maximize money spent and money lost.

[16] In a world that is becoming increasingly dissociative, fragmented, and materially oriented, Lutheran higher education fights a lonely battle against the hardening of the human soul. Artificial intelligence, as a piece of the technological age, continues to place barriers between face-to-face interactions and causes distortions of the truth, not just physically (e.g., deepfakes) but psychologically (how do I know what I am seeing or hearing is real?).

[17] Lutheran institutions of higher education are called to draw on the resources of both faith and learning to address human problems and to help students who are called to reduce suffering and improve the well-being of themselves and those they encounter. It is critical that Lutheran higher educational institutions think seriously about how artificial intelligence can address human problems while avoiding causing human suffering.

[18] Although the tools of AI can produce hope, they can also move us away from the natural world. AI can aid discovery at breathtaking speed, but it can also disrupt and destroy. The ability to tokenize millions of “data points” and instantiate models on high-speed graphics processing units (GPUs) means that those who use this data can be detached from understanding and knowing. As a positive example, in January 2024 a team of researchers at Microsoft announced that they were able to “analyze 32.6 million potential battery materials in 80 hours, a process that would have taken 20 years manually,” in search of desperately needed alternatives to lithium batteries.[xiv] Yet these powerful tools can also be used for malevolent purposes. A 2022 study in Nature reported that a Swiss research team repurposed a machine-learning model built to identify pharmaceutical drugs to generate 40,000 potential biological weapons similar to nerve agents.[xv] AI does not teach us how we are called to address this awesome power for both good and evil.

[19] Luther’s theology of the cross compels us to identify with the marginalized. Yet AI has been trained primarily on Western data, which means that its use will further marginalize the cultures and languages of those in the Global South. This phenomenon, known as data colonialism, often manifests in a disproportionate reliance on Western-centric datasets and knowledge collected from predominantly English-speaking and “developed” nations, reflecting historical power dynamics and biases inherent in the data-collection process. Much of AI training data is sourced from Western, English-language content, forming a skewed representation of other cultures, languages, and perspectives. This overrepresentation of Western influence in technology reinforces a form of digital colonialism in which the voices and experiences of non-Western communities are excluded or ignored.

[20] Addressing the issue of data colonialism in AI is no small task. It requires widespread diversification of training datasets and languages, a commitment to inclusive data-collection practices, and collaboration across diverse communities to ensure that AI technologies are more representative, equitable, and respectful of the nuance and depth of understanding that come with other belief systems and other ways of understanding and explaining the world. Ignoring the need to diversify training data will cause more problems as AI is implemented, further entrenching inequality and digitally redlining society.

[21] Finally, Lutheran higher education is relevant in encouraging young people to “weigh the impact of their actions on other creatures, both human and non-human.”[xvi] We are losing species at a rate of “8,700 species a year, or 24 a day.”[xvii] It is comforting to think that we can “nerd” our way out of the consequences of our behavior through scientific advancement. While there are promising advances in the use of AI to address global climate change and its effects, without people who have an ethic of care, AI will prove futile. We must resist the impulse to adhere blindly to technocratic answers.

[22] Lutheran higher education calls us to examine, monitor, and advocate regarding the environmental consequences of artificial intelligence. In an article discussing Kate Crawford’s book Atlas of AI, the author notes that “undeniably, the AI industry is responsible for significant greenhouse gas emissions and the release of toxic chemicals, contributing to climate change and global warming.”[xviii] With this insight comes a call to develop more sustainable and responsible AI systems that are more energy-efficient, reduce the use of single-use hardware, and prioritize renewable energy sources. Lutheran higher education is called to advocate for these measures, ensuring that AI development is ethical and pursues the just and sustainable treatment of creation.

[23] The mission of training students for this purpose drives professionals in Lutheran higher education. Lutheran higher educational institutions must engage in discovering ways that artificial intelligence can address human problems and avoid causing suffering.


[i] A version of this article was previously published in Intersections. Marichal, Jose; Goehner, Maya; and Haug, Tyler (2024) “The Critical Role of Lutheran Higher Education in the Age of Artificial Intelligence,” Intersections: Vol. 2024: No. 59, Article 8. Available at: https://digitalcommons.augustana.edu/intersections/vol2024/iss59/8   Volume 59 Spring 2024.  Republished with permission of the authors and the editor.

[ii] NECU, Rooted and Open: The Common Calling of the Network of ELCA Colleges and Universities (2018), p.3.

[iii] Emily M. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (2021).

[iv] Marc Andreessen, “Why AI Will Save the World,” Andreessen Horowitz (2023).

[v] NECU, p. 4.

[vi] See NECU, p. 4.

[vii] NECU, p. 5.

[viii] David Weinberger, “To Know, But Not Understand,” The Atlantic (January 3, 2012). http://www.theatlantic.com/technology/archive/2012/01/to-knowbut-not-understand-david-weinberger-on-science-and-big-data/250820/.

[ix]Anna Della Subin, Accidental Gods: On Race, Empire, and Men Unwittingly Turned Divine (Metropolitan Books, 2021).

[x] NECU, p. 7.

[xi] NECU, p. 4.

[xii] NECU, p. 5.

[xiii] NECU, p. 7.

[xiv] J. Calma, “How Microsoft found a potential new battery material using AI.” The Verge, 12 Feb. 2024.

[xv] J. Calma, “AI suggested 40,000 new possible chemical weapons in just six hours.” The Verge, 17 Mar. 2022.

[xvi] NECU, p.5.

[xvii] Fred Pearce, “Global Extinction Rates: Why Do Estimates Vary So Wildly?,” Yale Environment 360 (2015).

[xviii] S. Ling Chan,  “Exploring the Environmental Costs of Artificial Intelligence (AI).” Cross-Current, 30 Jan. 2024.


Jose Marichal

Jose Marichal is Professor of Political Science at California Lutheran University. He is the author of You Must Become an Algorithmic Problem (Bristol University Press). He is also affiliate faculty at the Center for Information Technology and Public Life at the University of North Carolina-Chapel Hill.

Maya Goehner

Maya Goehner is a recent Political Science graduate from California Lutheran University. She is three months into an internship with State Senator Henry Stern and has ambitions of becoming a District Representative. Although she feels passionate about constituent and legislative issues, she enjoys continuing to research artificial intelligence and ethics.

Tyler Haug

Tyler Haug is a 4th year political science undergraduate at University of Colorado Boulder.