[1] The concerns raised by artificial intelligence (AI) span a broad swath of issues: the environment, energy supply, human rights, privacy, employment, the future of the arts, and scholarship. While those who aim to profit financially from AI focus on its potential, which does indeed seem remarkable, other sectors of the economy and society have expressed alarm. AI is a tool, a very powerful one, which can be used in beneficial ways but has also already been used in visibly harmful ways. To wit: AI powers the surveillance state that crawls through vast numbers of social media accounts to target migrants. AI allows insurance companies to “save money” by denying care at a blistering pace, defying legal requirements for human review. Of further concern, training AI requires vast amounts of power and water for data centers, which are often located in areas with insufficient water.
[2] One of the most concerning aspects of the unchecked use of AI, from my perspective, is how it concentrates multiple forms of power in the hands of very few people. The makers of AI are, by default, making decisions about things like who is arrested, who is targeted by military weapons, who receives health care, or what interest rate is offered on a mortgage. Instances of bias have already been noted as AI is rolled out across industries. The crux of my concern is that instead of one person exercising bias within a limited range of influence, one person’s biases can now affect multitudes more people. Whereas the purveyors of AI would like us to think that it is a “neutral” tool that simply helps us make the “optimal” decision, evidence has already borne out that quite the opposite can be true. And indeed, as ELCA Lutherans, we are not surprised. Our social statement on genetics predicted this phenomenon when, nearly 15 years ago, it described advances in genetics: “Knowledge and technology have never developed in a social vacuum, and genetic research and technology and their delivery are not socially neutral” (Genetics, p. 6). Martin Luther understood all too well that secular knowledge was a gift from God but, like everything filtered through humans, flawed and imperfect. Therefore we, too, should not assume that AI is infallible or neutral. It is a product of humans located in a place and time, and we should assume that it reflects both, and that it is capable only of the discernment a human is capable of, just on a much larger scale.
[3] As a religious people who participate in society and the economy, we have a mandate to raise concerns around AI from this perspective. Believing that the God-given purpose of secular knowledge is to serve the common good, we are bound to address the “how” of AI—who is benefiting, who is being harmed, who is being excluded, who is being included. We should be using our individual and institutional means to point out who is not benefiting from how AI is used.
[4] Those who hold the reins of power when it comes to AI, mostly big tech companies and chip makers, are being granted, without much forethought or oversight, a kind of control that reaches into every facet of life. We may not even be fully aware of that control as it happens. AI powers the searches that tell us what is true (when was the last time you looked something up in an encyclopedia?) and controls our access to information. AI algorithms filter what we see on social media, forming our sense of what others think is true. The aim at the heart of those algorithms is not to seek more perfect information or to form us into good, or at least more informed, citizens. The purpose of those algorithms is to make money by drawing us in to spend more time on social media. If there were any doubt as to the fundamental purpose of the big tech companies that build AI, witness that immediately after the 2024 election, Meta tipped its hand by eliminating the fact-checking it had put into place after the 2016 election. Meta is not concerned with truth or its impact on civil society; Meta cares about making money. The reason we have AI is simple: it is being built to make money for a very few people, who generally already have a lot of money and who will be insulated from the effects of their work.
[5] Given that we are participants in this economy, both individually and institutionally, we are bound to be affected by the implementation of AI, even if we do not profit from it. When we as Lutherans evaluate a technology, the first question is how its use affects those who have the least; from there, we might begin to ask how to make it a worthy enterprise. The data centers that store the data used to train AI consume massive amounts of energy in the process, creating environmental harms through greenhouse gas emissions and water scarcity and raising electricity prices for local consumers. Will AI be used, as promised, to lower energy use? Will it be used to improve medical care, rather than merely to deny it? What is our role as religious people, and as an institutional church, in appraising and shaping this technology?
[6] As the institutional church, we engage in public witness in numerous ways, whether through public proclamation of scripture, public statements from church leaders, or advocacy by ELCA staff. The ELCA engages in state and federal advocacy, but also, through how we handle our investments, in corporate social responsibility. Portico, a separately incorporated ministry of the ELCA that oversees the investment of ELCA retirement funds as well as other ELCA investments, uses ELCA social teaching to guide its work. Social criteria investment screens eliminate some corporations from its socially responsible funds. Issue papers, which interpret ELCA social teaching on subjects related to the corporate world, provide the basis for corporate engagement and the potential for filing shareholder resolutions for public vote at a corporation’s annual general meeting.
[7] Issue papers allow Portico to approach a corporation for dialogue around a particular subject, whether measuring emissions or policies around human rights. If that dialogue does not result in constructive change, Portico has the ability (one currently subject to attempts to undermine it) to file a shareholder resolution, which other shareholders can choose to support at an annual general meeting. The ELCA has issue papers on topics relating to the environment, climate change, human rights, global and domestic health, and now artificial intelligence.
[8] This fall the ELCA Church Council approved an issue paper on artificial intelligence. Why do we need one? Right now, private industry controls artificial intelligence and its uses, so oversight of AI is best pursued through engagement with those companies. The corporate social responsibility community has been raising alarms for some time about the environmental impacts, human rights risks, and copyright violations of AI. We have the opportunity to shape the use of AI by entering into dialogue with those who are developing and implementing it, calling them to assess and account for the risks they are engendering, which can be articulated as risks they are imposing on the investments of shareholders. Shareholder engagement works in a number of ways, but one of its most powerful tools is asking corporations to assess and disclose the impact their activities are having. That disclosure helps investors make better decisions and keeps corporations accountable for their actions, minimizing future risk.
[9] Although we do not have a single body of teaching on AI, we do have social teaching principles on economic life, human rights, genetics (as I have already indicated), and climate care that we bring to bear upon this issue. I encourage you to read the issue paper posted here to see how ELCA corporate social responsibility relates ELCA social teaching to the corporate world. Though the current political environment has targeted any investment that takes environmental, social, or governance factors into account as “woke investing,” it has also given religious organizations implicit and explicit permission to exercise their values in every part of the public sphere. Now is the time for religious investors to bring their values to the world of corporate engagement, and the ELCA has well-articulated principles that can call into question how AI is being developed and implemented.