Ethical Considerations and Artificial Intelligence

The AI Revolution

[1] The ELCA Social Statement on Genetics establishes that [scientific and technological] developments “illustrate the abundant gifts of God’s creation” but cautions that “these developments also exemplify how contemporary human knowledge and technology are causing a different relationship between human power and life.”[i] Our church also teaches that “the Gospel does not take the Church out of the world but instead calls it to affirm and to enter more deeply into the world.”[ii] In this spirit, we have a special responsibility to approach technological and scientific developments with respect rather than apprehension, viewing them as both potential and challenge.

[2] When it comes to generative AI, there are at present three typical responses: fear, ignorance, and exploitation. The first response, fear, leads some to disregard AI’s value entirely. These modern-day Luddites reject AI technology, arguing that its adoption constitutes “playing God,” rather than recognizing that God gives humans responsibility to function as created co-creators. As a corrective to this fear, we propose a different view: this technology, if engineered and used responsibly, can be genuinely serviceable. It can coexist harmoniously with our natural and organic human lives, much as other technologies we have developed as tool-making human beings.

[3] In terms of ignorance, the vast majority of people in our society today are simply unaware of the many ways that AI impacts us. A recent Oxford University study indicates that, outside of popular film depictions of robot uprisings and societal apocalypses, large parts of the public are not particularly interested in generative AI.[iii] The third response, exploitation, uncritically promotes the use of this technology due to its availability without regard to human ingenuity, creativity, and autonomy.

[4] Our examination of Generative AI will demonstrate the ubiquitous presence of AI in our daily lives, particularly in our market economy; examine the potential and challenges of this technology; and propose an ethical framework to promote its responsible use.

The Ubiquity of AI

[5] My students are surprised to learn that Generative AI bots draw on an amalgamation of data points accumulated for the benefit of advertising and marketing. Every click, email, text message, search, and online purchase is registered for the purpose of fulfilling the market imperative of supplying perceived and actual consumer demand. IBM defines Generative AI as “deep-learning models that can generate high-quality text, images, and other content based on the data” they are fed.[iv]

[6] According to a Standard & Poor’s Financial Services report, the biggest users of Generative AI are banking, finance, and insurance services.[v] These are areas with which we interact on a daily basis, and these businesses already rely on AI for customer service. Many other decisions are being made for us by AI, including decisions about creditworthiness, interest rates, data collection and analysis, security and fraud protection, risk management, productivity and profitability, long- and short-term planning, investment strategy and wealth management, advertising, actuarial analysis, underwriting, and claims management. This list is impressive but not comprehensive.

[7] Advertising and marketing are the areas where we encounter AI most frequently. We are all familiar with being bombarded by tailored consumer advertising (hyperpersonalization) in our social media feeds and email blasts, based on past purchases, likes, and Google search results. Generative AI’s reach extends further. Hyperpersonalization encompasses personal marketing campaigns, individual data analysis, dynamic pricing and offers, automated chatbots, pre-populated applications, consumption generation (creating demand for a new product we did not know we needed), loyalty and reward programs, inventory management, risk mitigation, and customer service.[vi]

Ethical Challenges in the Rise of Generative AI

[8] AI is also prominently used in certain dating and friendship apps, as well as in the creation of digital romantic “partners.” AI friendship apps, for example, are simply dialogical platforms that respond to prompts in order to mimic conversation. These became popular during the pandemic, as many felt isolated and lonely, in need of contact, friendship, and connection. They can serve a positive purpose, but studies show they can lead users to become less empathetic and, in some cases, more abusive in their human relationships.[vii] Digital romance presents an even more fascinating challenge. In an opinion piece in The Hill, Liberty Vittert, a professor of data science at Washington University, reflected on an intriguing dating trend among men aged 18-30: AI girlfriends.[viii] These self-confessed lonely and socially awkward men (some of whom may harbor a deeply rooted animosity toward women) can design a girlfriend to fit their specific relationship criteria, choosing a particular personality, body type, and set of physical attributes and attitudes. Vittert reports that these men often want women who will defer to them, fulfill their every whim, and worship and desire them. Most importantly, they choose artificial girlfriends who are docile and “do not talk back.” Suffice it to say that Vittert is concerned with how these AI girlfriends will affect men and relationships in the near future as more and more men seek out such romantic arrangements.

[9] There are many other areas of concern with the current capabilities and uses of AI. In April 2020, for example, a college student named Breeze Liu learned from a friend that a pornographic video of her was circulating online. When she went to the police, she was told that nothing could be done about it. This led Liu to create a company that produces a facial recognition program allowing victims to find and eliminate harmful videos.[ix] This use of AI facial recognition is but a small victory against a much larger and darker industry. The rise of facial recognition technology and the affordability of powerful professional video editing programs have made it easy to create digital deepfakes that are indistinguishable from recorded videos. This technology has been used not only to produce pornographic material featuring celebrities but also to punish and embarrass past love interests. Loopholes in the laws that protect women from “revenge porn” have allowed these deepfakes to proliferate without legal consequence. A 2023 report found that 98% of all deepfake videos on the web were nonconsensual sexual deepfakes, and that women constituted 99% of the victims.[x] Because it is nonconsensual and exploitative, deepfake revenge porn constitutes a form of gender-based abuse: a practice that is dehumanizing, degrading, and malicious. These videos can also harm a person’s employment prospects, be used to blackmail the victim, and raise the legal problem of the use of unauthorized images.

[10] Another ethical consideration in the use of deepfakes involves holographic concerts by deceased performers. Forbes reported that the top thirteen deceased performers grossed $1.6 billion in 2023.[xi] Most of these earnings came from sales of the artists’ back-catalogs, but in the case of Whitney Houston, earnings from her holographic concert may have contributed to the total. Holographic performances by dead performers have become more common not only in concert venues but also at award shows. Apart from the moral repugnance these performances may provoke, a number of ethical concerns are evident, including consent, profiting from deceased people, the exploitation of their likeness and art, respect for their legacy, the question of verisimilitude, and the problem of scripting interactive dialogue with the audience. Prince, for example, was recorded in an interview expressing his unease with performing alongside holograms.[xii] It stands to reason that he would not consent to a holographic concert, but what is to stop his estate from creating one and profiting? Second, since creating the hologram entails imposing the artist’s face on the body of a real performer, it is fair to ask whether this is an authentic representation of the artist. Lastly, one of the most troubling aspects of holographic concerts is that words and lines of dialogue are put into the mouths of dead performers in order to make the concerts seem relevant, fresh, and current.

[11] Chat Generative Pre-Trained Transformer (ChatGPT) is a language-based program that analyzes hundreds of billions of data points in order to expeditiously generate text within requested parameters.[xiii] Its potential for abuse and the problems associated with plagiarism have posed a challenge to college instructors as well as high school and middle school teachers. University officials have vacillated between condemnation and acceptance without a clear vision for approaching the challenges posed. Journalists have also been affected by ChatGPT. While journalists struggle with the morality of using AI to compose and edit their news stories, a number of them have been accused of using Generative AI to produce and/or edit their work. A fear also exists that, instead of reporting the news, journalists may be tempted to use AI to invent stories or forecast dubiously credible ones based on statistical predictions.[xiv] Finally, creatives must contend with corporations promoting the use of AI in music and entertainment as a cost-cutting measure, stifling their sustenance and creativity in the process. Hollywood writers and actors recently went on strike over studio executives’ insistence not only on producing AI-generated scripts but also on populating films with AI-generated actors.[xv]

[12] The following case highlights what is at stake in the use of AI in creative endeavors such as art, music, and literature. The documentary Roadrunner, about Anthony Bourdain, instigated a huge controversy. The filmmakers created an AI-generated artificial voice narration and passed it off as the voice of Bourdain. Viewers were never informed that this was not Bourdain’s real voice, and it appears that the director may have taken creative liberties by embellishing stories or inserting words or lines that Bourdain may never have spoken.[xvi] On the one hand, we have the potential of using AI creatively to tell a fascinating story and celebrate the life of an admired celebrity; the use of his voice certainly provides the story with a sense of gravitas and anchors it in a believable way. On the other hand, it is hard to justify the deception and lack of transparency. The documentarian Morgan Neville did not help matters when responding to the accusation of deception: “If you watch the film… you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know…We can have a documentary-ethics panel about it later.”[xvii] Neville told Variety that he had consulted Bourdain’s widow and been given her approval.

[13] Neville’s explanation raises more questions.

  • Who has control over the person’s image and likeness, particularly in the case of someone who is deceased and has not given their consent?
  • What responsibility does a filmmaker have to represent and depict the authentic voice, story and essence of a deceased person?
  • Is the viewer entitled to know that Generative AI was used in the production of a film?

[14] The biggest problem associated with Generative AI is coded bias. A well-known truism among computer programmers, and by extension among those engaged in coding, is “garbage in, garbage out”: a program is only as good as the data fed into it. Most experts and critics point out that the biggest obstacle to creating a fully autonomous or serviceable AI is coding bias. Programmers are products of their society and will consciously or unconsciously insert their biases into the algorithms they create. In our American context, many will find it hard to separate themselves from considerations of race, class, sex, gender, generational cohort, religion, or physical ability. In many cases our propensity to interject values and categories we hold dear (pertaining to capitalism, patriarchy, sexism, white supremacy, or any sense of entitlement) is inescapable. These biases can take two forms: cognitive bias and algorithmic bias.

[15] The most common cognitive bias is confirmation bias: the tendency to search for, favor, and interpret information that supports what we already believe. Take banking as an example. If a person presupposes that people of a particular “race” will default on their loans, this bias will be reflected in the algorithm that person creates, and the algorithm will perpetuate a generalization that adversely affects everyone from that group.
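The lending example above can be made concrete with a minimal, purely illustrative Python sketch. The groups, numbers, and scoring rule below are entirely hypothetical and are not drawn from any real lending system; the point is only to show how a bias embedded in historical records becomes the algorithm's rule.

```python
# Hypothetical historical loan records: (group, defaulted). Suppose past
# lending practices denied "Group B" fair terms, inflating its recorded
# default rate relative to "Group A".
history = [("A", False)] * 90 + [("A", True)] * 10 \
        + [("B", False)] * 70 + [("B", True)] * 30

def default_rate(group):
    # Fraction of recorded defaults for one group in the historical data.
    outcomes = [defaulted for g, defaulted in history if g == group]
    return sum(outcomes) / len(outcomes)

def risk_score(applicant_group):
    # A naive model that scores every applicant by their group's recorded
    # default rate: the bias in the data becomes the decision rule.
    return default_rate(applicant_group)

# Two otherwise identical applicants receive different scores purely
# because of group membership.
print(risk_score("A"))
print(risk_score("B"))
```

Nothing about the two applicants themselves differs here; the disparity in scores is inherited entirely from the skewed records the model was "trained" on, which is the mechanism the paragraph describes.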

[16] Consider another example, which bears on predictive policing and crime prevention. There is in our society an assumption of “black criminality.”[xviii] Biased algorithms reflect this myth and its presumptions. Take the example of Rivelli and Cannon from the Medium piece on racial and gender bias cited below.[xix] Even though Rivelli (a white man) had been arrested four times for offenses ranging from domestic violence to grand theft, the algorithm considered him a lower risk than Cannon (an African American man), who had only one offense of petty theft. The Medium piece also shows an example based on sex, in which feminine names were associated with roles such as “family” while masculine names were associated with careers and terms such as “professional” and “salary.”

[17] An algorithmic bias may also reflect the bias of the programmer, but it includes additional factors: overreliance on data from one group of people over others when creating the algorithm, attempts to quantify mathematically things that may not be quantifiable, the creation of feedback loops, and the malicious manipulation of data to meet a predetermined goal or conclusion. Take the feedback loop as an example. An AI can learn from a feedback loop and correct its mistakes (a positive loop), or it can continue to perpetuate them (a negative loop). There have been instances where AI chatbots were shut down because they could not extricate themselves from negative feedback loops related to racism and sexism. It is hard to determine in these cases whether such negative loops were intentional and malicious or accidental. What we do know is that since these programs are autodidactic, it is necessary to continually search out and correct negative feedback loops.
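A negative feedback loop of this kind can be sketched in a few lines of Python. The scenario below is hypothetical, loosely in the spirit of the predictive-policing example: two districts have identical underlying crime, but a small skew in the initial records compounds each time the system acts on its own past outputs.

```python
# Two districts with EQUAL underlying crime, but slightly skewed records.
underlying_crime = {"district_1": 50, "district_2": 50}
recorded = {"district_1": 12, "district_2": 10}

for step in range(5):
    total = sum(recorded.values())
    for district in recorded:
        # Patrols are allocated in proportion to past records, and more
        # patrols produce more recorded incidents: the system's output
        # becomes its own next input, so the skew feeds itself.
        patrol_share = recorded[district] / total
        recorded[district] += round(underlying_crime[district] * patrol_share)

# The initial two-incident gap between the districts has widened
# substantially, even though the underlying reality never changed.
print(recorded)
```

This is why the paragraph's closing point matters: a self-teaching system does not correct such a loop on its own, so the gap keeps growing until someone audits the records against the underlying reality.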

A Proposed Ethical Framework

[18] The World Economic Forum has identified nine ethical issues associated with AI.[xx] These are unemployment, inequality, challenges to what it means to be human, AI mistakes, racism, security, unintended consequences, the singularity, and robot rights. In this article, our examination has focused on unemployment, inequality, and racism.

[19] It is estimated that a quarter of today’s jobs will be lost to AI and automation in the next ten years: for example, 78% of all industrial jobs (welding, assembly line, food preparation, and packing/processing) and 28% of all construction, forestry, and animal husbandry jobs. Suffice it to say that the impact on our workforce will be enormous, and it is hard to determine which industries will employ these displaced workers.

[20] The second ethical concern, inequality, acknowledges the challenge of justly distributing the economic gains from AI and automation, as well as the harm they cause. There is no denying that the displacement of workers will lead to greater profits. The problem lies in how these profits will be distributed, to whom, and whether the windfall will be extended to those who have paid for it with their jobs. Will these companies provide guaranteed income for displaced workers, or will the burden of support be placed on taxpayers and consumers? Will these developments further widen the already disproportionate wealth gap? Will the risks associated with this “post-work economy” be borne by the managerial class and corporate CEOs, or will they benefit from bonuses and other economic “incentives” to the detriment of other workers?

[21] Finally, we cannot escape the issue of race in discussions of economy, justice, and technology. Reports have surfaced that Kenyan, Venezuelan, and Filipino workers who perform AI and social media moderation have been subjected to what amounts to slave labor.[xxi] These workers sift through and filter massive amounts of data for the benefit of the algorithms that make AI programs efficient. Racial biases within the programs themselves are also a concern, one that seriously challenges cherished principles such as inclusion, equity, and non-maleficence.

[22] The potential benefits of this technology are promising: it can improve our ability to communicate, expedite the examination of massive amounts of data for research purposes, and provide creative avenues for art and entertainment. Yet we are confronted with ethical concerns that must be addressed. While legislative and protective measures (for example, on privacy and transparency) are slow in coming in our country,[xxii] as a church we have an imperative to promote sound ethical principles both for those engaged in technological pursuits and for those engaged in moral deliberation.

[23] It is necessary to affirm the four guiding ethical principles present in our Lutheran social teaching. These are not only the living embodiment of the neighbor justice ethic that empowers our social teaching; they also serve the hopes and dreams of the society we aspire to be.[xxiii] Sufficiency (care for the basic needs of all people) requires our society to address and advocate for the needs, both physical and economic, of people harmed by Generative AI and by the coming displacement precipitated by automation. Sustainability (an acceptable quality of life for all generations) requires our society to use AI in ways that not only benefit all people today but also provide viable and lasting assistance to the next generation.

[24] Solidarity (the interdependence of all of creation) requires us to respect the lived experience of women, minorities, people with disabilities, LGBTQIA+ people, the poor and financially struggling, and all who are at the mercy of algorithms created with patriarchal and racist biases. Solidarity also requires us to respect non-human life that is being callously destroyed in our research of new AI applications.[xxiv] Solidarity requires the risks and benefits of this technology and the accompanying challenges be shared by all, including the rich and powerful, those who labor to perfect this technology, and those with the onerous task of creating these algorithms.

[25] Finally, participation (the right of people to actively participate in activities that impact their lives) requires us to be able to control the data mined at our expense, to prohibit profiting through the sale of personal data, and to make autonomous decisions as to what we allow to be used and how. Participation also requires us to benefit from these technologies and not be harmed, neglected or abused by their application and use.

[26] Classical philosophical and theological principles (autonomy, veracity, justice, beneficence, nonmaleficence, integrity, and responsibility) further buttress our ethical proposal. As autonomous moral agents engaged in responsible moral deliberation, people are worthy of respect and dignity. Autonomy demands consent, acquiescence, and self-determination in light of the moral exigencies created by these developing technologies. Closely tied to autonomy is the principle of veracity, or truth-telling. In order to make informed decisions about our digital selves and the information we consume, it is essential that we be informed by truthful and objective data, arguments, algorithms, and media.

[27] Today, justice can best be understood as fairness. It involves equity, shared societal benefits and risks, and a concern for the common good. The ancient Greek philosopher Plato proposed a definition of justice as harmony: “justice as harmony implies that a just society strives to improve the overall quality of life for its citizens.”[xxv] This, too, applies to the just analysis and application of AI programs involved in the economic and financial decisions made on behalf of consumers.

[28] Beneficence and nonmaleficence are utilitarian principles that promote doing good and doing no harm to others. These moral qualities are closely connected to the principles of integrity (honesty and transparency) and responsibility (accountability).

[29] A rule-based ethical proposal for those who work in the arts and humanities was put forward by the Documentary Accountability Working Group (DAWG) in order to stimulate conversation about the use of AI and create an ethical framework for documentarians.[xxvi] Its six core values, adapted here as a practical rule-based approach to Generative AI, are:

  • The integration of anti-oppression practices into work
  • Transparency in relationships
  • Acknowledgement of positionality (where one is located in relationship to various social identities)
  • Respect for the dignity and agency of people
  • Prioritizing of the needs, wellbeing and experience of the people
  • Treating the potential consumer with dignity, care and concern

Conclusion

[30] It is not hyperbolic to state that we are at a crossroads: we can accept the inevitability and ubiquity of Generative AI and use it as a tool to benefit all people, or we can ignore this technology and succumb to the most destructive instincts of those who adopt and use it. Generative AI is a tool that must be harnessed. Its effect on our daily lives is awe-inspiring. The decisions made on our behalf are significant. The consumption of stimulating entertainment is necessary for human flourishing. For these reasons, we require a robust ethical framework that can sustain, support, and benefit all people, not just the solipsistic and self-interested economic hegemonies that dominate our society. Herein lie the potential and challenge of Generative AI.

 

Notes:

[i] Genetics, Faith and Responsibility (Chicago: ELCA, 2011), 1.

[ii] The Church in Society (Chicago: ELCA,  1991), 2.

[iii] Downloadable PDF available. Richard Fletcher and Rasmus Nielsen. What does the public in six countries think of generative AI in the news? University of Oxford. May 28, 2024. https://reutersinstitute.politics.ox.ac.uk/what-does-public-six-countries-think-generative-ai-news

[iv] Kim Martineau. What is generative AI? IBM. April 20, 2023. https://research.ibm.com/blog/what-is-generative-AI

[v] Nick Patience. 2021 AI and machine learning outlook. S&P Global. February 3, 2021. https://www.spglobal.com/marketintelligence/en/news-insights/blog/2021-ai-and-machine-learning-outlook. The percentages break down as follows: Banking 18%, Retail 12%, IT & Communication 18%, Automotive & Transportation 14%, Manufacturing 10%, Advertising & Media 8%, Healthcare 12%, and other 8%.

[vi] Bilal Jaffrey. Connecting meaningfully in the new reality: Hyper-personalizing the customer experience using data, analytics, and AI. Deloitte/Omnia AI. Accessed June 2024.  https://www2.deloitte.com/content/dam/Deloitte/ca/Documents/deloitte-analytics/ca-en-omnia-ai-marketing-pov-fin-jun24-aoda.pdf

[vii] Dan Weijers and Nick Munn, “AI Companions Can Relieve Loneliness—but here are 4 red flags to watch for in your chatbot ‘friend’. The Conversation. May 8, 2024. https://theconversation.com/ai-companions-can-relieve-loneliness-but-here-are-4-red-flags-to-watch-for-in-your-chatbot-friend-227338

[viii] Liberty Vittert. AI girlfriends are ruining an entire generation of men. The Hill. September 26, 2023.    https://thehill.com/opinion/technology/4218666-ai-girlfriends-are-ruining-an-entire-generation-of-men/

Vittert reports that 60% of men are single and one in five have no meaningful friendship. See also, https://futurism.com/the-byte/tech-exec-ai-gf-industry where an industry CEO promotes this as a billion-dollar business opportunity.

[ix] Johnny Dodd. The Moment I Learned Someone Made Deepfake Porn of Me—and How I’m Fighting Back. People Magazine. May 18, 2024. https://people.com/breeze-liu-fighting-back-against-ai-deepfake-porn-8649589

[x] Madyson Fitzgerald. States race to restrict deepfake porn as it becomes easier to create. Stateline.org. April 10, 2024. https://stateline.org/2024/04/10/states-race-to-restrict-deepfake-porn-as-it-becomes-easier-to-create/  At the present moment there is bipartisan legislation in the House and Senate to address deepfake revenge porn. The “Take It Down” bill would require internet providers to take down revenge porn in 48 hours. See Yash Roy, Bipartisan group of Senators targets deepfake revenge porn with new legislation. The Hill. June 18, 2024. https://thehill.com/homenews/senate/4727380-bipartisan-senators-target-deepfake-revenge-porn-with-new-legislation/ The problem lies with the fact that a video may be downloaded or shared hundreds or thousands of times within the 48 hour period.

[xi] Marissa Dellatto. The Highest-Paid Dead Celebrities of 2023. Forbes Magazine. October 23, 2023. https://www.forbes.com/sites/marisadellatto/2023/10/30/highest-paid-dead-celebrities-2023-michael-jackson-elvis-presley-whitney-houston/

[xii] Molly Claire. Is there an ethical way to use holograms in live performance? GRM Daily. June 28, 2022. https://grmdaily.com/hologram-ethics/#:~:text=Putting%20words%20in%20a%20dead,image%20act%20as%20a%20puppeteer.

[xiii] Introducing ChatGPT. Open AI. November 30, 2022. https://openai.com/index/chatgpt/ For a discussion of the ethics of AI, see Cindy Gordon, AI Ethicist Views on ChatGPT. https://www.forbes.com/sites/cindygordon/2023/04/30/ai-ethicist-views-on-chatgpt/

[xiv] Richard Fletcher and Rasmus Nielsen. What does the public in six countries think of generative AI in the news? University of Oxford. May 28, 2024. https://reutersinstitute.politics.ox.ac.uk/what-does-public-six-countries-think-generative-ai-news#:~:text=Averaging%20across%20six%20countries%2C%20we,it%20will%20have%20a%20large

[xv] Molly Kinder. Hollywood writers went on strike to protect their livelihoods from generative AI. Brookings. April 12, 2024. https://www.brookings.edu/articles/hollywood-writers-went-on-strike-to-protect-their-livelihoods-from-generative-ai-their-remarkable-victory-matters-for-all-workers/#:~:text=Among%20their%20list%20of%20demands,complement%E2%80%94not%20replace%E2%80%94them.  Rumors also circulated that studios wanted to use the AI generated likeness of actors and extras in perpetuity without compensation. See, Rebecca Klar. Why actors are fighting for AI protections. The Hill. October 23, 2024. https://thehill.com/policy/technology/4267345-why-actors-are-fighting-for-ai-protections/

[xvi] Helen Rosner. The Ethics of a Deepfake Anthony Bourdain. The New Yorker. July 17, 2024. https://www.newyorker.com/culture/annals-of-gastronomy/the-ethics-of-a-deepfake-anthony-bourdain-voice

[xvii] Jazz Tangcay. Anthony Bourdain’s AI-Faked Voice in New Documentary Sparks Backlash. Variety. July 15, 2021. https://variety.com/2021/artisans/news/anthony-bourdain-fake-voice-roadrunner-documentary-backlash-1235020878/ A second controversy, based on the diaries of Andy Warhol, also emerged. Andrew Rossi, the director of The Andy Warhol Diaries, which premiered on Netflix, also used an AI-generated narrator mimicking Warhol’s voice. What differentiated Rossi from Morgan Neville was the explicit statement, at the beginning of each episode and throughout, that this narration was AI generated. See, Nora McGreevy, Hear an AI Generated Andy Warhol “Read” His Diary to You in New Documentary. Smithsonian Magazine. March 10, 2022. https://www.smithsonianmag.com/smart-news/an-ai-generated-andy-warhol-reads-his-diary-to-you-in-new-documentary-180979658/

[xviii] Ta-Nehisi Coates. The Black Family in the Age of Mass Incarceration. The Atlantic. October 2015. https://www.theatlantic.com/magazine/archive/2015/10/the-black-family-in-the-age-of-mass-incarceration/403246/ See also the YouTube video The Enduring Myth of Black Criminality. https://www.youtube.com/watch?v=cQo-yYhExw0

[xix] Lex Fefegha. Racial Bias and Gender Bias in AI Systems. Medium. September 2, 2018. https://medium.com/thoughts-and-reflections/racial-bias-and-gender-bias-examples-in-ai-systems-7211e4c166a1

[xx] Julia Bossmann. Top 9 Ethical Issues in Artificial Intelligence. World Economic Forum. October 21, 2016. https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/

[xxi] Billy Perrigo. Open AI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time Magazine. January 18, 2023. https://time.com/6247678/openai-chatgpt-kenya-workers/ See also, Andrew Caballero-Reynolds, African Workers Doing Open AI’s Training Say They’re Being Subjected to “Modern Day Slavery” https://futurism.com/the-byte/african-workers-openai-training and Noor Al-Sibai, That AI You’re Using Was Trained by Slave Labor, Basically, https://futurism.com/the-byte/ai-gig-slave-labor.

[xxii] The United Nations Educational, Scientific and Cultural Organization (UNESCO) produced a document in 2021: https://unesdoc.unesco.org/ark:/48223/pf0000381137. The European Union also produced a comprehensive legislative framework on Artificial Intelligence and its usage in May 2024. See https://digital-markets-act.ec.europa.eu/high-level-group-digital-markets-act-public-statement-artificial-intelligence-2024-05-22_en#:~:text=The%20AI%20Act%20approved%20by,AI%20systems%2C%20including%20generative%20AI.

[xxiii] Faith, Sexism, Justice: Conversations toward a Social Statement, Chicago, Il: Evangelical Lutheran Church in America, 2015, Module 2, pp. 36-48. See also, Rodriguez, “Foundations for a Neighbor Justice Ethic.” Currents in Theology and Mission Journal, vol. 47, no. 2, 2020. https://www.currentsjournal.org/index.php/currents/article/view/236

[xxiv] Lloyd Lee. Elon Musk’s claim that no monkey died as a result of Neuralink’s implants contradicts records. Business Insider. September 20, 2023.  https://www.businessinsider.com/elon-musk-neuralink-monkeys-infections-paralysis-brain-swelling-implants-sec-2023-9. See also, Physicians Committee for Responsible Medicine request for the investigation of Neuralink by the SEC.  https://www.documentcloud.org/documents/23986937-sec-request-for-investigation-of-neuralink-20230920

[xxv] Rodriguez, “Foundations for a Neighbor Justice Ethic,” p. 48.

[xxvi] Documentary Accountability Working Group, Core Values for Ethical and Accountable Nonfiction Filmmaking. https://www.docaccountability.org/ Accessed June 2024.

 

William Rodriguez

William Rodriguez is Assistant Professor of Religion and Philosophy at Bethune-Cookman University, Daytona Beach, Florida.