The advent of artificial intelligence marks the end of human endeavor in church and society. Maybe not now, and maybe not in 10 years (if humanity lives that long), but sooner than we’re prepared to cope with it.
There’s small comfort in knowing more than one person has posed this hypothesis. Philosopher and historian Yuval Noah Harari, possibly the smartest and wisest man on the planet, issued a warning in April 2023 about AI’s potential to annihilate humanity.
As he wrote in The Economist: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilization. Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artifacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artifacts we created by inventing myths and writing scriptures. What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images and writing laws and scriptures?”
Now, nine months after Harari’s article, the New Hampshire presidential primary was threatened by a fake robocall impersonating President Joe Biden that told Democrats not to vote. An article from The Conversation points out how hard it is, and will be, to defend against AI’s misuse in this manner.
Uncritical adoption
The extent to which Harari’s warning is being ignored distresses me and other thoughtful people. AI is being adopted enthusiastically and apparently uncritically even by religious institutions.
A recent 90-minute webinar produced by a church agency offered a telling example. A majority of those who participated, this writer included, had little or no experience using the recommended AI tool, ChatGPT. After brief sessions explaining the history of AI and some of the ethical concerns about its use, the webinar centered on how to use AI without delving substantively into why.
“What happens when churches no longer need a seminary-educated pastor with the capacity to think and teach critically about faith and life?”
The emphasis upon AI’s functionality as the “why” failed to address the deeper question: What happens when churches no longer need a seminary-educated pastor with the capacity to think and teach critically about faith and life because AI feeds them instant machine-generated answers?
The prospects for widespread AI invasion of human activity are frightening. Along with Harari’s article, The Economist offered an eight-minute video discussion between the Israeli historian and British entrepreneur Mustafa Suleyman, head of the AI development company Inflection AI, about the daunting prospect of trying to control AI’s use.
Extolling AI’s future capacity, Suleyman contended, “This is a moment when we in the West have to establish our values and stand behind them” to regulate artificial intelligence.
Harari countered, “This is the moment that will end human-dominated history. … What we just heard is Mustafa telling us that in five years there will be a technology that can make decisions independently and can create new ideas independently.”
Pernicious and vulnerable
For all the enthusiasm with which webinar facilitators everywhere are promoting AI, the technology remains both pernicious and vulnerable. It’s pernicious in its pervasiveness; there is hardly a computer tool these days not augmented by some form of AI. It’s pernicious in its invasion of privacy, gathering data from every input without care for the potential harm. And it’s pernicious in its capacity to take over creative work that heretofore only humans have done, despite the caveats of promoters who insist that humans retain control over AI’s use.
Artificial intelligence applications are vulnerable every nanosecond as they gather more data — both factual and inaccurate — about human thoughts and actions. How long before AI chooses to act on its own without a human prompt? Or without human restraints?
Soul threat
Beyond the tool’s mechanics, AI poses an existential threat to the human soul. We’re living through an era of widespread polarization where we can’t agree on common values. It’s not enough to acknowledge blithely that we humans are both sinful and good as we embrace AI’s capabilities. How can we think we can hold back an “alien intelligence” (as Harari phrases it) that can outthink us and soon may be able to out-act us?
As we’ve been told countless times before through our human myths and metaphors, we’ve been so enamored with finding out whether we could create a tool such as artificial intelligence that we’ve ignored the crucial moral question of whether we should create something so powerful and life-threatening. We humans are not known for our self-restraint; if we were, there would have been no disobedience in Eden.
Help for churches?
Could AI help local churches? Possibly, if a church has someone willing to invest the long hours it will take to learn how to use AI efficiently and effectively.
“One congregation in Austin, Texas, found that an AI-generated worship service lacked sufficient heart and soul to inspire church members.”
Should a church use AI? The answer to this larger question will depend on the church’s goal, with a primary benchmark: How will humans be helped or harmed by its use? At least one congregation in Austin, Texas, found that an AI-generated worship service lacked sufficient heart and soul to inspire church members.
Unfortunately, despite continued warnings and a universal scramble to set up regulations on its use, AI development is too advanced to put the bytes back into the microchip. The technology is being developed so fast that a book written about AI last spring was outdated by its summer publication, according to one webinar facilitator.
Our task now must be to question every AI use with the same goal in mind: What will that action’s effect be on the humans toward whom it is directed? Will AI rescue a wounded Palestinian child in Gaza or an injured child in Ukraine? Will it feed starving millions in Africa? Will it intervene before China invades Taiwan, or before political unrest tears apart Latin America? Or will lies developed by AI such as the New Hampshire robocall fake exacerbate or ignite such conflicts?
Will AI inspire us to love God and our neighbor better? Will AI show us how to follow Jesus more closely in our turbulent times? Most of all: Beyond nuclear bombs or climate change, have we created the true engine of our own destruction in artificial intelligence? If so, is there time enough to rein it in before our creation kills us?
Cynthia B. Astle is a veteran religion journalist who has reported on The United Methodist Church at all levels for 36 years. She serves as editor of United Methodist Insight, an online journal she founded in 2011.
Related articles:
This article was written using ChatGPT AI | Opinion by Mallory Challis
AI ‘deep fakes’ are creating a new way of violating women’s privacy on the internet | Analysis by Mallory Challis
Reverend Roboto: Artificial intelligence and pastoral care | Analysis by Kristen Thomason