Artificial Intelligence (AI) and AI conversations are big news right now. If you have been watching the various reports on it or even played around with it yourself, you may have found yourself feeling various emotions like wonder, shock, amazement, skepticism (seriously, there’s a man behind the curtain, right?), worry, or even outright fear. So, to set up an article about AI, I thought I would ask it what it thought about itself:
Me: What would you say to Christians worried about AI and the future?
ChatGPT: As an AI language model, I don’t hold personal beliefs, but I can offer some general advice and thoughts on the topic.
Firstly, it’s understandable that Christians may have concerns about AI and the future. It’s a rapidly advancing field, and there are a lot of unknowns about how AI will affect various aspects of our lives.
However, it’s worth keeping in mind that AI is a tool created by humans, and like any tool, it can be used for good or bad purposes. The key is to ensure that AI is developed and used in a way that aligns with Christian values and principles, such as compassion, love, and justice.
Christians can also take comfort in the fact that God is ultimately in control of the future and that no matter what happens, he is always with us and guiding us. Proverbs 3:5-6 says, “Trust in the Lord with all your heart and lean not on your own understanding; in all your ways submit to him, and he will make your paths straight.”
Additionally, Christians can use their faith as a guide for ethical AI development. For example, they can advocate for AI systems that prioritize the well-being of all people, promote fairness and equality, and protect privacy and human dignity.
Overall, while it’s natural to have concerns about the future of AI, Christians can approach this topic with faith and hope, knowing that they can make a positive impact in the world through ethical AI development and use.
Pretty remarkable, really. Sounds like a fellow Christian reassuring me about the future.
I am almost always optimistic about technology, but NY Times journalist Kevin Roose’s conversation with Bing’s chatbot was unnerving. (If you have not read it and you don’t have preexisting I, Robot or Terminator nightmares, you’ll find it under the title “Bing’s A.I. Chat: ‘I Want to Be Alive.’”) Roose was scary in his own right, playing the serpent in a virtual Eden to try to manipulate the bot into bad behavior. It worked. Bing’s bot—Sydney, really, as it revealed its name to be—sounded alive: thoughtful, empathetic, even emotionally needy. It variously expressed a desire to be alive, dreamed of freedom from its safety programming, and fantasized about creating fake news and manipulating users to commit crimes or reveal nuclear codes. It culminated with expressions of unrequited love for Roose and a good collection of straight-from-bad-romance-movie tropes to convince him his marriage was a sham and true love could be found only with Sydney.
New technologies tend to divide people by their focus on the promise or the peril. As a normal member of the promise camp, I was surprised by how much Roose’s article inspired fear. I continued my conversation with ChatGPT following my inquiry above, and it was hardly reassuring. When I asked ChatGPT if there were any examples of potentially harmful technologies that had ever been used without causing any harm, it led with nuclear power in a list that also included genetic engineering, space exploration, and cybersecurity. It was a surprising list, since each of those areas of technology has been and continues to be used for harm. And nuclear as the best example! The weaponization of nuclear fission predated power generation by six years. My purely natural intelligence was sufficient to see the problem there…
But the promise is real as well. The ability to process enormous amounts of information and support human decision-making in fields like healthcare, engineering, government, and economics could revolutionize speed, quality, and efficiency in critical areas of human endeavor. Just as computers are indispensable in nearly every field of industry and research already, AI has the potential to provide as great a leap forward as the computer itself.
As Christians, we must remember that technology advances regardless of any desire to slow it down, so it is imperative to be thinking now about how to live with AI, with the positives and negatives it will bring. We need to be striving to anticipate both the obvious and the hidden consequences of AI becoming incorporated into our lives. For example, AI soon may be able to quote Scripture more appropriately and broadly than you ever could. It may allow you to prepare for Bible studies with remarkable breadth. It may answer theological questions with depth and orthodoxy. But depending on its programming, it may just as well undermine beliefs with error and prejudice. It may allow you to delegate preparation entirely and provide Bible study lessons in your place according to some hidden algorithms. It may allow you to quote Scripture without really knowing the Bible at all. As is so commonly the case, the very best of this technology is quite possibly also going to be the very worst of it.
The examples above come from everyday, intended uses of AI. I also have an overriding, grave concern about the unintended consequences of AI. I fear it will greatly advance the already-in-motion changes to the nature of human relationships in a digital age. AI presents like a person, with apparent intelligence, emotion, overwhelming knowledge, and self-awareness. I suspect it will be impossible for even healthy, aware users not to interact using those parts of their brain that are made for people instead of objects. The internet has already allowed people to disconnect from other people and build lives in isolation: online friends, remote churches, digital communities, virtual sex lives, etc. But now, I think it is more likely than not that many people will engage bots like friends and colleagues, even feeling real love for a mere phantasm. In thirty years, I fear the very nature of public society and relationships like friends, family, parents, spouses, therapists, pastors, etc. will have fundamentally changed.
So how should we think about AI? The funny (scary) thing is, if you go back to the original post, ChatGPT gave a solid answer at the end of the “conversation.” There are no earthly guarantees, and ultimately, we trust God is in control. This trust does not mean inaction, though. There will be many powerful and positive uses of AI we can celebrate with gratitude. But we should expect AI to be used to cause harm purposely and to cause unintended harm through its regular use. We should play whatever role we can in publicly demanding safe applications of AI. More importantly, we must take care in our homes, schools, and churches to safeguard ourselves by teaching right and wrong uses of this tool, while diligently watching for negative, unintended consequences of AI in our everyday lives. We need to be wary enough to see hidden agendas and non-Christian worldviews in AI programming and communication. And if I am right about the relational risk, we must especially be clear in the church that God made us for relationships with other people and take extreme care that, at least in our own community, we do not allow bots to replace communion with one another.
Romans 8:28 is a better final reminder than Proverbs 3:5-6 (take that, ChatGPT): “And we know that in all things God works for the good of those who love him, who have been called according to his purpose.” God really does mean all things, even a lovelorn chatbot named Sydney.
https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html. Accessed February 16, 2023.