“That which has been is that which will be, And that which has been done is that which will be done. So there is nothing new under the sun.” Ecclesiastes 1:9
Long before Artificial Intelligence (AI), the Bible already dealt with where we are today. Ai first appears in the Old Testament, and just as we grapple with deception in AI today, Ai represented deception for the Old Testament people of God. Here are two verses to guide our thoughts today.
Genesis 12:8, “Then he proceeded from there to the mountain on the east of Bethel, and pitched his tent, with Bethel on the west and Ai on the east; and there he built an altar to the LORD and called upon the name of the LORD.”
Joshua 7:5, “The men of Ai struck down about thirty-six of their men, and pursued them from the gate as far as Shebarim and struck them down on the descent, so the hearts of the people melted and became as water.”
“So there is nothing new under the sun.” Ecclesiastes 1:9c Truer words were never spoken, for God thought of Ai before the AI we have today was created.
Is there a place for AI in the serious Christian’s life? We will start with the Southern Baptist Ethics and Religious Liberty Commission (ERLC), which recently published a treatise on the use of AI in sermon preparation. I am not Baptist; however, I believe the ERLC’s treatise deserves serious consideration and deliberation. Though directed to pastors, it is also relevant to the pew warmer.
From the ERLC’s press release: “Technology is not a theologically or morally neutral tool; it is formative, shaping our behaviors and values, often toward the goal of efficiency… In other words, technology transforms us into a particular type of person whose lives are shaped around its use. Technology also amplifies our virtues and vices, as it can facilitate our behaviors toward honoring God or the ‘lusts of the flesh’ (1 John 2:16). How we use technology matters because it communicates what we deem important as a society.”
I have to grapple with the use of AI in my professional practice. As a tax professional, I now have TaxGPT and other AI-based tools to make the office more efficient. These tools range from research to client intake to document preparation.
However, I have taken multiple ethics continuing education classes which warn against relying upon AI. In one ethics class, the instructor began with his experience in writing a response to an IRS audit. He keyed various parameters into the chat bot, including that he wanted a written audit defense document with citations to relevant Internal Revenue Code sections.
Once he reviewed and checked the citations, he discovered that some of the citations were bogus. He doubled down with the chat bot and requested that it only use real citations. With a revised brief, he found that the citations, though different, were still bogus.
Ultimately, he decided to ask the chat bot why it was including fake citations. The chat bot’s blunt reply was, “Because I thought you wanted them to exist.”
Funny, right? Except it is serious. On a regular basis, we hear of attorneys who submit AI-prepared legal documents and briefs to courts containing bogus case and statutory citations. Some attorneys have even been sanctioned for doing so.
It all made sense to me once I dug more deeply into why this was happening. The chat bot pulled from millions of sources, some of which (and more than we want to admit) contain errors and false information. AI is predictive, and in attempting to predict what we want, when it has seen false information, it predicts that we want false information in its response. The chat bot exposes what we would prefer to remain hidden: when we pull information from false sources, we get false responses.
The solution is simple, sort of. For the chat bot to provide an accurate and genuine response, the data it pulls from must be free from all error and falsehood. This is easier said than done, but one method is to limit the sources the bot may use to generate a response.
As an example, my tax research software is an online subscription, and my subscription is limited to the Internal Revenue Code, Treasury Regulations, U.S. Tax Court cases, the U.S. Master Tax Guide, and articles written by attorneys and accountants. The research bot has only these sources of information from which to retrieve responses. So far, the bot has always provided accurate information when conducting research. From there, I write my own documents and form my own conclusions based on experience, the issue at hand, and the research conducted.
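For readers curious how such a source-limited tool works under the hood, the idea can be sketched in a few lines of Python. The corpus entries, names, and simple word-overlap scoring below are hypothetical stand-ins (real research tools use far more sophisticated retrieval), but the key property is the same: the bot may only quote passages that actually exist in the approved corpus, and it refuses to answer rather than invent.

```python
from typing import Optional
import string

# Hypothetical curated corpus standing in for a subscription research
# library (Internal Revenue Code sections, Treasury Regulations, etc.).
ALLOWED_SOURCES = {
    "irc_sec_61": "Gross income means all income from whatever source derived.",
    "irc_sec_162": "A deduction is allowed for ordinary and necessary business expenses.",
}

def _tokens(text: str) -> set:
    """Lowercase and strip punctuation so simple word overlap works."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def answer_from_sources(question: str) -> Optional[str]:
    """Return the best-matching passage from the allowed corpus, tagged
    with its citation, or None when nothing matches -- refuse rather
    than fabricate an answer."""
    q_words = _tokens(question)
    best_id, best_score = None, 0
    for source_id, text in ALLOWED_SOURCES.items():
        score = len(q_words & _tokens(text))  # count shared words
        if score > best_score:
            best_id, best_score = source_id, score
    if best_id is None:
        return None  # nothing in the approved corpus addresses this
    return f"[{best_id}] {ALLOWED_SOURCES[best_id]}"
```

Because every response is copied verbatim from the approved corpus along with its source identifier, every citation the sketch produces is real by construction; that is the property that separates a restricted research bot from an open-ended chat bot.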
More frequently, I am fielding comments from clients that their internet search has yielded information suggesting they can engage in a particular transaction aimed at saving thousands of dollars in taxes. When I push back using actual tax law, they become defensive, until I can demonstrate that their sources are providing false information.
So, what does all this mean for the Christian? Well, without realizing it, we are pretty much in the same position as the attorney and tax professional. There is most assuredly false teaching and bad theology being provided in any chat bot’s response when we ask it for spiritual guidance or interpretation of Scripture. We know this to be true because there is a host of false teaching, bad theology, and false information circulating on the internet. AI generated responses now even come with a disclaimer that we should verify critical facts. Are we?
In other words, there is unquestionably deceptive information provided when we query chat bots on spiritual matters, answers on which hang eternity in heaven or hell. AI-provided responses can be informative; however, we must be on the alert for deception.
For Joshua, Ai represented deception as well. Joshua 7 is an account rooted in deception. The people were following God’s instruction to take back territory lost while they were in Egypt. The land of Canaan had become overgrown, and those who inhabited the land were engaging in gross sin. One specific command God gave was to not take spoils of battle.
One man believed that spoils taken and hidden in a hole in the ground would surely never be uncovered. At Ai, the battle was lost, and the defeat was stunning. The deception of one soldier created a breach with God, and it cost the whole battalion dearly.
When we surround ourselves with the deception of AI and choose to blindly believe or repeat it, we can be assured that it will cost us dearly as well. If professional occupations are cautioned against the use of AI, Christians seeking guidance must also heed the warning that AI is most assuredly providing deceptive information.
AI may be a tool to help us dig more deeply into a passage of Scripture or a question. However, AI should not be used to formulate spiritual practices, beliefs, or methods to strengthen faith. AI should not be used to prepare messages or Bible studies, and AI should not be used to formulate theology or doctrine.
The ERLC’s document on AI sums up the concern. “AI can assist the pastor in preparation, but it should never be used to replace or substitute for the distinct calling upon the man of God to preach God’s Word to God’s people.”
The distinctives of Christianity are Jesus Christ and the Holy Spirit. No other world religion has a morality formed through conviction in the heart and assurance of pardon provided by the Holy Spirit. We must be guided by the Holy Spirit first and foremost, and chief among our guides is Jesus Christ. God’s Word, the Bible, is living, and it speaks to us, in us, and through us.
Let us lay aside all deception and not allow even technology to dim the work of the Holy Spirit in our lives. Like in the days of Joshua, deception never turns out well for those who profess to follow God.