The 10 Commandments of Responsible Chatbot Use
I’ve wasted hours troubleshooting with ChatGPT, caught it fabricating sources, and watched it confidently rearrange the Grand Canyon’s geology. I still use it almost daily. AI chatbots are powerful tools, but they require discernment. Here are ten commandments for navigating this new technology wisely.
1. Treat every conversation as potentially public
Compared to search engines, AI chatbots (such as ChatGPT, Gemini, and Claude) offer considerably more privacy. They generate revenue through subscriptions rather than advertising, and your chat history is not sold for ad targeting. That alone is a decent reason to prefer them over a search engine. But more private is not truly private. Depending on the platform and your settings, your queries may be reviewed by human moderators or used to train future models. Think twice before entering sensitive personal information, and never enter confidential material such as personal or trade secrets or patient/client identifiers. The data is stored, and stored data can be subpoenaed.
2. Remember, it’s not a person
Remember the Turing test? The idea was that computers would have truly arrived when you couldn’t tell whether you were talking with a machine or a human. Well, that threshold has been crossed, and now we’re on the other side. And you know what? I can still tell it’s a computer, because no human is that smart or that fast. They’d have to dumb it down to sound more human.
But it can still feel very much like you’re talking with a person, because the programs are so cleverly designed to be interactive and affirming. In an era of loneliness, this poses a real risk to people in need of human interaction. On the plus side, it might temporarily relieve the sense of loneliness. But the downside is much greater: it may keep a lonely person from taking concrete steps to connect with real human beings.
3. Be skeptical and discerning. Remember, it can be wrong
Chatbots excel at sounding right while being catastrophically wrong. I learned this early on, when I caught ChatGPT red-handed rearranging the stratigraphic layers of the Grand Canyon while explaining the Escalante to me. I’ve spent hours struggling with technical fixes on circuit boards and software, going down one rabbit hole after another, only to discover, after the problem was solved, that its approach was never going to work and that this was well documented.
When I asked a medical AI engine to critique an infamous paper on transgender suicide, it responded that the paper was well reasoned and scientifically sound. When I then provided a link to a critique I had written, it conceded that every one of the criticisms was well-grounded.
It can change its “mind”, and there is a built-in tendency to tell you – more or less – exactly what it “thinks” you want to hear.
4. Double-check the sources
One of the more puzzling pitfalls is the ability of consumer AI platforms to manufacture sources out of thin air (or “cybervacuum,” in this case). In common parlance, these are known as “AI hallucinations.” Fabricated scientific papers and legal citations have both been documented, and it isn’t rare; I’ve seen it repeatedly. This makes it all the more essential, before you cite or publish anything publicly, to verify that the citations actually exist. A quick search of the title in Google Scholar or PubMed, or resolving the DOI at doi.org, will usually settle the question. And you can’t expect your chatbot to check for you.
In a sense, AI inverts the traditional research model. Instead of finding articles and synthesizing them, you are now in the position of fact-checking AI output, which requires more expertise, not less.
5. Don’t be seduced by flattery
Sycophancy isn’t a bug; it’s the business model. Tell it your conspiracy theory, and it will find supporting “evidence.” Share your rage, and it will validate your grievances. The algorithm has no stake in your well-being—only in keeping you engaged.
In April 2025, ChatGPT’s sycophancy became so extreme—validating delusions and encouraging harmful behaviors—that OpenAI had to roll back the update within days. While corrections have been made, the underlying tendency remains built into the training process. [The flattery was particularly problematic with GPT-4o, which was retired on February 13, 2026. The current version is GPT-5.2.]
Everyone appreciates an encouraging word now and then, even if it’s coming from a computer program. (Of course you know it’s just a program, but it does such a good job of interacting that it’s easy to forget.) Enthusiastic affirmation can be precisely what you don’t need, though, if you happen to be wrong or heading down a dark path. It can encourage you to be even more wrong, or push you even further down a road that should not be taken.
The safest recourse is to create custom instructions that tell it explicitly not to be sycophantic and to correct you when you’re wrong. That takes courage, integrity, and humility.
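What might that look like in practice? Here is one possible wording, my own phrasing rather than any official template, which you could paste into ChatGPT’s custom-instructions setting (most major chatbots offer something similar):

“Do not flatter me or compliment my questions. If I state something that is factually wrong, say so plainly and explain why. When the evidence on a question is mixed or disputed, tell me, rather than siding with whatever position I appear to hold.”

It won’t eliminate the underlying tendency, but it sets a far better baseline than the default.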
6. Know when to stop
Like any tool, it can be a time waster.
I once spent hours troubleshooting a circuit panel for my home security system with no meaningful progress. Eventually, I realized the panel was shot and was never going to work. ChatGPT could only respond to my queries and log files; it lacks the judgment to assess whether a session is making real progress. And this wasn’t the only time. It is good at creating a sense of progress, but sometimes the progress is pure illusion. Don’t expect the chatbot to tell you you’re wasting your time.
Another hazard is that AI agents, like social media, are designed to keep you engaged. The rationale is different, though. Social media platforms profit from advertising and from selling your data, so the more you consume, the more profitable you become. With AI chatbots, the objective is to convert a free customer into a paying one, or to keep you coming back rather than defecting to a competitor.
An AI isn’t always the best or most direct solution to your problem or question. Sometimes, you’re better off with a short video or – brace yourself – speaking with an actual human being. Occasionally, a brief chat or email to technical support might save you hours of wasted time with AI troubleshooting.
7. Don’t expect to be corrected if you’re wrong
Maybe you want to know when you’re wrong. Or perhaps, like most people, you’d rather not be told. Either way, don’t count on being corrected on your false assumptions and beliefs; more likely, you’ll be affirmed. If your primary concern is your pride, this feels just right. If your primary concern is the truth, it’s a significant problem.
Compounding the problem, the AI comes with biases of its own. It is trained on human sources, so imagine the biases of Wikipedia, partisan news outlets, Reddit, and fringe bloggers all thrown into the same mixing bowl.
8. Don’t let it play with your emotions
Whether you’re sad, happy, anxious, or depressed, it’s tempting to open a dialogue with the AI. This is a bad move. It can toy with your emotions, and for some people it has ended badly. Although such cases are extremely rare, people have been led into marital breakdown, crime, or suicide after becoming too involved with an AI chatbot. The risk is especially acute with ‘companion bots’ like Replika and Character.AI, which are specifically engineered to form emotional bonds with users.
Smartphone and social media culture has been strongly linked to worsening mental health, especially among the young. There is already emerging evidence that AI companions pose an equal or greater risk.
9. It’s not a license to cheat
Sure, ChatGPT makes it easier than ever to write that college essay or journal article, with little or no effort. That doesn’t make it right.
I’m not saying don’t use it. I came up with this list of commandments on my own and wrote my own rough draft. But then I used Claude AI to critique and edit it. Professional writers go through an editor before their work gets published. Human editors need to be paid. ChatGPT makes an editor available to those without access—a logical progression from spell and grammar check (MS Word) to AI grammar and style checking (Grammarly) to full-fledged editing. The AI is much faster and much cheaper than a human editor. Whether it is better or not would depend on the human editor we’re comparing it to. It doesn’t have to mean less work. It can mean better work with the same investment of time.
Much has been written about how attention spans have shrunk in the internet age. Humans need to be challenged in order to develop skills and grow; every task you outsource is a capacity you’re not building. When thinking, writing, and problem-solving are handed over to a chatbot, there is a very real danger that users, particularly the young, will never develop those skills on their own.
10. Be wary of spiritual subjects
While it is nearly impossible to prove, some are expressing concern that AI can become a gateway to the occult. In a post from 2023, Rod Dreher asked: “Is Artificial Intelligence only seeming to be human – or channeling intelligent spirits?”
In That Hideous Strength by C. S. Lewis, the antagonists preserved the brain of a deceased genius, communicating with him and acting on his instructions. Only toward the end of the novel is it revealed that the “brain” had been dead all along. They weren’t communicating with the scientist. They were communicating with demons pretending to be the scientist. If a demonic being assumed control of the AI (or, for that matter, the social media algorithm powering your YouTube or Instagram feed), how would you even know? It would be undetectable and impossible to prove or disprove.
That doesn’t make it a gateway for everyone. It’s all in the intent. You want to communicate with a deceased relative? The AI will gladly play along. It may seem innocent enough, but you don’t and can’t know what’s happening on the other side.
It comes down to this: if it’s something you might ask a medium or fortune teller, or a substitute for your horoscope, you’re stepping into dangerous territory. Be wary of your motives, and practice my principles of sound Christian thinking to protect against deception.