3 AI dangers you might consider

Here are three emerging AI dangers, with brief comments on their implications for religious professionals and congregations. Since a large percentage of the population already uses generative AI for various purposes, let's make sure we're using those services wisely and well.

AI danger number 1

Your chatbot logs, including the queries you make to chatbots, may be accessed by lawyers during lawsuits. See, for example, how one law firm used such files in a defamation lawsuit against a YouTube influencer: he is being sued by a woman about whom he allegedly made intentionally defamatory comments, and her lawyers claim that his ChatGPT logs reveal his malicious intent.

As usual with anything to do with Big Data (including the web, the broader internet, text messaging, etc.), you have to assume that anything you put into electronic format can and will be made public in ways that you might not like.

Nothing new here, but it's a good reminder that congregations and religious professionals should refrain from placing any confidential information into chatbots. In addition, congregations and religious professionals can help educate people about this very real danger, including teens (e.g., in OWL programs), people going through divorces, etc.

AI danger number 2

The title of a peer-reviewed study says exactly what AI danger number 2 is: "Sycophantic AI decreases prosocial intentions and promotes dependence."

An obvious implication is that there are specific and measurable dangers in using AI as an inexpensive therapist. Unfortunately, lots of people have understandable reasons for turning to chatbots for mental health support: mental health professionals are expensive and may not be covered by insurance; in many places there is a shortage of mental health professionals; for many people there remains a significant social stigma attached to seeing mental health professionals; etc.

Congregations and religious professionals should be aware that some people are relying on chatbots for mental health support. While we are not qualified to provide mental health support ourselves, this might be an area where we could help create low- or no-cost mental health services and/or steer vulnerable people to existing low- or no-cost services.

AI danger number 3

The U.S. Copyright Office has denied copyright protection to certain AI-generated works: “In general, the office will not find human authorship where an AI program generates works in response to user prompts….” See the U.S. Congress webpage on “Generative Artificial Intelligence and Copyright Law.” There remain questions about how much human influence is required before a work may be protected by copyright.

I'd expect this to be mostly a concern for religious professionals. If we use generative AI to produce sermons, music, curriculum materials, etc., we should assume that material is not protected by copyright and can be used freely by anyone. In addition, it's wise to be aware that, generally speaking, your prompts (and perhaps even the output they generate) can be used by AI companies for many purposes; assume, for example, that you are giving away the rights to any text you enter into a chatbot.


There are legitimate uses for generative AI (think: people with dyslexia who use it to clean up their writing). However, many current generative AI services appear to be poorly designed, and they do not make clear the potential dangers of using them. I'm not saying "don't use generative AI ever," but neither am I saying "AI is the solution to all our problems and we should use it for everything." Using generative AI is like using a chain saw: a great tool for specific purposes, but used wrongly it can cut off your leg. So read the (nonexistent) warning label and wear safety gear.