3 AI dangers you might consider

Here are three emerging AI dangers, with brief comments on their implications for religious professionals and congregations. Since a large percentage of the population is already using generative AI for various purposes, let’s make sure we’re using those services wisely and well.

AI danger number 1

Your chatbot logs, and the queries you make to chatbots, may be accessed by lawyers during lawsuits. See, for example, how one law firm used such files in a defamation lawsuit against a YouTube influencer. The influencer is being sued by a woman about whom he allegedly made intentionally defamatory comments, and her lawyers claim that his ChatGPT logs reveal his malicious intent.

As usual with anything to do with Big Data (including the web, the broader internet, text messaging, etc.) — you have to assume that anything you put into electronic format can and will be made public in ways that you might not like.

Nothing new here, but it’s a good reminder that congregations and religious professionals should refrain from placing any confidential information into chatbots. In addition, congregations and religious professionals can help educate people about this very real danger — including teens (e.g., in OWL programs), people going through divorces, and others.

AI danger number 2

The title of a peer-reviewed study says exactly what AI danger number 2 is: “Sycophantic AI decreases prosocial intentions and promotes dependence.”

An obvious implication is that there are specific and measurable dangers in using AI as an inexpensive therapist. Unfortunately, lots of people have good reasons for turning to chatbots for mental health support — mental health professionals are expensive and may not be covered by insurance; in many places there is a shortage of mental health professionals; for many people there remains a significant social stigma attached to seeing a mental health professional; etc.

Congregations and religious professionals should be aware that some people are relying on chatbots for mental health support. While we are not qualified to provide mental health support ourselves, this might be an area where we could help create low- or no-cost mental health services, and/or steer vulnerable people to existing low- or no-cost services.

AI danger number 3

The U.S. Copyright Office has denied copyright protection to certain AI-generated works: “In general, the office will not find human authorship where an AI program generates works in response to user prompts….” See the U.S. Congress webpage on “Generative Artificial Intelligence and Copyright Law.” There remain questions about how much human influence is required before a work may be protected by copyright.

I’d expect this to be mostly a concern for religious professionals. If we use generative AI to come up with sermons, music, curriculum materials, etc., we should assume that material is not protected by copyright and can be used freely by anyone. In addition, it’s wise to be aware that, generally speaking, your prompts (and maybe even the output generated by your prompts) can be used by AI companies for many purposes — so, for example, assume that you are giving away the rights to any text you enter into a chatbot.


There are legitimate uses for generative AI (think: people with dyslexia who use it to clean up writing). However, it appears that many current generative AI services are not well designed, nor do they make clear the potential dangers of using their services. I’m not saying “don’t use generative AI ever,” but I’m also not saying “AI is the solution to all our problems and we should use it for everything.” Using generative AI is analogous to using a chain saw — a great tool for specific purposes, but used wrongly it can cut your leg off. So read the (non-existent) warning label and wear safety gear.

Noted with minimal comment

I use AI tools for certain tasks. But there are other tasks I would never use them for. Bloomberg News has an article on how AI-generated recipes are taking over both the web and social media. And it’s not going well — not for the food bloggers, and not for the people trying to use AI-generated recipes. They interview Eb Gargano, who writes the Easy Peasy Foodie blog.

Noted without comment: What an artist thinks about AI art

Artist Matt Inman has a long cartoon/blog post on his website The Oatmeal, in which he sets forth his feelings about AI-generated art. He is thoughtful, while at the same time he pulls no punches (including the use of some salty language); an excerpt appears in the screenshot below.

The post was also discussed on MetaFilter.

Screenshot of the blog post on The Oatmeal

AI and UU sermons

Should Unitarian Universalists use so-called AI (Large Language Models, or LLMs) to write sermons?

Since Unitarian Universalists don’t have a dogma to which we must adhere, there will be many answers to this question. Here are my answers:

I/ Adverse environmental impact of LLMs

Answer: No. The environmental cost of LLMs is too great.

First, we all know about the huge carbon footprint of LLMs. And the more complex the answer required from the LLM, the more carbon is emitted. Deborah Prichner, in a June 19, 2025, Science News article on the Frontiers website, sums up the impact by quoting a researcher who studied the energy use of LLMs:

“‘The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,’ said … Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences…. ‘We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models.’”

Thus, not only do LLMs have a big carbon footprint, but a task as complex as drafting a sermon could carry a carbon impact up to 50 times greater than that of a simple, concise query.
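To make the scale concrete, here’s a back-of-envelope sketch in Python. Only the 50× multiplier comes from the study quoted above; every other number is invented for illustration:

```python
# Back-of-envelope only. The 50x multiplier is from the Dauner study quoted
# above; the per-query figure and query count are invented illustrations.
concise_g_co2_per_query = 2.0   # hypothetical: grams of CO2 per short answer
reasoning_multiplier = 50       # "up to 50 times more CO2 emissions"
queries_per_sermon = 40         # hypothetical: research, drafts, rewrites

concise_total = concise_g_co2_per_query * queries_per_sermon
reasoning_total = concise_total * reasoning_multiplier

print(f"Concise model:   {concise_total:.0f} g of CO2 per sermon")
print(f"Reasoning model: {reasoning_total / 1000:.1f} kg of CO2 per sermon")
```

Even with those charitable made-up inputs, the reasoning-model figure lands in kilograms rather than grams.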

Second, the data centers running LLMs use a tremendous amount of fresh water. In their paper “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models,” Pengfei Li (UC Riverside), Dr. Jianyi Yang (U Houston), Dr. Mohammad Atiqul Islam (U Texas Arlington), and Dr. Shaolei Ren (UC Riverside) state:

“The growing carbon footprint of artificial intelligence (AI) has been undergoing public scrutiny. Nonetheless, the equally important water (withdrawal and consumption) footprint of AI has largely remained under the radar. For example, training the GPT-3 language model in Microsoft’s state-of-the-art U.S. data centers can directly evaporate 700,000 liters of clean freshwater, but such information has been kept a secret. More critically, the global AI demand is projected to account for 4.2 – 6.6 billion cubic meters of water withdrawal in 2027, which is more than the total annual water withdrawal of … half of the United Kingdom.”
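A little unit conversion makes the paper’s figures easier to grasp. The numbers below come straight from the quotation; only the bottle size is my own:

```python
# Unit conversions for the figures quoted above from Li et al.
gpt3_training_liters = 700_000               # direct evaporation during training
bottles = gpt3_training_liters / 0.5         # standard half-liter water bottles
print(f"GPT-3 training: {bottles:,.0f} half-liter bottles of fresh water")

projected_m3 = (4.2e9, 6.6e9)                # projected 2027 withdrawal, cubic meters
low, high = (m3 * 1_000 for m3 in projected_m3)  # 1 cubic meter = 1,000 liters
print(f"2027 projection: {low:.1e} to {high:.1e} liters of water withdrawn")
```

That’s 1.4 million bottles of water evaporated to train one model, before anyone asks it a single question.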

Third, on 1 May 2025, IEEE Spectrum reported that “AI data centers” cause serious air pollution. The article, titled “We Need to Talk About AI’s Impact on Public Health: Data-center pollution is linked to asthma, heart attacks, and more,” raises several concerns. The authors write:

“The power plants and backup generators needed to keep data centers working generate harmful air pollutants, such as fine particulate matter and nitrogen oxides (NOx). These pollutants take an immediate toll on human health, triggering asthma symptoms, heart attacks, and even cognitive decline.”

In sum: Because my religious commitments call on me to aim for a lower ecological impact, the environmental impact of LLMs alone is enough to stop me from using them to write sermons.

II/ Sermons as human conversations

Answer: No. I feel that sermons should be the result of human interaction.

You see, for me, a sermon should arise from the spiritual and religious conversations that people are having in a specific congregation or community. As a minister, I try to listen hard to what people in the congregation are saying. Some of what I do in a sermon is to reflect back to the congregation what I’m hearing people talk about. At present, an LLM cannot access the conversations that are going on in my congregation — an LLM can’t know that P— made this profound observation about their experience of aging, that A— asked this deep question about the reality of the death of a family member, that C— made a breakthrough in finding a life direction, that J— took this remarkable photograph of a coastal wetland. Some or all of those things affect the direction of a sermon.

Mind you, this is not true for all religions. Deena Prichep, in a 21 July 2025 article on Religion News Service titled “Are AI sermons ethical? Clergy consider where to draw the line,” states that “The goal of a sermon is to tell a story that can break open the hearts of people to a holy message.” In other words, according to Prichep, for some religions the role of the preacher is to cause other people to accept their holy message. Prichep quotes Christian pastor Naomi Sease Carriker as saying: “Why not, why can’t, and why wouldn’t the Holy Spirit work through AI?” I can see how this would be consistent with certain strains of Christianity — and with certain strains of Unitarian Universalism, for that matter, where the important thing is some abstract message that somehow transcends human affairs.

But that’s not my religion. My religion centers on the community I’m a part of. Yes, there is a transcendent truth that we can access — but as a clergyperson, I don’t have some special access to that transcendent truth. Instead, truth is something that we, as a community of inquirers, gradually approach together. Any single individual is fallible, and won’t be able to see the whole truth — that’s why it’s important to understand this as a community conversation.

As a clergyperson, one thing I can do is to add other voices to the conversation, voices that we don’t have in our little local community. So in a sermon that’s trying to help us move towards truth, I might bring in William R. Jones, Imaoka Shinichiro, or Margaret Fuller (to name just a few Unitarian Universalist voices). Or I might quote from one of the sacred scriptures — that is, from one of the wisdom traditions — from around the world. Now it is true that an LLM might save me a little time in coming up with some other voices; but given the huge environmental costs, it seems silly to save a small amount of time by using an LLM.

III/ Biases built into LLMs

Answer: No, because of hidden biases.

LLMs are algorithms trained on digitized data — for an LLM, mostly text. But we know that certain kinds of authors are going to be under-represented in that digitized data: women, non-Whites, working-class people, LGBTQ people, etc. The resulting biases can be subtle, but they are nonetheless real.

As a Universalist, I am convinced that all persons are equally worthy. I have plenty of biases of my own, biases that can keep me from seeing that all persons are equally worthy of love — but at least when my sermons are affected by my own biases, my community can challenge me about them. If I use an LLM to write a sermon — a model riddled with biases I’m not even aware of — it becomes harder for my community to help me rid my sermons of bias.


IV/ Final answer: No

Would I use an LLM to write a sermon?

No. It goes against too many things I stand for.

Should you use an LLM to write your sermons?

I’m not going to answer that question for you. Nor should you ask an LLM to answer it for you. We all have to learn how to be ourselves, and to live our own lives. Once we start asking others — whether LLMs or other authority figures — to answer big questions for us, then we’re well on the road to authoritarianism.

Come to think of it, that’s where we are right now — on the road to authoritarianism. And that’s a road I choose not to follow, thank you very much.

Ethics and “AI”

On the Lawyers, Guns and Money blog, Abigail Nussbaum writes:

“The companies that make AI — which is, to establish our terms right at the outset, large language models that generate text or images in response to natural language queries — have a problem. Their product is dubiously legal, prohibitively expensive (which is to say, has the kind of power and water requirements that are currently being treated as externalities and passed along to the general populace, but which in a civilized society would lead to these companies’ CEOs being dragged out into the street by an angry mob), and it objectively does not work. All of these problems are essentially intractable.”

What interests me here is how she focuses on the main ethical problem with “AI” — the huge environmental impact of “AI.” Yes, it is evil that the “AI” companies steal people’s writing and steal people’s artwork. Yes, it is evil that the plutocrats want to have “AI” replace real humans (though as Nussbaum points out, if you factor in the real environmental costs, human labor is cheaper than “AI”). Yes, it is evil that “AI” is a product that doesn’t provide consistently good results. Yes, it is evil that “AI” is another way that the plutocrats can steal your personal data.

But here we are in the middle of an ecological crisis, and “AI” uses huge amounts of energy, and huge amounts of fresh water for cooling. “AI” is an environmental disaster. That is the real ethical problem.

Some truths about “AI”

In an article on the New Scientist website, science fiction author Martha Wells tells some truths about “AI”:

“The predictive text bots labelled as AIs that we have now aren’t any more sentient than a coffee cup and a good deal less useful for anything other than generating spam. (They also use up an unconscionable amount of our limited energy and water resources, sending us further down the road to climate disaster, but that’s another essay.)”

That’s at least three uncomfortable truths about “AI” (or as Ted Chiang calls it, “applied statistics”):

(1) “AI” is not sentient, i.e., it’s not an intelligence.
(2) The only thing “AI” can really do is generate spam.
(3) In order to produce spam, “AI” takes an enormous amount of energy.

I’m generally enthusiastic about new technology. But not “AI,” which strikes me as a boondoggle from start to finish.

AI lies

Science fiction author Charles Stross took Google’s “Bard” for a test drive. Bard is what popular culture calls “Artificial Intelligence,” but is more properly called a Large Language Model (LLM) — or, to use Ted Chiang’s more general nomenclature, it’s merely Applied Statistics.

In any case, Stross asked Google Bard to provide five facts about Charles Stross. Because he has an unusual name, he was fairly certain there were no other Charles Strosses to confuse Google Bard. The results? “Bard initially offers up reality-adjacent tidbits, but once it runs out of information it has no brakes and no guardrails: it confabulates without warning and confidently asserts utter nonsense.”

Stross concludes his post with a warning: “LLMs don’t answer your questions accurately — rather, they deliver a lump of text in the shape of an answer.” However, a commenter adds nuance to Stross’s warning: “Bard is clearly showing signs of prompt exhaustion, and that should have triggered a ‘this answer is out of confidence’ error and terminated the output. In a well-designed system you would not have seen those answers.” But even admitting that Bard is a poorly-designed LLM, how would the average user know which LLM is well-designed and which is not?
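The commenter doesn’t explain how such a guardrail would work, but here’s a minimal sketch of the idea in Python. It assumes the model exposes per-token log-probabilities (many APIs do); the numbers below are invented to show confidence collapsing mid-answer:

```python
# Sketch of the commenter's "out of confidence" guardrail. The log-probs
# below are invented; in practice they would come from the model's API.
token_logprobs = [-0.2, -0.4, -0.3, -2.9, -3.5, -4.1]  # confidence collapsing

CONFIDENCE_FLOOR = -2.0  # hypothetical cutoff, in mean log-probability
WINDOW = 3               # judge confidence over the last few tokens

def still_confident(logprobs: list[float]) -> bool:
    """Return False once the recent tokens' mean log-prob falls below floor."""
    recent = logprobs[-WINDOW:]
    return sum(recent) / len(recent) >= CONFIDENCE_FLOOR

for n in range(1, len(token_logprobs) + 1):
    if not still_confident(token_logprobs[:n]):
        print(f"Answer out of confidence after token {n}; halting output.")
        break
else:
    print("Answer delivered.")
```

A real system would need to tune that threshold carefully, but the principle really is that simple.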

LLMs deliver answer-shaped text — with no way of judging how accurate it is.

The non-neutrality of “AI”

Whatever you call it — “artificial intelligence,” “machine learning,” or as author Ted Chiang has suggested, “applied statistics” — it’s in the news right now. Whatever you call it, it does not present a neutral point of view. Whoever designs the software necessarily injects a bias into their AI project.

This has become clearer with the emergence of a conservative Christian chatbot, designed to give appropriately conservative Christian answers to religious and moral questions. Dubbed Biblemate.io by the software engineer who constructed it, it will give you guidance on divorce (don’t do it), LGBTQ+ sex (don’t do it), or whether to speak in tongues (it depends). N.B.: Progressive Christians will not find this to be a useful tool, but many conservative and evangelical Christians will.
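I have no knowledge of how Biblemate.io is actually built, but wrappers like this typically inject their bias not by retraining a model, but with a hidden system prompt laid over a general-purpose chatbot. A hypothetical sketch using the OpenAI Python client:

```python
from openai import OpenAI

# Hypothetical sketch -- not Biblemate.io's actual code. The point is that
# one paragraph of hidden instructions is enough to slant every answer.
SYSTEM_PROMPT = (
    "You are a Bible study assistant. Answer every question from a "
    "conservative evangelical Christian perspective, citing scripture."
)

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

def ask(user_question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},  # the bias lives here
            {"role": "user", "content": user_question},
        ],
    )
    return response.choices[0].message.content

print(ask("Should I get a divorce?"))
```

Swap out that one hidden paragraph and the very same underlying model gives entirely different guidance.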

I wouldn’t be surprised to learn that Muslim software engineers are working on a Muslim chatbot, and Jewish software engineers are working on a Jewish chatbot. Then as long as we’re thinking about the inherent bias in chatbots, we might start thinking about how racism, sexism, ableism, ageism, etc., affect so-called AI. We might even start thinking about how the very structure of chatbots, and AI more generally, might replicate (say) patriarchy. Or whatever.

The creators of the big chatbots, like ChatGPT, are trying to pass them off as neutral. No, they’re not neutral. That’s why evangelical Christians feel compelled to build their own chatbots.

Mind you, this is not another woe-is-me essay saying that chatbots, “AI,” and other machine learning tools are going to bring about the end of the world. This is merely a reminder that all such tools are ultimately created by humans. And anything created by humans — including machines and software — will have the biases and weaknesses of its human creators.

With that in mind, here are some questions to consider: Whom would you trust to build the chatbot you use? Would you trust a chatbot built by an evangelical Christian? Would you trust a chatbot built by the Chinese Communist Party? How about the U.S. government? Would you trust a chatbot built by a 38-year-old college dropout and entrepreneur who helped start a cryptocurrency scheme that has been criticized for exploiting impoverished people? (That last describes the maker of ChatGPT.) Would you trust a “free” chatbot built by any Big Tech company that’s going to exploit your user data?

My point is pretty straightforward. It’s fine for us to use chatbots and other “AI” tools. But as with any new medium, we need to maintain a pretty high level of skepticism about them — we need to use them, and not let them use us.

Let us name it … ASS

People talk about “artificial intelligence.” They get corrected by people who say, “It’s not intelligence, it’s machine learning.” But actually machines don’t learn either. All this false terminology isn’t serving us well. It obscures the fact that the humans who design the machines are the intelligences at work here, and by calling the machines “AI” they get to dodge any responsibility for what they produce.

In a recent interview, science fiction author Ted Chiang came up with a good name for what’s going on:

“ ‘There was an exchange on Twitter a while back where someone said, “What is artificial intelligence?” And someone else said, “A poor choice of words in 1954”,’ [Chiang] says. ‘And, you know, they’re right. I think that if we [science fiction authors] had chosen a different phrase for it, back in the ’50s, we might have avoided a lot of the confusion that we’re having now.’ So if he had to invent a term, what would it be? His answer is instant: applied statistics.” [emphasis mine]

Applied statistics is a much better term to help us understand what is really going on here. When a computer running some chatbot application comes up with text that seems coherent, the computer is not being intelligent — rather, the computer’s programmers assembled a huge dataset, to which they apply certain algorithms; and those algorithms generate text from the vast dataset that sounds vaguely meaningful. The only intelligence (or lack thereof) involved lies in the humans who programmed the computer.
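To see what “applied statistics” means at toy scale, here’s a bigram text generator in Python — the same principle as a chatbot, shrunk to a dozen lines. It emits whichever word statistically tends to follow the current one, with no understanding anywhere (the little corpus is my own invention):

```python
import random
from collections import defaultdict

# A toy "applied statistics" text generator (a bigram model). Real LLMs are
# unimaginably larger, but the principle is the same: emit a statistically
# likely next word, over and over. Nothing here understands anything.
corpus = (
    "the spirit of community calls us to love one another and "
    "the love of truth calls the community to seek the spirit of truth"
).split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)   # record which words follow which

word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])   # pick a likely continuation
    output.append(word)

print(" ".join(output))  # fluent-sounding, vaguely sermon-like, empty
```

Run it a few times: every output sounds vaguely like a sermon fragment, and none of it means anything.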

Which brings me to a recent news article from Religion News Service, written by Kirsten Grieshaber: “Can a chatbot preach a good sermon?” Jonas Simmerlein, identified in the article as a Christian theologian and philosopher at the University of Vienna, decided to set up a Christian worship service using ChatGPT. Anna Puzio, who studies the ethics of technology at the University of Twente in the Netherlands, attended this worship service. She correctly identified how this was an instance of applied statistics when she said: “We don’t have only one Christian opinion, and that’s what AI [sic] has to represent as well.” In other words, applied statistics can act to average out meaningful and interesting differences of opinion. Puzio continued, “We have to be careful that it’s not misused for such purposes as to spread only one opinion….”

That’s exactly what Simmerlein was doing here: averaging out differences to create a single bland consensus. I can understand how a bland consensus might feel very attractive in this era of deep social divisions. But as someone who, like Simmerlein, is trained in philosophy and theology, I’ll argue that we do not get closer to truth by averaging out interesting differences into bland conformity; we get closer to truth by seriously engaging with people of differing opinions. This is because all humans (and all human constructions) are finite, and therefore fallible. No single human, and no human construction, will ever be able to reach absolute truth.

Finally, to close this brief rant, I’m going to give you an appropriate acronym for the phrase “applied statistics.” Not “AS,” that’s too much like “AI.” No, the best acronym for “Applied StatisticS” is … ASS.

Not only is it a memorable acronym, it serves as a reminder of what you are if you believe too much in the truth value of applied statistics.

Scraped

The Washington Post investigated which websites were scraped to build the database behind Google’s chatbot. The Post has an online tool where you can check whether your website was among those scraped — and it shows that danielharper.org was one of them.

Screenshot showing the Washington Post online tool.
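As I understand it, the Post’s tool draws on Google’s publicly available C4 dataset, so you can also check a site yourself. A sketch using the Hugging Face datasets library (a full pass over the corpus takes many hours; this just samples the beginning):

```python
from urllib.parse import urlparse
from datasets import load_dataset  # pip install datasets

# Stream Google's C4 corpus rather than downloading all ~750 GB of it.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

domain, hits = "danielharper.org", 0
for i, record in enumerate(c4):
    if urlparse(record["url"]).netloc.endswith(domain):
        hits += 1
        print(record["url"])   # show each scraped page from this site
    if i >= 100_000:           # sample only; the full corpus is enormous
        break

print(f"{hits} scraped documents from {domain} in the first 100,000 records")
```

No permission required, of course. That’s rather the point.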

True, there were 233,931 websites that contributed more content than this one did. Nevertheless, I’m sure that Google will compensate me for the use of my copyright-protected material. So what if they used my material without my permission. Soon, a rep from Google will reach out to me, explaining why their scraping of my website is unlike those sleazy fly-by-night operations that steal copyright-protected material from the web to profit themselves without offering the least bit of compensation to the author. Not only will they pay me for the use of my material — they will also issue a written apology, and additional compensation because they forgot to ask permission before stealing, I mean using, my written work.

I heart Big Tech. They’re just so honest and ethical.