Noted with minimal comment

I use AI tools for certain tasks. But there are other tasks I would never use them for. Bloomberg News has an article on how AI-generated recipes are taking over both the web and social media. And it’s not going well, either for the food bloggers or for the people trying to use AI-generated recipes. They interview Eb Gargano, who writes the Easy Peasy Foodie blog.

Noted without comment: “performance of meanness”

From a story by Fiona Murphy titled “How ‘RaptureTok’ amplified an extreme corner of faith” (Religion News Service, 26 Sept. 2025). The story documents how minority religious views are often mocked and belittled on TikTok….

AI and UU sermons

Should Unitarian Universalists use so-called AI (Large Language Models, or LLMs) to write sermons?

Since Unitarian Universalists don’t have a dogma to which we must adhere, there will be many answers to this question. Here are my answers:

I/ Adverse environmental impact of LLMs

Answer: No. The environmental cost of LLMs is too great.

First, we all know about the huge carbon footprint of LLMs. And the more complex the answer required from the LLM, the more carbon is emitted. Deborah Pirchner, in a June 19, 2025, Science News article on the Frontiers website, sums up the impact by quoting a researcher who studied the energy use of LLMs:

“‘The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,’ said … Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences…. ‘We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models.’”

Thus, not only do LLMs have a big carbon footprint, but a query as complex as a sermon could produce up to 50 times more carbon emissions than a simple query to a concise-response model.

Second, the data centers running LLMs use a tremendous amount of fresh water. In their paper “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models,” Pengfei Li (UC Riverside), Dr. Jianyi Yang (U Houston), Dr. Mohammad Atiqul Islam (U Texas Arlington), and Dr. Shaolei Ren (UC Riverside) state:

“The growing carbon footprint of artificial intelligence (AI) has been undergoing public scrutiny. Nonetheless, the equally important water (withdrawal and consumption) footprint of AI has largely remained under the radar. For example, training the GPT-3 language model in Microsoft’s state-of-the-art U.S. data centers can directly evaporate 700,000 liters of clean freshwater, but such information has been kept a secret. More critically, the global AI demand is projected to account for 4.2 – 6.6 billion cubic meters of water withdrawal in 2027, which is more than the total annual water withdrawal of … half of the United Kingdom.”

Third, on 1 May 2025, IEEE Spectrum reported that “AI data centers” cause serious air pollution. The article, titled “We Need to Talk About AI’s Impact on Public Health: Data-center pollution is linked to asthma, heart attacks, and more,” raises several concerns. The authors write:

“The power plants and backup generators needed to keep data centers working generate harmful air pollutants, such as fine particulate matter and nitrogen oxides (NOx). These pollutants take an immediate toll on human health, triggering asthma symptoms, heart attacks, and even cognitive decline.”

In sum: Because my religious commitments call on me to aim for a lower ecological impact, the environmental impact of LLMs alone is enough to stop me from using them to write sermons.

II/ Sermons as human conversations

Answer: No. I feel that sermons should be the result of human interaction.

You see, for me, a sermon should arise from the spiritual and religious conversations that people are having in a specific congregation or community. As a minister, I try to listen hard to what people in the congregation are saying. Some of what I do in a sermon is to reflect back to the congregation what I’m hearing people talk about. At present, an LLM cannot access the conversations that are going on in my congregation — an LLM can’t know that P— made this profound observation about their experience of aging, that A— asked this deep question about the reality of the death of a family member, that C— made a breakthrough in finding a life direction, that J— took this remarkable photograph of a coastal wetland. Some or all of those things affect the direction of a sermon.

Mind you, this is not true for all religions. Deena Prichep, in a 21 July 2025 article on Religion News Service titled “Are AI sermons ethical? Clergy consider where to draw the line,” states that “The goal of a sermon is to tell a story that can break open the hearts of people to a holy message.” In other words, according to Prichep, for some religions the role of the preacher is to cause other people to accept their holy message. Prichep quotes Christian pastor Naomi Sease Carriker as saying: “Why not, why can’t, and why wouldn’t the Holy Spirit work through AI?” I can see how this would be consistent with certain strains of Christianity — and with certain strains of Unitarian Universalism, for that matter, where the important thing is some abstract message that somehow transcends human affairs.

But that’s not my religion. My religion centers on the community I’m a part of. Yes, there is a transcendent truth that we can access — but as a clergyperson, I don’t have some special access to that transcendent truth. Instead, truth is something that we, as a community of inquirers, gradually approach together. Any single individual is fallible, and won’t be able to see the whole truth — that’s why it’s important to understand this as a community conversation.

As a clergyperson, one thing I can do is to add other voices to the conversation, voices that we don’t have in our little local community. So in a sermon that’s trying to help us move towards truth, I might bring in William R. Jones, Imaoka Shinichiro, or Margaret Fuller (to name just a few Unitarian Universalist voices). Or I might quote from one of the sacred scriptures of the world’s wisdom traditions. Now it is true that an LLM might save me a little time in coming up with some other voices; but given the huge environmental costs, it seems silly to save a small amount of time by using an LLM.

III/ Biases built into LLMs

Answer: No, because of hidden biases.

LLMs are algorithms trained on digitized data, which for an LLM is mostly text. But we know that certain kinds of authors are under-represented in that digitized data: women, non-Whites, working-class people, LGBTQ people, etc. The resulting biases can be subtle, but they are nonetheless real.

As a Universalist, I am convinced that all persons are equally worthy. I have plenty of biases of my own, biases that can keep me from seeing that all persons are equally worthy of love. But at least if my sermons are affected by my own biases, my community can challenge me about them. If I use an LLM to write a sermon, a model riddled with biases I’m not fully aware of, it becomes harder for my community to help me rid my sermons of bias.


IV/ Final answer: No

Would I use an LLM to write a sermon?

No. It goes against too many things I stand for.

Should you use an LLM to write your sermons?

I’m not going to answer that question for you. Nor should you ask an LLM to answer that question for you. We all have to learn how to be ourselves, and to live our own lives. Once we start asking others — whether we’re asking LLMs or other authority figures — to answer big questions for us, then we’re well on the road to authoritarianism.

Come to think of it, that’s where we are right now — on the road to authoritarianism. And that’s a road I choose not to follow, thank you very much.

Turning twenty

(I wrote this a few days ago, then forgot to post it. Here it is now….)

On February 22, 2005 — twenty years ago last Saturday — I wrote my first blog post. If you want a summary of this blog’s boring history, try here, here, and here. But I don’t want to look at the past; I want to think about the ongoing role of independent blogs like this one.

Twenty years ago, most blogs were a mix of day-to-day trivia, snarky commentary, and a few more serious long-form posts. All three of these have now migrated to other platforms.

The day-to-day trivia gets posted to social media outlets like Facecrook, TikFok, YouCrude, Instacrap, etc. Much of it consists of images, graphical memes, and videos. There’s no longer much interest in text-based day-to-day trivia.

Snarky commentary has also moved to social media outlets. Again, there’s been a movement away from text-based snark to videos, graphical memes, and images. Snark has also declined in intelligence, creativity, and kindness; I wouldn’t even call it snark any more, I’d call it Rage Porn.

Long-form text-based posts have moved to outlets that cater to that format, such as Substack and Medium. This move is generally a good thing; writers can focus on writing, and they can stop worrying about the technical challenges of publishing online.

In short, most of what appeared on independent blogs twenty years ago has now moved to other platforms. There’s a good reason for these moves: it has become increasingly challenging to stay current with web technology.

Take, for example, WordPress, the blogging platform I use. I started out in 2005 using WordPress 1.5, when it was simple, uncomplicated blogging software. Today, WordPress has morphed into a major CMS capable of running today’s most complex websites. I no longer have the time to stay current with its capabilities. That’s one of the reasons I still use a nine-year-old theme: I don’t have the time to migrate to a new one. Sure, I could hire a WordPress consultant to do it for me; but that gets away from the DIY ethos that I found so appealing about blogging back in 2005.

Whatever platform you choose, web security has become increasingly difficult, as the evil hackers get bolder and more skilled. I’m lucky I have a good web host who helps me keep current with security issues, but it’s getting harder and harder to keep up. I can thoroughly understand why writers would want to move to a platform like Substack or Medium.

Beyond the challenge of staying current with technology, I don’t think there’s much of an audience for independent text-based blogs any more. Most of my early readership long ago migrated to social media platforms. Once you’re hooked into a social media platform, there’s not much reason to visit an independent website. Potential new audiences tend to prefer audio or video podcasts; they don’t want to read text, they want to watch or listen.

The only reason to write an independent text-based blog like this one is because you like to write. That describes me. I enjoy the process of writing, and I write all the time. As long as I’m writing something, I might as well publish it. And even though publishing a blog has gotten more difficult in the past twenty years, it’s still far easier than the printed fanzines I used to publish in the 1980s and 1990s.

So what if the audience for independently hosted blogs is tiny? I’m still having fun, which has always been the point of this blog. I hope you’re still having fun, too — and thank you for continuing to read.

Web, c. 2007

I’ve been spending too much time online for the past two decades. But recently I’ve been reducing my screen time, and — surprise, surprise — I feel better. That’s why I’ve reduced my posting schedule to about once a week.

But back in 2007, I lived way too much of my life online. I spent way too many hours writing daily blog posts, commenting on other people’s blogs, hanging out on Twitter, producing a weekly video, watching other people’s videos (back then, blip.tv was the place for really hip, creative videos), and on and on.

I also created several random websites, just for fun. Recently, I found the HTML code for a whimsical website I created in 2007. What happened was this: Carol had a website called fishisland.net which she used to publicize ecological projects. Last year, that site got taken over by malicious actors. Our web host shut it down for us. I told Carol I’d restore it but never got around to it (I’m limiting my screen time, remember?).

Well, this week I came down with a nasty head cold. I couldn’t sleep last night because my cough kept waking me up. So I wrapped myself up in a sleeping bag and tried to resuscitate the hacked website. And lo and behold, I discovered what I had forgotten — that fishisland.net had originally been my website, that I had hand-coded it in HTML 3.0 with state-of-the-art CSS. The hackers had trashed everything else, but plain old HTML is pretty robust, and I was able to resuscitate the website pretty much as it looked in 2007.

Here’s the resuscitated website. The only real problem I ran into was that the full-size photos had disappeared; I had to take the 200px-wide thumbnails and scale them up in GIMP. The whole website looks primitive today, but back then it looked pretty slick. If you’re into HTML, check out the CSS — can you believe how few lines of code it required?

However, don’t try to look at this website on your phone — it will look like crap. And that’s really the big change in the web since 2007. Back then, no one looked at websites on their phones. Now, more than half of all web views are on phones.

A screenshot showing what the resuscitated website looks like.

Update (1/31/25):

A little bit of thought and research revealed that it is in fact possible to have a static HTML website render reasonably well on different-sized screens (e.g., laptop, smartphone) without JavaScript or a full responsive redesign. In the case of this website, my CSS originally had an ID selector that styled the second nested div (the first div sets the background color; this div sets the content width) as follows:

#wrap {width: 42em; margin: 0 auto;} 

I simply changed that to:

#wrap {width: 95%; max-width: 42em; margin: 0 auto;}

Duh. So obvious. Of course I also had to change the padding and margins on various other elements so the site would look OK on a smartphone, which took some time. I also added the following line to the document head:

<meta name="viewport" content="width=device-width, initial-scale=1">

Now the site works reasonably well on various sized screens. Is it as good as a responsive website? No. And I’m sure I’ll find more problems. But I had fun, and I like that the CSS is compact and manageable.
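
For anyone who wants to try the same trick on their own hand-coded site, here’s a minimal sketch of the whole pattern in one file. The two nested divs mirror the structure described above, but the #outer ID, the colors, and the padding values are my own placeholders, not the actual fishisland.net styles:

<!DOCTYPE html>
<html>
<head>
<title>Minimal fluid-width page</title>
<!-- Without this line, phones render the page zoomed out at full desktop width -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
/* Outer div: just sets the background color (placeholder value) */
#outer {background-color: #7a9cb5;}
/* Inner div: fluid on small screens, capped at 42em on big ones, centered */
#wrap {width: 95%; max-width: 42em; margin: 0 auto;
       background-color: #ffffff; padding: 1em;}
</style>
</head>
<body>
<div id="outer">
<div id="wrap">
<p>Page content goes here.</p>
</div>
</div>
</body>
</html>

The trick is that width: 95% lets the page shrink along with the screen, while max-width: 42em keeps the line length readable on a laptop.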

And now I’ve spent waaaaay too much time staring at screens today.

What they’re doing now…

Recently, I’ve had a number of conversations bemoaning the long slow decline of UU World magazine, the denominational magazine of the Unitarian Universalist Association. Ongoing budget cuts at the UUA have hit many departments, and UU World is no exception. In the past two decades, staff has been reduced, print publication has dropped from six issues a year to two, and online publication has become less frequent.

UU World may have hit its peak as a glossy publication in the 2000s. Chris Walton, one of the sharpest commentators on the UU scene, was on the editorial staff (Chris later became editor of the publication), while the editor-in-chief was Tom Stites, a long-time journalist who had been part of two Pulitzer Prize-winning teams. Chris has since started his own design business. But what happened to Tom Stites?

I happened to run across Tom Stites when I was researching an upcoming series of sermons on challenges to democracy. It turns out that Stites is now the president of the Banyan Project, a nonprofit organization working to create community news outlets based on a coop-ownership model. It’s an ambitious project — they’ve even designed a new software platform for such coop-owned news outlets.

This is a super interesting project. The demise of local newspapers remains one of the biggest challenges to democracy in the United States today, right alongside the echo chambers of social media. If you live in a local news desert, it’s very hard to learn what’s going on in local government, and very hard to make informed decisions as a voter and as a citizen. A coop model may not work for every news desert, but at this point we need as many options as possible — anything that can help to eradicate news deserts is A Good Thing.

Definitely worth taking a look at the Banyan Project website.

Kids, mental health, and social media

Last year, Dr. Vivek Murthy, the U.S. Surgeon General, issued an advisory report on social media and the mental health of kids:

“The current body of evidence indicates that while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents….” — Social Media and Youth Mental Health (U.S. Surgeon General’s Office, 2023)

Since then, Dr. Murthy has called on Congress to place health warning labels on social media sites.

This is not just a public health concern. It’s also a religious concern, or should be. In a recent opinion piece, Rabbi Jeffrey Salkin writes:

“A religious temperament might mean questioning our utter reliance on such technology: creating islands of time, like the Sabbath or Sunday, when we would liberate ourselves from technology and being more self-aware of how we use our tools, which have become our toys…. That [old] rabbinic statement that has become a cliche: ‘Whoever saves one life, it is as if they have saved the entire world.’ If regulating access to social media will save the life of one kid, it will be worth it.”

We now know that social media has serious adverse effects on adolescent and pre-adolescent health. So let’s do something about it.

Some truths about “AI”

In an article on the New Scientist website, science fiction author Martha Wells tells some truths about “AI”:

“The predictive text bots labelled as AIs that we have now aren’t any more sentient than a coffee cup and a good deal less useful for anything other than generating spam. (They also use up an unconscionable amount of our limited energy and water resources, sending us further down the road to climate disaster, but that’s another essay.)”

That’s at least three uncomfortable truths about “AI” (or as Ted Chiang calls it, “applied statistics”):

(1) “AI” is not sentient, i.e., it’s not an intelligence.
(2) The only thing “AI” can really do is generate spam.
(3) In order to produce spam, “AI” takes an enormous amount of energy.

I’m generally enthusiastic about new technology. But not “AI,” which strikes me as a boondoggle from start to finish.

Better web search?

Google’s search results just keep getting worse. These days, do a search on Google and you’re likely to wind up with tons of websites full of AI-written content, websites designed to reach the top of Google’s search results merely so they can sell you something. And that’s after you sort through dozens of ads, which are so cleverly concealed that sometimes you click on them even when you don’t mean to.

I now use DuckDuckGo as my primary search engine. DuckDuckGo is slightly better than Google. DuckDuckGo doesn’t steal my data, while Google rapaciously steals my data so they can monetize me. And DuckDuckGo makes it slightly easier to separate the ads from the actual search results.

But I keep wishing there were a better alternative. And now there is.

Kagi is a fairly new search engine company (founded 2018) that works on a subscription model. So right away, no more ads. And their privacy policy appears to be as good as DuckDuckGo’s. Those two things alone give Kagi a leg up on Google.

A review of Kagi on Stack Diary from last September reveals that Kagi is a modestly good search engine. According to the reviewer, Kagi’s image search works better than Google’s, and Kagi seems slightly less likely to return websites that are pure clickbait. On the other hand, Google crawls the web far more extensively, so Google still has an edge.

But — Kagi allows you to customize your search results. Let’s say you’re searching for reviews of a household appliance. You know that the Good Housekeeping website contains fake reviews and is not worth looking at. With Google, Good Housekeeping is always going to appear in your search results. Using Kagi, you can Block Good Housekeeping so that it never appears in your search results. Or you can Lower it in your search results, so it’s still there but buried further down in the results. Kagi has what its developers call Lenses that allow you to state which websites you trust or don’t trust. The power to customize your search results means you’re not at the mercy of a search algorithm that you can no longer trust.

I’m thinking about subscribing to Kagi. But before I do, I’m trying to find people who are already subscribers, to see what they think. I’m posting this on the off chance that someone who reads this is using Kagi, and is willing to share their experience….

AI lies

Science fiction author Charles Stross took Google’s “Bard” for a test drive. Bard is what popular culture calls “Artificial Intelligence,” but it is more properly called a Large Language Model (LLM); or, to use Ted Chiang’s more general nomenclature, it’s merely Applied Statistics.

In any case, Stross asked Google Bard to provide five facts about Charles Stross. Because he has an unusual name, he was fairly certain there were no other Charles Strosses to confuse Google Bard. The results? “Bard initially offers up reality-adjacent tidbits, but once it runs out of information it has no brakes and no guardrails: it confabulates without warning and confidently asserts utter nonsense.”

Stross concludes his post with a warning: “LLMs don’t answer your questions accurately — rather, they deliver a lump of text in the shape of an answer.” However, a commenter adds nuance to Stross’s warning: “Bard is clearly showing signs of prompt exhaustion, and that should have triggered a ‘this answer is out of confidence’ error and terminated the output. In a well-designed system you would not have seen those answers.” But even admitting that Bard is a poorly-designed LLM, how would the average user know which LLM is well-designed and which is not?

LLMs deliver answer-shaped text — with no way of judging how accurate it is.