Cohasset Patriots

It’s Patriots’ Day tomorrow, and I’m giving a sermon telling the stories of three Cohasset Revolutionary War heroes and heroines, all of whom would have attended services in our 1747 meetinghouse. These three were Persis Tower Lincoln Hall, Briton Nichols, and Noah Nichols.

Due to the time constraints of a sermon, I have to give shortened versions of their life stories tomorrow. I had hoped to post fuller versions of their life stories here, but the research took much longer than I had planned and I’m out of time. Instead, I’ll put a timeline of Noah Nichols’s life after the jump — just to get the information on the web where it’s publicly accessible.

So… just in time to commemorate Patriots’ Day, here’s the life of Captain Noah Nichols….

Continue reading “Cohasset Patriots”

Philip Gulley on why war doesn’t work

I first encountered Philip Gulley a couple of decades ago in the book he co-wrote with James Mulholland titled If God Is Love: Why God Will Save Every Person. In that book, Gulley and Mulholland set forth a Quakerly approach to universalism.

The current U.S. war in Iran has prompted me to seek out other pacifists. This is not an easy time to be a pacifist. While I’m hearing quite a few people who are opposed to the war, I’m not hearing people who are opposed to all war — only to this war. Or maybe they’re just opposed to the current administration.

So I was pleased to stumble across a blog post Philip Gulley wrote back in March in which he makes the case that all war is wrong:

And he adds a pacifist statement that is both Quakerly and Universalist:

If you’re a Universalist pacifist like me, you might find Gulley’s post worth reading in its entirety.

Snow and moss

I’m on study leave this week. A friend of Carol’s offered to let us stay in her house in Maine, which is good for my studying, since there are fewer things here to distract me. And when I need a break from studying (to stretch my legs and rest my brain), I can go outside and look at the amazing diversity of mosses and liverworts around here. Mosses and liverworts can be surprisingly beautiful, as in the photo below.

This is a view through the microscope of the peristome on a capsule of a Dicranum species — the peristome is a structure that holds the spores in the capsule until they are ready to be released. I find the colors and shapes quite beautiful.

As an added bonus, it snowed for several hours. Although it was cold enough to snow, it was too warm for any snow to accumulate; we had the beauty of snow without the mess. And it was quite something to watch large flakes of snow fall on tiny moss plants.

Snow falling on a rocky hillside covered with moss.

3 AI dangers you might consider

Here are three emerging AI dangers, with brief comments on their implications for religious professionals and congregations. Since a large percentage of the population is already using generative AI for various purposes, let’s make sure we’re using those services wisely and well.

AI danger number 1

Your chatbot logs, and the queries you make to chatbots, may be accessed by lawyers during lawsuits. See, for example, how one law firm used such files in a defamation lawsuit against a YouTube influencer. In that lawsuit, a woman is suing the influencer over comments he allegedly made about her with intent to defame; her lawyers claim that the influencer’s ChatGPT logs reveal his malicious intent.

As usual with anything to do with Big Data (including the web, the broader internet, text messaging, etc.) — you have to assume that anything you put into electronic format can and will be made public in ways that you might not like.

Nothing new here, but it’s a good reminder that congregations and religious professionals should refrain from placing any confidential information into chatbots. In addition, congregations and religious professionals can help educate people about this very real danger, including teens (e.g., in OWL programs), people going through divorces, etc.

AI danger number 2

The title of a peer-reviewed study says exactly what AI danger number 2 is: “Sycophantic AI decreases prosocial intentions and promotes dependence.” To quote the editor’s summary in full:

An obvious implication is that there are specific and measurable dangers in using AI as an inexpensive therapist. Unfortunately, lots of people have understandable reasons for turning to chatbots for mental health support: mental health professionals are expensive and may not be covered by insurance; in many places there is a shortage of mental health professionals; for many people there remains a significant social stigma attached to seeing mental health professionals; etc.

Congregations and religious professionals should be aware that some people are relying on chatbots for mental health support. While we are not qualified to provide mental health support ourselves, this might be an area where we could help create low- or no-cost mental health services and/or steer vulnerable people to existing low- or no-cost services.

AI danger number 3

The U.S. Copyright Office has denied copyright protection to certain AI-generated works: “In general, the office will not find human authorship where an AI program generates works in response to user prompts….” See the U.S. Congress webpage on “Generative Artificial Intelligence and Copyright Law.” There remain questions about how much human influence is required before a work may be protected by copyright.

I’d expect this to be mostly a concern for religious professionals. If we use generative AI to come up with sermons, music, curriculum materials, etc., we should assume that material is not protected by copyright and can be used freely by anyone. In addition, it’s wise to be aware that, generally speaking, your prompts (and maybe even the output generated by your prompts) can be used by AI companies for many purposes, so, for example, assume that you are giving away the rights to any text you enter into a chatbot.


There are legitimate uses for generative AI (think: people with dyslexia who use it to clean up writing). However, it appears that many current generative AI services are not well designed, nor do they make clear the potential dangers of using them. I’m not saying “don’t use generative AI ever,” but I’m also not saying “AI is the solution to all our problems and we should use it for everything.” Using generative AI is analogous to using a chain saw: it’s a great tool for specific purposes, but used wrongly it can cut your leg off. So read the (non-existent) warning label and wear safety gear.