Ethics and “AI”

On the Lawyers, Guns and Money blog, Abigail Nussbaum writes:

“The companies that make AI — which is, to establish our terms right at the outset, large language models that generate text or images in response to natural language queries — have a problem. Their product is dubiously legal, prohibitively expensive (which is to say, has the kind of power and water requirements that are currently being treated as externalities and passed along to the general populace, but which in a civilized society would lead to these companies’ CEOs being dragged out into the street by an angry mob), and it objectively does not work. All of these problems are essentially intractable.”

What interests me here is how she focuses on the main ethical problem with “AI” — its huge environmental impact. Yes, it is evil that the “AI” companies steal people’s writing and steal people’s artwork. Yes, it is evil that the plutocrats want to have “AI” replace real humans (though as Nussbaum points out, if you factor in the real environmental costs, human labor is cheaper than “AI”). Yes, it is evil that “AI” is a product that doesn’t provide consistently good results. Yes, it is evil that “AI” is another way that the plutocrats can steal your personal data.

But here we are in the middle of an ecological crisis, and “AI” uses huge amounts of energy, and huge amounts of fresh water for cooling. “AI” is an environmental disaster. That is the real ethical problem.

Some truths about “AI”

In an article on the New Scientist website, science fiction author Martha Wells tells some truths about “AI”:

“The predictive text bots labelled as AIs that we have now aren’t any more sentient than a coffee cup and a good deal less useful for anything other than generating spam. (They also use up an unconscionable amount of our limited energy and water resources, sending us further down the road to climate disaster, but that’s another essay.)”

That’s at least three uncomfortable truths about “AI” (or as Ted Chiang calls it, “applied statistics”):

(1) “AI” is not sentient, i.e., it’s not an intelligence.
(2) The only thing “AI” can really do is generate spam.
(3) In order to produce spam, “AI” takes an enormous amount of energy.

I’m generally enthusiastic about new technology. But not “AI,” which strikes me as a boondoggle from start to finish.

AI lies

Science fiction author Charles Stross took Google’s “Bard” for a test drive. Bard is what popular culture calls “Artificial Intelligence,” though it is more properly called a Large Language Model (LLM); or, to use Ted Chiang’s more general nomenclature, it’s merely Applied Statistics.

In any case, Stross asked Google Bard to provide five facts about Charles Stross. Because he has an unusual name, he was fairly certain there were no other Charles Strosses to confuse Google Bard. The results? “Bard initially offers up reality-adjacent tidbits, but once it runs out of information it has no brakes and no guardrails: it confabulates without warning and confidently asserts utter nonsense.”

Stross concludes his post with a warning: “LLMs don’t answer your questions accurately — rather, they deliver a lump of text in the shape of an answer.” However, a commenter adds nuance to Stross’s warning: “Bard is clearly showing signs of prompt exhaustion, and that should have triggered a ‘this answer is out of confidence’ error and terminated the output. In a well-designed system you would not have seen those answers.” But even admitting that Bard is a poorly-designed LLM, how would the average user know which LLM is well-designed and which is not?
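
To make the commenter’s point concrete, here is a toy sketch in Python of what such a guardrail might look like. Everything in it (the function names, the stub “model,” the confidence threshold) is hypothetical; it illustrates the idea of refusing to answer once confidence runs out, not how Bard or any real LLM service actually works.

    # Hypothetical sketch: gate a generated answer behind a confidence check,
    # so the system stops instead of confabulating. Names and numbers are invented.
    CONFIDENCE_THRESHOLD = 0.6  # arbitrary cutoff, for illustration only

    def answer_with_guardrail(generate, estimate_confidence, prompt):
        """Return generated text only if the estimated confidence is high enough."""
        answer = generate(prompt)
        if estimate_confidence(prompt, answer) < CONFIDENCE_THRESHOLD:
            return "[Answer is out of confidence; no output.]"
        return answer

    # Stub "model": it knows two real facts, then starts making things up.
    known_facts = ["Charles Stross is a science fiction author.",
                   "He writes the Laundry Files series."]

    def toy_generate(prompt):
        return known_facts.pop(0) if known_facts else "Charles Stross invented the pocket calculator."

    def toy_confidence(prompt, answer):
        # Pretend the system can tell real information from confabulation.
        return 0.9 if ("science fiction" in answer or "Laundry" in answer) else 0.1

    for _ in range(5):
        print(answer_with_guardrail(toy_generate, toy_confidence, "Tell me a fact about Charles Stross"))

Run it and the first two “answers” are real facts; after that, the stub keeps generating nonsense but the guardrail declines to pass it along. The commenter’s criticism is that Bard apparently has no such check.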

LLMs deliver answer-shaped text — with no way of judging how accurate it is.

The non-neutrality of “AI”

Whatever you call it — “artificial intelligence,” “machine learning,” or as author Ted Chiang has suggested, “applied statistics” — it’s in the news right now. Whatever you call it, it does not present a neutral point of view. Whoever designs the software necessarily injects a bias into their AI project.

This has become clearer with the emergence of a conservative Christian chatbot, designed to give appropriately conservative Christian answers to religious and moral questions. Dubbed Biblemate.io by the software engineer who constructed it, it will give you guidance on divorce (don’t do it), LGBTQ+ sex (don’t do it), or whether to speak in tongues (it depends). N.B.: Progressive Christians will not find this to be a useful tool, but many conservative and evangelical Christians will.

I wouldn’t be surprised to learn that Muslim software engineers are working on a Muslim chatbot, and Jewish software engineers are working on a Jewish chatbot. Then as long as we’re thinking about the inherent bias in chatbots, we might start thinking about how racism, sexism, ableism, ageism, etc., affect so-called AI. We might even start thinking about how the very structure of chatbots, and AI more generally, might replicate (say) patriarchy. Or whatever.

The creators of the big chatbots, like ChatGPT, are trying to pass them off as neutral. No, they’re not neutral. That’s why evangelical Christians feel compelled to build their own chatbots.

Mind you, this is not another woe-is-me essay saying that chatbots, “AI,” and other machine learning tools are going to bring about the end of the world. This is merely a reminder that all such tools are ultimately created by humans. And anything created by humans — including machines and software — will have the biases and weaknesses of its human creators.

With that in mind, here are some questions to consider: Whom would you trust to build the chatbot you use? Would you trust a chatbot built by an evangelical Christian? Would you trust a chatbot built by the Chinese Communist Party? How about the U.S. government? Would you trust a chatbot built by a 38-year-old college dropout and entrepreneur who helped start a cryptocurrency scheme that has been criticized for exploiting impoverished people? (That last describes the man behind ChatGPT.) Would you trust a “free” chatbot built by any Big Tech company that’s going to exploit your user data?

My point is pretty straightforward. It’s fine for us to use chatbots and other “AI” tools. But as with any new media, we need to maintain a pretty high level of skepticism about them — we need to use them, and not let them use us.

Let us name it … ASS

People talk about “artificial intelligence.” They get corrected by people who say, “It’s not intelligence, it’s machine learning.” But actually machines don’t learn either. All this false terminology isn’t serving us well. It obscures the fact that the humans who design the machines are the intelligences at work here, and by calling the machines “AI” they get to dodge any responsibility for what they produce.

In a recent interview, science fiction author Ted Chiang came up with a good name for what’s going on:

“ ‘There was an exchange on Twitter a while back where someone said, “What is artificial intelligence?” And someone else said, “A poor choice of words in 1954”,’ [Chiang] says. ‘And, you know, they’re right. I think that if we [science fiction authors] had chosen a different phrase for it, back in the ’50s, we might have avoided a lot of the confusion that we’re having now.’ So if he had to invent a term, what would it be? His answer is instant: applied statistics.” [emphasis mine]

Applied statistics is a much better term to help us understand what is really going on here. When a computer running some chatbot application comes up with text that seems coherent, the computer is not being intelligent — rather, the programmers assembled a huge dataset, they apply certain algorithms to that dataset, and those algorithms generate text that sounds vaguely meaningful. The only intelligence (or lack thereof) involved lies in the humans who programmed the computer.
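
To see why “applied statistics” is the apt term, here is a deliberately tiny sketch of statistical text generation, written in Python: a bigram Markov model that strings words together based purely on how often one word followed another in its training text. This toy of my own devising is vastly simpler than the neural networks behind ChatGPT, but the underlying move is the same: counted patterns in a dataset, with no understanding anywhere.

    import random
    from collections import defaultdict

    # Toy "applied statistics": count which word follows which in a training text,
    # then generate new text by sampling from those counts. No intelligence involved.
    corpus = "the truth is out there and the truth will set you free".split()

    transitions = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word].append(next_word)

    def generate(start_word, max_words=8):
        """Emit a word sequence by repeatedly sampling a statistically likely next word."""
        word, output = start_word, [start_word]
        for _ in range(max_words):
            followers = transitions.get(word)
            if not followers:
                break  # no statistics for this word, so stop rather than invent
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # e.g. "the truth will set you free"

The output can look superficially coherent, but only because the source text did; scale the counting up to billions of words and billions of parameters and you get text that looks a great deal more like a chatbot’s.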

Which brings me to a recent news article from Religion News Service, written by Kirsten Grieshaber: “Can a chatbot preach a good sermon?” Jonas Simmerlein, identified in the article as a Christian theologian and philosopher at the University of Vienna, decided to set up a Christian worship service using ChatGPT. Anna Puzio, who studies the ethics of technology at the University of Twente in the Netherlands, attended this worship service. She correctly identified how this was an instance of applied statistics when she said: “We don’t have only one Christian opinion, and that’s what AI [sic] has to represent as well.” In other words, applied statistics can act to average out meaningful and interesting differences of opinion. Puzio continued, “We have to be careful that it’s not misused for such purposes as to spread only one opinion.”

That’s exactly what Simmerlein was doing here: averaging out differences to create a single bland consensus. I can understand how a bland consensus might feel very attractive in this era of deep social divisions. But as someone who, like Simmerlein, is trained in philosophy and theology, I’ll argue that we do not get closer to truth by averaging out interesting differences into bland conformity; we get closer to truth by seriously engaging with people of differing opinions. This is because all humans (and all human constructions) are finite, and therefore fallible. No single human, and no human construction, will ever be able to reach absolute truth.

Finally, to close this brief rant, I’m going to give you an appropriate acronym for the phrase “applied statistics.” Not “AS,” that’s too much like “AI.” No, the best acronym for “Applied StatisticS” is … ASS.

Not only is it a memorable acronym, it serves as a reminder of what you are if you believe too much in the truth value of applied statistics.

Scraped

The Washington Post investigated which websites got scraped to build up the database for Google’s chatbot. The Post has an online tool where you can check to see if your website was one of the ones that got scraped. And this online tool shows that danielharper.org was one of the websites that got scraped.

Screenshot showing the Washington Post online tool.

True, there were 233,931 websites that contributed more content than this one did. Nevertheless, I’m sure that Google will compensate me for the use of my copyright-protected material. So what if they used my material without my permission. Soon, a rep from Google will reach out to me, explaining why their scraping of my website is unlike those sleazy fly-by-night operations that steal copyright-protected material from the web to profit themselves without offering the least bit of compensation to the author. Not only will they pay me for the use of my material — they will also issue a written apology, and additional compensation because they forgot to ask permission before stealing, I mean using, my written work.

I heart Big Tech. They’re just so honest and ethical.

“AI” generated writing

Neil Clarke, editor of a respected science fiction magazine, reports on his blog that spammy short fiction submissions are way up for his publication. He says that spammy submissions first started increasing during the pandemic, and “were almost entirely cases of plagiarism, first by replacing the author’s name and then later by use of programs designed to ‘make it your own.’”

Helpfully, he gives an example of what you get from one of the programs that “make it your own.” First he gives a paragraph from the spam submission, which sounds a little…odd. Then he provides the paragraph from the original short story on which the spam submission was based. Still, Clarke says, “These cases were often easy to spot and infrequent enough that they were only a minor nuisance.”

Then in January and February, spammy submissions skyrocketed. Clarke says: “Towards the end of 2022, there was another spike in plagiarism and then ‘AI’ chatbots started gaining some attention, putting a new tool in their arsenal…. It quickly got out of hand.” It’s gotten so bad that now 38% of his short fiction submissions are spammy, either “AI” generated,* or generated with one of those programs to “make it your own.”

38%. Wow.

Clarke concludes: “It’s not just going to go away on its own and I don’t have a solution. … If [editors] can’t find a way to address this situation, things will begin to break….”

This trend is sure to come to a sermon near you. As commenters on the post point out, writers are already using chatbots to deal with the “blank page struggle,” just trying to get words on the paper. (To which Neil Clarke responds that his magazine has a policy that writers should not use AI at any stage in the process of writing a story for submission.) No doubt, some minister or lay preacher who is under stress and time pressure will do (or has done) the same thing — used ChatGPT or some other bot to generate an initial idea, then cleaned it up and made it their own.

And then “AI” generated writing tools will improve, so that soon some preachers will use “AI” generated sermons. For UU ministers, it may take longer. There are so few of us, and it may take a while for the “AI” tools to catch on to Unitarian Universalism. But I fully expect to hear within the next decade that some UU minister has gotten fired for passing off an “AI” generated sermon as their own.

My opinion? If you’re stressed out or desperate and don’t have time to write a fresh sermon, here’s what you do. You re-use an old sermon, and tell the congregation that you’re doing it, and why. I’ve done this once or twice, ministers I have high regard for have done this, and it’s OK; people understand when you’re stressed and desperate. Or, if you don’t have a big reservoir of old sermons that you wrote, find someone else’s sermon online, get their permission to use it, and again, tell the congregation that you’re doing it, and why. Over the years, I’ve had a few lay preachers ask to use one of my sermons (the same is true of every minister I know who puts their sermons online), and it’s OK; people understand what it’s like when you’re stressed and desperate and just don’t have time to finish writing your own sermon.

But using “AI” to write your sermons? Nope. No way. Using “AI” at any stage of writing a sermon is not OK. Not even to overcome the “blank page struggle.” Not even if you acknowledge that you’ve done it. It’s spiritually dishonest, and it disrespects the congregation.

* Note: I’m putting the abbreviation “AI” in quotes because “artificial intelligence” is considered by many to be a misnomer — “machine learning” is a more accurate term.

The singularity as atheist religion

In a talk titled “Dude, You Broke the Future,” science fiction author and atheist Charlie Stross takes on Ray Kurzweil and other advocates of the “singularity,” the moment when all our problems will be solved with the emergence of transhuman artificial intelligence:

“I think transhumanism is a warmed-over Christian heresy. While its adherents tend to be vehement atheists, they can’t quite escape from the history that gave rise to our current western civilization. … If it walks like a duck and quacks like a duck, it’s probably a duck. And if it looks like a religion it’s probably a religion. I don’t see much evidence for human-like, self-directed artificial intelligences coming along any time now, and a fair bit of evidence that nobody except some freaks in university cognitive science departments even want it. What we’re getting, instead, is self-optimizing tools that defy human comprehension but are not, in fact, any more like our kind of intelligence than a Boeing 737 is like a seagull. So I’m going to wash my hands of the singularity as an explanatory model without further ado — I’m one of those vehement atheists too — and try and come up with a better model for what’s happening to us. …”

I find it delightful to see a self-proclaimed “vehement atheist” calling out other atheists for doing religion. This is especially admirable, since those other atheists would doubtless insist that they are not doing religion at all; they would claim that they are doing science. Not only that, those other atheists are doing bad religion — transhumanism is as bad as the Prosperity Gospel, insofar as both types of religion have no redeeming social worth, do not engage in worthwhile cultural production, assert that the vast majority of humanity will not be “saved,” spread fear, and are stupid and barely believable.

This is just a parenthetical remark in a much longer talk — and the rest of the talk is definitely worth reading, particularly for Charlie Stross’s take on corporations as AIs that are accelerating global climate change.