The non-neutrality of “AI”

Whatever you call it — “artificial intelligence,” “machine learning,” or as author Ted Chiang has suggested, “applied statistics” — it’s in the news right now. Whatever you call it, it does not present a neutral point of view. Whoever designs the software necessarily injects a bias into their AI project.

This has become clearer with the emergence of a conservative Christian chatbot, designed to give appropriately conservative Christian answers to religious and moral questions. Dubbed Biblemate.io by the software engineer who constructed it, it will give you guidance on divorce (don’t do it), LGBTQ+ sex (don’t do it), or whether to speak in tongues (it depends). N.B.: Progressive Christians will not find this to be a useful tool, but many conservative and evangelical Christians will.
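How does a designer’s viewpoint get baked in? I don’t know how Biblemate.io is actually built, but a common pattern is to wrap a general-purpose model in a fixed “system prompt” that steers every answer. Here’s a minimal, hypothetical sketch in Python — the OpenAI client library is real, but the model name and prompt wording are my own illustrative assumptions, not anything Biblemate.io has disclosed:

```python
# Hypothetical sketch: baking a designer's viewpoint into a chatbot.
# The prompt wording and model name are illustrative assumptions;
# this is NOT a description of how Biblemate.io actually works.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY in the environment

# One string of configuration is enough to give every answer a slant.
STEERING_PROMPT = (
    "You are a pastoral assistant. Answer every question from a "
    "conservative evangelical Christian point of view."
)

def ask(question: str) -> str:
    """Ask the underlying model, filtered through the baked-in viewpoint."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system", "content": STEERING_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Is divorce ever acceptable?"))
```

Swap that one STEERING_PROMPT string for a progressive Christian, Muslim, or Jewish one, and you get a different chatbot. The slant isn’t an accident that crept in; it’s a design decision, made in a line or two of configuration.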

I wouldn’t be surprised to learn that Muslim software engineers are working on a Muslim chatbot, and Jewish software engineers are working on a Jewish chatbot. And as long as we’re thinking about the inherent bias in chatbots, we might start thinking about how racism, sexism, ableism, ageism, etc., affect so-called AI. We might even start thinking about how the very structure of chatbots, and AI more generally, might replicate (say) patriarchy. Or whatever.

The creators of the big chatbots, like ChatGPT, are trying to pass them off as neutral. No, they’re not neutral. That’s why evangelical Christians feel compelled to build their own chatbots.

Mind you, this is not another woe-is-me essay saying that chatbots, “AI,” and other machine learning tools are going to bring about the end of the world. This is merely a reminder that all such tools are ultimately created by humans. And anything created by humans, including machines and software, will have the biases and weaknesses of its human creators.

With that in mind, here are some questions to consider: Whom would you trust to build the chatbot you use? Would you trust that chatbot built by an evangelical Christian? Would you trust a chatbot built by the Chinese Communist Party? How about the U.S. government? Would you trust a chatbot built by a 38-year-old college dropout and entrepreneur who helped start a cryptocurrency scheme that has been criticized for exploiting impoverished people? (That last describes the man behind ChatGPT.) Would you trust a “free” chatbot built by any Big Tech company that’s going to exploit your user data?

My point is pretty straightforward. It’s fine for us to use chatbots and other “AI” tools. But as with any new medium, we need to maintain a high level of skepticism about them: we need to use them, and not let them use us.