The Ethics of AI and Education

Sermon copyright (c) 2026 Dan Harper. As delivered to First Parish in Cohasset. The text below has not been proofread. The sermon as delivered contained substantial improvisation.

Readings

The first reading was by John Dewey, from his book Democracy and Education:

The second reading was from a June 4, 2023, article in the Financial Times titled “Sci-fi writer Ted Chiang: ‘The machines we have now are not conscious’”:

Sermon

Let’s talk about artificial intelligence and education. First of all, let’s think about what AI actually is. One problem here is that the phrase “artificial intelligence” is so imprecise. I’m a science fiction fan, and to my ears it sounds like a phrase left over from 1950s science fiction. I agree with science fiction author Ted Chiang that the phrase “artificial intelligence” is inaccurate; I’d even go so far as to say it represents sloppy thinking. I much prefer Chiang’s term “applied statistics.”

Nevertheless, we’re stuck with the phrase “artificial intelligence.” But by remembering that so-called AI is actually applied statistics, it becomes obvious that AI is a set of tools — just as hammers and saws belong to a set of tools, or armed drones and nuclear weapons belong to another set of tools. As with any set of tools, we as a society can choose which tools we use and how we use them.

With all that in mind, I’d like to present three case studies, as a way to consider the ethics of AI in education. As you listen to these case studies, try not to go directly to your gut feelings. If you’re an AI booster, don’t immediately say AI is good; if you’re an AI gloom-and-doomer, don’t immediately say AI is bad. Instead of immediately rushing to judgement, let’s see if we can think through both the positive and the negative implications.

Here’s the first case study:

A young man named Russell is extremely bright and creative, and he has undiagnosed dyslexia. As a boy, Russell dabbled in the creative arts and experimented with various technological innovations; he also struggled with school, having great difficulty reading and writing. Now, at age 19, he has been admitted to a prestigious college, with plans to major in social sciences. However, he has to drop out after a semester because he is unable to complete his written work in time, and as a result fails a couple of his classes.

Now here’s the question: Should Russell’s prestigious college allow him to use generative AI to assist him in completing his written assignments?

On the one hand, here is someone who is obviously bright and creative, and potentially has a lot to offer to the world. By allowing him to use generative AI to help him edit his written assignments, the college could offer support for his learning disability, which would help him to become a productive member of society. On the face of it, this seems like a no-brainer — let Russell use AI to complete his writing assignments, as long as he makes it clear to his professors that he is doing so.

On the other hand, remember that Russell has not been diagnosed with dyslexia; we know that he has dyslexia because we have the benefit of hindsight. His college does not know that he has dyslexia; for all they know, he could be lazy, or not bright enough for their prestigious college, or have some other fault. Plus, if they let him use generative AI, they have no way of determining if he’s only using it to help him edit, as opposed to letting AI completely write his papers.

In addition, both Russell and his college should consider the privacy policy of whoever is providing the AI. Will that company store information about Russell, and what will they do with that information?(1) Both Russell and the college should also consider the inevitable biases that are present in generative AI, and consider that when generative AI injects third party biases into Russell’s work, it is likely that he will unthinkingly adopt those biases as his own.(2) How will Russell and his college address these biases? And of course we have to consider the environmental impact of the data centers used by AI companies due to their large power consumption.(3) As a society, how do we balance support for Russell’s dyslexia and the environmental impact of generative AI?

Some of these ethical concerns might be addressed by having the college host an open-source AI model on its own servers; the college would then have some awareness of the biases in that AI model; they would have the possibility of finding renewable energy sources for that AI model; and so on. We might also suggest to that college that when a student seems to have difficulty completing assignments, they might want to have that student assessed for learning differences or learning disabilities. The point of fictional case studies like this one is to help us begin to consider what questions we will want to ask in the real world.

This case study raises the larger issue of how we educate persons who have learning disabilities. Generative AI has obvious potential for helping students with certain learning disabilities, and given the tight budgets of most school districts, AI might be a crucial tool for special education programs. Which are the AI tools which might best help students with dyslexia? What are the downsides to those tools — might AI exploit them, or inject hidden biases into their thinking? These are the questions we should be addressing here.

One final comment: the young man in this case study is based on a real person — Russell Varian, who was extremely bright and seriously dyslexic; who struggled to graduate from Stanford University (in fact, Stanford refused to admit him to their doctorate program); and who went on to found Varian Associates, one of the very first Silicon Valley hi-tech companies.

Let’s try another case study.

A high school sophomore named Dolores has to write a four-page paper for her English composition class. Dolores wants to go to college, and like most college-bound high school sophomores, she is already building her college resume. She is active in several extracurricular activities including Model United Nations, her school’s gay-straight alliance, and the school newspaper. She’s active in sports, and having researched in which of her preferred sports she is most likely to receive a sports scholarship — her family cannot pay for her college education without some kind of scholarship — she has chosen rowing. In addition, she is active in her local congregation, where she volunteers regularly.

As you can imagine, she has almost no free time, and she often has to cut short her sleep time in order to have enough time to complete her school work to the level of excellence for which she strives. She finishes a complete draft of her four-page paper, and only needs to do some final editing, when her mother has a relapse of a recurring mental health problem, leaving Dolores to take on some household duties, including supervising her sixth-grade brother. She no longer has the time to do the final editing she had planned. The paper is due tomorrow. Is it ethical for her to use generative AI to help her complete the final edit of her four-page paper?

On the one hand, we might say absolutely not. Her task as a high school sophomore in an English composition class is to learn the complete process of writing, from start to finish, including the final editing. So for her own sake, to maximize her personal learning, she should forgo using generative AI to edit her paper, even though it might result in a lower grade.

On the other hand, we might say she should be allowed to use generative AI for this purpose, as long as she discloses the fact to her teacher. For one thing, more and more workplaces are integrating AI into employee workflows, and by learning how to use generative AI responsibly, under the guidance of a teacher, she will be better equipped to handle the challenges of using AI when she gets into the workplace. It’s also important to remember that Dolores is typical of many college-bound high school students, who feel compelled to fill their schedules with activities that will build their college resumes; she does not have the luxury of earlier generations who could take all the time they needed to learn how to write and edit their own work. And finally, given her mother’s sudden health crisis, we could ask whether she should have to receive a lower grade because she prioritizes her family responsibilities over school work.

This case study is based on a composite of teens I have known. As a composite, the details are going to be vague. Nevertheless, there should be enough here for us to explore some of the questions we have about using generative AI in school.

First of all, the reality is that college-bound students face lots of external pressures, and many of them are already using generative AI to complete assignments. Instead of trying to close the barn door after the horse has escaped, perhaps we should shift our expectations of our educational system so that we teach teens how to use AI responsibly.(4) If we start thinking along these lines, that raises the question of how generative AI might be used responsibly in the classroom. In one obvious example, AI could be used for doing research in much the same way that Wikipedia is already being used by students; in both cases, we can teach students that the results they get may not be correct; that they have to look for hidden biases; that they need to track down original sources of information; and so on.

We can also consider irresponsible uses of AI in education. Cheating is one obvious irresponsible use of AI. But remember, students are already using technology to cheat. For example, there are online services that will write your paper or complete your homework for a modest fee — and since generative AI is basically free, perhaps AI is good insofar as it helps make cheating accessible to poor students as well as to more wealthy students.

Beyond cheating, what are other irresponsible uses of AI in education? And we might ask: Why do we need AI to assist students? AI is often promoted as a way to increase efficiency, so we might ask why it is important for education to become more efficient. Is it because greater efficiency improves learning, or is it because greater efficiency makes a student more likely to be accepted by an elite college, or is there some other reason? Or perhaps greater efficiency means a cost savings to the school district, allowing costly human employees to be replaced with machine tutors — and this may sound harsh in the relatively wealthy school districts of our area, but there are some school districts where AI tutors might be hugely helpful.

The reality is that we rely on a factory model of education. Educational historians have shown that in the mid-twentieth century our schools were quite literally designed to be factories. We haven’t ever broken away from that model. The factory model of education is designed to deal with masses of children with ever greater efficiency, so we can lower costs and increase productivity. By the inherent logic of our educational system, of course we should be using AI to help teach kids. Yet this might prompt us to ask: In our educational system, are children always considered to be ends in themselves rather than means? And if we deploy AI in our educational system, will that help us to treat children as ends in themselves rather than means? Educational philosopher John Dewey, the author of our first reading, was asking similar questions a hundred years ago as the factory model of schooling was emerging; we have to constantly ask these kinds of ethical questions.

Having said that, we might then ask whether AI tools are being developed that are designed specifically for supporting teaching and learning. We’ve already seen how other technological innovations can be adapted to work specifically for education — for example, course management software has helped make teachers more efficient by relieving them of some of the administrative drudgery of teaching. Both students and teachers are already using AI tools. How might things be different if the people designing generative AI worked with teachers and students so that AI tools would actually meet the needs of students and teachers? Who would pay for developing these hypothetical education-specific AI tools (the nonprofit education company Khan Academy offers one such model)?

Let’s do one more quick case study.

I recently learned about a company that’s developing an AI-based tool to create curriculum supplements with location-specific content for middle school and high school biology teachers to teach about local ecosystems. This company plans to draw on open source biodiversity data to generate lesson plans about specific organisms that have been found in the immediate area of any given school. This would help the teacher customize a generic curriculum and allow them to carry out a biology project rooted in local biodiversity.

On the one hand, we could say this is a good use of AI because it allows a teacher who ordinarily wouldn’t have the time to customize a biology curriculum to their local ecosystem. Furthermore, it allows teachers to remain in charge of their teaching; it makes the teacher more efficient, while leaving the final use of the curriculum supplement up to their professional judgement.

On the other hand, in this specific case, it is not clear to me that biology teachers need or want this kind of curriculum supplement. In other words, do we start with the needs of the teachers and students and educational systems, or do we instead simply create AI-based tools without asking if they provide any real educational value? Many discussions of AI in education do not discuss what teachers and local schools actually need, let alone what students actually need.

Additionally, many discussions of AI in education show little awareness of curriculum design principles, educational philosophies, the psychology of learners, pedagogical methods, and so on. Yet there are really good models of how an independent company can develop technological resources based on sound educational principles. One example is Khan Academy, an educational technology nonprofit currently developing AI tools. The staff of Khan Academy includes both technology experts and professional educators to guide the development of educational technology, and they are already developing AI tools for students based on sound educational principles. This might prompt us to ask whether the nonprofit sector is the best place to develop educational technology; for-profit companies do not have the same luxury of hiring a large staff of educators.

That’s the end of the case studies. Let me see if I can wrap this up.

Earlier this week, I was talking with Kate Sullivan, our director of education, about AI and education; as you may know, Kate has a doctorate in developmental psychology, so I wanted to hear her opinions. She said, “It all goes to the question, ‘What is humanity?’” As she so often does, Kate got right to the heart of the issue. What is it to be human? This question lies at the root of all education. Many of our school systems have to focus on education as a way to learn how to earn a living: education is a means to an end, and the end is getting a good job; children are a means to an end, and the end is contributing to the economy. Obviously we have to eat to live, and so we have to earn a living. But as a wise rabbi once said, human beings do not live on bread alone; we also need that which is sacred or divine, in order to be fully human. The best teachers bring out that sacred spark in their students. And so we ask ourselves: Can AI be used for something more than helping children become productive economic units? The current hype around AI emphasizes cost savings, better test scores, and so on. What if the hype emphasized the sacredness of every child, of every human being? How would that change the conversation?

So while I don’t have any final answers about the ethics of AI in education, I can give you some questions to ask. When we are trying to decide together about how we will use AI in education, we are going to want to ask: Is AI being used merely to help children become means to someone else’s economic ends? Or is it being used in service of the sacredness of every human being?