Yesterday I listened to The Daily’s episode on the radical impact of AI on higher ed. A history professor compared himself to a monkey hugging a fake mother in Harry Harlow’s famous study of “contact comfort.” If all the effort put into course planning, creating discussion questions, and giving careful feedback on student writing only culminates in a conversation between the professor and ChatGPT, the intellectual contract between teacher and student has been betrayed. As one interviewer asked Harlow, is the monkey’s attachment to a cloth doll really love?
Harlow designed a cruel experiment, and it hurts my heart to watch it. But I think Harlow’s conclusions apply to faculty who have come to think of their relationships with students as the primary justification for tolerating low pay, a high service load, and other forms of exploitation — in essence, a form of contact comfort. The human bond that teachers crave in education has slowly been eroded by transactional goals like earning a credential and getting a job. Students operating with that mindset have always been more likely to plagiarize, and one could say that ChatGPT is just a more sophisticated version of Wikipedia, SparkNotes, or papers-for-hire. I always felt that it was possible to frame assignments and discussion questions in such a way that plagiarism required more work than original thinking, but that is no longer true. Professors are going to have to rethink curriculum from the ground up if they want to protect themselves from becoming Harlow’s poor monkeys, seeking satisfying intellectual exchanges from an inanimate app. In fact, I think institutions have an obligation to attend to the emotional toll that AI is taking on professors. Otherwise, aren’t colleges and universities running a much larger version of Harlow’s experiment on their own faculty?
You could argue that the solution would be for professors to also retreat behind the convenience and impersonality of AI by mining it for their own planning. Out of curiosity, I asked ChatGPT to write five discussion questions on Tuesday’s essay about neurotheology. The first results weren’t bad:
How does Willa Cather's novel Death Comes for the Archbishop explore the relationship between the body and spirituality?
In what ways does neurotheology contribute to our understanding of the spiritual experiences depicted in Death Comes for the Archbishop?
How does the concept of neurotheology challenge traditional notions of spirituality and religious experiences?
What are some of the physical triggers for spiritual experiences discussed in the essay, and how do they relate to the experiences of the characters in the novel?
Can neurotheology help bridge the gap between religious and secular perspectives on spirituality, as depicted in Death Comes for the Archbishop?
If you don’t like the first set of results, you can always select “Regenerate response,” and the app will give you five more discussion questions. It will then ask whether that batch is better than, worse than, or the same as the original results. Here is the second set:
How does neurotheology explain the relationship between the body and spirituality as depicted in Willa Cather's novel Death Comes for the Archbishop?
In what ways does Father Latour's physical experiences, such as visual disorientation and thirst, contribute to his spiritual encounters in the novel?
How does the concept of "spiritual synchrony" in religious rituals align with the portrayal of shared religious experiences in Death Comes for the Archbishop?
What role does aestheticism play in Father Latour's spiritual experiences, and how does it relate to the distinction between religion and spirituality in neurotheology?
How does the concept of "miracles" in Cather's novel align with the findings of neurotheology regarding heightened awareness and perception during spiritual moments?
The second batch, to my eye, tries to target more specific phrases, and in that way it might be slightly better than the first set of questions. But these questions assume that my argument about neurotheology in Death Comes for the Archbishop is objectively true. The discussion questions might measure a reader’s comprehension of my essay, but they wouldn’t spark a free exchange of ideas. More importantly, I can’t use any of these prompts for my original purpose, which was to have a conversation about Tuesday’s essay with you, my readers. Is this AI’s shortcoming, or was I simply not being clear with my original directions?
I tried again, this time asking ChatGPT to write five discussion questions asking readers to apply Tuesday’s essay to their own lives. These results strike a little closer to the mark.
How can the concept of neurotheology help us understand the relationship between the body and spirituality in our own lives?
Have you ever had a spiritual experience that you can relate to the triggers discussed in the essay, such as physical strain or participating in religious rituals? How did these experiences affect you?
Reflecting on the discussion of near-death experiences, do you think physical stimuli can play a significant role in shaping spiritual encounters? Why or why not? Can you think of any personal or anecdotal examples?
In the essay, it is mentioned that religious rituals observed in a communal setting increase the likelihood of spiritual experiences. Have you ever experienced spiritual synchrony or a sense of connection during religious or group rituals? How did it impact your spirituality or sense of belonging?
Consider the idea that spiritual experiences can be both intellectual and emotional, stemming from nuanced ideas or aesthetic appreciation. How do you personally connect with spirituality? Is it more rooted in intellectual understanding, emotional connection, or a combination of both?
These questions could work for a Friday thread. I especially like this question (generated in the app’s second attempt at personal application of the essay): “Reflecting on the discussion of near-death experiences and their connection to spiritual encounters, have you ever had a brush with death or a life-threatening situation that affected your perception of spirituality or the divine?”
I was about to conclude that ChatGPT wasn’t really geared to spark scholarly discussion, since it kept taking my argument and the term “neurotheology” at face value. But then I asked the app to generate five discussion questions inviting debate about my essay. After a few “Regenerate response” attempts, the tool eventually produced some questions that invited competing interpretations of Death Comes for the Archbishop and that opened up the term “neurotheology” to different definitions. Perhaps the most memorable result, which captures the ambiguity of Bishop Latour’s character, is this: “Can religious practice be purely aesthetic or conceptual, or is genuine belief necessary for a spiritual experience?” This one would be discussion-worthy in our comment thread, don’t you think?
But if professors were to use AI in this way for class preparation, and then students were to use AI to complete their assignments, it seems we’d all become scientists like Harlow, and learning itself would be relegated to the cage in the lab. ChatGPT would then have completely colonized the examined life by assuming both roles in the Socratic exchange. We’d be standing behind the controls, maybe, tweaking the questions and possibly revising the results. But would we be transformed by the experience in any way resembling the freedom and growth that we’ve typically associated with liberal arts education? Or would we be setting ourselves up for an epidemic of Alzheimer’s disease due to cognitive decline from verbal sloth? (A grim, but testable, hypothesis.)
I’ve heard some say that the higher levels of craft are still safe, that ChatGPT cannot duplicate a sophisticated writer’s voice, that it lacks the power of real originality. But this is by no means self-evident. Planet Money recently produced a two-part series using AI. They asked AI to write their script, cloned the voice of a former host of the show to narrate it, and even used AI-generated questions to interview scholars. You know what those professors kept saying? “That’s an excellent question.” Polishing the episode still required a lot of tweaking, and it’s largely the juxtaposition of human elements and AI content that makes the series interesting. So it seems that revision may be where high-level craft remains most relevant in our brave new world.
But I still think the struggle of creation has intrinsic value. Willa Cather captured the essence of it in her unforgettable manifesto, The Kingdom of Art:
“To keep an idea living, intact, tinged with all its original feeling, its original mood, preserving in it all the ecstasy which attended its birth, to keep it so all the way from the brain to the hand and transfer it on paper a living thing with color, odor, sound, life all in it, that is what art means, that is the greatest gift of the gods. And that is the voyage perilous, and between those two ports more has been lost than all the yawning caverns of the sea have ever swallowed.”
An artist who embraces the voyage perilous does so in anticipation of human connection. Those risks are only worth it if there is a reader on the other end to receive the idea, absorb it, and thereby keep it alive.
I believe the same is true of the scholarly conversation, and so I will close with a question from a colleague who approached me after my presentation on Cather in New York last week. She noted that my list of proximal triggers for spiritual experience (borrowed from Newberg and Yaden) was long. If factors as various as personality types, genetics, near-death experiences, psychoactive drugs, and synchronous religious rituals could all produce spiritual experiences, couldn’t one say that a spiritual experience could be triggered by almost anything? And if so, how useful is it to identify all of those triggers in the first place?
I think it’s safe to say that ChatGPT would not have come up with that question. I don’t have a good answer to it. Do you?
I taught college composition and lit for nearly 15 years, and this situation strikes me as an argument for in-class drafting. Not only might it reduce the chance to use AI, but it also might allow/ask the students to participate in *generating* questions about the text. For example, begin a class with this prompt: "Now that you've read the first x pages/chapters, write for 10 minutes about what aspect you most reacted to and why. Point to specific pages/examples." Then, after those 10 minutes, have a discussion.
My impression from my time in higher ed was that we assume that the teacher's questions are always more important than the students', which often discourages students from asking questions unless they can frame them in ways that sound like the teacher's. I wonder whether part of the appeal of AI is that it makes it easier for student writing to sound the way students assume their teachers want them to sound. I didn't think that was the point of teaching literature or writing, but I think a lot of academics act as though it is. I always much preferred messy engagement to fluent distance. That meant tolerating--even encouraging--messiness; it meant bringing messiness into the class and placing it front and center.
The signature reason I did well in English was that, for my whole life as a student, I took *my own* reactions to the literature I read seriously; I treated my responses as though they mattered. I don't know where that confidence came from. But I don't think the vast majority of students are encouraged to approach literature (or maybe any of their subjects) that way. And as depressing as the possible use of AI is, that lack of encouragement long precedes the technology.
This is so interesting because I am finalizing my response to your Tuesday post. The fact that it was AI-generated helps explain why I was having trouble with it. I pulled out my copy of DCFTA and started reading it. One of my favorite scenes is when Latour is in the cave. The strong emotions he feels there continue to reverberate in me. However, that scene was not mentioned in the post.
What you published Tuesday was helpful to me in clarifying what I discussed on a retreat recently. I was just in Blowing Rock, NC, discussing The Brothers Karamazov, and the subject of Dostoyevsky’s epilepsy came up: how both the illness and the methods of controlling it likely contributed to his original thinking. (Just two years prior to the publication of the novel, his three-year-old son Alyosha died of epilepsy, complicating his own view of what fathers pass on to their sons and the guilt that this entails. This was such a poignant theme in the book.)
Anyway, it’s shocking to hear that what I was responding to was not completely human. I think the ideas were, but the vehicle of conveyance was artificial. But in a way my response (which I am still editing) is the exact opposite of what was first posed. My post is informed by an extraordinarily human forum: a retreat of friends and strangers who voluntarily spend a week together to discuss a huge Russian novel. Out of our own pockets and not for credit. Is this what we are supposed to be doing instead of a debt-inducing college education that fails to treat more and more academics fairly?