Metropolitan Digital


A writing professor’s new task in the age of AI: Teaching students when to struggle

  • Written by Kristi Girdharry, Associate Teaching Professor of Arts and Humanities, Babson College

I was early to the generative AI wave in higher education: I was among the first writing professors to publish in an academic journal about generative AI and critical thinking[1], and I am now part of an interdisciplinary team[2] at Babson College examining how AI is affecting education, industry and society.

But that does not mean I am all in on AI – nor am I anti-AI. I am pro-learning. As my co-authors and I argue in a forthcoming book[3] on realizing the promise of higher education, even the most powerful tools are only as good as the learning environments we build around them.

So what does “getting learning right” look like in the age of generative AI? It involves a lot of experimentation and leaning in with students as a co-learner when I don’t have all of the answers, while remaining staunchly committed to sharing my expertise in writing, critical thinking and learning. I also hope that they trust me enough to follow my lead and persevere when the work becomes difficult.

From hope to grief

Navigating the rise of generative AI seemed easier to me in the earlier days. In spring 2023, for example, soon after ChatGPT went public, I asked students to use it to research their favorite musical artist and then fact-check the results as part of a unit in my senior-level social media class. The responses sounded polished and confident, but they were often wrong. Album dates were scrambled. Tours were invented. At one point, a student threw up her hands and shouted, “It lies!” The room erupted. The “lies” were especially apparent with less popular artists, about whom less had been written. “How might that translate to other knowledge areas?” I asked. They were quick to consider whose voices might not make the cut in other scenarios.

While this was a promising start, by fall 2023 I found myself starting to grieve[4] the passing of the pre-AI-everywhere world. Once again, I leaned in with my students, now in a sophomore-level research writing class. In their proposals, I included a new required section called “Be Better Than a Robot” – the gist being that if ChatGPT could write your research paper, what was the point of us spending weeks on it?

I asked: Where would your own work – your own human thinking – need to come in to create a tiny piece of new knowledge in the world? We practiced primary research, we used time during class for reading and annotating, and I extended deadlines to account for the rigor we were undertaking.

AI usage was discouraged but not outright banned: If students used it, they were required to describe carefully and explicitly how, and I even offered examples, like brainstorming academic titles, as potential options. While most of the final research projects did not seem AI-generated, the few that did caused me to spiral – as if it were my fault for not coming down harder on AI use when I was trying to stay neutral and understand how we could use it as a tool rather than a replacement.

Cognitive blind spots

Since those early days in 2023, discussions around college students’ use of AI have only become more fraught and complicated. There are no easy answers, and there are a lot of fears about overreliance[5], loss of learning[6] and even the value of a college degree[7]. There are also plenty of ethical concerns that go beyond academic integrity, such as the environmental impact[8] of AI and concerns over data and privacy[9]. But AI usage is not slowing down.

Recent data from the Pew Research Center[10] shows that more than half of teenagers turn to AI to find information and get help with schoolwork. By the time these students arrive in my classes, many have already developed habits around these tools, and those habits may or may not serve their learning. For me, that’s not an argument for banning AI in the classroom, but rather an argument for taking it seriously.

But here’s the honest difficulty: When students use AI, they often can’t tell when they’re shortcutting their own thinking. A study published in late 2024[11] in the British Journal of Educational Technology found that students using ChatGPT improved their essay scores in the short term but showed no meaningful gains in knowledge. Moreover, they were prone to what the researchers called “metacognitive laziness,” meaning a dependence on the tool that undermined their ability to self-regulate and engage deeply in learning. This is a consequence of cognitive offloading[12].

Teaching discernment

At this point, I feel my role is shifting from neutral observer or co-learner to something more like a guide with a point of view. I know what rigorous thinking looks like in my discipline. I know the difference between a paper that has moved through genuine intellectual struggle and one that has been assembled. My job is to make that difference visible to students who may not yet have the experience to see it themselves.

So, yes, there are moments in my writing courses where I ask students to write without AI. Not as a purity test, although I could see it used that way, and not because I believe they’ll go on to spend their careers avoiding it, but because understanding what AI does to your thinking first requires knowing what your thinking can do without it.

This matters especially now as many college students I meet arrive already anxious, already performing, already optimizing for the grade rather than the learning. Many have spent years learning to produce the right answer rather than to wrestle with hard questions. Before they can develop discernment about any tool, they need something more foundational: a sense of their own thinking as worth trusting.

In practice, this looks like drafting with AI and without it, comparing versions, and being asked to justify choices out loud. It looks like noticing when the tool accelerates routine work and when it flattens complexity.

Like many faculty navigating this moment, I find myself in what Auburn University professors Christopher Basgier and Lydia Wilkes describe as an “unsettled middle[13],” neither fully embracing nor refusing the technology, but doing the uncomfortable work of engaging with it critically. My students, I’ve found, often end up in a version of that same uncertain space. Learning to sit with that uncertainty – to tolerate the slowness and mess of thinking things through rather than reaching for the frictionless answer – is where discernment begins.

If students are going to continue encountering these tools throughout their lives, then ignoring that reality does them no favors. My responsibility is to help them develop the judgment to decide when a shortcut is strategic and when it undermines their own thinking. That is pro-learning.

References

  1. ^ generative AI and critical thinking (doi.org)
  2. ^ interdisciplinary team (www.babson.edu)
  3. ^ forthcoming book (mitpress.mit.edu)
  4. ^ starting to grieve (www.insidehighered.com)
  5. ^ overreliance (www.elon.edu)
  6. ^ loss of learning (news.harvard.edu)
  7. ^ the value of a college degree (www.usnews.com)
  8. ^ environmental impact (news.cornell.edu)
  9. ^ data and privacy (hai.stanford.edu)
  10. ^ Pew Research Center (www.pewresearch.org)
  11. ^ study published (doi.org)
  12. ^ cognitive offloading (evidencebased.education)
  13. ^ unsettled middle (doi.org)


Read more https://theconversation.com/a-writing-professors-new-task-in-the-age-of-ai-teaching-students-when-to-struggle-276590