Laura Hardin Marshall, Webster University
Our institution’s journey with generative artificial intelligence tools, like that of many others, began in late 2022 with ChatGPT. Concerned faculty reached out to key administrative staff from the Online Learning Center, the Faculty Development Center, and the Reeg Academic Resource Center (which houses both our academic integrity unit and the Writing Center). That outreach led to a faculty and staff working group charged with learning about this ‘alarming’ new development and reporting back with policy and pedagogy recommendations. After a year of meetings, listening sessions, and learning community gatherings, the university landed on a semi-formal policy that advises instructors to make their own decisions about the use of AI within their disciplines and their specific courses.
In the Writing Center, however, we weren’t particularly feeling the furor around AI. Within the first year or so of ChatGPT and similar tools becoming well known, we saw little overt evidence of them in the center, likely because the people who were using them weren’t coming in for writing help. Because AI was mostly a non-issue, our center made only a few adjustments to the training materials in our academic integrity unit, such as informing tutors about what AI is and how it might affect their interactions with students. The main takeaways were 1) if AI comes up, always start by asking about the instructor’s policy (since those policies would likely vary course by course) and 2) approach the situation in whatever way aligns with the tutor’s own style and ethics. Don’t feel comfortable bringing it up? That’s okay; we’re not the AI police. Want to give the student a mini-lecture on how they’re short-changing their education? If you think they’ll be receptive to it, you do you. Overall, though, preparing tutors to encounter AI was only a small part of one unit, which was itself only a small part of the larger training regimen (probably about 15 minutes of 30+ hours of curriculum).
Now, in 2025, that approach is no longer sufficient. Where it was once rare for students who used AI to visit the center, we have since seen a rise in two categories of students: 1) students individually referred to the center by their instructor because of inappropriate AI use, and 2) students whose entire class is required to use the Writing Center and who have chosen to use AI to produce the majority of their work (despite their instructor’s request not to).
With the first group, the referral generally comes with the instructor’s expectation that we can teach students ethical writing practices: through a single visit we will make clear all the necessary moves, and the student will then proceed with their educational journey, forever changed. And that may genuinely happen for some students (I like to think so, anyway). In most cases, though, it’s more likely that the student listens politely while strategizing ways not to get caught next time.
For the second group, we face a greater challenge. The instructors of these classes have incorporated the Writing Center into students’ assignments in an effort to help them understand the processes of writing (especially how to manage large-scale projects, as these classes typically involve capstone papers) and to see the value in discussing their work with others. But because so many of the students use AI in their projects, they are largely unable or unwilling to discuss their work, identify global concerns, or revise in any substantial way. Ultimately, these required appointments in particular have led to a sort of crisis in our center, where the tutors are at a loss about how to make these sessions productive.
Fortuitously, in the semester prior to this influx of required appointments, our Faculty Development Center put out its call for proposals for the annual Teaching Festival. This year’s theme was transforming educational strategies through student input. The assistant director of the Reeg ARC (and the key contact for our integrity unit) reached out to ask if I’d be interested in presenting some of our previous work on how to talk to students about AI use, which I took as the perfect opportunity to open a dialogue with my tutors. Since we were just beginning to feel the increase in appointments with writers using AI, the tutors were well positioned to observe and work through these new dynamics. What were their opinions about, experiences with, and approaches to discussing AI with writers in the center? I asked the tutors to respond to the following questions:
- What makes your “spidey sense” go off when you’re reading student work? What qualities in a work make you think, “This sounds like AI”?
- How do you approach situations where you think/know a student used AI? What questions do you ask and/or how do you normally start the conversation?
- Do you avoid the AI-use conversation entirely (and that’s okay, if you do!)? What factors lead to deciding when/how to talk about AI or not?
- What advice (if any) do you give students around writing with AI?
- Do you try to change a student’s mind about using AI?
- In your opinion/experience, what makes an overall “productive” conversation about AI?
In collecting their responses, I hoped to learn from the real situations and challenges the tutors were facing and to use that knowledge to develop more effective guidance for our training materials (a 15-minute discussion wasn’t going to cut it anymore). Through the Teaching Festival, I also wanted to offer recommendations to faculty and staff who have to address similar sticky situations. Though the dynamics of those discussions would differ since tutors don’t assign grades or enforce policies, there was certainly a lot that instructors could learn from tutors’ first-hand experiences.
Our discussions about AI resulted in two documents. The first was on AI literacy (a brief overview of what AI is, expectations for how to use it more ethically and effectively, etc.); it concluded with common reasons students resort to using AI unethically and ways to gently redirect them toward better writing practices and/or toward engaging with AI critically. The second document was an AI discussion protocol structured like a decision-making workflow. It offered several branching paths (taking a direct or indirect approach to ‘accusations’ of AI use, whether AI was allowed by the instructor or not, etc.) and questions tutors or instructors could ask to learn about students’ writing, learning, and decision-making processes and to get students reflecting on them. With these resources, both our instructors and our tutors can feel more confident navigating the varying policies and attitudes about AI and better equipped to have productive conversations along the way.