Lauren J. Short, Baylor University
As scholars observed in the spring and summer 2023 issues of Composition Studies, the field of writing studies is ready to move beyond discussions that merely seek to detect AI usage (Hubbard, 2023; Johnson, 2023). In the context of the writing center, where tutors are generally trained to respond to writing through a non-evaluative model, I am interested in exploring how writing centers can carry their conversations with tutors about AI usage into the next stage of the discussion: moving past policing and toward training and mentoring tutors to respond to AI in student writing in productive ways.
Baylor’s Writing Center is part of a private R1 university with a student population of approximately 20,000. We employ undergraduate and graduate tutors from a variety of disciplines. The director and I, the assistant director, are also faculty members in the English department and, by December 2022, had heard of ChatGPT and its potential to displace writing teachers. Over holiday break, I read Daniel Herman’s (2022) “The End of High-School English” in The Atlantic and downloaded ChatGPT, feeding it prompts to test its capacity.
In January 2023, following the holiday break, I was asked to help facilitate an English department-led discussion about what AI meant for our discipline, which sparked my initial interest in artificial intelligence in writing. That conversation inspired me to bring an open discussion into our writing center to help tutors, who, if they had not already, would soon be encountering AI in their sessions. I was interested less in “teaching” the technology and more in the big-picture implications of text generation for writing. For instance, how can tutors best help writers in the face of this new technology? What do tutors need to understand about AI text generators to help writers effectively? Based on these questions, our writing center has attempted to facilitate open-ended discussions while simultaneously providing structure through practice-based training using mock scenarios, an official artificial intelligence policy in our tutoring handbook, and IRB-exempt interviews with tutors about their experiences encountering AI during sessions. Ultimately, our goal has been to help tutors shift from detecting AI use in student writing to having more productive conversations about its use.
Our writing center’s first AI-specific training session took place in March 2023. During the meeting, I walked tutors through various iterations of ChatGPT along with its benefits and drawbacks, adapting resources posted to the Wcenter Listserv by Ashley Squires of the Avila University Writing Center in January 2023. Afterwards, I distributed three ChatGPT scenarios for tutors to respond to (see Appendix A). During this initial training session, several tutors asked what to do if they suspected a writer had brought in work generated by AI, which first prompted our leadership team to consider ways to direct tutors away from believing they needed to police the use of AI in student work. After this session, our leadership team determined that we needed an official artificial intelligence policy in our tutor handbook for the upcoming academic year, so I created one in summer 2023 based on my observations during the first staff training session and what I understood to be tutors’ concerns about navigating AI. A portion of the most recent version of the policy reads as follows (see Appendix B for the full policy):
Unless otherwise communicated by a course instructor, the use of ChatGPT or other AI text generators is a violation of [our university’s] Honor Code. If you suspect a student’s work has been generated by an AI, follow the same procedure you would in cases of plagiarism or improper citation. Lead with curiosity by asking the student about their writing process and how they composed their text. You may also ask if the course instructor sanctions the use of AI in their class or on the assignment.
If the student indicates that the course instructor does allow the use of AI in their class, you can work with the student to determine the extent of that use. If they don’t allow it, then work with the student to determine how to revise the essay. As with the plagiarism policy, our goal with the writer is not to be arbiters of suspicion, but instead to open a line of communication and educate the writer about the risks of AI use, particularly if not sanctioned by the instructor. The [writing center] does not report the use of AI in student drafts for the same reasons outlined in the plagiarism policy…
There are a couple of points about the creation of this policy I would like to highlight. Our university’s official stance (as of February 2025) still dictates that submitting work generated by AI is unacceptable and a violation of the Honor Code. However, the university also provides a list of strategies for instructors dealing with AI-generated assignments and does not prohibit instructors from permitting AI-produced writing in their courses. Essentially, the blanket policy prohibits AI use in student assignments unless individual faculty allow it. With that in mind, I understood that our writing center had to accord with the university’s official stance toward AI use, and I wanted to make sure that faculty who prohibit AI use in their classes did not view the writing center as being in conflict with them. At the same time, we wanted to help students thrive in contexts where they are allowed or encouraged to use AI.
As I began drafting the writing center’s artificial intelligence policy, I quickly realized how much it aligned with our plagiarism policy. In our plagiarism policy, we encourage tutors to speak with writers about suspected plagiarism but not to assume that plagiarism is intentional. We recommend that tutors educate writers about how to avoid plagiarism through explicit instruction and practice during the session addressing paraphrasing, citation, and patchwriting. The policy ends by stating that it is not the responsibility of the writing center to report plagiarism in a draft to instructors: drafts are unfinished documents; we value the writing center as a safe space for students to discuss issues of academic honesty in a low-stakes environment; and reader expectations vary significantly by discipline, genre, and assignment, so only the professor can truly assess whether the student has successfully fulfilled the requirements.
With the plagiarism policy in mind, I kept coming back to the spirit of the writing center as a safe space for students to have conversations about acceptable use of AI. I did not want our tutors to enter consultations with an air of suspicion toward writers, nor did I want them to feel the need to police work they suspected was generated by AI. Instead, I wanted to highlight the writing center as an educational, low-stakes environment where peers could discuss how to navigate this new paradigm. Hence, I wanted our policy to reflect the center as a discussion-based space where writers could make mistakes and seek guidance without fear of being reported to their instructors or to the university’s honor council. Furthermore, I placed the artificial intelligence policy directly after the plagiarism policy strategically, so tutors could see the similarities between them and understand that both stem from the same set of core values: drafts are works-in-progress, and writing centers are meant for practice, learning, and the opportunity to fail before submitting work in a higher-stakes environment. While several similarities exist between the two policies, the plagiarism policy, unlike the AI policy, provides strategies for how tutors can educate writers about plagiarism and how to avoid it. The AI policy is intentionally vague for the sake of flexibility and because we are still learning the best ways to educate writers about how to use AI successfully in their writing.
In October 2023, about six months after our first AI-related training meeting, I revised our staff training materials and led a session on the same topic for a large influx of incoming staff. The revision mostly reflected changes in the technology since the spring and allowed more space for tutors to talk about first-hand experiences with their peers. My hope for this second training session was that tutors would focus less on their role in monitoring AI-generated text in student writing and more on their role as peer guides to the writers they encountered. During the session, large-group discussion shifted away from how to “catch” AI in student writing and toward individual reflection on how well tutors had handled AI-related sessions.
In a small IRB-exempt study of my own writing center conducted in summer 2024, I spoke with five tutors (four undergraduate, one graduate) about navigating AI-generated texts in their sessions. I was most interested in which features of a text led tutors to believe it was AI-generated, how tutors navigated discussions about AI-generated texts during their sessions, and what role tutors believed writing centers played in regulating the use of AI in writing. All participants are referred to by pseudonyms. Because the sample size is small, I do not intend to suggest my findings have wide-scale application; rather, I offer these responses as a way to reflect on how tutors in our writing center have dealt with the emerging technology of generative AI, particularly after experiencing our training sessions and handbook policy.
Four out of five respondents reported suspecting that writers had brought AI-generated work into a session or had worked with writers who openly admitted to using AI. Suspected AI-generated text appeared for two tutors during online, asynchronous appointments and for another tutor during a face-to-face session with a returning writer taking a First-Year Writing course. One tutor, Taylor, explained that during an asynchronous session she believed she encountered an AI-generated text “because it was saying a lot without really saying anything. In all of its ramblings and big words, there wasn’t any content…” Another tutor, Jamie, also encountered what she believed to be an AI-generated text during an asynchronous session “because while each paragraph was completely coherent and polished on the surface level, the same ideas were being repeated over and over again in different orders, sometimes using the exact same wording.” William, a graduate tutor, noted that “ChatGPT’s style is often very polished with a strong (if not jargony) vocabulary but is usually empty and repetitive.” For Taylor, Jamie, and William, the common markers of AI-generated texts were polish and verbosity paired with repetition and a lack of substantive content.
Another tutor, Ashton, worked with a returning student from a First-Year Writing course and noticed the student’s writing appeared more polished than in previous sessions. She reflected, “when I asked specific questions about the student’s writing, he struggled to respond clearly. He seemed unsure about the content and direction of his paper.” For Ashton, it was less the features of the text that led her to believe the student writer had used AI than his inability to articulate his writing process and the content of his draft.
When asked about the challenges they faced in advising students about AI use in their writing, participants most often expressed apprehension over how to open conversations with student writers whose work they suspected was AI-generated, or even whether they should. Two tutors also felt an ethical responsibility to have such conversations, even if they were unsure how best to approach them. Taylor commented that one of her biggest challenges in dealing with AI-generated texts is “knowing how to confront the student… when I know they will probably face a harsh grade.” Another tutor, Jordan, said, “my biggest concern would be to not offend them or blatantly tell them that [AI use can be] academic dishonesty.” Ashton responded, “I am apprehensive about having to confront a student about the use of AI… I am concerned about coming across as accusatory, but I also want what’s best for the writer’s integrity.” Finally, William was concerned about taking time during a session to discuss the ethics of AI use because it did not “contribute to helping the student with their essay.” I would argue, though, that having this discussion is important because it does affect the student’s work and could ultimately place them in jeopardy if the instructor does not allow the use of AI.
Despite their concerns about confronting student writers over AI use, all the tutors I spoke with believed it was not the role of the writing center to regulate a student’s use of AI. Taylor believes tutors should “redirect the student and encourage them to write the paper themselves” if it is explicitly clear that AI use is prohibited. William also noted that he believed it was our writing center’s official stance not to regulate AI use in student writing, and when a student did admit to cheating, he explained that the most he “could do was encourage them not to do it.” Jamie explained that “if we want students to see the [writing center] as a tool for them and their own learning, it is probably better not to have tutors flag AI use or attempt to regulate it” because doing so could “decrease the trust of the student in the writing center.” Jordan said, “it is our job to maintain the autonomy and ownership a writer has over their writing. Regulating students in their writing process on anything restricts their ownership. We can continue to advise and educate, but it is not our job to regulate.” Ashton likewise did not believe it was the job of writing centers to regulate AI use “but to learn how to adapt when presented with AI-generated writing.”
Although the tutors I spoke with ultimately did not believe writing centers should regulate the use of AI in student writing, it stood out to me that their greatest concern about working with writers suspected of using AI was how to confront them (or whether they should). The tutors who participated in this study appear to be distinguishing between institutional regulation (the writing center’s role in “policing” student use of AI) and their responsibilities as individual tutors to educate students about the risks of using AI, whether in contexts where it may be prohibited or because the generated text ultimately fails to meet assignment goals or reach its target audience. Put another way, tutors did not believe the writing center should prohibit the use of AI or report its usage institutionally, but they did feel an impulse to confront writers out of concern for academic honesty and awareness of the rhetorical situation. Our center’s hope in implementing the AI policy has been that tutors will feel less encumbered by fear of “getting in trouble” for allowing the use of AI to go unacknowledged and more empowered to shift the discussion toward how to use AI in rhetorically effective ways when a professor allows its use.
Based on participant responses regarding regulation of AI use in writing, at least a small subset of tutors understands that their role in a session is not to report AI use as a punitive measure against students. However, tutors from this study do feel a responsibility to alert writers to the potential pitfalls of AI use, which requires them to engage in conversations many of them are uncomfortable having. Because these conversations require tutors to be suspicious of student work that does not fit a perceived standard, they can place tutors in an accusatory position that could ultimately offend the student writer, even when the tutor is genuinely invested in the student’s success and simply trying to help.
Participants in the study appreciated that the training and policy relieve them of the pressure of “catching” AI use in student writing, though they expressed a desire to continue having conversations about how to handle AI in the writing center. Taylor believes AI should be addressed in writing center orientations, while Jordan said writing centers should include AI-related topics in training at least once a semester. Jamie and William both said they could benefit from learning more direct strategies for addressing AI in student writing, particularly for opening the awkward conversations that can arise when AI use is suspected. Ashton echoed Jamie, saying it was important for her to understand how to “approach students respectfully and collaboratively.”
While there is no one-size-fits-all approach to addressing AI use in writing centers, it is important to take time to listen to tutor concerns, whether formally or informally. Our writing center has taken preliminary steps to address AI use in student writing, and we will continue to listen to tutor feedback about their concerns and needs. For example, it has become apparent to me that tutors desire more concrete strategies for “what to do” versus “what not to do” in sessions that may feature AI-generated text. In upcoming training sessions and revisions to our AI policy, I envision us addressing phrases tutors can use to open what many consider awkward conversations about how writers generated the texts they bring into the writing center. Furthermore, now that tutors have more experience with AI, I plan to seek their input for subsequent versions of the AI policy. I initially drafted the policy without their input because they had little experience with the emerging software and looked to leadership for guidance on how to approach their sessions. I wanted to provide direction for them, even imperfectly, as a baseline they could respond to as they grew more comfortable navigating AI and working with writers who may be using it.
Based on observations and conversations with tutors, our writing center has found it helpful to provide our staff with an AI handbook policy that releases them from the responsibility of reporting students who use AI in their writing and provides a basic model, grounded in our plagiarism policy, for how to approach such discussions. Our center has also valued the open-ended discussions that have emerged from meeting regularly to discuss AI in training sessions. As responses from our small-scale study suggest, tutors desire regular AI-related training as the technology continues to evolve and further ethical concerns arise. Our writing center will continue to reinforce, through policy and training, that it is not the role of the center to police AI use in writing, and we will keep having conversations about how best to engage writers about the potential risks and benefits of AI use in their writing.
References
Herman, D. (2022). The end of high-school English. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-english-essay/672412/
Hubbard, J. (2023). The pedagogical dangers of AI detectors for the teaching of writing. Composition Studies. https://compstudiesjournal.com/2023/06/30/the-pedagogical-dangers-of-ai-detectors-for-the-teaching-of-writing/
Johnson, G. P. (2023). Don’t act like you forgot: Approaching another literacy “crisis” by (re)considering what we know about teaching writing with and through technologies. Composition Studies, 51(1).
Squires, A. (2023). Developing topics with ChatGPT [PowerPoint slides]. Avila University Writing Center. https://drive.google.com/drive/folders/14wqh83tjpHjK8w2tfuwYPNs4EDwmuscy