Generative AI Tutor Education in Our Writing Center: A Slow Approach

Kristina Aikens, Tufts University
Hannah Weildon, Tufts University

In April 2023, we embarked on our first nervous conversation with writing tutors at Tufts about Generative AI. ChatGPT had been released to the public five months earlier, and we were busy immersing ourselves in readings and in conversations with campus partners, including librarians, our community standards office, and our faculty center for teaching. At this all-tutors meeting, we spoke with tutors about how students might be using ChatGPT and promoted a set of questions and considerations, informed by an early, thought-provoking conference presentation (Adams & Baker, 2023), to guide tutors toward critical and curious engagement with GenAI. Rather than creating policies, we encouraged tutors to take note of whether and when students were using GenAI and share their observations during meetings throughout the following year. Yet many questions remained. Would students who used GenAI still see writing tutors, or would our students consist only of those who didn’t use it? Would students expect us to be GenAI experts who could help them use the technology better? What, in short, would our role be?

Until recently, much of the writing published about GenAI in educational settings appeared to fall into a binary of enthusiastic adoption—exemplified by Ethan Mollick in his popular One Useful Thing blog (2023)—or total rejection, leading to mistrust of students and false accusations of plagiarism (Klee, 2023). Even as technology experts raise concerns beyond plagiarism, such as bias in algorithms and GenAI development (Buolamwini, 2023), privacy and consent when requiring students to create chatbot accounts (Caines, 2023), environmental and workforce exploitation (Furze, 2023), and skill loss (Watkins, 2024), these cautions seem to be drowned out in the pursuit of staying ahead of—or firmly behind—the technological curve. That said, in an essay about the distrust that has arisen in the wake of GenAI, Jacob Riyeff critiques what he describes as “the false dichotomy of ‘embrace’ and ‘fear’,” arguing that this “occludes and prevents more nuanced responses” (2024). In reality, most of us are somewhere in between. Neither ignoring GenAI nor actively discouraging it is a tenable position. Thus, we have turned to our values as a writing center to help us articulate a nuanced response.

In Radical Writing Center Praxis, Laura Greenfield warns against binary thinking that can limit tutors’ ability to fully engage with students and our own ability to provide meaningful leadership. She challenges us as leaders of our programs to learn from tutors as we guide their practices, and to avoid oversimplifying our complex role, encouraging us to “develop methods that more critically dive into rather than turn away from the negotiation intrinsic to collaboration” (Greenfield, 2019, p. 121, original emphasis). In our writing center, we strive to embody the values Greenfield identifies by emphasizing process, listening, reflection, collaboration, and mutual learning. Thus, as we began developing tutor education that more intentionally and consistently addresses GenAI, we launched a survey to capture tutors’ reflections on their experiences and views from the past year. We intended the survey both to capture information we had received informally during our staff meetings and to give tutors who might have been less vocal during those conversations an opportunity to weigh in. Before making an education plan, we needed to know how familiar tutors already were with the technology, and we wanted to use tutor experiences and beliefs as the basis for any activities we created. We also hoped the survey could provide real-life experiences for practice scenarios. The survey focused on four general areas: tutor familiarity with GenAI, beliefs about the role of writing tutors, hopes and concerns about GenAI, and requests for training.

Survey Results and Discussion

Participants and Methodology

Our writing center comprises two groups of writing tutors: Writing Fellows (WFs), who are undergraduates assigned to work with specific undergraduate classes, and Graduate Writing Consultants (GWCs), who are graduate students who meet with both undergraduate and graduate students on a variety of writing projects. We created an IRB-approved Qualtrics survey consisting of a combination of checkbox and open-answer questions. The survey was sent in May 2024 to 39 WFs and 29 GWCs. Twelve WFs and nine GWCs completed the survey, for an overall response rate of 31%. Responses were downloaded into an Excel spreadsheet. Checkbox answers were counted, and open-answer responses were coded manually to identify recurring themes and patterns.

Tutor Familiarity with GenAI

Both WFs and GWCs reported some degree of familiarity with GenAI in the context of academic writing, but the specifics differed. WFs were more likely to have used GenAI tools themselves (typically the free version of ChatGPT), often in early phases of the writing process such as summarizing readings, brainstorming topics, and generating outlines. They were less aware than GWCs of GenAI use among the students they met with [1]. By contrast, nearly all GWCs who took the survey reported that they had worked with students who used GenAI across all stages of the writing process, but they were significantly less likely to have used the tools themselves, expressing skepticism toward their effectiveness and ethicality. These results suggested that both groups could benefit from more information about how students are using GenAI and from practice focused on helping students analyze GenAI output.

The Role of Writing Tutors 

Overall, tutors were in agreement about the role of our writing center regarding GenAI tools. The majority opined that our writing center need not actively suggest GenAI use to students, perhaps because, as one GWC put it, “They will use it anyway.” A handful of WFs qualified their answers by saying that suggesting GenAI should be an option in tutoring sessions, as a “last resort” and within certain boundaries, though they expressed uncertainty about what those boundaries should be. GWCs observed that students they worked with who used GenAI tended to uncritically accept GenAI suggestions or expressed a lack of confidence in their abilities (sometimes both); as a result, most GWCs believed their role should focus primarily on helping students think more critically about GenAI output and reassuring students that their own voices are preferable to technology-generated prose—or, in the words of one GWC, “harm reduction.” Somewhat similarly, WFs believed that their role should focus on discussing the “risks” of GenAI and teaching students how to use GenAI responsibly and effectively. Though few tutors worried that GenAI would replace human tutors entirely, both groups lamented that it could change their job into something they didn’t enjoy, wondering if their work would be “reduced to helping students make their computer-generated essays and ideas seem less computer-generated” and “searching [students’ writing] for signs that they’ve plagiarized,” damaging trust in the student-tutor relationship. These results affirmed our previous approach of neither encouraging nor discouraging GenAI use and offered insight into how we can emphasize the relational and critical thinking aspects of tutoring that tutors most value.

Tutor Hopes and Concerns

Beyond apprehensions about their immediate work, tutors expressed concern that GenAI would contribute to a decline in meaningful learning experiences and skill development, particularly problem-solving and critical thinking skills, communication and interpersonal skills, originality and creativity, and work ethic. To a lesser extent, both groups associated GenAI with certain negative trends they saw in education, such as perfectionism, overemphasis on product over process, and the “transactional” nature of grades: “It encourages superficial, quick work instead of deep thinking, which is the opposite of what we want to foster.” Several worried about the technology stifling language diversity, the development of student voices, and the “unique and creative style choices” writers make, causing writing to become more homogeneous: “There are concerning implications that GenAI may perpetuate the idea of standard English and maybe suppress other unique and valid writing voices.” Several also worried about increases in plagiarism and expressed confusion over their role in ensuring academic integrity; for example, some were concerned that they would be implicated if a student they worked with used GenAI when the professor had prohibited it. Although we asked about concerns broadly, most of the answers focused on education and the development of writing. Outside the education realm, a handful named data privacy, bias in the models, and copyright violations as concerns, though there was a notable absence overall of other important ethical concerns, such as environmental damage and labor exploitation. Very few positive aspects were discussed, though one WF said GenAI could “level the playing field” and a GWC mentioned GenAI could provide a good first step for students with disabilities or social anxiety to get basic help before approaching a tutor or instructor. When developing our educational approach, we wanted to acknowledge and further explore (and even expand) the valid concerns tutors raised about GenAI. We also wanted to clarify tutors’ role in academic integrity and address complex ways that GenAI could benefit students as an additional tool that does not replace tutors’ contributions.

Training Requests

Tutors appeared uncertain about what further training would be helpful. Some requested training on detecting GenAI usage. WFs especially wanted clearer guidelines on appropriate usage at our university, which does not have a unified GenAI policy, and they expressed interest in learning “best practices,” often using qualifiers like “ethical” or “productive.” While GWCs were not eager to learn how to use GenAI themselves, they were interested in better understanding how students are using it and how to help them think critically about its output. These results helped us understand how to frame our GenAI education plan, with a focus on clarifying tutors’ roles and allowing tutors to practice critically analyzing GenAI output.

Summary of Takeaways

The survey provided vital, “behind-the-scenes” information to gauge tutors’ experiences with GenAI, their sense of their roles, and their interest levels in additional conversations. The following key takeaways informed our education plan:

    • Concern about academic integrity was high, and we want to clarify that while tutors should support students’ academic integrity, they are not responsible for ensuring that students adhere to academic integrity policies
    • Tutors wanted to learn how to “detect” GenAI use, which we see as counterproductive and increasingly difficult
    • Tutors wanted to know if and how GenAI could be used ethically
    • While GWCs observed GenAI use across the writing process, brainstorming and outlining were mentioned consistently as common uses
    • Tutors agree that they can play an important role in critical GenAI literacy, particularly in terms of helping students process and critically analyze GenAI outputs and usage
    • Tutors greatly value the relational aspects of tutoring, helping students build skills and confidence, and validating a diversity of voices
    • Tutors described experiences that could be incorporated into practice scenarios 

Education Plan

Based on these takeaways, we are introducing a “slow” approach to GenAI tutor education with a focus on critical GenAI literacy. Aligned with James Lang’s (2024) idea of “slow-walking” GenAI, which builds in room for reflection alongside use, this approach aims to give tutors space to think through GenAI together from different perspectives over the course of the academic year, rather than offering more prescriptive training on prompt engineering or GenAI detection. By providing tutors with frequent, low-stakes opportunities to interact with and discuss GenAI, we hope to underscore critical thinking as a fundamental tutoring skill that is as vital as ever in the digital age.

To foster tutors’ critical GenAI literacy, we designed a series of discussions and activities with the following objectives:

    1. Provide a forum for tutors to exchange and reflect on knowledge and beliefs about GenAI 
    2. Increase awareness and promote critical thinking around student writers’ GenAI usage 
    3. Emphasize a reader-response approach to tutoring 
    4. Practice critically analyzing GenAI output and guiding students to do so  
    5. Introduce opportunities to learn about and discuss key ethical concerns around GenAI 
    6. Clarify the tutor’s role in helping students maintain academic integrity 

The discussions and activities have appeared (and will appear) in a variety of settings for both GWCs and WFs, including pre-semester orientation, bimonthly student staff meetings, workshops and modules created by campus partners, and a weekly writing pedagogy seminar for new WFs.  

Overall, we are integrating GenAI into our tutor education by weaving it into the fabric of our existing programming, which typically covers topics such as process and perfectionism, responding to student writers, universal design for learning, multilingualism, and linguistic racism. Given the abundance of blog posts and podcasts by writing educators, we are pairing each topic with a brief reading or podcast episode to spark discussion about GenAI’s potential impact. This fall, for example, tutors read and discussed Susan D’Agostino’s (2023) “AI Has a Language Diversity Problem. Humans Do, Too” alongside readings from Writing Centers and the New Racism (Greenfield & Rowan, 2011) to consider how GenAI exacerbates existing bias against “non-standard” forms of English. We intend for these readings to help tutors develop tutoring approaches that are inclusive of digital-age concerns.

Through interdepartmental collaboration, we are offering resources that address tutors’ desire for more concrete guidance around academic integrity and our own observed need for more tutor education on information literacy. Specifically, we will host staff from the office of community standards at one of our bimonthly meetings to discuss the tutor’s supportive, rather than punitive, role in helping students maintain academic integrity. We will also share our library’s recently updated First-Year Writing modules with tutors to provide specialized guidance on information ethics and interpreting GenAI results.  

Finally, we have launched a series of hands-on activities involving GenAI tailored specifically for tutors. Our activities are multi-purpose, experiential, and flexible, intended to make a high impact in limited time and to engage tutors with varying levels of familiarity and comfort with GenAI. To respect our tutors’ privacy and autonomy, we are not requiring tutors to create an account or submit their writing to a chatbot site; GenAI usage is voluntary, and we provide transcripts of GenAI interactions we conduct for tutors who don’t wish to use it.

Mock Session: Responding to GenAI-Generated Essay  

In our usual fall education, tutors conduct paired mock sessions in which one tutor plays the role of the student writer and the other acts as the tutor. The sessions are based on scenarios and sample papers that we provide. This fall, we provided one scenario in which a student brings an entirely GenAI-generated draft to the session. By reading this draft and acting out the session with the “student,” tutors were able to practice critically assessing GenAI output and conversing with students about GenAI usage. This activity also provided an opportunity to discuss the difficulty of GenAI detection, as the vast majority of tutors did not initially identify the draft as GenAI-generated, and to clarify that their role is not to report GenAI usage but rather to help students work within course policies around GenAI.

Reader-Response Tutoring vs. GenAI-Assisted Revision  

Building on an exercise we piloted last spring that compared GenAI writing feedback with human responses, we developed a new iteration focused intentionally on illustrating a reader-response approach to tutoring and exploring a variety of ways GenAI might be used in revision. To reinforce our reader-response approach, we had tutors do something we often do: read a sample paper and reflect on their reactions as general readers, including points of confusion and questions that arose. Then, we had them discuss how a tutoring session based on their reactions could unfold. 

In the second part of the activity, tutors pasted the sample essay [2] into a GenAI program and prompted it to respond in a few different ways: 1) to suggest revisions, 2) to act as a writing tutor and pose questions, 3) to act as a professor and provide expert feedback, and 4) to rewrite the paper. Finally, tutors reflected on the GenAI responses and compared them to their own. This exercise expanded tutors’ awareness of how students might be using GenAI in the revision process and allowed them to practice critically analyzing GenAI-generated essay feedback. Perhaps most significantly, our discussion underscored the purpose of dialogue in tutoring and the value of tutors as human readers who bring their perspectives and questions to student writers, rather than content expertise and answers. 

Collaborative Human Brainstorming vs. GenAI-Assisted Brainstorming  

Based on our survey, tutors (mostly WFs) were most likely to have personally used GenAI during the early stages of the writing process and to deem such usage acceptable and potentially helpful for other students as well [3]. To foster conversation between tutors who have used GenAI in this way (either personally or with students) and those who haven’t, we designed a comparative brainstorming exercise. In this exercise, adapted from Rethinking Writing Instruction in the Age of AI, we provided tutors with a broad topic, and they worked in groups to brainstorm potential research questions, with each tutor approaching the topic from a different disciplinary perspective (Laist, 2024). Afterwards, they used GenAI to brainstorm a list of research questions based on the topic and compared the results, identifying the strengths and weaknesses of each approach. Ultimately, this activity enabled tutors to reflect deeply on the efficacy of GenAI-assisted brainstorming as well as on what they offer to students as human collaborators.

 Researching GenAI Concerns  

In this group research activity, tutors explored some of the concerns about GenAI they expressed in our survey along with other issues they had not yet considered. First, they chose one issue from a list we generated together encompassing educational concerns, such as learning loss and impact on campus culture, and ethical concerns, such as bias, disinformation, and energy usage. Each group researched their concern (with the option of incorporating GenAI in their research process), assessed solutions, and created a presentation about their findings and process. This activity was bookended by discussions intended not only to broaden and deepen tutors’ understanding of GenAI-related issues but also to provide an opportunity to reflect on GenAI usage during the research process.

Creating GenAI Guidelines for Writing  

Though we have neither the purview nor the desire to create GenAI guidelines for the university as a whole, we understand tutors’ frustration over a lack of guidance. Across multiple meetings in the spring, we will create internal guidelines through a community-based, scaffolded activity. First, we will establish our values and goals as a writing center, producing a written document for reference. Next, we will brainstorm various ways GenAI could be used in the writing process, supplemented by published examples, our own observations, and scenarios from our survey results. Finally, tutors will synthesize their work by illustrating how different uses of GenAI align with or diverge from our writing center’s values and goals. The end result, we imagine, will be a visual guide posted internally on our tutor webpage that tutors can refer to when working with students who are using GenAI. By involving tutors in the process of creating guidelines, we hope they will take ownership of developing best practices that are grounded in community values and goals.

Conclusion

In developing this education plan, we have aimed to retain what tutors like about tutoring—helping students develop skills and confidence, building relationships, and encouraging student voice. In the spring, we will launch a new survey that continues to gather experiential information but will focus more on gauging tutors’ responses to the activities and levels of preparedness for working with students who use GenAI. Our hope is that this intentional, slow approach, grounded in tutors’ input and in community with each other, will help us develop strategies together that retain our values as educators and writers.

References

Adams, K., & Baker, M. (2023, April 1). E-tool literacy: Working with machine collaborators like ChatGPT (A discussion about what ChatGPT could mean for consultants) [Conference presentation]. New England Writing Center Association Conference, Durham, NH, United States.

Buolamwini, J. (2023). Unmasking AI: My mission to protect what is human in a world of machines. Penguin Random House.

Caines, A. (2023, January 18). Prior to (or instead of) using ChatGPT with your students. Is a Liminal Space. https://autumm.edtech.fm/2023/01/18/prior-to-or-instead-of-using-chatgpt-with-your-students/

D’Agostino, S. (2023, July 10). AI has a language diversity problem. Humans do, too. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/07/10/ai-has-language-diversity-problem  

Furze, L. (2023, January 29). Teaching AI ethics. Leonfurze.com. https://leonfurze.com/2023/01/26/teaching-ai-ethics/

Greenfield, L. (2019). Radical writing center praxis: A paradigm for ethical political engagement. Utah State University Press.

Greenfield, L., & Rowan, K. (Eds.). (2011). Writing centers and the new racism: A call for sustainable dialogue and change. Utah State University Press.

Klee, M. (2023, June 6). She was falsely accused of cheating with AI—and she won’t be the last. Rolling Stone. https://www.rollingstone.com/culture/culture-features/student-accused-ai-cheating-turnitin-1234747351/

Laist, R. (2024). Rethinking writing instruction in the age of AI. CAST Professional Publishing.

Lang, J. M. (2024, February 29). The case for slow-walking our use of generative AI. The Chronicle of Higher Education. https://www.chronicle.com/article/the-case-for-slow-walking-our-use-of-generative-ai

Mollick, E. (2023, June 12). Assigning AI: Seven ways of using AI in class. One Useful Thing. https://www.oneusefulthing.org/p/assigning-ai-seven-ways-of-using

Riyeff, J. (2024, June 27). Generative AI and the problem of (dis)trust. Inside Higher Ed. https://www.insidehighered.com/opinion/views/2024/06/27/gen-ai-and-problem-distrusting-students-opinion

Watkins, M. (2024). Rhetorica. https://marcwatkins.substack.com/

Footnotes
  1. The WFs’ lack of awareness of their students’ GenAI use surprised us because we expected students to feel comfortable sharing this information with their peers. Since we know from the survey results that undergraduates use GenAI in their writing, we theorize that WFs’ lack of awareness could be attributed to WFs being associated with course requirements (i.e., students might worry that their usage would be reported to the professor), though other factors, such as how many questions tutors ask about the writing process during sessions, could also play a role.
  2. We used one of our own pieces of writing as the sample essay to avoid violating the copyright or consent of another writer. 
  3. One WF who was aware of students they met with using GenAI claimed that “most professors allow” it for brainstorming and outlining. We have not found that faculty hold this opinion, but the prevalence of GenAI use for these parts of the writing process suggests that other undergraduates may share this belief.