Isabelle Lundin, Northeastern University
We know writing is a social, emotional, cognitive, and rhetorical activity. But the rise of Artificial Intelligence (AI) and large language models (LLMs) like ChatGPT brings a whole host of new emotions, situations, and complexities that writers and writing teachers and tutors have never seen. For writing center studies and rhetoric and composition, the demand for AI strategies and policies heralds a new era of writing instruction, assessment, and writing center practice, leaving many scholars afraid, worried, angry, and uncomfortable. I argue that examining the root of these emotions and their relationship to our values and practices is important when considering how we contend with AI, and with our emotional reactions to AI and writing, moving forward.
Because the explosion of AI in higher education is still relatively new, very little scholarship attends to the overlap between the social factors driving AI usage in writing and the development of AI literacy. Students’ AI use is never just about AI use: rather, it represents an assemblage of ideology, emotion, educational context, and larger social forces that we, as writing center scholars and practitioners, need to pay attention to. And if students’ AI use is never just about AI use, then it follows that our own opinions and feelings about AI use are never really just about AI, either. Our discomfort, frustration, and worry about AI use reveal a deeper set of ideologies about writing centers, writing, and the academy.
As a peer writing center tutor, I am deeply invested in bringing actual tutor voices to the forefront of writing center research and scholarship, since we are often the ones embodying the field’s values and best practices for visitors to the writing center. To that end, this research explores the role of AI in the writing center based on interviews with peer tutors. This study investigated the following research questions:
- What do writing center tutors see as the role of AI in the writing center?
- What are writing center tutors’ beliefs, assumptions, and perceptions of AI as a writing tool?
- How do writing center tutors want to respond to AI usage in higher education and the writing center?
Overall, tutors are concerned about the future of writing education and writing center practices due to the rise of AI usage, yet they are motivated to facilitate students’ AI literacy and to navigate best practices for using AI for writing tasks. However, tutors expressed discomfort with an overreliance on AI in writing, worried that it can hinder actual student writing development and perpetuate harmful language ideologies. The palpable array of emotions across the interviews reveals that tutors are grappling with what they have been taught and trained to believe about writing versus how AI is changing the landscape of writing center practice and higher education. More broadly, highlighting our tutors’ experiences with AI amplifies the voices of peer tutors within writing center scholarship and demonstrates that writing center scholars and practitioners must interrogate our own assumptions, ideologies, and beliefs about writing and how writing should be taught if we are to usher in the age of AI and writing in ethical, justice-oriented ways.
Literature Review
AI in the Writing Center
In the wake of new AI technologies in academia, scholars have hypothesized what lies ahead in the future of writing centers. For some writing center professionals, the convenience, efficiency, and accuracy (for the most part) of AI and ChatGPT are a harbinger of the demise of writing centers and writing education (Hicks, 2024; Nicolas, 2023; Stowe, 2023). For others, the rise of these technologies presents a new set of affordances and challenges to writing center practice and the teaching of writing; these new challenges invite new tutoring approaches and strategies (Bombaywala, 2024; Coffill, 2023; Deans, 2024; Deans et al., 2023; Esfahani, 2024; MLA-CCCC, 2024).
Yet for scholars like Essid (2023) and Bryan (2023), writing centers are actually perfectly poised to navigate this seismic shift within higher education. The field of writing center studies is no stranger to “weathering technological change” (Essid, 2023, p. 38) and “defining identities for their centers and themselves” (Bryan, 2023, p. 23). In fact, it is our history as both institution and “third space” (Grego & Thompson, 2008), as both peer and administrator, as “academic slum dwellers” (Davis, 1995) and legitimate scholars and researchers, that places leading the charge toward the curation of AI policies and practices squarely within our skill set.
Peer Tutors in Writing Center Scholarship
While tutors’ voices on the problem of AI and the writing center are often heard in the creation of writing center AI resources (see: De Herder & Allen, 2024) or on writing center blogs, such as Another Word and The Dangling Modifier (see: Bombaywala, 2024; Deans et al., 2023), much of the qualitative and quantitative scholarship on writing centers and AI deals with the perspectives of writing center administrators and practitioners, not those of peer tutors. Most of the day-to-day practices that occur in writing centers are facilitated by peer tutors: greeting students, working with writers, answering their questions, and, undoubtedly, coming face-to-face with the myriad ways students are using AI. Therefore, the findings presented here, taken from interview transcripts with peer tutors at my writing center, aim to close this gap in scholarship.
Method
Within writing center studies, there is a rich history of qualitative, quantitative, and mixed methods scholarship that seeks to understand perceptions, beliefs, and phenomena to inform writing center practice (Driscoll & Wynn Perdue, 2012, 2014; Nordstrom, 2015; Mackiewicz & Babcock, 2020). Much qualitative inquiry produces transcripts from interviews, focus groups, or field research, which researchers engage with through an iterative analytical process called coding. Generally, a code or code unit “is most often a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data” (Saldaña, 2021, p. 5). Occurrences of similar codes help the researcher see patterns across multiple transcripts, build a codebook, tease out themes relevant to their research questions, consider the data’s implications, and perhaps even build theories as a result (Babcock, 2020; Messina & Lerner, 2020; Saldaña, 2021).
Building on writing center studies’ familiarity with qualitative research design and coding, this research looks specifically at 10 tutors’ responses to this question: “What do you see as the role of AI in student writing and writing center support?” The 10 tutor interview transcripts analyzed for this study are part of a larger IRB-approved research project at Northeastern University that investigates the experiences of multilingual writers at the writing center.
Our Writing Center
The Northeastern Writing Center (NUWC) is housed in the English Department at Northeastern University. Our center offers both in-person and synchronous online sessions via WCOnline, all of which are 45-minute, one-on-one appointments with a peer tutor. In addition to in-person sessions scheduled in advance, we also offer walk-in sessions, where students without a prescheduled appointment can drop in and work with one of our available tutors.
Tutor Demographics
Tutors for this study were recruited via email and were offered a gift card upon completion of an online interview. Of the ten tutors interviewed, two are master’s-level tutors (one an MA alum, one a current MA student), seven are PhD students, and one is an undergraduate student; four of these tutors identify as multilingual. All the tutors interviewed have experience with both synchronous online and in-person tutoring modalities and have worked at the Northeastern Writing Center for at least two semesters.
Coding and Analysis
I employed an open-coding approach, generally treating each tutor’s answer to the question “What do you see as the role of AI in writing center support?” as a codable unit. The length of these units ranged from one to four sentences. My first coding round focused primarily on describing tutor responses and noting specific ways tutors recounted students using AI. From this first round, I identified common themes, such as the emotions of both tutors and students and tutors’ assessments of AI, and used these themes to refine my list of codes. I then went through the dataset again, applying my new codebook (see Appendix A) to the responses. At the end of this process, I had generated seven codes from 105 total coded units.
Findings
Tutors’ Assessments of AI: Cautiously Optimistic
In their assessment of AI, every single tutor acknowledged that AI is both a helpful tool and a complex issue that raises real concerns about authorship, plagiarism, and writing development, as shown by the relatively even split between positive and negative assessments within the 26 occurrences of the AI Assessment code (12 positive, 14 negative). Tutors noted that, especially for multilingual writers, AI can be indispensable in helping them mediate between multiple languages at once, “because sometimes like if they’re not sure what they’re trying to say, or like obviously writing in another language is like super real, super difficult.” Further, tutors notice that multilingual writers use ChatGPT to bridge gaps in language ability: “a student wants to see how a sentence might be reframed, and the reframing of that sentence is beyond their linguistic capability at that moment. I think that if [ChatGPT] offers them the ability to test that out, not only is it like, how did we learn new words other than reading them for the first time?” Tutors are aware of the ways AI can be a useful teaching tool, and they are hopeful that it can continue to be developed and applied in productive, ethical ways, as reflected in the 12 positive assessments within the AI Assessment code.
Overall, tutors expressed a desire to help students navigate ethical, fair, and appropriate use of AI, illustrating a need for students to develop critical AI literacies. Tutors are aware of the complexity of AI use, especially amid the diversity of professor expectations, disciplinary conventions, and unclear boundaries of fair usage, but some are sure to note that they are “optimistic that there is a way to use AI that is productive and thought-provoking for students.” Tutors try to distinguish uses of AI that fall under the umbrella of plagiarism (“if you were saying that you wrote this and GPT wrote this, like from start to finish, yeah, no, no, no, no. I’ll see you back for the academic integrity essay”), determine when AI use is appropriate (“Because I think that that’s something that students are maybe like struggling with is like when they can use it and like when they shouldn’t use it”), and reckon with their own opinions of AI in the wake of students’ concerns (“It’s really like, it’s really complicated, because like, I am so AI ambivalent”).
Tutors’ Negative Assessments of AI
Tutors were extremely vocal in their negative assessments of AI. Of the 26 occurrences of the AI Assessment code, 14, or more than half, were coded as negative assessments. Tutors let out exasperated groans when I asked them about AI in the writing center: “Oh my god, I hate it…everyone wants to talk about AI.” One tutor labeled herself a “grump about AI” in a disgruntled display of humor. Beyond their affective frustration, an echo across the tutors’ responses was that AI is a “slippery slope” and that using AI in writing can quickly become complicated and unethical without clear guidelines and appropriate instruction.
In their negative assessments of AI, tutors are most concerned with how AI use will affect student writing self-efficacy. They worry that AI use might “discourage multilingual students and other students and students who maybe come in with different abilities to give up on their writing abilities” and that, as one tutor put it, the way “students use it is basically to second guess their own writing and have it do it for them.” Another tutor expressed that they feel AI use is “an adequate coping mechanism though it is not a good nor generative motivator for writing content,” illustrating concern for its effects on student writing confidence and development.
Along those same lines, tutors worry that students using AI as an editing service or grammar checker are conforming to and perpetuating harmful standards of English: “I would also just be wary of encouraging multilingual writers to rely on it because aren’t we then just making them conform to the like colonial forms of writing that ChatGPT is modeled on?” The tutors are attentive to the helpful ways ChatGPT and Grammarly can “alleviate anxiety” surrounding sentence-level mechanics but worry that relying on these tools for grammatical support is not always effective: “if you plug something into chat GPT, there’s no telling that it’ll actually grasp meaning. So you could end up saying something that you didn’t mean.”
Interestingly, only one tutor talked about their own uses of ChatGPT. This tutor, a PhD student, described how they used ChatGPT to help them prepare for their comprehensive exams. They plugged their reading list into ChatGPT and would have a conversation with the AI bot about all the readings: “And when it got stuff wrong, I would correct it, but it was helping me learn and be more familiar with the readings at the same time…it’s helping me think through it critically because it’s giving some sort of feedback, even if it’s not good feedback.”
Despite their wistfully hopeful claims that there are productive and thought-provoking ways of using AI in academic writing, the other nine tutors reported that they do not use AI at all in their own writing practice.
Discussion
The theme across all the tutor responses to the AI question is genuine concern and worry. Tutors are aware of students’ anxiety around academic writing and how that can influence students’ decision to turn to AI. They are also keenly aware of the potential damage a reliance on AI can wreak on students’ writing, critiquing AI’s rhetorical sensibilities, ethics, and role in perpetuating harmful language standards. They also provide commentary on the rhetoric of shame and laziness that accompanies discussions around AI use, suggesting that even the choice to use AI comes with a certain kind of baggage, echoing my statement at the start of this article that students’ AI use is never really just about AI use.
On a general level, the tutors’ discussion of AI in the writing center in this dataset is consistent with the findings of Lindberg et al. (2023). Their first report on AI chatbots in writing centers detailed the results of a survey distributed across writing center professional listservs. While their participants were mostly writing center administrators, their major findings are consistent with our tutors’ observations. Like our tutors, their participants generally do not use AI themselves, have mostly negative attitudes towards AI in the writing center, and notice that students’ usage of AI is diverse and varied (Lindberg et al., 2023).
However, amid tutors’ concern and worry are genuine efforts to facilitate students’ AI literacy. Our tutors’ responses echo Cardon et al.’s (2023) construction of AI literacy, which “involves four sets of capabilities: application, authenticity, accountability, and agency” (p. 277). Tutors want the writing center to be a place where students can ask questions about how and when to use AI while still encouraging critical thinking, collaboration, and the development of students’ writing skills and processes, consistent with how many writing centers have positioned themselves following the advent of ChatGPT (Bombaywala, 2024; Coffill, 2023; Deans, 2024; Deans et al., 2023; Esfahani, 2024; Hicks, 2024; MLA-CCCC, 2024). Our tutors feel that they, like writing centers writ large, “must remain adaptable, innovative, and committed to their core mission of fostering critical thinking and effective communication, even as the means of achieving these goals evolve” (Buck, 2024). In many ways, the glimmers of optimism and hope in tutors’ responses amplify the significance of the “metacognitive questioning, active listening, and principles of fair use” so characteristic of writing center practice, affordances that, as Essid (2023) argues, “lie beyond the reach of generative AI” (p. 38).
The tutors noted that the choice to use AI for writing is, too often, related to assimilating to the conventions and tone of a native English speaker, contrary to the disciplinary move towards linguistic justice within writing center studies. Students’ anxiety about grammar and mechanics seems to drive their usage of Grammarly and ChatGPT as tools to “check” their writing. This, of course, sets off alarm bells for our tutors: our training program and the mission of our writing center (and of writing centers writ large) attempt to decentralize grammar as much as possible to squash the myth of “right” and “wrong” English and writing. Tools like Grammarly and ChatGPT essentially do what we have been trained not to do. This reveals a tension between writing center ideology and the actual needs of student writers: if writing center tutors aren’t supposed to check grammar, then, by that logic, ChatGPT shouldn’t either.
Here, tutors’ discomfort with students’ reliance on AI, and with its invitations for students, as one tutor vehemently asserts, to “conform to the colonial standards that ChatGPT is modeled on,” illustrates that tutors see themselves at an ideological crossroads: despite writing center pedagogy and tutor training models holding firm to linguistic justice, affirming all forms of English and multilingualism, and de-centralizing grammar in sessions, the advent of ChatGPT puts much of what writing center tutors know and believe into precarity. Further, a reliance on AI for correctness and efficiency positions students as “tools in an economic machine” to graduate and join the workforce, which in turn positions colleges and universities, and writing centers by extension, as “exclusively vocational” (Stowe, 2023). Additionally, the rise of AI is a harbinger of a larger fear in writing studies: we are already a precarious and undervalued field in so many ways, and the proliferation of AI only exacerbates this race to the bottom of our profession.
Though the concerns about AI held by our tutors and the field writ large are certainly valid, it’s important to consider how much of our writing center tutors’ frustration with AI stems from the fact that “it was never a part of the longstanding writing and ‘student-ing’ success that motivated and qualified them to be a writing tutor in the first place” (Deans, 2023). That quote resonates with me personally: perhaps our distrust and grumbliness about AI also come from some kind of high horse entrenched in our own histories as students. In the Northeastern Writing Center, many of us tutors are English majors and, for all intents and purposes, would be considered “good” writers and are “good” writers without having to use AI. All of us have gotten this far in academia and figured it out without AI; shouldn’t this generation of students have to as well?
Our tutors clearly favor the writing center over AI or ChatGPT as a resource for student writing development. This bias is expected considering where their paycheck comes from, but perhaps it also reflects tutors’ larger beliefs about how writing skills should be developed. We can see the value of AI tools as bridges or mediators between students and writing in the academy, but only for a time. One tutor acknowledges that Grammarly is a “fine tool,” but a “thing that you should grow out of.” Another tutor states that AI is “an adequate coping mechanism, though it is not a good nor generative motivator for writing content.” Despite admitting that “AI is always going to be there now,” our tutors, like so many of the students they describe, yearn for a clear demarcation of when AI should and shouldn’t be used. While many students and professors try to delineate best AI practices according to various contexts (when and how to use it), it seems tutors want to draw the line on AI use at the point where a writer has reached a certain skill level.
Could this be a place of privilege? The desire for students to “grow out of” AI use, to make clearer distinctions between what is plagiarism and what isn’t, and to have firmer boundaries about when and when not to use AI demonstrates that tutors (including myself!) cling to a pre-ChatGPT era or an idealized notion of academia that was never really true. Even when acknowledging AI’s “colonial” standards of language and expressing worry about writerly self-efficacy and development, particularly for multilingual writers, tutors do so with a conviction that writing centers are the antidote to all plagues of standard academic English, a pair of rose-colored glasses that may fail to see the bigger picture of these students’ experiences. Perhaps even the choice to use AI, like choosing to come to the writing center, is “raced, classed, gendered, and shaped by linguistic hierarchies” (Salem, 2016, p. 161), a reality that even a well-meaning writing center linguistic justice statement or multilingual framework can push into the periphery.
It would do us good to remember that writers are not “brains on sticks” (Hrach, 2021); rather, they are selves with bodies, bodies with selves. Thus, the body, in both its cognitive and affective dimensions, influences the act of writing, a process of knowledge making and meaning making, because “To know as a body is to feel. Simply put, emotional expressiveness deserves our explicit attention for its embodied entanglement with meaning and the ways it inflects our writing and learning encounters” (Wenger, 2011, p. 51). Similarly, Hardcastle (2003) states that emotion is “the ‘core’ around which we structure ourselves and the world” (p. 43). The crucial limitation of AI is that it does not have a body and does not feel in the way we do. While scholars like Essid (2023) and Bryan (2023) argue that this is to writing centers’ advantage, institutional and writing center AI policies rarely, if ever, address the emotional nuance that comes with how one decides to use AI, how one actually uses AI, and what that decision means to them as a learner and writer. Even scarcer within AI policies is the emotional balancing that our writing center tutors must also navigate when faced with AI in the writing center.
For writing centers to truly lead the charge in facilitating students’ AI literacy within higher education, we must hold fast to what we know about writing development and the role writing centers can play in it. Writing centers, through one-on-one peer tutoring, can work to mitigate writing anxiety (Martinez et al., 2011; Mitchell et al., 2015; Trossett et al., 2019), increase writing self-efficacy (Mitchell et al., 2015; Lundin et al., 2023), and support rhetorical and genre awareness (Bleakney & Pittock, 2019). All of these are factors directly related to the emotional components of writing development. Because “we use our emotions cognitively” (Hardcastle, 2003, p. 43), factoring in the emotional nuance involved with AI usage is a crucial step that writing centers must consider in the development of policy and practice. To advocate for “better writers, not better writing” (North, 1984) in writing centers is also to advocate for holistic writing development, directly opposed to the ultra-efficient and correct communication sponsored by AI, language standards, and economic mobility.
However, this also means that writing center scholars and practitioners must continue to interrogate and refine our own assumptions about writing and teaching. Yearning for the pre-ChatGPT days of teaching writing out of frustration when we see AI-generated papers passed off as originals ignores the real issues student writers are actually having. While concern for multilingual writers using ChatGPT as a “crutch” for grammar and language correctness might be justified, given writing centers’ positions on linguistic justice, this concern raises more questions than it answers: If grammar or language issues aren’t meant to be checked by AI or discussed with a writing center tutor, what does that say about the language ideology of writing centers writ large? Are we actually committed to helping students develop their writing skills and navigate writing in the academy, or are we merely committed to perpetuating a certain kind of writing development? In her powerful narrative of counterstory vignettes, Martinez (2016) writes,
It is then the responsibility of the Writing Center, not to liberate underserved students, but to recognize its own complicity within the colonial functioning of the academy, to reflect on these colonial tendencies, and to build resistance and space with underserved students through coalitional practices that centralize the narratives of marginalized students as crucial to best serving their needs in this space. (p. 60)
As writing center practitioners, we must ask ourselves if desiring students to “grow out of” using AI for extra writing help is an instance of our continued “complicity with the colonial functioning of the academy” (Martinez, 2016, p. 60). Just like students’ unique writing identities and histories, their choice to use AI is impacted by a variety of social and emotional factors—nuances that should not be overlooked in a simple rejection of AI. We must reflexively consider the assumptions rooted in our own emotions and opinions of AI as well to truly usher writing center practice into the era of AI.
Conclusion
As we move into an even more distinctly posthuman era within writing studies because of the rise of generative AI, these tutors remind us that the messiness of human emotion is not separate from the ways we work with students who may or may not use AI; rather, emotional experiences are inherently entangled, enmeshed, and entrenched in our evolving AI literacy. If writing is a vulnerable, emotional, and embodied activity, these comments suggest that the same can be said about tutoring writing and using AI in higher education.
References
Babcock, R. D. (2020). Grounded theory: Explanation and possibilities. In J. Mackiewicz & R. D. Babcock (Eds.), Theories and methods of writing center studies (pp. 109-117). Routledge.
Bombaywala, R. (2024, April 22). A.I.’s role in the writing center. The Dangling Modifier. https://danglingmodifier.psu.edu/category/tutoring-stories-and-advice/
Bleakney, J., & Pittock, S. P. (2019). Tutor talk. Writing Center Journal, 37(2), 127–160. https://www.jstor.org/stable/26922020
Bryan, M. D. (2023). Bringing AI to the center: What historical writing center software discourse can teach us about responses to artificial intelligence-based writing tools. In C. D. M. Andrews, C. Chen, & L. Wilkes (Eds.), The Proceedings of the Annual Computers and Writing Conference 2023 (pp. 15-26). WAC Clearinghouse. https://doi.org/10.37514/PCW-B.2024.2296.2.02
Buck, I. (2024, September 5). Beyond the hype: Writing centers and the AI revolution in higher education. The Digital Rhetoric Collaborative. https://www.digitalrhetoriccollaborative.org/2024/09/05/beyond-the-hype-writing-centers-and-the-ai-revolution-in-higher-education/
Cardon, P., Fleischmann, C., Aritz, J., Logemann, M., & Heidewald, J. (2023). The challenges and opportunities of AI-assisted writing: Developing AI literacy for the AI age. Business and Professional Communication Quarterly, 86(3), 257-295. https://doi.org/10.1177/23294906231176517
Coffill, M. (2023, October 12). Writing center consultants at forefront of AI tools to assist their peers. GV Next. https://www.gvsu.edu/gvnext/2023/writing-center-consultants-at-forefront-of-ai-tools-to-assist-their-peers.htm
Davis, K. (1995). Life outside the boundary. The Writing Lab Newsletter, 20(2), 5-7.
Deans, T. (2024, September 5). AI (kind of) in the writing center. The Digital Rhetoric Collaborative. https://www.digitalrhetoriccollaborative.org/2024/09/05/ai-kind-of-in-the-writing-center/
Deans, T., Praver, N., & Solod, A. (2023). AI in the writing center: Small steps and scenarios. Another Word. https://dept.writing.wisc.edu/blog/ai-wc/
De Herder, W., & Allen, S. (2024). Using LLMs in writing center sessions: A quick guide. Saginaw Valley State Writing Center. https://www.svsu.edu/media/writingcenter/Using%20LLMs%20in%20Writing%20Center%20Sessions.pdf
Driscoll, D., & Wynn Perdue, S. (2012). Theory, lore and more: An analysis of RAD research in “The Writing Center Journal,” 1980-2009. The Writing Center Journal, 32(2), 11-39. https://www.jstor.org/stable/43442391
Driscoll, D., & Wynn Perdue, S. (2014). RAD research as a framework for writing center inquiry: Survey and interview data on writing center administrators’ beliefs about research and research practices. The Writing Center Journal, 34(1), 105-133. https://www.jstor.org/stable/43444149
Esfahani, M. N. (2024). The changing nature of writing centers in the era of ChatGPT. International Journal of Scientific Research and Management, 12(8), 1362-1370. https://doi.org/10.18535/ijsrm/v12i08.ec01
Essid, J. (2023). Writing centers & the dark warehouse university: Generative AI, three human advantages. Interdisciplinary Journal of Leadership Studies, 2(3), 38-53. https://scholarship.richmond.edu/ijls/vol2/iss2/3
Grego, R. C., & Thompson, N. (2008). Institutional critique and studio as thirdspace. In Teaching/writing in thirdspaces: The studio approach (pp. 59-96). Southern Illinois University Press.
Hardcastle, V. G. (2003). The development of the self. In G. D. Fireman, T. E. McVay, & O. J. Flanagan (Eds.). Narrative and consciousness: Literature, psychology and the brain (pp. 37-50). Oxford UP.
Hicks, M. (2024, August 12). College writing centers worry AI could replace them. EdSurge. https://www.edsurge.com/news/2024-08-12-college-writing-centers-worry-ai-could-replace-them
Hrach, S. (2021). Minding bodies: How physical space, sensation, and movement affect learning. West Virginia University Press.
Lindberg, N., Domingues, A., Zweers, T., & Goktas, S. (2023). Report on AI chatbots’ impact on writing centers. ResearchGate. https://www.researchgate.net/publication/380264917_Report_on_AI_Chatbots%27_Impact_on_Writing_Centers
Mackiewicz, J., & Babcock, R. D. (2020). Introduction to the collection. In J. Mackiewicz & R. D. Babcock (Eds.), Theories and methods of writing center studies (pp. 1-8). Routledge.
Martinez, A. Y. (2016). Alejandra writes a book: A critical race counterstory about writing, identity, and being Chicanx in the academy. Praxis: A Writing Center Journal, 14(1), 56-61. https://www.praxisuwc.com/martinez-141
Martinez, C. T., Kock, N., & Cass, J. (2011). Pain and pleasure in short essay writing: Factors predicting university students’ writing anxiety and writing self-efficacy. Journal of Adolescent & Adult Literacy, 54(5), 351–360. https://www.jstor.org/stable/41038868
Messina, C. M., & Lerner, N. (2020). Mixed-methods research in writing centers. In J. Mackiewicz & R. D. Babcock (Eds.), Theories and methods of writing center studies (pp. 208-218). Routledge.
Mitchell, K. M., McMillan, D. E., & Rabbani, R. (2019). An exploration of writing self-efficacy and writing self-regulatory behaviors in undergraduate writing. Canadian Journal for the Scholarship of Teaching and Learning, 10(2), art. 8. https://doi.org/10.5206/cjsotl-rcacea.2019.2.8175
MLA-CCCC. (2024). MLA-CCCC joint task force on writing and AI: About. Knowledge Commons. https://aiandwriting.hcommons.org
Nordstrom, G. (2015). Practitioner inquiry: Articulating a model for RAD research in the writing center. Writing Center Journal, 35(1), 87–116. https://doi.org/10.7771/2832-9414.1799
North, S. (1984). The idea of a writing center. College English, 46(5), 433–446. http://www.jstor.org/stable/377047
Nicolas, M. (2023, November 14). Eliminate the required first-year writing course. Inside Higher Ed. https://www.insidehighered.com/opinion/views/2023/11/14/eliminate-required-first-year-writing-course-opinion
Salem, L. (2016). Decisions, decisions…Who chooses to use the writing center? The Writing Center Journal, 35(2), 147-171. https://www.jstor.org/stable/43824060
Saldaña, J. (2021). The coding manual for qualitative researchers. SAGE Publications.
Stowe, G. (2023, September 26). September 2023: Where are we now with AI in writing centers? Connecting Writing Centers Across Borders. https://wlnconnect.org/2024/03/27/rewind-reset-where-are-we-now-with-ai-and-writing-centers/
Wenger, C. I. (2011). “Feeling lore”: The “problem” of emotion in the practice of teaching. English Teaching: Practice and Critique, 10(3), 45-49. http://education.waikato.ac.nz/research/files/etpc/files/2011v10n3art3.pdf