A Future for Writing Centers? Generative AI and What Students are Saying

Joe Essid, University of Richmond
Cady Cummins, University of Richmond

Abstract

Large language models continue to evolve at a far faster pace than policies at colleges and universities. Writing instruction and peer-tutoring, in consequence, will have to change faster still. In six months of testing by the researchers, ChatGPT began to produce prose with ever greater clarity, analysis, and varied (if often formulaic) stylistic choices. At the same time, all AIs tested struggled with copyrighted materials, sometimes refusing to employ them or quoting sources while claiming not to have done so. The authors include preliminary suggestions for those who staff and direct writing centers, specifically methods for adopting generative AI rather than flatly opposing it. We draw from student responses to a campus survey administered in 2023 and 2024, plus one partnership between AI and sixteen first-year students. Such adaptation to AI may prove particularly useful for those helping writers otherwise marginalized by socioeconomic background, neurodiversity, or personal identity. Finally, we advocate getting ahead of any administrative efforts to dictate terms for use of AI that may lead to reduced status, or outright elimination, of human tutors.

Keywords: Generative AI, LLMs, pedagogy, prompt-engineering, praxis, drafts, working conditions, neoliberalism, employment

The AI Invasion of 2022

When OpenAI’s large language model (henceforth, LLM) ChatGPT debuted in late 2022, it generated opinion pieces decrying the end of many things, including the college essay (Marche, 2022). More recent jeremiads claim that generative AI (henceforth, AI) has the potential to wreck our current model of higher education (Kirschenbaum & Raley, 2024), even “giving up on education, not advancing it” (Warner, 2024).

At times, these prognosticators cited misunderstood events about Facebook (now Meta) chatbots being hastily unplugged after the AIs invented a private language. In reality, the bots’ output had simply degenerated into gibberish (Kucera, 2017). Other commenters invoked HAL 9000, the charmingly wicked AI from the 1968 film and novel 2001: A Space Odyssey. In debates on social media, images of Dr. David Bowman flooded feeds, as HAL refused Dave entry to his own spacecraft. In an echo of Melville’s Bartleby, HAL flatly informed the academic/astronaut, “I’m sorry, Dave, I’m afraid I can’t do that.”

Amid hysteria, voices like Ian Bogost’s (2022) seemed both measured and contrarian. Known for his scholarship about online communities and gaming, Bogost predicted that “any responses [AI] generates are likely to be shallow and lacking in depth and insight” (2022). While recognizing ethical and pedagogical concerns, he felt that LLMs are “less about persuasive writing and more about superb bullshitting,” resembling cases where a writer “tries to convert the skim of a Wikipedia article into a case of definitive expertise” (Bogost, 2022). This dismissal soon got put to the test. OpenAI, Anthropic, Alphabet/Google, Meta, and Microsoft rushed more advanced LLMs to market. The pace of change became frenetic, the future unfolding as we typed. 

When confronting campus alarmists, we might point to a “tech panic cycle” for several innovations before AI, all ending in a “point of practicality” (Grady & Castro, 2023). That point may be near for AI. By the time we finished a draft of this article in mid-2023, every Google document included the option “help me write.” By the time we began revisions a few months later, Google’s Gemini AI was firmly embedded into searches and Workspace files.

Historically, a deliberative process and deliberate pace typify academic work, in contrast to the mania of news outlets covering AI. Both approaches may miss the mark equally. Even as peer-reviewed journals announced special issues on AI (Blair, 2023), the “superb BS” of AI quickly improved until ChatGPT-4 “mastered our language, and for a fee, it is extremely available for questioning” (Andersen, 2023a). Our research seconds Andersen’s findings. We surveyed students in the spring semesters of 2023 and 2024 to assess their attitudes about AI and their instructors’ policies. We also used different LLMs to test two assignment prompts. Aside from a few caveats we will discuss, AI employed a vocabulary more nuanced and a voice more stylistically consistent than that of most first-year students at our private, selective institution. With well-engineered prompts, LLMs also skirted some copyright protections. Yes, Dr. Bowman, we’re afraid generative AI can do that.

Could it replace human tutors? We don’t know where the evolution of LLMs will end, but we do know that AI could serve both as accelerant and justification for other changes already happening to writing centers. In particular, we are troubled by accounts on mailing lists and at conferences of centers downsized, their pedagogy diluted, even programs shuttered. After a Cornell study indicated a 60% drop in usage among their students from fall 2022 to spring 2024, a follow-on survey of other Writing Center Administrators (henceforth, WCAs) revealed that 21 of 67 respondents reported a decline in visits since fall 2022, with 12 centers experiencing “drops between 31% and 80%” (Lindberg & Domingues, 2024, p. 10). At the European Writing Centers Association (EWCA) conference in 2024, panelists from the Technical University of Munich revealed a similar slide (Wellershausen et al., 2024). We followed up for more details. After a rebound following the pandemic, appointments fell from 1510, about half their pre-COVID peak, to “a considerable drop to 977 at the end of the 2023/24 winter semester, and at the end of the summer semester this year, the total amount of appointments decreased some more” (N. Wellershausen, personal communication, December 9, 2025). The Center has not empirically investigated the reasons for “the sharp and continuous decrease but we assume it is a combination of COVID and AI” (N. Wellershausen, personal communication, December 9, 2025). Other colleagues at EWCA 2024 spoke of senior administrators in the EU and US claiming that LLMs can do much of the work of human tutors, thus making writing centers redundant.

Just before AI emerged into our working lives, Zhang (2022) noted how “the ‘deaths’ of writing centers are largely underexplored, and research that specifically examines writing center closing is rare.” For now, perhaps, but numbers like those from Lindberg & Domingues (2024) argue that we need more data from writing centers, globally. One precedent includes what happened to a former IWCA President when a newly promoted senior administrator wished to outsource all peer tutoring to a private, for-profit firm (Grogan, 2020). Given such neoliberal threats to our work, we could respond as some colleagues have done, by rejecting AI outright. Instead, we contend that writing centers must take the lead in developing pedagogically fruitful methods for employing LLMs, thus jumping ahead of executive fiat from senior administration. Otherwise we fear a “Dark Warehouse” university, with automation replacing human jobs as surely as bots are replacing workers in Amazon warehouses (Essid, 2024). Others at the EWCA conference spoke of writing centers morphing into something unrecognizable, managed by corporate-style systems of assessment and accountability. Instead of a dark warehouse, we’d have a brightly lit but pedagogically compromised service unit where workers labor under panoptic scrutiny. As we’ll explain in the conclusion, that may be the fate of our own center.

Our survey results may help others avoid such a fate by noting ways to use this innovation ethically. We argue that human tutors can leverage the power of AI in planning and revising work, even engaging metacognitive strategies in the process. Moreover, by using AI mindfully, writing centers may help to level the playing field for students marginalized by identity, neurodivergence, or socioeconomics. Though we must beware of falling into a false narrative about egalitarianism that has long haunted campus computing (Romano, 1993), today we are not talking about the 1990s digital divide of desktop computing. Back then, a student unable to afford the hardware could not fully participate in some classes. For AI at least, access is now as close as a Gemini-enabled Google search or Workspace document on a student’s phone, for free (if at the cost of one’s privacy).

In making our case, we must argue for the advantages humans possess. Every day, we who work at writing centers assist students struggling to integrate their voices with those of others. We understand how to ask good questions while providing empathetic, insightful responses. That history, stretching back more than half a century, gives our feedback an advantage, as AI gets embedded in the near-peer relationship between writer and tutor. We both agree that bleakness may be unwarranted if we use AI in canny ways that draw upon decades of writing-center pedagogy. Specifically, we seek a humanist, rather than neoliberal, approach that leaves agency in the hands of a writer, not AI or senior administrators who never teach.

The Authors And AI

We come from very different generations and our perspectives on AI differ significantly. Student co-author Cady Cummins, from Generation Z, did not think that AIs would change her peers’ lives greatly, believing the hype over the technology to be unwarranted. Now, Cady sees AI as integral to preparing for careers after college. Faculty author Joe Essid grew up without the Internet, punching IBM cards for mainframes in the 1980s. Joe became an early adopter of the Internet and a website designer in the mid-1990s, before Cady was born. He also identifies with the downbeat economic experiences and concurrent skepticism of Generation X: Money is involved, likely trillions of dollars. Of course bosses will replace us with machines, the sooner the better, because machines don’t join unions or need sick days.

We both recognize that every communications tool will integrate AI, perhaps heralding the emergence of a cyborg consciousness (Haraway, 1991). Haraway today favors a partnership between humans and AI, though she satirizes industry goals for a next step in evolution as “white male phallic masturbation” (2023). Whatever the hyperbole from CEOs and transhumanists, our students are already using this innovation. 

While we should debate the propriety and implications of certain metaphors when describing AI (Anderson, 2023), we prefer to table such theorizing in order to get ahead of discussions by those who might replace human labor with generative AI. That bad outcome seems to us as dangerous to our struggles for social justice as anything else we debate on campus. We view outflanking power-brokers as doubly urgent in a time of college closures, mergers, and program reductions. At many schools, AI may provide leadership with a purportedly inclusive tool that actually aligns writing instruction and centers with the race-to-the-bottom logic of transnational neoliberal capitalism and its metric of success, one that marginalizes nonwhite, less-affluent students (Bazaldua et al., 2024).

How, then, might we respond as more writers begin to bring AI-generated drafts to conferences at the writing center? We begin by looking backward.

Revisiting the Fears and Promises of the 1990s

Before the Internet became ubiquitous in our classrooms, some scholars welcomed it with caveats. Moran (1991), using Daedalus Interchange for real-time electronic conferences, conceded that “the screen environment, as one composes, makes writing exciting and active” (p. 55). Yet while watching frantically typing participants, he asked “why write if there’s no one reading?” (Moran, 1991, p. 52). Others made cases for chat software to facilitate online discussion of difficult texts (Essid, 1992), for helping students marginalized in face-to-face discussion by disability or difficulties with public speaking (Langston & Batson, 1990), and for including those previously shut out by age and economic status (Spitzer, 1989).

While writing centers largely welcomed the opportunities provided by word-processing software and networked computers, scholars like Grimm (1995) advised first studying the history of centers before going wherever new technology might lead our pedagogy and policies. Some faculty and tutors mistrusted the “faceless nature of technological communication” and resisted the development of Online Writing Labs (OWLs) out of “philosophical and ethical viewpoints of those. . .who view technology as being at odds with humanism” (Nelson & Wambeam, 1995, p. 138).

Partly in answer to such concerns, Blythe (1997) recommended developing an intellectually rigorous praxis, one alert to how new tech empowers or alienates those who work in or visit our centers. We see these sentiments echoed today in materials such as guidelines for teaching writing with AI from Elon University’s Center for Writing Excellence (2024). Any such praxis with AI today must consider how, historically, new technologies often disadvantage less affluent students (C. L. Selfe, 1992). A consensus opinion emerged at the time that pedagogy must shape the use of the tool, rather than having the tool determine the pedagogy. This could mean “a careful and critical inclusion of computer technologies into writing centers and writing-center planning” (R. Selfe, 1995) and a nuanced development of online-only options such as MUDs, MOOs, and chat rooms for virtual tutoring (Hobson, 1998). We find this mediated approach again salutary, given the largely uncritical marketing of AI from industry.

We should also note that the Internet of the 90s and early 2000s neither led to utopia nor put our institutions out of business. From early experiments instead came today’s campus networks, supporting multiple types of devices and operating systems. Likewise, we have deployed robust and effective course-management systems. These established technologies provide a predictable, even boring, experience to those who recall the early online frontier, yet their very existence shows how completely pedagogy and institutions evolve. The authors propose that we make similar, even mundane, adaptations to LLMs but on our own terms, before they get imposed on us by administrators who never teach. We need equitable data access on campuses as much as we need physical teaching spaces. Our Internet use created a dependence that scholars have long noted, even before the ubiquity of smartphones and WiFi (Gergen, 1991; Turkle, 2011).

Facing these facts, we advocate an approach like that of Barry Maid’s (1995) technorhetoricians in the first era of campus networking, trusting that “the young, free to act on their own initiative, can lead their elders in the direction of the unknown” (Mead, qtd. in Strickland, 1991, p. 313). Even without AI, students and faculty already exploit new opportunities, such as collaborating on Google-Workspaces documents. These habits of composition broaden audience to classmates. What if we extended this collaboration to AI instead of banning it, as 31% of faculty did, according to the students who responded to our 2023 survey?

What Students Are Saying And “A New Type of Schoolwork”

Here, we build upon ideas from the first of three MLA-CCCC position papers (2023). This report revealed several dangers for students, faculty, and peer-tutors. These included but were not limited to a loss of confidence by students that studying language and writing holds value, increased inequity as some students and schools lag behind more affluent peers, an inability of institutions to keep up with the pace of change, larger class sizes as AI offers perceived efficiencies previously impossible, and changes in pedagogy forced on teachers without “adequate time, training, or compensation for their labors” (MLA-CCCC, 2023, p. 7). Two subsequent white papers deepened the discussion considerably.

As compared to how we academics do business, AI development follows the Silicon Valley mantra of Mark Zuckerberg to “move fast and break things” (“CEO Zuckerberg,” 2012). To get a sense of change from those who move fastest on campus, we administered an IRB-approved survey to our student body. We received 112 anonymous responses in 2023, another 74 in 2024. We asked about the ethics of AI usage, our Honor Code, and campus policies. We focus on a few representative examples that eloquently express common responses. Results indicate that students were early adopters while, at least as they perceive it, faculty remained laggards. Let’s look in more detail at where this gap emerged, based upon the two years of data from an undergraduate student body of approximately 3200 students. While two years do not suffice for spotting trends, our findings suggest a few patterns for writing centers to watch.

The number of students who have used AI jumped from 69.4% in 2023 to 91% in 2024; note, too, that the only AI reported earlier was ChatGPT 3.5 whereas, in the second year, students named eight AIs. Also in 2024, 64.2% of respondents indicated that they had employed LLMs in some manner for a writing assignment as compared to 31.2% a year earlier. For the second survey, we also asked more specific questions about how students employ AI:

    • 58.1% to assess work already drafted
    • 48.8% to create a draft for ideas used to structure a submitted draft
    • 46.3% to create an outline for a project
    • 38.8% to help understand a professor’s prompt
    • 32.6% to create a draft used for helping with vocabulary and style in a submitted draft
    • 34.3% to research background information
    • 20.9% to create a draft for helping better incorporate sources in submitted draft

Despite these nuances, faculty discussion on campus continues to focus on plagiarism. However, our results show that only 4.7% of respondents used AI to create a draft submitted for ungraded feedback and 2.3% (one respondent) for a grade. 

A larger concern for colleagues should be how students have adopted AI more rapidly than have faculty. A blanket “no AI” policy, encountered by 31.1% of respondents in 2024, does not align with the practices of a majority of students, who used the technology not to cheat but to brainstorm or to assess work already written. As for instructors with more flexible policies, in 2024, 21.6% of students reported that at least one faculty member had mandated AI in some manner (for writing or other work), ten percentage points more than in 2023 but still behind the rate of student uptake. Of these respondents, 16.2% were permitted to use AI when writing a draft.

Nearly 76% of respondents reported that faculty discussed specific AI policies in at least one class in 2024. Awareness does not mean guidance, however; nearly 45% of our 2024 respondents took at least one class in which AI went completely unmentioned, in spite of a campus directive that all faculty place a policy in their syllabi.

We hope that future surveys can track emerging trends, especially as LLMs evolve and become as integral to our writing tools as spellcheck. A few respondents noted clever uses of AI, such as having one AI check the accuracy of another AI’s output. Both years, several respondents discussed the failures of work assigned to them, such as “when I use GPT it’s because the assignment is so repetitive that I don’t want to do it. If it weren’t repetitive and mindless, GPT wouldn’t be very useful.” Another felt that “ChatGPT acts as an extension to the human mind and makes obsolete many tedious aspects” of assignments; a third agreed, saying, “people can outsource some mundane work to [free] up their time to focus on more complex things.” A fourth noted other tools that students are required to use already, asking “How is this resource any different from resources like Chegg, Quizlet Answers, and StackExchange? Some students will learn with it, some will cheat with it, but they are only cheating themselves.”

Others cited equity issues, believing that AI helps because “it is an online resource available to all,” and “I worry that in an effort to ban the AI, some writers who are not as strong, like students whose first language is not English, could be unfairly punished.” Most vividly, a respondent complained, “Hell these affluent students get exclusive help all the time, why can’t it be more accessible?” In the fall of 2024, a student group, noting the relative paucity of help given by Grammarly’s free product, petitioned the Faculty Senate to support access to Grammarly Premium for all students; the Senate voiced support for the request.

So what might a future of writing with AI look like? Looking beyond local responses, one way forward from a culture of “busywork” and inequities may be to employ AI as a multiple-step draft builder. Columbia undergraduate Owen Kichizo Terry (2023) described it:

[Y]ou have the AI walk you through the writing process step by step. You tell the algorithm what your topic is and ask for a central claim, then have it give you an outline to argue this claim. Depending on the topic, you might even be able to have it write each paragraph the outline calls for, one by one, then rewrite them yourself to make them flow better.
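Rendered in code, Terry’s walk-through amounts to a staged pipeline of prompts. The sketch below is our paraphrase of his stages, not a tool he describes; the ask helper is a hypothetical stand-in for whatever chat interface a student might use:

# Terry's (2023) walk-through as a staged sequence of prompts (our paraphrase).
def ask(prompt: str) -> str:
    """Hypothetical stand-in for a call to any LLM chat interface."""
    raise NotImplementedError("wire this to the chat tool of your choice")

def terry_pipeline(topic: str) -> list[str]:
    # 1. Tell the algorithm the topic and ask for a central claim.
    claim = ask(f"My topic is {topic}. Suggest a central claim.")
    # 2. Have it give an outline to argue this claim.
    outline = ask(f"Give me an outline arguing this claim: {claim}")
    # 3. Have it write each paragraph the outline calls for, one by one.
    paragraphs = [ask(f"Write the paragraph for this outline point: {point}")
                  for point in outline.splitlines() if point.strip()]
    # 4. The last step stays human: the student rewrites each paragraph for flow.
    return paragraphs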

While it lies beyond the scope of our study to determine if Terry’s method constitutes learning as we currently understand it, we agree with one survey respondent that AI “can end tedious busy work and actually force professors to give us real work for a change.”

That “real work” could map onto good pedagogy and not what a cynical respondent feared, that “Students will have more time for unhealthy habits, such as excessive partying.” To avoid this, Terry forecasts a shift to “AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.” As we consider what such work might involve for writing center staff, we provide examples from testing in early 2023.

AIs Write Two Essays: Lessons for Tutors

Getting students to partner with AI reflectively, analyzing its stylistic tics while drafting their own work, may prove far more productive than AI bans. A turn to stylistic analysis, one focused on repetition and generalization, yields a fruitful method for working with AI-copiloted drafts. We began testing these ideas with OpenAI’s ChatGPT 3.5, using the prompt “Analyze why Lady Jessica in Frank Herbert’s Dune fails as a feminist hero, using specific examples.” Save for one change specifying word count and requesting specific textual support, we did not change the prompt for subsequent requests nor ask the AI to revise its own work. We wished to compare the output to the first efforts of a human and to see whether the responses would evolve over time. From an early draft:

In Frank Herbert’s Dune, Lady Jessica is a complex character who has both feminist and non-feminist traits. On one hand, she is a strong and independent woman who defies the strict gender roles of her society by choosing to become a Reverend Mother, a position traditionally reserved for men. . . .

However, Lady Jessica’s actions and motivations are not always consistent with feminist ideals. For example, she is a concubine to Duke Leto Atreides and bears his son, Paul, . . .

This first response could, as Bogost (2022) feared, come right from Wikipedia. Readers who know the book learn nothing new when the AI writes “she is a strong and independent woman who defies the strict gender roles of her society . . . .However, Lady Jessica’s actions and motivations are not always consistent with feminist ideals.” The response contained a major factual error, as well: the Reverend Mothers of the Bene Gesserit were always women in Herbert’s otherwise patriarchal society. Of use to tutors here: the AI failed to support its claims with direct evidence, beyond generally mentioning events and characters. We also witnessed how an AI synthesizes information; one phrase it used, “questionable motives,” appeared in a Quora discussion as well as in Amazon customer reviews of the sequel Children of Dune.

A second draft with the same prompt provided a stronger summary but not what the authors would call critical analysis. For round three, with a word count and a request for direct evidence added, output improved slightly, with more specific examples. Still, the AI cited the wrong edition of the text and repeated key words and phrases excessively, particularly “complex,” “traditional gender roles,” “power structure,” and “patriarchal.” Save for this repetition, however, ChatGPT 3.5 employed a vocabulary richer than what we see from many first-year writers. In every draft, the AI emphasized “concubine,” essential in defining gender politics in Herbert’s world. A tutor, encountering that word, might do what an AI cannot: ask the writer why concubinage alters one’s view of Jessica’s role and agency.

In this third draft, the AI cited page numbers from a 1965 edition, though we have been unable to verify the accuracy of this quotation. It may be hallucinated or copied from an inaccurate mention online. Luckily for writing centers, we have training to help us spot weakly integrated evidence that does not match a claim. LLMs are not able at this time to respond “I don’t see how that quotation fits your claim. Let’s go back to your text and check.” Tutors do this constantly, spurring metacognitive thinking and ethical use of sources with writers. A 2024 study (Steiss et al.) reached a similar conclusion when comparing 200 human-created responses to a similar number from LLMs; the LLMs did better than humans only on “criteria-based” feedback (Steiss et al., 2024).

We humans have the ability to actively listen to our partners’ words, rather than simply predict, from a large data set, what the next word in a sentence should be. Suppose a writer were to submit a claim such as the AI’s “though Lady Jessica is a strong and independent woman, she is still confined to the traditional gender roles that are imposed upon her by the patriarchal society.” Our first question might be “Interesting point, but can you provide support, please?” The resulting AI-generated essay never delved deeply into the text, beyond generalizations about plot. In place of close reading, we got repeated buzzwords such as “patriarchal” (10 instances), “power structure” (4 instances), and “gender roles” (4 instances). While a human writer might fall into repetition, neither of us had ever seen so many abstract terms recycled in such a short piece of prose.
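Tutors need not tally such repetition by hand. A minimal Python sketch (ours, for illustration; the phrase list and file name are placeholders) counts how often chosen buzzwords recur in a draft:

import re

# Phrases we saw recycled in the AI's Dune drafts; swap in any list.
PHRASES = ["patriarchal", "power structure", "gender roles", "complex"]

def phrase_counts(text: str, phrases: list[str]) -> dict[str, int]:
    """Count case-insensitive, whole-phrase occurrences in a draft."""
    lowered = text.lower()
    return {p: len(re.findall(r"\b" + re.escape(p) + r"\b", lowered))
            for p in phrases}

if __name__ == "__main__":
    with open("ai_draft.txt", encoding="utf-8") as f:  # placeholder file name
        draft = f.read()
    for phrase, n in sorted(phrase_counts(draft, PHRASES).items(),
                            key=lambda item: item[1], reverse=True):
        print(f"{phrase}: {n}")

A tally like this gives writer and tutor a concrete starting point for a conference: which of these recycled abstractions could be replaced with evidence from the text?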

The Dune example also suggested prompt-engineering techniques to achieve Terry’s “new style of schoolwork.” Writing conferences could focus on devising new prompts for a draft given to AI, such as “help me identify possible counterarguments about Lady Jessica,” or “revise the draft with fewer instances of the following phrases. . .,” or “revise to avoid starting so many topic sentences with transitional phrases.” Writers and tutors would work alongside an LLM to polish drafts, with each iteration submitted to the instructor. In short: we make the AI partnership completely transparent.
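As one sketch of what that transparent partnership might look like when scripted, the snippet below uses OpenAI’s Python client; the model name, file names, and prompt wording are our placeholder assumptions, and any LLM with a chat API would serve. Each engineered request produces one saved pass, so the full AI trail can go to the instructor alongside the final draft:

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# Engineered revision requests of the kind writer and tutor might devise.
REVISION_PROMPTS = [
    "Help me identify possible counterarguments about Lady Jessica.",
    "Revise the draft with fewer instances of 'patriarchal' and 'gender roles'.",
    "Revise to avoid starting so many topic sentences with transitional phrases.",
]

def revise(draft: str, instruction: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model (a placeholder name) for one revision of the draft."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": f"{instruction}\n\nDRAFT:\n{draft}"}],
    )
    return response.choices[0].message.content

draft = open("draft.txt", encoding="utf-8").read()  # placeholder file name
for i, prompt in enumerate(REVISION_PROMPTS, start=1):
    draft = revise(draft, prompt)
    # Save every intermediate pass so the AI partnership stays visible.
    with open(f"draft_pass_{i}.txt", "w", encoding="utf-8") as f:
        f.write(draft)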

ChatGPT 3.5 repeated a problem common among human writers working inductively, an issue we often address in first-draft conferences. In both the second and third drafts, the AI’s strongest claim appeared at its end, in a statement beginning “she ultimately fails to fully embody the role of a feminist hero due to her inability to fully reject the patriarchal power structures that shape her actions and her inability to actively work towards dismantling those structures.” This final claim worked far better than the introduction’s vague “several reasons.” Moving it forward would provide a set of promises to a reader. Consultants at our writing center advise this tactic for those who develop arguments in an inductive manner, often L2 speakers of English or L1 speakers writing a discovery draft. Though ChatGPT 4.0 and Anthropic’s Claude provided some holistic advice about thesis placement during testing, tutors do so with less prompting. A writer would need to carefully engineer a request for AI feedback in order to achieve similar results (Yoon et al., 2023).

Six months later, after many iterations of the same prompts, the AI reduced repetition, using “gender” in varied phrases such as “gender equality,” “gender biases,” and the earlier “gender roles.” This draft worked well except for the lack of a “why” in its introduction and the blandly stated implications of the topic in the conclusion, signaled by the generic remark about how the novel has influenced “literature and society.” Significantly, AI continued to place reasons for its thesis in the conclusion: “Her compliance with patriarchal norms, limited agency, subservience to the male gaze, reliance on manipulation, and absence of female solidarity.” This inductive method for generating responses may vanish as training data evolves; in future tests, we plan to train AIs with data that mandate deductive, thesis-driven prose.

To see if a more advanced model would do better, we put the Dune prompt to ChatGPT 4.0. After we added requirements for direct quotations with proper MLA citation, the AI echoed HAL, noting “I’m really sorry, but I cannot provide direct quotations from copyrighted texts. I can, however, provide an analysis of the character of Lady Jessica from Frank Herbert’s Dune without utilizing direct quotes.”

AI’s refusal to quote may be fleeting. In time, students of means could hire AIs whose creators pay fees to rights-holders, so the LLM could peruse copyrighted works. Old inequities would then rear their heads in new ways: wealthy students might subscribe to multiple AIs. Given that probability, instructors should take notice. We recommend a few activities to disabuse students tempted to turn the crank and generate an AI draft. Instructors could, as Joe Essid did in his first-year seminar, mandate an AI-generated first draft plus the style-analysis techniques we discussed earlier. In written reflections on the AI draft, students noted how voiceless the LLM sounded and found factual errors, hallucinations, and boring repetition of the same key phrases. The class then held a fruitful discussion of why such mistakes matter in human-created drafts.

A different prompt posed other challenges for our robotic co-pilots, again pointing the way toward new pedagogy in writing centers. We asked ChatGPT 3.5 to “Write an essay of at least 1500 words analyzing the homosexual undertones of Sam and Frodo’s Relationship in Lord of the Rings.” We received, as with the Dune essay, an introduction without a thesis and a series of formulaic counterarguments, plus general discussion of Tolkien’s era and the view of homosexuality at that time.

A human tutor reading this draft would likely detect “waffling” in statements such as “Some readers have. . . .Other readers have argued. . . . Ultimately, it is up to each individual reader to decide how they interpret Sam and Frodo’s relationship. There is no right or wrong answer. . .” We also found that the AI included terms we fed it whenever possible, just as some students do. For instance, the word “undertones” got parroted far too many times without sufficient direct evidence. Were a writer to submit this text, a tutor would rightly request sources for the claims about sexual mores in Tolkien’s era. Moreover, who were these “some readers” and those “other readers”? As with so much output from free LLMs used without prompt-engineering, the prose remained competent but formulaic. From ChatGPT 4.0:

Many readers and scholars have interpreted the bond between the two as having strong homosexual undertones, and in this essay, we will explore the evidence for this interpretation and its implications for the story as a whole.

In the body of the essay, the LLM provided direct quotations without page citations, yet this evidence added little deep analysis and included no review of prior scholarship. In another instance, the LLM wrote, “Additionally, Frodo’s statement, ‘I am glad that you are here with me,’ has been interpreted by some as implying a deeper emotional connection,” yet the line indicates little about homoerotic attraction; Frodo speaks it to Sam as they cower on Mount Doom, waiting to die after the destruction of the Ring. Finally, the AI folded in material not from Tolkien’s trilogy, writing “Frodo also seems to return Sam’s feelings. He says, ‘I would have gone with you to the ends of the earth, Sam.’” Here it invents a quotation based on a line delivered by a different character in Peter Jackson’s film adaptations.

Other flaws abounded, including repetitive syntax, overuse of “undertones,” starting most topic sentences with transitional phrases, and evidence taken from Jackson’s films. Prompt-engineering to cover material “in the book, not film” would help, but all too often ChatGPT 4’s output resembled a “quotation hunt” by writers who have not carefully read the text.

AIs, Filters, Copyright: An Edge for Human Tutors

When considering such flaws, we should not expect the most subtle of these LLMs to remain static. To remain vital in the age of such co-pilots, WCAs should share our findings with faculty in order to lead on this issue. On our campus, if a writer tells us that faculty permit AI in the class, writing consultants employ AI in the ways outlined in this article.

A request for more direct quotations about hobbit homoeroticism elicited a strange conversation of interest to tutors. ChatGPT 4.0 said it could not create “content that may be sensitive or controversial.” The training data may have guardrails about sexual topics; we saw this in late 2024 when testing SpiderAI, a service available on the Richmond campus. The managers set strict limits on permitted content, so a paper about human trafficking and another about police shootings, both with graphic quotations, led the AI to end the session and log off the students. We shared this outcome with SpiderAI’s managers; they are working to allow previously forbidden topics and language.

In terms of direct quotations, another filter comes into play: fear of lawyers. When ChatGPT 4.0 stopped quoting, we pointed out that it had already done so in a prior draft. It then dodged the question, noting that it could not directly access “proprietary texts.” Follow-up questions, pointing out direct quotations given in earlier responses, as well as asking where the AI pulled data, proved futile. Clearly, few humans helping in a writing center would shut down a conversation that way. Instead, we would continue to prod our writer gently, using a balance of directive and non-directive techniques to find out why the essay does not deploy sources in a more varied manner.

In the summer of 2023, we asked version 3.5 why it stopped directly quoting and instead began paraphrasing:

I don’t have direct access to copyrighted works, and I cannot provide direct quotations from specific books, articles, or other copyrighted sources. . . .It’s essential to use me as a supplementary resource and not a replacement for critical thinking, proper citation practices, and responsible research.

As of winter 2024, OpenAI’s free products cannot quote directly from works licensed in the Creative Commons or that have entered the Public Domain. When asked about Fair Use, ChatGPT 3.5 replied it “can be complex and depends on various factors, such as the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the potential market for the original work.” Meanwhile, ChatGPT 4.0 quoted directly from a public domain source only after we sent links showing that the text is no longer copyrighted.

Current legal rulings have focused mostly on how training-data for LLMs may infringe copyright (Zirpoli, 2023). Issues of academic integrity have not, at least as far as the authors can determine, been part of these court cases. Luckily for human tutors, current writing center and reference library pedagogy more than suffices to coach writers on the principles of fair use.

Beyond ChatGPT, the authors briefly compared the limits of copyright with two Google AIs, Bard (now Gemini) and Document. The results merit deeper study, but one response deserves consideration now, as it illustrates a common shortcoming of all the LLMs we studied.

After Bard refused to quote from Tolkien’s works, we asked, “Why can’t you use direct quotes in an essay you generate?” It responded much as ChatGPT 3.5 had:

I am not yet able to understand the nuances of copyright law. I don’t want to accidentally plagiarize someone’s work, so I am not able to use direct quotes. However. . . . If you need to use direct quotes in your essay, I recommend that you hire a human writer who is familiar with copyright law.

Go hire a human! That was a delightful comeback. The lack of direct, cited quotations, as well as AI’s failure to cite paraphrased material, should give hope to faculty fretting over student cheating, yet these features raise other questions about inequity. 

DEI, Affluence, Moore’s Law

Some scholarship, as well as practice at our center, indicates that bringing an LLM into the writing process may assist students otherwise marginalized by race, ethnicity, sexual identity, or neurodiversity. Bradley (2023) claimed that AI may lower barriers and increase access for those who otherwise struggle when pursuing a college education. The authors have helped students with dyslexia and anxiety disorder use generative AI to organize their thoughts and spot errors otherwise invisible to them. This corroborates preliminary findings from recent research about students marginalized by neurodiversity (Zhao et al., 2024) and/or socioeconomic background (Addy et al., 2023). Students can use AI to help with voice-to-text work or, for learners who have trouble interacting interpersonally, for asking questions outside of a busy and intimidating classroom environment (Chronicle of Higher Education, 2024). More broadly, another study found that the benefits accrue to those “at odds with traditional academic structures” (Leung, 2024, p. 34).

While at least one major source of AI information for business focused on empowering racial and ethnic minorities (McKinsey, 2023), not enough attention has gone to how AI might accelerate, not obviate, existing campus inequities. Returning to the scholarship about process-based pedagogy in the networked classrooms of the 1990s, we should still ask, “Are hardware and software configurations working toward the maintenance and the perpetuation of existing hierarchies of privilege. . . .And what alternatives can teachers imagine and create?” (Kaplan, 1991, p. 37). Cynthia Selfe found that equipment went first to the wealthy, and when the less fortunate got technology, it often got employed for rote learning and other menial tasks (1992, p. 31). Today, free AI puts the power into all hands, but AIs with monthly fees, such as Grammarly Premium and ChatGPT 4.0, amount to concierge services for those who can afford them. Our testing showed that Grammarly’s free tool provided far less detailed advice on drafts than did the subscription-based Premium version. That version also scored essays slightly lower. Similarly, with LLMs costing $240 or so annually, a student who can engineer prompts (or pay another student to do so) can get an AI to quote and paraphrase, at least from Open-Access and Public-Domain sources and by uploading text to train the AI. Future research should look at the most popular LLMs to compare output from free and paid versions.

Our campus moved toward equitable access with SpiderAI and, in fall 2025, AI-enabled Lexis and Westlaw for all law students. We will see if faculty policy follows, given the Faculty Senate’s approval of the student resolution to provide Grammarly Premium to all. Grammarly claimed that an institutional price as low as $15 annually per student would be possible. The cost seems reasonable. We likely pay more each year providing free WiFi and public computers to bridge an older digital divide, so we can provide AI access as well.

As in our campus’ case, critical praxis about this issue can emerge from both student groups and a faculty/staff learning community. WCAs should join them or start one, so none we teach are left behind by their more affluent peers. A respondent in our campus survey gave voice to this, saying, “If this tool can give a disadvantaged person a chance, then Fuck yea we should have it be used with no punishment.” For those with access, chat bots “will be the beginning of their writing career, because they will learn that even though plenty of writing begins with shitty, soulless copy, the rest of writing happens in edits, in reworking the draft, in all the stuff beyond the initial slog of just getting words down onto a page” (Bradley, 2023).

That description of writing sounds a lot like the process-based pedagogy we have embraced, in writing classrooms and centers, since the 1960s. Mina Shaughnessy, in her influential Errors and Expectations, held that writing “is something writers are always learning to do” (1977, p. 276). Shaughnessy’s axiom has new resonance today for less-prepared writers who might turn to AI. In turn, that calls into question what we mean by “writing” now. Yancey (2004) recognized an imperative to reconsider definitions of literacy in the age of electronic texts, pointing to then-current disruptions in pedagogy, “specifically those associated with the screen, and in that focus, they return us to questions around what it means to write” (p. 304). Her question remains essential, as LLMs begin to write alongside us as co-creators; as she put it then, we are “in the midst of a tectonic change” (Yancey, 2004, p. 298).

That imperative to provide equal access to all is urgent in the United States, given the Supreme Court’s decision to strike down affirmative action programs, governors’ and legislators’ rollbacks of DEI programs at public institutions, threats by the second Trump administration to close the Department of Education, and some conservative groups’ lobbying to weaken the Americans with Disabilities Act (Slatin, 2020). Conversely, a bad approach for maintaining diversity, equity, and inclusion would be to instinctively clamp down on student AI use. In any case, our data indicated bans to be exercises in futility. From our 2024 survey, 91% of respondents used AI for some academic task, nearly two-thirds for some task involving academic writing, yet 31% encountered faculty who banned all AI use outright in at least one class. A student who answered our survey scoffed at bans, stating, “it’s also really easy to generate writing students can pass off as their own. Professors basically have no idea as AI. . . can rewrite essays . . . . It’s too slippery of a slope to control.”

A steep and slippery slope indeed: Moore’s Law, with its prediction that computing power in integrated circuits doubles every two years, operates at a speed faster than institutional norms can evolve. Writing centers, with our scrappy origins and history of innovation, have been “ideally placed” to embrace multimodal, multimedia work (R. Selfe, 2010, p. 110). This meets the first of Mollick’s four criteria for adopting AI: always consider inviting AI to the table for a task (2024). At the same time, we should avoid a campus where students use AI to compose without reflection, their faculty using the same tools to comment and assess impersonally. Mollick’s second rule, “be the human in the loop,” could mitigate that bad outcome (2024, pp. 52-54). A journalist found cynical and uncritical acceptance already happening, when a student outed a professor using AI to grade (Fernandez, 2023).

One of our respondents contended that “Students are going to use [AI] no matter what, so it’s not worth it to try to control how or when they use it. Instead, reworking curriculum is probably the most realistic option.” Our school’s new general-education curriculum may provide a model by mandating “an iterative process” for upper-division courses designated “written communication”: students will take two such courses after a writing-intensive first-year seminar.

Questions To Be Answered, With Or Without Us

Could we push this further, with all classes including human-AI intellectual partnerships? Writing centers should lead in that discussion; data from publicly available materials at colleges and universities a decade ago “affirm the centrality of writing centers and individualized writing instruction in college and university life in the US” (Isaacs & Knight, 2014, p. 36). WCAs have also enjoyed greater stability of employment and higher status than in the early days of our profession (Caswell et al., 2016).

Such recent influence could still falter with new challenges. First, our status is never guaranteed in an era of declining enrollment on many campuses. Second, while a bare majority of writing-center administrators enjoy protection of tenure, a substantial minority fall into a “problematic category” of “faculty administrator, below tenure-track and non-tenure-track faculty” (Jade, 2024, p. 134). As recently happened on our campus, such positions can be changed without faculty oversight.

The changes to our working conditions parallel those happening on our screens, making what OpenAI’s CEO Sam Altman said about AI apply to WCAs and our programs as well; if we “want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon” (Andersen, 2023b). We can draw upon precedent from the history of campus technology, notably the praxis that emerged in the 90s as we wired our campuses and adopted the Internet for teaching and scholarship. We likewise adapted our training and conferencing techniques to the social effects of mobile computing. The keynote speaker at EWCA 2024 noted, however, that networked communication merely enhances how we share existing information, whereas generative AI creates content and is thus transformative (Buck, 2024). We wish to qualify that claim; earlier adaptations do show a way forward, at least as long as those at writing centers continue to have jobs. Buck (2024) advocated “future proofing” our centers by moving away from a sole focus on writing (something many of us have done by assisting with multimodal projects) into centers for future skills that include whatever form academic writing takes as AI becomes ubiquitous.

First, however, we must ensure that writing centers retain their missions and pedagogical autonomy. Unlike in the early days of writing centers, today we are on the minds of neoliberal administrators, many of whom never enter a classroom. Gallagher (2024) found a dystopian outcome already unfolding in the UK’s higher education sector, based upon assumptions that:

Bureaucrats should be the ones setting minimum standards; that minimum standards “level the playing field” (recall the equity/equality discussion earlier); that the bureaucrats’ authority trumps providers’ autonomy; that punishments and reward are the proper mechanisms for getting results; that accountability should run in only one direction—the list goes on. The standards and accountability (S&A) policy regime seems commonsensical only if one is already operating within the logic of technocratic neoliberalism (p. 340).

Stories at EWCA 2024 led the authors to revisit with careful attention Michel Foucault’s ideas about power. What he described for certain prisons could easily apply to untenured academic and other intellectual labor, with AI wardens far better at surveilling than current back-office and productivity tools. We could end up with “a state of conscious and permanent visibility that assures the automatic functioning of power” (Foucault, 1979, p. 201). Our work might be assessed down to each keystroke, as we are held to an agenda we have never seen.

At our own institution, in 2024, all student-support directors transitioned from faculty to staff status, without any discussion, ostensibly out of a concern over “equity” for other employees who were already staff. The integration of services occurred for a worthy reason: a major gift from a donor realized the dream of a quarter century, integrating support into a learning center. Despite that realized vision, implementation proved painful for directors who had served in our School of Arts and Sciences for many years. They moved to the Provost’s Office, again without discussion, losing hard-won benefits and opportunities for promotion that A&S program directors, ranked now as “teaching faculty,” had negotiated over several years. Luckily for morale, a new Provost and Executive Director arrived, putting a more collaborative and helpful face on building the learning center. Yet the deeds were done.

As a result of this seismic shift in working conditions, our learning center directors look to be stripped of the freedom to do research, except on their own time after the end of the “business day,” as it’s increasingly called on our campus. Productivity could soon be monitored by the Workday software, to roll out in early 2025. Given the accrediting standards for our school, it remains uncertain if the learning center “professional staff” will be able to teach. In consequence, the untenured faculty author of this article, who co-developed our credit-bearing training course with a former WPA, hinted heavily that without faculty status he would resign immediately. That wish was granted, but the author will have retired by the time this piece is published. Two other directors at our Learning Center resigned immediately, taking with them institutional memory and pedagogical practices that had worked well for decades. Sadly, such losses included a collegial relationship with tenure-stream faculty that cannot be easily reduced to data points, the sacred currency of neoliberal administrators (Giaimo & Lawson, 2024, p. 10).

We face these troubling prospects globally. Those in power can issue mandates based on their vision of AI, ignoring half a century of writing-center theory and pedagogy. When leaders justify unwise changes with platitudes about cost-savings or efficiency, we should be prepared to speak back in language they respect, when possible using empirical evidence to make a case for human-led tutoring. At our institution, three decades of faculty support, increasing use of the Center since the turn of the century, student surveys, and marketing efforts may have preserved what autonomy we enjoyed for many years. At other schools, peer tutors and alumni may have to act surreptitiously to stir up student anger. The MLA-CCCC task force, as well as one from the IWCA, hope to address the challenges AI poses to our profession.

Not ironically, we end by asking an LLM about the future of writing centers in a world of ubiquitous AI. In seconds, it synthesized an answer more humane than that provided by many humans who administer our institutions. Our cybernetic co-pilot noted that human tutors will still focus on higher-level skills, provide collaboration in person, offer ethical and contextual education beyond the AI’s training data, partner with AI to help neurodiverse learners, and integrate services with a local curriculum.

In short, we all must get involved and, heeding Sam Altman’s advice, make strong cases soon for human-driven conferencing with LLMs as helpers. It will be up to WCAs, tutors, consultants, students, faculty, staff, and other teaching stakeholders (rather than non-teaching senior administration and politicians) to shape how AI gets used to help, not replace or monitor us.

References

Addy, T., Kang, T., Laquintano, T., & Dietrich, V. (2023). Who benefits and who is excluded? Transformative learning, equity, and generative artificial intelligence. Journal of Transformative Learning, 10(2), 92-103. https://jotl.uco.edu/index.php/jotl/issue/view/36 

Andersen, R. (2023a, March 28). ChatGPT has imposter syndrome. The Atlantic. www.theatlantic.com/technology/archive/2023/03/chatgpt-ai-language-model-identity-introspection/673539/

Andersen, R. (2023b, July 24). Does Sam Altman know what he’s creating? The Atlantic. www.theatlantic.com/magazine/archive/2023/09/sam-altman-openai-chatgpt-gpt-4/674764/ 

Anderson, S. S. (2023). “Places to stand”: Multiple metaphors for framing ChatGPT’s corpus. Computers and Composition, 68, 1-13. doi.org/10.1016/j.compcom.2023.102778 

Bazaldua, C., Hawkins, T., & Monty, R. W. (2024). Writing centers’ entanglements with neoliberal success. Praxis, 21(2). https://www.praxisuwc.com/212-bazaldua-et-al; https://scholarworks.utrgv.edu/wls_fac/132/

Blair, K. L. (2023). Letter from the editor. Computers and Composition, 68. doi.org/10.1016/j.compcom.2023.102780

Blythe, S. (1997). Networked computers + writing centers = ? Thinking about networked computers in writing center practice. The Writing Center Journal, 17(2), 89–110. http://www.jstor.org/stable/43442023

Bogost, I. (2022, December 7). ChatGPT is dumber than you think. The Atlantic. www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/ 

Bradley, R. (2023, February 27). A chatbot is secretly doing my job. The Atlantic. www.theatlantic.com/technology/archive/2023/02/use-openai-chatgpt-playground-at-work/673195/ 

Buck, I. (2024, June 11-14). Redefining writing centers in the age of AI: Embracing their role as “sponsors of future skills.” European Writing Centers Association Conference, Limerick, Ireland. https://www.ewca2024.com/agenda 

Caswell, N. I., McKinney, J. G., & Jackson, R. (2016). The working lives of new writing center directors. Utah State University Press.

Center for Writing Excellence at Elon University. (2024). Teaching writing with generative AI. https://www.elon.edu/u/academics/writing-excellence/teaching-writing-with-generative-ai/

CEO Zuckerberg: Facebook’s five core values. (2012, May 17). CBS News. www.cbsnews.com/news/ceo-zuckerberg-facebooks-5-core-values/

Chronicle of Higher Education. (2024). How generative AI is changing the classroom. Research brief. https://www.chronicle.com/featured/digital-higher-ed/how-generative-ai-is-changing-the-classroom

Essid, J. (1992). Hard books, deep reading, and synchronous conferences in the humanities “pickle factory.” Computers & Composition Online. cconlinejournal.org/essid/index.html

Essid, J. (2024). Writing centers & the dark warehouse university: Generative AI, three human advantages. Interdisciplinary Journal of Leadership Studies, 2, 38-53. https://scholarship.richmond.edu/ijls/vol2/iss2/3/

Farago, J. (2023, December 28). A.I. can make art that feels human. Whose fault is that? New York Times. www.nytimes.com/2023/12/28/arts/design/artists-artificial-intelligence.html

Fernandez, S. (2023, May 11). “Did you even read it?”: Student catches professor using AI to give her feedback. Daily Dot. www.dailydot.com/news/professor-using-ai-student-feedback/

Foucault, M. (1979). Discipline and punish: The birth of the prison (A. Sheridan, Trans.). Vintage.

Gallagher, C. W. (2024). What need not be said: Transnational policy regimes and England’s technical proficiency in writing policy. College English, 86(3), 326-355. https://doi.org/10.58680/ce2024864326

Gergen, K. (1991). The saturated self. Basic Books.

Giaimo, G. N., & Lawson, D. (Eds.). (2024). Storying writing center labor for anti-capitalist futures. WAC Clearinghouse.

Goldberg, E. (2023, July 19). Training my replacement: Inside a call center worker’s battle with AI. New York Times. www.nytimes.com/2023/07/19/business/call-center-workers-battle-with-ai.html

Grady, P. & Castro, D. (2023, May 1). Tech panics, generative AI, and the need for regulatory caution. Center for Data Innovation. https://datainnovation.org/2023/05/tech-panics-generative-ai-and-regulatory-caution/

Grimm, N. M. (1995). Computer centers and writing centers: An argument for ballast. Computers and Composition, 12(3), 323–329. https://doi.org/10.1016/S8755-4615(05)80071-4

Grogan, S. (2020). I feel the earth move. In J. Essid & B. McTague (Eds.), Writing centers at the center of change (pp. 157-170). Routledge.

Haraway, D. (1991). Simians, cyborgs, and women: The reinvention of nature. Routledge.

Haraway, D. (2023). Donna Haraway on AI [Video]. YouTube. https://www.youtube.com/watch?v=4FycNIeS6GY

Herbert, F. (1965). Dune. Berkley.

Herbert, F. (1976). Children of Dune. Berkley.

Hobson, E. (1998). Introduction. In Wiring the writing center (pp. ix-xxvi). Utah State University Press.

Isaacs, E. & Knight, M. (2014). A bird’s eye view of writing centers: Institutional infrastructure, scope and programmatic issues, reported practices. WPA: Writing Program Administration, 37(2), 36-67.

Jade, S. (2024). Writing center exile: Third gender as third class in a third space. In G. N. Giaimo & D. Lawson (Eds.), Storying writing center labor for anti-capitalist futures (pp. 133-135). WAC Clearinghouse. https://wac.colostate.edu/books/practice/storying/

Kaplan, N. (1991).  Ideology, technology, and the future of writing instruction. In G. E. Hawisher, & C. L. Selfe (Eds.), Evolving perspectives on computers and composition studies: Questions for the 1990s (pp. 11-42). NCTE. 

Kirschenbaum, M., & Raley, R. (2024, October 31). AI may ruin the university as we know it: The existential threat of the newest wave of ed-tech. The Chronicle of Higher Education. https://www.chronicle.com/article/ai-may-ruin-the-university-as-we-know-it

Kucera, R. (2017, August 7). The truth behind Facebook AI inventing a new language. Medium. towardsdatascience.com/the-truth-behind-facebook-ai-inventing-a-new-language-37c5d680e5a7 

Langston, D. M., & Batson, T. W. (1990). The social shifts invited by working collaboratively on computer networks: The ENFI project. In C. Handa (Ed.), Computers and community: Teaching composition in the twenty-first century (pp. 140-159). Boynton/Cook.

Leung, H. (2024). Artificial intelligence as agents to support neurodivergent creative and critical thinking modules [Unpublished master’s thesis]. Simon Fraser University.

Lindberg, N., & Domingues, A. (2024, August). 2024 Report on AI writing tools’ impacts on writing centers. ResearchGate. https://www.researchgate.net/publication/383365521_2024_Report_on_AI_Writing_Tools’_Impacts_on_Writing_Centers

Maid, B. (1995, June). WPA or technorhetorician: Different margin, same problem [Conference presentation]. 23rd Wyoming Conference on English, Laramie, WY.

Marche, S. (2022, December 6). The college essay is dead. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/

McKinsey Institute for Black Economic Mobility. (2023, December 19). The impact of generative AI on black communities. https://www.mckinsey.com/bem/our-insights/the-impact-of-generative-ai-on-black-communities

MLA-CCCC Joint Task Force on Writing and AI (2023). MLA-CCCC joint task force on writing and AI working paper: Overview of the issues, statement of principles, and recommendations. https://aiandwriting.hcommons.org/working-paper-1/

Mollick, E. (2024). Co-intelligence: Living and working with AI. Penguin.

Moran, C. (1991). We write, but do we read? Computers & Composition, 8(3), 51-61.

Nelson, J. & Wambeam, C. (1995). Moving computers into the writing center: The path to least resistance. Computers and Composition, 12(2), 135-143. https://doi.org/10.1016/8755-4615(95)90002-0

Romano, S. (1993). The egalitarianism narrative: Whose story? Which yardstick? Computers & Composition, 10(3), 5-28. https://doi.org/10.1016/S8755-4615(17)30135-4

Selfe, C. L. (1992). Preparing English teachers for the virtual age: The case for technology critics. In G. E. Hawisher & P. LeBlanc (Eds.), Reimagining computers and composition: Teaching and research (pp. 24-42). Boynton/Cook.

Selfe, R. (2010). Writing centers: A safe educational haven. In D. M. Sheridan, & J. Inman (Eds.), Multiliteracy centers: Writing center work, new media, and multimodal rhetoric (pp. 109-129). Hampton Press. 

Selfe, R. (1995). Surfing the tsunami: Electronic environments in the writing center. Computers and Composition, 12(3), 311-322. https://doi.org/10.1016/S8755-4615(05)80070-2

Shaughnessy, M. P. (1977). Errors and expectations: A guide for the teacher of basic writing. Oxford University Press.

Slatin, P. (2020, August 6). Senate republicans have ‘declared war on the ADA,’ says Duckworth. Forbes. https://www.forbes.com/sites/peterslatin/2020/08/06/senate-republicans-have-declared-war-on-the-ada-says-duckworth/?sh=1e7f31be1edb 

Spitzer, M. (1989). Computer conferences: An emerging technology. In G. E. Hawisher & C. L. Selfe (Eds.), Critical perspectives on computers and composition instruction (pp. 187-200). Teachers College Press.

Steiss, J., Tate, T., Graham, S., Cruz, J., Hebert, M., Wang, J., Moon, Y., Tseng, W., Warschauer, M., & Olson, C. B. (2024). Comparing the quality of human and ChatGPT feedback of students’ writing. Learning and Instruction, 91. https://doi.org/10.1016/j.learninstruc.2024.101894

Strickland, J. (1991). The politics of writing programs. In G. E. Hawisher & C. L. Selfe (Eds.), Evolving perspectives on computers and composition studies: Questions for the 1990s (pp. 300-317). NCTE.

Terry, O. K. (2023, May 12). I’m a student. You have no idea how much we’re using ChatGPT. The Chronicle of Higher Education. https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt  

Turkle, S. (2011). Alone together. Basic Books.

Warner, J. (2024, July 9). Calling b.s. on the AI education future. Inside Higher Ed. https://www.insidehighered.com/opinion/blogs/just-visiting/2024/07/09/embracing-ai-means-abandoning-learning

Wellershausen, N., Stepel, M., Pregent, G., & Carroll, C. (2024, June 11-14). Changing writing center narratives through collaboration and reflection [Conference presentation]. European Writing Centers Association 2024, Limerick, Ireland. https://europeanwritingcenters.eu/conference

Yancey, K. B. (2004). Made not only in words: Composition in a new key. College Composition and Communication, 56(2), 297–328. https://doi.org/10.2307/4140651 

Yoon, S., Miszoglad, E., & Pierce, L. R. (2023). Evaluation of ChatGPT feedback on ELL writers’ coherence and cohesion (arXiv:2310.06505). arXiv. https://doi.org/10.48550/arXiv.2310.06505

Zhang, J. (2022). Through students’ voices: What does the death of a writing center tell us? The Peer Review, 6(1). https://thepeerreview-iwca.org/issues/issue-6-1/through-students-voices-what-does-the-death-of-a-writing-center-tell-us/

Zhao, X., Cox, A., Chen, X., & Coleman, B. (2024). A report on the use and attitudes towards generative AI among disabled students at the University of Sheffield Information School. University of Sheffield. https://orda.shef.ac.uk/articles/report/A_Report_on_the_Use_and_Attitudes_Towards_Generative_AI_Among_Disabled_Students_at_the_University_of_Sheffield_Information_School/25669323

Zirpoli, C. T. (2023). Generative artificial intelligence and copyright law. Congressional Research Service. https://crsreports.congress.gov/product/pdf/LSB/LSB10922
