Conversation Shaper: GenAI Tools Can Revive and Revise Writing Center Discussions of Attribution, Authorship, & Plagiarism

Annalee Roustio, CUNY Queens College

Abstract

As generative artificial intelligence (GenAI) tools develop, questions arise about the legality and ethics of how they are constituted. This conversation shaper calls on stakeholders in writing centers to begin meaningfully addressing the fact that both the training methods for, and the generated outputs of, popular GenAI tools exhibit source use that may not meet the academic integrity standards widely upheld by writing centers. Acknowledging this dissonance in future discussions paves the way for responsibly and transparently incorporating GenAI into writing center work. It also ensures that we pay attention to new technologies (Selfe, 1999) and more critically interpret our reality via a multitude of perspectives (McKinney, 2013). The shaper concludes by contending that writing center workers, and in particular peer tutors, are well positioned to act as thought leaders regarding the future of writing with GenAI.

Keywords: academic integrity, artificial intelligence, chatbots, ChatGPT, copyright, generative AI, intellectual property, large language models, plagiarism, text and data mining, writing centers

Introduction

Generative artificial intelligence (GenAI) tools employing large language models (LLMs), such as OpenAI’s ChatGPT, have recontextualized and will continue to recontextualize writing and the teaching of writing (Laquintano et al., 2023). Because writing centers and tutors also mediate the teaching of writing, we know that GenAI will by extension recontextualize the work done by and in writing centers. Already, scholars have explored the ethics, affordances, and limitations of writing with GenAI, including writing in the sciences (Buriak et al., 2023) and healthcare (Zohny et al., 2023), writing in composition classrooms (Cummings et al., 2024; Su et al., 2023), and writing in creative contexts (Arathdar, 2021; Hu, 2023). Further, several scholars have surveyed the nuanced applications and implications GenAI has for multilingual writers [1] (Barrot, 2023; Escalante et al., 2023; Liang et al., 2023; Woo et al., 2024). These preliminary publications pertain to writing center practitioners because writing from across disciplines and writing from multilingual writers are both inextricable from writing center work.

Writing centers and their allied disciplines recognize their stake in scholastic dialogues concerning writing with GenAI. Special issues such as this one and the Spring 2023 edition of Composition Studies, blog entries by the Canadian Writing Centre Review of the Canadian Writing Centres Association and Another Word from the Writing Center at the University of Wisconsin-Madison, as well as resource repositories such as that from the WAC Clearinghouse (Mills, 2022) comprise meaningful, useful responses. Largely unexplored in these initial discussions, however, is the fact that the technological infrastructure of many popular GenAI tools is one of “theft” (Kraaijeveld, 2024, para. 13), both in terms of the texts inputted for training and the outputs generated. This conversation shaper invites deliberate attention to this novel aspect of the larger, evolving nexus of GenAI and writing because, as Jackie Grutsch McKinney’s (2013) Peripheral Visions for Writing Centers reminds us, we further disappear perspectives that we do not acknowledge, reinforcing an oft incomplete grand narrative. McKinney’s notion hearkens back to Cynthia L. Selfe’s (1999) landmark essay, “Technology and Literacy: A Story about the Perils of Not Paying Attention,” which encourages paying attention to technology regardless of whether we use it, as it’s “inextricably linked to literacy and literacy education” (p. 414). Selfe, too, insists that what we don’t pay attention to, we allow ourselves to ignore.

Thus, as writing centers and writing studies scholarship take up the topic of GenAI, they should also begin naming the “theft” which made and makes GenAI possible and examining the subsequent array of questions it raises regarding authorship and academic integrity. For one, we know that, among their many other foci, writing centers act as sites of education around plagiarism and how to avoid it (Brown et al., 2007; Shamoon & Burns, 1999). As such, scenarios wherein a writing center tutor may intuit a student has patchwritten a source or neglected to include a citation are not uncommon, and tutors are not without interventions. However, in a text written with GenAI, neither the tutor nor the student writer-user can reasonably know if, or when, any part of the thousands of texts the chatbot ingests has been reproduced without proper credit. This leaves writing center tutors and administrators less certain of when and how to respond, both when they suspect plagiarism (inadvertent or otherwise) and when they don’t. We have no reason to believe scenarios like the latter are rare, either: as Matthew D. Bryan (2024) articulates, “whether or not such [GenAI] applications are endorsed by faculty or institutions…students are well-aware of them and will continue to bring their experiences with them to writing centers” (p. 16). Therefore, writing center administrators and tutors can benefit from informed guidance regarding writing with GenAI, especially with respect to authorship, attribution, and plagiarism, so that their pedagogy and praxis might adapt.

Already, there are repercussions for the “completed and copyrighted materials violently stolen for LLMs” (Byrd, 2023, p. 139). As this conversation shaper is being written, several authors are suing research and development companies like OpenAI for “misus[ing] books [the] authors have written to train the models behind…popular chatbot ChatGPT and other artificial-intelligence based software” (Brittain, 2023, para. 1). Other creators, from visual artists to software coders, have also pursued legal restitution for what they allege is unlawful “scraping” of their copyrighted materials for use in training GenAI models (Kahveci, 2023). That GenAI tools may violate existing laws does not itself necessarily signify a problem; that creators did not consent to their works being scraped, and are neither compensated nor credited for the use of their works, is more saliently troubling. As it happens, “the vast majority of data that generative AI systems have assimilated…have been de facto obtained without the express authorisation of the rights holders” (Lucchi, 2023, pp. 14–15): one dataset alone comprised over 70,000 pirated books (Chesterman, 2024). Accordingly, the number of authors “asking artificial intelligence companies like OpenAI and Meta to stop using their work without permission or compensation” has reached the thousands (Veltman, 2023, para. 1).

As mentioned, the “illegally acquired” (Lucchi, 2023, p. 13) text and data mined for use as training inputs is not the only cause for concern. Many feel that the tools’ outputs also infringe on intellectual property rights because they “directly compete with the originally ingested materials” (Lucchi, 2023, p. 13). Additional lawsuits allege that this obscures consumer choice in and access to the original materials, which does not constitute fair use (Kahveci, 2023; Lucchi, 2023). Sometimes, the outputs are not only derivative, but they also contain entire “parts of works that are in the training data” (Kahveci, 2023, p. 798); one analysis (Kandeel & Eldakak, 2024) found that outputs reproduced poetic verses from training inputs verbatim and without credit. Legality aside, that GenAI tools produce outputs replicating existing works—whether they’re works in the public domain or in copyright—runs contrary to values writing centers and their staff broadly uphold regarding responsible source use. The discussions that follow in our scholarship and in our centers regarding writing with GenAI should therefore address this discrepancy.

Importantly, though there is perhaps a dissonance in incorporating into our writing center work technologies that have potentially negative material consequences for working writers (all while the companies which own said technologies make billions) (Brittain, 2023), it is not just bestselling authors who are affected. In June of 2024, Meta updated its policy so that public content posted to its platforms, including photos and captions, can be used as GenAI training data; users in Europe can opt out of this, but users in the United States—who were not notified of the change—cannot (Mauran, 2024). We can surmise from Renee Brown et al.’s (2007) momentous article “Taking On TurnItIn: Tutors Advocating Change” that since some student writers have objected to having their works submitted to TurnItIn’s proprietary database, some might ostensibly object to having their works scraped for text and data mining (TDM).

Gavin P. Johnson (2023) posits, “it is absolutely necessary to model critical digital literacies by examining and being transparent about the potential impact engaging with tools, like [C]hatGPT, has on user data and intellectual property” (p. 172). Therefore, below is a collection of sources which more specifically address this emerging aspect of the conversation. Because the scope of GenAI is both transdisciplinary and international, the sources reflect standards for research and academic writing as well as intellectual property laws of the United States of America and elsewhere. Readers will find recommendations from experts in various fields, including writing centers, that academics, administrators, and tutors alike might use as a foundation for ensuing conversations in their writing center work regarding GenAI.

The sources represent the status quo at the time of writing. Because GenAI is dynamic and quickly developing, readers can expect, even perhaps by the time of publication, that there might be advancements in the technology, changes to its policies and user agreements, resolutions to ongoing lawsuits, and so forth.

Relevant Sources

Sources which illuminate the history of writing with automation and AI include M. D. Bryan’s “Bringing AI to the Center: What Historical Writing Center Software Discourse Can Teach Us about Responses to Artificial Intelligence-Based Writing Tools” (2024); W. Hart-Davidson’s “Writing with Robots and Other Curiosities of the Age of Machine Rhetorics” (2018); G. P. Johnson’s “Don’t Act Like You Forgot: Approaching Another Literacy ‘Crisis’ by (Re)Considering What We Know about Teaching Writing with and through Technologies” (2023); and T. Laquintano et al.’s introduction to TextGenEd: Teaching with Text Generation Technologies (2023).

Sources which deliberate questions of academic integrity and GenAI as an author/co-author include D. R. E. Cotton et al.’s “Chatting and cheating: Ensuring academic integrity in the era of ChatGPT” (2023), which was, interestingly, penned almost entirely by ChatGPT, and M. E. Kandeel & A. Eldakak’s “Legal dangers of using ChatGPT as a co-author according to academic research regulations” (2024); while helpful sources more broadly exploring authorship and agency include C. R. Miller’s “What Can Automation Tell Us about Agency?” (2018); C. Seader, J. Markins & J. Canzonetta’s “Mediated authority: The effects of technology on authorship” (2018); and D. Tan & W. Liang Tan’s “AI, author, amanuensis” (2022). In tandem with contemporary sources, it may be useful to revisit foundational texts which have critiqued and progressed our understanding of authorship, including R. M. Howard’s “Sexuality, Textuality: The Cultural Work of Plagiarism” (2000), as well as A. Lunsford’s “Collaboration, Control, and the Idea of a Writing Center” (1991).

Privacy and intellectual property issues surrounding GenAI are considered in S. Chesterman’s “Good models borrow, great models steal: intellectual property rights and generative AI” (2024); Z. Ü. Kahveci’s “Attribution problem of generative AI: a view from US copyright law” (2023); N. Lucchi’s “ChatGPT: A Case Study on Copyright Challenges for Generative Artificial Intelligence Systems” (2023); and S. Falati’s “How ChatGPT Challenges Current Intellectual Property Laws” (2023).

Some sources provide frameworks for dialogue as well as recommendations for writing with GenAI across various educational and research settings, all of which writing center stakeholders may draw from. These include A. Bedington et al.’s “Writing with generative AI and human-machine teaming: Insights and recommendations from faculty and students” (2024); V. G. Dianova et al.’s “Discussing ChatGPT’s implications for industry and higher education: The case for transdisciplinarity and digital humanities” (2023); T. Foltynek et al.’s “ENAI Recommendations on the ethical use of Artificial Intelligence in Education” (2023); D. T. K. Ng et al.’s “Conceptualizing AI literacy: An exploratory review” (2021); and D. R. Rowland’s “Two frameworks to guide discussions around levels of acceptable use of generative AI in student academic research and writing” (2023).

For relevant scenarios and strategies specific to writing center tutors, see T. Deans et al.’s “AI in the Writing Center: Small Steps and Scenarios” (2023); M. Imran & N. Almusharraf’s “Analyzing the role of ChatGPT as a writing assistant at higher education level: A systematic review of the literature” (2023); and T. W. Kim & Q. Tan’s “Repurposing Text-Generating AI into a Thought-Provoking Writing Tutor” (2023). An exceptional frame of reference for empirical instances of texts usurped and algorithmically reconstituted (and the “legal sleight[s] of hand” [p. 14] which can follow) is R. Brown et al.’s “Taking on TurnItIn: Tutors Advocating Change” (2007). Lastly, A. Byrd’s “Truth-Telling: Critical Inquiries on LLMs and the Corpus Texts That Train Them” (2023) examines further implications of texts used to train GenAI tools.

Future Directions

Because of the exciting and still unknown potential GenAI holds for writing at large, we may find ourselves in something of a dilemma: how can we harmonize writing center work with writing tools that could display source use widely considered impermissible? Going forward, we should consider this dilemma a productive one. GenAI and its potentially plagiaristic means to its ends present us with an opportunity to pluralize the perspectives present in the discussions we have as a field, within our centers, and within individual tutoring sessions, for writing center pedagogy is often “more about conversations than answers” (McKinney, 2013, p. 58).

In fact, writing centers should consider themselves not just participants in this arena but thought leaders, given that writing center staff work at the confluence of composition (and the tools which assist in it) and institutional standards for academic integrity. Where the technology of GenAI intersects with or even infringes on common writing center tenets around plagiarism prevention is precisely the locus where we can create more robust knowledge about attribution, authorship, plagiarism, and even germane legislation, especially as copyright laws evolve to keep pace with advances in technology (Lucchi, 2023). William Hart-Davidson (2018) puts it well when suggesting that we [2] “can perhaps be among those who influence both how they [robots] work and how they are incorporated into the writing practices of people and institutions” (p. 254).

Indeed, writing center administrators and tutors have already been visible innovators and advocates where technology, intellectual property, and enterprise overlap. For example, “Taking On TurnItIn: Tutors Advocating Change” (Brown et al., 2007) was authored primarily by peer tutors, suggesting an empirical interest in the topic. It also points to a body of work upon which we can encourage tutors to continue building. Questions scholarship might address with respect to this include:

    • How can writing centers lead discussions about GenAI if the technology functionally challenges the values and practices our centers uphold? 
    • How should GenAI nuance and shape our conceptualizations of plagiarism?
    • How might we prepare tutors to meet student writers’ questions of using and citing GenAI? 
    • When should GenAI be considered an author/co-author?
    • Might some versions of GenAI (e.g., ChatGPT) be more appropriate, and perhaps more ethical, for writing center work than others?

This conversation shaper and the sources therein initiate a fuller and more transparent recognition of accounts that Selfe and later McKinney would likely contend should not go unnoticed. Among these: that many GenAI tools are not only made possible by “illegally obtained” and “pirated” texts (Chesterman, 2024, p. 2) but also fail to credit them, even when they’re directly quoted in user-prompted outputs (Kandeel & Eldakak, 2024). These truths—paradoxical, multiplicitous—warrant inclusion in the conversation of GenAI and writing, for what we give our attention to begets and becomes our prominent reality (McKinney, 2013). Writing center administrators and tutors should be pragmatic in critically interpreting the moment we’re in and pointing the way for the future of writing with GenAI.

References

American Psychological Association. (2023, August). APA publishing policies. https://www.apa.org/pubs/journals/resources/publishing-policies?tab=3

Arathdar, D. (2021). Literature, narrativity and composition in the age of artificial intelligence. TRANS- [Online], 27. https://doi.org/10.4000/trans.6804

Barrot, J. S. (2023). Using ChatGPT for second language writing: Pitfalls and potentials. Assessing Writing, 57. https://doi.org/10.1016/j.asw.2023.100745

Bauman, K. (2004). Raising questions about plagiarism. In S. Bruce & B. Rafoth (Eds.), ESL writers: A guide for writing center tutors (pp. 105–116). Boynton/Cook.

Bedington, A., Halcomb, E. F., McKee, H. A., Sargent, T., & Smith, A. (2024). Writing with generative AI and human-machine teaming: Insights and recommendations from faculty and students. Computers and Composition, 71.

Brittain, B. (2023, December 20). Pulitzer-winning authors join OpenAI, Microsoft copyright lawsuit. Reuters. https://www.reuters.com/legal/pulitzer-winning-authors-join-openai-microsoft-copyright-lawsuit-2023-12-20/

Brown, R., Fallon, B., Lott, J., Matthews, E., & Mintie, E. (2007). Taking on Turnitin: Tutors advocating change. The Writing Center Journal, 27(1), Article 4. https://doi.org/10.7771/2832-9414.1613

Bryan, M. D. (2024). Bringing AI to the center: What historical writing center software discourse can teach us about responses to artificial intelligence-based writing tools. In Andrews, C. D. M., Chen, C., & Wilkes, L. (Eds.), The Proceedings of the Annual Computers and Writing Conference, 2023, 15–26. WAC Clearinghouse. https://doi.org/10.37514/PCW-B.2024.2296.2.02

Buriak, J. M., Akinwande, D., Artzi, N., Brinker, C. J., Burrows, C., Chan, W. C. W., Chen, C., Chen, X., Chhowalla, M., Chi, L., Chueh, W., Crudden, C. M., Di Carlo, D., Glotzer, S. C., Hersam, M. C., Ho, D., Hu, T. Y., Huang, J., Javey, A., …Ye, J. (2023). Best practices for using AI when writing scientific manuscripts: Caution, care, and consideration: Creative science depends on it. ACS Nano, 17(5), 4091-4093. https://doi.org/10.1021/acsnano.3c01544

Byrd, A. (2023). Truth-telling: Critical inquiries on LLMs and the corpus texts that train them. Composition Studies, 51(1), 135–142.

Chesterman, S. (2024). Good models borrow, great models steal: Intellectual property rights and generative AI. Policy and Society. https://doi.org/10.1093/polsoc/puae006

Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148

Cummings, R.E., Monroe, S. M., & Watkins, M. (2024). Generative AI in first-year writing: An early analysis of affordances, limitations, and a framework for the future. Computers and Composition, Vol. 71. https://doi.org/10.1016/j.compcom.2024.102827

Deans, T., Praver, N. & Solod, A. (2023, August 1). AI in the writing center: Small steps and scenarios. Another Word. https://dept.writing.wisc.edu/blog/ai-wc/

Dianova, V. G., & Schultz, M. D. (2023). Discussing ChatGPT’s implications for industry and higher education: The case for transdisciplinarity and digital humanities. Industry and Higher Education, 37(5), 593–600. https://doi.org/10.1177/09504222231199989

Escalante, J., Pack, A., & Barrett, A. (2023). AI-generated feedback on writing: Insights into efficacy and ENL student preference. International Journal of Educational Technology in Higher Education, 20(1), 57. https://doi.org/10.1186/s41239-023-00425-2

Falati, S. (2023, February 22). How ChatGPT challenges current intellectual property laws. New York Law Journal. 

Foltynek, T., Bjelobaba, S., Glendinning, I., Khan, Z. R., Santos, R., Pavletic, P., & Kravjar, J. (2023). ENAI recommendations on the ethical use of artificial intelligence in education. International Journal for Educational Integrity, 19. https://doi.org/10.1007/s40979-023-00133-4

Harris, M. (1995). Talking in the middle: Why writers need writing tutors. College English, 57(1), 27–42.

Hart-Davidson, W. (2018). Writing with robots and other curiosities of the age of machine rhetorics. In Alexander, J. & Rhodes, J. (Eds.), The Routledge handbook of digital writing and rhetoric. Routledge. https://doi.org/10.4324/9781315518497

Howard, R. M. (2000). Sexuality, textuality: The cultural work of plagiarism. College English, 62(4), 473–491. https://doi.org/10.2307/378866

Hu, Y. (2023). Literature in the age of artificial intelligence: A preliminary study on the big language model AI. Advances in Social Science, Education and Humanities Research. https://doi.org/10.2991/978-2-38476-092-3_228

Imran, M., & Almusharraf, N. (2023). Analyzing the role of ChatGPT as a writing assistant at higher education level: A systematic review of the literature. Contemporary Educational Technology, 15(4), ep464. https://doi.org/10.30935/cedtech/13605

Johnson, G. P. (2023). Don’t act like you forgot: Approaching another literacy “crisis” by (re)considering what we know about teaching writing with and through technologies. Composition Studies, 51(1), 169–175.

Kahveci, Z. Ü. (2023). Attribution problem of generative AI: A view from US copyright law. Journal of Intellectual Property Law & Practice, 18(11), 796–807. https://doi.org/10.1093/jiplp/jpad076

Kandeel, M. E. & Eldakak, A. (2024). Legal dangers of using ChatGPT as a co-author according to academic research regulations. Journal of Governance & Regulation, 13(1), 289–298. https://doi.org/10.22495/jgrv13i1siart3

Kim, T. W. & Tan, Q. (2023). Repurposing text-generating AI into a thought-provoking writing tutor, ArXiv. https://doi.org/10.48550/arXiv.2304.10543

Kraaijeveld, S. R. (2024). AI-generated art and fiction: Signifying everything, meaning nothing? AI & Society. https://doi.org/10.1007/s00146-023-01829-4

Laquintano, T., Schnitzler, C. & Vee, A. (2023). Introduction to teaching with text generation technologies. In A. Vee, T. Laquintano, & C. Schnitzler (Eds.), TextGenEd: Teaching with text generation technologies. WAC Clearinghouse. https://doi.org/10.37514/TWR-J.2023.1.1.02

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7). https://doi.org/10.1016/j.patter.2023.100779

Lucchi, N. (2023). ChatGPT: A case study on copyright challenges for generative artificial intelligence systems. European Journal of Risk Regulation, 1–23. https://doi.org/10.1017/err.2023.59

Lunsford, A. (1991). Collaboration, control, and the idea of a writing center. The Writing Center Journal, 12(1), 3–10.

Mauran, C. (2024, May 30). Meta is using your posts to train AI. It’s not easy to opt out. Mashable. https://mashable.com/article/meta-using-posts-train-ai-opt-out

McAdoo, T. (2023, April 7). How to cite ChatGPT. APA Style Blog. https://apastyle.apa.org/blog/how-to-cite-chatgpt

McKinney, J. G. (2013). Peripheral Visions for Writing Centers. University Press of Colorado. https://doi.org/10.2307/j.ctt4cgk97

Miller, C. R. (2018). What can automation tell us about agency? In Gunn, J. & Davis, D. (Eds.), Fifty Years of Rhetoric Society Quarterly. Routledge.

Mills, A. (Curator). (2022). AI text generators and teaching writing: Starting points for inquiry. WAC Clearinghouse. https://wac.colostate.edu/repository/collections/ai-text-generators-and-teaching-writing-starting-points-for-inquiry/

Modern Language Association (2023, March 17). How do I cite generative AI in MLA Style? Ask the MLA. https://style.mla.org/citing-generative-ai/ 

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100041

Rowland, D. R. (2023). Two frameworks to guide discussions around levels of acceptable use of generative AI in student academic research and writing. Journal of Academic Language and Learning, 17(1), 3169. 

Seader, C., Markins, J., & Canzonetta, J. (2018). Mediated authority: The effects of technology on authorship. In Alexander, J. & Rhodes, J. (Eds.), The routledge handbook of digital writing and rhetoric. Routledge. https://doi.org/10.4324/9781315518497

Selfe, C. L. (1999). Technology and literacy: A story about the perils of not paying attention. College Composition and Communication, 50(3), 411–436.

Shamoon, L., & Burns, D. H. (1999). Plagiarism, rhetorical theory, and the writing center: New approaches, new locations. In Buranen, L. & Roy, A. M. (Eds.), Perspectives on plagiarism and intellectual property in a postmodern world (pp. 183–192). SUNY Press.

Su, Y., Lin, Y., & Lai, C. (2023). Collaborating with ChatGPT in argumentative writing classrooms. Assessing Writing, 57. https://doi.org/10.1016/j.asw.2023.100752

Tan, D. & Liang Tan, W. (2022). AI, author, amanuensis. Journal of Intellectual Property Studies, 5(2), 132.

Veltman, C. (2023, July 17). Thousands of authors urge AI companies to stop using work without permission. NPR.

Woo, D. J., Susanto, H., Yeung, C. H., Guo, K., & Fung, A. K. Y. (2024). Exploring AI-Generated text in student writing: How does AI help? Language Learning & Technology, 28(2), 183–209. 

Zohny, H., McMillan, J., & King, M. (2023). Ethics of generative AI. Journal of Medical Ethics, 49(2), 79–80. https://doi.org/10.1136/jme-2023-108909

Footnotes
    1. Here, “multilingual writers” is used as an umbrella term; the terminology used by Barrot (2023) is “second language (L2) writers”; by Escalante et al. (2023), “English as a New Language (ENL) learners”; by Liang et al. (2023), “non-native English writers”; and by Woo et al. (2024), “English as a Foreign Language (EFL) students.”
    2. The audience Hart-Davidson addresses is one of rhetoric and composition professionals. His call can extend to writing center practitioners because, though the instruction differs, they also support the teaching and learning of writing (Harris, 1995).
https://thepeerreview-iwca.org