Balancing Efficiency and Ethics: Student Perspectives on ChatGPT

Mariana Zieve-Cohen, Middlebury College 
Nhan Huynh, Middlebury College 
Genie Nicole Giaimo, Hofstra University

Abstract

This study investigates student perspectives on and use of generative AI tools, like ChatGPT, in undergraduates' writing and learning processes through interviews with 15 students at Middlebury College. The study methodology consisted of semi-structured interviews, with questions focused on eliciting experiences of and attitudes toward ChatGPT. Analysis of transcripts using open coding revealed that students found ChatGPT to be a helpful tool for structuring academic and personal writing and learning tasks. However, students also expressed ethical concerns about academic integrity and a range of positive, neutral, and negative attitudes toward generative AI. Students actively using ChatGPT exhibited pragmatic attitudes about improving efficiency and productivity, while non-users expressed reservations about intellectual impacts and cheating. First-year students tended to have the strongest anti-ChatGPT sentiments. The researchers applied these findings to writing center praxis, including training interventions focused on ethical technology use, student values, and workload pressures. The study underscores the importance of nuanced approaches to incorporating generative AI in writing centers that consider its benefits alongside its ethical risks. 

Keywords: Generative AI, Writing Process, Student Perceptions, Learning Process, Writing Centers, ChatGPT.

Introduction

The introduction of ChatGPT into the educational landscape has been radical and swift. ChatGPT, a generative AI system created to provide information to users through text-based conversations, is powered by a neural network machine learning model called the Generative Pre-trained Transformer (GPT). The GPT processes the user’s input, analyzes patterns, and predicts the most appropriate responses. ChatGPT was trained on a huge dataset of online text conversations to learn the nuances of human language. It also received reinforcement learning from human trainers who provided sample conversations and ranked the chatbot’s responses. This training enables ChatGPT to have more human-like contextual conversations on a wide range of topics. While very capable, ChatGPT does not think or feel on its own; rather, it relies on continuous feedback from users and developers–as well as the OpenAI database that stores users’ conversation data–allowing the system to keep improving through machine learning. OpenAI boasts about ChatGPT’s ability to comprehend prompts and provide detailed responses—a capability that has garnered widespread attention (“Introducing ChatGPT,” n.d.). ChatGPT was introduced by OpenAI in November 2022; it took two months for it to reach 100 million users (“What is ChatGPT,” n.d.). 

Although ChatGPT “dropped” in late November 2022 to media hype and some fanfare, institutional conversations at Middlebury College started in Winter 2023. Faculty wanted to reassess their curricula and revise syllabi, assignments, exams, and other teaching materials. Initially, institutional discussion was tentative and quite preliminary, and it often took the form of discussion and debate rather than tool demos or roundtables. A prevailing concern among faculty members was the potential for students to misuse ChatGPT, compromising the integrity of our college’s honor code. In Spring 2023, writing center peer tutors facilitated a workshop that brought faculty, staff, and students together for the first time to discuss generative AI and its uses and drawbacks. Official syllabus language and institutional policy on generative AI were not shared until late summer 2023, a couple of weeks before the Fall semester. Over the past year and a half, these conversations have largely centered on faculty and staff and the impacts of ChatGPT on student learning, rather than on students’ engagement with this technology, which, the researchers believe, is a critical missing voice in research on generative AI writing technologies. 

This study investigates students’ attitudes towards and engagement with generative AI, with a specific focus on students’ interactions with ChatGPT. In studying how students complete academic assignments using external supports like AI writing tools and peer tutoring, we can apply what we learn about students’ academic journeys to writing center training and praxis. Because Middlebury College has a robust embedded tutoring model (writing fellows) for first year and upper-level writing courses (n=65 courses supported annually), this peer tutor-led research project can support wraparound interventions on teaching/tutoring writing in the age of generative AI. 

We conducted semi-structured interviews with fifteen students, seeking candid insights into their experiences with ChatGPT and their perceptions of its presence on the Middlebury College campus. We wondered how the Writing Center and tutoring would be impacted by ChatGPT. Because the Writing Center is an important source of support for students at higher education institutions, this technology puts it at the center of conversations about generative AI; yet, from our research and experience at our institution, we are just one of many spaces where these discussions are taking place–everyone is concerned about generative AI and figuring out how to respond. Through this research, we aim to gain a more comprehensive understanding of how students were actively employing ChatGPT in their educational pursuits during the first full semester following OpenAI’s release of ChatGPT. 

Initially, we assumed that attitudes toward AI learning tools would split along disciplinary lines, with STEM majors holding more positive attitudes and Humanities majors perceiving ChatGPT negatively. Our assumption was that writing–while arguably done across the curriculum at our college through a two-tier writing requirement (first-year seminar and second-level writing)–is not completed in uniform ways, which impacts student writing practices. For example, many students delay the second required writing course, sometimes until senior year. We found, however, that Humanities and STEM majors have mixed attitudes toward ChatGPT and varied usage practices that are not consistent across disciplines. In exploring the dynamics between writing center usage and AI usage, we also hypothesized that there exists a meaningful connection between the two. Our findings, however, indicate that AI usage and writing center usage are siloed, which we discuss below.  

During our study, we observed how rapidly the AI learning landscape has changed: new and improved technologies are released frequently. Furthermore, AI is having a broad impact not only on academic and public writing but on many other fields, like visual arts, law, marketing, journalism, and medicine. AI has become a modern-day gold rush (replete with cottage industries offering AI tutoring, AI assessment, and AI hiring to higher education institutions). With its vast potential to reshape many industries and our society, it is imperative that we examine both the potential benefits and the challenges associated with this boom. By understanding the ramifications of AI advancements for writing center praxis, we can navigate this transformative era responsibly, ensuring that benefits are maximized while potential risks are mitigated. The emergence of this technology is challenging traditional perspectives on academic integrity and human capabilities.

Literature Review

The integration of large language models (LLMs) like ChatGPT into education has gained substantial attention over the past two years. The sources analyzed cover diverse aspects, including student interactions, academic integrity concerns, the quality of generated content, ethical considerations, and the perspectives of educators and students. 

Several research studies contribute to our understanding of ChatGPT’s role in education. Han et al. (2023) present the RECIPE4U dataset, focusing on semester-long interactions between college English as a Foreign Language (EFL) students and ChatGPT as they wrote and revised their essays. Through its data on students’ interactions with ChatGPT, including revisions, chat conversations, and student-specific data on intent and satisfaction levels, the dataset reveals much about how students engage with ChatGPT in long-term, extensive writing and revision processes. Herbold et al. (2023) compared human-written and ChatGPT-generated essays and found that essays generated by AI were consistently evaluated as higher quality by teachers. Furthermore, the researchers found significant differences in the linguistic characteristics of writing created by humans and writing generated by AI: human writing was more loosely constructed and variable, while AI-generated writing had a repetitive and rigid structure. They ultimately argue that teachers need to reimagine homework, teach the writing process, and incorporate AI into basic learning outcomes in order to free up time to focus on more complex learning goals (Herbold et al., 2023). 

One common theme in scholarship and media pieces on generative AI is how it has impacted academic integrity, which, in turn, impacts how educators teach (especially how they teach writing). Several authors (Sullivan et al., 2023; Gecker, 2023; Nolan, 2023) highlight faculty concerns with cheating and misuse of AI tools. Some faculty have changed their pedagogical approach, turning to proctored exams, pen-and-paper writing tasks, and redesigned assignments (D’Agostino, 2023). Others (Nolan, 2023) recognize the difficulty of proving that students used AI and therefore cheated on writing assignments. Additionally, researchers have found that AI detector software is not always accurate; for example, it is biased against multi-language learners (Myers, 2023).   

An element of the focus on academic integrity and ChatGPT is the “how-to” genre, which offers faculty measured advice such as “don’t panic” and bring students into the AI conversation (D’Agostino, 2023). Many of these more popular pieces collect and share advice by and for academics on how to harness ChatGPT’s capabilities while mitigating risks and emphasizing human-centric learning. Yet, while these guides ask educators to embrace generative AI and thoughtfully integrate it into their pedagogy, they also raise the specter of academic misconduct and the challenges educators currently face as they teach writing and critical thinking. This kind of “doublespeak” around generative AI in higher education research and popular media–we cannot get left behind, but there are many hazards to teaching in the age of gen-AI–underscores the need for data-driven and student-centered research on this topic. We also need to learn more about student perceptions of this technology: preliminary findings suggest that students are not cheating any more than they did before generative AI (Singer, 2023), and we have found that they hold varied attitudes about using generative AI in their academic work.

We need more detailed classroom-based research on generative AI to determine whether faculty concern, including at Middlebury College, is warranted, as well as how this technology can be effectively deployed. Beshero-Bondar (2023) gives readers a glimpse of how this technology can be deployed in the classroom by outlining a pedagogical approach that gradually introduces several computer science concepts to students. For example, students learn about natural language processing (NLP) concepts by utilizing ChatGPT as a conversational agent in different assignments, beginning with crafting prompts for ChatGPT, saving responses, and building a corpus of texts for further exploration. Students then progress to processing the texts in Python, where they learn to select words of interest and explore similarity calculations using Python and spaCy. In this way, students are gently introduced to AI, developing their understanding of NLP while acquiring Python skills for working with large datasets, and the approach sparks ethical considerations, like bias in AI word choice. Other authors, like Ju (2023) and Malik et al. (2023), also emphasize the need for responsible integration and transparency in training programs and educational settings. Frazier and Hensley (2023) suggest incorporating generative AI tools into a college course focused on learning and motivation strategies as a way to enhance metacognition, critical thinking, and ethical use. Their approach involves introducing ChatGPT to students, prompting them to generate S.M.A.R.T. goals (Specific, Measurable, Achievable, Relevant, and Time-Bound), and then comparing these goals to ones they crafted themselves. Reflection and class discussions center on evaluating the outputs generated by AI, recognizing the limitations of the technology, and considering the ethical implications for learning and future careers. 

There is a split between scholars who believe generative AI will improve student learning outcomes, like writing and critical thinking skills, and those who do not. Several (Frazier and Hensley, 2023; Dayton and Buck, 2023) propose an ethical and positive incorporation of AI in the writing classroom that could improve student writing outcomes. Dayton and Buck (2023) argue that the rise of ChatGPT gives educators a chance to rethink their writing pedagogy and to reframe the exigency of teaching writing to students. In other words, we can center student voices, improve our assignments and assessments, and center the writing process. In their framing, ChatGPT is a tool rather than a game-changing technology. Similarly, Frazier and Hensley provide a writing assignment that incorporates ChatGPT to improve students’ metacognition and their generative AI literacy. Although these are pedagogical pieces rather than empirical ones, they demonstrate early-adopter enthusiasm for the technology with some focus on ethics and critical thinking.  

Aimée Morrison (2023) explores the role of AI in writing, discussing applications such as grammar and spell-checking, content creation, language translation, text analysis, and writing assistants. While acknowledging the potential benefits of AI in assisting writers, Morrison, like many others, emphasizes that AI should be viewed as a supplement rather than a substitute for human creativity and critical thinking. Morrison’s writing is characterized by personal expression, giddiness, and a desire to engage the reader emotionally. In contrast, the AI-generated response to her article’s topic is described as correct, mild-mannered, and objective but lacking in depth and coherence. Morrison makes the distinction between writers and writing, which, ultimately, challenges both why we write and what we produce when writing. The article advocates for nurturing students’ passion for writing and encouraging them to express their own ideas, challenging the notion that writing is about approaching a predefined ideal and grammatically correct answer. Like several other scholars, then, Morrison challenges us to rethink the project of teaching and assigning writing and to center meaning making and student-driven inquiry. 

In writing centers, scholars are exploring the impact of ChatGPT on feedback and tutoring praxis. Lindeberg et al. (2023) discuss the influence of AI chatbots, particularly ChatGPT, on writing centers, addressing concerns about authorship and the need for administrators to adapt to potential changes. Deans et al. (2023) provide real examples of using ChatGPT in writing center sessions and highlight ethical considerations. Bryan (2023), referencing the history of technology and its impact in writing centers, argues that they “can expect to become sites of negotiation around questions of ownership of AI-(co)authored texts and the value of AI-supported pedagogies on college and university campuses” (p. 16). Steiss et al. (2024) found that trained human experts give more effective, higher-quality writing feedback than ChatGPT, but that ChatGPT might be used early in the writing process to address time pressures and the constraints of providing feedback to large numbers of students. However, though conversations on listservs like WCenter and in professional organizations like IWCA and C’s are now focusing on emerging technology and generative AI in conference themes and proceedings in 2024 and beyond, there are few articles about ChatGPT in writing centers, especially on student attitudes toward this technology.

As a field–and in higher education more broadly–we are still at the very early stage of examining and applying this technology. Current research and popular pieces underscore the impact of ChatGPT on writing practices, with concerns raised about potential threats to critical thinking skills and academic integrity/authorship. There is also an ongoing dialogue about the delicate balance between harnessing the benefits of ChatGPT and addressing its challenges, with a consistent call for educators and policymakers to actively involve students in shaping ethical policies and practices. In writing centers, the challenges and opportunities of AI loom large, given the historical role that writing centers have occupied on college campuses around the United States and globally. 

Methods

Participants

This IRB-approved study interviewed fifteen undergraduate students (Table 1). Participants were recruited through flyers posted around campus and class announcements. Inclusion criteria required that participants be enrolled as full-time undergraduates at the time of the interview, which took place during spring 2023. Five were first-year students, three were sophomores, one was a junior, and six were seniors. Juniors were underrepresented because over half the junior class studies abroad, and February cohorts of students further complicate class rank. Additionally, the majority of the seniors interviewed were “super seniors” who took gap semesters due to COVID-19. 

Table 1 

Interview participants’ information by major, academic rank, attitude about ChatGPT and pseudonyms for reference in results and discussion section 

Name | Major | Year | Attitude about ChatGPT | Degree of ChatGPT Usage
Glyn | Economics | Senior | Positive | Frequent
Paul | Economics and Computer Science | Senior | Positive | Occasional-Frequent
Russ | Physics | First Year | Positive | Occasional
Sunny | Chinese and Architecture | Sophomore | Positive | Occasional
Asher | Neuroscience | First Year | Positive | Frequent
Cassie | Political Science | Senior | Positive | Frequent
Wesley | Geography | First Year | Negative | Never
Willow | Computer Science | Sophomore | Negative | Occasional
Rachel | Neuroscience | Junior | Negative | Once or Twice
Gregory | Undecided/Humanities | First Year | Negative | Never
Lorenzo | Chemistry | Senior | Negative | Once
Perry | English | Senior | Neutral | Frequent
Teddy | Psychology and Sociology | Senior | Neutral | Once or Twice
Ethan | Intended Political Science | First Year | Neutral | Twice
Ryder | Political Science and Religion | Sophomore | Neutral | Occasional

Materials

Initially, IRB approval (and exemption) was secured by the research team. Materials for the study included the interview questions (Appendix A), a written consent script, and a verbal consent script. Interviews were recorded and transcribed using Zoom or Otter.ai. 

Procedure

In-depth semi-structured interviews lasting approximately thirty minutes were conducted by three student researchers, either in person on campus in semi-public spaces, such as the library and open classrooms, or via Zoom. The interviewers were flexible with their time, and some interviews ran over by 10–15 minutes. Interviews were transcribed by Zoom’s or Otter.ai’s AI-powered tools. All transcripts were then checked for accuracy and “cleaned” manually by the student researchers before interpretation. Because of the institution’s small student population and residential structure (nearly all students live on campus for their undergraduate career), it was hard to control for variables like race and gender. The research team, however, included two male-identifying students and one female-identifying student, with a range of class years (one senior, one junior, one sophomore). Two of the student researchers were BIPOC and one was white. Additionally, all student researchers had human subjects training through our college’s IRB, and two had additional methods training from WRPR212: Issues and Methods in Tutoring Writing.  

Coding and Analysis

The interview transcripts were analyzed using open coding (Saldaña, 2011). The research team reviewed each transcript and identified repeated words and key concepts, first by interview participant and then by commonly occurring themes (Appendix B). We then listened to each interview to examine vocal interactions between the interviewer and interviewee. We analyzed tone, how questions were asked and responded to, and ways in which the interviewer shaped the questions and responses. Finally, we used the PANAS (positive and negative affect schedule) (Crawford & Henry, 2004) to analyze the attitudinal dispositions of the interview participants, which we detail in Table 1. The coding rubric was developed after several rounds of iterative coding, consensus and agreement testing among researchers, and further discussion. All codes and analyses were agreed upon by the research team through multiple rounds of individual coding followed by group meetings that included consensus forming on definitions. Disagreements were addressed through consultation with the PI and further iterative coding until agreement was reached. This required roughly seven rounds of individual coding cycles and follow-up meetings over six months. 

Results

Participants’ attitudes toward ChatGPT were affectively mixed: 40% of interview participants expressed positive attitudes toward ChatGPT, 33.3% expressed negative attitudes, and 26.7% expressed neutral attitudes (Table 1). First-year students’ attitudes about ChatGPT were more evenly split across these categories than those of sophomores or seniors. Furthermore, first-year students were less likely to use ChatGPT compared to students of other academic ranks (Table 1). All the seniors had used ChatGPT at least once, with several of them expressing that they used the technology frequently. The majority of participants had attended the writing center at least once, mainly through their First Year Seminar and the embedded writing fellows program. In terms of the relationship between attitude about ChatGPT and degree of usage, we found that all participants who felt positive used the technology regularly or occasionally, while those who felt negative used ChatGPT infrequently (1–2 times) or never.

In our interviews, students demonstrated awareness and thoughtfulness regarding their use of ChatGPT, recognizing both its value and its shortcomings. Attitudes toward ChatGPT varied, with an almost even split between positive, negative, and neutral valences toward the tool. None of the students interviewed used ChatGPT to fully produce papers, though the tool was used for brainstorming and editing purposes. Additionally, students who either refrained from using ChatGPT entirely or had used it sparingly expressed profound reservations about the technology, whereas regular users raised fewer ethical concerns. Finally, all participants had worked with the Writing Center–mainly through its embedded tutoring program in First Year Seminar–but there was little intersection between the kind of support sought at the Writing Center and the support offered by ChatGPT. Whereas students noted that they came to the writing center for help with the final stages of writing, they used ChatGPT earlier in the process for outlining or for collecting quotes and other evidence. Moreover, students indicated that the writing center was somewhere they went if they were struggling or felt like they needed help, whereas using ChatGPT did not have such emotional valences. For example, Lorenzo, a senior Chemistry major, stated, “I wouldn’t say that I have much experience with the writing center. I haven’t really ever had too much of a problem going through with any of my writing assignments.” Here, Lorenzo implies that the writing center is a place for people with “problems.” There seems to be a negative connotation, or even shame, surrounding the writing center that ChatGPT does not carry. 

We also found that students thoughtfully reported their learning needs. Regardless of students’ attitudes about ChatGPT, they provided well-developed, insightful, if at times depressingly pragmatic, reasons for how they incorporate ChatGPT into their learning processes. Some factors include overwhelming workloads, differing learning processes, and low writing confidence; others are related to broader socio-cultural challenges. This study found that students are ready and willing to have conversations about the benefits and drawbacks of ChatGPT, hold a wide range of opinions about AI technologies, and are trying to figure out the purpose of their education and their agency in obtaining it.

Discussion

Our participants are deliberate in their engagement with ChatGPT, though their usage is highly varied. Humanities majors like Perry and Rachel attribute their cautious attitudes toward ChatGPT to disciplinary writing that requires analysis, inventiveness, reflection, and nuanced interpretation. STEM students like Russ view ChatGPT as a valuable tool for structuring their work, particularly for tasks related to lab reports and essays, while social science majors like Sunny and Paul find utility in ChatGPT for a broader spectrum of tasks, like writing computer code and drafting emails. While several interviewees embrace AI tools, others, like Wesley, prioritize a human-centric approach to education, echoing Aimée Morrison’s (2023) belief that writing instructors should empower student writers and nurture their ideas and voices. 

Students also differed in their conceptualizations of the goal of college, which we think might impact how they engage (or refrain from engaging) with ChatGPT. As many of the scholars we cite in the literature review note, ChatGPT and generative AI raise ethical concerns around academic integrity and the development and honing of critical thinking skills (Frazier and Hensley, 2023; Dayton and Buck, 2023). The learning culture at an expensive, highly selective institution is already competitive and transactional; several students who were interviewed expressed concerns about ChatGPT further eroding the purpose of a liberal arts education. The challenge lies in integrating technology like ChatGPT in ways that align with the values of rigorous learning, critical thinking, and skill development that are the essence of a liberal arts education.

Most interviewees have engaged with both the Writing Center and ChatGPT, but for divergent needs (working through feedback and revision on one end vs. study preparation, idea development, and shifting writing tone on the other). To this point, we discuss major-specific findings below and then offer guidance on preparing tutors to work with writers who use AI. We also provide resources for peer tutors to engage in campus-wide dialogue about ChatGPT, as we are observing siloed conversations about AI and writing that leave out students, who represent a necessary and diverse set of voices.

Attitudes Toward ChatGPT Impact Usage

Students who refrain from using ChatGPT, or use it sparingly, harbor reservations about the technology. For example, Wesley, a first-year geography major who has never used ChatGPT, expressed concerns about AI’s potential negative impacts on democracy and job security. His deliberate answers, along with his intentional avoidance of ChatGPT, seem to underscore his commitment to independent learning separate from enhancement tools, of which ChatGPT is arguably one. Wesley’s concerns about AI’s negative impacts may reflect broader societal anxieties about the consequences of increased automation and reliance on artificial intelligence. He stressed the importance of learning for oneself and actually doing the work: “frankly, I’m paying to be here in order to like, learn. And so I’ve never been one to take shortcuts. Even when […] stuff academically gets challenging.” Wesley does not think that there is an advantage to using ChatGPT because “at the end of the day, like I, you know, I’m gonna come out of it with more skills.” His comments highlight that, in the context of highly selective institutions, there exists a diverse range of attitudes toward technology like ChatGPT. 

Similarly, Gregory, an intended art history major who also has never used ChatGPT, vehemently rejected ChatGPT, labeling it an “anti-intellectual platform.” His strong language reflects concerns about the erosion of critical thinking and the potential dilution of intellectual discourse in a college setting. Remarkably, Wesley and Gregory–who are both first-year students–were deeply concerned with the ethical and democratic impacts of ChatGPT and AI, whereas other participants reported using ChatGPT to help them work more efficiently. We wonder if first-year students might be more eager and focused on learning for the sake of learning and independent inquiry. This might be a result of not yet being fully immersed in the academic intensity of a competitive and high-pressure institution like Middlebury College. Gregory and Wesley may feel less stress to complete tasks on time and more inclined to accomplish their work for a sense of fulfillment. As Gregory stated, “…it’s like, you’re forfeiting your ability to think, to a machine, you know, and I don’t know, maybe that sounds very, like, dystopian or whatever. And maybe corny, but I think just on its own that, like, giving that away is kind of insane.” Wesley said, “I’ve never been one to take shortcuts . . . I actually intentionally have avoided letting it change the way I’ve learned and think.” Gregory’s concerns about the impact of AI may also relate to his interest in and passion for the arts and humanities, given his major in art history. In some ways, as he notes, AI could make the project of learning less salient for certain students. 

Glyn, a senior economics major, on the other hand, has a utilitarian perspective that prioritizes efficiency over ethical considerations. She views ChatGPT as a practical tool for building outlines and optimizing academic performance. Glyn notes that ChatGPT has “made me a more effective learner to be able to just like, be more efficient in my learning by, you know, getting information faster and being able to expand the depths of my learning because it’s like having a teacher who knows everything, one on one in front of you at any time.” With its ability to summarize, create outlines, and produce and edit text, ChatGPT can eliminate tedium associated with time-consuming academic tasks. However, if students such as Glyn are outsourcing much of their work to AI tools, then perhaps this indicates a broader concern regarding the types of work professors are assigning to students. At Middlebury College, it seems that ChatGPT is providing students with a way to relieve the stress of time constraints as they are overloaded with homework and other activities and commitments. For example, Ryder, a double major in political science and religion, said, “Sometimes, if I’m behind on readings for class. I’ll ask it to summarize readings for me.” However, she also acknowledged that ChatGPT can worsen the situation, as it will sometimes summarize the wrong readings. 

Whereas students such as Gregory, Wesley, and Glyn had strong opinions either in favor of or against the usage of ChatGPT, other students had a more nuanced view. For example, Teddy, a senior psychology and sociology double major, stated that he does not use ChatGPT a lot, “but sometimes when I’m lazy, I do go into it to like getting if I’m looking for authors on a particular topic that I’m going to write about, instead of me Googling all of that, I can just go to it and ask it and then get the name of the authors and then go check out their works.” Although this statement does not seem to be a value judgment, Teddy’s usage of the word “lazy” is telling. In stating that he uses ChatGPT when he is feeling lazy, Teddy implies that he only uses it when he is unwilling–or unable–to complete the task. Furthermore, Teddy’s word choice hints at a sense that ChatGPT should not be used and that there is a better way to do academic work.

Ethical Considerations and Learning Efficiency

Gregory’s criticism extends beyond ChatGPT to machine learning platforms generally, deeming them “corrosive and insidious” to society at large, not just to colleges. He says, “I think it’s part of this like anti-intellectual thread that kind of runs through the political climate, but also just, like, everyday life where people are like, Oh, it’s funnier, more convenient. Time, you know, it’s easier to just, you know, shrug something off when it’s a difficult question instead of, like, actually sitting down and dealing with it. Like all these tech bros that are like making or using this stuff.” Glyn, however, acknowledges the tool’s limits but perceives it as a means to accelerate learning and accomplish tasks more efficiently. Despite their widely different attitudes toward generative AI, these students identify several ethical implications of using technology to streamline academic and professional work. Gregory’s concern about a culture obsessed with convenience over deep thinking, however, is more dire than issues of gen-AI plagiarism or shoddy output.

Perry and Rachel, Humanities and STEM majors respectively, expressed ethical concerns and fears of potential plagiarism that shaped how they incorporated ChatGPT into their learning. Ryder highlighted the limitations of ChatGPT, expressing frustration over its tendency to provide incorrect or confused summaries of readings. Despite finding it often more of a hindrance than a help, Ryder acknowledged using the AI platform. She doubted its advanced nature, stating, “I don’t think it’s that advanced yet, and it’s not smarter than me yet.” These perspectives underscore students’ pragmatic decision to use ChatGPT despite its acknowledged shortcomings, many of which center on academic integrity, a mainstay of modern academic culture. This raises intriguing considerations about the practicality and advantages students may derive from such tools despite drawbacks like fabricated or misrepresented information. At our college, a faculty committee is assessing generative AI’s impact on practices related to our honor code because of the challenges the technology poses to a culture of academic integrity; however, the “wild West” of gen-AI usage rules, which vary from instructor to instructor and class to class, continues as we write this in summer 2024.

Students Seek Different Supports for Different Writing Needs

While most of the students we interviewed have engaged with the Writing Center at some point, their motivations for doing so differed from their motivations for using ChatGPT. The Writing Center appears to serve a function distinct from ChatGPT, insofar as students engage in idea development, revision, and decoding instructor feedback in tutoring sessions, whereas ChatGPT is not as embedded in these parts of the writing process. One student remarked, “Honestly, I was able to talk about my ideas, and think that I was able to like, organize my essay and get some nice tips,” while another said, “I have on some instances worked with the Writing Center through Oratory Now to help train people.” These students emphasize the human-to-human contact that the Writing Center provides, which might help guide writing center studies’ research on generative AI. Currently, students report using the Writing Center and ChatGPT for quite separate and nuanced needs. Further exploration of the intersections and divergences between writing center clientele and adopters of ChatGPT will contribute to a more comprehensive analysis of the current landscape of writing assistance “tools” and services provided in a university setting.

Where Do We Go From Here? Implications for Writing Centers

We believe that peer writing tutors could perform a necessary role in facilitating conversations among students and in acting as a link between the student and faculty/staff populations. Already, peer writing tutors assume a crucial mediating role between students and professors, especially in our embedded tutoring program. Tutors adeptly navigate both the student and educator perspectives, effectively bridging the gap in conversations about ChatGPT taking place between students and faculty/staff. Through deliberate writing across the curriculum (WAC) tutor training models, writing centers can provide guidance and support to faculty and students alike on generative AI writing tools.

Our writing center is also actively trying to understand ChatGPT and students’ perceptions and attitudes toward AI-generated writing tools. This includes examining the technology landscape from a student’s perspective, as well as training tutors to discuss ChatGPT in their sessions. We added questions about generative AI use into our appointment and client report forms. However, we have found that tutors and students are not really reflecting on their use of generative AI. These conversations are likely occurring informally and outside tutorials. The lack of open conversation might be due to students’ concerns about academic integrity, or shame about seeking help or outsourcing their work. We need to pull back the curtain on ChatGPT use, and systematically study how it is changing the way students compose their writing and how tutors conduct their work in the center.   

Many writing centers, including ours (Appendix C), have produced in-house training and guidance. Our training shared research findings and initiated conversation among tutors regarding the application of ChatGPT in higher education. It was evident from the discussion that most tutors did not perceive ChatGPT as a significant concern or as in opposition to writing tutoring or our mission. Male tutors with STEM backgrounds, in particular, tended to be the most (positively) vocal in the conversation. However, tutors did note that the proliferation of policies about usage was frustrating and confusing, as institutional policy is to defer to individual faculty preference. Though preliminary, our training sparked valuable reflections on the broader implications of ChatGPT in the educational landscape.

While a start, our training left us wanting tutors to engage more deeply with generative AI ethics and with why writers might be attracted to or dismissive of generative AI writing technologies. Tutors did not, however, discuss disciplinary background, writing confidence, workload management capabilities, or other factors that might drive AI use. Nor did they raise issues with AI that are ancillary to writing (like ethics, democracy, and other big topics that result from using this technology). As such, it might not be enough to simply develop training and wade into these conversations without first discussing how tutors use technology to facilitate their writing and learning processes. From these more general conversations around habits, attitudes, self-efficacies, etc., we can segue into more fruitful discussion of generative AI writing technology. To that point, we provide some questions for discussion (Appendix D).

Implications and Future Research

These findings have implications for educators, institutions, and policymakers concerning the integration of technology in education, ethics, and the fostering of critical thinking skills. However, we need empirical research on the impact of ChatGPT on students’ writing processes and critical thinking skills. As generative AI users are educated and learn in an immersive ecosystem, we might be able to observe and measure specific changes to students’ critical thinking skills and writing processes, specifically in the pre-writing/brainstorming and idea generation phases. At the moment, however, we are in a brave new world where the impact on student learning outcomes and cognitive processes remains to be seen. Our findings indicate that students–not just administrators or faculty–present diverse perspectives on the affordances and possible pitfalls of ChatGPT and other generative AI writing programs, which underscores the need for more research from student perspectives. We also need to learn more about how time pressure and intense workloads impact usage of and reliance on ChatGPT to complete academic work. As a writing center with an embedded writing fellows program, we are well-poised to have conversations that bring together faculty, staff, and students around ChatGPT and AI-generated writing tools and the pressures and opportunities that lead to using this technology in college.

Conclusion 

Our finding that students use AI and the Writing Center in distinct and separate ways makes us wonder if students will not only be left out of institutional decisions about these tools but will also use them with little expert guidance. Already, students are using AI tools, though they report different effects on their learning. The excessive workloads on our campus, coupled with the lack of opportunities for reflecting on the purpose of a liberal arts education, might encourage uncritical engagement with AI. Students turning to AI tools to manage demanding workloads–potentially offloading what we see as necessary critical thinking work–is both disheartening and alarming. As interviewees Russ and Gregory note, when students outsource to AI, we lose the community that is created from peer learning. Writing centers hold significant and under-realized value in facilitating discussions about critical engagement with AI. Writing centers can be a haven for those enthusiastic about writing and the learning process, providing a counterpoint to the trend of optimizing productivity over the joy of learning. However, students are grappling with the ethics of AI–from honor code violations to overreliance on technology–long before they sit down to write, which speaks to broader concerns with learning in higher education. The intricate relationship between students, technology, and learning calls for nuanced research on the many technological tools currently shaping students’ educational experiences, not just ChatGPT.

It is imperative, then, to move beyond the confines of writing center conversations, where, as we observe through our appointment data, we see few explicit conversations around ChatGPT and its affordances and drawbacks taking place. We envision a comprehensive approach that involves the entire school community, fostering an environment where meaningful learning takes precedence over “business” culture. By instigating and sustaining this broader dialogue, we aspire to create a campus-wide awareness and commitment to prioritizing genuine learning experiences within the challenging landscape of contemporary education. 

References

Bryan, M. D. (2023). Bringing AI to the Center: What Historical Writing Center Software Discourse Can Teach Us about Responses to Artificial Intelligence-Based Writing Tools. In Proceedings of the Computers & Writing Conference (p. 16).

Crawford, J., & Henry, J. (2004). The Positive and Negative Affect Schedule (PANAS): Construct validity, measurement properties and normative data in a large non-clinical sample. British Journal of Clinical Psychology, 43(3), 245–265. https://doi.org/10.1348/0144665031752934

D’Agostino, S. (n.d.). ChatGPT Advice Academics Can Use Now. Inside Higher Ed. Retrieved January 18, 2024, from https://www.insidehighered.com/news/2023/01/12/academic-experts-offer-advice-chatgpt

Dayton, A., & Buck, A. (2023). The Rise of ChatGPT Can Make Student Writing Better. Teaching Hub. https://teachinghub.as.ua.edu/other/the-rise-of-chatgpt-can-make-student-writing-better/ 

Frazier, M., & Hensley, L. (2023). Promoting Ethical Artificial Intelligence Literacy with Generative AI Tools Like ChatGPT on an Undergraduate Course Project. WAC Clearinghouse. https://wac.colostate.edu/repository/collections/textgened/ethical-considerations/promoting-ethical-artificial-intelligence-literacy/ 

Gecker, J. (2023). College professors are in ‘full-on crisis mode’ as they catch one ‘ChatGPT plagiarist’ after another. Fortune. https://fortune.com/2023/08/10/chatpgt-cheating-plagarism-college-professors-full-on-crisis-mode/ 

Han, J. et al. (2023). RECIPE: How to Integrate ChatGPT into EFL Writing Education. In Proceedings of the Tenth ACM Conference on Learning @ Scale (pp. 416–420). https://doi.org/10.1145/3573051.3596200

Herbold, S. et al. (2023). A large-scale comparison of human-written versus ChatGPT-generated essays. Scientific Reports, 13, 18617. https://doi.org/10.1038/s41598-023-45644-9

Introducing ChatGPT. (n.d.). Retrieved January 24, 2024, from https://openai.com/blog/chatgpt

Malik, A. et al. (2023, April 9). How is ChatGPT Transforming Academia? Examining its Impact on Teaching, Research, Assessment, and Learning.

Morrison, A. (2023, Spring). Meta-Writing: AI and Writing. Composition Studies, 51(1), 144–161.

Myers, A. (2023). AI-Detectors Biased Against Non-Native English Writers. Stanford University Human-Centered Artificial Intelligence. https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers 

Nolan, B. (2023). Two professors who say they caught students cheating on essays with ChatGPT explain why AI plagiarism can be hard to prove. Business Insider. https://www.businessinsider.com/chatgpt-essays-college-cheating-professors-caught-students-ai-plagiarism-2023-1 


Report On AI Chatbots’ Impact on Writing Centers.pdf. (n.d.). Google Docs. Retrieved January 18, 2024, from https://drive.google.com/file/d/1Yv7dUiInBQDnzeVVzHurYA2-903I5nMM/view?usp=embed_facebook

Saldana, J. (2011). Fundamentals of qualitative research. Oxford University Press.

Aithal, P. S., & Aithal, S. (2023). Application of ChatGPT in Higher Education and Research – A Futuristic Analysis. International Journal of Applied Engineering and Management Letters (IJAEML), 7(3), Article 3. https://doi.org/10.47992/IJAEML.2581.7000.0193

Singer, N. (2023). Cheating Fears Over Chatbots Were Overblown, New Research Suggests. The New York Times. https://www.nytimes.com/2023/12/13/technology/chatbot-cheating-schools-students.html

Steiss, J. et al. (2024). Comparing the quality of human and ChatGPT feedback of students’ writing. Learning and Instruction, 91, 101894. https://www.sciencedirect.com/science/article/pii/S0959475224000215

Sullivan, M. et al. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning and Teaching, 6(1), Article 1. https://doi.org/10.37074/jalt.2023.6.1.17

Testing ChatGPT Response Variety to Introduce Natural Language Processing—The WAC Clearinghouse. (n.d.). Retrieved January 18, 2024, from https://wac.colostate.edu/repository/collections/textgened/ai-literacy/testing-chatgpt-response-variety-to-introduce-natural-language-processing/


What Is ChatGPT? Everything You Need to Know. (n.d.). Retrieved January 24, 2024, from https://www.techtarget.com/whatis/definition/ChatGPT

Wood, P., & Kelly, M. L. (2023, January 26). “Everybody is cheating”: Why this teacher has adopted an open ChatGPT policy. NPR. https://www.npr.org/2023/01/26/1151499213/chatgpt-ai-education-cheating-classroom-wharton-school

Appendix A

Interview Questions

  1. Can you tell me a little bit about yourself and your experiences with writing at Middlebury College?
  2. What is your major/minor etc.?
  3. What is your experience with the writing center (course tutors, online, drop-in etc.)?
  4. What do you know about ChatGPT?
  5. How did you find out about ChatGPT?
  6. How do you use ChatGPT in your writing for school?
  7. How often do you use ChatGPT?
  8. In what ways do you use ChatGPT?
  9. Does the way you use ChatGPT vary by assignment/by discipline or subject?
  10. Would you pay for ChatGPT if it becomes a subscription service? And what are the ethics of that?
  11. What do you feel about an AI creating or editing your writing for you?
  12. How do you feel ChatGPT has impacted your learning experience?
  13. Is there anything else you would like to share that hasn’t come up yet in our conversation?
  14. Ask at the end for a pseudonym. 

Appendix B 

Coding Rubric

Main Codes and Representative Quotes

Demographics (rank and discipline(s)): See Table 1

Writing center engagement: “I’ve used the writing center tutors for help editing my essays and provide me feedback on my essays and papers. I haven’t used them quite as frequently in the past 2 years or so. I use them a lot more [in] my first 2 years.”

Ethical Concerns about ChatGPT and Generative AI (honor code): “But the ethics of it being paywall, it’s definitely questionable”

Incorporation of ChatGPT into Writing Processes: “I use it. It’s like a useful tool for me to a, like, get an understanding of like, if I’m building an argument if I’m building an outline, and I like, put some sort of information in and ask AI to build something, it’s a good like sort of foundation for me to build off of and see like, what am I missing? What am I not like thinking about?”

Incorporation of ChatGPT into other Learning Processes: “But sort of just preventing you from wasting time”

Attitudes towards ChatGPT and Generative AI: Positive: 6; Neutral: 4; Negative: 5

Concerns about Learning Experience and Purpose of a Liberal Arts College Education in Age of ChatGPT: “…it’s like, you’re forfeiting your ability to think, to a machine, you know, and I don’t know, maybe that sounds very, like, dystopian or whatever. And maybe corny, but I think just on its own that, like, giving that away is kind of insane.”

Limitations of ChatGPT: “I’ve never actually asked it to write an entire essay for me but like based on what I know about it, I don’t think that it would provide an essay for me that I would be willing to turn in. I feel like I’d either have to edit it beforehand”

Ownership Over Writing/Learning Processes: “I mean, so I’ve pretty intentionally, I think there’s a lot of, I mean, frankly, I’m paying to be here in order to like, learn. And so I’ve never been one to take shortcuts. Even when stuff academically it’s challenging. And so, like, I actually intentionally have avoided letting it change the way I’ve learned and think.”

Appendix C 

Fall 2023 Training 

Exploring the Impact of OpenAI’s ChatGPT on Higher Education: A Focus on Writing Tutors

Overview: After the release of OpenAI’s ChatGPT, higher education was buzzing with questions about this new tool and what it would mean for writing and learning. As writing tutors, we took notice. We were focused on what it would mean for the Writing Center. After reading numerous articles and blog posts about ChatGPT, we decided that what was missing from the conversation was the students’ voice. Professors, administrators, and other staff were all publishing their views regarding ChatGPT, but student voices about the tool and how students were using it were absent. We decided to research students’ engagement with ChatGPT.

Methods: We interviewed 15 full-time Middlebury College students. We conducted interviews both in person and via Zoom and then transcribed them. Each interviewee was asked the same twelve questions, ranging from their experience with writing to their knowledge and usage of ChatGPT.

Findings: We are still in the process of coding the transcripts. Here are some key words and phrases we have identified. 

    1. Tool 
      1. Students reported that they thought of ChatGPT as a tool. When asked to expand on this, many compared ChatGPT to Easybib or other digital writing tools, rather than a source that does the work for you. As tutors, we also use learning tools, so how can we think about working with ChatGPT instead of against it? Can we use it to our advantage? 
    2. Outline 
      1. Most students reported that they were using ChatGPT to help them outline and organize their work and thoughts. This is a useful way to utilize ChatGPT, but it can also take away the critical thinking and creative processes required when developing an outline.
    3. Struggle to Start/Starting Point 
      1. Students noted that they don’t see ChatGPT as a means to avoid their work; instead, they use it to expedite their writing process or to find guidance on how to begin. As tutors, this aligns with one of the skills we promote. However, when students rely on ChatGPT, it prompts us to consider its implications for the Writing Center. How can we emphasize the value of techniques like outlining and mind mapping as valuable tools for enhancing learning and writing, rather than simply outsourcing to AI?

Discussion Questions for Tutors: 

  1. Why do we write?
  2. How does ChatGPT affect how we tutor? 
  3. What are some practices that we can put in place during our sessions? 
  4. What are some questions we can ask writers about their use of ChatGPT?
  5. How do you feel about the College’s statement regarding the use of AI (see email for students and faculty/staff from August 2023)?

So what’s next…

ChatGPT Activities: 

    1. Read a paragraph written by ChatGPT and one that is written by a person. Determine: 1. Which one is which? And 2. What are some of the similarities and differences between the paragraphs? Discuss. 
    2. Test the tool: use the same writing that we shared and plug it into ChatGPT. Then, review the feedback that the tool provides (you can ask about tone, structure, organization, etc.). Consider what feedback you would have given to the writer versus what feedback for revision ChatGPT provides. How are they similar? How do they differ? Can these two tracks be used simultaneously in a tutoring session? Discuss.
    3. Collect ChatGPT statements from course syllabi. Then, as a group, analyze how often ChatGPT is mentioned and what the student use policies are. Identify the prevailing opinions faculty have about ChatGPT and its uses in the classroom. Discuss how you will work–as an embedded tutor–with writers under specific kinds of ChatGPT/generative AI policies. What happens if a professor expressly forbids students from using these tools but a student discloses to you that they used them? What happens if the professor wants students to engage with these tools but the student is reluctant because of our honor code? Discuss.

Appendix D

Questions for tutor & instructor training on AI 

  1. What measures can be implemented to maintain the quality and consistency of feedback provided by ChatGPT, and how can writing centers ensure the accuracy of information and guidance given to students?
  2. How can ChatGPT be effectively integrated into writing centers to support student writing?
  3. What should be taken into account when implementing ChatGPT in writing centers to align with the goals and values of higher education institutions?
  4. How do students perceive and interact with ChatGPT in comparison to human tutors, and what impact does this have on their learning experiences?
  5. What are the limitations of ChatGPT in assisting students with complex writing tasks, and how can these limitations be addressed in a writing center context?
  6. How can ChatGPT contribute to enhancing accessibility and inclusivity in writing centers for students with diverse learning styles and backgrounds?
  7. How can tutors facilitate independent use and application of writing skills versus dependency on AI tools?
  8. What kinds of technology “infrastructure” have you built up to facilitate your own learning and writing processes? Do you think this infrastructure is fully supportive or distracts from the goals of learning/writing?
https://thepeerreview-iwca.org