Snapshots from Before a Revolution: A Talking Picture Book About AI in the Hendrix College Writing Center

Felipe Pruneda Sentíes, Hendrix College
Owen K. Edgington, Hendrix College
Eden E. Robbins, Hendrix College
Sable Alysse, Hendrix College
Katherine L. Scalzo, Hendrix College
The Writing Center, Hendrix College

Abstract

Innovation and technological adoption are continuous processes, which makes them difficult to periodize. At the same time, acquiring new tools and literacies inspires in adopters a reflection, however brief, on their preparedness for the acquisition. Adopters may face new technologies with confidence, excitement, curiosity, trepidation, or all of the above. These emotions often result from a sense of how equipped adopters feel to receive the innovation. Yet the speed of innovation, and the social and professional need to keep up, can obstruct the self-analysis that would ideally help define and sharpen the relevant skills and knowledge. This talking picture book documents how the Hendrix College Writing Center staff reflects collectively on the transition that the arrival of generative artificial intelligence has ignited. As of the summer of 2024, our writing center has not yet implemented solid AI-related policies and procedures, working instead on research. By responding to four questions about encounters with AI with a still image and an accompanying oral, recorded narration, four student consultants and the center's director make material memories of the current moment, which rapid technological development has rendered elusive and even distant. The idea is to create a nostalgia for the present that intensifies our recollections of the experiences and abilities that would enable us to interact and grow with AI once it becomes part of our regular operations.

Keywords: technological adoption, the speed of technological change, assistive technologies, reflection, still photograph and the imaginary, voice recording and the real, preparedness 

This work—a collection of still images and voice recordings—examines one part of the process by which a writing center adopts a new technology: a reflection on the staff's readiness. The Hendrix College Writing Center serves a small, private liberal arts institution with around 1,200 undergraduate students. With that in mind, we are designing procedures (for individual appointments, workshops, course collaborations, and so on) to tackle the AI-related needs of students and faculty. We have not formally implemented any of those procedures, in the belief that we still need to learn more. Whether we will know when we have reached a critical mass of knowledge for the implementation to happen remains an open question (although we are certain the learning process will not stop). What we do know is how much self-reflection the recent prominence of text-generating AI has ignited in our center. Contemplation must eventually give way to actionable conclusions for the current moment, even if they come with an expiration date. That does not mean we cannot extend the contemplation a bit longer for the purposes of investigating our center and our campus at what will certainly be an inflection point. This piece stages two artificialities to give us more room to think and to match the condition of its subject.

The first artificiality concerns something that technological development never deliberately affords most citizens: a pause to consider who they are (a sense of their place in their lives and in their communities), and how ready they feel, before adopting a new technology. Everett M. Rogers's (1962) technology adoption life cycle indicates that citizens incorporate technical advancements at different times, classifying them into five groups: "innovators," "early adopters," "early majority," "late majority," and "laggards" (p. 161). Given the particularity of the experiences and circumstances around every citizen, Rogers warns that models tracking the timeline of technology diffusion across populations are "conceptual," useful tools for understanding the impact of a continuous phenomenon and identifying trends. Something that becomes clear from following the spread of innovations is that innovators rarely spend time speaking to consumers about the effects and implications of their work before that work is widely available. Educational, legal, and governmental institutions struggle to anticipate technologically driven change. Instead, they react to every development. The lag happens because, as Preeta Bansal argues (quoted in Wadhwa, 2014), codified behaviors require social consensus, while technological innovation does not. The speed of the "technological vitalism" (p. 45) of which Paul Virilio (1986) speaks runs right past the much more difficult optimization of agreement. Our project is similar to Rogers's in that it also exists on a conceptual plane: it conceives of a reflective stoppage in technological adoption as a situated, almost nostalgically defined period.

This talking picture book imagines what it would be like to expand the reflection before a community (in this case, the writing center) creates protocols to mark the perhaps irreversible presence of artificial intelligence in its practice. Like Rogers's model, making visual and aural mementos of the current moment is a way to contain, however abstractly, an ungraspable and ongoing process. Yet we differ from Rogers in one respect: "Each adopter of an innovation in a social system could be described, but this would be a tedious task" (p. 159). As believers in the counterhistorical value of the anecdote, we propose describing this small group of adopters in some detail, so that a fuller picture of AI's spread comes into view—one harder to categorize into one of the five groups above.

We distinguish between that pause and the preliminary groundwork for institutional change because, so far, the preparation we have undertaken has relied on current, forward-looking research. The past, the a priori of our technological and disciplinary knowledge, always informs the envisioning of our future. Still, our center has not defined that past in concrete terms. We have not named what we possess that would let us inhabit a practice alongside AI. Defining our past would, in turn, clarify our present, a perpetually in-flux moment that never stands still long enough for us to assimilate it comprehensively. An analogous detailing of the conditions that shape the adoption of new tools at the writing center appears in research on the selection of assistive technologies for writers. Nankee et al. (2009), for example, break down the factors involved in writing: visual perception, neuromuscular abilities, motor skills, cognitive skills, and social-emotional behaviors (p. 4). While the authors composed this list to select assistive technologies for students with disabilities, reading the factors makes it clear that anyone who intends to write, or even to assist in writing, needs to consider them. The same can be said of the writing process itself. In a discussion of assistive technologies in writing centers, DePaul University blogger Maggie C (2015) cites a study by Raskind and Higgins (1998) showing that text-to-speech software enhanced proofreading for students with learning disabilities. In their analysis, Maggie C observes that the issues "that all writers struggle with (proofreading, catching errors, etc.) [aren't] unique because the people in this study had learning disabilities" (para. 3). Indeed, this kind of capabilities analysis can apply to writing center staffers as well. Even if right now we do not treat AI as an assistive technology, framing its adoption in terms of what prepares and allows us to incorporate it reveals areas of interest that will influence our eventual policies.

So we propose taking stock not just of our capacities but of our collective mood before letting AI take up residence in our writing center. The piece represents how we have identified the signals of change, or how we have developed a notion, however tenuous, that a (perhaps paradigmatic) shift is coming.

We are conscious that the past and present we will try to articulate are largely fictional—the second artificiality this work hopes to render. Artificial intelligence, and its applications to writing, have been with us for some time now. While students, faculty, and staff at Hendrix College work, together and apart, to respond to its challenges and seize its opportunities, AI has made its way into our practice. To some extent or another, often inadvertently, we have adopted AI, further complicating our identification of a pre-AI moment. That fiction, however, remains useful because it will allow us to recognize (and perhaps even invent) qualities on which we may rely to work with AI. Generative speculation represents a significant part of the exercise, as we list skills that both intuitively and counterintuitively empower us to face AI. The fiction will also give us a reference point, a purposefully constructed memory of a period that we might need to revisit moving forward, and a starting place for understanding the transition. Call it a preemptive act of writing center archaeology. We are building evidence for future excavations.

To create a reflective pause, generate a fictional past, and capture a mood during transition, we turn to a multimodal approach combining photographs with voice narration. The process began with four questions:

    • Which experience would you consider your first contact with large language model artificial intelligence? Can you talk about your first contact with it at the writing center?
    • At what moment did you realize large language model artificial intelligence became something that you would have to contend with in some form? Or, if you do not believe AI has attained that status in your life, why is that?
    • At what moment did you realize large language model artificial intelligence had a definite presence at Hendrix College? 
    • Are there skills, ideas, habits, convictions, anecdotes or facts that enable you to think critically and confidently about AI and its role in your work as a writer and a writing consultant?

The authors shared still photos that reminded them of their encounters with AI. Then, they recorded spoken descriptions of the photos, explaining their relevance to the questions and the memories they elicit. At times, the question prompted only the recorded reflection. In those cases, the door to our old writing center supplies the background image. The result is organized by the questions but also allows the audience to view and hear it in any order as if browsing through a family album. The choices of modalities follow the ideas of theorists Vilém Flusser and Friedrich Kittler. For Flusser (2004), photography “has interrupted the stream of history. Photographs are dams placed in the way of the stream of history, jamming historical happenings” (p. 128). It’s this “jamming” that makes still images an appropriate medium for this project, which temporarily and imaginatively arrests time to acquire an advantageous perspective on our history. On a personal level, we might be familiar with the connection between still images and remembrance. The essay is, in part, a picture book of our days before adding AI to our mission statement. The photographs literalize the piece’s title.

As for the voice recordings, we recall how Kittler (1999), in his psychoanalytic analysis of media, associated the gramophone and its capacity to mechanically store and reproduce sounds with the Lacanian Real, or the part of the world that exists beyond human signification (p. 37). For Kittler, when we record someone’s voice, we capture words, but also the uninflected, unintentional, unstructured noises that reveal something true about the speaker. Our tone, tics, and silences (those sounds free of signifiers) express the authenticity of our responses to AI and our ideas of how it will alter our writing assistance. Kittler, incidentally, would have something else to say about photography to elaborate on Flusser’s thoughts. As a mechanically constructed image of the world, the photograph belongs to the Imaginary—it creates a double of the world onto which viewers can project their ideals. In short, the affordances of still photographs and voice recordings allow us to weave our imagined past and pair it with the real hopes, mysteries, and anxieties involved in our incorporation of AI. Our goal is to evoke our world before that revolution. 

Before moving on to the picture book, here are a few words from the Hendrix College Writing Center staff members who participated in this project:

Owen Edgington

In the writing center, I begin my sessions away from the page. I start a conversation sparked by questions like What do you want to say? What’s blocking you from that right now? What gets you fired up about this piece? I sprinkle in camaraderie and a touch of humor: Oh yeah that class is ridiculously hard or yeah one time someone came in here twenty minutes before their paper was due! The specifics vary, but the point is to create a space at the intersection of talking, thinking, and human connection. That’s where writing begins. It doesn’t spring magically into existence out of the end of a pen.  

I’m critical of that sort of “natural” approach to human writing. The idea that writing should “flow.” There’s nothing natural about the act of writing. It’s agonizing. It’s counterintuitive. So, I tend to start with conversation. I ask the writers who visit me to say what they’re trying to communicate. I let them think aloud until something greater than the separate pieces of our conversation emerges. Only then do we shape those thoughts into written form.  

I suppose I should mention my skepticism about AI. I’m not convinced AI can or will allow something greater to emerge. I’m reminded of Verlyn Klinkenborg’s (2012) description of cliché as “the debris of someone else’s thinking” (p. 45). Might that be an apt description of AI as well? 

To me, a writing center’s strength lies in its ability to create human connections. Before implementing AI in the writing center, we should ask ourselves how it supports that strength. 

Eden Robbins

My general approach to writing assistance is to analyze works first and foremost for structural issues (how ideas flow, whether concepts set up earlier reach satisfactory resolutions, etc.) and to center any aid around my findings. To me, AI has the downside of cheapening this process by reducing the structure of an essay to a template of what it could be, reducing the potential impact a work could hold. In addition, AI isn't very good at following along with these threads of ideas when fed a paper, so it doesn't do me much good to ask ChatGPT or the like about a paper I'm meant to look over.

Sable Alysse

I approach my duties as a writing consultant as if I am helping a friend with their homework without doing it for them. I see myself as the bridge that connects their contemplation of the assignment to their final project. This approach consists of talking to me as if I am a friend, where I listen without judgment. They simply describe what they think the rubric means or, if they’ve already begun writing, what thought they are struggling to put on paper. From there, we work to make the thought clearer and the assignment criteria more reachable.

I have seen firsthand how AI is a tool that can make the rubric digestible. It is a tool that can also help with spelling and grammar. This can be helpful because patrons are then able to enter the appointment already understanding the assignment, thus having questions and drafts ready. At the same time, however, AI can interfere as it makes it easier for someone to lapse in their work ethic, comprehension, creativity, and originality. When those lines are crossed, so is academic integrity. 

Katherine Scalzo

During my time as a writing consultant, I was a student majoring in psychology and minoring in biology. I think that my background in science afforded me a unique approach to writing assistance and writing in general, which contributes to my reservations about using AI in spaces of writing assistance. AI, by nature, does not allow that uniqueness or human variability, which can sometimes make all the difference in writing and in helping others to write. In my experience, there are times in which person-to-person conversations and connections create a sounding board that facilitates breakthroughs in a peer's writing far more than any technical edits. Maybe it is arrogant, but even as AI continues to develop and earn its place as a supplement to writing assistance, I do not think it will ever replicate the peer-to-peer experience. As long as we respect AI's limitations and honor the value of traditional writing assistance, I believe the two can work together to empower individuals in their writing journeys.

Felipe Pruneda Sentíes (director)

If I invoke some clichés about mixed emotions at the arrival of generative AI, it is because they feel true. They also feel appropriate because I believe writing and writing assistance are about mixed emotions. I believe that, to find ways to express thoughts, writers and their readers need to embrace being a bit unsettled. I try to cultivate comfort with uncertainty as a necessary mindset for successful, truly exploratory writing. After advocating for such a double consciousness for years, I feel generative AI is the biggest challenge so far in practicing what I preach. Looking at the pictures we put together for this piece, I find great serenity—a reminder of how we reacted when we first realized how quickly a full-fledged essay could appear on an app's screen.

The Talking Picture Book

(Note: the numbers above each video thumbnail indicate where readers can find the transcripts for the voice narrations in the Appendix).

Which experience would you consider your first contact with large language model artificial intelligence? Can you talk about your first contact with it at the writing center?

1

2


3


4


5


6

7

At what moment did you realize large language model artificial intelligence became something that you would have to contend with in some form? Or, if you do not believe AI has attained that status in your life, why is that?

8

9

10

11

At what moment did you realize large language model artificial intelligence had a definite presence at Hendrix College?

12

13

14

15


Are there skills, ideas, habits, convictions, anecdotes or facts that enable you to think critically and confidently about AI and its role in your work as a writer and a writing consultant?

16

17

18

19

20

Parting Thoughts

We mentioned in one of the recordings above that the images we gather here frame absences. They portray lonely spaces. It occurred to the authors of this picture book that we worked on the project away from one another during the summer, further accentuating the loneliness. However, the incorporation of AI in higher education, as we have experienced it, has been anything but lonely. The idea exchange within our campus and with sister institutions and writing centers has been rather generous. Thus, this picture book has activated yet another pause—one to consider how the spread of this innovation has sparked great solidarity. Having experimented with the above format, we are encouraged to continue sharing documentation of our transition into an AI-literate writing center to further embody that sense of community.

References

Flusser, V. (2004). Writings (A. Ströhl, Ed.). University of Minnesota Press.

Kittler, F. (1999). Gramophone, film, typewriter (G. Winthrop-Young & M. Wutz, Trans.). Stanford University Press. (Original work published 1986)

Klinkenborg, V. (2012). Several short sentences about writing. Knopf.

Maggie C. (2015, January 19). Using assistive technology in writing centers. UCWbLing, DePaul University. https://ucwbling.chicagolandwritingcenters.org/using-assistive-technology-in-writing-centers/

Nankee, C., Stindt, K., & Lees, P. (2009). Assistive technology for writing, including motor aspects of writing and composing. Wisconsin Assistive Technology Initiative. https://www.wati.org/wp-content/uploads/2017/10/Ch5-WritingMotorAspects.pdf

Raskind, M. H., & Higgins, E. L. (1998). Assistive technology for postsecondary students with learning disabilities: An overview. Journal of Learning Disabilities, 31(1), 27–40. https://doi.org/10.1177/002221949803100104

Rogers, E. M. (1962). Diffusion of innovations. The Free Press.

Virilio, P. (1986). Speed and politics (M. Polizzotti, Trans.). MIT Press. (Original work published 1977)

Wadhwa, V. (2014, April 15). Laws and ethics can't keep pace with technology. MIT Technology Review. https://www.technologyreview.com/2014/04/15/172377/laws-and-ethics-cant-keep-pace-with-technology/

Appendix

Video transcripts

Note: the subtitles in the recordings contain the pauses, tentativeness and other speech particularities that the project wishes to preserve. However, the transcripts have been edited for clarity here. 

Which experience would you consider your first contact with large language model artificial intelligence? Can you talk about your first contact with it at the writing center?

    1. My first contact with large language model artificial intelligence had little to do with its application in school settings or really any sort of constructive medium. I've chosen an image of a toy Pegasus to represent it, because that is largely how I saw artificial intelligence: as a toy. Back before ChatGPT came out, I watched some videos of people messing around with various more primitive generative AIs to produce brisk answers to questions, role-play different people, or just generally to play around. I don't remember exactly which video was the first one I watched, but I had seen several over the course of months before I ever even encountered someone attempting to use it in a classroom. And as such, I just saw it as a toy. I rarely even considered the fact that it could be used to write essays and the like.
    2. The first time I ran into the topic of AI in the writing center was actually as a joke. The person I was working with that day, you know, just made some sort of joke around the idea of, “I’m just going to do this on ChatGPT, this is too hard.” And we laughed and it was sort of a silly moment because that was right at the height of ChatGPT’s popularity. But looking back, I’m kind of struck by that resistance to writing for class. And it’s something I’ve encountered my whole life in school and there are a lot of things that can be said about that, but it seems to me that the sort of fear and the backlash against AI when it comes to schools is perhaps a result of the school itself and how we treat assignments and what sort of things we assign. I think the sort of “ChatGPT, get out of doing your homework” doesn’t unveil a laziness so much as a resistance to busywork.
    3. This is a photo of a restaurant where we would have a lot of monthly meetings for the writing center, where we would discuss how things were going at the center, new ideas to bring to our work at the writing center, and just generally describe our experience. And actually, in my last semester at Hendrix and working at the writing center, a lot of our conversations at the dinner meetings did in one way or another relate to AI and its use in the writing center at Hendrix, or generally in the university setting. And I think it's hard for me to remember clearly a first encounter with AI in the writing center in terms of engaging with a student who had maybe used AI in their work, just because, at the time, I had not had a lot of experience using AI firsthand. So I think it was a little more difficult for me to actually spot it in the moment, unless it was really obvious, though I'm sure I had encountered it along the way. There aren't a lot of moments that stick out to me of realizing that someone was using AI in the writing center and having a conversation about that. But at these dinner meetings, we one time discussed another university's policy on using AI specifically in their writing center, that is, their student writing associates using AI in their writing assistance practice. And that was really interesting because that was the first time I had actually seen a policy about AI specifically in use in a writing center. And so that was really cool, I think, to contextualize the use of AI, because a lot of our conversations with my colleagues in the writing center beforehand had been about the ethics of working with students who had used AI, and where we draw the line with that. But this policy shifted the narrative toward how we could maybe use AI to make our assistance more fruitful and more helpful to peers.
And so following our reading and discussion of that policy, just a lot of our conversations at these dinner meetings and otherwise just ended up coming to a place of talking about AI, whether that be how we’re using or not using AI at the Hendrix Writing Center or how we dealt with people coming in, having used AI, maybe in a way that is not ethical. And so these dinner meetings and the conversations that came from them, I think were one of the clearest experiences I’ve had with thinking about AI in the context of the writing center and thinking about it in both directions, I guess, with students using it, but also maybe us using it to help them as well.
    4. So this is a picture of the classroom where I teach a course called The Engaged Citizen, an interdisciplinary class. It's like our first-year seminar at Hendrix, and the class is about being a responsible cultural consumer, about what that means and how that relates to citizenship. The first time I taught that class, I had COVID the first week of classes. That meant that I taught the first sessions remotely. I was showing the students a painting by Remedios Varo, the surrealist painter, called The Creation of Birds. And you can see in the corner there a detail from it, on the screen. The plan was to project it on the screen, but at that point I had to do the class remotely. For the first assignment for the class, I wanted them to write about a piece of art. To give them an example, I had ChatGPT create an essay about that painting, The Creation of Birds. I asked ChatGPT to tell me what values the painting expresses and to base those observations on details from the painting. And it turned out it talked about objects that were not in the painting. It said that there was a bookcase in that painting, but there isn't one. So it was the first time I remember using AI consciously for a class. I had encountered artificial intelligence before in some form or another, but that was the first time I used it to create an example essay that, I told the students, was from an artificial intelligence, and that showed them, first, the factual errors, but also the way it created sentences that sounded good but didn't really carry a lot of actual substance. So I remember that the first time I used AI for my class was also the year I first had COVID. It was past the isolating-in-place period of the pandemic. And that classroom still had traces of the pandemic.
You can see the air purifier at the bottom, and also the Owl camera-speaker-microphone combo device; you can see its eyes are on. It's those two ring lights on that speaker. So you can see the traces of the pandemic in that classroom. And so I connect my first contact with AI, the recent developments of AI like ChatGPT, with COVID, with the pandemic, or at least the time after what I would call the height of the pandemic, since I happened to get it after. So, yeah, that image reminds me of that time teaching that class, and the first time I used the current kind of popular text-generating artificial intelligence.
    5. My elementary school writing process consisted of physically handwriting a draft, editing and revising the draft with red and blue pens and then physically rewriting the paper with another black pen. As soon as I got comfortable with this process, I transitioned into middle school where my final draft now had to be typed. Thankfully, though, I had my own laptop and was able to get through by speaking my words into the typing application instead of having to actually sit and type every single word. All throughout middle and secondary school, I anticipated the intersection of technology and writing developing only slightly more into spell and grammar check, like what we see with Grammarly. Fast forward to my sophomore year of college and I hear of AI in my computer science classes. I would then leave that class to go work at the writing center, where I would review rubrics and what I assumed were fully human, innovative writings. It was not until my senior year of college that I heard about ChatGPT. My teacher mentioned that our next writing assignment would be difficult, but to please not use ChatGPT or any other AI tools. The class laughed as if it were a “how does he even know about that?” moment. Whereas I was in shock. I started asking friends about it and came to find out that it was the hot new thing. After my initial thought of, “Wow, I could have been doing that the entire time,” I then began wondering, “Hmm. How many papers have I read and assisted with that were written with the help of ChatGPT and/or other AI?” 
    6. So this is a picture of my dorm room the night that I first started playing with OpenAI and ChatGPT, on the recommendation of a friend who told me I should download it and play around with it and that it was really, really cool. That's all I knew. So I kind of went in completely blind. And it's funny, I think one of the first things I typed in was a simple math equation. I remember I had to imagine what even to ask. And so I sort of defaulted to, "Oh, it's a calculator, I guess." And yeah, I don't know. I think it says something that when we run into this sort of paradigm-shifting technology, we are inclined, or at least I am or I was, to use old or existing frameworks to try to make sense of it. So I thought that was interesting.
    7. To capture my first contact with large language model artificial intelligence, I've taken a photo of my laptop screen as I'm typing in a Word document using the predictive and generative text feature. And this isn't necessarily my actual first contact with AI, but rather representative of it, and representative of my first contact with AI that I didn't necessarily realize was AI at the time. And so it's hard for me to put a finger on exactly when in time this contact happened, particularly because I wasn't aware that things like this predictive text function—present in a lot of different word processing formats and in even something as simple as texting on an iPhone—were AI, and so I've been using it for who knows how long. And if someone had asked me at the time what my thoughts were about AI and how much I had used it, I wouldn't have thought I'd used it at all. But looking back now, I do know that that is a form of AI. And so I think that was my first contact with AI, even though I didn't necessarily realize it at the time.

At what moment did you realize large language model artificial intelligence became something that you would have to contend with in some form? Or, if you do not believe AI has attained that status in your life, why is that?

  1. Now, I wouldn't say that this type of artificial intelligence has had a very big presence in my life, I guess. And what presence it does have is better covered under the next question. However, in general, I've always considered the use of artificial intelligence to write important documents for you just silly. It's hard to control quality, and it takes away all the hard work. It also takes away any of the fun or pride you can have in the final piece. It's just cheap. My choice of picture this time is what a typical outline looks like for me, mainly just as a point of contrast.
  2. I’m intrigued by the word choice of “contend” in question two and, more broadly, I’m intrigued by the sort of battle metaphors that get trotted out when talking about AI. And I’m interested in how we set up this sort of opposition, the human and the AI, the organic and the technological. And I’m really curious about the sort of marriage between those polarities and what that might look like in the future. I think there’s some great science fiction about that. But sticking to sort of my experience at Hendrix, I’ve been in so many conversations surrounding AI, specifically in my work at the writing center, that are sort of scared. Fear is such an element there. And the idea that we need to sort of get ahead of AI, that we need to have a plan of action in place as soon as possible. But I also have a lot of conversations about how we need to sort of slowly adapt to this and see where the wind is blowing and not have sort of a rigid structure. So it’s not a monolith there. But I did notice that there is a lot of fear around it and there’s a lot of uncertainty in the writing community specifically. English has already taken a backseat to STEM for many years now in higher ed. And so I think there is a wariness around AI as it seems to be encroaching on writing now. And that’s scary. I get it. But I think it’s also really interesting to think about how we might embrace and sort of intermingle the organic, the sort of spontaneous writing that comes from the soul, with this robotic, technological engine that just spits out stuff that sounds somewhat right from a large language model. So it’s complicated and it is emotionally confusing, I think, for both myself and other members of the writing community, if you will, on campus. But I don’t know, I think we need to start questioning that fear and see and ask ourselves where it’s coming from and why we’re latching on to so many war metaphors and so many ways of thinking and talking about AI that revolve around fear.
  3. This is a photo of the Hendrix College campus looking out from the steps of Bailey Library, where I did a lot of work in my four years there. And this campus was the place where some of those moments happened, where I realized that large language model AI would be or become something that my peers and I would have to contend with a lot more than maybe I had thought in the past. So, for a long time, any conversations about AI, whether they be more casual or in the news or in an academic sense, all seemed extremely hypothetical to me. And I didn’t really think that they applied to me at the time. And that AI would be something that would become super relevant in my lifetime, or just in the work that I would be doing in school or in my career. Because I’m not someone that is abnormally tech savvy or plans to work in, say, computer science or computer engineering or something like that in the future. So I didn’t necessarily see those conversations about AI as super relevant to me, even though I was aware that the technology was improving and that it would probably eventually make its way into other parts of our lives. I didn’t necessarily know what that would look like, but an academic institution is a great place for more conversations about that to happen and also allows more opportunities for AI to make its way into our lives than in my everyday life. And so I remember very clearly, I think, the first time that AI had explicitly become a part of conversation at Hendrix, and it was in a discussion in one of my psychology classes when we were going over the syllabus. I want to say this was my sophomore year, or early in my college career, and this was the first time that a professor had explicitly mentioned AI when going over their syllabus. And so the policy on AI was not clear at the time. And of course, Hendrix still doesn’t have a policy on it, and they definitely didn’t at the time. 
So even though there was no campus wide policy about AI, this professor had made a point to mention it in the syllabus, albeit briefly. And so we kind of had a short conversation about how AI looks right now, how it’s going to look in the future. And so that was definitely a moment where I realized that, even though I am not working with computers all the time, AI is definitely going to have a presence in academia and in other parts of my life sooner than I realize.
  4. I took a picture of the notes that I took at a conference of the South-Central Writing Centers Association at the University of Arkansas in Fayetteville. It was my first time in Fayetteville, and I saw a presentation there where a professor talked about some AI applications to grade papers. And this was the first time I had heard of these applications. And it never occurred to me to actually use AI for grading. It occurred to me to use it for writing. But using it for grading hadn’t occurred to me. And I remember the professor talking about how those applications made grading more humane. And that has stuck with me because I’m all for making anything more humane. My first reaction, though, was, well, I’m not sure how students would feel, or how the parents of students would feel, to learn that professors are using AI to grade. And not because I assume what their response might be, although I do imagine some would object, but first, I kind of rejected it like a lot of things related to AI. At least at first, it’s hard to avoid that knee-jerk reaction, or it has been for me. But when I saw that, when I heard about making grading more humane using AI, that really got me thinking. Finding that humaneness might have been the first time I heard its connection with AI articulated explicitly, because I’m sure at the basis of a lot of uses of AI is a desire to make something more humane, or more bearable, or less demanding for humans. So I’m still thinking about that, because even applying these tools, and at this point we haven’t, I’m concerned about how much I would let go of my grading, or if I’m going to feel like I’m letting go. But if it’s really making it more humane… I have tried different things in my own grading and I don’t know how much more humane it has become in the years I’ve been teaching. Is ChatGPT a solution? I am not sure. So this photo is of my notes from that time, and then a picture of Fayetteville at night, downtown Fayetteville at night.
For me, they mark the occasion of having humaneness and AI really connected.

At what moment did you realize large language model artificial intelligence had a definite presence at Hendrix College? 

  1. I would have to say that it happened in genetics class during the Fall semester my sophomore year. Pictured is a Petri dish with some bacterial colonies on it that we grew in the lab, here used to represent how one of my table mates—we were all a group—suggested to just let ChatGPT do the work for one of our lab report write ups, and I remember panicking. For one, the professor hadn’t made it explicit one way or another what his stance was on the use of this type of AI, although this particular usage was unlikely to be popular with him. For another, this was the first time I’d ever seen someone in person just offloading their writing work to a computer, and since we were partners, my head was also on the line here. Thankfully, I managed to convince them otherwise, but it was kind of a scary first in-person encounter.
  2. I remember when I came back to campus in January of 2023. I remember having so many conversations with people about AI, and I think a common thread through those was not fear necessarily, but something related to fear that was tinged with a sort of excitement or uncertainty. There was a tension. I remember coming back after the break. And I remember this one conversation in the cafeteria with like six or seven people. And everyone had a take and I think I remember so vividly that it was this sort of connecting conversation. It brought everyone together to debate the pros and cons, to analyze how the future might unfold with AI. And it was a really interesting time. And it’s funny now because I’m recording this in July of 2024 and I don’t know, a lot of the wind has come out of the sails there, and maybe I’m just not up to date with the current unfolding of AI, but I do think it lost that popular excitement when ChatGPT was first released. Because I believe it really blew up in early December of 2022. And so we kind of all experimented with it either before we left for break or over break. And then that return for the Spring semester was just a really interesting time. So this photo is of an ice storm in January. And I remember huddling inside the cafeteria, which is right behind where this photo was taken. And I took this right after that big conversation with six or seven people. So, yeah, just looking back on that return is interesting because there was this air of potentiality that I think has sort of left the conversation around AI. 
  3. So this photo was taken in a film musicals class I took my senior year at Hendrix, and so this was an English course that we did a lot of writing in, as well as in-class discussions about readings we had read or films we had to watch. And while the actual film in the photo isn’t necessarily relevant, what matters is that this class was the place where it became a lot more evident that AI had a more significant presence on the Hendrix College campus, and that its presence was becoming a little bit more obvious to students and professors alike. So even though conversations about AI had been kind of swirling on campus before this and I had had professors address it, albeit briefly, in syllabi, those conversations were mostly in passing. And there wasn’t yet a formal campus wide policy on AI at the moment. So professors were kind of left to their own devices to consider whether they should have a policy, what that policy should look like, what that meant for students. And so students were also in the same boat. Those decisions were kind of up in the air still. But in this class we had a lengthier policy on AI in the syllabus. And our professor also invited us as students to contribute to the conversation surrounding this policy. So we actually took some class time to talk about AI and its role, especially in English courses that are a lot more writing-heavy, and about what a policy should look like in that kind of class and why. And so it was really interesting being able to hear other people’s opinions moderated by our professor in that case. And ultimately, we came to the conclusion that AI use would not be appropriate in that class to maintain the creativity and the integrity of writing about film, an experience that is so innately human, and of being able to write honestly and reflect honestly and truly experience film with others in a way that we didn’t think AI could accurately or fairly capture.
So I really thought that both our process in coming to that policy and the policy itself were really interesting, and changed my perception about AI and its presence on the Hendrix campus from something that was kind of just mentioned in passing and hadn’t really yet affected me as much. But being involved in that conversation definitely signaled that its presence was growing and changing and that we would definitely have to contend with it at some point as both students and faculty.
  4. That redbud tree is visible through the window of Mills Library, which is a space we have dedicated to former Arkansas Representative and Hendrix alum Wilbur Mills. A lot of meetings and gatherings happen at Mills Library, and our Associate Provost of Faculty Development held a conversation there about AI on campus for faculty. And it was the first such event that I attended. And if I’m not mistaken, I think it was the first event like that that was organized at an institution-wide level, at least for faculty, and the goal was to talk about concerns with AI and share the resources that were available at the time. But also, the Committee on Academic Integrity had a representative at that meeting who talked about the policy changes to academic integrity that they had made to respond to AI and how it was used in the classroom and in courses and so on. And I remember that was the first time I felt the depth to which the college leadership had been thinking about AI and what was going to happen with it. I remember feeling that everyone seemed, or everyone expressed at that meeting more worry than excitement. I think that’s certainly still the case. But that photo is from a day like the one when we had that meeting. It was a sunny day and the tree looked serene. And that contrasted with the atmosphere inside. It wasn’t panic or anything like that, but it was a feeling of “what do we do now?” This uncertainty, I think, remains to some extent. But I took the picture now and think: it’s a huge window, it’s hard not to have your eyes drawn to the tree. And it looked like a very nice day. And that’s when that meeting happened, which was the first time I felt the college showed how AI had already altered policy and documentation. So that tree is a memory of that moment.

Are there skills, ideas, habits, convictions, anecdotes or facts that enable you to think critically and confidently about AI and its role in your work as a writer and a writing consultant?

  1. It is my firm belief that those who use AI to write the majority of their work are simply crippling themselves academically. You pay exorbitant amounts of money to put your grade and student status on the line by not engaging with any of the concepts you’re paying for. Furthermore, for students in lower grade levels like, say, elementary and middle school, I believe that their use of AI of this nature can be downright detrimental to learning how to write competently in an academic setting. They’re not practicing their abilities to plan, execute on or revise their written work. This also applies, albeit less so, to my peers. However, I’m not concerned with AI taking over all writing positions and obsoleting my job. I had a philosophy professor say it directly last fall: AI is terrible at philosophy. It’s a side effect of how it’s just a random word generator with fancy weightings, and it fails in all instances that actually require novel thought and idea synthesis. In addition, getting into my choice of image, AI is incapable of intention in writing for all the same reasons that it is bad at philosophy. My chosen image is a snapshot of a short remix poem made during an activity at school. An AI can indeed chop up words from different sources and rearrange them to make something that appears as a poem. But it is unable to attempt to work around a concept, or to develop any concepts provided to it, or to analyze any of the ideas given to it by another person. And well, all that remains solely within the context of the human brain.
  2. When I came back to school for the Spring semester of 2023, pretty much all of my teachers had some sort of class conversation during the first week of school where we talked about AI and how it would affect the classroom. And most of my teachers took a pretty firm stance against it. That it was too untested, that it was going to corrode the learning environment, that they didn’t want us using it to write essays and that sort of thing. And so, yeah, that was definitely frustrating because I really perceived it from the get-go as a tool, more than something to be feared. And I think possibly there’s an element of generational thinking there, but I think that’s a generalization. But regardless, I was sort of met by the school largely and from my perspective with more fear than excitement or anticipation. And I wonder at the relationship between how we perceive education in higher ed and the fear of AI. I think about our fixation in higher ed on competition and individual success and hyper-specialization. And all of these sorts of things are tailored towards the idea of individual genius, individual commitment and learning and the idea that someone might use AI to write a paper for them is anathema to that way of thinking that says that’s cheating, that’s wrong. But AI is here, and it’s just going to get stronger. It’s just going to get harder to distinguish between a human writer and an AI writer as time goes on. And so maybe instead of asking ourselves, “How do we stop this? How do we combat this?,” I think it’s interesting to think about, “How might we adapt our education system and these sorts of values that we are striving for, that we are hoping to embody as students and teachers? 
How might we reimagine what it means to educate and to be educated?” And if that means a more communal approach to learning, if it means more project-based pedagogies, if it means a nature-based curriculum, if it means an emphasis on what our bodies can do and how that impacts what our brains can do… there’s so many directions that might take. The point is that instead of viewing AI as an existential threat to our education system and sort of panicking around that concept, how might we reimagine what it means or how might we reimagine our very education system and the way we think about it?
  3. As a writer and a now former writing consultant, I think that my ideas and opinions about AI and its use in any scenarios are definitely still developing, mostly because the technology itself is always developing. And also, just my experience with it firsthand is still a little bit lacking. And so I like to think I’m open-minded about it, but I’m not completely sure where I stand in terms of its use, like a lot of people. I do definitely think that I have certain ideas and habits and practices that allow me to be able to think critically and confidently about AI and its role in my work professionally, academically or otherwise. And so this photo represents a little bit of those practices. This is a photo of my process brainstorming and developing a project for a film musicals class that I took my senior year at Hendrix. And so these flashcards laid out are a bunch of characters and scenes in the movie I am going to be writing about. And this piece was actually a creative nonfiction piece, which was my first time writing in this genre. And so it was definitely something I was uncomfortable with at the time. And so I spent a lot of time prepping and brainstorming and developing ideas for this project. And I thought that this picture really captured what I think AI still lacks in all of its development in writing. I think that this process that I went through was extremely labor intensive on the front end of this project. And the way I think writing about film in this case and writing about characters and how they relate to you in a genre like creative nonfiction is just something that I don’t think AI at this point can replicate. It’s not able to draw connections between myself and characters, myself and my experience watching a film, and even writing about a film itself. Those are things that are very internal and very human that I just don’t think AI has gotten to yet.
And so even though I can, of course, appreciate the technology of AI and I think it does have a role someplace, even if we don’t know exactly what that is yet, I think it’s important to maintain kind of ideals like this that allow me to think realistically about the role of AI and its limits, because I think we can accept that AI has a growing influence, while also respecting its limits and respecting our ability as humans to write and create in a way that machines simply can’t replicate. And so I’ve done a lot of processes similar to this, pictured here, in all different kinds of subjects, even outside of writing. And I think also this process, these processes are really satisfying and allow you to grow as a writer and hopefully as a writing consultant as well, allowing people and encouraging people to develop those skills on their own and feel more confident in their own writing ability and using AI as more of a supplement to it rather than something to replace it.
  4. I took that picture at my house and that’s my cat, Sophie. And it was a nice image, I thought. But it also shows the office in the background. It’s empty. Neither my wife nor I were there, and I was thinking that a lot of the pictures that I have for this project are empty of people. And that absence to me is appropriate when talking about something like AI. But it’s also a reminder of a thing that I believe we have that we need to think about more seriously, which is our presence, right? That we have to fill the spaces that allow us to be present to that which we are studying, that which we’re going to write about. We need to be present to the work that we are going to read or see or listen to. And that presence is going to let us write about it. It’s going to enable us to write about it. So that presence, the fact that we have a body, is what creates the relationships that will come out in writing in words or other modalities that communicate. So having a body—something that I don’t think the AI has, at least not in the way we might conventionally understand it—is our best writing tool because it’s what generates our attention to something. And that’s the start of good writing. So my belief and what I turn to, to think about something we have that allows us to face writing in the age of AI, is that we have a body and more than ever I feel that we have to think about writing as a bodily activity and not a product of the so-called life of the mind. It is something we do with the body and we need to emphasize that right now.
  5. With my experience as a writer, especially a creative writer and a writing consultant, I can confidently say that AI lacks the experience, culture and diversity within the voice to replicate personal and deeper levels of work. At the same time, though, I can also say that AI has been beneficial to writing. I’ve had the ability to discuss the usage of AI and the writing processes of some of the people who would make appointments with me. Sometimes people would come simply to fill out the piece of paper that says that they came to the writing center. Yeah, they wouldn’t have many questions when normally people would come in and say, “I don’t get the rubric,” or “Did I do this well?” So then I began wondering what has changed, what’s happening? So some of them would admit to using ChatGPT. And my initial reaction was that it is not good for academic integrity until they let me know that they used AI not to write the paper, but to copy and paste the rubric into ChatGPT in order to get a better understanding of the rubric. And then they were able to write the paper on their own. I then realized that AI can be a useful tool that doesn’t have to take away from academic integrity, but rather enhances it. I didn’t realize that AI can be very beneficial for people who struggle with comprehension or ability, ease and access, or are just going through a rough time and need some assistance. At the same time, I’m still concerned and critical, as that line can be blurred and crossed many times when ethics comes into play. For instance, scientific reports can often lack voice and individuality, thus being more likely to be subjected to AI. I do not believe that AI always has to fall into this negative connotation of taking away from the writing or the writer. But I do believe that it needs to have clear guidelines so as to remain a helpful tool.
We are in a time period where developing and retaining critical reading, writing and comprehension skills are integral to the continuation of our literary society.


https://thepeerreview-iwca.org