Conducting and Composing RAD Research in the Writing Center: A Guide for New Authors

 

Dana Lynn Driscoll, Indiana University of Pennsylvania

Roger Powell, Indiana University of Pennsylvania


The field of writing center studies has expressed a growing interest in RAD research (replicable, aggregable, and data-supported research) as a tool for developing evidence-supported best practices. In addition to shaping how we serve writers within individual centers, RAD provides a common language to reach external audiences, thereby legitimizing our work. Despite its benefits, many writing center practitioners lack access to knowledge and education about RAD research. Further, few publications provide novice researchers with guidelines to effectively conduct and write about RAD research. In this article, we not only address this gap but also present RAD as more than a research concept: it is a process that shapes our inquiry, facilitates our scholarly identity, strengthens our credibility, and positions us to speak with authority.




Defining RAD Research

RAD research is a term derived from Richard Haswell's (2005) work that refers to any kind of research with three key features: it is replicable, meaning that others can conduct the same study in a different writing center; it is aggregable, meaning that the original work is specified and clear enough that it can be built upon by others; and it is data-supported, meaning that the claims it makes are supported with systematic data. While the term RAD is specific to composition studies, these principles govern most research done in the sciences and social sciences (where it is simply called "research"). Confusion about RAD is still present among writing center practitioners; it often manifests in claims that RAD is quantitative in nature and therefore inappropriate for writing center inquiry (Driscoll and Wynn Perdue, 2014). RAD is not synonymous with a type of data collection; rather, it is a systematic process for handling any data that we collect, be it qualitative, quantitative, or mixed. Many qualitative studies rest upon the principles of RAD; for example, Talk about Writing by Mackiewicz and Thompson (2015) is a qualitative RAD study because the authors specify their coding strategies, articulate all methods, and provide arguments rooted firmly in their data. As such, another researcher can replicate their method in a different center. While RAD research has not historically been a large part of writing center scholarship, it has a place within this field.


"Research" is another confusing term in the writing center community (Driscoll and Wynn Perdue, 2014). Perhaps some of this confusion stems from how research is defined in the teaching of first-year composition, typically as a secondary source review. Theoretically informed articles (which some may call "scholarship") are also sometimes called "research." Following Haswell (2005), we define research as a systematic practice using original data that is distinct from, or in addition to, lore-based understanding. Roger's own experiences demonstrate the competing definitions of research:

Before I began doctoral study at Indiana University of Pennsylvania (IUP), I conceptualized research in three ways: using scholarly sources to situate my argument within an academic conversation, drawing on others’ theoretical work and applying it to a new teaching/tutoring context, and using anecdotes from my teaching/tutoring experience as evidence to make scholarly arguments. As I began my doctoral program, I started to recognize RAD research as another form of viable research that constructs knowledge through systematically collecting data to better understand teaching, tutoring, and learning.

While we recognize that writing center lore is extraordinarily valuable, this article emphasizes RAD research and what it provides to writing center researchers.


RAD as a Discovery Process

RAD research is typically concerned with what can be learned from the data, how it can inform writing center practice, and how it best supports knowledge building in the field. However, individual researchers also reap tangible benefits. In this section, we step back and explore RAD as a discovery process.

When we conduct RAD research, we test our assumptions about tutoring, teaching, and administration. A host of complex factors lie behind what appears to be a simple problem, and only research can reveal the nature of those factors. RAD research is a valuable form of learning because it informs our practice and pedagogy by testing what we think we know, allowing us to arrive at a deeper and more complex understanding. Again, Roger discovered this first-hand:

In one of my first Ph.D. classes, I was asked to write down every writing assignment I had given as a writing teacher and then be prepared to justify that assignment. During class discussion, one of my colleagues discussed an analysis paper he assigned, saying that he thought it helped students develop their critical abilities. My professor then rattled off a series of questions that overwhelmed my colleague: "Okay, that is good, but how do you know that they developed those skills? Did you collect papers from students? Did you measure their learning through a developed instrument? Does it work with different students? Did you gather feedback from your students to see if they thought they learned from the activity?" My colleague realized he had never tested whether the assignment developed critical skills in his students. At this point, I realized this was something I did in the classroom and in the writing center as well. I began to realize that, in order to test my personal assumptions about teaching and tutoring, I needed to conduct RAD research to get multiple perspectives and measure students' learning through data collection. Just because I thought something worked doesn't mean it actually did, and only by gathering data could I be sure of its efficacy.


As Roger's story depicts, we can use RAD research to test our personal assumptions by collecting and examining data on a specific learning activity or writing task from multiple perspectives. When we collect data, we gain insights from both participants and our co-researcher(s). These data encourage us to see our practices in new and helpful ways. If tutors and teachers understand what aids or hinders learning in a tutoring session or a classroom from the perspectives of students, they can adjust their practice accordingly. In turn, as we will discuss in more depth below, others can learn from the data when we publish our findings in scholarly journals and books. A reflection on "Theory, Lore, and More," Dana's (2012) collaboration with Sherry Wynn Perdue, helps illustrate this point:


Our initial 2012 study of RAD research in The Writing Center Journal demonstrated that fewer than six percent of the articles published between 1980 and 2009 were RAD. After completing this first study, we focused on understanding barriers to conducting RAD research. Our research questions were rooted in our own experience; that is, we assumed that the problem with RAD was an educational barrier and that people weren't being taught research methods in graduate school. This assumption was partially correct, but what we discovered in our interviews and focus groups was that while education was a barrier, it could be circumvented, and other barriers like time, job requirements, and sponsorship were much more difficult to overcome. What we initially thought was mostly a lack of preparation turned out to be a highly complex set of circumstances.

This example illustrates how a rather simple outcome (like the lack of RAD research in the WCJ prior to 2009) may result from a host of complex causes, and it allowed Dana and Sherry to challenge their own understanding and assumptions about the work of writing center professionals.

Another critical part of the discovery that RAD provides is what happens when the data turn out differently than planned. When students enter graduate school and start reading RAD studies in various journals, they typically see a neat package: the story the authors were able to tell with the data, fit into the confines of a journal article. The research story as reported rarely includes the "messy" parts of doing RAD, such as the tumultuous journey the researchers took to reach the point of publishing. Unexpected findings and messy data happen to novice and experienced researchers alike, and they should not be viewed as failures but rather as opportunities to gain new insights, ask different questions, or move in a new direction.

Although most researchers have expectations about what they may find in a study, keeping an open mind is critical; often, the data will disrupt your expectations. While this can cause tension and stress (especially if it occurs at the dissertation stage), it can also open the door to new knowledge, new avenues of study, and a great deal of personal learning and growth. The key is to look at your data honestly, see what stories they hold, and ask what they can teach you. Dana's early study of transfer of learning illustrates this principle nicely:

I began my dissertation on writing transfer in 2007. My dissertation study was rooted in pedagogical practices, inspired by two of my committee members and mentors, Linda Bergmann and Anne Beaufort. I designed a study that examined how different first-year writing (FYW) curricula at Purdue influenced students' subsequent transfer the semester after their FYW course. This design made perfect sense at the time—many early writing transfer publications emphasized curricular interventions and teaching for transfer. Based on these early findings, I expected that different curricula (e.g., a rhetorical or a literary focus) would lead to different kinds of transfer. But that's not what my data said at all. The students often experienced these curricula through their own internal characteristics or dispositions; it was the dispositions that seemed to matter more than the curricula. These dispositions varied widely: for example, the value students placed on the course, their beliefs about writing in their future careers, their motivation, their beliefs about themselves as writers, their metacognitive awareness, and so forth. Once I had done a good part of the data analysis, I sat there in shock: how was I to complete my dissertation? How could I revise the opening chapters I had already written, which focused on curricula? What if my data was flawed? What if I was wrong? Why wasn't anyone else publishing about this? In the process of weeping and gnashing of teeth, I met Jennifer Wells, who was also working on her dissertation. Jennifer had the exact same findings but with a different population of students. We both managed to finish dissertations that seemed to run contrary to the popular literature at the time and supported each other in that process. Eventually, we shared our findings in what is now a rather well-cited article published in Composition Forum in 2012. Since then, the field's interest in student dispositions in learning has grown tremendously.

Dana’s story showcases key principles of the RAD discovery process. First, previous studies rarely hold all the answers and may be blind to other aspects of a given phenomenon. Second, researchers have to be flexible. Third, researchers must be true to their data. In this case, Jennifer and Dana’s findings led the field to reconsider the nature of student dispositions in writing transfer.

Another challenge occurs when we discover that the methodology or instrument is not adequate to measure the phenomenon under study. In the case of poor methods or study instruments, there are other kinds of lessons to learn. Even the most experienced researchers sometimes conduct a failed study, where, despite their best efforts, the phenomena they want to measure simply cannot be measured with the available tools or resources. Perhaps the survey questions did not really get at the heart of the issue, or the methods needed to understand a particular phenomenon had not yet been developed. While some situations are unavoidable, we can minimize the risk by asking mentors and collaborators for feedback, by operationalizing our research terms to ensure we are clearly measuring what we hope to measure, and by piloting our methods before engaging in a larger study. Sometimes, however, the best intentions result in "learning experiences," as Dana now describes:

After our successful article on writing transfer and student dispositions in Composition Forum, Jennifer and I joined a larger research team, The Writing Transfer Project, to study the factors that contribute to students' long-term learning and transfer at four institutions. We developed numerous codes to analyze students' reflective writing and interview data, and then worked to examine those codes in relationship to the student writing over a two-year period. Included in the 98 study codes were 10 disposition codes, drawn from our two dissertations and related work (these included positive and negative aspects of value, persistence, self-efficacy, attribution, and self-regulation). The problems began almost immediately: the coders didn't achieve high inter-coder reliability with dispositions as quickly as the other groups. Throughout the coding, our disposition group struggled with the interpretation of the codes. While the rest of the groups finished their coding within the time we had allotted, the dispositions coders barely made it halfway through the same material. The research team was so uncomfortable with how the coding went that we decided to throw out the first year of coding (over a week of work by five coders), revise the codes over the next six months, and dedicate twice as many coders to dispositions during our second year of coding. Despite all we had learned, the second year was equally plagued with problems. While many of our findings in the broader study were fruitful, suggesting that our overall methods and coding strategies conceptually worked well, the disposition codes were largely a failure. When we tried analyzing the data, unsurprisingly, none of the results made sense. Rather than giving up hope, we decided to step back and examine what happened during our coding. That is, we conducted an analysis of our coders' interpretations of the dispositions, what was coded successfully, what was coded unsuccessfully, and what had been missed. We turned what I had frustratingly called a "train wreck" into a fascinating study-of-a-failed-study that revealed really interesting patterns about the complex nature of studying and conceptualizing dispositions. This has led us to rethink how we code for dispositions and how they intersect with writing transfer.

The best intentions do not always lead to the desired learning. Even when working from constructs established in the field, the research process and methods you choose to employ can still lead to problematic—and fascinating—junctures.
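Dana's account turns on inter-coder reliability, the degree to which independent coders assign the same codes to the same data. As a minimal sketch in Python (not the Writing Transfer Project's actual procedure; the disposition labels and the 0.7 threshold below are hypothetical), here is how two coders' labels might be compared using Cohen's kappa, one common agreement statistic:

    # A minimal sketch of checking inter-coder reliability with Cohen's kappa.
    # The disposition labels and the 0.7 threshold are hypothetical; this is
    # not the Writing Transfer Project's actual procedure.
    from sklearn.metrics import cohen_kappa_score

    # Each list holds one coder's label for the same ten student excerpts.
    coder_a = ["value+", "value-", "self-efficacy+", "value+", "persistence-",
               "value+", "self-efficacy-", "value-", "persistence+", "value+"]
    coder_b = ["value+", "value-", "self-efficacy+", "value-", "persistence-",
               "value+", "self-efficacy-", "value-", "value+", "value+"]

    kappa = cohen_kappa_score(coder_a, coder_b)
    print(f"Cohen's kappa: {kappa:.2f}")

    # Teams often treat low agreement as a signal to refine code definitions
    # and retrain coders before coding continues.
    if kappa < 0.7:
        print("Agreement is low: revisit the code definitions before proceeding.")

Reporting the agreement statistic, and what the team did when agreement was low, is part of telling the full story of the methods.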

Sometimes researchers have recruiting difficulties. This may happen for any of the following reasons: lack of time or interest on the part of potential participants; inadequate access to the desired population; a project timeline that does not work for participants (such as trying to interview students at the end of the term or in the summer months); poor sampling strategies; or inadequate financial resources to recruit participants. Dana's experience in trying to conduct a faculty survey of Writing Lab use at Purdue illustrates this point:

I identified a gap in our understanding of faculty perceptions of the Writing Lab. Because sending out a survey to all faculty members via the university email system was impossible given university policy, we opted to use regular campus mail. I remember stapling and labeling over 1,000 surveys to faculty, which took days. I remember the horror when only 35 of those surveys were eventually returned, some weeks or months later. Since I had proposed this project for my first IWCA conference, I was embarrassed by the low response rate but continued with the analysis and learned what I could. I did learn something valuable from the survey: busy faculty have little time to respond to anything, and if I wanted to reach them, I had to come up with a different approach.

RAD Research as Critical Reading

New researchers need to become "discerning consumers" of research because doing so instills an ability to judge the quality of the work and to decide if the findings are important in informing practice (Perry, 2011, p. 3). When you read other researchers' work with a critical eye, you start to spot poorly designed studies that yield fuzzy results, and you begin to realize that you cannot overgeneralize the lessons offered within one study. As a result of this heightened awareness, you realize that you have to be careful in applying insights from poorly designed studies to your writing center's practice. In addition, becoming a discerning consumer of research allows you to make better methodological decisions in your own research. By reading others' research critically, you are positioned to replicate and aggregate (that is, to form new questions and lines of inquiry that could be used to construct new knowledge about writing center practice). However, as we will explain in more depth in the following section, critically reading research should not tarnish our respect for other researchers' work.


As much as you need to be a critical consumer of RAD research (or any other kind of scholarship), engaging in research should encourage a healthy respect for others' work. This is particularly important for RAD research because the workload that such a study requires is easy to underestimate until you conduct one yourself. RAD is often more time-consuming and challenging than expected, and it requires creativity, mental acuity, and full engagement. Dana remembers a relevant teaching moment with Patricia Sullivan during a research methods course in her Ph.D. program:

Pat had chosen a set of studies for us to read for class, many of which had serious flaws. When we started class, Pat asked us what we thought of one of the studies, and as a class we began heavily critiquing the studies. She listened to us for some time. Then, after we had quieted down, she paused for a moment and said to us, "Have any of you ever attempted a study like this?" The room was silent; as new Ph.D. students, none of us had. She continued, "It's very easy to tear someone else's work apart, and very difficult to conduct your own work. Keep that in mind when you are reading." That moment changed how I looked at research, even research that was poorly done, and it's a concept I now teach my own Ph.D. students. Even if the research or writing isn't up to publication standards, it still represents someone's life energy and work, and we need to take that into consideration when reading and responding.

It's easy to critique from the sidelines. Until you start doing the work yourself, it is hard to understand how much investment went into a study. You also do not see the "hidden" investments of time that may not be clearly revealed in the methods section. Often, researchers do not recognize the flaws until after the data is collected and they are in the middle of analysis. Should they throw out everything they have done? Or do they acknowledge what they have learned and move forward, knowing there is still value in the work? Do they publish it in the hopes that others will learn from their mistakes? These are the kinds of questions that researchers face, and you can only appreciate these decisions when you face them yourself. As Dana's stories about studying dispositions show, our growth as researchers depends as much on our failures as it does on what goes as planned.
A final issue with respecting others' work has to do with the scope and aims of a particular study. Even studies with small numbers of participants are highly valuable; sometimes, in order to achieve RAD standards and to engage in early steps of research, researchers purposefully keep their perspective and approach narrow. Multi-institutional and collaborative work often comes after these smaller studies. And don't forget, qualitative research can yield important insights whether the sample is one or many, because its goal is not generalizability.

RAD Research Builds Scholarly Identity

Many graduate students go to their first conference, see scholars they've read in their courses, and wonder to themselves, "How could I possibly make a contribution? How can I have my ideas and my voice heard? How could I ever do even 10% of what so-and-so has done for the field?" We propose that RAD research gives new researchers and graduate students the opportunity to clearly and definitively build a scholarly identity in three ways: by developing agency, ethos, and voice; by charting a research trajectory; and by developing academic writing expertise (which we will discuss in detail in the following sections).


RAD Research as Agency

WCJ editors Eodice, Jordan, and Price (2014) recently wrote that a key problem in writing centers is that "[writing center scholars] have yet to embrace the identity of 'knowledge-maker'"; the lack of this identity can reduce agency (p. 12). RAD research helps fill this gap. As you collect data, you begin to carve out an identity as a researcher, one who is constructing knowledge in the field. The act of conducting RAD studies and uncovering interesting findings can empower new scholars—we start to realize that we have good ideas that need to be heard. One of Roger's professors, Dr. David Hanauer, describes it this way: "When you have good data, it doesn't matter who you are. You matter and people will listen to what you have to say." This agency can be particularly useful in writing centers because it can help directors demonstrate the efficacy of their writing centers and the legitimacy of the broader field.

Developing a Research Agenda

RAD research encourages the development of a research agenda by its very nature: as one study is completed, more questions arise. Initially, this can be quite frustrating, but it is actually exciting and invigorating. Roger found this out in one of his first RAD studies, conducted as a doctoral student:

In a study conducted to develop a tutor training program and assess the effectiveness of the tutoring in our synchronous Online Writing Center (OWC), I ended up discovering how much I didn't know about RAD or about the tutors I worked with. In our OWC, we use Cisco's WebEx videoconferencing to conduct real-time audio and video tutoring sessions. With students' permission, we video-record sessions and keep three years' worth of consultations. As the OWC coordinator, I created an observation protocol for our two summer online tutors to view three videos that I selected. The tutors watched these sessions and noted what tutoring strategies aided or hindered them. What I found was surprising: the tutors had observations that I did not have, and several of the key strategies from the tutor training curriculum that we had put in place were not recognized by the tutors. These results led me to ask more questions about online tutoring, the tutors, and conducting research: What is the role of tutors as researchers? How can we make tutor training stick with tutors over a longer period of time? How can we incorporate observations into an ongoing online tutor training program? And how can I use a more effective methodology to study these questions?

In Roger's case, his work as the OWC coordinator led him to a series of questions and a potentially longer study about online tutoring.

If you are unsure about developing a research agenda, we suggest starting with questions that interest you. What have you noticed that leaves you saying "I wonder why?" What ideas in others' work inspire you? At the end of an article or book, some researchers will discuss avenues for future research—these lists, from the newest articles in our field, are also a good place to start. After conducting even one study, you will find that you likely have much more complex and interesting questions. Ultimately, what RAD research does is help us develop clear research agendas through the act of simply collecting and analyzing data—the more data we collect, the more questions we are left with, and this can help drive our own research forward. Dana's example of her work on writing transfer and dispositions illustrates this point well—she did not set out to study dispositions, but that was where her data led her. The questions that RAD generates encourage us to continue to evolve as thinkers, tutors, and teachers.

Composing RAD Research

Now that we have covered some basics about RAD as a discovery process and as a vehicle for scholarly development, we turn to a discussion of how to write about RAD research. This section will examine the format of a RAD research article, the general components of a RAD study, and common composing challenges.


IMRAD as a Research Format

Many RAD articles, quantitative and qualitative alike, use what is called the "IMRAD" format, which stands for Introduction, Methods, Results, And Discussion. This genre is used extensively in academia because it allows research to be presented in a manageable, readable, and clear format that promotes replication, aggregation, and data-supported approaches. Driscoll and Wynn Perdue (2012) found that articles that used an IMRAD format were more RAD in nature than those that did not. IMRAD allows us to enter a conversation using language and genre features that other disciplines will recognize—it is a near-universal way to present research and allow others, regardless of field, to access it. Once one learns how to read and write using an IMRAD approach, one can contribute to many different kinds of conversations, including those beyond writing centers. For a book-length example of this format in a qualitative study, see Mackiewicz and Thompson (2015).

Establishing the Problem and Developing a Question

Developing a research agenda and asking the right research questions are key parts of the research process. Fuzzy questions lead to frustrating results and confusion during the analysis and writing, ultimately leading to article rejections. Taking the time up front to read previous research and to map out clear research questions can save you a lot of trouble down the road. We suggest that research questions go through several iterations to ensure that they are both narrow and measurable. “Is my writing center effective?” is a problematic question because “effective” is not defined and the question is quite broad. A better question might be “Does my writing center help students improve their grades on writing assignments?” This question shows a researcher what to measure (grades on writing) and how to proceed, and will be more effective than an overly broad question in helping the researcher gather useful data.
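To make the contrast concrete, here is a minimal sketch in Python of the measurement the narrowed question implies; all of the numbers and group names are invented for illustration, and a real study would need a defensible sampling plan and appropriate statistics:

    # A minimal, hypothetical sketch of the measurement implied by the
    # narrowed question "Does my writing center help students improve
    # their grades on writing assignments?" All numbers are invented.
    from statistics import mean

    # Assignment grades (as percentages) for two hypothetical groups.
    visited_center = [78, 85, 90, 72, 88, 81]
    did_not_visit = [70, 82, 75, 68, 79, 74]

    print(f"Mean grade, visited the center: {mean(visited_center):.1f}")
    print(f"Mean grade, did not visit:      {mean(did_not_visit):.1f}")

    # A raw difference in means is not evidence of effectiveness by itself;
    # the operationalized question simply tells the researcher what to measure.

The point is not the arithmetic but the operationalization: once "effective" becomes "grades on writing assignments," the researcher knows what data to collect.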


Telling the Research Story: The Literature Review

As Dana and Sherry Wynn Perdue recently discussed in their 2015 Writing Center Live presentation, RAD research is a story. We begin that story in the literature review, framing the previous landscape and how our story fits within it. We continue with the beginning of our own research story: how we collected the data and what we did to analyze it. We then tell the story of the results and what we think the results mean. This is what IMRAD is all about, and if you break it down in these terms, you can see how the story flows.


New graduate students and those new to research may feel that literature (or lit) reviews are rather easy to conduct because they are simply a collection of secondary sources. But this is not so. The lit review tells the story of the previous research, and this story helps set up your study, both in terms of the methodology you will use and how you will explain your findings. By situating your claims within the existing scholarship, you can enter and extend your field's conversation. A literature review is therefore a cornerstone of most RAD studies; without it, the collected data are de-contextualized. Literature reviews are critical to RAD for the following reasons:


     The landscape of previous research. A literature review demonstrates how your study is positioned in relationship to others' work. We note here Lerner's (2014) finding that many authors in The Writing Center Journal failed to cite sources outside of the WCJ; they also typically cited a small set of articles that affirmed lore-based beliefs. Select a range of representative articles on the topic. Although lit reviews need not be exhaustive, they should present the most relevant and timely work, especially previous RAD studies. Replication and aggregation of previous research starts with drawing upon that work in a literature review.


     Influence on study design and concepts. A second way that a literature review is critical to RAD is that it helps writers identify key methodological influences upon which to draw for their own studies. This may include using previous research to define and operationalize key study terms from the research questions; describing methodological influences that you are replicating or building upon; or discussing findings that you are looking to extend and support (these last two points deal with aggregation).


     Identify a gap. A third way that a literature review is critical for RAD research is that it can help us identify a gap in previous research, one that our present study fills. This is a very persuasive moment for a research writer—you want to show how your study fits in with the broader framework as well as how it affirms, qualifies, challenges or extends that work (or makes the strong case for replication).


     Common mistakes to avoid within a literature review. There are many ways in which a literature review can be less useful to a researcher than it might otherwise be or fall short of a reader’s expectations. Here, we will discuss the potential harm that may result from beginning your primary research before conducting a lit review, casting too wide a net when looking for sources to include in the review, and, conversely, limiting the scope of your search unnecessarily.


     Writing your lit review too late. One of the biggest problems new researchers have is that they come up with a great idea for a study and immediately start collecting data before consulting the literature. They are then forced to write a "retroactive" lit review and, most often, find all kinds of material that could have been useful to the study design. We suggest starting with a review of the literature (unless your method relies on not commencing with presuppositions), even if you save the actual drafting of the text until later in the research process. Once you have your good idea for a study, start by seeing what previous work has been done and what articles can help you operationalize your terms, define your methods, and so on. When it comes time to create your article or conference presentation, you will already know what previous research is important and will not find yourself inadvertently replicating someone else's work. You may notice that most dissertation processes follow this model: the previous research always precedes any new information or inquiry.


     The data dump. Those new to research—especially at the dissertation phase—may try to conduct an exhaustive literature review and find every study ever conducted on a topic. A good literature review is not exhaustive—it is focused and directly relevant. Figuring out what is directly relevant and what fits into your "story" is part of the challenge; determining what has not yet been said is also critical. You want to have a firm grasp of the important studies (the ones that are often cited), the newest work (because for research, timeliness matters), and the studies that directly inform your own. You may encounter a large group of literature that is often cited but marginally relevant; you can briefly cite it, say why it does not fit the study, and then set it aside (or footnote it). When you are writing your lit review, focus on telling a specific and explicit story of the previous work that led you to this point.


     The myopic view. As Lerner (2014) has pointed out and as we mentioned above, you want to draw upon relevant research both within and beyond writing centers. Limiting your search only to writing centers will not allow you to see all there is to see on a topic; for example, a study of writing anxiety in the writing center should include literature not only from writing centers and composition studies but also from educational psychology, as this field also deals with various kinds of anxiety. Articles from other fields on managing anxiety can help you understand your data and develop an appropriate methodology. Reading more broadly is part of how new knowledge is created and applied. It is for this reason that literature reviews also help us bring new ideas from other fields into writing centers.

The Methods

Being clear and detailed in the methods is particularly critical for RAD research—without clear and detailed methods, the research cannot be replicated or aggregated, nor can the results be trusted. A bad literature review is a problem, but a poorly written methods section is the difference between a RAD and a non-RAD study.


The methods are, in essence, the story of what you did and how you did it. This is where you show how you generated or collected the data and how you analyzed it. If you are doing work with human participants, you should have had your study approved by your university’s Institutional Review Board (IRB); some of the material from your IRB application can be adapted and included in your methods if appropriate. Your methods should include all of the following information, and while the order of the information is flexible, it should make sense to the reader.


     Research questions or hypotheses. While some writers like to end their literature review with their research questions, the questions may also appear within the methods section. Research questions should be clear, direct, and relevant to the study. Any terms in the research questions should be explicitly defined (either by you or by the previous literature). Writing center researchers generally use research questions more than hypotheses, but sometimes hypotheses are also appropriate.


     Ethics. If you are working with human participants, mention that your study was approved by your IRB. Note any ethical issues that arose (this infrequently happens with writing center work, but it does happen, and others should know to avoid whatever ethical issues you encountered).


     Participants. Describe who your participants are and how they were recruited. Sometimes demographic information in a study shows up in the methods, and sometimes it begins the results section, depending on the study.


     Selection and sampling. This refers to how your data were selected and/or sampled. Sampling refers to a systematic means of selecting data points for the purposes of your study: so, if you collected 100 student texts to read, you would talk about how you collected them, why you chose 100, who they were collected from, who may have been omitted, and so on (a brief illustration follows this list). A discussion of the selection of participants (that is, of who was recruited and how) is particularly important to include. Many articles published in the Writing Center Journal prior to 2009 did not specify why particular participants were selected for inclusion in the study (Driscoll and Wynn Perdue, 2012). A lack of information about participant selection not only prevents replication and aggregation, it also makes readers wonder whether the participants introduced bias into the study in some way or were a convenience sample—which casts doubt on the results. Note that the selection of participants needs to be discussed in both qualitative and quantitative research: if four students are chosen as "case studies," why were these four students selected? Do they represent other students?

     Instrumentation. If you used anything to collect data (like surveys, interview scripts, observation forms, and so on), you should describe the instruments and how they were created or where they came from. It is a useful research practice to pretest the instrument or use other validation measures. It is also useful to include instruments as an appendix to your study to support replication and aggregation.


     Study process. What did you do at each point of the study? Describe it in detail, making sure all major steps were covered. Remember that a methods section is the story of your research—tell that story to someone who has never heard it before.

 

     Data analysis. Data analysis is another area that is often not detailed enough, or is absent entirely, in writing center research (Driscoll and Wynn Perdue, 2012). You will want to describe everything you did to analyze your data: for qualitative data, this might mean an initial read of observations, creating coding categories, refining the categories, and rereading with the final codes (you would also likely provide the categories and their definitions to readers). For quantitative data, this could include any statistical calculations you performed and, for inferential statistics, what kinds of tests you ran (and what assumptions about the data you made). If you end up having to do any "cleaning" of data, meaning that you modify the dataset to remove data that is invalid or problematic (like session reports filled out with nonsensical information), you need to clearly indicate that you did so; a brief sketch of sampling and cleaning follows this list.

     Study limitations. To some degree, the placement of study limitations is a personal preference—some choose to address them as part of the methods section, whereas others choose to place them at the end of their manuscript. Dana's mentor and dissertation advisor, Dr. Linda Bergmann, made a compelling case for placing limitations in the methods: you don't want to end your study on a downer, and since you are talking about methods already, why not include the limitations of the methods?
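As promised above, here is a minimal sketch in Python of the sampling and cleaning steps; the text names, record fields, and validity rules are hypothetical illustrations, not a prescribed procedure:

    # A minimal sketch, on hypothetical records, of two steps described in
    # the list above: random sampling and "cleaning" invalid data.
    import random

    random.seed(42)  # record the seed so the exact sample can be reproduced

    # Sampling: draw 100 of 500 collected student texts at random; the
    # methods should state the pool, the sample size, and the procedure.
    all_texts = [f"student_text_{i:03d}" for i in range(1, 501)]
    sample = random.sample(all_texts, 100)
    print(f"Sampled {len(sample)} of {len(all_texts)} texts.")

    # Cleaning: remove session reports with empty or nonsensical entries,
    # and disclose in the methods that (and why) records were dropped.
    reports = [
        {"concern": "thesis clarity", "minutes": 45},
        {"concern": "", "minutes": 30},        # empty field: invalid
        {"concern": "jkjkjk", "minutes": 0},   # nonsensical, zero-minute session
        {"concern": "citation format", "minutes": 60},
    ]
    clean = [r for r in reports if r["concern"].strip() and r["minutes"] > 0]
    print(f"Kept {len(clean)} of {len(reports)} session reports.")

Whatever form these steps take, report them: the sampling rule (and seed, if randomized), and how many records were removed and why.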

Common Pitfalls in Writing Methods

Try to avoid the most common pitfalls when writing methods:

     Missing or imprecise research questions. Research questions may be absent or so broad that they are not useful to the study.


     Failing to operationalize key terms. Defining key terms in a research study helps you to understand methods of data collection and helps you answer your research questions.


     Not telling the whole story of the methods. Smagorinsky (2008) argues quite effectively that the methods section, especially in qualitative research, often lacks enough detail for readers to evaluate the data and claims. We advocate, as does Smagorinsky, for methods sections in which the approach and analyses are clear and detailed enough to be replicable.


     Failing to address sampling and selection explicitly. As mentioned above, this is one of the most frequently missed areas in prior writing center research.

     Not discussing study limitations. Discussing the limitations of the study is a critical component of ethical and transparent research. Further, if others want to replicate your work, they need to understand the problems you faced.


The Results

What often happens in research, particularly in qualitative research, is that you will have more results than you have room to discuss. As a researcher, your goal is to decide which results help you tell the story you want to tell and discuss those results clearly (saving the other results for another article, or using them as a springboard for future study). Bogging down the results section with everything you have learned will not only cause you to exceed the length requirements and sacrifice critical room for other necessary parts of the article (like the discussion section) but also exhaust your reader and obscure your most important findings. The best results section is one that is compelling, that leads clearly from the lit review and methods, and that presents only key findings.

Organizing a results section can be difficult, especially if you have a number of findings to share. A few approaches that work are to A) present results based on your research questions; B) present results based on major themes; or C) if you have multiple groups or kinds of data, present results based on those groups or types.


     Mistakes in the results section. A common pitfall in writing your results is combining the results with your discussion, where you may present a result and then discuss your interpretation of the result in the same section. Results should be reported first, so that the results of the study are clear, and then discussion should be separate so that you have a moment to step back from the result itself and think about its meaning. Otherwise, the results become “fuzzy” and it’s hard to tell what the researcher found and what their opinion is about this finding (this is the same reason that qualitative researchers use a double-entry notebook—to separate direct observations from thoughts about those observations).

Another common pitfall when writing complex or statistical results is failing to present the results in multiple, accessible formats. A good rule of thumb, taught to Dana by Donal Carlson, a social psychologist at Purdue, is to write your results in plain English first so "that your mother could read them." While this approach is critical to the accessibility of inferential statistics, it is also useful for any complex data, whether qualitative, quantitative, or mixed methods. It is advisable to use an accessible sentence that your mother could read as your topic sentence, followed by a detailed paragraph about your findings. Additionally, complex data can be presented in a table or graph—but only if that table or graph is likewise accessible and readable. Again, the job of the RAD researcher is to make everything as clear and understandable as possible—be careful not to mask your findings with unclear language or opaque statistics.


Concerning statistics, the placement of inferential statistics in articles (and dissertations) on writing centers and in the broader field of rhetoric and composition is inconsistent, and dissertation committees, editors, and reviewers have differing opinions. More than once, Dana and her co-authors have been asked to move statistics that were critical to an argument into an appendix or footnote, out of the main text, because not all readers could understand those statistics (even with the "your mother can read it" approach). We feel that this devalues the role those statistics play in research: some RAD studies rely heavily on the conclusions that can be drawn from inferential statistics, and these are an important and meaningful part of the study that should remain in the main text. Given this, presenting statistics in the text with explanations in a footnote (as necessary) is a good approach.


The Discussion


Readers often arrive at the end of a compelling study only to find one short page—or less—that discusses the meaning and implications of the study. This often occurs because authors have run over their word limit by the time they begin writing the discussion and do what they can to cram it in. Finding less than a page discussing the implications and results of an article is frustrating to readers who have just spent considerable time reading and understanding the results. We argue that the discussion section is where the meaning-making and field-building occur. If you think about the RAD research article as a story, the discussion is the "climax" of the story (with the "falling action" being the conclusion). Why would you short-change your climax?


One issue that new writers experience is how to reference results in the discussion without repeating themselves. One way to think about the split is that the results say "this is what I found," while the discussion says "this is what it means and how it connects." You might think about this as the difference between summary and analysis in first-year writing courses: summary should be short and succinct, especially when the audience has already read the text; it is the analysis that matters. Given this, we find it helpful to briefly review key findings prior to the discussion and then spend the bulk of the time discussing them.


So what exactly should this "analysis" of results look like? Good discussion sections seek to elaborate upon specific findings and their implications for writing center practice. Discussing key findings, connecting them with previous work (likely the work mentioned in the lit review), and working through their nuances can do much in the way of knowledge building for the field: when multiple studies in multiple contexts report similar findings, we are aggregating knowledge and building more general awareness that we can apply across contexts. If different studies have conflicting findings, that is an issue we need to investigate further and address directly. Here, questions can be raised that future research can consider.


Discussion sections include explicit considerations of key findings but also discuss broader implications in related areas. This means that your discussion section should not only talk about what your results mean but also help readers contextualize those results in the larger field and demonstrate how they impact the larger picture. The key with discussing broader issues is that you want to use hedging language rather than firm language. That is, you want to say things like “this could lead us to conclude” or “given this, we speculate” or “this may connect”; broader connections are important, and your study provides evidence, but it is never the be-all, end-all of research.


A common misstep in discussion sections is overstating your findings. Speaking honestly about what you learned (and what did not go well) and being realistic about what it teaches us about writing center practice is an important and humbling part of this process. This relates to the "data-supported" part of RAD: our conclusions must be rooted in the data that we provide—not in what we wished we had found, or what we hope to find next.

A good discussion section also addresses future work to be done—in their 2012 article, Dana and Sherry found that discussions of limitations and future work were the weakest of all areas in WCJ research articles. This speaks to a strong need for considering the future in our discussion sections: talk about what questions the study raised for you, what data could be collected next, and what issues arose that you had not originally considered. This is critically important work for the field, for it is here that you and/or someone else can plan how to continue this work.


Features of Good Research Writing


Now that we've presented an overview of the RAD research format, we conclude with some general writing tips. Some of the principles of well-organized writing that we teach our first-year writers are really useful for research writing. Research writing should be direct and clear—our methods and results are often hard enough to understand on their own. If we complicate our language or submit a disorganized manuscript, we make our work very difficult to follow.


     Clarity and precision. Research writing requires a level of clarity and precision not demanded by every other academic or non-academic genre. Pay careful attention to the language that you use to describe research and make sure it is accurate and precise. For this reason, it is also wise to avoid passive constructions in research writing, as they can obscure meaning.


     Organization. Even within the IMRAD format, you want to build in substantial organizational markers to help your readers understand what you are trying to convey. This includes making clear use of a focus statement or paragraph in the introduction that describes the study and using signposts and transitions. Headings and subheadings are consistently useful for reporting research.


     Avoid overconfidence. Resist seeing your work as the "be all, end all" study rather than a study that is making one contribution among many. Hedging terms like "seems to suggest," "provides evidence," or "we might consider how" help you avoid overconfidence.


     Terminology. When you are writing research, it's wise to avoid the word "proof," which implies that something is known beyond a shadow of a doubt (and really, can we ever say that for writing center work?). It's also wise to reserve the word "significant" for statistical significance (such as is found after conducting a t-test or ANOVA); a brief example follows this list. If you say your "findings are significant" in research, that means they are statistically so, and that requires a certain kind of study design.


     Conciseness. In a research article, every word counts. Learning how to say more with less is another key skill. Article word limits, especially those around 7,500 words, present challenges for more complex or detailed RAD research studies. This is why two of the top journals that publish writing research (Written Communication and Research in the Teaching of English) allow longer limits and why journal editors are sometimes flexible with word limits. Even so, when you begin writing RAD research, you will see how quickly you can accumulate 7,500 words—and you will experience the juggling act of being clear and precise while still being concise. In writing this article, Roger and Dana took over 11,000 words down to 9,000 with a conciseness edit.
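As a brief example of the "significant" point above, here is a minimal sketch in Python (using scipy) of the kind of test that licenses the word; the scores are invented, and the conventional .05 threshold is an illustration, not a recommendation for any particular study design:

    # A minimal sketch of what "statistically significant" refers to: an
    # independent-samples t-test on invented essay scores. Requires scipy.
    from scipy import stats

    # Hypothetical holistic essay scores for two groups of students.
    visited_center = [3.8, 4.1, 3.5, 4.4, 3.9, 4.2, 3.7, 4.0]
    did_not_visit = [3.4, 3.6, 3.2, 3.9, 3.3, 3.7, 3.5, 3.1]

    t_stat, p_value = stats.ttest_ind(visited_center, did_not_visit)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    # By convention, p < .05 is described as "statistically significant";
    # only a result like this, from a suitably designed study, warrants
    # calling findings "significant."
    if p_value < 0.05:
        print("The difference between the groups is statistically significant.")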


Conclusion and Collaboration

A final benefit of RAD is modeled through the writing of this article—a collaboration between a graduate student new to publishing and a more experienced faculty member. Collaborative research not only offers additional perspectives, it can also provide "sponsorship" opportunities between developing writing center scholars and already established ones (Wynn Perdue and Driscoll, under review). This mentorship helps new scholars develop research skills and can benefit writing center studies by increasing the amount of high-quality research. However, as one co-author of this article (Dana) has found in her research on the scholarly identity of writing center administrators, collaborative research and co-authoring have yet to be fully embraced in writing center research or in the humanities more generally (Wynn Perdue and Driscoll, under review). Writing center studies needs to encourage collaborative research to benefit its developing scholars and to counter the stigma toward collaborative research that has developed in the humanities. As Wynn Perdue and Driscoll argue, "sponsorship" between new authors and established scholars could be one of the most vital factors in new authors' future success.


We conclude by encouraging new scholars to try RAD research. While it can be frustrating and at times almost daunting, it is quite possibly some of the most rewarding research a scholar will ever do. Building the knowledge in the field, building one’s own scholarly agenda, and learning some really interesting things are only some of the many joys that await a RAD researcher.


References

Driscoll, D. L., & Wynn Perdue, S. (2012). Theory, lore, and more: An analysis of RAD research in The Writing Center Journal, 1980-2009. The Writing Center Journal, 32(1), 11-39.

Driscoll, D. L., & Wynn Perdue, S. (2014). RAD Research as a framework for writing center inquiry: Survey and interview data on writing center directors’ beliefs about research and research practices. The Writing Center Journal, 34(1), 105-134.

Eodice, M., Jordan, K., & Price, S. (2014). From the editors. The Writing Center Journal, 34(1), 11-14.

Geller, A. E., & Denny, H. (2013). Of ladybugs, low status, and loving the job: Writing center professionals navigating their careers. The Writing Center Journal, 33(1), 96-129.

Haswell, R. (2005). NCTE/CCCC’s recent war on scholarship. Written Communication, 22(2), 198-223.

Lerner, N. (2014). The unpromising present of writing center studies: Author and citation patterns in The Writing Center Journal, 1980 to 2009. The Writing Center Journal, 34(1), 67-104.

Mackiewicz, J., & Thompson, I. (2015). Talk about writing: The tutoring strategies of experienced writing center tutors. New York: Routledge.

Perry, F. L. (2011). Research in applied linguistics: Becoming a discerning consumer. New York: Routledge Taylor & Francis Group.

Smagorinsky, P. (2008). The method section as conceptual epicenter in constructing social science research reports. Written Communication, 25(3), 389-411. doi: 10.1177/0741088308317815


