Computer-Mediated Communication Magazine / Volume 2, Number 5 / May 1, 1995 / Page 10


"How Will This Improve Student Writing?" Reflections on an Exploratory Study of Online and Off-Line Texts

by Steve Krause (skrause@andy.bgsu.edu)

I first became interested in the pedagogical uses of online discussion groups in the Spring of 1993 when my then-colleague at Bowling Green State University, Bill Hart-Davidson, and I (with help, participation, and support from John Clark and Mick Doherty) set up an email listserv between our two first-year composition courses. I don't think we were entirely sure what would happen, but there was a wide variety of evidence in a number of academic journals suggesting it would be beneficial to our students. Computer online discussions similar to the one we set up have been credited with increasing class participation (Faigley, 1990; Groote, 1993), increasing the effectiveness of writing centers (Balester, 1992; Puccio, 1993), helping basic writers (Batschelet and Woodson, 1991; Balajthy, 1989), and encouraging collaboration (Fey, 1992; Mabrito, 1992; Barker and Kemp, 1990; Sirc and Reynolds, 1990; Thompson, 1990; Eldred, 1989; Kinkead, 1987).

Michael Spitzer (1990) suggested networked communications could encourage a greater sense of audience by fostering an "online discourse community" where writers and readers are genuinely communicating with each other and see a purpose behind their writing beyond the assignment itself. He argued that because computer networks change the dynamic of the classroom to an interactive and social one, they "have the potential to transform student writing from listless academic drudgery into writing that is purposeful and reader-based" (59).

Gail Hawisher noted in 1992 that online environments provide "a real and expanded audience" that student writers can return to with minimal restrictions on time and place (86). Delores K. Schriner and William C. Rice said that when students posted messages to each other via a computer network, "they knew they had an audience beyond the teacher, and as a result their writing emerged as 'real,' 'volunteered,' even urgent" (475).

By and large, I support these claims and am a firm believer in using online discussions (either in the form of Usenet style "newsgroups" or email "listservs") to extend the boundaries of classroom discussions. I can't imagine teaching a class without one anymore. However, while there is a great deal of evidence to suggest that online writing is beneficial in and of itself, there is little research that suggests the benefits of the online writing environment transfer to any other environment. As Hawisher noted, few computer network researchers have "asked the question that scholars in composition studies asked frequently of word processing environments: Will students' writing improve as a result of this technology and environment?" (85).

Hawisher's question is obviously a problematic one-- what do we mean by "improve as a result of this technology?" what do we mean by "writing?"-- but it is still in my view an extremely important question to consider. "How will this improve student writing?" is the question asked of CMC advocates by others within the academy, professionals who want and deserve a reasonable response before investing a substantial amount of time and money into upgrading computer facilities to improve writing programs. My exploratory study, which this essay briefly discusses, suggests that while online discussions are potentially valuable as teaching tools, there is no evidence to suggest that they influence writing in other "off-line" environments.

Refining the Question, Defining Terms and Assumptions

Before I go any further, I think I should take a moment to explain the basic terms of my research question and the assumptions I began with. For the purposes of my study, I refined the question of "How will this improve students' writing?" to "Is there a correlation between those who demonstrate a high degree of interactivity online and those who demonstrate a high degree of audience awareness off-line?" By "online," I mean the email posts the students made to the class mailing list or "listserv" that Hart-Davidson and I set up between our two classes, regardless of the subject of the student's email message. We both required students to post two email messages a week-- one "original" response to the readings or classroom activities and one "response" to another student's original response-- although, as the results I discuss suggest, the degree of compliance with this requirement varied wildly.

By "off-line," I mean the first "synthesis" essay assignment that the students in both Hart-Davidson's and my class completed. Bowling Green State University's first-year writing requirement is taught through a program called General Studies Writing that requires students in all sections to write the same number and type of assignments. To meet that requirement and to set the groundwork for this study, Hart-Davidson and I used the same assignment and provided students more or less the same amount of time to complete their essays.

"Interactivity" and "Audience Awareness" were of course more difficult to define. I go into more detail about how I arrived at and used these terms in the methodology section below, but basically I defined both "interactivity" and "audience awareness" as the extent to which the writer is mindful of and responsive to a real or imagined reader. However, because "online" emails represent a different medium, style, and purpose than "off-line" essays, I thought I needed to make a clear distinction between "interactivity" and "audience awareness." So more specifically, by "interactivity," I generally meant the extent to which students were mindful of and responsive to others online, and by "audience awareness," I meant the extent to which students were mindful of and responsive to the readers (the teacher and their classmates) off-line.

I began my study with two key assumptions. First, the implied claim that I think is being made in most of the literature about computer network communities seemed to me to have a good deal of face validity: that is, that there is indeed some transference from the online environment to the off-line one, that those who "write well" online will "write well" off-line. Second, I assumed that if we accept the position that connection to community is key to writing, then it's reasonable to assume that the off-line essays written by students participating in an online discussion would reflect these students' participation in the online community. As Barker and Kemp suggest, the computer network is the ideal place to demonstrate our maturing epistemology that accepts knowledge as a social construct and discourse as a process of negotiation within a community (1990). Hawisher in 1992 argued "that until the profession accepted and endorsed a view of meaning as negotiated, texts as socially constructed, and writing as knowledge creating, we were unable to value the kinds of talk in writing classes that electronic conferences encourage" (83).

Methodology and Results

Since this essay is really only a summary of my exploratory study, I will limit my explanation of my methods and results to what I think is crucial for the reader to appreciate the discussion that closes this essay. The original study is more fully outlined in the document "Comparing Interactivity of On-Line Texts to Audience Awareness of Off-line Texts: An Exploratory Study," which is available via World Wide Web at http://www.bgsu.edu/~skrause/interactivity.html

All of the materials discussed were the work of twenty volunteer students, ten from my section of English 112 (the second-semester component of the first-year writing curriculum at BGSU) and ten from Hart-Davidson's class. The online texts examined were the email messages these twenty students posted to the "listserv" email discussion set up between our two classes. Each student was given an email account that was identifiable to other students only by a code (such as "ad-112"), meaning that unless they chose to reveal their identities, the student participants were anonymous. There were 46 students altogether participating in the listserv, and there were well over 1,000 messages posted during the semester.

The off-line text examined was the first "synthesis essay" assigned in both classes. The same assignment was used in both classes and both groups of students had about the same time to complete the work. Hart-Davidson and I both made every effort to de-emphasize the role of the teacher as the sole audience member by having our students work together in small groups on revisions and by encouraging them to think of their classmates as members of their target audience.

I considered three measures of online "interactivity" for the online texts: the number of posts, the number of words, and the Interactivity Score assigned to each student by the raters. The number of posts and number of words were simple counts. The Interactivity Score represented the total of the average rating for all of a student's email posts. Three raters (fellow graduate students and teachers of the same type of first-year writing course) were trained to score each email as a one, two, or three for interactivity. A "one" rating meant that the message seemed to be non-responsive and non-interactive. Typically, these were messages that seemed to be addressed to no one (e.g., "Hey! This email thing really works!") or messages that reported about the reading for the sake of fulfilling a class participation assignment (e.g., "I thought the article by Smith was very interesting.") A "two" rating meant that the writer was interacting in a line of discussion, but was not responding to a particular reader or a particular message. For example, a message that said "I agree, that Smith article really is very interesting. I thought it said a lot about our country today" was typically rated a two. A "three" rating meant that the writer was responding directly to another writer-- for example, "I agree with you cd-en112, that was a really good point about the Smith article." Raters scored each post on this scale, and an average score for all of the student's posts was calculated for each rater. These scores from each rater were added together and represent the Interactivity Score, which is on a scale of 3 to 9.
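To make the arithmetic of this measure concrete, here is a minimal sketch in Python (my own reconstruction for illustration; the names and example ratings are hypothetical, and this is not the scoring procedure or software actually used in the study) of how an Interactivity Score could be computed from three raters' 1-to-3 ratings of one student's posts:

    # Sketch of the Interactivity Score arithmetic described above: each rater
    # rates every post 1-3, a per-rater average is taken, and the three
    # averages are summed, yielding a score on a 3-to-9 scale.
    def interactivity_score(ratings_by_rater):
        """ratings_by_rater: dict mapping a rater's name to the list of
        1-3 ratings that rater gave to each of one student's posts."""
        per_rater_averages = [sum(r) / len(r) for r in ratings_by_rater.values()]
        return sum(per_rater_averages)

    # Hypothetical example: three raters scoring the same four posts.
    example = {
        "rater_1": [2, 2, 2, 2],   # average 2.00
        "rater_2": [3, 2, 2, 2],   # average 2.25
        "rater_3": [2, 2, 3, 2],   # average 2.25
    }
    print(interactivity_score(example))   # prints 6.5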

While this system was simple, the three raters agreed that rating the email posts was "easy" and that the three categories seemed adequate to describe all of the messages. Using a Cronbach Coefficient, the interrater reliability was calculated to be .952, or about 95%.
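For readers unfamiliar with this reliability measure, the following is a minimal Python sketch of the standard Cronbach's alpha formula applied to a students-by-raters matrix of scores; I am assuming the conventional formula here rather than reproducing the statistical package used in the study, and the example numbers are invented:

    import statistics

    # Cronbach's alpha from a matrix of scores: one row per student,
    # one column per rater. Sample variances are used throughout.
    def cronbach_alpha(scores):
        k = len(scores[0])                        # number of raters
        raters = list(zip(*scores))               # scores grouped by rater
        rater_vars = [statistics.variance(r) for r in raters]
        totals = [sum(row) for row in scores]     # summed score per student
        return (k / (k - 1)) * (1 - sum(rater_vars) / statistics.variance(totals))

    # Hypothetical example: three raters' average ratings for five students.
    print(round(cronbach_alpha([[2.1, 2.0, 2.2],
                                [1.4, 1.5, 1.3],
                                [2.8, 2.9, 2.7],
                                [1.9, 2.0, 2.1],
                                [2.5, 2.4, 2.6]]), 3))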

Rating the off-line essays for "Audience Awareness" was much more difficult. In the rater training discussion, we talked for quite some time about what this phrase "audience awareness" meant, and we all agreed that this was a highly subjective concept. To clarify, I offered three criteria to consider for rating audience awareness. The first was what Hays, Durham, Brandt, and Raitz (1990) called "naming and context moves," meaning the extent to which the writer uses simple cues to establish a contextual relationship for the reader. For example, the phrases "In our society today" or "I think that we as Americans need to consider..." have a higher degree of audience awareness than "In society today" or "Americans need to consider..." (254). The second was the extent to which the writer recognized that the reader might be unfamiliar with the sources of evidence being used to support an argument. The clearest indication of this was whether or not the writer introduced the sources: phrases such as "In an article written by John Smith entitled 'Problems with America Today,' he said..." suggested a higher degree of audience awareness than phrases such as "Smith said..." or no introduction or citation at all. The third criterion was to consider the extent to which the writer seemed aware of his or her own credibility with the reader: did the writer account for the potential reader's background, did the writer make careful and credible word choices, did the writer include all of the information a reader would need to understand the argument, etc.

With these criteria in mind, each rater scored each paper on a scale of one to four, with one indicating low audience awareness and four indicating high audience awareness. These scores were added together to form the Audience Awareness Score, which is on a scale of 3 to 12. Despite the crudeness of the scale and the reservations expressed by the raters, the scores proved to be highly consistent. Using a Cronbach Coefficient, the interrater reliability was calculated to be .827, or about 83%.

Table One presents all of this data for each student. Worth noting here are the extremes in the number of posts and number of words for some students.


Table 1: Number of Posts, Number of Words, Interactivity Score, and Audience Awareness Score Results

Student ID   Number of   Number of   Interactivity   Audience Awareness
Number       Posts       Words       Score           Score
     1            1           9          3.0                 9
     2           22         741          6.6                 8
     3           18         896          6.1                11
     4           61        3089          8.0                 6
     5           13        1383          5.1                 7
     6            9         452          3.9                 9
     7           27        1617          6.4                 6
     8           54        5570          7.5                 5
     9           14         869          7.0                 8
    10           11         687          5.4                 9
    11           12         575          5.4                 7
    12           18        1382          5.1                 4
    13           12         627          5.9                 9
    14           19        1179          4.2                 6
    15           23        1751          6.7                11
    16            1          39          4.0                 4
    17            1          34          3.0                 6
    18           15        1233          7.1                 6
    19           18        2332          6.2                 7
    20           23        1756          7.1                 3

Correlations between each pair of these four variables were calculated using a Pearson Correlation Coefficient. These results are reported in Table Two. The variable pairs with a statistically significant correlation--Number of Posts and Number of Words, Number of Posts and Interactivity Score, and Number of Words and Interactivity Score--are marked with an asterisk.


Table 2: Correlations Between Number of Posts, Number of Words, Interactivity Score, and Audience Awareness Score (Pearson r on the first line of each cell, two-tailed p-value on the second)

           Number of  Number of  Interactivity  Audience Awareness
           Posts      Words      Score          Score

Number of     XXX     0.87731*   0.74940*      -0.20723
Posts         XXX     0.0001     0.0001         0.3807

Number of     XXX     XXX        0.65767*      -0.28376
Words         XXX     XXX        0.0016         0.2254

Interactivity XXX     XXX        XXX           -0.06313
Score         XXX     XXX        XXX            0.7915

Not surprisingly, there was a strong correlation between the number of posts and the number of words in email posts. The strong correlation between the number of posts and the interactivity score also suggests that those students who posted most frequently were also those most truly engaged with their colleagues in discussions. The correlation between the number of words and interactivity is also evident. But significantly absent is a correlation between audience awareness and any other variable. Simply put, these results suggest the answer to my original research question-- Is there a correlation between those who demonstrate a high degree of interactivity online and those who demonstrate a high degree of audience awareness off-line?-- is "no."
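For readers who want to check Table Two against Table One, the correlations can be recomputed directly from the raw figures above. The following Python sketch is my own illustration (it is not necessarily how the original analysis was run) and uses the SciPy library's pearsonr function, which returns both the Pearson r and a two-tailed p-value; its output should approximate the coefficients and p-values reported in Table Two:

    from itertools import combinations
    from scipy.stats import pearsonr

    # Columns transcribed from Table One, one value per student, in order.
    posts = [1, 22, 18, 61, 13, 9, 27, 54, 14, 11,
             12, 18, 12, 19, 23, 1, 1, 15, 18, 23]
    words = [9, 741, 896, 3089, 1383, 452, 1617, 5570, 869, 687,
             575, 1382, 627, 1179, 1751, 39, 34, 1233, 2332, 1756]
    interactivity = [3.0, 6.6, 6.1, 8.0, 5.1, 3.9, 6.4, 7.5, 7.0, 5.4,
                     5.4, 5.1, 5.9, 4.2, 6.7, 4.0, 3.0, 7.1, 6.2, 7.1]
    audience = [9, 8, 11, 6, 7, 9, 6, 5, 8, 9,
                7, 4, 9, 6, 11, 4, 6, 6, 7, 3]

    variables = {"Number of Posts": posts,
                 "Number of Words": words,
                 "Interactivity Score": interactivity,
                 "Audience Awareness Score": audience}

    # Pairwise Pearson correlations with two-tailed p-values.
    for (name_a, a), (name_b, b) in combinations(variables.items(), 2):
        r, p = pearsonr(a, b)
        print(f"{name_a} vs. {name_b}: r = {r:.5f}, p = {p:.4f}")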

Discussion

Let me again point out that these results need to be looked at cautiously: this was, after all, an exploratory study with a small sample. I think this study would benefit from replication with a more formalized protocol and a larger sample; while the raters' scores were consistent and statistically significant, developing clearer and more precise instructions for raters would be beneficial. I also think that better student training on how and what to post to the listserv, as well as non-anonymous user codes, would have to be considered in a replication.

As the results in Table One indicated, several students' participation was essentially non-existent and several others were "too involved," taking up too much conversational space much in the same way that is common in a conventional classroom. Some students also took advantage of the shield of anonymity and posted insulting and obscene messages to other students or to the bulletin board as a whole. Arguably, the excess and freedom exercised by some students in their messages was seen by the raters as high interactivity, thus influencing the scores. While this explanation is possible, my impression was that raters on the whole responded more positively to messages pertaining to class activities than to those that did not. By arguing that the instructions given to students about the use of email and the listserv (commonly referred to as "netiquette") should be reconsidered, I'm not suggesting that messages somehow be censored and regulated. After all, one of the points of this listserv was to offer a place where students felt they could express themselves in a fashion that they could not in class. However, I think in the interest of "class management," it might be beneficial to control the listserv environment a bit more tightly than to have no control at all.

But even with these limitations, I think there is much to learn from these results. First, the strong correlation of both the number of posts and the number of words with the interactivity score suggests that those who participated most in the community were also those most engaged with it, which is the explicit claim I think most of the literature on computer networks is making. However, I think the lack of correlation between the interactivity scores and audience awareness scores-- the lack of correlation between the online environment and the off-line environment-- denies the implied claim of previous studies: that is, that the sense of connection to a community in one environment was somehow transferable to another. The evidence here suggests this was simply not the case.

I think there are four general explanations for the lack of correlation between Interactivity and Audience Awareness. First, perhaps my fairly restricted and quantifiable methodology was inadequate to demonstrate the correlation I assumed would be evident. I don't want to use this space as a forum for evaluating the strengths and weaknesses of "quantitative" research methodologies versus "qualitative" ones, but I think it is quite possible a more holistic and ethnographic approach could yield different results.

Second, perhaps the writing tasks were indeed too different to make a reasonable comparison. Obviously there was a difference in length and complexity between the online and off-line environments. The online emails represented writing "off the top of the head," whereas the off-line synthesis essays demanded a number of higher-level skills, such as reading and understanding the textbook material, devising a thesis, and organizing an argument that incorporates evidence.

Third, the lack of correlation here might be the result of students "emphasizing" one environment over the other. For example, students who scored well in Audience Awareness but only average or low in Interactivity may in fact have been trying harder to address the "audience" of the teacher, believing that discussion with their peers on the listserv was irrelevant. Conversely, students who scored well in Interactivity but only average or low in Audience Awareness may have been responding to the attention--positive and negative--being paid by their fellow classmates, thus emphasizing the online environment over the off-line one. In effect, I'm suggesting that there might not be a correlation between Interactivity and Audience Awareness because individual students may have decided that one environment "mattered" more than the other.

I think these results might best be explained by considering the power of the contexts of the online and off-line environments. While all of the students participating in this study were part of the same community, the writing done in the online environment was much more clearly defined in terms of audience and purpose. Students posting to the listserv knew that others would read their messages because they had read the messages of their colleagues and received answers. Students who routinely posted messages expected answers, and they often requested them from colleagues; thus, the writing produced online seemed real and genuine to the raters and to the students themselves. Yet when the context shifted to "the class" and "the paper" (and by extension, "the teacher" and "the grade"), the sense of audience and purpose deteriorated dramatically. Students who were more than able to get their point across in the online environment wrote stilted and stiff prose in the off-line environment, and each rater commented on the difficulty of reading and evaluating the off-line texts when compared with the online texts.

Whatever the reason for the lack of correlation between online and off-line writing tasks-- a correlation implied by most of the literature advocating the use of online discussions in the classroom-- I think these results suggest that CMC advocates need to be mindful of the reasons why we are advocating computer-mediated communication in the classroom in the first place. While online technologies are another valuable means of fostering discussion and promoting debate, it is doubtful at best that these benefits will necessarily transfer to other off-line areas. Thus, when writing program administrators ask the question "how will this improve students' writing?" I think our answers need to be more about the benefits of online writing environments and the definition of "writing" itself rather than about the potential benefits for off-line writing situations.

I think these results also suggest that the listserv is itself a context and a different realm than the classroom or the off-line "traditional essay," as opposed to merely a conversational tool that complements "real world" communities. To that extent, perhaps this study confirms the notion that it is not as relevant to consider the relationship between online and off-line writing, how these two mediums are similar, or how the skills of one transfer to the other; but rather, researchers should consider how these two mediums are different and how an understanding of multiple contexts can be effectively conveyed. In other words, I think the results suggest that the question traditionally asked about the role of computers and computer networks in writing classrooms--"how will the incorporation of this technology improve student writing"--needs to be revised, reformed, and re-contextualized. ¤


Steve Krause is a graduate student in the English Department at Bowling Green State University.

Copyright © 1995 by Steve Krause. All Rights Reserved.

