Tuesday, October 4, 2011

Implementing a successful writing program in public schools for students who are deaf.

The difficulties that young students who are deaf have with writing English are well documented in a history that goes back several decades (Heider & Heider, 1940; Kluwin, 1979; Stuckless & Birch, 1966; Taylor, 1969; Thompson, 1936; Walter, 1955). More recently, in a study of the changes made by students who are deaf to narratives shared on a computer system, Livingston (1989) showed that the students engaged in surface word changes or rephrasings of entries to respond to teachers' inquiries for clarification rather than any major restructuring of the text. The writers who were deaf tended to make surface changes by adding or substituting words rather than through deletions, as was characteristic of some hearing writers. In addition, other research on the composing process of young children who are deaf suggests that some apparent errors in the writing of these children may result from the process of thinking in one language and writing in another when there is no clear concept of how to compose in either situation (Mather, 1989). Although researchers have repeatedly observed that young writers who are deaf have a poor command of written English syntax, more recent work suggests that their problems with writing may be related to an ignorance of how to compose effectively.

Parallel to these investigations of the poor writing skills of children who are deaf is an extensive research history on the process of writing, as both a theoretical construct and a curricular innovation (Applebee, 1982; Hillocks, 1987; Humes, 1983). The thrust of this work is that writing, as a process, is not a linear sequence of steps but rather a recursive process that has identifiable subprocesses. This approach to writing both as theory (Humes) and as pedagogy (Applebee) has a considerable and successful tradition. The goal of teaching writing as a process is to get students to work through the same general steps in composing that skilled writers go through, rather than teaching writing through correcting finished compositions. In other words, process approaches to writing instruction are effective in that they promote the thinking process of the individual student.

In a review of 20 years of writing research (about 2,000 studies), Hillocks (1987) came to some specific conclusions about the effects of such an intervention. Hillocks reported that the use of editing skills, such as grammar and mechanics corrections, as the primary focus of writing instruction had a negative effect on outcomes. Writing programs that focused on a study of writing as "products" were more effective--but not as effective as forms of writing instruction that focused on the production of discourse or on activities that fostered the production of discourse, such as planning or organizing.
In a large-scale study, Baxter and Kwalick (1984) reported that the writing of 1,029 high-school students improved after only 15 weeks of instruction using a process approach to composing. They reported "contradictory" results in that holistic scores for papers had increased, but at the same time there was an increase in the number of grammatical errors made by the students. Working with 48 high-school students in a one-semester project, Moriarity (1978) reported that instruction in any component of the writing process led to an improvement in compositions that were rated impressionistically. Though Moriarity's study was flawed by a possible "Hawthorne" effect, it does fit into the regular pattern of findings for this kind of evaluation. Working with college-age students, Clifford (1981) reported that a modified process approach was successful in an experimental/control group study. Covarying for initial between-group differences, Clifford reported that there were significantly greater gains in the holistic scores of the students, but no differences were found in their knowledge of the mechanics of writing or their use of the mechanical conventions of writing.

Humes (1983) offered a possible explanation for the differential effects of process approaches to writing instruction. In her review of the research in this area, she commented that the biggest impact of this type of composing was on planning, with considerably less emphasis on "translating," or putting words to paper. In addition, for those who were successful in this type of composition instruction, most revisions involved conceptual restructuring and responding to audience interests. Writers placed less explicit emphasis on revision of formal aspects of the composition. Knudson (1988) supported this contention: She reported that reduced amounts of direct teacher involvement led to better compositions when the approach was to teach writing as a process.

This history of process approaches to teaching writing suggests three general findings:

* Studies regularly cite positive effects for this approach when holistic or impressionistic scoring is employed, even for relatively short periods of instruction.

* These approaches report very mixed results in the improvement of specific grammatical or mechanical skills; that is, improvements in grammatical skills are occasionally reported but are difficult to link to the instructional procedure used.

* Some information exists concerning the use of process approaches with writers with learning disabilities but not with writers who are deaf. That is not to say that the approach has not been used with these writers, but rather, there are no formal evaluations of these attempts using student writing as an outcome measure.

Consequently, to improve the English composing skills of young writers who are deaf, we conducted a 2-year intervention program in 10 public school programs for students who are deaf by training teachers to use the process approach to teaching writing.
We assumed that the method would be generally effective in improving the overall quality of students' compositions as measured by impressionistic scoring of overall writing quality and hoped that improvements in grammatical complexity would be seen as well.

METHOD

Teacher Training

Fifty-two teachers from 10 school districts around the United States with an average of 10 years of teaching experience participated in the project. Participants came from every region of the United States. Forty percent of the teachers had master's degrees, 31% had done postgraduate work, and the remaining teachers had bachelor's degrees. Eighty-two percent had permanent certificates as teachers of the deaf; 3 were certified to teach English, and 7 were certified to teach secondary-level classes. One participant teacher identified herself as hard of hearing and 2 identified themselves as deaf.

During the 2 years of the project, two separate workshop sequences were conducted for the teachers of the deaf involved in this project, both on and off the Gallaudet University campus. The first-year workshops focused on developing a rationale for writing instruction, teaching writing as a process rather than as a product, and the promotion of writing through dialogue journal writing. The point of the second-year workshops was threefold: to review the first year's training goals with an emphasis on a clearer definition of the goals of a writing program, to learn to use specific rationales in the selection of writing topics, and to learn how to provide clear and useful feedback to the students about their compositions. During the second-year workshops, the participants were taught how to express nonjudgmental acceptance of the content of students' writing while discussing revisions to the form and how to create classroom dialogue about form as a means to convey content in a specific fashion. Two forms of feedback were stressed during the second year. First, the teachers were given additional training in the preparation of scoring guides. Second, the face-to-face writing conference was introduced as a technique to provide feedback.

[TABULAR DATA OMITTED]

Each training session consisted of two 8-hr days. Training procedures included a mix of lectures, discussion, and feedback from the teachers. Posttraining feedback questionnaires indicated a positive response to the content of the training and the presentations.

Sample

This was a quasi-experimental study of the implementation of a teaching method under local schooling conditions; consequently, the movement of students into and out of the project was not controlled for. As a result, four types of student groups emerged naturally as the project went along:
1. Students who started the project but left after 1 year.

2. Students who entered the project at the start of the second year.

3. Students who were in both years of the project but who changed teachers from Year 1 to Year 2.

4. Students who kept the same teacher for both years of the project.

Because of this difference in the degree of participation, the variable, exposure to instruction, was defined as having four values for these definitions.

Table 1 compares the four groups of students by age, gender, ethnicity, severity of hearing loss, and reading level. Some systematic differences could be seen among the four groups. Group 2 was considerably younger than any of the other groups, but the difference in age between Group 3 and Group 4 was not significant. Group 1 was significantly older than the other groups. The gender differences did not appear to be substantial, but the number of minority group students was considerably lower in Group 4. Group 3 had more students with a more severe hearing loss than the other groups. Group 1 had a reading level that was significantly lower than that of the other groups. This group was also the most difficult to collect posttest data from because we did not know they were out of the project until the fall of the following year. As a result, they were not used in later comparative analyses. On the whole, there were a number of random, but significant, differences among the four groups. These differences were compensated for in later analyses by adjusting pretest and posttest scores statistically for the differences in age, ethnicity, hearing loss, and reading ability.

Instrumentation

Demographic Information. With the permission of the students' parents and the cooperation of the schools and the Center for Assessment and Demographic Studies, we obtained background information on the students. This included the date of birth, sex, ethnic group, degree of hearing loss, etiology, and onset of deafness for each child. In addition, we obtained students' current reading achievement scores from school records. The measure of reading skill was the Stanford Achievement Test, Hearing Impaired version. We asked the schools for the scores on the reading comprehension subtest. Because not all schools provided us with scaled scores, we used grade equivalent scores (see Table 1).

Writing Assessment. During the fall of 1987, each student in the project was scheduled to be given the descriptive, persuasive, and business letter test stimuli appropriate to his or her age level that had been developed by the Educational Testing Service
(ETS) for use by the National Assessment of Educational Progress (NAEP) (Mullis, 1980).

The tests were administered locally by the teachers in the project and returned by mail to the research team during the fall of 1987. Students were allowed as much time as needed but generally completed each test within half an hour. With the exception of one group of essays (submitted by a teacher who encouraged students to rewrite their essays), all essays were the product of a single draft. The test administrators were told to encourage the children to write and to explain to the students what was expected but not to tell them what or how to write. The stimulus was provided to the students in print and was read to them using total communication. The process was repeated again in the spring of 1989.

Teacher Logs. To assist teachers in monitoring their writing instruction, we developed a self-report system that requested information from the teachers about the quantity of the writing the students were doing. This information included a description of the books used, pages covered, and amount of classwork and homework. On a monthly chart, the teachers were to estimate what portion of their class time was devoted to a specific activity in 15-min increments.

The purpose of the coding system was to gather estimates of the amount of writing instruction taking place and the degree to which the teachers followed the procedures for teaching writing as a process. Six categories of writing instruction were to be coded by the teachers: dialogue journal writing, prewriting or organizing activities, writing in class, revision activities, publishing or any class time devoted to the production of material in a finished form, and other writing activities.

ANALYSIS

Essay Scoring

Several scoring systems were used. For the persuasive and descriptive essays, counts were made of the number of words, sentences, and clauses, both grammatical and ungrammatical, and the number of "t-units." Words and sentences were defined orthographically; t-units were defined as a main verb clause and any subordinate clauses. Grammatical clauses were defined as complete verbs with their subjects and subordinating or coordinating conjunctions, if appropriate. If a group of words functioned as a clause in a sentence but lacked a major element such as a complete verb or an appropriate subordinate conjunction, it was counted as an ungrammatical clause.
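Although the counts themselves were tallied by hand, the ratios later computed from them (see Outcome Measures, below) are simple to state. The following is a minimal sketch in Python; the function and argument names are our own illustration, not the study's.

    # A minimal sketch of the three syntactic-complexity ratios derived
    # from the hand-tallied counts. The names here are illustrative,
    # not from the original study.

    def complexity_measures(words, clauses, t_units):
        """Return words per clause, words per t-unit, and clauses per t-unit."""
        if clauses == 0 or t_units == 0:
            raise ValueError("an essay needs at least one clause and one t-unit")
        return {
            "words_per_clause": words / clauses,
            "words_per_t_unit": words / t_units,
            "clauses_per_t_unit": clauses / t_units,
        }

    # Example: a 120-word essay with 18 grammatical clauses and 12 t-units
    # yields roughly 6.7 words per clause, 10.0 words per t-unit, and
    # 1.5 clauses per t-unit.
    print(complexity_measures(120, 18, 12))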
The persuasive and descriptive essays were also coded using holistic scoring systems. The format of the holistic scoring system for the descriptive and the persuasive essays consisted of a 6-point system, ranging from outstanding papers to barely comprehensible papers. A seventh category was used when the paper was comprehensible but off the topic, which happened more with the persuasive essays than with the descriptive essays. The operational definitions of these scales are shown in Tables 2 and 3.

[TABULAR DATA OMITTED]

The business letters represented a distinct type of writing from the persuasive and descriptive papers; thus, two different types of scoring systems were used. Because variations in missing information in the business letter made holistic scoring difficult, we developed an individual-feature-analysis system, which counted the presence or absence of specific pieces of information such as the greeting, the internal address, or the closing. Two primary categories were used: form and content. The form categories included the internal address, date, greeting, writer's name, return address, and closing. The contents were coded for a reference to the calendar, a request for the item, statement of a specific choice, and the addition of extraneous information. In addition, there were specific content requirements for effective communication about the topic: the writer had to mention a particular time, to request that the item be sent, and to provide information as to where to send it. The coding system involved only checking for the presence of key words or phrases. An explanation of this system is provided in Table 4.

[TABULAR DATA OMITTED]

Because the business letters were so brief, it was impractical to do grammatical counts on them. Instead, they were evaluated using a 6-point primary trait scoring system for grammatical correctness, which rated the business letters on a scale ranging from being virtually free of grammatical or mechanical errors to having substantial deletions of major syntactic elements and a failure to observe orthographic conventions (see Table 5). A seventh category was used, which indicated that the letter was too brief to be evaluated.
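The article does not reproduce the actual key words and phrases the coders checked for, so the patterns in the following Python sketch are hypothetical stand-ins; it only illustrates the presence/absence logic of the individual-feature-analysis system.

    # Sketch of the presence/absence coding for the business letters.
    # The key words and phrases below are hypothetical stand-ins for
    # the study's actual (unpublished) lists.

    import re

    FEATURES = {
        # form categories
        "greeting":     r"\bdear\b",
        "closing":      r"\b(sincerely|yours truly)\b",
        # content categories
        "calendar_ref": r"\bcalendar\b",
        "request_item": r"\b(please send|send me)\b",
    }

    def code_letter(text):
        """Return 1 or 0 for the presence of each coded feature."""
        lowered = text.lower()
        return {name: int(bool(re.search(pattern, lowered)))
                for name, pattern in FEATURES.items()}

    letter = "Dear Sir, Please send me the calendar. Sincerely, Pat"
    print(code_letter(letter))
    # {'greeting': 1, 'closing': 1, 'calendar_ref': 1, 'request_item': 1}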
The same general scoring procedure was followed in the case of all three impressionistic scoring systems: the holistic scoring system for the descriptive essay, the holistic scoring system for the persuasive essay, and the primary trait rating system for the grammaticality of the business letter. Before papers were scored, we explained the scoring system to two readers. We discussed the criteria and the anchor papers. Then the readers practice-scored 20 papers in groups of 5 to develop reliability. This process was repeated until the desired level of reliability was achieved.

[TABULAR DATA OMITTED]

During the scoring session, each reader assigned a score from 1 to 6 to a paper. If the scores of the two readers were within 1 point of each other, the scores were accepted as being in agreement. The score for the paper was the sum of the two readers' scores. In the event of a disagreement, the referee "pulled" the paper to discuss the discrepancy and to attempt to reestablish reliability between the two readers. However, after training, the readers usually did not differ by more than 1 point on any paper. Consequently, discrepant scores occurred less than 3% of the time. When the readers were consistent with each other and the criteria, they then scored blocks of 20 papers each to check consistency with the referee.

[TABULAR DATA OMITTED]

This system was developed by ETS (Mullis, 1980) as a way of ensuring greater reader agreement in testing situations where individual subject or program evaluations were concerned. In theory, reader agreement is 100% because no disagreements beyond the 1-point discrepancy limit are allowed; however, in practice, uncorrected reliability across all three separate scoring systems was 97%.
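The pooling rule itself can be stated compactly. The scoring in the study was done by hand; the following Python sketch is only an illustration of the rule, with the referee step reduced to a flag.

    # Sketch of the two-reader pooling rule: ratings within 1 point of
    # each other count as agreement and are summed into the paper's
    # score; larger gaps are flagged for the referee to "pull" the paper.

    def pool_scores(reader1, reader2):
        """Pool two 1-6 holistic ratings into a single paper score."""
        if abs(reader1 - reader2) <= 1:
            return reader1 + reader2, False   # accepted; scores range 2-12
        return None, True                     # discrepant; referee resolves

    print(pool_scores(4, 5))   # (9, False): accepted with a score of 9
    print(pool_scores(2, 5))   # (None, True): pulled for discussion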
Outcome Measures

Grammatical Complexity. From the counts of the grammatical categories for the descriptive and persuasive essays, three measures of syntactic complexity were computed: words per clause, words per t-unit, and clauses per t-unit. Consequently, there were nine measures of grammatical complexity for the pretest papers, including a measure of syntactic errors for both the descriptive and persuasive pretest essays and the primary trait grammatical rating for the pretest business letter. A factor analysis, computed to reduce the number of variables, generated a grammatical complexity factor, which was a measure of syntactic complexity and general grammatical accuracy in that it consisted of the t-units per clause variables, the overall quality rating for the grammaticality of the business letter, and the syntactic complexity measures for the descriptive essay. This factor score was used as a global description of grammatical complexity since it included a range of writing settings and measures. To create posttest scores, the factor loadings for the pretest grammatical complexity measure were used as weights in computing the posttest factor scores for grammatical complexity.
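The article does not say which program or rotation was used for the factor analysis. As a rough sketch of the general approach (Python with scikit-learn, random placeholder data standing in for the nine measures), one can fit the factor model on the pretest measures and reuse its loadings as fixed weights for the posttest measures:

    # Sketch: extract a single grammatical-complexity factor from the
    # pretest measures, then score the posttest measures with the same
    # (pretest) loadings, as described above. The data here are random
    # placeholders for the study's nine measures.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X_pre = rng.normal(size=(100, 9))    # students x pretest measures
    X_post = rng.normal(size=(100, 9))   # students x posttest measures

    scaler = StandardScaler().fit(X_pre)
    fa = FactorAnalysis(n_components=1).fit(scaler.transform(X_pre))

    pretest_factor = fa.transform(scaler.transform(X_pre)).ravel()
    # Apply the pretest loadings as fixed weights to the posttest data:
    loadings = fa.components_.ravel()
    posttest_factor = scaler.transform(X_post) @ loadings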
Overall Writing Quality. To develop a composite quality measure, as opposed to a measure of grammatical complexity, the holistic scores for the descriptive pretest and the persuasive pretest were factor analyzed, along with the three pretest factor scores for the business letter described below.

In the process of generating a single score for the general quality of the business letter, a factor analysis of the trait counts for the business letter form and content categories produced three separate factor scores. First, the content mastery factor included all the content measures and the essential information of the return address. Scoring well on this trait would mean that the individual would receive the product described in the stimulus, whereas a low score would mean that one would not get the product. Second, the formal mastery factor included the formalism of the internal address and the date as well as the return address and the name. Third, the social mastery factor score included the elements that were descriptive of a social letter as well as of a business letter, whereas the categories in the other factors seemed more unique to business letters. The social mastery factor also contained a large loading for extraneous information.

The composite quality score consisted of the persuasive holistic score and the descriptive holistic score, as well as the business content mastery score and the business form mastery score. The composite quality score was then used in further analyses as the measure of overall writing improvement. In summary, the measure of writing quality used in this study addressed three questions: Did the students' ability to write a description improve? Did the students' ability to construct a persuasive argument improve? Did a student's chances of receiving a product as the result of writing a business letter improve?

Evaluation Questions

What Evidence Was There That the Teachers Actually Taught in the Way They Were Trained? The teachers in the project kept logs of their teaching activities during the 2 years of the project as an inexpensive check on the impact of the training.

Teachers reported the number of minutes they spent in writing instruction during the day. On the average, about 40% of all available class time for literacy-related subjects was devoted to teaching writing. During the first year of the project, an average of 22.21 min per day was devoted to the teaching of writing; the average for the second year was 16.45 min per day.

To assess the implementation of teaching writing as a process, the logs were analyzed for the completion of the various categories of teaching writing as a process; that is, did the teachers engage in the cycle of prewriting-writing-revision-publishing irrespective of other writing activities? Of the 52 teachers involved in the project across the 2 years, 60% regularly taught all four phases of the writing process by their own report. Thirty percent of the teachers regularly taught the first three phases of the process but did not regularly engage in "publishing" as an activity. Eight percent of the teachers engaged only in prewriting and writing in the classroom. One teacher did not follow the writing process approach or failed to properly note her activities on the self-reporting log system. This teacher entered the project in the second year; her students are not included in the subsequent inferential analyses.

Was Instruction Effective? The primary hypothesis of the study was that if instruction were effective, a posttest score adjusted for the effects of maturation and differences in the implementation of the training would be greater than a pretest score. Since there was variability in the amount of instruction provided, it was possible that the students who were only in the second year of the project would show the smallest amount of change in their writing skills and those students who had 2 years of instruction with the same project teacher would show the greatest amount of change. To assess the impact of additional training on writing change, this difference was kept in the analysis.

To test the primary hypothesis, the pretest factor scores for the overall quality of the composition and for the grammatical complexity of the essays were adjusted for the between-subjects differences described earlier in this article. Adjusted pretest scores were computed in a multiple-regression analysis using the students' beginning reading ability as measured by their grade equivalent score, their degree of hearing loss as measured by better ear average, the gender of the student, the age of the student when the data collection was done, and the ethnicity of the student as predictor variables. Maturation was controlled for by using the child's age at the two testing points in the prediction equations to create the "adjusted scores." In other words, the posttest score would be reduced because the child was older. The equation to adjust the pretest composite quality score accounted for nearly 60% of the variance. Because these were the variables that appeared to differentiate the groups who participated in the study, we are confident that we have controlled for sources of variance not related to instruction.

The F value for the multiple-regression equation to adjust the grammar pretest factor score was statistically significant, although only 18% of the variance was accounted for in this equation. What this suggests is that, although there may be between-groups differences on demographic variables, these same demographic variables are not substantially related to the grammatical complexity of the students' writing.

With one addition, the same multiple-regression procedure was used to adjust for between-subjects differences on posttest factor scores. First, because some of the students had been in the study for only 1 year, whereas others had been in the study for 2 years, it was necessary to adjust the posttest results for the amount of time between testing sessions. For the students who were in the study for 2 years, this was 18 months. For the other students, it was 9 months.

This equation predicted about 60% of the variance in the posttest composite quality score, suggesting that the holistic scores were more sensitive to between-student differences than were the grammar measures, which predicted only 14% of the posttest grammatical complexity score variance.

In summary, both pretest and posttest scores were adjusted for the between-groups differences noted earlier, specifically, age, sex, ethnicity, degree of hearing loss, and reading ability. In addition, the posttest score was adjusted for the amount of instruction, as measured by the months of instruction that the students received. Because both the adjusted pretest scores and posttest scores included corrections for the age of the students, maturational effects were eliminated from the subsequent analysis.
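A sketch of this kind of covariate adjustment in Python with statsmodels (the file and column names are hypothetical): each score is regressed on the background variables, and the residual, recentered on the grand mean, serves as the adjusted score.

    # Sketch of covariate adjustment by multiple regression. Column and
    # file names are hypothetical; the predictors follow the variables
    # named in the text (reading level, hearing loss, gender, age, and
    # ethnicity).

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("students.csv")  # hypothetical student-level file

    model = smf.ols(
        "quality_pretest ~ reading_grade_equiv + better_ear_avg"
        " + C(gender) + age_at_test + C(ethnicity)",
        data=df,
    ).fit()

    # Residual plus grand mean = score with covariate effects removed.
    df["quality_pretest_adj"] = model.resid + df["quality_pretest"].mean()

    # The article reports that the corresponding equation for the
    # composite quality score accounted for nearly 60% of the variance.
    print(model.rsquared)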
Figure 1 shows the adjusted pretest and posttest group means for both the composite quality score and the grammatical complexity score.

To test the hypothesis stated previously, a repeated-measures analysis of variance was computed using the within-subjects factors of the time of testing--adjusted pretest versus adjusted posttest--and the measure used to judge progress--composite quality versus grammatical complexity. The two indexes were the posttest score and the expected posttest score. The three-level factor was the degree of exposure to instruction. The first level of the factor consisted of students who were only in the second year of the project; the second level, students who were in both years of the project but had different teachers; the third level, students who had the same teacher for both years of the project. The between-subjects factor of instructional group membership--1 year of instruction, 2 years of instruction with two different teachers, and 2 years of instruction with the same teacher--was included as a secondary control for the effects of amount of instruction.
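The article does not name the analysis software used. A partial modern sketch of the same design with statsmodels' AnovaRM follows; note that AnovaRM implements only within-subjects factors, so the between-subjects Exposure factor from the full mixed design is omitted here, and the long-format file and column names are hypothetical.

    # Partial sketch of the repeated-measures design using statsmodels.
    # Data are assumed to be in long format, one row per student per
    # cell: time ("pre"/"post") crossed with test ("quality"/"grammar").
    # AnovaRM handles only within-subjects factors; the between-subjects
    # Exposure factor from the full mixed design is not modeled here.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    long_df = pd.read_csv("scores_long.csv")  # hypothetical file with
    # columns: student, time, test, score

    result = AnovaRM(
        data=long_df,
        depvar="score",
        subject="student",
        within=["time", "test"],
    ).fit()
    print(result)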
In Table 6, Exposure refers to the three groups of students who had varying degrees of exposure to instruction, that is, 9 months or 18 months. This difference should have been controlled for in the process of creating the adjusted scores described earlier. It is clear from the F value of exposure that the process of adjusting for between-groups differences on this factor was successful in controlling for these differences.

[TABULAR DATA OMITTED]

Time refers to the time of testing, that is, before or after training. There is a statistically significant F value for time of testing in Table 6. In other words, adjusted posttest scores, which included an adjustment for maturation effects, were higher than the adjusted pretest scores. Instruction was effective because the factor that measured the difference between the adjusted pretest scores and the adjusted posttest scores was statistically significant.

There was a statistically significant effect for the type of test, identified as Test in Table 6. As can be seen in Figure 1, adjusted posttest scores were higher for the grammatical complexity measure than for the composite quality measure; however, the magnitude of the differences between the pretest and posttest scores was greater than the differences among the three groups.

There is a two-way interaction of Exposure to instruction and Test. The bulk of this effect is traceable to pretest/posttest differences. The group that had only 1 year of instruction showed the same amount of change between their pretest composite quality scores and their posttest composite quality scores; however, they showed dramatic improvements in the complexity of their grammar. For the other two groups, those with 2 years of instruction, the improvement in their grammatical complexity was nearly as great as for the group with only 1 year of instruction, but their composite quality scores jumped nearly as much.

The three-way interaction of exposure, time of testing, and type of test is reflected in Figure 1. The source of the three-way effect comes primarily from the pretest/posttest differences and the substantial differences between the composite quality score change and the grammatical complexity score change.

The apparent changes in the adjusted grammar scores are partially explicable as artifactual results because similar findings have been noted during a reanalysis of the 13-year-olds' essays from the NAEP (Soltis & Walberg, 1989). Specifically, the discrepancy in results may lie in the nature of the scales that were used. The holistic scores that generated the composite quality score were a discrete, ordinal scale with a range of 12 values. The counts for the grammatical measures had a maximum range of 30 points on a continuous, interval scale. There is more variance between scores for the grammatical measures, thus creating the possibility for greater score differences. In addition, the multiple regression to adjust for various extraneous effects accounted for less of the variance for the grammatical complexity score than it did for the qualitative score. What is probably true of these data is that the differences and the directions of the differences are real, but the degree of discrepancy between measures may be artifactual. In other words, there was an increase in the grammatical complexity of all of the students, but the magnitude of that change in relation to the change in the composite quality score may not be as great as it appears.

DISCUSSION

It is apparent from our study that teaching writing as a process results in improvements in the writing of students who are deaf. When the teacher's focus was on providing specific steps that the students could take to improve their compositions, the students' writing improved beyond what would be expected from normal maturation, as measured both by overall impressionistic measures and by measures of increasing grammatical complexity.

Teaching writing as a process improved the overall quality of the writing of students who are deaf. Previous research on teaching writing through a process approach had strongly suggested that we would in fact achieve such results using impressionistic scoring techniques, which focus on the overall quality of the writing. It was not as apparent from reviewing the previous research that changes in grammatical complexity could be consistently expected, but such a finding has on occasion been reported. When writers become preoccupied with grammatical correctness, they have a tendency to use simpler constructions, employ more familiar or common words, and experiment less with language. The apparent change in grammatical complexity may be due to greater experimentation on the part of the students or to a greater sense of freedom of expression, which could be seen in a major increase in the length of students' sentence elements, especially the number of words per clause and per t-unit.
Because writing is a complex task with many components that might be taught, there is a tendency on the part of teachers either to avoid teaching writing or to attempt to substitute other, more manageable tasks for actual composing or instruction in composing. By giving the teachers an approach that produced rapid, apparent improvements in the writing of individual students, we encouraged composing as an activity. Consequently, not only did the students become better writers, but many of the teachers were able to have a positive and rewarding teaching experience. Because this activity was sustained over a 2-year period, it was possible to document a substantive change in writing quality.

REFERENCES

Applebee, A. N. (1982). Writing and learning in school settings. In P. M. Nystrand (Ed.), What writers know: The language, process, and structure of written discourse (pp. 365-382). New York: Academic Press.

Baxter, M., & Kwalick, B. (1984). Holism and the teaching of writing (Research Report). New York: Scribner Educational Publishers.

Clifford, J. (1981). Composing in stages: The effects of a collaborative pedagogy. Research in the Teaching of English, 15(1), 37-53.

Heider, F., & Heider, G. (1940). A comparison of sentence structure of deaf and hearing children. Psychological Monographs, 52(1), 42-103.

Hillocks, G. (1987). Synthesis of research on teaching writing. Educational Leadership, 44, 71-76.

Humes, A. (1983). Research on the composing process. Review of Educational Research, 53(2), 201-216.

Kluwin, T. (1979). The effects of selected errors on the written discourse of deaf adolescents. Directions, 1(2), 46-53.

Knudson, R. E. (1988). The effects of highly structured versus less structured lessons on student writing. Journal of Educational Research, 81(6), 365-368.

Livingston, S. (1989). Revision strategies of deaf student writers. American Annals of the Deaf, 134, 21-26.

Mather, S. (1989). Visually oriented teaching strategies with deaf preschool children. In C. Lucas (Ed.), The sociolinguistics of the deaf community (pp. 165-190). New York: Academic Press.

Moriarity, D. J. (1978). An investigation of the effects of instruction in five components of the writing process on the quality and syntactic complexity of students' writing (Research Report). Framingham, MA: Framingham State University.

Mullis, I. (1980). Using the primary trait system for evaluating writing. Princeton, NJ: Educational Testing Service.

Soltis, J., & Walberg, H. (1989). Thirteen-year-olds' writing achievements: A secondary analysis of the fourth national assessment of writing. Journal of Educational Research, 83(1), 22-29.
Stuckless, E., & Birch, J. (1966). The influence of early manual communication on the linguistic development of deaf children. American Annals of the Deaf, 111(4), 425-460, 499-504.

Taylor, L. (1969). A language analysis of the writing of deaf children (Final Report). Tallahassee: Department of English, Florida State University.

Thompson, W. (1936). Analysis of errors in written composition by deaf children. American Annals of the Deaf, 81(2), 95-99.

Walter, J. (1955). A study of the written sentence construction of a group of profoundly deaf children. American Annals of the Deaf, 100(3), 235-252.
