Regional Educational Laboratory Program (REL) (2024)

An Investigation of the Impact of the 6+1 Trait® Writing Model on Student Achievement

Regional need and study purpose

Writing is not a focus of the adequate yearly progress provisions of the No Child Left Behind Act, but it is a critical component of language and literacy in general and an essential skill for continued academic development and for success in many occupations. The U.S. Department of Labor's Secretary's Commission on Achieving Necessary Skills (1993) identified writing as a foundation skill required for success in a range of jobs.

Despite the importance of writing, the National Assessment of Educational Progress (NAEP) results for 2002 showed that just over a quarter of grade 4 students (27 percent) were proficient in writing, with similar proficiency levels in grade 8 (31 percent) and grade 12 (24 percent). In three states in the Northwest Region, grade 4 proficiency rates on the 2002 NAEP writing tests were below the national average of 27 percent: Idaho, Montana, and Oregon each had a proficiency rate of 22 percent. The proficiency rate in Washington was 30 percent. (Alaska did not participate in the state-level NAEP exam.)

The goal of this study is to provide high-quality evidence on the effectiveness of an analytical trait-based model for increasing student achievement in writing, a subject of national and regional importance. Analytical, or trait-based, writing instruction uses a set of features, or traits, to describe a text, such as a student's written work. For example, the ideas in an essay are considered separately from its organization or structure, and standard conventions or mechanics, such as punctuation and spelling, are yet another distinct trait. So a student's essay might present interesting and innovative ideas with perfect spelling and punctuation but in a disorganized, illogical flow. Or a student might write an essay with wonderful, well organized ideas that is marred by spelling and punctuation errors.

When using a scheme to separate the traits of a written work in this way, teachers can use an assessment rubric or scoring guide to give students detailed, systematic feedback on various traits of their writing, as well as targeted suggestions for improvement. Delineating a system of traits provides a common structure and vocabulary that students and teachers can use to think about and discuss their writing. This trait-based method of planning, assessing, and revising writing is an alternative to holistic assessment and feedback, in which a student's essay receives a single score.

The publisher of an early trait-based model, the Northwest Regional Educational Laboratory, has reported distributing training materials to school districts in all 50 states, as well as internationally, and it has provided professional development sessions in 48 states and several countries. The trait-based approach to writing instruction has also been incorporated into language arts curriculum materials and guides to writing instruction distributed by other major education publishers, as well as into training workshops, web sites, and other resources for schools. Given the widespread use of trait-based writing models, it is important that the education community have access to high-quality scientific evidence on the effectiveness of the approach. This study will contribute to that knowledge base so that decisions about whether to expand, contract, or modify the adoption of this approach can be based on reliable evidence.

The study was designed primarily to determine the impact of the intervention on student achievement in writing. Thus, the study will first answer one primary, confirmatory experimental question:

  • What is the impact of the 6+1 Trait® writing model on student achievement in writing?

To answer the primary question with confidence, the study was designed to have an adequate sample size, well-developed measurement instruments, and strong experimental controls for confounding factors, yielding an unbiased estimate of the treatment effect on student achievement.

In addition to the primary research question, two exploratory questions address whether impacts differ across subgroups of schools or students and whether the treatment changes teachers' use of trait-based writing methods. The randomized experimental design was maintained for these questions, but to lower the overall cost of the study, a smaller sample was used or outcomes were measured less thoroughly, resulting in a less sensitive design for the two exploratory questions:

  • How do impacts on students vary by preexisting characteristics of schools and students?
  • What is the impact of 6+1 Trait writing professional development on the instructional practices of teachers?

Intervention description

The 6+1 Trait writing model is an analytic approach to teaching and assessing student writing from the Northwest Regional Educational Laboratory. The model consists of strategies to help integrate assessment with instruction, targeting seven traits of effective writing: ideas, organization, voice, word choice, sentence fluency, conventions, and presentation.

The model uses 10 instructional strategies that provide teachers with activities to support classroom instruction on the traits and to engage students in learning about the traits and using them in planning, assessing, and revising their writing (box 1). These strategies include teaching students the language of rubrics for the seven writing traits, having students score writing samples and justify their scores, and using a writing process that emphasizes feedback, revision, and editing. The intervention is not an alternative writing curriculum designed to replace existing writing programs in schools. Instead, it is a complementary set of tools to aid in conceptualizing, assessing, and describing the qualities of writing. Used alongside existing writing curricula, it provides a framework for classroom writing instruction, feedback, and dialogue to improve the ability of teachers and students to plan, evaluate, discuss, and revise student writing (Culham 2003; Hillocks 1987; Graham and Perin 2007).

Box 1. Ten strategies for writing instruction used in the 6+1 Trait® writing model

  1. Teaching the language of rubrics for writing assessment
  2. Reading and scoring papers and justifying the scores
  3. Teaching focused revision strategies
  4. Modeling participation in the writing process
  5. Reading a lot of materials that demonstrate varying writing quality
  6. Writing to effective prompts
  7. Weaving writing lessons into other subjects
  8. Having students set goals and monitor their own progress
  9. Integrating learning goals for writing into curriculum planning
  10. Teaching ways to structure nonfiction writing

The 6+1 Trait model is intended to help teachers provide effective feedback to students and to develop students' self-assessment skills and their awareness of their thinking patterns, strengths, and weaknesses related to writing. The approach focuses on formative assessment: assessment that provides effective and timely feedback on student performance while students are still working on a task. Reviews of the research on formative assessment in classroom instruction consistently argue that assessment that provides students with performance feedback may be linked to achievement gains (Natriello 1987; Crooks 1988; Black and Wiliam 1998). Hattie (1992) claims that "the most powerful single moderator that enhances achievement is feedback" (p. 251).

Formative assessment is grounded in the feedback models of Crooks (1988) and Sadler (1989), in which students have access to three types of information about a particular performance: the intended instructional outcome, how current performance compares with that expectation, and a mechanism for moving from current performance toward the desired outcome. Marzano (2003) identifies several features that make feedback successful, such as whether it is timely and ongoing, specific to the content being learned, aligned with assessment, and corrective. Fontana and Fernandes (1994) report that self-assessment based on an understanding of learning goals and evaluation criteria improves student achievement.

Teachers in the study attend a three-day summer institute that provides comprehensive training, planning time, and resource materials on the 6+1 Trait model. During the subsequent school year participants attend three additional one-day workshops to further their understanding of the model and to plan trait-based activities. Teachers also have access to online support resources to help them integrate the model into their existing classroom writing instruction.

Study design

Grade 5 teachers and students from 74 elementary schools in Oregon are participating in the study. Oregon schools are similar to those in other Northwest Region states in their proportions of English language learner students, students from low-income households, and students from linguistic and racial/ethnic minority groups. Grade 5 was chosen as the target population because it is the typical grade level at which students begin expository writing, a form they will use throughout their subsequent academic careers.

Participating schools were selected according to the following criteria:

  • The school had a projected enrollment of 30 or more students in grade 5 at the time of recruitment.
  • The grade 5 teachers were not already implementing a trait-based writing approach.
  • The school and at least one grade 5 teacher were willing to participate in the research protocol.

To accommodate districts that would participate only if all their schools could be involved, the requirement for number of students was lowered to a projected minimum of 20 students. The number of participating schools was increased slightly to maintain the originally planned statistical sensitivity.

All participating elementary schools were randomly assigned to either the treatment or control condition. Two cohorts of schools participate: one during the 2007/08 and 2008/09 school years and another during the 2008/09 and 2009/10 school years. Randomization occurs within strata defined by district and by year of participation in the study. Within each district the two schools with the highest percentage of students eligible for free or reduced-price lunch were randomly assigned to conditions, then the next highest pair of schools, and so on, so that students in the treatment and control groups were reasonably well balanced. In districts with an odd number of participating schools, the single unpaired school was randomly assigned to a condition. The random assignment resulted in a final sample of 74 schools: 39 treatment schools and 35 control schools. Within each participating school all available grade 5 writing teachers take part in the study.
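The pairwise assignment procedure described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual randomization code; the function name and the (school_id, district, frl_pct) input format are hypothetical.

```python
import random
from collections import defaultdict

def assign_pairs(schools, seed=0):
    """Pairwise random assignment within district strata.

    `schools` is a list of (school_id, district, frl_pct) tuples, where
    frl_pct is the percentage of students eligible for free or
    reduced-price lunch -- a hypothetical input format.
    """
    rng = random.Random(seed)
    by_district = defaultdict(list)
    for school in schools:
        by_district[school[1]].append(school)

    assignment = {}
    for members in by_district.values():
        # Sort from highest to lowest poverty, then walk down in pairs,
        # flipping a coin within each pair.
        members.sort(key=lambda s: s[2], reverse=True)
        for i in range(0, len(members) - 1, 2):
            treat, control = rng.sample([members[i], members[i + 1]], 2)
            assignment[treat[0]] = "treatment"
            assignment[control[0]] = "control"
        if len(members) % 2 == 1:
            # Odd number of schools: the unpaired school gets a coin flip.
            assignment[members[-1][0]] = rng.choice(["treatment", "control"])
    return assignment
```

Because each pair contributes exactly one school to each condition, the two arms end up balanced on the poverty measure used for sorting.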

The impact of the 6+1 Trait writing model is estimated by comparing treatment and control outcomes after the first year of implementation, during which the treatment schools used the model while the control schools continued with whatever approach to writing they had used before. The control schools receive the intervention in the following year, after data collection is complete. The study design is thus a pretest-posttest control group design (Shadish, Cook, and Campbell 2002) with delayed treatment for the control group.

The control group is a realistic counterfactual condition in which many teachers have some awareness of the trait-based writing model but have not been thoroughly trained in it and have not used it extensively with students. Oregon, for instance, uses a trait-based framework for state writing assessments in grades 4, 7, and 10, but it provides little support for integrating this framework into classroom instruction. Many schools provide teachers with professional development in trait-based writing instruction to fill this gap. A screening process excluded schools in which elementary level teachers were already thoroughly trained in trait-based writing instruction and were using it in their classroom instruction and student assignments.

Statistical power refers to the sensitivity of a design to detect treatment effects (Cohen 1988; Schochet 2005). To determine the statistical power for the study, data were analyzed to estimate the minimum detectable effect size under a set of scenarios. In collaboration with the Oregon Department of Education, data with a structure similar to that proposed for the current study were obtained and modeled to estimate the expected degree of similarity of students within schools (the intraclass correlation) and the effect size for the school-level covariate (prior-year school-level performance on the state writing assessment). These figures were calculated using the 2005/06 grade 4 Oregon Writing Assessment data and served as key inputs for the power analysis. The minimum detectable effect size was then estimated for varying numbers of schools and varying numbers of students within schools (to account for the fact that school size varies greatly in Oregon).
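For a two-level cluster randomized design of this kind, the minimum detectable effect size can be approximated with a standard formula in the style of Bloom (2004) and Schochet (2005). The function below is a sketch of that calculation; the intraclass correlation and covariate R-squared values in the example call are illustrative guesses, not the figures the study estimated from the Oregon assessment data.

```python
from statistics import NormalDist

def mdes_cluster(j_schools, n_per_school, p_treated, icc,
                 r2_school=0.0, r2_student=0.0, alpha=0.05, power=0.80):
    """Approximate minimum detectable effect size for a two-level
    cluster randomized trial, using normal critical values.

    icc: intraclass correlation (similarity of students within schools)
    r2_school, r2_student: variance explained by school-level and
    student-level covariates, respectively.
    """
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    denom = p_treated * (1 - p_treated) * j_schools
    variance = (icc * (1 - r2_school) / denom
                + (1 - icc) * (1 - r2_student) / (denom * n_per_school))
    return multiplier * variance ** 0.5

# 74 schools (39 treated), roughly 25 students per school, with made-up
# icc and R-squared values used for illustration only.
mdes = mdes_cluster(j_schools=74, n_per_school=25, p_treated=39 / 74,
                    icc=0.15, r2_school=0.5, r2_student=0.5)
```

With these assumed inputs the function returns a value in the neighborhood of .2, which illustrates how a school-level covariate such as prior-year assessment performance tightens the design.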

The study includes 39 treatment schools and 35 control schools, yielding an estimated minimum detectable effect size of approximately .23. This means that, given the sample size and methods, the study has a high likelihood of detecting a difference between the essay scores of the treatment and control groups if the difference is .23 standard deviation or larger. Another way of understanding effect size is in terms of improvement in percentile scores: an intervention with an effect size of .23 standard deviation would move the average student from the 50th percentile to the 59th (for a table of effect sizes and corresponding increases from a percentile score of 50, see http://www.bestevidence.org/methods/effectsize.htm).
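The percentile conversion above can be checked directly: under a normal distribution, a shift of .23 standard deviation moves an average student from the 50th percentile to the quantile given by the normal cumulative distribution function at 0.23.

```python
from statistics import NormalDist

# An average student sits at the 50th percentile (z = 0). An effect of
# 0.23 standard deviation moves that student to the Phi(0.23) quantile.
new_percentile = 100 * NormalDist().cdf(0.23)
print(round(new_percentile))  # 59
```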

A possible threat to the study's internal validity is the use of trait-based writing instruction in control group schools, whether through contamination from treatment schools during the study or through other means. Since the control condition is intended to include the broad spectrum of instruction found in typical schools, some components of the intervention might appear to some degree in the control group. This is particularly likely because various trait-based methods for writing instruction are included in available language arts curriculum materials. Although all schools were screened for prior training in this kind of writing instruction, some control group teachers might use some trait-based methods during the study. This could attenuate the effect size relative to what would be obtained if control group teachers used no trait-based strategies in their classrooms.

Key outcomes and measures

Student achievement in writing is the key outcome of interest. Writing proficiency is assessed by raters trained to score essays using both the 6+1 Trait model and a single-dimension "holistic" rating system; both are common in statewide assessments and other measurement systems for writing proficiency. The Northwest Regional Educational Laboratory maintains a pool of trained raters who are regularly monitored and calibrated to ensure interrater reliability and fidelity to the assessment model. Each student essay is scored by two teams of raters: one team produces a holistic score reflecting the overall quality of the student's writing, and the other produces a set of scores based on the seven writing traits included in the intervention. Raters score the essays without knowing whether a student was in the treatment or control group or whether a given essay was a pretest or posttest.

Data collection approach

Student essays are collected using a process parallel to that used in the Oregon statewide writing assessment. Pretest writing samples are collected in September of each data collection year; posttest writing samples, the following May. The essay writing sessions are proctored by the participating teachers in both the treatment and control groups. Students work on their essays for 45 minutes on each of three successive days, giving them the opportunity for a natural writing process, including planning, drafting, and revision.

A single prompt in the expository mode (a writing task that requires students to explain something) is used for both pretests and posttests. Using the same prompt throughout avoids variation in student performance related to the characteristics of different writing modes or specific prompts. Rather than narrowly specifying what must be explained in the essays, students are allowed to choose what they will explain, within a general guideline. This helps reduce the likelihood of a "practice effect," which can occur when students perform exactly the same task at a later point in time. Along with the essay prompt and a booklet for organizing their work, students receive instructions to guide them through a three-day process of planning, drafting, and completing an essay.

Specific directions for assessment administration are provided to teachers, explaining the purpose and procedures for each of the three days. Teachers are also provided copies of the current Oregon Accommodations Table and Modifications Table for writing test administration. Student accommodations do not change the content or performance standards of what is being measured by the assessment. Student essays written with assessment accommodations are included in all data analyses.

To collect data on teachers' implementation of the 10 instructional strategies that form the basis of the training, teacher classroom practices are measured with surveys developed by the study team for this investigation. The surveys provide data both on fidelity of implementation in the treatment schools and on the degree to which these instructional practices are also present in the control schools. All teachers in both groups complete the survey at three points: before the beginning of treatment, in the middle of the treatment year, and at the end of the treatment year. The survey contains Likert-type questions on teacher practices and opinions, descriptive items on teachers' backgrounds and use of instructional time, and open-ended questions on instructional and learning issues. The results are used to describe the treatment and control teachers and their classrooms.

Analysis plan

Since schools were randomly assigned to the treatment or control condition, but students are the focus of the analysis, the design is a cluster randomized trial (Bloom 2004; Murray 1998; Murray, Varnell, and Blitstein 2004). The framework for the data analysis is a two-level hierarchical linear model, with students nested within schools.

The spring essay scores (at the end of the treatment year) are used as dependent variables in the outcome analyses, while pretest essay scores from the beginning of the year are entered as covariate measures of baseline writing performance for individual students. School-level performance on the grade 4 Oregon writing assessment is also used as a school-level covariate to reduce the between-school variance and improve the efficiency of the design. Because the state writing assessment is administered only in grades 4, 7, and 10, aggregate grade 4 school-level scores serve as the school-level covariate for both treatment and control schools.
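Written out, the two-level model has the following general form. The symbols here are ours, and the exact covariate coding is a sketch of the specification described above, not the study's reported estimation equations.

```latex
% Level 1 (student i in school j): posttest essay score as a function
% of the student's pretest score (grand-mean centered)
Y_{ij} = \beta_{0j} + \beta_{1}\,(\mathrm{Pre}_{ij} - \overline{\mathrm{Pre}}) + \varepsilon_{ij}

% Level 2 (school j): T_j = 1 for treatment schools, 0 for control;
% W_j = school-level grade 4 writing assessment score
\beta_{0j} = \gamma_{00} + \gamma_{01} T_{j} + \gamma_{02} W_{j} + u_{0j}
```

The treatment effect is the school-level coefficient γ01, estimated net of the student pretest covariate and the school-level covariate.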

Student writing performance is analyzed using an intent-to-treat framework: data from each student are analyzed as part of the experimental group to which that student's school was randomly assigned (treatment or control). Student attrition, students who change experimental groups, and students who enter study classrooms after baseline are monitored. Attrition rates are reported in detail; if attrition is large or differs across conditions, the final sample may be weighted to preserve its representativeness. The posttest results of students who cross over from treatment schools to control schools, or vice versa, are analyzed with their original school. New students entering study schools after the pretest are excluded from the analysis.

Principal investigators

Michael T. Coe, PhD
Director, Research Unit
Center for Research, Evaluation, and Assessment

Contact information

Michael T. Coe, PhD
Director, Research Unit
Center for Research, Evaluation, and Assessment
101 SW Main, Suite 500
Portland, OR 97204
coem@nwrel.org
(503) 275-9497

Region: Northwest

References

Black, P., and Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.

Bloom, H. (2004). Randomizing groups to evaluate place-based programs. New York: MDRC.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.

Crooks, T.J. (1988). The impact of classroom evaluation practices on students. Review of Educational Research, 58(4), 438–81.

Culham, R. (2003). 6+1 Traits of Writing. New York: Scholastic.

Fontana, D., and Fernandes, M. (1994). Improvements in mathematics performance as a consequence of self-assessment in Portuguese primary school pupils. British Journal of Educational Psychology, 64(3), 407–17.

Graham, S., and Perin, D. (2007). Writing next: effective strategies to improve writing of adolescents in middle and high schools. New York: Carnegie Corporation.

Hattie, J.A. (1992). Self concept. Hillsdale, NJ: Erlbaum.

Hillocks, G., Jr. (1987). Synthesis of research on teaching writing. Educational Leadership, 44(8), 71–6, 78, 80–2.

Marzano, R.J. (2003). What works in schools: Translating research into action. Alexandria, VA: Association for Supervision and Curriculum Development.

Murray, D.M. (1998). Design and analysis of group-randomized trials. Oxford: Oxford University Press.

Murray, D.M., Varnell, S.P., and Blitstein, J. (2004). Design and analysis of group-randomized trials: a review of recent methodological developments. American Journal of Public Health, 94(3), 423–32.

Natriello, G. (1987). The impact of evaluation processes on students. Educational Psychologist, 22(2), 155–75.

Sadler, D.R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–44.

Schochet, P.Z. (2005). Statistical power for random assignment evaluations of education programs. Princeton, NJ: Mathematica Policy Research.

Secretary's Commission on Achieving Necessary Skills. (1993). Teaching the SCANS competencies. Washington, DC: U.S. Department of Labor.

Shadish, W.R., Cook, T.D., and Campbell, D.T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
