From Past Influences to Present Implementation to Future Implications: How SRBI Promises to Change the Way We Help Struggling Students

Principal Investigators: Dr. Diana Sisson and Dr. Betsy Sisson
Connecticut Association for Reading Research

Introduction

Connecticut’s Scientific Research-Based Interventions (SRBI) framework stems from the national model of Response to Intervention (RtI), which is the culmination of more than three decades of federal involvement in special education services in this nation. Beginning with the Education for All Handicapped Children Act of 1975 (later recodified as the Individuals with Disabilities Education Act, or IDEA), legislation ensured appropriate public education for students with disabilities and access to nondiscriminatory evaluation procedures.

From the outset, however, controversy brewed over the use of the IQ-discrepancy model as the primary diagnostic procedure. Soon after, statistics regarding eligibility criteria provided fodder for public debate over the validity of the identification process. For example, Gresham (2001) claimed that after nearly two decades of the IQ-discrepancy model no clear definition of learning disabilities existed in “policy or practice,” [thus,] “findings indicate that substantial proportions of school-identified LD students – from 52 to 70 percent – fail to meet state or federal eligibility criteria” (p. 1).

While the national debate over the IQ-discrepancy model would ultimately lead to a dramatic policy change affecting both general education and special education, it was not the only deciding factor in the creation of RtI. Historical influences in the fields of psychology and literacy would coalesce to bring about a national recognition of the struggling reader, and legislative policy would follow that sought to offer the services that handicapped students would need to be successful in academic settings.

Historical Influences

The end of the nineteenth century witnessed experimental psychology’s first forays into the cognitive processes of reading, and soon after, leaders in the educational field delved into the pedagogical underpinnings of reading. Meanwhile, medical doctors began for the first time to diagnose students with reading difficulties – namely, dyslexia – a term reserved for those children who struggled to learn to read. While some schools employed trained reading specialists, private consultants provided most of this specialized tutoring outside of public school settings.

Due to the dearth of public school services, concerned parents of struggling learners organized a conference in 1963 attended by specialists from a host of different fields. There, Samuel Kirk – later recognized as the father of special education – suggested the umbrella term “learning disabilities” as a means to characterize the specific needs of these students. Marshaling their forces, the parents moved to influence change at the national level and lobbied for federal guarantees of a free and appropriate education for their children (Berninger, 2006).

As stakeholders in this new field of learning disabilities continued to rally support for their cause, the framework of the RtI model that would emerge in 2004 found its beginnings in the middle of the twentieth century, when behavioral analysts utilized a problem-solving paradigm to address issues in social contexts. Eventually, practitioners refined the process to include a methodology for monitoring students’ responses to interventions in academic settings. Alongside this advancement came an awareness that the instructional environment plays a key role in ameliorating learning problems. During the 1980s, school systems began to utilize tools to monitor academic progress and track student achievement. These historical influences merged with federal legislation as each new federal policy made more advanced attempts to affect the academic achievement of all students and to use data as a barometer for school success (Wright, 2007).

Legislative Policy

As lawmakers endeavored to provide equity in the educational arena, the Elementary and Secondary Education Act of 1965 delivered the first federal legislation providing funding to public schools. Designed to address perceived social problems and eradicate poverty and its effect on the American economy, it did not consider the needs of disabled children. A decade would pass before the federal government reflected on the needs of handicapped students, and with this recognition would come the advent of special education policy in the United States.

1975 – Education for All Handicapped Children Act (PL 94-142).

The first significant special education legislation originated in 1975 with the Education for All Handicapped Children Act (EAHCA) which guaranteed students with disabilities a free and appropriate public education (FAPE), the least restrictive environment (LRE) for school settings, due process rights, and nondiscriminatory evaluation protocols. Subsequently, a tidal wave of students qualifying for special education services inundated American schools. Since its inception, the number of students identified as learning disabled has grown more than 300% with American schools providing special education services for more than 6 million children (Cortiella, 2008).

1977 – Final Regulations for EAHCA (PL 94-142).

Legislators approved regulations for PL 94-142 in 1977. At that time, a learning disability was defined as “a severe discrepancy between achievement and intellectual ability” (U.S. Department of Education, 1977, p. G1082). Unable, however, to reach consensus regarding diagnostic procedures for identifying students with learning disabilities, legislators settled on a compromise that set in place a protocol identifying as learning disabled those students who demonstrated acute underachievement in comparison with IQ as measured through an intelligence test.

The use of IQ as the sole criterion for determination of a learning disability led to grave concerns in the educational field (Stuebing, Barth, Weiss, & Fletcher, 2009). To begin, the ability-achievement discrepancy did not address why students may exhibit normal cognitive functioning and yet struggle against specific academic performance standards. The discrepancy model, with its utilization of a standardized testing instrument, also did not take into account situation-specific issues related to the individual student, including the variability of early childhood developmental experiences. Questions arose as well regarding those students whose ability-achievement discrepancy was not severe enough; these students were simply characterized as “slow learners” with no eligibility for special education services. Furthermore, clinical decisions regarding eligibility were limited to pre-determined discrepancy criteria without regard for the school psychologist’s expertise (Holdnack & Weiss, 2006).

Of import, since its inception in 1977 special education referrals have increased by 200%, leading to over-extension of services in special education as well as national concern over possible misdiagnosis (Vaughn, Linan-Thompson, & Hickman, 2003). These dramatic increases occurred, however, only in the area of learning disabilities, with its use of the IQ-discrepancy formula (Holdnack & Weiss, 2006).

1990 – IDEA Amendments (PL 101-476).

After reauthorizations in 1983 and 1986, policymakers again reauthorized EAHCA in 1990 and renamed it the Individuals with Disabilities Education Act, or IDEA (PL 101-476). Lawmakers designed the 1990 amendments to ensure a greater diversity of services for eligible students. Founded on the concept of “zero exclusion,” IDEA also reaffirmed that eligible students receive a free and appropriate education in public schools (Hardman, 2006).

1997 – IDEA Amendments (PL 105-17).

With the 1997 reauthorization of IDEA (PL 105-17), the least restrictive environment (LRE) was extended into the general classroom. In effect, the new regulations brought the work of general educators and special educators closer together in a more unified system of delivering instruction and services (Wedle, 2005). It also focused attention on interventions in regular education settings as well as the use of problem-solving models in special education settings. The discrepancy model, however, remained the national protocol for identifying learning disabilities in American classrooms and schools.

Of note, the reauthorizations of 1983, 1986, and 1990 all focused on ensuring access to education for disabled students. In contrast, the reauthorization of 1997 diverted attention from access to accountability as is illustrated in its regulations concerning interventions and problem-solving models.

2001 – No Child Left Behind Act (PL 107-110).

Part of this relentless pursuit of educational improvement stemmed from the incendiary federal report of 1983 – A Nation at Risk – which publicly indicted the American educational system for its failure to educate students at a level appropriate to the nation’s ranking in the world marketplace. As the federal government continued to strive for increased competitiveness in international markets, legislators used their reauthorization of the Elementary and Secondary Education Act of 1965 to produce the No Child Left Behind Act. This legislation mandated that 100% of students in American classrooms be proficient in reading and math by 2014. Schools that did not meet the pre-set adequate yearly progress (AYP) goals faced funding sanctions. As schools labored to meet the federal benchmarks through intensive test preparation and the adoption of standardized curricula, struggling students throughout the nation continued to fail to meet the minimum competency requirements.

2004 – IDEIA Amendments (PL 108-446).

In 2004, legislators reauthorized IDEA (designated the Individuals with Disabilities Education Improvement Act, or IDEIA) with PL 108-446. This legislation shifted the emphasis of special education policy in a number of key aspects – from process to results, from a paradigm of failure to a model of prevention, and from a consideration of students as special education recipients first to an appreciation of their primary role in general education (Hardman, 2006). Contained within these regulations was language disallowing any single assessment to determine identification of a disability, along with a declaration that states were not required to use the discrepancy formula to determine learning disabilities but were, rather, permitted to utilize a protocol that focused on a student’s response to interventions that were scientific and research-based (U.S. Department of Education, 2006).

With the new model, then, states could implement targeted research-based interventions as a means to monitor students’ responsiveness and subsequently determine an evaluation for a specific learning disability. The National Association of State Directors of Special Education (NASDSE) defined this “response to intervention” as the enactment of “high-quality instruction and interventions matched to student need, monitoring progress frequently to make decisions about changes in instruction or goals and applying child response data to important educational decisions” (NASDSE, 2006, p. 3).

Of note, a fundamental intent of RtI was to decrease the number of students in special education by perhaps 70% (Lyon et al., 2001). Such a significant decrease in students receiving special education services would have considerable effect on the federal government as it was predicted that the national cost of special education services would soon total $80 billion annually (Burns & Gibbons, 2008) for the current 6.5 million children identified with disabilities (Collier, 2010).

Addressing these long-standing budgetary issues, IDEIA 2004 contained three central elements: use of scientifically-based reading instruction, evaluation of how students respond to interventions, and the employment of data to inform decision making (Brown-Chidsey & Steege, 2005). Fuchs, Fuchs, and Vaughn (2008) characterized it as having two unified goals – the identification of at-risk students who would benefit from preventive services and the provision of on-going services to LD students who are chronically unresponsive and require a more individualized approach based on data-driven instructional planning.

Emergence of Response to Intervention

On August 14, 2006, legislators introduced final regulations to accompany the 2004 reauthorization of IDEIA (PL 108-446). Effective October 13, 2006, this historic new education policy promised to effect significant changes in practices for both general education and special education. Soon after the federal adoption, states began to examine the RtI model and prepare organizational designs for implementation. The first step was to identify its chief components.

RtI Components

There are a number of components that typify the RtI model. They include universal screenings, multiple tiers of intervention services, progress monitoring, and data-based decision making.

Universal screenings.

Typically implemented three times during the academic school year (at the beginning, middle, and end), universal screenings are conducted with all students and prove significant in the RtI model, as they serve as the gateway for students to gain access to more intensive interventions (Mellard & Johnson, 2008).

While the legislation contains no mandate for screenings, screening provides the “principal means for identifying early those students at risk of failure and likely to require supplemental instruction; as such, it represents a critical juncture in the service delivery continuum” (Jenkins, Hudson, & Johnson, 2007, p. 582). Wixson and Valencia (2011) contend that the intent of universal screening is to “use the assessment information as the basis for differentiating instruction so it is more responsive to students’ needs and more likely to accelerate student learning” (p. 466).
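To make screening’s gateway function concrete, the sketch below flags students who fall below a benchmark cutoff. It is a hypothetical illustration only: the 25th-percentile cutoff, student names, and scores are invented assumptions, not values prescribed by the RtI legislation.

```python
# Hypothetical universal-screening pass: flag students whose benchmark score
# falls below a chosen percentile cutoff. Cutoff and data are illustrative.
from statistics import quantiles

def flag_at_risk(scores, cutoff_percentile=25):
    """Return, alphabetically, the students scoring below the cutoff percentile."""
    values = list(scores.values())
    # quantiles(..., n=100) yields the 1st-99th percentile boundaries
    cutoff = quantiles(values, n=100)[cutoff_percentile - 1]
    return sorted(name for name, score in scores.items() if score < cutoff)

fall_screening = {"Ava": 42, "Ben": 77, "Cara": 58, "Dan": 39,
                  "Eli": 81, "Fay": 65, "Gus": 49, "Hana": 72}
print(flag_at_risk(fall_screening))  # students flagged for closer monitoring
```

Students flagged this way would merely be candidates for more intensive supports; the screening identifies risk, it does not diagnose a disability.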

Multiple tiers.

Distinct from traditional approaches (Barnes & Harlacher, 2008), RtI follows the public health model, which employs multiple tiers of interventions of increasing intensity. It begins with primary interventions for the general population, then secondary interventions for the subset of the population who require more intensive services, and finally, tertiary interventions for those who have failed to respond to all previous treatments (Harn, Kame’enui, & Simmons, 2007; Mellard & Johnson, 2008). In a comparable fashion, RtI commonly provides three tiers of academic supports.

Tier I encompasses the best practices implemented in the general classroom setting in which most students (80%-90%) will perform proficiently as evidenced by assessment outcomes, such as the universal screenings conducted throughout the year. Those students (10%-15%) who do not respond to the supports provided in Tier I have opportunities for targeted instruction in Tier II with a greater degree of frequency (1-2 times weekly) and intensity (small groups comprising 3-6 students). Instruction at this tier may be provided by the classroom teacher or interventionist trained to work at this level of support services. The small minority of students (1%-5%) who fail to respond in Tier I or Tier II move to Tier III with the most intensive interventions. During this time, services are provided at even greater frequency (3-5 times weekly) and with greater intensity (small groups of no more than 3 students). Fuchs and Fuchs (2006) suggest several means to increase intensity, such as by “(a) using more teacher-centered, systematic, and explicit, (e.g., scripted) instruction; (b) conducting it more frequently; (c) adding to its duration; (d) creating smaller and more homogeneous student groupings; or (e) relying on instructors with greater expertise” (p. 94).
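The tier proportions above can be read as a simple placement rule. The sketch below is an illustrative convenience only; the percentile cutoffs are assumptions chosen to mirror the rough 80/15/5 split, not values mandated by the model.

```python
# Illustrative three-tier placement from a screening percentile; the cutoffs
# are assumptions approximating the proportions cited above, not fixed policy.
def assign_tier(percentile, tier2_cutoff=25, tier3_cutoff=10):
    """Map a student's screening percentile to a tier (1, 2, or 3)."""
    if percentile < tier3_cutoff:
        return 3  # most intensive: groups of <= 3, 3-5 sessions weekly
    if percentile < tier2_cutoff:
        return 2  # targeted: groups of 3-6, 1-2 sessions weekly
    return 1      # core classroom instruction for most students

print([assign_tier(p) for p in (5, 18, 60)])  # -> [3, 2, 1]
```

In practice, of course, placement decisions also weigh progress-monitoring data and teacher judgment rather than a single percentile.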

Deno’s cascade of services.

This tiered configuration is reminiscent of the model devised by Deno (1970), which conceptualized special education services as a “cascade” in which increasingly smaller groups of students receive instruction with intensifying attention paid to individual needs. Deno’s cascade of services shaped special education guidelines throughout the 1970s and 1980s, though ever-greater numbers of students qualifying for special education services hampered its ultimate effect. Despite those limitations, the RtI model remains similar in spirit to Deno’s construct for specialized services.

The standard protocol versus the problem-solving approach.

The RtI tiered framework commonly adheres to one of two models – the standard treatment protocol or the problem-solving approach (Wixson, Lipson, & Johnston, 2010). Historically, each garnered support from a distinct professional group. Early interventionists in the reading field advocated for the superiority of the standard treatment protocol while behavioral psychologists promoted the more clinical problem-solving model (Fuchs, Mock, Morgan, & Young, 2003).

While elementally similar, the two models differ in the degree to which each provides individual interventions and the level to which they analyze the student achievement problem before implementing an intervention plan (Christ, Burns, & Ysseldyke, 2005). Fuchs, Mock, Morgan, and Young (2003) further assert that, by inherent principle, the standard treatment protocol will ensure quality control of the interventions while the problem-solving model will focus on individual differences and needs.

Typically used by practitioners in the field, the standard protocol provides a plan of standardized interventions for a given time, with consideration given to teacher fidelity to the program. Although the ideology derived from the scientific method, the protocol itself was originally the work of Bergan in 1977 and was later revised by Bergan and Kratochwill (1990). Bergan’s work delineated the steps of behavioral consultation into four stages that now constitute the precepts of the standard protocol for intervention services.

The problem-solving approach, preferred by researchers and school psychologists, typifies a tailored instructional plan designed for individual students based on their needs (Fuchs & Fuchs, 2008). Similar in design to the standard protocol, the problem-solving approach diverges in its intent to provide increasingly intensive interventions that are scientifically based and data focused as nonresponsive students move up the tier continuum (Hale, Kaufman, Naglieri, & Kavale, 2006).

Haager and Mahdavi (2007) suggest that a number of supports must be present in order to implement a tiered intervention framework, such as professional development, shared focus, administrator support, logistical support, teacher support, and assessment protocols. Similarly, they argue that certain barriers will negate the effectiveness of such a model, pointing to competing educational initiatives, negative perceptions regarding teachers’ roles and responsibilities in remediating reading, lack of time, inadequate training, and the absence of support structures.

Progress monitoring.

Within the RtI model, progress monitoring provides immediate feedback by assembling multiple measures of student academic achievement to “assess students’ academic performance, to quantify a student rate of improvement or responsiveness to instruction, and to evaluate the effectiveness of instruction” (National Center on Response to Intervention, 2011, para. 1). Thus, progress monitoring should provide accurate and reliable methods to track response to interventions in order to modify intervention plans for individual students (Alber-Morgan, 2010).
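A student’s “rate of improvement” is commonly quantified as the slope of a line fitted to successive probe scores. A minimal sketch, assuming weekly probes and invented scores (e.g., words correct per minute):

```python
# Least-squares slope over weekly progress-monitoring probes (weeks 0, 1, 2, ...).
# The probe scores below are invented for illustration.
def rate_of_improvement(scores):
    """Ordinary least-squares slope: score gained per week."""
    n = len(scores)
    mean_w = sum(range(n)) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in enumerate(scores))
    den = sum((w - mean_w) ** 2 for w in range(n))
    return num / den

probes = [18, 20, 19, 23, 24, 26]  # weekly words-correct-per-minute probes
print(round(rate_of_improvement(probes), 2))  # roughly 1.6 words gained per week
```

A slope well below the rate a student needs in order to reach the goal would signal that the intervention plan should be modified.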

Data-based decision making.

As ongoing assessment is one of the primary aspects of the RtI model, the use of data to inform decisions proves paramount in the intervention and identification process. On a continuing basis, educators utilizing the RtI model gather student information “(1) to adjust the specifics of teaching to meet individual students’ needs and (2) to help students understand what they can do to keep growing as readers” (Owocki, 2010). Ultimately, the data serve as a deciding factor in both preventive services and eligibility criteria, thereby necessitating that those in the field become expert in data maintenance, data mining, and data-driven decision making.
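One widely described decision heuristic in the progress-monitoring literature – offered here as an illustration, not a requirement of the legislation – is a “four-point rule”: if the most recent four probes all fall below the student’s aim line, the intervention is changed; if all fall above it, the goal is raised.

```python
# Sketch of a four-point decision rule; the scores and aim-line values used
# in the example call are invented for illustration.
def four_point_decision(scores, aim_line):
    """Compare the last four probe scores against the aim line at the same weeks."""
    pairs = list(zip(scores[-4:], aim_line[-4:]))
    if all(s < a for s, a in pairs):
        return "modify intervention"
    if all(s > a for s, a in pairs):
        return "raise goal"
    return "continue and keep monitoring"

print(four_point_decision([20, 21, 20, 22], [24, 25, 26, 27]))  # -> modify intervention
```

Whatever the specific rule, the point is the same: intervention changes follow from the data rather than from impression alone.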

Opponents argue that attention should focus on the shortcomings of RtI. Namely, the model requires classroom teachers to take greater responsibility for struggling students in ways that may extend beyond their level of expertise (Collier, 2010). A deeper concern is that the RtI model identifies chronically low-achieving students – not students who are learning disabled. As an extension of these issues, while RtI lowers the number of referrals (and the corresponding staffing and resources such referrals necessitate), transitioning students through the three tiers of intervention may delay or even eliminate necessary referrals. If these concerns materialize, students who should be eligible for special education will be deprived of vital support services.

Ultimately, whether advocate or opponent of RtI, researchers in the field estimate that there will continue to be 2% to 6% of students who will fail to respond to any of the three intervention tiers – regardless of frequency or intensity of support. They predict 6% to 8% of students will qualify for special education services (Fuchs, Stecker, & Fuchs, 2008) – approximately a 50% reduction from 2004.

Constructing SRBI

In reaction to the new federal legislation, the state of Connecticut moved to analyze this paradigm shift in special education policy within the context of the state’s classrooms and schools, subsequently documenting the process in its 2008 publication, Using Scientific Research-Based Interventions: Improving Education for All Students – Connecticut’s Framework for RtI.

State Leadership Team.

The first step in the implementation process began with the development of a state leadership team whose task was to craft a state policy that adhered to the federal law while considering the unique needs of Connecticut and its students. The team comprised delegates from the Connecticut State Department of Education (CSDE), the Regional Education Service Centers (RESCs), the State Education Resource Center (SERC), and other stakeholder educational agencies.

Roundtable discussions.

With the leadership team came roundtable discussions on RtI. Bringing together a wide range of stakeholder groups (e.g., administrators, regular and special education teachers, higher education faculty, members from the governor’s office, and parents), these dialogues centered on the key components of the RtI model – 1) universal screenings, 2) progress monitoring, 3) tiered interventions, and 4) implementation. From this discourse stemmed a number of significant concepts, namely, the need for a joint effort between regular education and special education, the importance of leadership, and the necessity of professional development.

Advisory panel.

An advisory panel assembled next and focused on two main responsibilities – reviewing the literature surrounding RtI and designing an implementation framework for Connecticut’s schools. During this time, the panel converted the nationally recognized name of RtI into the more personalized SRBI (scientific research-based interventions) for Connecticut. Because this term appears in both NCLB and IDEA, the panel proposed that such a designation would emphasize its belief in the significance of general education in the policy as well as the weight of using interventions that were both scientific and research based.

State personnel development grants.

To facilitate statewide implementation, the CSDE and SERC worked collaboratively to offer three-year grants to schools in four school districts. These school systems – Bristol, CREC, Greenwich, and Waterbury – served as model sites because of their use of intervention services and differentiated instruction. Their undertaking was to expand this work to additional schools in their systems as well as to create opportunities for collaboration with other school systems wishing to improve their educational services.

The SRBI Model

In constructing the state’s SRBI model, the state adhered to the nationally recognized RtI model. Tier I occurs in the general classroom, focuses on the general education curriculum, must be research-based and culturally responsive, and includes a range of supports. While instruction may occur through small, flexible groups, the instructor is the general educator with collaboration from specialists. Assessments in this tier include universal screenings, formative assessments, and any additional assessment tools that may be beneficial for monitoring individual student performance.

Data teams collaborate with classroom teachers to utilize assessment data as a means to inform instructional planning and make decisions regarding the placement of students within the three tiers.

Tier II attends to those students who have not responded to the supports provided in Tier I and offers additional services in the general education classroom or other general education settings. In this tier, students receive short-term interventions (8 to 20 weeks), supplemental to the core curriculum, delivered in small groups of struggling students (1:6). Interventionists may be any general education teacher or a specialist trained to work in this tier. Assessments during this tier concentrate on frequent progress monitoring (weekly or biweekly) to determine students’ responsiveness to interventions. Data analysis occurs in both data teams and intervention teams.

Tier III focuses on students who have failed to respond to supports or interventions in Tiers I and II. They continue to receive services in general education settings; however, they also receive additional short-term interventions (8 to 20 weeks), again supplemental to the core curriculum, provided in smaller, homogeneous groups (1:3). Interventionists again come from the general education field or are others trained in this tier. Progress monitoring increases in frequency (twice weekly), and intervention teams continue to assess the data.
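The tier parameters described above can be collected into a small lookup structure. The values restate the SRBI description in the text; the dictionary layout itself is merely an illustrative convenience.

```python
# Summary of the SRBI tier parameters described in the text above.
SRBI_TIERS = {
    1: {"setting": "general classroom", "group": "whole class / flexible groups",
        "monitoring": "universal screenings and formative assessments"},
    2: {"setting": "general education settings", "group": "up to 1:6",
        "duration_weeks": (8, 20), "monitoring": "weekly or biweekly"},
    3: {"setting": "general education settings", "group": "up to 1:3",
        "duration_weeks": (8, 20), "monitoring": "twice weekly"},
}

print(SRBI_TIERS[3]["monitoring"])  # -> twice weekly
```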

Conclusion

As schools in Connecticut continue to implement SRBI, focus must remain on the systemic reforms needed to ensure the academic well-being of Connecticut’s students. The SRBI model offers the potential to effect lasting change in our schools, perhaps even to bridge the achievement gap that has plagued Connecticut for so many years. To do so, however, will require all of us to work together with a singular goal in mind – ensuring that all of our students succeed.

References

Alber-Morgan, S. (2010). Using RtI to teach literacy to diverse learners, k-8: Strategies for the inclusive classroom. Thousand Oaks, CA: Corwin Press.

Barnes, A. C., & Harlacher, J. E. (2008). Clearing the confusion: Response-to-intervention as a set of principles. Education and Treatment of Children, 31(3), 417-431.

Bergan, J., & Kratochwill, T. R. (1990). Behavioral consultation and therapy. New York: Plenum Press.

Berninger, V. W. (2006). Research-supported ideas for implementing reauthorized IDEA with intelligent professional psychological services. Psychology in the Schools, 43(7), 781-796.

Brown-Chidsey, R., & Steege, M. W. (2005). Response to intervention: Principles and strategies for effective practice. New York: The Guilford Press.

Burns, M. K., & Gibbons, K. A. (2008). Implementing response-to-intervention in elementary and secondary schools. New York: Routledge Taylor & Francis Group.

Christ, T. J., Burns, M. K., & Ysseldyke, J. E. (2005). Conceptual confusion within response-to-intervention vernacular: Clarifying meaningful differences. NASP Communiqué, 34(3). Retrieved from http://www.nasponline.org/publications/cq/cq343rti.aspx

Collier, C. (2010). RtI for diverse learners. Thousand Oaks, CA: Corwin Press.

Connecticut State Department of Education. (2008). Using scientific research-based interventions: Improving education for all students – Connecticut’s framework for RtI. Hartford, CT: Author.

Cortiella, C. (2008). Response to intervention – An emerging method for learning disability identification. Retrieved from http://www.schwablearning.org.

Deno, E. (1970). Special education as developmental capital. Exceptional Children, 37, 229-237.

Fuchs, D., & Fuchs, L. (2008). Implementing RtI. District Administration, 44(11), 72-76.

Fuchs, D., & Fuchs, L. S. (2006). Introduction to response to intervention: What, why, and how valid is it? Reading Research Quarterly, 41(1), 93-99.

Fuchs, D., Fuchs, L. S., & Vaughn, S. (Eds.). (2008). Response to intervention: A framework for reading educators. Newark, DE: International Reading Association.

Fuchs, D., Mock, D., Morgan, P. L., & Young, C. L. (2003). Responsiveness-to-intervention for the learning disabilities construct. Learning Disabilities Research & Practice, 18(3), 157-171.

Fuchs, D., Stecker, P. M., & Fuchs, L. S. (2008). Tier 3: Why special education must be the most intensive tier in a standards-driven, no child left behind world. In D. Fuchs, L. S. Fuchs, & S. Vaughn (Eds.), Response to intervention: A framework for reading educators (pp. 71-104). Newark, DE: International Reading Association.

Gresham, F. (2001, August). Response to intervention: An alternative approach to the identification of learning disabilities. Paper presented at the Learning Disabilities Summit: Building a Foundation for the Future. Washington, DC: August 27-28, 2001.

Haager, D., & Mahdavi, J. (2007). Teacher roles in implementing intervention. In D. Haager, Z. Klingner, & S. Vaughn (Eds.), Evidence-based reading practices for response to intervention (pp. 245-263). Baltimore, MD: Paul H. Brookes Publishing Company.

Hale, J. B., Kaufman, A., Naglieri, J. A., & Kavale, K. A. (2006). Implementation of IDEA: Integrating response to intervention and cognitive assessment methods. Psychology in the Schools, 43(7), 753-770.

Hardman, M. L. (2006). Outlook on special education policy. Focus on Exceptional Children, 38(8), 2-8.

Harn, B. A., Kame’enui, E. J., & Simmons, D. C. (2007). The nature and role of the third tier in a prevention model for kindergarten students. In D. Haager, J. Klingner, & S. Vaugn (Eds.), Evidence-based reading practices for response to intervention (pp. 161-184). Baltimore, MD: Paul H. Brookes Publishing Company.

Holdnack, J. A., & Weiss, L. G. (2006). IDEA 2004: Anticipated implications for clinical practice – integrating assessment and intervention. Psychology in the Schools, 43(8), 871-882.

Jenkins, J. R., Hudson, R. F., & Johnson, E. S. (2007). Screening for at-risk readers in a response to intervention framework. School Psychology Review, 36(4), 582-600.

Lyon, G. R. (1995). Research initiatives in learning disabilities: Contributions from scientists supported by the National Institute of Child Health and Human Development. Journal of Child Neurology, 10 (suppl. 1), S120-S126.

Lyon, G. R., Fletcher, J. M., Shaywitz, S. E., Shaywitz, B. A., Torgeson, J. K., Wood, F. B., Schulte, A., & Olson, R. (2001). Rethinking learning disabilities. In C. E. Finn, R. A. Rotherham, & C. R. Hokanson (Eds.), Rethinking special education for a new century (pp. 259-287). Washington, DC: Progressive Policy Institute and the Thomas B. Fordham Foundation.

Mellard, D. E., & Johnson, E. (2008). RtI: A practitioner’s guide to implementing response to intervention. Thousand Oaks, CA: Corwin Press.

National Association of State Directors of Special Education (NASDSE). (2006). Response to intervention: Policy considerations and implementation. Alexandria, VA: Author.

National Center on Response to Intervention. (2011). Progress monitoring. Retrieved from http://www.rti4success.org/categorycontents/progress_monitoring

Owocki, G. (2010). RtI daily planning book, k-6: Tools and strategies for collecting and assessing data & targeted follow-up instruction. Portsmouth, NH: Heinemann.

Stuebing, K. K., Barth, A. E., Weiss, B., & Fletcher, J. J. (2009). IQ is not strongly related to response to reading instruction: A meta-analytic interpretation. Exceptional Children, 76(1), 31-51.

U.S. Department of Education. (1977). 1977 code of federal regulations. Washington, DC: Author.

U.S. Department of Education. (2006). Assistance to states for the education of children with disabilities and preschools grants for children with disabilities, final rule. Retrieved from http://eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/lb/e9/95.pdf

Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to treatment as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391-409.

Wedl, R. J. (2005). Response to intervention: An alternative to traditional eligibility criteria for students with disabilities. Retrieved from Education Evolving website: http://www.educationevolving.org/pdf/Response_to_Intervention.pdf.

Wixson, K. K., Lipson, M. Y., & Johnston, P. H. (2010). Making the most of RtI. In M. Y. Lipson & K. K. Wixson (Eds.), Successful approaches to RtI: Collaborative practices for improving k-12 literacy (pp. 1-19). Newark, DE: International Reading Association.

Wixson, K. K., & Valencia, S. W. (2011). Assessment in RtI: What teachers and specialists need to know. The Reading Teacher, 64(6), 466-469.

Wright, J. (2007). RtI Toolkit: A practical guide for schools. Port Chester, NY: Dude Publishing.

Reading Intervention Strategies – Primary and Beyond

Theresa Jacksis & Lisa Yacoviello
University of Bridgeport
School of Education and Human Resources

Abstract

This article describes reading intervention strategies at the primary and secondary levels. The primary level focuses on the effectiveness of small group literacy instruction and the role that gender plays in reading achievement. At the secondary level, the focus is on a specific reading strategies course and its impact on student achievement. This article provides a rationale for conducting this research and reviews previous research in the field. At the elementary level, this study provided further support for the benefits of small group instruction and its positive effect on children’s literacy achievement. In addition, the study brought to light the effect that gender can have in small group literacy instruction. The research conducted at the secondary level showed that intensive, scientifically validated reading programs make a positive difference in achievement.

Introduction

The expectations placed upon students today are increasing in a variety of ways. Students are working hard to meet district, state, and national standards in all subject areas. Now more than ever, educators are exploring options to meet the needs of all of their students and help them achieve the goals set before them. The purpose of this paper is two-fold. The primary level will focus on the effect of small group instruction in the area of literacy as an intervention method, as well as the role that gender plays in student achievement in this area. The secondary level will focus on analyzing the effectiveness of a reading intervention course for struggling readers. The information gained from this study will assist educators in making instructional decisions that help students progress toward local and national goals.

Background

The importance of early reading cannot be emphasized enough. Educators know that children who struggle in reading in the primary grades often remain behind their classmates as they progress through their academic careers. The Carnegie Corporation of New York and the Alliance for Excellent Education have reported that close to eight million students in grades 4 through 12 are not reading at grade level (Hasselbring, 2002). Now that the No Child Left Behind Act (NCLB) federally mandates and regulates state testing for all students, standardized tests play a major role in education today. Children typically take one or more standardized tests each year, and teachers often devote a significant amount of class time to preparing for these tests. Many states administer “high stakes” tests, which can have a significant impact on school assessment and funding, determine students’ placement in classes, or even prevent grade promotion (Woodward & Talbert-Johnson, 2009).

With all the emphasis placed upon standardized tests, educators at the primary level are evaluating the success of small group instruction as an intervention method as well as the role that gender plays in student learning within a small group setting. At the secondary level, educators are evaluating the effectiveness of a structured reading intervention program.

Review of the Literature

It has been stated that, in general, girls perform better than boys in reading, regardless of the criteria used to assess their competency in this area (Moss, 2000). Several researchers have suggested that one reason for this is that boys choose to read nonfiction more than other genres both inside and outside of school (Guzzetti et al., 2002; Topping, Samuels, & Paul, 2008; Moss, 2000; Farris et al., 2009). The reason this becomes troublesome for boys is that, according to Guzzetti et al. (2002), “school reading is more pertinent to the interests of girls than to boys and it showed gender disparities in instructional reading practice” (p. 55). The majority of books read in classrooms are narrative texts, while boys prefer nonfiction texts, which are not typically available to them. Furthermore, it has been shown that boys appear to read less than girls, and proportionately more nonfiction, but as boys advance through the grades, they begin to read less carefully, which results in lower reading achievement (Topping, Samuels, & Paul, 2008). Another factor that comes into play when examining the role of gender in literacy achievement is children’s perceptions related to reading achievement (Moss, 2000; Lynch, 2002). According to Lynch (2002), “children’s self-perceptions as readers are significantly related to their reading achievement” (p. 54). In the classroom, when proficiency judgments about reading abilities were made highly visible, boys who were weaker readers (compared to girls who were weaker readers) spent much more time trying to disguise their lack of reading success, and they often began to read more nonfiction than fiction. Gender grouping also has a noticeable effect upon participation (Guzzetti et al., 2002; Aukrust, 2008).
Boys typically participate more across all grade levels, regardless of whether the teacher is male or female; however, the difference between girls’ and boys’ participation is smaller in classrooms with a female teacher than in those with a male teacher (Aukrust, 2008). Guzzetti et al. (2002) suggest that teachers form small groups by gender for discussion because this provides females with more opportunities to participate, and they feel safer participating when grouped on the basis of their gender.

It has been stated that early intervention and quality of instruction are keys to assisting struggling readers (Woodward & Talbert-Johnson, 2009). In order to provide children with the support they need, small group instruction has been utilized in classrooms as an intervention model at all grade levels. Several researchers have described small group instruction as a valuable practice that yields several benefits (Ganske, Monroe, & Strickland, 2003; Woodward & Talbert-Johnson, 2009; Wasik, 2008; McIntyre et al., 2005). In a study conducted by Woodward and Talbert-Johnson (2009), classroom teachers reported that small group instruction allows the teacher to focus on students’ needs with minimal distractions, and teachers were able to work with students in homogeneous groups, which allowed for student growth. This is further supported by Ganske, Monroe, and Strickland (2003) and Wasik (2008), who stated that the small group setting allows teachers to maintain the focus and attention of children who may otherwise disengage, and that this setting is more conducive to teachers monitoring students’ reading behavior and adjusting their instruction. Foorman and Torgesen (2001) contend that instruction for children who have difficulties in reading must be more explicit and comprehensive, more intensive, and more supportive than the instruction required by the majority of students. Small group instruction is one approach that can increase the intensity of instruction for children who are experiencing reading difficulties. A study conducted by Menzies, Mahdavi, and Lewis (2008) came to similar conclusions; three research-based strategies were implemented to minimize the occurrence of reading difficulties, one of which was instruction characterized by high intensity through the use of groups with a low student-teacher ratio. By the end of the study, students showed marked improvement in their reading abilities.
Small group instruction also allows students more time to interact with adults focusing on literacy (McIntyre et al., 2005), and when children have more opportunities to express themselves, there is a positive impact on their language development (Wasik, 2008).

Both the secondary and primary levels seek to improve fluency and comprehension as well as increase reading achievement among struggling readers. Two important components of comprehension are determining importance and synthesis (Harvey & Goudvis, 2007). When students can construct meaning, formulate their own opinions, and connect prior knowledge to their thinking, they are better equipped to tackle content area reading. In order for students to acquire these strategies, explicit teaching is imperative. One way to explicitly model effective comprehension strategies is through an interactive read aloud (Harvey & Goudvis, 2007; Gallagher, 2003). Teachers can explicitly model how to use reading comprehension strategies as they read so that students can use them to construct meaning. Research shows that teaching even one comprehension strategy can improve students’ comprehension (Gill, 2008). Reading aloud not only enhances comprehension; it is also an opportunity for teachers to immerse their students in wonderful literature (Harvey & Goudvis, 2007).

Sustained, silent, independent reading is another component that improves fluency and builds vocabulary. Students need to read extensively; the only way to improve is through practice. Teachers need to build time into their day for students to read independently (Harvey & Goudvis, 2007). Anderson, Wilson, and Fielding (1988) found that Sustained Silent Reading (SSR) directly correlates with higher reading scores: the more students read, the higher they scored on standardized reading tests (Table 1).

Table 1: Correlation of Minutes Read and Percentile Rank on Standardized Tests

Percentile Rank    Minutes of Text Reading Per Day    Estimated Number of Words Read Per Year
98                 90.7                               4,733,000
90                 40.4                               2,357,000
70                 21.7                               1,168,000
50                 12.9                                 601,000
 

Teachers need to provide a structured reading program. Part of that structure is having students record what they are reading and how many pages they have read. Having students chart their reading progress enables them to recognize their advancement as readers (Gallagher, 2003). Reading takes time and practice. The message students need to hear is that reading is important and, yes, reading is hard. If students are shown how to do it, given time to read, and exposed to high-interest materials, teachers will see significant growth in students’ fluency, comprehension, and enjoyment of reading (Gallagher, 2003).

Fluency and vocabulary development come with repeated practice. Students need to be shown how good readers use context while reading to figure out unfamiliar words; the more practice they get, the greater the chance of improving vocabulary. Though direct vocabulary instruction is still necessary, students need to know that vocabulary acquisition comes from reading a multitude of authors and genres.

Lastly, intensive reading programs can produce measurable change in a student’s reading progress. The format that seems most effective is a 90-minute class consisting of a whole-group mini-lesson followed by three rotations of approximately 20 minutes each: SSR, small group direct instruction with the teacher, and a scientifically validated computer program (WWC, 2009).

Method

The elementary level of this study examined the effect of small group instruction on third grade students’ reading comprehension achievement as measured by their scores on the Degrees of Reading Power (DRP) section of the Connecticut Mastery Test (CMT). The DRP is a nationally norm-referenced test that measures reading as a process. It assesses students’ ability to use information in the text to figure out the meaning of the text. The DRP is composed of several nonfiction passages on a variety of topics. Within each passage, words have been deleted, and students are asked to select the correct word for each deletion. The items in the test are designed to measure how well students process and understand text (CSDE, 2005). This study also examined the effect of gender on the DRP scores of these students.

At the secondary level, the study focused on the improvement of academic reading performance among students enrolled in a reading intervention course, as measured by their performance on the reading strands of the Connecticut Academic Performance Test (CAPT). The CAPT is a standard measure of assessment used in the state of Connecticut. It is aligned with the state’s curriculum framework and provides information about how students are performing in the areas of Reading, Writing, Mathematics, and Science.

Description of the Setting and the Participants

Research for the elementary level of this study was conducted in a suburban school district in the northeast region of the United States. The school has a population of approximately 500 students and is in Educational Reference Group B (ERG-B). According to the Connecticut State Department of Education (2009), ERGs were developed to compare groups of districts with similar characteristics, such as median family income, level of parents’ education, and a primary language other than English spoken in the home. ERG-A includes the state’s wealthiest communities, while ERG-I includes its poorest; ERG-B describes districts that are wealthy, though not among the state’s wealthiest communities (CSDE, 2005).

The participants in the elementary level of the study were seventeen (17) students, twelve (12) male and five (5) female. They were chosen because of their scores on the third grade DRP section of the Connecticut Mastery Test: each either scored below the third grade goal score or narrowly achieved it. The participants did not include any students in Special Education programs or English Language Learners.

The secondary study took place in a suburban school district in the northeastern part of the United States. The participants were high school sophomores enrolled in the reading intervention course. There were thirteen (13) participants, six (6) female and seven (7) male. The participants were of diverse ethnicities; six (6) were English Language Learners and seven (7) were identified as receiving Special Education services. The participants were heterogeneously grouped, and their learning needs varied: reading below grade level, inability to progress at a continuous rate, slow processing of material, lack of motivation, behavioral issues, and a lack of focus and/or attention. The results of the CAPT are used at the secondary level to assess the participants’ competency in the skill areas necessary for graduation and, for the purposes of this study, specifically Reading for Information and Response to Literature.

Procedure

The first step in carrying out the elementary level of this study was to closely examine the third grade DRP scores for the 2008-09 school year and identify students who either scored below the goal score or narrowly achieved it. The researcher identified students for the study. The Reading Specialist in the school trained paraprofessionals in literacy instruction so that they could work with students in small groups. More specifically, paraprofessionals were trained in reading strategies that teach students how to use information in the text to figure out its meaning. The students were then placed into three groups of three and two groups of four for small group instruction, based upon their homeroom classroom. The paraprofessionals worked with the students in small groups from the end of September until March (just before the CMTs were administered). The groups met four times per week for thirty minutes. The students took the CMT in March of 2010. The CMTs were scored by the State of Connecticut Department of Education, and the scores were sent to the schools by the end of the summer. The DRP scores of the individual participants were then analyzed by the Reading and Language Arts Consultant.

The participants at the secondary level were placed in a reading intervention class as freshmen based on their middle school CMT scores, specifically, not making goal. The class met five days a week for approximately 42 minutes each day. The researcher began each session with an interactive, high-interest read aloud, modeling strategies through think alouds and questioning. Participants responded to various comprehension strategies orally as a whole group, in partners, and in reading logs daily. The participants also engaged in independent, sustained, silent reading in a book at their level or interest. In addition, the participants practiced skill and drill in fluency, phonics, decoding skills, word recognition, vocabulary development, spelling, and comprehension in a scientifically validated computer program. Lastly, the researcher worked with small groups of participants on vocabulary instruction, summarization, main idea, or making connections, determining which skills the flexible groups needed to master at that time. Groups were ability-based, determined by scores achieved on a reading survey test that assessed reading achievement. Direct instruction was explicit, systematic, and scaffolded.

Results

In the elementary study, all of the participants’ DRP scores improved after the small group intervention. Tables 1 and 2 show the male and female DRP scores before and after intervention (see Appendix A). The goal score for the DRP in fourth grade is 54. Seventy-five percent of the male participants and eighty percent of the female participants achieved goal after the intervention. The average point increase on the DRP was 10.25 for male participants and 6 for female participants.
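The summary statistics above can be reproduced from raw pre/post score lists with a few lines of arithmetic. The scores below are invented placeholders (the actual scores appear in Appendix A, which is not reproduced here); only the goal score of 54 comes from the study.

```python
GOAL = 54  # fourth grade DRP goal score (from the study)

# Hypothetical pre/post DRP scores for four students (placeholder data).
pre_scores = [48, 50, 45, 52]
post_scores = [58, 60, 53, 62]

# Per-student gain, average gain, and share of students at or above goal.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
avg_gain = sum(gains) / len(gains)
pct_at_goal = 100 * sum(s >= GOAL for s in post_scores) / len(post_scores)

print(f"average gain: {avg_gain} points; {pct_at_goal:.0f}% at goal")
```

The same two computations (mean gain, percent at goal) over the real Appendix A scores yield the 10.25-point and 6-point averages and the 75%/80% figures reported above.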

The secondary study revealed that after forty weeks of the reading intervention class, eight of the thirteen students (62%) met or exceeded goal on the Response to Literature strand; the control group scored slightly higher. However, the number of students meeting goal was lower on the Reading for Information strand of the CAPT: three out of thirteen students (23%) met or exceeded goal, while 77% did not, and four students were only one point away from making goal. Results for the control group varied.
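The strand percentages reported above follow directly from the counts in the text; a brief sketch of that arithmetic:

```python
# Counts from the secondary study: 13 participants, with the number
# meeting or exceeding goal on each CAPT strand.
n_participants = 13
met_goal = {"Response to Literature": 8, "Reading for Information": 3}

# Percent meeting goal, rounded to the nearest whole percent.
pct_met = {strand: round(100 * n / n_participants)
           for strand, n in met_goal.items()}
print(pct_met)  # matches the 62% and 23% reported in the text
```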

Discussion

Because all of the participants in the elementary study increased their DRP scores, it is evident that small group instruction was an effective intervention for increasing students’ reading achievement. The small group instruction in the present study provided students with intense, explicit instruction in reading strategies. This type of intervention has been shown to be effective in several past studies (Ganske, Monroe, & Strickland, 2003; Woodward & Talbert-Johnson, 2009; Wasik, 2008; McIntyre et al., 2005). The present study provides further support for small group instruction as a form of quality instruction that supports struggling readers.

In addition to examining the effect of small group instruction on students’ reading comprehension achievement, the present study examined the effect that gender had on the DRP scores of the students receiving small group instruction. Research has indicated that, in general, girls perform better than boys in reading (Moss, 2000). This would suggest that girls would make more growth in their DRP scores after receiving the small group instruction; however, this is not consistent with the results of the present study. On average, male participants gained 4.25 more points on the DRP than female participants did. One possible reason for this outcome is the type of text used within the study. The participants were chosen based upon their DRP scores, and the DRP is a test that uses nonfiction text. Research has shown that boys tend to choose fact books and informational books when reading (Farris et al., 2009). Since males tend to prefer nonfiction text, this could account for their higher level of achievement in the present study. The males might have been more engaged during the small group instruction, which would lead to retaining more of what was taught during the intervention, resulting in higher achievement on the DRP test. According to Guzzetti et al. (2002), the majority of books read in classrooms are narrative texts, so most instructional time in reading is focused on fiction. Although a portion of classroom time is allotted to nonfiction, it is not given as much attention as fiction. In the present study, there were more male participants than female participants. Since most classroom instruction focuses on fictional text, males might not have been attentive in learning and applying the strategies taught in class.
Females could possibly have applied what they learned about reading fiction to their nonfiction reading during the initial DRP test. Since males might not have been as invested in the reading activities taking place in class, they might not have been as equipped with strategies to achieve goal on the initial DRP test. This could be a potential reason for the larger number of male participants in the present study. It could also explain why males outperformed females after being explicitly taught, during the small group instruction, how to use information in the text to figure out its meaning when reading nonfiction.

In the present elementary study, the students were grouped with other students from their homeroom class, but not by gender. Guzzetti et al. (2002) recommend that teachers form small groups by gender because doing so provides females with more opportunities to participate, and they feel safer participating than when groups are mixed. This could be another possible explanation of the results of the present study, particularly the higher point increase for males on individual DRP test scores after intervention.

White, Haslam, and Hewes (2006) carried out a large-scale evaluation of a scientifically validated computer program in Phoenix; the same intervention program was used in the researcher’s study at the secondary level. The intervention groups were low-achieving ninth graders from across the district who used this specific computer program for a full school year. At the end of the school year, the intervention groups scored 1.3 normal curve equivalents (NCEs) higher on the SAT than the control groups, with even larger positive effects for ELL students. However, one year later, as sophomores, the intervention groups and control groups had identical scores on the Arizona Instrument to Measure Standards reading test.
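Normal curve equivalents rescale percentile ranks onto an equal-interval scale (NCE = 50 + 21.06z), which is what makes a difference such as 1.3 NCEs meaningful as an average. The following is a sketch of the standard conversion, not code from the study:

```python
from statistics import NormalDist


def percentile_to_nce(percentile: float) -> float:
    """Convert a percentile rank (exclusive 0-100) to a Normal Curve
    Equivalent: NCE = 50 + 21.06 * z. The NCE scale matches percentile
    ranks at 1, 50, and 99 but has equal intervals in between."""
    z = NormalDist().inv_cdf(percentile / 100)
    return 50 + 21.06 * z


print(round(percentile_to_nce(50), 1))  # 50.0
print(round(percentile_to_nce(99), 1))  # ~99
```

Because NCEs are equal-interval, they can be averaged and differenced across groups in a way that raw percentile ranks cannot.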

A similar study was conducted in urban areas of Los Angeles, California. Mostly Hispanic students, half of whom were ELL and had been retained, received daily reading instruction through the use of a scientifically validated computer program. Again, the intervention groups made substantially greater gains on the reading portion of the SAT compared to their well-matched control group from across the district (Papalewis, 2004). On the contrary, an intervention group composed mostly of African American students from Little Rock, Arkansas, who received daily reading instruction through the use of a scientifically validated computer program did not perform as well as their well-matched control group on the reading portion of the Iowa Test of Basic Skills (ITBS) (Mims, Lowther, Strahl, & Nunnery, 2006).

Why did the intervention groups score significantly better on the reading portion of the SAT but not on the ITBS or the CAPT? Is the SAT designed in a way that aligns with this specific scientifically validated computer reading program? The intervention group scored significantly lower on the Reading for Information strand of the CAPT and slightly lower on the Response to Literature strand. Since the setting was the researcher’s direct work setting, the researcher noted that certain intervention participants were more intrinsically motivated to improve than others. Those who were motivated showed progress and increased achievement from their pre to post reading survey test. Given the rate of improvement of these motivated participants, their confidence may grow, and they may come to succeed in all facets of reading. They may not become the best readers in the general population, but their skills will have improved.

Limitations of Study

One limitation of the elementary study is the number of participants. There were 17 students (12 male and 5 female), and the small sample size makes it difficult to generalize to the entire population of students. In addition, while the paraprofessionals all received the same training in literacy instruction from the Reading Specialist, each paraprofessional’s educational background and prior literacy training is unknown. This could have had an impact on the quality of instruction delivered within the small groups.

There were several limitations to the secondary study. First, the size of the participant group was small. Second, the study focused on quantitative measures of reading; there are qualitative and correlational measures of the intervention program that were not captured by the CAPT. In addition, the scientifically validated computer program is designed to be effective with 90 minutes of daily use, whereas the intervention group received about 15-20 minutes daily. Finally, because the research site was the researcher’s “immediate work setting” (Creswell, 2003, p. 184), the researcher acknowledged that bias might have occurred with participants, since the classroom was shared by the researcher and a colleague.

Conclusion

The purpose of the study at the elementary level was two-fold. The primary purpose was to investigate the effect of small group instruction on students’ reading comprehension achievement. This study provided further support for the benefits of small group instruction and its positive effect on children’s literacy achievement. The secondary purpose was to examine the role that gender played in the literacy learning of the students in the small group instruction. Throughout this study, it became evident that the type of text used with students was a very important variable. It is important for teachers to try to maintain a balance between fiction and nonfiction text in the classroom. Keeping this in mind can assist teachers in making instructional decisions that help them meet the needs of all of their students.

At the secondary level, if students have difficulty decoding, understanding vocabulary, and comprehending texts, then their achievement in their academic content areas will most likely be impeded. Investigating the nature of this breakdown, focusing exclusively on reading, will help determine what factors are interfering in this area. Giving students effective feedback while conferring will assist them in understanding what is preventing them from being strategic readers and in seeing the possible strategies they can implement to begin remediating their difficulties. There are still many integral pieces of this puzzle to put together, but research has shown that intensive, scientifically validated reading programs do make a positive difference in achievement.

Recommendations

The elementary level study provides further support for the effectiveness of small group instruction. It is important for teachers to continue to utilize this method, especially in reading instruction, as a way to address students’ diverse needs. This study also highlights the importance of gender in reading instruction. The results reinforce the idea that boys tend to read more nonfiction, so it is essential for teachers to balance the amount of fiction and nonfiction used in the classroom to maintain the interest of the male population in their classrooms. Furthermore, this study leads teachers to consider the importance of explicitly teaching students that many of the comprehension strategies they use when reading fiction are the same strategies that can be used when reading nonfiction. This can assist teachers in maximizing their instructional time and satisfying the reading interests of both males and females. Finally, this study touches upon the idea that females are more willing to take risks and participate in group discussions when they are grouped by gender, which could be valuable to teachers when making decisions about instructional grouping.

Since the researcher cannot amend the time periods of the school day at the secondary level, the researcher recommends that the teacher maintain rigor and time-on-task. According to the National Center for Education Statistics, about 35% of undergraduates took remedial reading classes in 2000 (Christie, 2008). The interventions that are implemented need to be monitored for methodology, quantity, and progress, and in this type of program it is imperative that the focus be on students’ needs. The addition of a literacy coach could complement the Reading and Language Arts consultants and assist in meeting the many diverse needs of the remedial population across academic areas. Another recommendation is professional development for secondary teachers in adolescent literacy instruction. Many secondary level teachers may not have had instruction in teaching reading, but with sustained, systemic training and the tools necessary to address adolescent literacy, they will be better able to support literacy achievement.

The information gained from the studies at the elementary and secondary level in conjunction with previous studies that have been conducted in the area of reading intervention will ultimately assist educators in making instructional decisions that will help students progress towards achieving the district, state, and national standards set before them.

References

Anderson, R., Wilson, P., & Fielding, L. (1988). Growth in reading and how children spend their time outside of school. Reading Research Quarterly, 23, 285-303.

Aukrust, V.G. (2008). Boys’ and girls’ conversational participation across four grade levels in Norwegian classrooms: Taking the floor or being given the floor? Gender and Education, 20(3), 237-252.

Creswell, J. (2003). Research design: Qualitative, quantitative, and mixed method approaches, (2nd ed.). Thousand Oaks, CA: Sage Publications, Inc.

Christie, K. (2008). Can those tweens and teens read yet? Phi Delta Kappan, 89, 629-630, 703.

Connecticut State Department of Education (CSDE). (2005). News: Connecticut Department of Education, 2005 Connecticut Academic Performance Test results. Retrieved November 3, 2009 from http://www.sde.ct.gov/sde/lib/sde/PDF/PressRoomCAPT_PRESS_RELEASE05.pdf

Dwyer, C.A. (1975). Book reviews. American Educational Research Journal, 12(4), 513-516.

Farris, P.J., Werderich, D.E., Nelson, P.A., & Fuhler, C.J. (2009). Male call: Fifth-grade boys’ reading preferences. The Reading Teacher, 63(3), 180-188.

Foorman, B.R., & Torgesen, J. (2001). Critical elements of classroom and small-group instruction promote reading success in all children. Learning Disabilities Research & Practice, 16(4), 203-212.

Gallagher, K. (2003). Reading reasons: motivational mini-lessons for middle and high school. Portland, ME: Stenhouse Publishers.

Ganske, K., Monroe, J.K., & Strickland, D.S. (2003). Questions teachers ask about struggling readers and writers. The Reading Teacher, 57(2), 118-128.

Gill, S. (2008). The comprehension matrix: A tool for designing comprehension instruction. The Reading Teacher, 62(2), 106-113.

Guzzetti, B.J., Young, J.P., Gritsavage, M.M., Fyfe, L.M., & Hardenbrook, M. (2002). Reading, writing, and talking gender in literacy learning. Newark, DE: International Reading Association.

Harvey, S., & Goudvis, A. (2007). Strategies that work: Teaching comprehension for understanding and engagement (2nd ed.). Portland, ME: Stenhouse Publishers.

Hasselbring, T. (2002a). Read 180. New York, NY: Scholastic, Inc.

Keene, E.O., & Zimmermann, S. (2007). Mosaic of thought (2nd ed.). Portsmouth, NH: Heinemann.

Lynch, J. (2002). Parents’ self-efficacy beliefs, parents’ gender, children’s reader self-perceptions, reading achievement and gender. Journal of Research in Reading, 25(1), 54-67.

McIntyre, E., Petrosko, J., Jones, D., Powell, R., Powers, S., Bright, K., & Newsome, F. (2005). Supplemental instruction in early reading: Does it matter for struggling readers? The Journal of Educational Research, 99(2), 99-112.

Menzies, H.M., Mahdavi, J.N., & Lewis, J.L. (2008). Early intervention in reading. Remedial and Special Education, 29(2), 67-77.

Mims, C., Lowther, D., Strahl, J., & Nunnery, J. (2006). Little Red School District Read 180 evaluation: Technical report. Memphis, TN: The University of Memphis Center for Research in Educational Policy.

Moss, G. (2000). Raising boys’ attainment in reading: Some principles for intervention. Reading, 101-106.

Papalewis, R. (2004). Struggling middle school readers: Successful, accelerating intervention. Reading Improvement, 41(1), 24-37.

Slavin, R., Cheung, A., Groff, C., & Lake, C. (2007). Effective reading programs for middle and high schools: A best-evidence synthesis. Reading Research Quarterly, 43, 290-322.

Topping, K.J., Samuels, J., & Paul, T. (2008). Independent reading: The relationship of challenge, nonfiction and gender to achievement. British Educational Research Journal, 34(4), 505-524.

Wasik, B. (2008). When fewer is more: Small groups in early childhood classrooms. Early Childhood Education, 35, 515-521.

Weaver, C. (2002). Reading process and practice (3rd ed.). Portsmouth, NH: Heinemann.

What Works Clearinghouse. (2009). Intervention: Read 180. Washington, DC: US Department of Education Institute of Education Sciences. Retrieved December 10, 2009 from http://ies.ed.gov/ncee/wwc/reports/adolescent_literacy/Read180/index.asp

White, R., Haslam, M., & Hewes, G. (2006). Improving student literacy in the Phoenix Union High School District 2003-04 and 2004-05: Final report. Washington, DC: Policy Studies Associates.

Woodward, M.M., & Talbert-Johnson, C. (2009). Reading intervention models: Challenges of classroom support and separated instruction. The Reading Teacher, 63(3), 190-200.

Improving Reading Comprehension Across the Content Areas: A Language Arts and Social Studies Partnership

Christine Ceraso Parisi, Reading Consultant, Thomas Edison Middle School

Abstract

The purpose of this action research study was to answer specific questions about student achievement in the area of reading comprehension and writing. The focus question of the study was, “Will students improve reading and writing skills if both language arts and social studies teachers implement the same strategies in their content area classrooms for the period of one school year?” Formal reading comprehension and writing instruction in content area classrooms rarely takes place beyond the elementary school level. Students are expected to have mastered needed reading comprehension and writing skills and to have transferred those skills to much more complicated content area readings prior to entering middle school. Understanding content area subject concepts requires application of a variety of reading comprehension and writing strategies to make sense of text. Mathematics reading depends upon student ability to understand precise vocabulary meanings, science reading engages students in visual understanding of graphics, and social studies enlists student understanding of author bias (Shanahan & Shanahan, 2008). Content area teachers should model how to read their specific content area text.

Specific strategies applied in language arts and social studies classrooms during this study included the use of leveled non-fiction texts and readings and research-based instructional strategies such as summarizing, creating relevant connections, and making inferences. Students were introduced to a variety of note taking techniques, graphic organizers, and summarization formats. Results of the study confirmed that strategic reading comprehension and writing instruction delivered congruently in social studies and language arts enhances student comprehension.

Introduction

Understanding how to instill reading comprehension and writing strategies in all students is paramount to student success. RAND (2002) stated that research in the area of reading comprehension and writing strategies is vital to the advancement of literacy, and reading comprehension and writing affect all subject areas. The intent of this action research study was to explore how a team of middle school educators implemented developmentally appropriate reading comprehension and writing strategies to enhance student comprehension and writing in grades 6-8 language arts and social studies classrooms. Shanahan and Shanahan (2008) reported that the directives of No Child Left Behind (NCLB) require higher-level literacy skills, yet students at the secondary level are not proficient in reading comprehension or writing. The initial draft of NCLB in 2001 focused on interventions for elementary students who were not successful in reading comprehension and made minimal mention of reading interventions at the secondary level (Jackson, 2009; Santa, 2006). NCLB is changing reading instruction in content area classrooms (Dudden, 2010). Anfara and Schmid (2009) contended that continued teacher-directed instruction and rote memorization of content area concepts are not producing the needed reading comprehension and writing results. Analysis of current instruction and of research in the area of reading comprehension and writing revealed a need to design new instructional models that engage all learners, engagement that is paramount to the reading comprehension process (RAND, 2002). Reading comprehension and writing strategies taught in language arts and social studies should support the varying skills and aptitudes of all middle school students (National Middle School Association, 1995).

Middle school students are required to comprehend extensive reading materials in science, mathematics, social studies, and language arts in a short span of time. To successfully synthesize all required information, students need to apply a variety of metacognitive and written strategies (Tovani, 2000). Reading comprehension and writing strategies are a necessity in all middle school content area classrooms. Content area materials are difficult and tedious for many middle school students to comprehend.

Many middle school teachers focus instruction on subject content rather than on the discrete skills needed to access content area information. To master subject content, students need a variety of informational texts and must be self-driven and responsible for their learning (Information Literacy Competency Standards for Higher Education, 2000; RAND, 2002). Many middle school teachers do not have experience teaching specific reading comprehension and writing skills. Insufficient teacher knowledge of these skills, coupled with difficult reading materials, creates a need for change in middle school content area instructional practice. Studies by Alger, Hall, Osborne, and Spencer (as cited by Phillips, Bardsley, Bach, & Gibb-Brown, 2009) determined that content area teachers would like to teach reading comprehension and writing strategies but are unsure of where to begin, and presume that direct reading comprehension and writing instruction is limited to elementary school classrooms, is part of the language arts curriculum, or is the sole responsibility of the language arts teacher. Content area teachers surmise that specializing in a particular content area subject negates the necessity to teach reading comprehension and writing strategies (Bintz, 2011; Wilson, Grisham, & Smetana, 2009). Fang and Schleppegrell (2010) disagreed, purporting that middle school students need more advanced reading comprehension and writing skills, time to practice accessing information, and exposure to high-interest leveled reading materials in specialized content areas. Perie, Grigg, and Donahue (as cited by Fang and Schleppegrell, 2010) presented a variety of reports on reading comprehension strategies suggesting that a vast number of middle and high school students lack needed reading comprehension skills. Perie et al. (as cited by Fang and Schleppegrell, 2010) contended that over 8 million students in grades 4-12 cannot adequately comprehend content-specific texts.

Reading and writing are learned processes in which the reader creates meaning from the printed word and responds to the reading based on the ability to make connections. Students must be provided with a variety of leveled texts and ample opportunities to practice strategic reading and writing skills. A review of literature in the area of reading comprehension and writing disclosed a variety of approaches related to instructional practice in both areas, but studies were not consistent in the processes of reading comprehension and writing (McKeown, Beck, & Blake, 2009; Anfara & Schmid, 2007). There have not been studies to develop a universal program for writing and reading comprehension in the middle school content areas (Mead, Burke, Lanning, & Mitchell, 2010; Anfara & Schmid, 2009; Slavin, Cheung, Groff, & Lake, 2008). Limited information has been found on guiding instruction across the curriculum in the area of reading comprehension prior to middle school (Reed, 2009; Fang et al., 2008; Parris & Block, 2007).

Little is known about how to implement an approach to reading comprehension instruction across content area classrooms, provide student opportunities to practice reading comprehension strategies, and transfer information beyond elementary school (Reed, 2009). There are a limited number of existing studies related to effective implementation of reading comprehension and writing strategies at the middle school level (Allington, 2009; RAND, 2002). Middle school content area teachers need assistance in teaching reading comprehension and writing strategies so that students are able to access and make sense of specific content area concepts. Students need purposeful direction in order to identify the important information embedded in content area texts. While reading, students integrate their prior knowledge, schema, and questions to make predictions or inferences. Active engagement with content area text, explicit attention to how reading comprehension strategies promote understanding, and effective reading comprehension and writing strategies are the means through which students assimilate content area concepts (Freedman & Carver, 2007).

Time constraints in traditionally taught content area classrooms limit opportunities for teachers to provide rich text related to the content, which in turn limits student selection of stimulating reading materials (Dudden, 2010). Scheduling reading comprehension and writing instruction into the middle school content area curriculum is difficult due to short instructional periods (Fang & Wei, 2010; Sanacore & Palumbo, 2010). Middle school students should be provided with time to actively read in order to continue developing reading skills (Fang & Wei, 2010; Sanacore & Palumbo, 2010).

Content area teachers are facilitators who model and guide students through the metacognitive processes of reading comprehension and writing. Dudden (2010) observed a lack of opportunities in content area classrooms to include readings that connect student learning to the world. Teachers who teach reading comprehension and writing strategies in content area classrooms understand that the goal is to ensure all students become active and metacognitive readers (Wilson, Grisham, & Smetana, 2009). The process of involving content area teachers in teaching reading comprehension and writing strategies is a gradual one (Fang & Wei, 2010). Guiding students in understanding content area text is the responsibility of all content area teachers (McKeown, Beck, & Blake, 2009). Practice and instruction of reading comprehension and writing strategies enhance the curriculum of content area teachers rather than adding more instructional content (Joseph, 2010).

Method

This study examined a cooperative team approach to reading comprehension in the areas of language arts and social studies. Implementation of reading comprehension and writing strategies, based on student data reviewed by the content area team, was the basis for determining the growth of student reading comprehension and writing in both content areas. McKeown, Beck, and Blake (2009) noted that providing students with reading comprehension and writing strategies facilitates comprehension. Each content area teacher involved in the study modeled specific reading comprehension and writing strategies and provided ample class time for students to practice them.

Working as a team to analyze student data guided the teachers to identify specific weaknesses in reading comprehension and the writing process. The analysis of student data prompted teachers to create targeted remediation strategies and incorporate tailored strategic instruction simultaneously in both social studies and language arts content area instruction for the period of one school year.

Population and Sampling

The study took place in an urban middle school in the northeast United States. The population consisted of 36 middle school teachers in grades 6-8. A population constitutes “…individuals who have the same characteristic” (Creswell, 2005, p. 145). Jackson (2008) explained that the population is the overall group to which the study will generalize. The population in this study was segmented into three teams of teachers at each grade level 6-8, a total of nine content area teams. Each team was comprised of a language arts teacher, a science teacher, a social studies teacher, and a mathematics teacher. Team teachers serviced approximately 80-90 students. The population followed a typical middle school schedule of 45-minute classes with students moving to each content area classroom. Instructional learning environments outside of language arts classrooms were mainly teacher-directed.

The sample consisted of the language arts and social studies teachers from the population in grades 6-8. The sample resembled the population in its adherence to the configuration of the middle school’s procedures and programs. Students were expected to self-select and apply reading comprehension and writing strategies learned in previous grades to comprehend and respond to a variety of reading materials such as teacher-selected web sites, district-chosen textbooks and readings, and teacher-assigned readings.

Procedure

Each language arts and social studies teacher was provided with an outline of the purpose of the action research study. The first step in the implementation of the study was to provide the sample of teachers with professional development. All teachers received training in Marzano’s teaching strategies and were supplied with a copy of Marzano, Pickering, and Pollock’s book Classroom Instruction that Works. Teachers were also provided with a day of in-service in the area of reading comprehension and writing strategies across the curriculum, delivered by the building reading consultants. Phillips, Bardsley, Bach, and Gibb-Brown (2009) professed that reading and writing must be taught in every grade, yet content area teachers are not trained in how to teach reading comprehension and writing strategies or how to implement the strategies into their content area curriculum. Teacher professional development was specifically designed to address content area reading comprehension and writing methods as well as discussion comparing successful methods to encourage teachers to re-evaluate student needs (Friedman, Harwell, & Schnepel, 2006).

Following professional development, language arts and social studies teachers were asked to develop a plan of action in the area of reading comprehension and writing related to their grade level. It was important that all content area teachers at each grade level worked with the same reading comprehension and writing strategies to help students access content area information. Identifying which strategies were most important and transferable to all content areas began with charting student progress in each language arts and social studies classroom. Analyzing student data to inform instruction was the foundation to develop specific strategies to increase middle school students’ reading comprehension and writing in all content areas as well as show significant gains on state mandated tests.

Each grade, based on data collected by the team, chose two areas of weakness to be the focus of the research plan. Grade 6 chose making connections to text, self, and world at an evaluative level of thought. Teachers also chose to develop summarizing of non-fiction readings to be applied in persuasive writing. Grades 7 and 8 chose to focus on persuasive writing and summarization. Both grades used a variety of sources, similar graphic organizers, conferencing, notetaking and vocabulary development to enhance reading comprehension and persuasive writing instruction. Non-fiction readings were assigned to students to practice determining important information and applying information to persuasive writing.

Prior to whole class instruction, students were given a pre-test to determine proficiency in each grade level goal. Data provide teachers who possess limited understanding of literacy instruction an opportunity to implement successful reading comprehension methodologies and document student progress in literacy understanding (Topping, Wenrich, & Hoffman, 2006). Successful reading and writing instruction begins with building student background knowledge and vocabulary, explicit instruction, and modeling of note taking, summarizing, and identifying text structure (Fang & Wei, 2010).

Results of grade level pre-tests determined strategy instruction. All teachers provided the following direct instruction and modeling of chosen strategies and skills: highlighting key vocabulary, main ideas and supporting details in non-fiction readings, persuasive writing graphic organizers, elaboration of thoughts, paraphrasing, and understanding and responding to open-ended questions. Post evaluation determined which skills students mastered after students selected readings at their instructional and interest levels.

Results

Based on data collected prior to instruction, teachers identified the number of students scoring at a proficient or higher instructional level and set targets for the number of students to reach that level at the conclusion of the study. Data were collected, charted, and discussed at weekly team meetings. Pre- and post-test results for each grade level’s goals are presented in Tables 1, 2, and 3.

Table 1: Grade 6

Making connections goal (based on pre-test): students scoring proficient or higher will increase from 31% to 70%.
Result post instruction: 51% of students reached proficiency or higher.

Summarization goal (based on pre-test): students scoring proficient or higher will increase from 35% to 70%.
Result post instruction: 73% of students reached proficiency or higher.

Table 2: Grade 7

Persuasive writing goal (based on pre-test): students scoring proficient or higher will increase from 45% to 60%.
Result post instruction: each team met the goal, with 82%, 69%, and 86% of students reaching proficiency or higher.

Summarizing goal (based on pre-test): students scoring proficient or higher will increase from 44% to 80%.
Result post instruction: 87% of students reached proficiency or higher.

Table 3: Grade 8

Persuasive writing goal (based on pre-test): students scoring proficient or higher will increase from 56% to 80%.
Result post instruction: each team met the goal, with increases from 56% to 93%, 62% to 93%, and 82% to 91%.

Summarizing goal (based on pre-test): students scoring proficient or higher will increase from 17.2% to 60%.
Result post instruction: each team increased the percentage of students reaching proficiency or higher, with scores moving from 11% to 54%, 23% to 28%, and 17% to 62%.

Discussion

Teachers, through cooperative team planning, begin to create an atmosphere for safe learning and critical thinking (Fang & Schleppegrell, 2010). Students need to be taught reading comprehension and writing skills in order to access and understand content area concepts. Accessing content information is a critical component of student learning and success (Brozo, 2010).

Teachers at the middle school level specialize in the content they teach and are typically disengaged from reading comprehension and writing instruction (Fang & Wei, 2010). Educational methodologies in middle school classrooms transition from elementary school process learning to rote learning of specific content matter knowledge and test preparation (Palumbo & Sanacore, 2009; Dudden, 2010; Joseph, 2010). Results of this study indicated that middle school teachers working as a team and presenting the same strategies to students congruently in both language arts and social studies were successful in increasing student learning. Providing teachers with on-site professional development related to strategic reading comprehension and writing instruction, and ensuring opportunities for teachers to meet weekly as a team to evaluate student progress and design specific instruction, contributed to the increase in student scores on post-instruction assessments. Meeting as a team and choosing specific reading comprehension and writing strategies to incorporate in language arts and social studies instruction support Kay’s (2009) findings that skills should be taught “comprehensively, intentionally, and purposefully” (p. 45) to ensure student success.

Summary

Hinde et al. (2007) indicated that more time devoted to teaching content does not guarantee comprehension; teaching students to read and write strategically, however, does ensure content comprehension. In 2010, both the Common Core State Standards Initiative (CCSSI) and Pitcher, Martinez, Dicembre, and McCormick proposed a shared responsibility for teaching reading comprehension across content area disciplines. Reed (2009) and Gwertz (2009) asserted that direct instruction of reading comprehension and writing strategies is the responsibility of all content area teachers and is easily accomplished when all teachers collaborate as a team. Content area teachers in this study documented increases in student achievement while working as a team with a common strategic focus.

Dudden (2010) maintained that teaching to state standards with student self-selection of books is research-based and beneficial to all students because students read and respond to texts of interest correlated to the content topic, which engages them in independent reading and writing. Dudden (2010) also espoused that student academic success relies on authentic, meaningful text rather than content area textbooks written above middle school grade level. Teachers involved in this study provided students with a variety of leveled reading materials related to topics studied in social studies. Teachers noted that students became actively and purposefully involved in applying reading comprehension and writing strategies, and many challenged themselves to read more difficult text.

Results of this study showed that specific professional development and team collaboration through data collection increased student achievement in the areas of reading comprehension and writing. Teachers need to be provided with relevant professional development at the beginning of the school year as well as periodic collaboration with reading consultants to clarify questions or concerns in the area of reading comprehension and writing. Teachers must also be provided time to collaborate as a team using data collection, have access to a variety of content-specific leveled texts, and schedule time in classroom instruction for students to practice and share reading comprehension and writing strategies with others.

Content area teachers who understand the metacognitive process related to reading comprehension and writing prepare content instruction with literacy at the forefront (Brozo, 2010). According to Ness (as cited by McCoss-Yergian and Krepps, 2010), researchers such as Alvermann, Biancarosa and Snow, Kamil, Heller and Greenleaf, and Torgesen, Houston, Rissman, Decker, and Roberts have endorsed reading comprehension and writing instruction as a critical method for improving student understanding in all content area classrooms. Student interaction with content area textbooks depends upon the application of metacognitive strategies (Wilson, Grisham, & Smetana, 2009). Infusion of reading comprehension and writing instruction in all content area classrooms will enhance student learning (Wilson et al., 2009). Ruday (2009) reported an increase in student interest in reading and writing after direct instruction of specific strategies and practice of metacognitive thinking. McCoss-Yergian and Krepps (2010) also noted that the most effective means of increasing student comprehension in content area classrooms is direct instruction of specific reading comprehension and writing strategies across the curriculum. Research by educational leaders Allington, Boyles, Bennett, Daniels and Zemelman, Harvey and Goudvis, and Keene and Zimmermann supports the methods applied in this study, and the results provide a basis for collaboration among all content area teachers to teach specific reading comprehension and writing strategies in language arts, social studies, science, and mathematics in middle school classrooms.

References

Allington, R. (2009). What really matters in response to intervention: Research based designs. New York: Pearson.

Anfara, V., & Schmid, J. (2007, May). Defining the effectiveness of middle grades teachers. Middle School Journal, 54-62. Retrieved from EBSCOhost.

Bintz, W. (2011, January). “Way-in” books encourage exploration in middle grades classrooms. Middle School Journal, 42(3), 34-45. Retrieved from EBSCOhost.

Brozo, W. (2010, October). The role of content literacy in an effective RTI program. The Reading Teacher, 64(2), 147-150. doi: 10.1598/RT.64.2.11

Common Core State Standards Initiative (2010). http://www.corestandards.org

Creswell, J. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Pearson.

Dudden, K. (2010, September). Research for the classroom: Review of Readicide by Kelly Gallagher. English Journal, 100(1). Retrieved from ProQuest.

Fang, Lamme, Pringle, Patrick, Sanders, Zmach, Charbonnet & Henkel (2008, December). Integrating reading into middle school science: What we did, found, and learned. International Journal of Science Education, 30(15), 2067-2089. doi: 10.1080/09500690701644266

Fang, Z., & Schleppegrell, M. (2010, April). Disciplinary literacies across content areas: Supporting secondary reading through functional language analysis. Journal of Adolescent and Adult Literacy, 53(7), 587-597.

Fang, Z., & Wei, Y. (2010). Improving middle school students’ science literacy through reading infusion. The Journal of Educational Research, 103(4), 262-273. Retrieved from ProQuest.

Friedman, M., Harwell, D., & Schnepel, K. (2006). Effective instruction: A handbook of evidence-based strategies. Columbia, SC: The Institute For Evidence-Based Decision- Making In Education.

Gwertz, C. (2009). Teachers said to need better training. Education Week, 29(11), 8. Retrieved from EBSCOhost.

Hinde, E., Popp, S., Dorn, R., Ekiss, G., Mater, M., Smith, C., & Libbee, M. (2007). The integration of literacy and geography: The Arizona geoliteracy program’s effect on reading comprehension. Theory and Research in Social Education, 35(15), 343-365. Retrieved from EBSCOhost.

Information literacy competency standards for higher education. (2000). Retrieved from http://www.acrl.org/ala/mgrps/divs/acrl/standards/standards.pdf

Jackson, A., & Davis, A. (2000). Turning points 2000: Educating adolescents in the 21st century. New York: Teacher’s College Press.

Jackson, A. (2009, May). New middle schools for new futures, Middle School Journal, 40(5), 6-9. Retrieved from ProQuest Central.

Joseph, N. (2010). Metacognition needed: Teaching middle and high school students to develop strategic learning skills. Preventing School Failure, 54(2), 99-102. Retrieved from EBSCOhost.

Kay, K. (2009, May). Middle schools preparing young people for the 21st century life and work. Middle School Journal, 40(5), 41-45. Retrieved from ProQuest.

Marzano, R., Pickering, D., & Pollock, J. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development.

McKeown, M., Beck, I., & Blake, R. (2009). Reading comprehension instruction: Focus on content or strategies? Perspectives on Language and Literacy, 28-32. Retrieved from ProQuest.

McCoss-Yergian, T., & Krepps, L. (2010). Do teacher attitudes impact literacy strategy implementation in content area classrooms? Journal of Instructional Pedagogies, 4, 1-18. Retrieved from EBSCOhost.

Mead, L., Burke, K., Lanning, L., & Mitchell, J. (2010, Fall). Explicit strategy instruction, learning-style preferences, and reading comprehension of struggling readers. CARR Reader, 7, 15-24.

National Middle School Association. (1997). What current research says to the middle level practitioner. Columbus, OH: National Middle School Association.

No Child Left Behind [NCLB] Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).

Parris, S., & Block, C. (2007). The expertise of literacy teachers. International Reading Association,80(7), 582-596. doi: 10.1598/JAAL.50.7.7

Pitcher, S., Martinez, G., Dicembre, E., & McCormick, M. (2010). The literacy needs of adolescents in their own words. Journal of Adolescent and Adult Literacy, 53(8), 637-644. doi: 10.1598/JAAL.53.8.2

Philips, D., Bardsley, M., Bach, T., & Gibb- Brown, K. (2009). “But I teach math!” The journey of middle school mathematics teachers and literacy coaches learning to integrate literacy strategies into the math instruction. Education, 129(3), 467-472. Retrieved from ProQuest.

RAND (2002). Reading for understanding: Toward an r & d program in reading comprehension. Santa Monica, CA: RAND

Reed, D. (2009). A synthesis of professional development on the implementation of literacy strategies for middle content area teachers. Research in Middle Level Education, 32(10), 1-12. Retrieved from EBSCOhost.

Ruday, S. (2009). Improving students’ higher order thinking skills: popular culture in the reading workshop. The Virginia English Bulletin, 58(2), 8-14.Retrieved from EBSCOhost.

Sanacore, J., & Palumbo, A. (2010). Middle school students need more opportunities to read across the curriculum. The Clearing House, 83(5), 180-185. Retrieved from ProQuest.

Santa, C. (2006, March). A vision for adolescent literacy: Ours or theirs? Journal of Adolescent & Adult Literacy, 49(6), 466- 476. Retrieved from EBSCOhost.

Shanahan, T., & Shanahan, C, (2008). Teaching disciplinary literacy to adolescents: Rethinking content-area literacy. Harvard Educational Review, 78(1), 40-59. Retrieved from EBSCOhost.

Slavin, R., Cheung, A., Groff, C., & Lake, C. (2008). Effective reading programs for middle and high schools: A best-evidence synthesis. Reading Research Quarterly. Retrieved from EBSCOhost.

Topping, D., Wenrich, J., & Hoffman, S. (2007). Three views of content-area literacy: Making inroads, marketing it inclusive , and making up for lost time. College Reading Association Yearbook, (28). 157-169. Retrieved from Education Research Complete database.

Tovani, C. (2000). I read it, but I don’t get it: Comprehension strategies for adolescent readers. Portland, ME: Stenhouse Publishing.

Wilson, N., Grisham, D., & Smetana, L. (2009). Investigating content area teachers’ understanding of a content literacy framework; A yearlong professional development initiative. Journal of Adolescent and adult Literacy, 52(8), 708- 716. doi: 10.1598/JAAL.52.8.6 25

A Status Report on SRBI in Connecticut Public Schools

Principal Investigators: Dr. Diana Sisson and Dr. Betsy Sisson
Connecticut Association for Reading Research

Abstract

The purpose of this study was to investigate the current status of SRBI in Connecticut public schools with respect to four research questions: 1) What are educators’ perceptions of SRBI? 2) How familiar are they with SRBI principles and practices? 3) What are their beliefs regarding its implementation and sustainability in their school systems? 4) What professional development resources and training have they previously received, and what resources and training do they believe are integral to the success of SRBI in Connecticut? A mixed-methods design utilized a questionnaire survey to collect responses from a sample group of 200 educators representing 64 school systems, including classroom teachers, reading educators, instructional support personnel, building and district administrators, and independent consultants. The quantitative research employed descriptive statistics garnered from the Likert scale items included in the instrument, while the qualitative research focused on the embedded open-ended items. Findings suggested that participants supported the philosophy and rationale behind SRBI but harbored reservations about its implementation and sustainability. Key concerns centered on the importance of familiarizing all faculty and administrators with SRBI, ongoing professional development to facilitate the transition to this new model, the time demands associated with such intensive services, the staffing needed to provide quality interventions (including the importance of ensuring that those working with the neediest students are certified and trained to offer interventions), resources to meet the needs of diverse student populations, scheduling issues for both students and educators, and a prevailing theme focusing on a perceived lack of an in-depth, comprehensive understanding of data analysis.

Policy Precedents to the SRBI Model

Response to Intervention (RtI) is the culmination of over three decades of federal involvement in special education services in this nation. Beginning with the Education for All Handicapped Children Act of 1975 (re-codified as the Individuals with Disabilities Education Act or IDEA), this legislation ensured appropriate public education for students with disabilities and access to nondiscriminatory evaluation procedures. From the outset, controversy festered over the use of the IQ discrepancy model as the primary diagnostic procedure. Although the law was reauthorized in 1977, 1983, 1986, and 1990, the discrepancy model remained untouched. By the time of the next reauthorization, IDEA 1997 emphasized regular education interventions and a problem-solving model to determine eligibility for special education services. It also identified thirteen classifications of student disability, from which learning disabled (LD) emerged as the predominant category, with 52% of students receiving special education services in the United States categorized as LD. Research findings, however, indicated that 52% to 70% of school-identified LD students failed to meet state or federal guidelines for this classification. These numbers prompted G. Reid Lyon of the National Institute of Child Health and Human Development to suggest that “learning disabilities have become the sociological sponge to wipe up the spills of general education” (Gresham, 2001, p. 1). Despite the implications of these statistics for special education law, the identification process remained ambiguous and inconsistent. This was addressed in IDEA 2004, which recommended simplifying the identification process and incorporating students’ responses to scientifically-based instruction as part of the qualification criteria for classification. This “response to intervention” was defined by the National Association of State Directors of Special Education (NASDSE) as

a practice of providing high-quality instruction and interventions matched to student need, monitoring progress frequently to make decisions about changes in instruction or goals and applying child response data to important educational decisions. (NASDSE, 2006, p. 3)

This paradigm shift in special education regulations has had a profound impact on American schools, as the “prevalence of significant reading disability in children is 17-20% (1 in 5), while more than 33% (1 in 3) struggle to learn to read” (Greenwood, Kamps, Terry, & Linebarger, 2007, p. 73), with current statistics indicating that “12%-14% of students in U.S. schools receive special education services” (Hall, 2008, p. 23). A fundamental intent of RtI is to decrease the number of students in special education by perhaps 70% (Lyon, Fletcher, Shaywitz, Shaywitz, Torgeson, Wood, Schulte, & Olson, 2001). Such a significant decrease in students receiving special education services would have a considerable effect on the federal government, as the national cost of special education services is predicted to soon total $80 billion annually (Burns & Gibbons, 2008) for the 6.5 million children currently identified with disabilities (Collier, 2010).

Addressing these long-standing issues, IDEA 2004 contained three central elements: use of scientifically-based reading instruction, evaluation of how students respond to interventions, and employing data to inform decision making (Brown-Chidsey & Steege, 2005). In addition, most RtI models incorporated a multi-tier prevention system.

In this way, RtI has two interconnected goals: (1) to identify at-risk students early so that they may receive more intensive prevention services prior to the onset of severe deficits and disability identification and (2) to identify students with LD who are repeatedly unresponsive to validated, standardized forms of instruction and instead require individualized, data-based instruction. (Fuchs, Fuchs, & Vaughn, 2008, p. 2)

In 2006, the state of Connecticut created an advisory panel charged with the task of reviewing RtI research and developing an implementation framework for the state’s schools. It was during this time that the nationally recognized Response to Intervention (RtI) model was designated in Connecticut as Scientific Research-Based Interventions (SRBI) because that language was contained in both No Child Left Behind (NCLB) and IDEA regulations and was further intended to “emphasize the centrality of general education and the importance of using interventions that are scientific and research based” (Connecticut State Department of Education, 2008, p. 4). More specifically, “RtI models are dependent on interventions in which evidence is available to attest to their effectiveness” (Costello, 2008, p. 4), while the SRBI model is not held to such parameters. Key elements of Connecticut’s SRBI model included the following:

  1. Core general education curricula that are comprehensive in nature
  2. Wide-ranging academic and behavioral support systems
  3. Positive school climate
  4. Research-based instructional strategies
  5. Differentiated instruction for all students
  6. Universal assessments
  7. Early interventions
  8. Data-driven decision making
  9. Continuum of support throughout the three tiers
  10. Common formative assessments

As a cohesive model, SRBI was designed to provide a quality education for all students and close the achievement gap that has persisted in Connecticut for a number of years. The current study sought to investigate the status of its implementation within public school systems across Connecticut, delving into educators’ perceptions about its effectiveness in individual districts and schools.

Method

Design

A descriptive research study employed a mixed-methods design, “collecting, analyzing, and mixing both quantitative and qualitative data in a single study” (Creswell & Plano-Clark, 2007, p. 5), to investigate educators’ experiences and perceptions of SRBI implementation in the state of Connecticut. The design was carried out through a questionnaire survey comprising numeric Likert scale items as well as open-ended queries created to elicit the greatest depth of responses from a large sample group.

Instrumentation

Developed from a review of the relevant literature pertaining to RtI as well as prior research studies investigating implementation in state-wide models, the questionnaire survey consisted of 30 items related to the sample group’s demographic information as well as to their experiences and perceptions of the current status of SRBI implementation. The items were formatted in a five-point Likert scale which could be numerically measured. In addition to this quantitative component in the study, open-ended queries were included in the survey in order to gain insights through qualitative means. The items for the instrument were developed and reviewed by multiple educational experts to ensure relevance, application to the field, content and construct validity, and bias analysis.

Participants

A total of 200 public school educators (grades K-12) were selected through nonprobability sampling. Educators – including classroom teachers, reading educators, instructional support personnel, building and district administrators, and independent consultants – were approached to participate during professional development events sponsored by the Connecticut Association for Reading Research (CARR) as well as the Connecticut Reading Association (CRA) and its local reading council affiliates.

Those who participated were asked to complete a demographic profile contained within the questionnaire survey. Table 1 presents a delineation of the sample group.

Table 1: Participant Characteristics

Descriptor                                          Frequency    Percent

Current Position
  Classroom Teacher                                 71           35.5%
  Reading Educator                                  94           47.0%
  Instructional Support Personnel                   14           7.0%
  Building Administrator                            8            4.0%
  District Administrator                            5            2.5%
  Independent Consultant                            5            2.5%
  Missing                                           3            1.5%

Grades Primarily Served
  Elementary School (K-5)                           117          58.5%
  Middle School (6-8)                               44           22.0%
  Elementary/Middle School (K-8)                    9            4.5%
  High School (9-12)                                19           9.5%
  K-12                                              8            4.0%
  Missing                                           3            1.5%

School Identification
  Urban                                             48           24.0%
  Suburban                                          117          58.5%
  Rural                                             26           13.0%
  Missing                                           9            4.5%

Number of Professional Development Trainings Pertaining to SRBI
  Never                                             16           8.0%
  1-2                                               57           28.5%
  3-5                                               73           36.5%
  6-9                                               38           19.0%
  10 or more                                        13           6.5%
  Missing                                           3            1.5%

Note. Those fields listed as “Missing” refer to the number of participants who failed to complete that particular survey item.

Data Collection

The principal investigators of the study developed a questionnaire survey to examine SRBI implementation and sustainability in Connecticut. The sample group was representative of educators in attendance at a series of professional development events held during the 2010-2011 academic year, including the following: 1) CARR-sponsored events, 2) the CRA Conference, 3) the CRA Leadership Conference, 4) SRBI Lecture Series events co-sponsored by CRA, CARR, and local reading councils, and 5) local reading council events. During these events, attendees were invited to complete the survey questionnaire and share their thoughts on SRBI in their respective school systems.

Data Analysis

Once research in the field was completed, the sample included 200 participants representing 64 public school districts. The survey instrument itself was divided into five sub-scales: Research Participant Demographic Profile, Perceptions of SRBI, Familiarity with SRBI Principles and Practices, Implementation and Sustainability, and Professional Development Resources. Descriptive statistics were used to summarize the demographic data obtained from the first sub-scale. Both quantitative and qualitative methods were utilized to analyze the remaining sub-scales.

The quantitative research consisted of 30 statements comprising 5 items relating to the construction of a demographic profile of the participants and 25 items using a 5-point Likert scale to determine the level of agreement or disagreement with key components of the SRBI model. Descriptive statistics, including calculations for item frequencies and chi-square tests, were conducted on each of the demographic variables as well as on those items pertaining to participant statements to determine relationships within the data. The qualitative data, consisting of four constructed-response items found in Sub-Scales II, III, IV, and V, asked participants to expand on their Likert-scale responses. To complete an organized analysis of the findings, the components of Miles and Huberman’s Interactive Model of Data Analysis (1994) were employed, following a process-oriented approach to qualitative research: data reduction, data display, and conclusion drawing and verification. In effect, the data reduction of the participants’ constructed responses took place through coding comments and discerning themes representative of the sample group’s responses. The data were then systematically displayed, organizing the respondents’ constructed responses graphically in matrices and charts so that conclusions could be drawn, verified, and validated as representative of the sample.
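The item-frequency and chi-square calculations described above can be sketched in a few lines of Python. This is a minimal illustration only: all response values and the contingency table below are invented for the example and do not reproduce the study’s actual data.

```python
from collections import Counter

# Invented 5-point Likert responses for one survey item
# (1 = strongly disagree ... 5 = strongly agree); illustration only.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 5, 4, 4, 2]
freq = Counter(responses)
percents = {k: round(100 * v / len(responses), 1) for k, v in sorted(freq.items())}

# Invented contingency table: rows = school setting (urban, suburban, rural),
# columns = [agree or strongly agree, all other responses].
table = [[30, 18], [80, 37], [15, 11]]
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand_total = sum(row_totals)

# Pearson chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = (row total * column total) / grand total.
chi2 = sum(
    (obs - rt * ct / grand_total) ** 2 / (rt * ct / grand_total)
    for row, rt in zip(table, row_totals)
    for obs, ct in zip(row, col_totals)
)
dof = (len(table) - 1) * (len(table[0]) - 1)

print(f"item frequencies (%): {percents}")
print(f"chi-square = {chi2:.2f}, dof = {dof}")
```

A small chi-square statistic relative to its degrees of freedom, as here, would indicate no detectable relationship between school setting and agreement; the study’s actual tests were run on the real survey data.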

Results

Positive Perceptions Regarding SRBI

A compelling theme that emerged from the questionnaire survey was the predominantly positive attitude of the participants with regard to the SRBI model. When asked if they believed in the principles and practices of SRBI, 81.5% of respondents agreed (inclusive of agreed and strongly agreed), with 70.5% believing that providing systematic interventions for struggling students is more effective in determining achievement potential than IQ testing. Consensus was also reached on the need for differentiated instructional practices in Tier I classrooms (95.5%), with one classroom teacher from a suburban district offering, “I believe that Tier 1 instruction is most important. If you have effectively implement[ed] this instruction, you will have less in Tier 2 and 3.” This resolute support for the SRBI model faltered significantly, however, when participants were asked whether the majority of the educators with whom they work are currently prepared to implement it. Only 31.0% asserted that their colleagues were professionally ready. This percentage dropped further among administrators, with only 9.1% purporting that their staff was ready. This unease was verbalized by a reading educator from an urban district who stated, “Teachers don’t know the principles of SRBI.” Another urban reading educator offered, “It helps the teacher to meet each student’s needs by working with that student on his or her level. However, this means that the teachers must have received appropriate PD about RtI/SRBI and work the system with fidelity.” Further analysis revealed that of those respondents who agreed that general education teachers should implement differentiated instructional practices to meet the needs of diverse learners, only 23.8% could easily navigate through the three tiers, 12.7% could access appropriate resources, and 12.8% could select appropriate data for progress monitoring of student performance during intervention services.

Lack of Familiarity with Assessment Tools Needed to Drive SRBI Model

A second theme that surfaced was the inconsistency of participants’ responses concerning their familiarity with SRBI principles and practices. When questioned on specific components of the model, 50.0% (inclusive of agreed and strongly agreed) indicated that they could easily navigate among the three tiers, 86.5% could make decisions regarding core instruction and interventions, 69.0% could access appropriate resources (urban, 70.8%; suburban, 72.4%; rural, 57.7%), 75.5% could ensure that intervention plans were supported by data, and 64.5% could select appropriate data for progress monitoring of student performance during intervention services. In contrast to this comparatively stalwart belief in their understanding and application of the components of the SRBI model, coding of their constructed responses indicated that respondents expressed a lack of familiarity in two principal areas: identifying evidence-based programs and interventions, and, of even more pronounced concern, their ability to use assessment tools. Of note, the construct of assessment tools encompassed actual assessments, progress monitoring techniques, and data analysis. Respondents across all sub-groups of educators referred at length to the ambiguity of how data should be effectively utilized in the model, and this topic served as the basis for a recurring theme in their call for professional development training in this area.

Systemic Obstacles Impeding Effectiveness of the SRBI Model

A further issue was the set of organizational barriers that participants perceived as hindering successful implementation of SRBI in their schools. With participants from 64 school systems, only 47.0% (inclusive of agreed and strongly agreed) believed that district-level leadership provided active support for SRBI, and an almost equivalent 48.0% perceived that implementation of the model was jointly directed by general education and special education efforts. Only 36.0% of respondents deemed a clearly defined SRBI model to be in place in their school system, with one reading educator noting, “It’s not clearly defined – therefore no one is sure of their role and process.” Although relatively few in number within the study’s sample group, those who identified themselves as serving middle school and high school students expressed anxiety about implementing the model with older students, as one classroom teacher simply stated, “It doesn’t exist at the high school level.”

In considering the individual components of the SRBI model, 48.5% asserted that a school-based multidisciplinary intervention team was in place that met on a regular basis, and 77.0% affirmed that universal screenings were being administered three times a year. Complexity did exist in this response, however, as the percentage shifted dramatically based on which grades the respondents were serving: 93.0% of elementary educators affirmed that universal screenings were in place, with that statistic dropping significantly to 72.7% of middle school educators and 27.8% of high school educators. Pertaining to certified staff providing intervention services, 68.5% suggested that Tier II interventions and 78.0% that Tier III interventions were currently offered by certified personnel. The number of respondents, however, who concluded that these interventions were prescriptive to the individual needs of specific students dipped to 57.5%. The theme of unease with the role data played in the SRBI model and its impact on delivering interventions persisted, as nearly one third of the respondents did not believe that appropriate progress monitoring within the three tiers was currently in place.

In addition to these direct inquiries regarding implementation and sustainability of the SRBI model, respondents also shared specific obstacles that they perceived as impeding effectiveness. Approximately 20% cited time as the primary barrier, followed by staffing, resources, scheduling, familiarizing educators with the model, ongoing training, the need for certified personnel, and the persistent issue of data – specifically progress monitoring.

Content-Specific Training Needed to Ensure Implementation and Sustainability

In the context of professional development training that the participants had previously received, 74.5% (inclusive of agreed and strongly agreed) had attended an overview of RtI principles and Connecticut’s SRBI model; 51.5% had received information regarding modifications of special education referral practices; 42.0% had obtained data pertaining to specific practices within each of the three tiers (no administrators reported having accessed such training); 52.5% had received training in evidence-based interventions; and 53.5% had attended training in progress-monitoring procedures. In essence, three out of four participants in the sample had attended an overview, but only one out of two had attended more advanced training in the specific components of the model. Beyond the items on the survey that questioned the importance of future professional development in specific components of the SRBI model, respondents referred to several particular areas of need: interventions, the recurring theme of utilizing data and progress monitoring, accessing resources, and the importance of training classroom teachers in Tier I core instruction with differentiated strategies. The relevance of providing content-specific training in the SRBI model can be illustrated in its relationship with attitudes toward the model. Of those who had never attended any training in SRBI, only 56.3% agreed with the principles and practices of SRBI. Attitudes grew increasingly positive with additional attendance: 1-2 trainings, 75.0%; 3-5 trainings, 91.5%; 6-9 trainings, 92.1%. That number dropped to 76.9% among those attending 10 or more trainings.

Discussion

This study was designed to provide a preliminary investigation of SRBI implementation in public schools in Connecticut. As the SRBI model constitutes a recent shift in educational policy, little is known about its implementation phase, how the state’s educators view it, and its potential for sustainability as a state-wide model. By incorporating a mixed-methods approach, participants were able to express their experiences and perceptions of the model in succinct, quantitative terms while also sharing deeper insights through their constructed responses. The questionnaire survey served as an instrument by which to gauge the model in a broad array of school systems in a relatively short amount of time.

The item analysis coupled with the constructed responses suggested that participants viewed SRBI as a positive paradigm shift in educational policy and special education practices; nonetheless, they also deemed the current status of Connecticut schools unprepared to deliver the model with fidelity. Lack of a clear understanding in data analysis (from the selection of common assessments and probes to progress monitoring techniques to utilizing data to make effective intervention plans for struggling students) and a deficit of training topped their concerns – a theme that persisted throughout their responses. In addition, participants articulated a myriad of other issues that they felt impeded the effectiveness of the SRBI model, including time, staffing, resources, scheduling, and training. Specifically, they expressed a need for more comprehensive training in resources, data, and Tier I core instruction for classroom teachers.

The findings of this study support the necessity of additional training opportunities focusing on a specialized set of tools and competencies in order to ensure the success of SRBI in Connecticut’s public schools (Allington, 2009; Howard, 2010; Johnston, 2010; Wright, 2007). Successful implementation will also necessitate deeper training in the use of assessments and data to inform educational planning (Owocki, 2010) as well as a systemic response to the barriers currently impeding effective implementation of the SRBI model.

Limitations of Current Research

The results of this study were limited by several factors. First, the majority of participants were in attendance at professional development events offering SRBI training sessions, which suggests that the sample group may be more knowledgeable about and more actively involved with the model than the overall population. Those who participated were also those willing to share their perceptions; consequently, the extent to which their perceptions are representative of those who elected not to participate remains unknown. Nor were these responses verified, so the self-reported data may have been biased. Second, administrators and instructional support personnel constituted a small proportion of the sample group, which limits the weight of their responses. Third, participants came primarily from suburban school systems, which limited the representativeness of the results, especially for rural school systems, which comprised only 13% of the sample group.

Recommendations for Future Research

Additional research is required to furnish a more detailed understanding of SRBI in Connecticut. Future studies should be conducted with administrators, as they hold a key role in the implementation and sustainability of the model (Hall, 2008; Shores & Chester, 2009). As this study was exploratory in nature and attempted to offer broad generalizations of current perceptions, future research should probe deeper into the model’s effectiveness in schools through correlational research to determine relationships, assess consistency, and form predictive statements (Ary, Jacobs, Razavieh, & Sorensen, 2006); through focus groups to examine the issues more deeply; and through document analysis of special education rates and standardized achievement tests to ascertain the degree to which students are responding to interventions and whether these interventions are affecting the percentage of students identified for special education services.

Educational Implications

A majority of the participants indicated that they endorsed the philosophy of SRBI but registered apprehension about whether educators were fully cognizant of and prepared to apply the model’s principles and practices with struggling students. These findings should be viewed in a positive light for educators at the school, district, and state level. While there is strong support for the philosophy of SRBI, the participants’ concerns offer a context for discourse about the systemic reforms needed to facilitate the model. First, school systems should provide active commitment and support as evidenced through the development of a strategic SRBI plan for all of their schools with clear delineations of roles and responsibilities (Howell, Patton, & Deiotte, 2008; Sack-Min, 2009; Shores & Chester, 2009). Second, a focused professional development plan should be developed that provides training for administrators and faculty members across the continuum of SRBI principles and practices so that all staff members are fully prepared to assume responsibility for the SRBI model within their specific role in ameliorating student academic weaknesses (Applebaum, 2009; Bergstrom, 2008; Foorman, Carlson, & Santi, 2007; Howard, 2009; Mellard & Johnson, 2008; Restori, Gresham, & Cook, 2008). Third, school systems and individual schools should work collaboratively to create a kit of resources and specific interventions aligned to students’ academic needs (Burns & Gibbons, 2008; Wright, 2007). Fourth, school-based multidisciplinary teams should be developed at each school to collect and monitor data (including common assessments and probes, progress monitoring programs, and data analysis techniques to drive interventions) and to assess the level of commitment to and impact of the SRBI model at site-specific locations (Applebaum, 2009; Burns & Gibbons, 2008; Mellard & Johnson, 2008).

Conclusion

Although this study sought to provide a preliminary report on the implementation of SRBI in Connecticut’s public schools, it should also be viewed as an opening dialogue about the organizational frameworks needed to support this shift in educational policy. For example, the results of the study’s analysis make the need for more comprehensive training programs patently clear. Furthermore, attention needs to be given to the common issues of how to regulate schools’ time, scheduling, resources, and staffing to align with the SRBI model. Future research should strive to furnish status checks on Connecticut’s continued expansion of SRBI. It should also focus on the systemic reforms needed to ensure the academic well-being of Connecticut’s students.

Reader’s Note: This article provides a brief summary of the research study, A Status Report on SRBI in Connecticut. CARR will publish a comprehensive report in the coming months.

References

Allington, R. L. (2009). What really matters in Response to Intervention. Boston, MA: Pearson.

Applebaum, M. (2009). The one-stop guide to implementing RtI. Thousand Oaks, CA: Corwin Press.

Ary, D., Jacobs, L. C., Razavieh, A., & Sorensen, C. (2006). Introduction to research in education. Belmont, CA: Thomson Higher Education.

Bergstrom, M. K. (2008). Professional development in Response to Intervention: Implementation of a model in a rural region. Rural Special Education Quarterly, 27(4), 27-36.

Brown-Chidsey, R., & Steege, M. W. (2005). Response to Intervention: Principles and strategies for effective practice. New York: The Guilford Press.

Burns, M. K., & Gibbons, K. A. (2008). Implementing Response-to-Intervention in elementary and secondary schools: Procedures to assure scientific-based practices. New York: Routledge.

Collier, C. (2010). RtI for diverse learners. Thousand Oaks, CA: Corwin.

Connecticut State Department of Education. (2008). Using scientific research-based interventions: Improving education for all students. Connecticut’s framework for RtI – February 2008 executive summary. Hartford, CT: Author.

Costello, K. A. (2008). Connecticut’s Response to Intervention (RtI) model: A status report. Retrieved April 2, 2009, from National Center on Response to Intervention: RtI State Database Web site: http://state.rti4success.org/index.php?option=com_state&stateId=110

Creswell, J. W., & Plano-Clark, V. L. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage Publications, Inc.

Foorman, B. R., Carlson, C. D., & Santi, K. L. (2007). Classroom reading instruction and teacher knowledge in the primary grades. In D. Haager, J. Klinger, & S. Vaughn (Eds.), Evidence-based reading practices for Response to Intervention (pp. 45-72). Baltimore, MD: Paul H. Brookes Publishing Co.

Fuchs, D., Fuchs, L. S., & Vaughn, S. (Eds.). (2008). Response to Intervention: A framework for reading educators. Newark, DE: International Reading Association.

Greenwood, C. R., Kamps, D., Terry, B. J., & Linebarger, D. L. (2007). Primary intervention: A means of preventing special education? In D. Haager, J. Klinger, & S. Vaughn (Eds.), Evidence-based reading practices for Response to Intervention (pp. 73-106). Baltimore, MD: Paul H. Brookes Publishing Co.

Gresham, F. (2001). Responsiveness to Intervention: An alternative approach to the identification of learning disabilities. Paper presented at the Learning Disabilities Summit, Washington, D.C.

Hall, S. L. (2008). Implementing Response to Intervention. Thousand Oaks, CA: Corwin Press.

Howard, M. (2009). RtI from all sides: What every teacher needs to know. Portsmouth, NH: Heinemann.

Howard, M. (2010). Moving forward with RtI. Portsmouth, NH: Heinemann.

Howell, R., Patton, S., & Deiotte, M. (2008). Understanding Response to Intervention: A practical guide to systemic implementation. Bloomington, IN: Solution Tree.

Johnston, P. H. (Ed.). (2010). RtI in literacy – Responsive and comprehensive. Newark, DE: International Reading Association.

Lyon, G. R., Fletcher, J. M., Shaywitz, S. E., Shaywitz, B. A., Torgeson, J. K., Wood, F. B., Schulte, A., & Olson, R. (2001). Rethinking learning disabilities. In C. E. Finn, R. A. Rotherham, & C. R. Hokanson (Eds.), Rethinking special education for a new century (pp. 259-287). Washington, DC: Progressive Policy Institute and the Thomas B. Fordham Foundation.

Mellard, D. F., & Johnson, E. (2008). RtI: A practitioner’s guide to implementing Response to Intervention. Thousand Oaks, CA: Corwin Press.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis. Thousand Oaks, CA: Sage Publications, Inc.

National Association of State Directors of Special Education (NASDSE). (2006). Response to Intervention: Policy considerations and implementation. Alexandria, VA: Author.

Owocki, G. (2010). The RtI Daily Planning Book, K- 6. Portsmouth, NH: Heinemann.

Restori, A. F., Gresham, F. M., & Cook, C. R. (2008). “Old habits die hard”: Past and current issues pertaining to Response-to-Intervention. California School Psychologist, 13, 67-78.

Sack-Min, J. (2009). A rapid response. American School Board Journal, 196(9), 37-39.

Shores, C., & Chester, K. (2009). Using RtI for school improvement: Raising every student’s achievement scores. Thousand Oaks, CA: Corwin Press.

Wright, J. (2007). RtI Toolkit: A practical guide for schools. Port Chester, NY: Dude Publishing.

Top Ten Tips for Fostering the New Literacies of Online Reading Comprehension in Your Classroom

Julie Coiro, Ph.D., University of Rhode Island

Having the ability to comprehend and create online information texts will play a central role in our students’ success in a digital information age. Unfortunately, it is challenging to know how best to introduce these new online reading comprehension skills as part of today’s reading and writing curriculum. To address this challenge, I offer ten promising practices that reflect research-based guidelines (Coiro, 2009a) for supporting students’ online literacy development in school classrooms.

  1. Help students understand the unique relationships between offline and online reading strategy use. Literacy and content-area reading lessons should encourage students to notice the similarities and differences between offline and online text features (e.g., graphics, hyperlinked headings, digitized speech, and video) while discussing suitable reading purposes and audiences for each. Several lessons designed by classroom teachers for the ReadWriteThink online lesson databases (www.readwritethink.org) effectively illustrate reflective classroom assignments that compare and contrast offline and online text comprehension processes. Over time, reflective thinking about these differences helps students gain a deeper understanding of how to navigate and comprehend information on the Internet.
  2. Provide explicit teacher and peer think-aloud models of effective online reading comprehension strategy use. Instructional think-alouds can model strategies for formulating online questions, generating effective keyword searches, critically evaluating online sources, or integrating information from multiple sources using a particular online communication tool such as email, blogs, or discussion boards (see Coiro, 2005 for four strategy lessons in this area). Over time, you can gradually release responsibility to empower students in the online meaning-making process.
  3. Embed explicit strategy lessons within curriculum-based online information challenges. Rather than teach online reading strategies as part of an isolated technology lesson with the computer teacher, a curriculum-based online information challenge invites students to use a range of Internet technologies linked directly to a particular content theme or learning objective. Small groups of students are presented with content-related information problems designed both to develop conceptual knowledge and to elicit important online reading skills (e.g., asking questions, locating, evaluating, synthesizing, and communicating). Lessons are designed to minimize teacher talk, maximize student engagement, and provide time at the end for students to debrief and to exchange strategies with the entire class, after having done so in their small groups (for more information, see Leu, Coiro, Castek, Henry, Reinking, & Hartman, 2008).
  4. Honor the literacies students bring to school from their daily lives. We are in need of new frameworks and associated instructional models that bridge in-school and out-of-school practices to exploit the multiple literacy competencies that students bring to school. We can begin by fostering a classroom culture that recognizes the multiple literacies of every student and makes space for students to share their expertise as part of classroom routines. Emerging research highlights the potential of connecting personal and academic online reading tasks to facilitate conventional learning outcomes, new literacies, and student engagement (e.g., Burnett & Wilkinson, 2005; O’Brien, Beach, & Scharber, 2007). Gradually, students begin to understand how to use literacy differently for different purposes, in and out of school, and realize the need to flexibly apply these skills for new purposes and new contexts using new technologies.
  5. Provide space for students to explore, interpret, and create multiple forms and genres of texts. Opportunities for students to interact with images, soundtracks, and text interconnected in complex, multifaceted ways as part of school projects can prompt more sophisticated uses of multimodal online texts (Tierney, Bond, & Bresler, 2006). When teachers recognize the role of creative composing and innovation as part of literacy development, reluctant readers and writers, in particular, see themselves as capable text producers with authentic opportunities to contribute to their classroom literacy communities.
  6. Clarify new roles and relationships for collaborating with peers and teachers. Because literacy contexts change so quickly on the Internet, it is important that teachers be flexible in exploring and clarifying what they expect from themselves and from their students as part of face-to-face and online collaborations. Students should come to appreciate that each of their peers brings to the group a different, but valuable, set of skills and experiences that can positively influence the group’s overall ability to solve problems with the Internet (Cope & Kalantzis, 2002; Schulz-Zander, Buchter, & Dalmer, 2002). Similarly, as teachers explore how to plan and orchestrate complex online learning tasks, students should have plenty of authentic opportunities to work as partners with teachers to support their use of technology in classrooms.
  7. Promote students’ awareness of how positive dispositions impact reading comprehension and learning on the Internet. In open-ended Internet reading environments, successful online readers are those who manage rapidly changing text forms with persistence, flexibility, creativity, patience, critical stance, and self-reflection (American Association of School Librarians, 2007). As individual students gain a sense of themselves and their efforts as readers, they should be encouraged to understand how their habits and attitudes influence their ability to comprehend challenging texts. Regular strategy conversations can integrate a focus on personal dimensions with social, cognitive, and knowledge-building dimensions of classroom life to support students as they work to make sense of online and offline texts.
  8. Design collaborative inquiry projects that naturally prompt interdisciplinary connections to 21st century life skills. Productive online learning tasks empower students to solve important problems by integrating their knowledge of several subject areas with opportunities to apply their developing financial, global, and civic literacies in real academic learning contexts (Partnership for 21st Century Skills, 2007). Thus, effective online literacy teachers seek to promote students’ self-efficacy and online reading confidence, while integrating opportunities to practice entrepreneurial skills, develop a mutual respect for diverse cultures and lifestyles, and participate effectively in civic life experiences.
  9. Employ multiple alternative forms of assessment that evaluate group and individual learning processes and products. Students learning how to read successfully on the Internet should have opportunities to engage in self-, peer-, and teacher assessments of their online strategy use as part of a reflective learning process (Coiro & Castek, 2010). In doing so, students begin to accept more responsibility for their learning and reflect more thoughtfully on their literacy efforts and performance. Take time to teach your students how to set and monitor realistic online comprehension goals, and encourage students to share and reflect on their online reading strategy use during each phase of the inquiry process. Finally, employ alternative measures of Internet reading performance that capture both a student’s individual online reading ability and contribution to an assigned online reading task and the quality of his or her working group’s interactions and discussion (see ideas in Coiro, 2009b).
  10. Read, network, reflect, and read some more. Because online literacy contexts and digital literacy tools will continue to emerge faster than any one person can keep pace with, we must join forces as educators in ways that capitalize on our different areas of expertise and interest. Build partnerships with colleagues, read as much as you have time for, and exchange ideas and questions you have about new literacies with those around you. Become an active member of an online learning community such as The New Literacies Collaborative (http://newlitcollaborative.ning.com) to seek advice when things get overwhelming and to share moments of success as they emerge. Actively build connections between your own literacy efforts and those around you as you venture forward on the journey to prepare today’s students for their literacy futures in a globally networked, digital information world. Keep reading, choose a starting place, set an action plan, be patient, and move forward – you will soon be amazed to realize the new possibilities of the Internet for teaching and learning literacy in school.

References

American Association of School Librarians. (2007). Standards for the 21st century learner. Retrieved October 1, 2008 from http://www.ala.org/ala/mgrps/divs/aasl/aaslproftools/learningstandards/standards.cfm

Burnett, C., & Wilkinson, J. (2005). Holy lemons! Learning from children’s uses of the Internet in out-of-school contexts. Literacy, 39, 158-165.

Coiro, J. (2005). Making sense of online text. Educational Leadership, 63, 30-35.

Coiro, J. (2009a). Promising practices for supporting adolescents’ online literacy development. In K.D. Wood and W.E. Blanton (Eds.). Promoting literacy with adolescent learners: Research-based instruction (pp. 442-471). New York, NY: Guilford Press.

Coiro, J. (2009b). Rethinking reading assessment in a digital age: How is reading comprehension different and where do we turn now? Educational Leadership, 66, 59-63.

Coiro, J., & Castek, J. (2010). Assessment frameworks for teaching and learning English language arts in a digital age. In D. Lapp & D. Fisher (Eds.), Handbook of research on teaching the English language arts (3rd ed.). New York: Routledge.

Cope, B., & Kalantzis, M. (2002). Multiliteracies. London, UK: Routledge.

Leu, D. J., Coiro, J., Castek, J., Henry, L. A., Reinking, D., & Hartman, D. K. (2008). Research on instruction and assessment in the new literacies of online reading comprehension. In C. C. Block, S. Parris, & P. Afflerbach (Eds.), Comprehension instruction: Research-based best practices (pp. 321-346). New York: Guilford Press.

O’Brien, D., Beach, R., & Scharber, C. (2007). “Struggling” middle schoolers: Engagement and literate competence in a reading writing intervention class. Reading Psychology, 28, 51-73.

Partnership for 21st Century Skills. (2007). Learning for the 21st century. Retrieved January 10, 2008 from http://www.21stcenturyskills.org

Schulz-Zander, R., Buchter, A., & Dalmer, R. (2002). The role of ICT as a promoter of students’ cooperation. Journal of Computer Assisted Learning, 18, 438-448.

Tierney, R. J., Bond, E., & Bresler, J. (2006). Examining literate lives as students engage with multiple literacies. Theory Into Practice, 45, 359-367.