From Past Influences to Present Implementation to Future Implications: How SRBI Promises to Change the Way We Help Struggling Students

Principal Investigators: Dr. Diana Sisson and Dr. Betsy Sisson
Connecticut Association for Reading Research

Introduction

Connecticut’s Scientific Research-Based Interventions (SRBI) stems from the national model of Response to Intervention (RtI), which is the culmination of over three decades of federal involvement in special education services in this nation. Beginning with the Education for All Handicapped Children Act of 1975 (later recodified as the Individuals with Disabilities Education Act, or IDEA), legislation ensured appropriate public education for students with disabilities and access to nondiscriminatory evaluation procedures.

From the outset, however, controversy fermented over the use of the IQ-discrepancy model as the primary diagnostic procedure. Soon after, statistics regarding eligibility criteria provided fodder for public debate over the validity of the identification process. For example, Gresham (2001) claimed that after nearly two decades of the IQ-discrepancy model no clear definition of learning disabilities existed in “policy or practice,” [thus,] “findings indicate that substantial proportions of school-identified LD students – from 52 to 70 percent – fail to meet state or federal eligibility criteria” (p. 1).

While the national debate over the IQ-discrepancy model would ultimately lead to a dramatic policy change affecting both general education and special education, it was not the only deciding factor in the creation of RtI. Historical influences in the fields of psychology and literacy would coalesce to bring about a national recognition of the struggling reader, and legislative policy would follow, seeking to provide the services that students with disabilities would need to succeed in academic settings.

Historical Influences

The end of the nineteenth century witnessed the launch of experimental psychology into the cognitive processes of reading, and soon after, leaders in the educational field delved into the pedagogical underpinnings of reading. Meanwhile, medical doctors began to diagnose students with reading difficulties – namely, dyslexia – a term reserved for those children who struggled to learn to read. While some schools employed trained reading specialists, private consultants provided most of this specialized tutoring outside of public school settings.

Due to the dearth of public school services, concerned parents of struggling learners organized a conference in 1963. At that gathering, attended by specialists from a host of different fields, Samuel Kirk – later recognized as the father of special education – suggested the umbrella term of “learning disabilities” as a means to characterize the specific needs of these students. Marshaling their forces, the parents moved to influence change at the national level and lobbied for federal guarantees of a free and appropriate education for their children (Berninger, 2006).

As stakeholders in this new field of learning disabilities continued to rally support for their cause, the framework of the RtI model that would emerge in 2004 found its beginnings in the middle of the twentieth century, when behavioral analysts utilized a problem-solving paradigm to address issues in social contexts. Eventually, practitioners refined the process to include a methodology for monitoring students’ responses to interventions in academic settings. Alongside this advancement came a growing awareness that the instructional environment plays a key role in ameliorating learning problems. During the 1980s, school systems began to utilize tools to monitor academic progress and track student achievement. These historical influences merged with federal legislation as each new federal policy made more advanced attempts to affect the academic achievement of all students and to use data as a barometer for school success (Wright, 2007).

Legislative Policy

As lawmakers endeavored to provide equity in the educational arena, the Elementary and Secondary Education Act of 1965 delivered the first federal legislation providing funding to public schools. Designed to address perceived social problems and to eradicate poverty and its effect on the American economy, it did not consider the needs of disabled children. A decade would pass before the federal government addressed the needs of handicapped students, and with this recognition would come the advent of special education policy in the United States.

1975 – Education for All Handicapped Children Act (PL 94-142).

The first significant special education legislation originated in 1975 with the Education for All Handicapped Children Act (EAHCA), which guaranteed students with disabilities a free and appropriate public education (FAPE), the least restrictive environment (LRE) for school settings, due process rights, and nondiscriminatory evaluation protocols. Subsequently, a tidal wave of students qualifying for special education services inundated American schools. Since its inception, the number of students identified as learning disabled has grown more than 300%, with American schools providing special education services for more than 6 million children (Cortiella, 2008).

1977 – Final Regulations for EAHCA (PL 94-142).

Final regulations for PL 94-142 were issued in 1977. At that time, a learning disability was defined as “a severe discrepancy between achievement and intellectual ability” (U.S. Department of Education, 1977, p. G1082). Unable, however, to reach consensus regarding diagnostic procedures for identifying students with learning disabilities, policymakers settled on a compromise protocol that identified as learning disabled those students who demonstrated acute underachievement relative to IQ as measured by an intelligence test.

The use of the IQ discrepancy as the sole criterion for determining a learning disability led to grave concerns from the educational field (Stuebing, Barth, Weiss, & Fletcher, 2009). To begin, the ability-achievement discrepancy did not address why students might exhibit normal cognitive functioning and yet struggle to meet specific academic performance standards. The discrepancy model, with its reliance on a standardized testing instrument, also did not take into account situation-specific issues related to the individual student, including the variability of early childhood developmental experiences. Questions arose as well regarding those students whose ability-achievement discrepancy was not severe enough to qualify and who were simply characterized as “slow learners” with no eligibility for special education services. Furthermore, clinical decisions regarding eligibility were limited to pre-determined discrepancy criteria without regard for the school psychologist’s expertise (Holdnack & Weiss, 2006).

Of import, since the regulations’ inception in 1977, special education referrals increased by 200%, which led to an overextension of services in special education as well as national concern over possible misdiagnosis (Vaughn, Linan-Thompson, & Hickman, 2003). These dramatic increases occurred, however, only in the area of learning disabilities, with its use of the IQ-discrepancy formula (Holdnack & Weiss, 2006).

1990 – IDEA Amendments (PL 101-476).

After reauthorizations in 1983 and 1986, policymakers again reauthorized EAHCA in 1990 and renamed it the Individuals with Disabilities Education Act, or IDEA (PL 101-476). Lawmakers designed the 1990 amendments to ensure a greater diversity of services for eligible students. Founded on the concept of “zero exclusion,” IDEA also reaffirmed that eligible students receive a free and appropriate education in public schools (Hardman, 2006).

1997 – IDEA Amendments (PL 105-17).

With the 1997 reauthorization of IDEA (PL 105-17), the least restrictive environment (LRE) was extended into the general classroom. In effect, the new regulations brought the work of general educators and special educators closer together in a more unified system of delivering instruction and services (Wedle, 2005). It also focused attention on interventions in regular education settings as well as the use of problem-solving models in special education settings. The discrepancy model, however, remained the national protocol for identifying learning disabilities in American classrooms and schools.

Of note, the reauthorizations of 1983, 1986, and 1990 all focused on ensuring access to education for disabled students. In contrast, the 1997 reauthorization shifted attention from access to accountability, as illustrated in its regulations concerning interventions and problem-solving models.

2001 – No Child Left Behind Act (PL 107-110).

Part of this relentless pursuit of educational improvement stemmed from the incendiary 1983 federal report – A Nation at Risk – which publicly indicted the American educational system for its failure to educate students at a level commensurate with the nation’s ranking in the world marketplace. As the federal government continued to strive for increased competitiveness in international markets, legislators used their reauthorization of the Elementary and Secondary Education Act of 1965 to produce the No Child Left Behind Act. This legislation mandated that 100% of students in American classrooms be proficient in reading and math by 2014. Schools that did not meet the pre-set adequate yearly progress (AYP) goals faced funding sanctions. As schools labored to meet the federal benchmarks through intensive test preparation and the adoption of standardized curricula, struggling students throughout the nation continued to fail to meet the minimum competency requirements.

2004 – IDEIA Amendments (PL 108-446).

In 2004, legislators reauthorized IDEA (now designated the Individuals with Disabilities Education Improvement Act, or IDEIA) with PL 108-446. This legislation shifted the emphasis of special education policy in a number of key aspects – from process to results, from a paradigm of failure to a model of prevention, and from a consideration of students as special education recipients first to an appreciation of their primary role in general education (Hardman, 2006). Contained within these regulations was language disallowing the use of a single assessment to determine the identification of a disability, along with a declaration that states were not required to use the discrepancy formula to determine learning disabilities but were, rather, permitted to utilize a protocol that focused on a student’s response to interventions that were scientific and research-based (U.S. Department of Education, 2006).

With the new model, then, states could implement targeted research-based interventions as a means to monitor students’ responsiveness and subsequently determine whether an evaluation for a specific learning disability is warranted. The National Association of State Directors of Special Education (NASDSE) defined this “response to intervention” as the enactment of “high-quality instruction and interventions matched to student need, monitoring progress frequently to make decisions about changes in instruction or goals and applying child response data to important educational decisions” (NASDSE, 2006, p. 3).

Of note, a fundamental intent of RtI was to decrease the number of students in special education by perhaps 70% (Lyon et al., 2001). Such a significant decrease in students receiving special education services would have considerable effect on the federal government as it was predicted that the national cost of special education services would soon total $80 billion annually (Burns & Gibbons, 2008) for the current 6.5 million children identified with disabilities (Collier, 2010).

Addressing these long-standing budgetary issues, IDEIA 2004 contained three central elements: use of scientifically-based reading instruction, evaluation of how students respond to interventions, and the employment of data to inform decision making (Brown-Chidsey & Steege, 2005). Fuchs, Fuchs, and Vaughn (2008) characterized it as having two unified goals – the identification of at-risk students who would benefit from preventive services and the provision of on-going services to LD students who are chronically unresponsive and require a more individualized approach based on data-driven instructional planning.

Emergence of Response to Intervention

On August 14, 2006, the U.S. Department of Education issued final regulations to accompany the 2004 reauthorization of IDEIA (PL 108-446). Effective October 13, 2006, this historic new education policy promised to effect significant changes in practices for both general education and special education. Soon after the federal adoption, states began to examine the RtI model and prepare organizational designs for implementation. The first step was to identify its chief components.

RtI Components

There are a number of components that typify the RtI model. They include universal screenings,
multiple tiers of intervention services, progress monitoring, and data-based decision making.

Universal screenings.

Typically administered at three points in the academic year (beginning, middle, and end), universal screenings are conducted with all students and prove significant in the RtI model, as they serve as the gateway through which students gain access to more intensive interventions (Mellard & Johnson, 2008).

While there is no mandate within the legislation for screenings, they do provide the “principal means for identifying early those students at risk of failure and likely to require supplemental instruction; as such, it represents a critical juncture in the service delivery continuum” (Jenkins, Hudson, & Johnson, 2007, p. 582). Wixson and Valencia (2011) contend that the intent of universal screening is to “use the assessment information as the basis for differentiating instruction so it is more responsive to students’ needs and more likely to accelerate student learning” (p. 466).
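To make the screening step concrete, the sketch below shows one way a school might flag students from a fall universal screening. It is a minimal illustration only: the cutoff, the field names, and the scores are hypothetical and are not prescribed by the legislation or by Connecticut’s framework.

```python
# Illustrative only: flag students whose screening percentile falls below an
# assumed risk cutoff; such students become candidates for supplemental supports.

RISK_CUTOFF = 0.20  # hypothetical benchmark: 20th percentile on the screener

def flag_at_risk(screening_scores, cutoff=RISK_CUTOFF):
    """Return students whose percentile score is below the cutoff.

    screening_scores: dict of student name -> national percentile (0.0-1.0).
    """
    return sorted(name for name, pct in screening_scores.items() if pct < cutoff)

fall_screening = {"Ava": 0.55, "Ben": 0.12, "Cam": 0.34, "Dee": 0.08}
print(flag_at_risk(fall_screening))  # ['Ben', 'Dee'] -> considered for Tier II supports
```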

Multiple tiers.

RtI departs from traditional approaches (Barnes & Harlacher, 2008) by following the public health model, which employs multiple tiers of interventions of increasing intensity. That model begins with primary interventions for the general population, then secondary interventions for the subset of the population who require more intensive services, and finally, tertiary interventions for those who have failed to respond to all previous treatments (Harn, Kame’enui, & Simmons, 2007; Mellard & Johnson, 2008). In a comparable fashion, RtI commonly provides three tiers of academic supports.

Tier I encompasses the best practices implemented in the general classroom setting, in which most students (80%-90%) will perform proficiently as evidenced by assessment outcomes, such as the universal screenings conducted throughout the year. Those students (10%-15%) who do not respond to the supports provided in Tier I have opportunities for targeted instruction in Tier II with a greater degree of frequency (1-2 times weekly) and intensity (small groups comprising 3-6 students). Instruction at this tier may be provided by the classroom teacher or an interventionist trained to work at this level of support services. The small minority of students (1%-5%) who fail to respond in Tier I or Tier II move to Tier III with the most intensive interventions. During this time, services are provided at even greater frequency (3-5 times weekly) and with greater intensity (small groups of no more than 3 students). Fuchs and Fuchs (2006) suggest several means to increase intensity, such as by “(a) using more teacher-centered, systematic, and explicit, (e.g., scripted) instruction; (b) conducting it more frequently; (c) adding to its duration; (d) creating smaller and more homogeneous student groupings; or (e) relying on instructors with greater expertise” (p. 94).
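As a rough aid to reading the tier descriptions above, the following sketch simply restates the cited figures (share of students, supplemental session frequency, and group size) as a small data structure; the class and field names are invented for illustration and are not part of the RtI model itself.

```python
# Assumed, illustrative encoding of the tier parameters cited above.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TierPlan:
    name: str
    share_of_students: str               # approximate share served at this tier
    sessions_per_week: Tuple[int, int]   # supplemental sessions per week (min, max)
    max_group_size: Optional[int]        # largest intervention group, if applicable

TIERS = [
    TierPlan("Tier I",   "80%-90%", (0, 0), None),  # core classroom instruction only
    TierPlan("Tier II",  "10%-15%", (1, 2), 6),     # targeted small groups of 3-6 students
    TierPlan("Tier III", "1%-5%",   (3, 5), 3),     # intensive groups of no more than 3
]

for tier in TIERS:
    print(tier)
```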

Deno’s cascade of services.

This tiered configuration is reminiscent of the model devised by Deno (1970), which conceptualized special education services as a “cascade” in which increasingly smaller groups of students receive instruction with intensifying attention paid to individual needs. Deno’s cascade of services shaped special education guidelines throughout the 1970s and 1980s, but ever-greater numbers of students qualifying for special education services hampered its ultimate effect. Despite the cascade’s limitations, the RtI model remains similar to Deno’s construct for specialized services.

The standard protocol versus the problem-solving approach.

The RtI tiered framework commonly adheres to one of two models – the standard treatment protocol or the problem-solving approach (Wixson, Lipson, & Johnston, 2010). Historically, each garnered support from a distinct professional group. Early interventionists in the reading field advocated for the superiority of the standard treatment protocol while behavioral psychologists promoted the more clinical problem-solving model (Fuchs, Mock, Morgan, & Young, 2003).

While elementally similar, they differ in the degree to which each provides individualized interventions and the extent to which each analyzes the student achievement problem before implementing an intervention plan (Christ, Burns, & Ysseldyke, 2005). Fuchs, Mock, Morgan, and Young (2003) further assert that, by inherent principle, the standard treatment protocol will ensure quality control of the interventions while the problem-solving model will focus on individual differences and needs.

Typically used by practitioners in the field, the standard protocol provides a plan of standardized interventions for a given time, with consideration given to teacher fidelity to the program. Although the ideology derived from the scientific method, the protocol itself was originally the work of Bergan in 1977 and later revised by Bergan and Kratochwill (1990). Bergan’s work delineated the steps of behavioral consultation into four stages that now constitute the precepts of the standard protocol for intervention services.

The problem-solving approach, preferred by researchers and school psychologists, offers a tailored instructional plan designed for individual students based on their needs (Fuchs & Fuchs, 2008). Similar in design to the standard protocol, the problem-solving approach diverges in its intent to provide increasingly intensive interventions that are scientifically based and data focused as nonresponsive students move up the tier continuum (Hale, Kaufman, Naglieri, & Kavale, 2006).

Haager and Mahdavi (2007) suggest that a number of supports must be present in order to implement a tiered intervention framework, such as professional development, shared focus, administrator support, logistical support, teacher support, and assessment protocols. Similarly, they argue that certain barriers will negate the effectiveness of such a model, pointing to competing educational initiatives, negative perceptions regarding teachers’ roles and responsibilities in remediating reading, lack of time, inadequate training, and the absence of support structures.

Progress monitoring.

Within the RtI model, progress monitoring provides immediate feedback by assembling multiple measures of student academic achievement to “assess students’ academic performance, to quantify a student rate of improvement or responsiveness to instruction, and to evaluate the effectiveness of instruction” (National Center on Response to Intervention, 2011, para. 1). Thus, progress monitoring should provide accurate and reliable methods to track response to interventions in order to modify intervention plans for individual students (Alber-Morgan, 2010).
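One common way to quantify a student’s rate of improvement is to fit a trend line to repeated probe scores; the brief sketch below computes that slope by ordinary least squares. The probe values and the choice of oral reading fluency as the measure are assumptions made solely for illustration.

```python
# Illustrative progress-monitoring calculation: rate of improvement as the
# least-squares slope of weekly probe scores (e.g., words read correctly per minute).

def rate_of_improvement(weekly_scores):
    """Ordinary least-squares slope of scores against week number (units per week)."""
    n = len(weekly_scores)
    weeks = range(n)
    mean_x = sum(weeks) / n
    mean_y = sum(weekly_scores) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, weekly_scores))
    denominator = sum((x - mean_x) ** 2 for x in weeks)
    return numerator / denominator

probes = [42, 44, 43, 47, 49, 52]  # six weekly oral reading fluency probes (invented)
print(f"{rate_of_improvement(probes):.2f} words per week")  # ≈ 1.97
```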

Data-based decision making.

Because ongoing assessment is one of the primary aspects of the RtI model, the use of data to inform decisions proves paramount in the intervention and identification process. On a continuing basis, educators utilizing the RtI model gather student information “(1) to adjust the specifics of teaching to meet individual students’ needs and (2) to help students understand what they can do to keep growing as readers” (Owocki, 2010). Ultimately, the data will serve as a deciding factor in both preventive services and eligibility determinations, thereby necessitating that those in the field become expert in the areas of data maintenance, data mining, and data-driven decision making.
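To show how such data might feed a plan-change decision, the sketch below applies a “dual discrepancy” style rule, flagging a student only when both the performance level and the growth rate lag expectations. This is a common approach in the RtI literature but is not mandated by the sources cited here, and the benchmark and growth thresholds are invented.

```python
# Illustrative decision rule; thresholds are assumptions, not policy values.

def needs_plan_change(latest_score, growth_per_week,
                      benchmark_score=60.0, expected_growth=1.5):
    """Flag a student whose level AND rate of improvement both fall short."""
    return latest_score < benchmark_score and growth_per_week < expected_growth

# Reusing the probe data from the progress-monitoring sketch above:
print(needs_plan_change(latest_score=52, growth_per_week=1.97))  # False - growth is adequate
```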

Opponents of RtI argue that attention should focus on the model’s shortcomings. Namely, this model requires classroom teachers to take greater responsibility for struggling students in ways that may extend beyond their level of expertise (Collier, 2010). A deeper concern is that the RtI model identifies chronically low-achieving students – not students who are learning disabled. As an extension of these issues, while RtI lowers the number of referrals (and the corresponding staffing and resources necessitated by such referrals), transitioning students through the three tiers of intervention risks delaying or even eliminating necessary referrals. If these concerns materialize, students who should be eligible for special education will be deprived of vital support services.

Ultimately, whether advocate or opponent of RtI, researchers in the field estimate that there will continue to be 2% to 6% of students who will fail to respond to any of the three intervention tiers – regardless of frequency or intensity of support. They predict 6% to 8% of students will qualify for special education services (Fuchs, Stecker, & Fuchs, 2008) – approximately a 50% reduction from 2004.
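As a rough check on the cited 50% figure: the paper notes roughly 6.5 million children identified with disabilities (Collier, 2010). Assuming U.S. public school enrollment of about 49 million at the time (an outside figure, not from the paper), the pre-RtI identification rate was on the order of 13%, so a 6% to 8% rate would indeed be roughly half:

```latex
% Rough, illustrative check (the ~49 million enrollment figure is an assumption):
\frac{6.5\ \text{million identified}}{49\ \text{million enrolled}} \approx 13\%,
\qquad
\frac{6\%\text{--}8\%}{13\%} \approx 0.46\text{--}0.62 \approx \tfrac{1}{2}.
```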

Constructing SRBI

In reaction to the new federal legislation, the state of Connecticut moved to analyze this paradigm shift in special education policy within the context of the state’s classrooms and schools, subsequently documenting the process in its 2008 publication, Using Scientific Research-Based Interventions: Improving Education for All Students – Connecticut’s Framework for RtI.

State leadership team.

The first step in the implementation process began with the development of a state leadership team whose task was to craft a state policy that adhered to the federal law while considering the unique needs of Connecticut and its students. The team comprised delegates from the Connecticut State Department of Education (CSDE), the Regional Education Service Centers (RESCs), the State Education Resource Center (SERC), and other stakeholder educational agencies.

Roundtable discussions.

With the leadership team came roundtable discussions on RtI. Bringing together a wide range of stakeholder groups (e.g., administrators, regular and special education teachers, higher education faculty, members from the governor’s office, and parents), these dialogues centered on the key components of the RtI model – 1) universal screenings, 2) progress monitoring, 3) tiered interventions, and 4) implementation. From this discourse stemmed a number of significant concepts, namely, the need for a joint effort between regular education and special education, the importance of leadership, and the necessity of professional development.

Advisory panel.

An advisory panel assembled next and focused on two main responsibilities – reviewing the literature surrounding RtI and designing an implementation framework for Connecticut’s schools. During this time, the panel converted the nationally recognized name of RtI into the more personalized SRBI (scientific research-based interventions) for Connecticut. Because the phrase appears in both NCLB and IDEA, the panel proposed that such a designation would emphasize its belief in the significance of general education in the policy as well as the weight of using interventions that were both scientific and research-based.

State personnel development grants.

To facilitate statewide implementation, the CSDE and SERC worked collaboratively to offer three-year grants to schools in four school districts. These school systems – Bristol, CREC, Greenwich, and Waterbury – served as model sites because of their use of intervention services and differentiated instruction. The undertaking sought to expand their work to additional schools within their systems and to create opportunities for collaboration with other school systems that wished to improve their educational services.

The SRBI Model

In constructing its SRBI model, the state adhered to the nationally recognized RtI model. Tier I occurs in the general classroom, focuses on the general education curriculum, must be research-based and culturally responsive, and includes a range of supports. While instruction may occur through small, flexible groups, the instructor is the general educator, with collaboration from specialists. Assessments in this tier include universal screenings, formative assessments, and any additional assessment tools that may be beneficial for monitoring individual student performance.

Data teams collaborate with classroom teachers to utilize assessment data as a means to inform instructional planning and make decisions regarding the placement of students within the three tiers.

Tier II attends to those students who have not responded to the supports provided in Tier I and offers additional services in the general education classroom or other general education settings. In this tier, students receive short-term interventions (8 to 20 weeks) in small groups of struggling students (1:6) that are supplemental to the core curriculum. Interventionists may be any general education teacher or a specialist trained to work in this tier. Assessments during this tier concentrate on frequent progress monitoring (weekly or biweekly) to determine students’ responsiveness to interventions. Data analysis occurs in both data teams and intervention teams.

During Tier III, the focus is on students who have failed to respond to supports or interventions in Tiers I and II. These students continue to receive services in general education settings; however, they also receive additional short-term interventions (8 to 20 weeks), provided in even smaller, homogeneous groups (1:3) and designed to be supplemental to the core curriculum. Interventionists again come from the general education field or are others trained in this tier. Progress monitoring increases in frequency (twice weekly), and intervention teams continue to assess the data.
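For quick reference, the sketch below restates Connecticut’s Tier II and Tier III parameters from the paragraphs above as a simple configuration. The dictionary keys are invented for illustration; the values (duration, group ratio, monitoring frequency, review teams) come from the text.

```python
# Illustrative configuration of the SRBI Tier II / Tier III parameters described above.
SRBI_TIERS = {
    "Tier II": {
        "setting": "general education classroom or other general education settings",
        "intervention_weeks": (8, 20),
        "group_ratio": "1:6",
        "progress_monitoring": "weekly or biweekly",
        "data_review": ["data teams", "intervention teams"],
    },
    "Tier III": {
        "setting": "general education settings",
        "intervention_weeks": (8, 20),
        "group_ratio": "1:3",
        "progress_monitoring": "twice weekly",
        "data_review": ["intervention teams"],
    },
}

for tier, plan in SRBI_TIERS.items():
    print(tier, plan)
```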

Conclusion

As schools in Connecticut continue to implement SRBI, focus must remain on the systemic reforms needed to ensure the academic well-being of Connecticut’s students. The SRBI model offers the potential to effect lasting change in our schools, perhaps even to bridge the achievement gap that has plagued Connecticut for so many years. To do so, however, will require all of us to work together with a singular goal in mind – ensuring that all of our students succeed.

References

Alber-Morgan, S. (2010). Using RtI to teach literacy to diverse learners, K-8: Strategies for the inclusive classroom. Thousand Oaks, CA: Corwin Press.

Barnes, A. C., & Harlacher, J. E. (2008). Clearing the confusion: Response-to-intervention as a set of principles. Education and Treatment of Children, 31(3), 417-431.

Bergan, J., & Kratochwill, T. R. (1990). Behavioral consultation and therapy. New York: Plenum Press.

Berninger, V. W. (2006). Research-supported ideas for implementing reauthorized IDEA with intelligent professional psychological services. Psychology in the Schools, 43(7), 781-796.

Brown-Chidsey, R., & Steege, M. W. (2005). Response to intervention: Principles and strategies for effective practice. New York: The Guilford Press.

Burns, M. K., & Gibbons, K. A. (2008). Implementing response-to-intervention in elementary and secondary schools. New York: Routledge Taylor & Francis Group.

Christ, T. J., Burns, M. K., & Ysseldyke, J. E. (2005). Conceptual confusion within response-to-intervention vernacular: Clarifying meaningful differences. NASP Communiqué, 34(3). Retrieved from http://www.nasponline.org/publications/cq/cq343rti.aspx

Collier, C. (2010). RtI for diverse learners. Thousand Oaks, CA: Corwin Press.

Connecticut State Department of Education. (2008). Using scientific research-based interventions: Improving education for all students – Connecticut’s framework for RtI. Hartford, CT: Author.

Cortiella, C. (2008). Response to intervention – An emerging method for learning disability identification. Retrieved from http://www.schwablearning.org.

Deno, E. (1970). Special education as developmental capital. Exceptional Children, 37, 229-237.

Fuchs, D., & Fuchs, L. (2008). Implementing RtI. District Administration, 44(11), 72-76.

Fuchs, D., & Fuchs, L. S. (2006). Introduction to response to intervention: What, why, and how valid is it? Reading Research Quarterly, 41(1), 93-99.

Fuchs, D., Fuchs, L. S., & Vaughn, S. (Eds.). (2008). Response to intervention: A framework for reading educators. Newark, DE: International Reading Association.

Fuchs, D., Mock, D., Morgan, P.L., & Young, C.L. (2003). Responsiveness-to-intervention for
the learning disabilities construct. Learning Disabilities Research & Practice, 18(3), 157-171.

Fuchs, D., Stecker, P. M., & Fuchs, L. S. (2008). Tier 3: Why special education must be the most intensive tier in a standards-driven, no child left behind world. In D. Fuchs, L. S. Fuchs, & S. Vaughn (Eds.), Response to intervention: A framework for reading educators (pp. 71-104). Newark, DE: International Reading Association.

Gresham, F. (2001, August). Response to intervention: An alternative approach to the identification of learning disabilities. Paper presented at the Learning Disabilities Summit: Building a Foundation for the Future, Washington, DC.

Haager, D., & Mahdavi, J. (2007). Teacher roles in implementing intervention. In D. Haager, J. Klingner, & S. Vaughn (Eds.), Evidence-based reading practices for response to intervention (pp. 245-263). Baltimore, MD: Paul H. Brookes Publishing Company.

Hale, J. B., Kaufman, A., Naglieri, J. A., & Kavale, K. A. (2006). Implementation of IDEA: Integrating response to intervention and cognitive assessment methods. Psychology in the Schools, 43(7), 753-770.

Hardman, M. L. (2006). Outlook on special education policy. Focus on Exceptional Children, 38(8), 2-8.

Harn, B. A., Kame’enui, E. J., & Simmons, D. C. (2007). The nature and role of the third tier in a prevention model for kindergarten students. In D. Haager, J. Klingner, & S. Vaughn (Eds.), Evidence-based reading practices for response to intervention (pp. 161-184). Baltimore, MD: Paul H. Brookes Publishing Company.

Holdnack, J. A., & Weiss, L. G. (2006). IDEA 2004: Anticipated implications for clinical practice – integrating assessment and intervention. Psychology in the Schools, 43(8), 871-882.

Jenkins, J. R., Hudson, R. F., & Johnson, E. S. (2007). Screening for at-risk readers in a response to intervention framework. School Psychology Review, 36(4), 582-600.

Lyon, G. R. (1995). Research initiatives in learning disabilities: Contributions from scientists supported by the National Institute of Child Health and Human Development. Journal of Child Neurology, 10 (suppl. 1), S120-S126.

Lyon, G. R., Fletcher, J. M., Shaywitz, S. E., Shaywitz, B. A., Torgesen, J. K., Wood, F. B., Schulte, A., & Olson, R. (2001). Rethinking learning disabilities. In C. E. Finn, R. A. Rotherham, & C. R. Hokanson (Eds.), Rethinking special education for a new century (pp. 259-287). Washington, DC: Progressive Policy Institute and the Thomas B. Fordham Foundation.

Mellard, D. E., & Johnson, E. (2008). RtI: A practitioner’s guide to implementing response to intervention. Thousand Oaks, CA: Corwin Press.

National Association of State Directors of Special Education (NASDSE). (2006). Response to intervention: Policy considerations and implementation. Alexandria, VA: Author.

National Center on Response to Intervention. (2011). Progress monitoring. Retrieved from http://www.rti4success.org/categorycontents/progress_monitoring

Owocki, G. (2010). RtI daily planning book, K-6: Tools and strategies for collecting and assessing data & targeted follow-up instruction. Portsmouth, NH: Heinemann.

Stuebing, K. K., Barth, A. E., Weiss, B., & Fletcher, J. J. (2009). IQ is not strongly related to response to reading instruction: A meta-analytic interpretation. Exceptional Children, 76(1), 31-51.

U.S. Department of Education. (1977). 1977 code of federal regulations. Washington, DC: Author.

U.S. Department of Education. (2006). Assistance to states for the education of children with disabilities and preschools grants for children with disabilities, final rule. Retrieved from http://eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/lb/e9/95.pdf

Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to treatment as a means of
identifying students with reading/learning disabilities. Exceptional Children, 69, 391-409.

Wedle, R. J. (2005). Response to intervention: An alternative to traditional eligibility criteria for students with disabilities. Retrieved from Education Evolving website: http://www.educationevolving.org/pdf/Response_to_Intervention.pdf.

Wixson, K. K., Lipson, M. Y., & Johnston, P. H. (2010). Making the most of RtI. In M. Y. Lipson & K. K. Wixson (Eds.), Successful approaches to RtI: Collaborative practices for improving K-12 literacy (pp. 1-19). Newark, DE: International Reading Association.

Wixson, K. K., & Valencia, S. W. (2011). Assessment in RtI: What teachers and specialists need to know. The Reading Teacher, 64(6), 466-469.

Wright, J. (2007). RtI Toolkit: A practical guide for schools. Port Chester, NY: Dude Publishing.