Strategic Reading Assessment Systems

High-achieving schools use data to guide instructional decisions made by administrators, teachers, and specialists (Bambrick-Santoyo, 2019).

Equity requires strategic and comprehensive assessment systems for reading which are:

    • High quality (valid and reliable)

    • Purposefully aligned to essential components of reading (skill-based)

    • Actionable to inform instruction, monitor progress, and evaluate learning

    • Communicated with all stakeholders

Equitable multi-level systems of supports are driven by the strategic use of data for continuous improvement. Continuous improvement is an ongoing effort to improve a framework, process, program, or innovation, and it requires an organizational commitment to continual learning, self-reflection, adaptation, and growth. Teams across the system use both implementation data and outcome data in a continuous improvement problem-solving process to inform decisions and actions leading to college and career readiness for all.

Wisconsin’s Framework for Equitable Multi-Level Systems of Supports (2017, p. 9)

Types of Reading Assessments

There are three types of assessment identified by researchers that give educators the most useful information in the most efficient way (Coyne & Harn, 2006; McKenna & Walpole, 2005):

  • Screening

  • Diagnostic

  • Progress monitoring

Reading assessments can also be skill-based or standards-based.

Skills-based assessments measure a specific foundational reading (decoding) skill or set of skills. Results inform explicit, systematic, cumulative instruction.

Standards-based assessments measure specific academic standards, which require integrated application of foundational reading skills and oral language comprehension skills. Results inform explicit, systematic, cumulative instruction in background knowledge, vocabulary, and comprehension skills to improve achievement of specific complex academic standards.

Types of Reading Assessments - WI Dyslexia Roadmap.pdf

“Ninety percent of children with reading difficulties will achieve grade level in reading if they receive help by the first grade. Seventy-five percent of children whose help is delayed to age nine or later continue to struggle throughout their school careers.”

Vellutino, Scanlon, Sipay, Small, Pratt, Chen & Denckla, 1996

“The best solution to the problem of reading failure is to allocate resources for early identification and prevention.”

Joseph K. Torgesen, 1998

Early screening is vital. The goal of universal, early literacy screening is to identify children at risk of future failure before that failure actually occurs. By doing so, we create the opportunity to intervene early, when we are most likely to be effective and efficient. The key to effective screening is maximizing the ability to predict future difficulties.

Steve Dykstra, 2013

Screening Assessments

Instruction and intervention aligned to the Science of Reading are diagnostic and prescriptive. Equitable Multi-Level Systems of Supports (MLSS) structures should be in place to facilitate the implementation of structured literacy and SoR-aligned language instruction and intervention. Administration of screening measures is the first step. School districts should administer literacy screeners as part of the universal screening process.

Literacy screeners should be:

  • Valid and reliable

  • Nationally normed

  • Skills-based

Literacy screeners should be administered to:

  • All 4K-5th grade students

  • Three times a year (fall, winter, spring)

Universal screeners are not traditional assessments. They are brief, informative tools used to measure academic skills in six general areas:

  • Basic reading skills

  • Reading fluency

  • Reading comprehension

  • Math calculation

  • Math problem-solving

  • Written expression

Data compiled from universal screening is used to inform instructional planning for students who are flagged as at-risk and helps target Tier 2 and Tier 3 interventions.

If a standards-based assessment is used to screen all students instead of a skills-based universal screener, a skills-based screener is still necessary to identify more specific skill area(s) of focus and to determine alignment of interventions for students identified as “at risk.”

A skills-based universal screener is the most appropriate, defensible tool for identifying students that have skills deficits and informing the need for a skills-based intervention. If a skills-based universal screener is not used, districts might not identify students with underlying skills deficits or properly align interventions. Further, if districts do not use a skills-based universal screener and are unable to collect accurate data associated with a suspected area of disability, they may run the risk of violating their Child Find obligation.

Tennessee Department of Education (2018, p. 12)

Gaab (2017) recommends incorporating eight key characteristics when determining an optimal screening battery for an individual classroom, school, or district.


  • Short - No longer than 30 minutes

  • Comprehensive - Includes phonological awareness, letter-sound knowledge, rapid naming, listening comprehension, and family history

  • Resourceful - Typically already part of school resources

  • Early - Beginning no later than kindergarten

  • ESL/Dialect Inclusion - All learners are assessed

  • Neurobiology/Genetics - Family history is examined

  • Evidence-based response to screening - Screening is followed by evidence-based intervention for the students identified as in need

  • Developmentally appropriate - Appropriate for the age of the students being assessed

Dyslexia Connection: Dyslexia has been estimated to occur in 5% to 17% of the population of school-age children (Handbook of Clinical Neurology, 2013). Special education law requires schools to recognize and provide appropriate education for dyslexia. A diagnosis is not required to receive intensive, explicit, systematic, multisensory teaching of the structure of the English language (Vellutino & Fletcher, 2009). Districts can use a multi-tiered system of support to identify students with dyslexic profiles as early as kindergarten. The goal of the process is to address students’ learning needs through evidence-based instruction and assessment that is effective for all students, including students with dyslexia.

Skills to Assess During Screening

No screening tool can measure all reading skills. Therefore, it’s important to understand the purpose behind universal screening, the science of how children learn to read, and what indicators are warning signs for dyslexia.


Kindergarten

Kindergarten screening is most meaningful when done at least twice (fall and spring) but can also be done three times a year. There are three main areas of skill development that predict risk of later challenges with accuracy and/or automaticity in word reading: phonemic awareness, alphabetic knowledge, and rapid automatized naming.

Phonemic Awareness refers to students’ knowledge of individual sounds in language. Skills typically mastered in kindergarten include:

  • Identifying the first, last, and middle sound in a word

  • Ability to blend 2 or 3 sounds together, a skill that often emerges in late kindergarten

  • Ability to break apart the sounds in words, a skill that is measured through phoneme segmentation tasks

Note: Phoneme segmentation is a single task that can efficiently reveal students’ larger phonological skills and their verbal working memory skills.

Alphabetic Knowledge refers to students’ familiarity with how the sounds of language are represented in letters and letter patterns. At the beginning of kindergarten, assessment should include:

  • Letter name and/or letter sound tasks

By the end of the year, students' risk is predicted by:

  • Letter sound knowledge

  • Ability to decode nonsense words

Nonsense word decoding requires blending a series of letter sounds together to produce a nonsense word. Using nonsense words allows us to see what a student can do to decode an unfamiliar word. (Nevills, & Wolfe, 2009; Petscher, Fien, Stanley, Gearin, Gaab, Fletcher, & Johnson, 2019; Fien, Baker, Smolkowski, Smith, Kame’enui, & Beck, 2008).

Rapid Automatized Naming (RAN) refers to students’ ability to rapidly name a limited set of repeatedly-presented familiar symbols, such as objects/colors or letters/numbers. Students’ performance on Rapid Automatized Naming tasks is highly predictive of later reading automaticity, as the brain activity involved in naming symbols is also involved in oral reading fluency (Compton, Fuchs, Fuchs, Bouton, Gilbert, Barquero, Cho, & Crouch, 2010).

RAN and letter naming tasks are often confused because both require students to produce names of letters rather than sounds. There are several differences between the two.

Letter Naming:

  • Measures students’ broad knowledge of the alphabet

  • Can be timed or untimed

  • Requires students to name as many upper and lower-case letters as possible


Rapid Automatized Naming (RAN):

  • Always timed

  • Scores represent the rate at which students are able to retrieve the names of a limited set of symbols (e.g., objects or letters)

A RAN task has several important criteria, including naming letters in order from left-to-right and sufficient familiarity with the items to be named (Compton et al., 2010). The familiarity component is why some school districts choose object or color naming rather than letter or number naming for the initial kindergarten assessment.

First Grade

Screening in first grade is most effective when done three times a year: fall, winter, and spring. Phonemic awareness, alphabetic knowledge, and rapid naming tasks continue to be reliable predictors of future challenges with accuracy and/or automaticity. Real word identification and oral reading fluency are also introduced in first grade.

Phonemic Awareness in first grade is often measured by:

  • Ability to break apart all the individual sounds of a word in a phoneme segmentation task

  • Ability to manipulate sounds within a word by adding, deleting, or substituting sounds (middle or end of first grade)

Weaknesses in phonemic awareness at any point in the year can inform instructional decision-making.

Alphabetic Knowledge in first grade is highly predictive of later reading achievement. Students’ knowledge of individual letter-sound correspondences and ability to decode nonsense words is essential screening information both for predicting risk and informing instruction (Speece & Ritchey, 2005).

Word Reading emerges more fully in first grade. Single word recognition is an effective screening measure, especially when timed. The timing element indicates whether a student is likely decoding most words sound-by-sound or is recognizing some of the words automatically. Along with Oral Reading Fluency, Word Reading is highly predictive of reading fluency and comprehension in later grades, including performance on standardized assessments (Baker, Smolkowski, Katz, Fien, Seeley, Kame’Enui, & Beck, 2008).

Oral Reading Fluency measures both passage reading accuracy and fluency in a timed assessment, often accompanied by comprehension questions. It also gives information about students’ skills in the areas of vocabulary and syntax knowledge, both of which contribute to overall comprehension. Oral Reading Fluency scores are highly predictive of risk (Baker et al., 2008).

RAN should be administered once at the beginning of the year to serve as a predictor for later challenges with oral reading fluency. These tasks do not need to be administered again or progress monitored because they serve as a predictor of the likelihood of a reading disability in fluency, but not an outcome measure (Norton & Wolf, 2012).

Second Grade

Second grade screening is most effective when administered three times a year: fall, winter, and spring. After a few years of reading instruction, second graders are now screened using measures of decoding, passage reading fluency, reading comprehension, and RAN.

Decoding: Alphabetic knowledge continues to be highly predictive of later achievement. A typical phonics scope and sequence for second grade will begin with a review of short and long vowels before introducing new patterns that span all six syllable types (open, closed, r-controlled, CVC-e, vowel teams/diphthongs, consonant-le). Nonsense word tasks assess students’ knowledge of more complex phonics patterns and are highly effective for identifying students at risk for difficulties in accuracy and fluency when decoding unknown words (Speece & Ritchey, 2005).

Oral Reading Fluency continues to be an efficient measure of accuracy and fluency because it requires automaticity integrating skills such as decoding and sight word recognition. It also gives information about students’ skills in the areas of vocabulary and syntax knowledge, both of which contribute to overall comprehension. Oral reading fluency scores are highly predictive of risk.

Reading Comprehension assessments are recommended starting in second grade. Skills in this area are now a more reliable predictor of risk (Torgesen, 2004).

RAN should be administered once at the beginning of the year to serve as a predictor for later challenges with oral reading fluency. Just as in first grade, these tasks do not need to be administered again or progress monitored because they serve as a predictor of the likelihood of a reading disability in fluency, but not an outcome measure (Norton & Wolf, 2012).

Dyslexia Connection: Phonemic awareness and rapid automatized naming (RAN) have been identified as the best predictors of dyslexia (Moats & Dakin, 2008), and therefore should be included in either the universal screener or the screening for dyslexia in kindergarten through second grade. Screening for dyslexia is most meaningful when administered at least twice during the school year, such as fall and spring (Fletcher et al., 2020).

Selecting a Universal Screener

Choosing the right screening tool is critical. Instruments that are rigorously validated, not merely brilliantly marketed, are needed. Screening tools are meant to be:

  • Brief measures

  • Administered to all students at a particular grade level (except those with vision and/or hearing impairments)

  • Designed to collect reliable, valid data about students’ risk level

  • Used to better understand students’ academic needs

Wisconsin Statute 118.016 mandates that all public and charter school students enrolled in four-year-old kindergarten to second grade be assessed annually for reading readiness. While the Department of Public Instruction does not require a specific screener, school districts should consider three components of any universal screening tool:

  • Predictive validity

  • Classification accuracy

  • Norm-referenced scores

Clinical Psychologist Steve Dykstra explained these components in this way:

Predictive validity is a measure of how well the prediction of future performance matches actual performance along the entire range of performance from highest to lowest, not just at or near the cut score. It answers the question, "If we used this screener to predict how every child will perform at some point in the future, how good would those predictions be?"

Classification accuracy answers the question, "If we used this screener to divide our students into those considered at risk and those considered not to be at risk, how well would we do based on the outcome of their future performance?" (2013, p. 2)

Norm-referenced scores allow us to compare scores on multiple assessments to properly judge whether we have a consistent picture of performance, or whether some of the scores are aberrant and may need special consideration. Normative scoring also gives us better ability to track performance over time. Without normative scoring, we only know if a child scored above or below the cut score for being considered at risk. We do not know how far they may be above or below the cut score, how much that performance may have changed over time, or how it compares to other assessment data we may have on that child. (2013, p. 3)
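The three components Dykstra describes can be illustrated with a small calculation. The sketch below is purely hypothetical: the scores, cut score, and benchmark are invented for demonstration, and real screeners use validated norms rather than ad hoc numbers. It shows how predictive validity (a correlation across the full score range), classification accuracy (agreement between risk flags and later outcomes), and norm-referenced z-scores are computed.

```python
from statistics import mean, pstdev

# Hypothetical fall screener scores and end-of-year outcome scores for ten
# students; all numbers are invented for illustration only.
screener = [12, 35, 48, 20, 55, 40, 15, 60, 30, 50]
outcome = [18, 40, 52, 22, 58, 45, 20, 65, 36, 55]

def pearson_r(x, y):
    """Predictive validity: correlation between screener scores and later
    performance across the entire range, not just near the cut score."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (len(x) * pstdev(x) * pstdev(y))

# Classification accuracy: flag "at risk" below a cut score, then compare
# the flags against which students actually fell below benchmark later.
CUT, BENCHMARK = 32, 35  # invented thresholds for the demonstration
flagged = [s < CUT for s in screener]
struggled = [o < BENCHMARK for o in outcome]
true_positives = sum(f and t for f, t in zip(flagged, struggled))
sensitivity = true_positives / sum(struggled)  # strugglers correctly caught
accuracy = sum(f == t for f, t in zip(flagged, struggled)) / len(screener)

# Norm-referenced scores: a z-score tells us *how far* above or below the
# norming group's mean a student falls, not merely which side of the cut.
mu, sd = mean(screener), pstdev(screener)
z_scores = [round((s - mu) / sd, 2) for s in screener]
```

In this toy data every true struggler is flagged (sensitivity 1.0) but one flagged student does not go on to struggle, so overall classification accuracy is 0.9. The z-scores make that false positive visible in a way a binary at-risk flag cannot, which is Dykstra's point about normative scoring.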

The following resources can be used to learn more and determine which screener is best for your school or school district:

  • National Center on Intensive Intervention (NCII) Academic Screening Tools Chart (NCII Tools Chart) to evaluate the scientific evidence of available screeners.

  • Gaab Lab Early Literacy Assessment/Screener List created by the research teams at the Gaab Lab at Boston Children’s Hospital and the Gabrieli Lab at MIT. The list of screeners for dyslexia risk and early literacy milestones provided in the table is the most comprehensive list available. Gaab and Gabrieli Labs note that the table is not a list of recommended screeners, but rather an overview of ALL screeners, meant to help you compare them. Not all screeners are on the NCII Tools Chart, which only reviews screeners that were directly submitted for evaluation. The NCII Tools Chart will ultimately help you to determine a good screener for your district or school.

  • Screening for Dyslexia. A Report by the National Center for Improving Literacy (Petscher, et al., 2019).

  • National Center on Improving Literacy (2019). Best practices in universal screening.

It is highly recommended that districts use evidence-based tools to screen for risk of dyslexia, rather than individual tools created at the district level. Of late, many school districts are opting to use seemingly low-cost survey or questionnaire-type screeners (asking teachers a series of questions) for assessing dyslexia risk instead of assessing the child directly (Gaab, 2020). This is problematic because several research studies have shown that teacher surveys are poorly correlated with the actual performance of a child, especially at the beginning of kindergarten (or in any grade while teachers are still getting to know the student). In Examining the Accuracy of Teachers’ Judgments of DIBELS Performance, researchers note that teachers’ judgments of students’ early literacy skills alone may be insufficient to accurately identify students at risk for reading difficulties. Dr. Nadine Gaab indicates that survey or questionnaire-type screeners are biased, often poorly designed, and not rigorously validated. They may be cheaper, but are in fact “wasting resources, harming students and hurting advocacy efforts since these tools will lead to inaccurate screenings and will lead to misconceptions that screeners don’t work" (Gaab, 2020). These types of survey assessments should not be used as the sole means of identifying struggling readers in the classroom, but rather could be used to complement direct assessment (Martin & Shapiro, 2011; Graney, 2008; Cabell et al., 2009).

Sensitivity of Balanced Literacy Benchmarking Systems

Administrators really like Fountas and Pinnell’s Benchmark Assessment System (BAS) because it generates a number that can be put on a spreadsheet. It’s not a particularly instructive number, but it’s a number.

Margaret Goldberg, The Right to Read Project

Universal Screening vs. Identifying Dyslexia

“Screening focuses on a specific set of skills that indicate reading readiness or skills that can predict future reading success, such as phonemic awareness and letter-naming fluency.

Identification or diagnosis focuses on gathering clinical evidence to make a clinical determination.

Diagnostic tests of reading examine more complex skills, such as comprehension and cognitive processes. A screener can lead to a diagnosis, but a diagnosis will need to come from a professional who is approved to diagnose dyslexia." (Pons, 2016)

Diagnostic Assessments

Any student who scores below the benchmark on the universal screener or is otherwise identified as “at risk” should be administered diagnostic assessments to determine student intervention needs. These diagnostic assessments for reading must explicitly measure foundational reading skills and characteristics of dyslexia, including:

  • Phonological and phonemic awareness

  • Sound-symbol recognition

  • Alphabet knowledge

  • Decoding skills

  • Rapid naming

  • Encoding skills

Informal diagnostic assessments can be used by classroom teachers, Title I teachers, or reading specialists to identify the specific area(s) of need for students. These differ from the formal, norm-referenced assessments school psychologists or special education teachers might use to determine special education eligibility (Hasbrouck, 2020).

There is a solid body of evidence that shows students can be screened for risk of dyslexia before receiving any formal reading instruction. Researchers have identified the brain activation patterns of dyslexia through the use of neuroimaging technology. This “neural signature” is characterized by less activity in key areas of the brain responsible for processing the sounds in language, matching up sounds with letters, and the retrieval of linguistic information (Shaywitz et al., 2002; Ozernov-Palchik, Yu, Wang, & Gaab, 2016). Furthermore, young children who performed poorly in the pre-reading skills of phonemic awareness, rapid automatized object naming, and letter identification and were later diagnosed with dyslexia exhibited these underactive patterns before receiving any formal reading instruction (Ozernov-Palchik et al., 2016). This finding suggests that dyslexia is not a result of the daily struggle to learn to read; rather, students possess brain activation patterns that put them at risk before receiving formal instruction (Im, Raschle, Smith, Grant, & Gaab, 2016).

Screening students for the characteristics of dyslexia, as well as other reading disabilities, is an essential step in preventing reading difficulties. Early screening is critical because a dyslexia diagnosis in elementary school has historically depended on what Dr. Nadine Gaab, Harvard Medical School researcher and developmental cognitive neuroscientist, calls a “wait-to-fail approach” (2017). This approach requires a child to struggle to learn to read over a prolonged period of time before more intensive (more frequent and higher quality) intervention strategies are instituted. So while a dyslexia diagnosis usually is not given before the end of second grade or the beginning of third grade (after the prolonged period of failing), intensive interventions are most effective in kindergarten or first grade (Wanzek & Vaughn, 2007).

There are several reasons why this “wait to fail” framework is problematic. Reading challenges can have a negative impact on the emotional well-being of struggling readers (Mugnaini, Lassi, La Malfa, & Albertini, 2009). Also, when reading interventions are initiated in third grade or after, struggling readers have tremendous difficulty meeting grade-level expectations (Wanzek & Vaughn, 2007).

In addition, schools should screen older students or students who scored at or above benchmark but perform poorly in the classroom or display other indicators for dyslexia.

Example Reading Diagnostic & Progress Monitoring Assessments

There are many examples of valid and reliable reading assessments. Many can be used for both diagnostic and progress monitoring purposes.

Example Assessments - WI Dyslexia Roadmap.pdf

Podcasts & Videos

Additional Resources About Reading Assessments


Baker, S. K., Smolkowski, K., Katz, R., Fien, H., Seeley, J. R., Kame’enui, E. J., & Beck, C. T. (2008). Reading fluency as a predictor of reading proficiency in low-performing, high-poverty schools. School Psychology Review, 37(1), 18–37.

Bambrick-Santoyo, P. (2019). Driven by data 2.0: A practical guide to improve instruction (2nd ed.). Jossey-Bass.

Breznitz, Z. (2006). Fluency in reading: Synchronization of processes. L. Erlbaum Associates.

Cabell, S. Q., Justice, L. M., Zucker, T. A., & Kilday, C. R. (2009). Validity of teacher report for assessing the emergent literacy skills of at-risk preschoolers. Language, Speech, and Hearing Services in Schools, 40(2), 161–173.

Colorado Department of Education. (2020, February 25). Colorado Department of Education dyslexia handbook. Office of Special Education.

Compton, D. L., Fuchs, D., Fuchs, L. S., Bouton, B., Gilbert, J. K., Barquero, L. A., Cho, E., & Crouch, R. C. (2010). Selecting at-risk first-grade readers for early intervention: Eliminating false positives and exploring the promise of a two-stage gated screening process. Journal of Educational Psychology, 102(2), 327–340.

Coyne, M. D. & Harn, B. A. (2006) Promoting beginning reading success through meaningful assessment of early literacy skills. Psychology in the Schools, 43(1), 33.

Duty, L. (2018, November 8). Marysville Schools leading the way: Evolving best practices for learners with dyslexia. International Dyslexia Association.

Dykstra, S. P. (2013). Selecting screening instruments: Focus on predictive validity, classification accuracy and norm reference scoring. A Literate Nation White Paper, Appendix C, pp. 10–14. San Francisco, CA: Literate Nation.

Fien, H., Baker, S. K., Smolkowski, K., Smith, J. L. M., Kame’enui, E. J., & Beck, C. T. (2008). Using nonsense word fluency to predict reading proficiency in kindergarten through second grade for English learners and native English speakers. School Psychology Review, 37(3), 391–408.

Fletcher, J. M., Francis, D. J., Foorman, B. R., & Schatschneider, C. (2020). Early detection of dyslexia risk: Development of brief, teacher-administered screens. Learning Disability Quarterly, 073194872093187.

Gaab Lab. (2021, March 1). Early literacy screening tools. Retrieved January 15, 2022.

Gaab, N. [@GaabLab]. (2020, January 10). Many school districts are deciding to use “survey” or “questionnaire” #screeners (asking teachers a series of questions) for assessing #dyslexia [Tweet]. Twitter.

Gaab, N., Ph. D. (2017, February). It’s a myth that young children cannot be screened for dyslexia! International Dyslexia Association.

Goldberg, M. (2019, September 29). Fountas and Pinnell benchmark assessment system: Doesn’t look right, sound right, or make sense [Blog post]. Reading Rockets. Retrieved January 15, 2022.

Graney, S. B. (2008). General education teacher judgments of their low-performing students’ short-term reading progress. Psychology in the Schools, 45(6), 537–549.

Habib, M., & Giraud, K. (2013). Dyslexia. In Dulac, O., Lassonde, M., Sarnat, H. B. (Eds.), Handbook of clinical neurology (Vol. 111, pp. 229–235). Elsevier.

Hasbrouck, J. (2020). Conquering dyslexia: A guide to early detection and intervention for teachers and families. Benchmark Education Company, NY.

IDA Editorial Contributors. (2015, June 11). Testing and evaluation. International Dyslexia Association.

Im, K., Raschle, N. M., Smith, S. A., Ellen Grant, P., & Gaab, N. (2015). Atypical sulcal pattern in children with developmental dyslexia and at-risk kindergarteners. Cerebral Cortex, 26(3), 1138–1148.

Kilpatrick, D. A. (2015). Essentials of assessing, preventing, and overcoming reading difficulties (essentials of psychological assessment). John Wiley & Sons.

Martin, S. D., & Shapiro, E. S. (2011). Examining the accuracy of teachers' judgments of DIBELS performance. Psychology in the Schools, 48(4), 343–356.

McKenna, M. C. & Walpole, S. (2005) How well does assessment inform our reading instruction? The Reading Teacher, 59(1), 84-86.

Moats, L. C., & Dakin, K. E. (2008). Basic facts about dyslexia and other reading problems. Baltimore, MD: The International Dyslexia Association.

Mugnaini, D., Lassi, S., La Malfa, G., & Albertini, G. (2009). Internalizing correlates of dyslexia. World Journal of Pediatrics, 5(4), 255–264.

National Center on Improving Literacy. (2019). Best practices in universal screening. Washington, DC: U.S. Department of Education, Office of Elementary and Secondary Education, Office of Special Education Programs, National Center on Improving Literacy.

National Center on Intensive Intervention at American Institutes for Research. (2020, June). Academic screening tools chart. National Center on Intensive Intervention.

Nevills, P. A., & Wolfe, P. A. (2009). Building the reading brain, preK-3 (Second ed.). Corwin.

Norton, E. S., & Wolf, M. (2012). Rapid automatized naming (RAN) and reading fluency: Implications for understanding and treatment of reading disabilities. Annual Review of Psychology, 63(1), 427–452.

Ohio Department of Education. (2020, January). Ohio’s plan to raise literacy achievement.

Ozernov-Palchik, O., Yu, X., Wang, Y., & Gaab, N. (2016). Lessons to be learned: how a comprehensive neurobiological framework of atypical reading development can inform educational practice. Current opinion in behavioral sciences, 10, 45–58.

Petscher, Y., Fien, H., Stanley, C., Gearin, B., Gaab, N., Fletcher, J. M., & Johnson, E. (2019). Screening for dyslexia. Washington, DC: U.S. Department of Education, Office of Elementary and Secondary Education, Office of Special Education Programs, National Center on Improving Literacy.

Pons, D. (2016). Dyslexia: What every educator needs to know.

Shaywitz, B. A., Shaywitz, S. E., Pugh, K. R., Mencl, W., Fulbright, R. K., Skudlarski, P., Constable, R., Marchione, K. E., Fletcher, J. M., Lyon, G., & Gore, J. C. (2002). Disruption of posterior brain systems for reading in children with developmental dyslexia. Biological Psychiatry, 52(2), 101–110.

Speece, D. L., & Ritchey, K. D. (2005). A longitudinal study of the development of oral reading fluency in young children at risk for reading failure. Journal of Learning Disabilities, 38(5), 387–399.

Tennessee Department of Education. (2018). Dyslexia resource guide: Guidance on the "Say Dyslexia" law.

Torgesen, J. K. (1998). Catch them before they fall: Identification and assessment to prevent reading failure in young children. The American Educator, 22, 32–39.

Torgesen, J. K. (2004). Avoiding the devastating downward spiral: The evidence that early intervention prevents reading failure. American Educator, 28, 6–19.

Vellutino, F. R., & Fletcher, J. M. (2009). Developmental dyslexia. In M. J. Snowling & C. Hulme (Eds.), The science of reading: A handbook (pp. 362–378). Blackwell Publishing.

Vellutino, F. R., Scanlon, D. M., Sipay, E. R., Small, S. G., Pratt, A., Chen, R., & Denckla, M. B. (1996). Cognitive profiles of difficult-to-remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experiential deficits as basic causes of specific reading disability. Journal of Educational Psychology, 88(4), 601–638.

Wanzek, J., & Vaughn, S. (2007). Research-based implications from extensive early reading interventions. School Psychology Review, 36(4), 541–561.

Wisconsin RtI Center, Wisconsin Department of Public Instruction. (2017). Wisconsin’s framework for equitable multi-level systems of supports.