i-Ready Literacy Tasks
Passage Reading Fluency

Summary

The i-Ready Literacy Task for Passage Reading Fluency can help determine a student’s oral reading fluency proficiency, progress, and individual instruction needs. These tasks evaluate a student’s oral reading of connected text to determine passage reading accuracy, rate (Words Correct Per Minute), prosody, and comprehension. Both Benchmark Tasks and Progress Monitoring Tasks are available for students in grades 1–6 (recommended for use from mid- to late-grade 1 through grade 6). Four Benchmark forms per grade are available for screening in fall, winter, and spring, with one additional form available if needed. Twenty-four progress monitoring forms per grade provide educators with a comparable tool to monitor student progress and evaluate instructional interventions.

Where to Obtain:
Curriculum Associates, LLC
RFPs@cainc.com
153 Rangeway Road, N. Billerica MA 01862
800-225-0248
www.curriculumassociates.com
Initial Cost:
$8.00 per student
Replacement Cost:
$8.00 per student per year
Included in Cost:
$8.00/student/year for i-Ready Assessment for reading, which includes Passage Reading Fluency for grades 1–6. i-Ready is a fully web-based, vendor-hosted, Software-as-a-Service application, with the i-Ready Literacy Tasks available as PDFs that are printed from within the i-Ready system. The per-student or site-based license fee includes account set-up and management; unlimited access to i-Ready’s assessment, management, and reporting functionality; plus unlimited access to U.S.-based customer service/technical support and all program maintenance, updates, and enhancements for as long as the license remains active. The license fee also includes hosting, data storage, and data security. Via the i-Ready teacher and administrator dashboards and i-Ready Central support website, educators may access comprehensive user guides and downloadable lesson plans, as well as implementation tips, best practices, video tutorials, and more to supplement onsite, fee-based professional development. These online resources are self-paced and available 24/7. The Literacy Tasks also have a digital administration feature in which a teacher can score a student in real time using a computer or iPad (rather than scoring the student on paper and inputting scores into i-Ready). This new feature is currently available at no charge and may incur an additional fee in later years. Professional development is required and available at an additional cost ($2,300/session up to six hours). Site-license pricing is also available.
The document linked below includes considerations and guidance related to the administration of i-Ready Literacy Tasks, including Passage Reading Fluency Tasks, for students with specific disabilities. While all decisions about appropriateness of tasks must be made by educators who have access to information about students’ IEPs, 504 plans, or other documented needs, the information in this document may be helpful to include in the decision-making process. We recommend that educators review this document, as well as each task, and apply what they know about their students to determine whether tasks are appropriate. FAQ: i-Ready Literacy Tasks Accessibility and Accommodations Guidance: https://cdn.bfldr.com/LS6J0F7/at/cqftmn8kmf3p5s43sc8w9z2q/iready-faq-literacy-tasks-accessibility-guidance.pdf. The linked documents and resources are housed on our Accessibility & Accommodations Resource Hub (https://www.curriculumassociates.com/reviews/ireadyaccessibility), along with other helpful accessibility resources such as FAQs, feature overviews, and video demonstrations.
Training Requirements:
Training not required
Qualified Administrators:
No minimum qualifications specified.
Access to Technical Support:
Support is available through dedicated i-Ready Partners (Partner Success Manager, Professional Learning Specialist), unlimited access to in-house technical support during business hours, and self-service resources on i-ReadyCentral.com/LiteracyTasks. Self-service materials are available on i-Ready Central and through our Online Educator Learning platform. Materials include guidance documents, recorded webinars, and administration videos with scoring practice options.
Assessment Format:
Scoring Time:
  • 3 minutes per student and passage
Scores Generated:
  • Raw score
  • Percentile score
  • Grade equivalents
  • Other: On-grade performance level placements. These levels are based on the nationally recognized Hasbrouck and Tindal (2017) norms for oral reading fluency and are expressed as “Below Level,” “On Level,” and “Above Level.” In addition, the Below Level placement is divided into three ranges based on percentiles: Below (0-10th percentile), Below (11-24th percentile), and Below (25-49th percentile). Hasbrouck, J., & Tindal, G. (2017). An update to compiled ORF norms (Technical Report No. 1702). Eugene, OR: Behavioral Research and Teaching, University of Oregon.
Administration Time:
  • 2 minutes per student and passage
Scoring Method:
  • Manually (by hand)
  • Other: The Literacy Tasks also have a digital administration feature in which a teacher can score a student in real time using a computer or iPad (rather than scoring the student on paper and inputting scores into i-Ready). This new feature is currently available at no charge and may incur an additional fee in later years. When the digital administration feature is used, all scores are calculated automatically based on the inputs from the individual administering the test.
Technology Requirements:
  • Computer or tablet
  • Internet connection
Accommodations:
The document linked below includes considerations and guidance related to the administration of i-Ready Literacy Tasks, including Passage Reading Fluency Tasks, for students with specific disabilities. While all decisions about appropriateness of tasks must be made by educators who have access to information about students’ IEPs, 504 plans, or other documented needs, the information in this document may be helpful to include in the decision-making process. We recommend that educators review this document, as well as each task, and apply what they know about their students to determine whether tasks are appropriate. FAQ: i-Ready Literacy Tasks Accessibility and Accommodations Guidance: https://cdn.bfldr.com/LS6J0F7/at/cqftmn8kmf3p5s43sc8w9z2q/iready-faq-literacy-tasks-accessibility-guidance.pdf. The linked documents and resources are housed on our Accessibility & Accommodations Resource Hub (https://www.curriculumassociates.com/reviews/ireadyaccessibility), along with other helpful accessibility resources such as FAQs, feature overviews, and video demonstrations.

Descriptive Information

Please provide a description of your tool:
The i-Ready Literacy Task for Passage Reading Fluency can help determine a student’s oral reading fluency proficiency, progress, and individual instruction needs. These tasks evaluate a student’s oral reading of connected text to determine passage reading accuracy, rate (Words Correct Per Minute), prosody, and comprehension. Both Benchmark Tasks and Progress Monitoring Tasks are available for students in grades 1–6 (recommended for use from mid- to late-grade 1 through grade 6). Four Benchmark forms per grade are available for screening in fall, winter, and spring, with one additional form available if needed. Twenty-four progress monitoring forms per grade provide educators with a comparable tool to monitor student progress and evaluate instructional interventions.
The tool is intended for use with the following grade(s).
not selected Preschool / Pre - kindergarten
not selected Kindergarten
selected First grade
selected Second grade
selected Third grade
selected Fourth grade
selected Fifth grade
selected Sixth grade
not selected Seventh grade
not selected Eighth grade
not selected Ninth grade
not selected Tenth grade
not selected Eleventh grade
not selected Twelfth grade

The tool is intended for use with the following age(s).
not selected 0-4 years old
not selected 5 years old
not selected 6 years old
not selected 7 years old
not selected 8 years old
not selected 9 years old
not selected 10 years old
not selected 11 years old
not selected 12 years old
not selected 13 years old
not selected 14 years old
not selected 15 years old
not selected 16 years old
not selected 17 years old
not selected 18 years old

The tool is intended for use with the following student populations.
selected Students in general education
selected Students with disabilities
selected English language learners

ACADEMIC ONLY: What skills does the tool screen?

Reading
Phonological processing:
not selected RAN
not selected Memory
not selected Awareness
not selected Letter sound correspondence
not selected Phonics
not selected Structural analysis

Word ID
not selected Accuracy
not selected Speed

Nonword
not selected Accuracy
not selected Speed

Spelling
not selected Accuracy
not selected Speed

Passage
selected Accuracy
selected Speed

Reading comprehension:
not selected Multiple choice questions
not selected Cloze
not selected Constructed Response
selected Retell
not selected Maze
not selected Sentence verification
not selected Other (please describe):


Listening comprehension:
not selected Multiple choice questions
not selected Cloze
not selected Constructed Response
not selected Retell
not selected Maze
not selected Sentence verification
not selected Vocabulary
not selected Expressive
not selected Receptive

Mathematics
Global Indicator of Math Competence
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Early Numeracy
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Concepts
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Computation
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematic Application
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Fractions/Decimals
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Algebra
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Geometry
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

not selected Other (please describe):

Please describe specific domain, skills or subtests:
The PRF tasks also measure prosody (phrasing and intonation) via an educator-scored rubric.
BEHAVIOR ONLY: Which category of behaviors does your tool target?


BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each sub-domain or sub-construct.

Acquisition and Cost Information

Where to obtain:
Email Address
RFPs@cainc.com
Address
153 Rangeway Road, N. Billerica MA 01862
Phone Number
800-225-0248
Website
www.curriculumassociates.com
Initial cost for implementing program:
Cost
$8.00
Unit of cost
student
Replacement cost per unit for subsequent use:
Cost
$8.00
Unit of cost
student
Duration of license
year
Additional cost information:
Describe basic pricing plan and structure of the tool. Provide information on what is included in the published tool, as well as what is not included but required for implementation.
$8.00/student/year for i-Ready Assessment for reading, which includes Passage Reading Fluency for grades 1–6. i-Ready is a fully web-based, vendor-hosted, Software-as-a-Service application, with the i-Ready Literacy Tasks available as PDFs that are printed from within the i-Ready system. The per-student or site-based license fee includes account set-up and management; unlimited access to i-Ready’s assessment, management, and reporting functionality; plus unlimited access to U.S.-based customer service/technical support and all program maintenance, updates, and enhancements for as long as the license remains active. The license fee also includes hosting, data storage, and data security. Via the i-Ready teacher and administrator dashboards and i-Ready Central support website, educators may access comprehensive user guides and downloadable lesson plans, as well as implementation tips, best practices, video tutorials, and more to supplement onsite, fee-based professional development. These online resources are self-paced and available 24/7. The Literacy Tasks also have a digital administration feature in which a teacher can score a student in real time using a computer or iPad (rather than scoring the student on paper and inputting scores into i-Ready). This new feature is currently available at no charge and may incur an additional fee in later years. Professional development is required and available at an additional cost ($2,300/session up to six hours). Site-license pricing is also available.
Provide information about special accommodations for students with disabilities.
The document linked below includes considerations and guidance related to the administration of i-Ready Literacy Tasks, including Passage Reading Fluency Tasks, for students with specific disabilities. While all decisions about appropriateness of tasks must be made by educators who have access to information about students’ IEPs, 504 plans, or other documented needs, the information in this document may be helpful to include in the decision-making process. We recommend that educators review this document, as well as each task, and apply what they know about their students to determine whether tasks are appropriate. FAQ: i-Ready Literacy Tasks Accessibility and Accommodations Guidance: https://cdn.bfldr.com/LS6J0F7/at/cqftmn8kmf3p5s43sc8w9z2q/iready-faq-literacy-tasks-accessibility-guidance.pdf. The linked documents and resources are housed on our Accessibility & Accommodations Resource Hub (https://www.curriculumassociates.com/reviews/ireadyaccessibility), along with other helpful accessibility resources such as FAQs, feature overviews, and video demonstrations.

Administration

BEHAVIOR ONLY: What type of administrator is your tool designed for?
not selected General education teacher
not selected Special education teacher
not selected Parent
not selected Child
not selected External observer
not selected Other
If other, please specify:

What is the administration setting?
not selected Direct observation
not selected Rating scale
not selected Checklist
not selected Performance measure
not selected Questionnaire
not selected Direct: Computerized
not selected One-to-one
not selected Other
If other, please specify:

Does the tool require technology?
Yes

If yes, what technology is required to implement your tool? (Select all that apply)
selected Computer or tablet
selected Internet connection
not selected Other technology (please specify)

If your program requires additional technology not listed above, please describe the required technology and the extent to which it is combined with teacher small-group instruction/intervention:

What is the administration context?
selected Individual
not selected Small group   If small group, n=
not selected Large group   If large group, n=
not selected Computer-administered
not selected Other
If other, please specify:

What is the administration time?
Time in minutes
2
per (student/group/other unit)
student and passage

Additional scoring time:
Time in minutes
3
per (student/group/other unit)
student and passage

ACADEMIC ONLY: What are the discontinue rules?
not selected No discontinue rules provided
not selected Basals
not selected Ceilings
selected Other
If other, please specify:
Our administration guidance notes the following: "If the student is unable to read the first line of the presented passage, then discontinue the task."


Are norms available?
Yes
Are benchmarks available?
Yes
If yes, how many benchmarks per year?
Three
If yes, for which months are benchmarks available?
Fall, Winter, Spring
BEHAVIOR ONLY: Can students be rated concurrently by one administrator?
If yes, how many students can be rated concurrently?

Training & Scoring

Training

Is training for the administrator required?
No
Describe the time required for administrator training, if applicable:
i-Ready Literacy Tasks were intentionally designed with administration guidance that makes it possible for educators to administer them with little or no formal training. Various training options are available to educators interested in using the i-Ready Literacy Tasks. Professional learning specialists can visit a district to provide live trainings, with Literacy Task training lengths varying based on the district’s needs and scope of implementation. In many cases, training on the Literacy Tasks is folded into training on the computer-adaptive i-Ready Diagnostic assessment and i-Ready Personalized Instruction lessons. These trainings are available at additional cost and can also be provided virtually. In addition to live trainings, i-Ready has an asynchronous learning platform known as the Online Educator Learning System. This system, available at no additional cost, features on-demand courses that can help educators understand how to use the Literacy Tasks. Courses include Getting Started with i-Ready Literacy Tasks (10 minutes) and i-Ready Literacy Tasks Administration and Scoring (30 minutes). Finally, Curriculum Associates has worked extensively to provide educators with the information they need right within the i-Ready system to administer Literacy Tasks with fidelity even with little or no training, although training is always recommended where possible.
Please describe the minimum qualifications an administrator must possess.
selected No minimum qualifications
Are training manuals and materials available?
Yes
Are training manuals/materials field-tested?
Yes
Are training manuals/materials included in cost of tools?
Yes
If No, please describe training costs:
In addition to our no-cost training materials, facilitated professional development is available for an additional cost if districts/schools have not already purchased a professional learning package. If they have purchased a package, Passage Reading Fluency training can be part of that package.
Can users obtain ongoing professional and technical support?
Yes
If Yes, please describe how users can obtain support:
Support is available through dedicated i-Ready Partners (Partner Success Manager, Professional Learning Specialist), unlimited access to in-house technical support during business hours, and self-service resources on i-ReadyCentral.com/LiteracyTasks. Self-service materials are available on i-Ready Central and through our Online Educator Learning platform. Materials include guidance documents, recorded webinars, and administration videos with scoring practice options.

Scoring

How are scores calculated?
selected Manually (by hand)
not selected Automatically (computer-scored)
selected Other
If other, please specify:
The Literacy Tasks also have a digital administration feature in which a teacher can score a student in real time using a computer or iPad (rather than scoring the student on paper and inputting scores into i-Ready). This new feature is currently available at no charge and may incur an additional fee in later years. When the digital administration feature is used, all scores are calculated automatically based on the inputs from the individual administering the test.

Do you provide basis for calculating performance level scores?
Yes
What is the basis for calculating performance level and percentile scores?
not selected Age norms
selected Grade norms
not selected Classwide norms
not selected Schoolwide norms
not selected Stanines
not selected Normal curve equivalents

What types of performance level scores are available?
selected Raw score
not selected Standard score
selected Percentile score
selected Grade equivalents
not selected IRT-based score
not selected Age equivalents
not selected Stanines
not selected Normal curve equivalents
not selected Developmental benchmarks
not selected Developmental cut points
not selected Equated
not selected Probability
not selected Lexile score
not selected Error analysis
not selected Composite scores
not selected Subscale/subtest scores
selected Other
If other, please specify:
On-grade performance level placements. These levels are based on the nationally recognized Hasbrouck and Tindal (2017) norms for oral reading fluency and are expressed as “Below Level,” “On Level,” and “Above Level.” In addition, the Below Level placement is divided into three ranges based on percentiles: Below (0-10th percentile), Below (11-24th percentile), and Below (25-49th percentile). Hasbrouck, J., & Tindal, G. (2017). An update to compiled ORF norms (Technical Report No. 1702). Eugene, OR: Behavioral Research and Teaching, University of Oregon.

Does your tool include decision rules?
Yes
If yes, please describe.
If the student’s placement level is On or Above, the student is showing proficiency with passage reading fluency and should continue to be supported through grade-level core instruction with a focus on reading increasingly complex connected text. If the student’s placement is Below, the student would likely benefit from further investigation and potentially additional instructional support for foundational reading skills, including automatic word recognition and decoding. If foundational skills are on grade level, the student should continue working on grade-level connected text with a focus on integrating automatic word recognition and effortless decoding, leading to fluency.
Can you provide evidence in support of multiple decision rules?
No
If yes, please describe.
Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
Scoring for Passage Reading Fluency consists of determining the average number of words correct per minute (WCPM) across two passages. To calculate WCPM for a passage, determine the number of words read in one minute and subtract the number of words skipped or read incorrectly; repeat this calculation for the second passage, then average the two WCPM values. This average WCPM is the student’s benchmark score and can be compared to the percentiles provided on the 2017 Hasbrouck & Tindal passage reading fluency norms chart. If a disruption occurs during one of the two passage administrations, a third backup passage is provided. If another disruption occurs during administration of the backup passage, the WCPM on a single passage may be recorded as the benchmark WCPM score in place of the average WCPM on two passages.

In addition to WCPM, accuracy is calculated as the number of words read correctly divided by the total number of words read, expressed as a percentage. Accuracy shows how well the student is decoding and recognizing words without their reading rate factored in.

Comprehension and Prosody scores are based on four-point rubrics. The Comprehension score should be based on the student’s retelling of the whole passage, and the Prosody score should be based on the student’s reading of the whole passage. Prosody and linguistic diversity: students’ expressive speech in English varies based on their geographic regions, their home languages, their familiarity with English, and other aspects of their linguistic and cultural backgrounds. This should be taken into consideration when evaluating a student’s prosody. For some students, the criteria on the Prosody Rubric for phrasing may provide more relevant information about their oral reading fluency than the criteria for intonation. In these instances, the educator may weigh phrasing criteria more heavily in the score selection on the Prosody Rubric.

The rubric for the Comprehension score is based on the objective: retells details or provides a summary statement to show an understanding of the text. For a score of 1 (Beginning), a student in grades 1–2 retells only one accurate detail, and a student in grades 3–6 retells only one or two accurate details, demonstrating insufficient understanding of the text. For a score of 2 (Developing), a student retells minimal accurate details that cover a small portion of the passage, demonstrating a partial understanding of the text. For a score of 3 (Proficient), a student retells enough accurate details to cover a significant portion of the passage, or provides an acceptable summary statement, demonstrating a sufficient understanding of the text. For a score of 4 (Exemplary), a student accurately retells almost all details or provides a comprehensive summary statement that includes supporting details, demonstrating a thorough understanding of the text.

The rubric for the Prosody score is based on the objective: reads with expression (phrasing and intonation). For a score of 1 (Beginning), a student reads primarily word-by-word, hesitating between words, and reads primarily in a monotone voice. For a score of 2 (Developing), a student frequently reads word-by-word, with occasional long pauses between words; may read with some sentence phrasing; and reads in a monotone voice but may occasionally vary pitch and volume to read expressively. For a score of 3 (Proficient), a student frequently reads with sentence phrasing, with only occasional word-by-word reading, and varies pitch and volume to read expressively but may occasionally read in a monotone voice. For a score of 4 (Exemplary), a student consistently reads with sentence phrasing and reads primarily in an expressive voice, varying pitch and volume to deliver an engaging interpretation of the text.
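The WCPM and accuracy calculations described above can be expressed in a short sketch. This is a minimal illustration under the scoring rules stated here, not vendor code; the function names and the sample numbers are hypothetical.

```python
def passage_wcpm(words_attempted_in_minute: int, errors: int) -> int:
    """WCPM for a single one-minute passage reading: words attempted in the
    minute minus words skipped or read incorrectly."""
    return words_attempted_in_minute - errors

def benchmark_wcpm(passage_scores: list[int]) -> float:
    """Benchmark score: the average WCPM across two passages. If a disruption
    leaves only one usable passage, that single WCPM stands in."""
    return sum(passage_scores) / len(passage_scores)

def accuracy(words_correct: int, total_words_read: int) -> float:
    """Accuracy: words read correctly divided by total words read, as a percentage."""
    return 100.0 * words_correct / total_words_read

# Hypothetical example: 52 and 58 words attempted, with 4 and 3 errors.
p1 = passage_wcpm(52, 4)              # 48 WCPM on passage 1
p2 = passage_wcpm(58, 3)              # 55 WCPM on passage 2
print(benchmark_wcpm([p1, p2]))       # 51.5 average WCPM (benchmark score)
print(round(accuracy(48, 52), 1))     # 92.3% accuracy on passage 1
```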
Describe the tool’s approach to screening, samples (if applicable), and/or test format, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
The Passage Reading Fluency task measures a student’s ability to read text accurately, at an appropriate rate, and with suitable expression (i.e., prosody). Research shows that students who read text fluently also have better reading comprehension. Passages are designed to align to a specific grade level, using both quantitative (Lexile) and qualitative (layers of meaning/purpose, knowledge demands, language complexity, and text structure) factors. Additionally, because the passages are intended to be read aloud, some text elements are avoided or kept to a minimum, such as the use of foreign words or phrases and complex plot or time structures. The passages were reviewed by educators before they were launched as part of the PRF tasks, and some edits and realignments to grade level were made based on educator feedback. Regarding representation in the passages, cultural and linguistic responsiveness work involves authentically representing various cultural and linguistic backgrounds, while ensuring that students can draw connections between the content and their own cultural and linguistic identities. We have increasingly sought to support educators’ pursuit of culturally responsive teaching and to improve the cultural and linguistic responsiveness of our products. Through ongoing partnerships with key advisors, we have conducted product reviews, undergone training, and developed guidelines and practices to understand where we are and plan where we need to go. One place where this work and research is evident is in the topics, characters, and settings depicted in the passages that make up the Passage Reading Fluency academic screening assessment, which represent a range of cultures and lived experiences. Curriculum Associates is also dedicated to ensuring the Literacy Tasks are accessible to as many students as possible. To aid educators in using the Literacy Tasks, a detailed FAQ is available that includes considerations for educators to keep in mind about the provision of specific accommodations and/or the use of i-Ready Literacy Tasks for English Learners and students with specific disabilities. Please refer to our i-Ready Literacy Tasks Accessibility and Accommodations Guidance: https://cdn.bfldr.com/LS6J0F7/at/cqftmn8kmf3p5s43sc8w9z2q/iready-faq-literacy-tasks-accessibility-guidance.pdf. While all decisions about appropriateness of tasks must be made by educators who have access to information about students’ IEPs, 504 plans, or other documented needs, the information in this document may be helpful to consider as one factor in the decision-making process. We recommend that educators review this document, as well as each task, and apply what they know about their students to determine whether tasks are appropriate. Specific guidance is provided for untimed accommodations; home language support; accommodations for students who are deaf or hard of hearing; accommodations for students who are blind, color blind, or have low vision; considerations for students who are non-verbal, have limited vocalizations, or have variances in articulation; and masking accommodations.

Technical Standards

Classification Accuracy & Cross-Validation Summary

Grade Grade 1
Grade 2
Grade 3
Grade 4
Grade 5
Grade 6
Classification Accuracy Fall Data unavailable Data unavailable Data unavailable Data unavailable Data unavailable Data unavailable
Classification Accuracy Winter Data unavailable Data unavailable Data unavailable Data unavailable Data unavailable Data unavailable
Classification Accuracy Spring Partially convincing evidence Convincing evidence Partially convincing evidence Convincing evidence Partially convincing evidence Partially convincing evidence
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available

i-Ready Diagnostic for Reading overall score

Classification Accuracy

Select time of year
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
For grades 1-6, the spring i-Ready Diagnostic for Reading overall score served as the criterion measure for classification accuracy. The i-Ready Diagnostic for Reading (referred to as Diagnostic) is a valid and reliable tool aligned to rigorous state standards across the following domains: Phonological Awareness, Phonics, High-Frequency Words, Vocabulary, Comprehension of Informational Text, and Comprehension of Literature. Although both the Diagnostic and the i-Ready Literacy Tasks are provided by Curriculum Associates, the method variance and lack of item overlap are consistent with the TRC requirements for two assessments from the same vendor establishing validity evidence. While both the Diagnostic and Passage Reading Fluency tasks are available within the i-Ready platform, they are completely separate assessments. The Diagnostic is a computer-adaptive assessment that administers on-grade and off-grade level items targeted to students’ interim proficiency. The Diagnostic scores and placement levels are modeled through item response theory, unlike Passage Reading Fluency, which is based on classical test theory. There is no overlap between items. The Diagnostic passages and items are developed to different content development standards compared to the passages for Passage Reading Fluency. There is no overlap between passages or items. Separate samples and criteria established the validity and reliability evidence for the Diagnostic compared to the validity and reliability evidence provided for Passage Reading Fluency. The overall score on the Diagnostic is highly correlated with measures of reading comprehension; therefore, it was used as an external measure to demonstrate classification accuracy for the Passage Reading Fluency forms. Concurrent classification accuracy is often considered better because data are collected at the same time, thereby reducing the impact of external factors. Therefore, similar classifications would indicate that the i-Ready Literacy Task Passage Reading Fluency is an appropriate measure.
Do the classification accuracy analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Classification accuracy is a way of measuring the degree of similarity in the classification results of two different measures. In this analysis, i-Ready Literacy Task Passage Reading Fluency and the Diagnostic overall score are compared to determine whether students would be grouped similarly by both measures. The data collection spanned twenty states across all regions of the United States. Both measures were administered within a similar time frame in the spring testing window. Since both measures are used to identify students with reading difficulties, a concurrent method for classification analyses was deemed appropriate. Concurrent classification accuracy is often considered better because data are collected at the same time, thereby reducing the impact of external factors. Therefore, similar classifications would indicate that the i-Ready Literacy Task Passage Reading Fluency is an appropriate measure.
Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
i-Ready Diagnostic for Reading scale scores are linear transformations of logit values. Logits are measurement units for logarithmic probability models such as the Rasch model. Logits are used to determine both student ability and item difficulty. Within the Rasch model, if the ability matches the item difficulty, then the person has a .50 chance of answering the item correctly. For the Diagnostic, student ability and item logit values generally range from around -7 to 6. When the i-Ready vertical scale was updated in August 2016, the equipercentile equating method was applied to the updated logit scale. The appropriate scaling constant and slope were applied to the logit value to convert to scale score values between 100 and 800 (Kolen & Brennan, 2014). This scaling is accomplished by converting the estimated logit values with the following equation: Scale Value = 499.38 + 37.81 × Logit Value. Once this conversion is made, floor and ceiling values are imposed to keep the scores within the 100–800 scale range. This is achieved by simply recoding all values below 100 up to 100 and all values above 800 down to 800. The scale score range, mean, and standard deviation on the updated scale are either exactly the same as (range) or very similar to (mean and standard deviation) those from the scale prior to the August 2016 scale update, which generally allows year-over-year comparisons of i-Ready scale scores. i-Ready Literacy Task Passage Reading Fluency scores (number of words read correctly within one minute) are aligned to one of three performance levels (Below Level, On Level, or Above Level) based on established cut scores. These levels are based on the nationally recognized Hasbrouck and Tindal (2017) norms for oral reading fluency. The Below Level placement is divided into three ranges based on percentiles: Below (0-10th percentile), Below (11-24th percentile), and Below (25-49th percentile). Classification analyses were conducted based on dichotomizing scores for these two measures. In August 2024, national norms for the Diagnostic were released. In alignment with NCII’s identification of students at risk, the overall scale score associated with the 20th percentile was used to group students into one of two groups. Using these cut scores, students were classified as at risk if they scored below the cut score on the Diagnostic for the given testing window, or not at risk if they scored at or above the cut. The data for i-Ready Literacy Task Passage Reading Fluency were dichotomized by assigning students in the Below (0-10th percentile) and Below (11-24th percentile) ranges to the at-risk group and students in the Below (25-49th percentile), On Level, or Above Level placements to a low- to no-risk group. In general, the lower bound cut score of the i-Ready Literacy Task Passage Reading Fluency Below (25-49th percentile) placement level should align with the cut score delineating the 20th percentile on the Diagnostic overall scale. Therefore, students scoring in the Below (0-10th percentile) and Below (11-24th percentile) ranges for i-Ready Literacy Task Passage Reading Fluency are likely to score below the 20th percentile of the Diagnostic overall scale. Hasbrouck, J., & Tindal, G. (2017). An update to compiled ORF norms (Technical Report No. 1702). Eugene, OR: Behavioral Research and Teaching, University of Oregon. Kolen, M. J., & Brennan, R. L. (2014). Test equating, scaling, and linking. New York, NY: Springer.
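As a minimal sketch of the conversion and dichotomization steps described above, the snippet below applies the stated logit-to-scale formula with floor/ceiling recoding and the two risk-grouping rules. The function names and the example logit and cut-score values are hypothetical; the 20th-percentile cut scores actually used appear in the classification accuracy table below.

```python
def logit_to_scale(logit: float) -> int:
    """Convert a Rasch logit estimate to the i-Ready 100-800 scale using the
    conversion described above (499.38 + 37.81 * logit), then recode values
    below 100 up to 100 and values above 800 down to 800."""
    scale = 499.38 + 37.81 * logit
    return int(round(min(max(scale, 100.0), 800.0)))

def diagnostic_at_risk(scale_score: float, cut_score_20th_pct: float) -> bool:
    """At risk when the Diagnostic overall scale score falls below the scale
    score associated with the 20th national percentile."""
    return scale_score < cut_score_20th_pct

def prf_at_risk(placement: str) -> bool:
    """PRF placements Below (0-10th) and Below (11-24th) form the at-risk group;
    Below (25-49th), On Level, and Above Level form the low- to no-risk group."""
    return placement in {"Below (0-10th percentile)", "Below (11-24th percentile)"}

# Hypothetical example: a logit of -1.3 converts to roughly 450 on the i-Ready scale.
print(logit_to_scale(-1.3))                        # 450
print(diagnostic_at_risk(440, 450))                # True (below the example cut score)
print(prf_at_risk("Below (11-24th percentile)"))   # True
```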
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
No
If yes, please describe the intervention, what children received the intervention, and how they were chosen.

Cross-Validation

Has a cross-validation study been conducted?
No
If yes,
Select time of year.
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
Do the cross-validation analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
If yes, please describe the intervention, what children received the intervention, and how they were chosen.

Classification Accuracy - Spring

Evidence Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6
Criterion measure i-Ready Diagnostic for Reading overall score i-Ready Diagnostic for Reading overall score i-Ready Diagnostic for Reading overall score i-Ready Diagnostic for Reading overall score i-Ready Diagnostic for Reading overall score i-Ready Diagnostic for Reading overall score
Cut Points - Percentile rank on criterion measure 20 20 20 20 20 20
Cut Points - Performance score on criterion measure 408 450 475 504 523 538
Cut Points - Corresponding performance score (numeric) on screener measure 34 72 91 105 119 121
Classification Data - True Positive (a) 6536 7005 6341 1858 1822 1048
Classification Data - False Positive (b) 6399 5388 6734 722 1048 334
Classification Data - False Negative (c) 890 607 380 458 395 232
Classification Data - True Negative (d) 23551 23485 16893 3410 2711 1081
Area Under the Curve (AUC) 0.91 0.95 0.94 0.90 0.86 0.88
AUC Estimate’s 95% Confidence Interval: Lower Bound 0.91 0.95 0.94 0.89 0.85 0.87
AUC Estimate’s 95% Confidence Interval: Upper Bound 0.92 0.95 0.94 0.91 0.87 0.90
Statistics Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6
Base Rate 0.20 0.21 0.22 0.36 0.37 0.47
Overall Classification Rate 0.80 0.84 0.77 0.82 0.76 0.79
Sensitivity 0.88 0.92 0.94 0.80 0.82 0.82
Specificity 0.79 0.81 0.71 0.83 0.72 0.76
False Positive Rate 0.21 0.19 0.29 0.17 0.28 0.24
False Negative Rate 0.12 0.08 0.06 0.20 0.18 0.18
Positive Predictive Power 0.51 0.57 0.48 0.72 0.63 0.76
Negative Predictive Power 0.96 0.97 0.98 0.88 0.87 0.82
Sample Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6
Date Spring 2023 screening and criterion Spring 2023 screening and criterion Spring 2023 screening and criterion Spring 2023 screening and criterion Spring 2023 screening and criterion Spring 2023 screening and criterion
Sample Size 37376 36485 30348 6448 5976 2695
Geographic Representation
Grade 1: East North Central (IL, MI, OH, WI); East South Central (AL, KY, TN); Middle Atlantic (NJ, NY); Mountain (AZ, CO); New England (CT, MA, VT); Pacific (CA, HI, OR, WA); South Atlantic (VA, WV); West North Central (KS, MO, SD)
Grade 2: East North Central (IL, MI, OH, WI); East South Central (AL, TN); Middle Atlantic (NJ, NY); Mountain (AZ, CO); New England (MA, VT); Pacific (CA, HI, OR, WA); South Atlantic (VA, WV); West North Central (KS, MO, SD)
Grade 3: East North Central (IL, IN, MI, OH, WI); East South Central (AL, TN); Middle Atlantic (NJ, NY, PA); Mountain (AZ, CO); New England (MA, VT); Pacific (CA, HI, OR, WA); South Atlantic (NC, VA, WV); West North Central (KS, MO); West South Central (LA)
Grade 4: East North Central (IL, OH); East South Central (TN); Middle Atlantic (NJ, NY); Mountain (CO, NV); New England (MA); Pacific (CA, HI, OR, WA); South Atlantic (FL, VA, WV); West North Central (KS, MO); West South Central (LA)
Grade 5: East North Central (IL, MI, OH); East South Central (TN); Middle Atlantic (NY); Mountain (AZ, CO); New England (MA); Pacific (CA, HI, OR, WA); South Atlantic (FL, VA, WV); West North Central (KS, MO); West South Central (LA)
Grade 6: East North Central (IL, MI, OH); East South Central (TN); New England (MA); Pacific (HI, OR, WA); South Atlantic (VA); West North Central (KS, MO)
Male            
Female            
Other            
Gender Unknown            
White, Non-Hispanic            
Black, Non-Hispanic            
Hispanic            
Asian/Pacific Islander            
American Indian/Alaska Native            
Other            
Race / Ethnicity Unknown            
Low SES            
IEP or diagnosed disability            
English Language Learner            
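The summary statistics in the table above follow directly from the true/false positive and negative counts. The sketch below reproduces them for the grade 1 spring column; it is an illustrative computation based on the standard 2x2 definitions, not vendor analysis code.

```python
def classification_stats(a: int, b: int, c: int, d: int) -> dict:
    """Summary statistics from a 2x2 classification table:
    a = true positives, b = false positives, c = false negatives, d = true negatives."""
    n = a + b + c + d
    return {
        "base_rate": (a + c) / n,
        "overall_classification_rate": (a + d) / n,
        "sensitivity": a / (a + c),
        "specificity": d / (b + d),
        "false_positive_rate": b / (b + d),
        "false_negative_rate": c / (a + c),
        "positive_predictive_power": a / (a + b),
        "negative_predictive_power": d / (c + d),
    }

# Grade 1 spring counts from the table above.
stats = classification_stats(a=6536, b=6399, c=890, d=23551)
for name, value in stats.items():
    print(f"{name}: {value:.2f}")   # matches the reported 0.20, 0.80, 0.88, 0.79, ...
```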

Reliability

Grade Grade 1
Grade 2
Grade 3
Grade 4
Grade 5
Grade 6
Rating Convincing evidence Convincing evidence Convincing evidence Convincing evidence Convincing evidence Convincing evidence
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
*Offer a justification for each type of reliability reported, given the type and purpose of the tool.
We provide three types of reliability evidence to support the Passage Reading Fluency forms. The first method, coefficient alpha (Cronbach’s alpha), assesses internal consistency. This method is often used to demonstrate internal consistency of items in educational tests. For this measure, the items represent each word on the form, and the maximum possible score is the total number of words on the form. For each item or word, a student’s correct or incorrect response, along with their overall score, is used in the calculation. Since Literacy Tasks are timed assessments and the formula for coefficient alpha does not account for response time with respect to accuracy, caution is recommended when interpreting the coefficients. The second method, concurrent alternate form reliability, compares the similarity in scores across two forms. For each form, the total number of words read correctly within one minute is the overall score. The purpose of the concurrent alternate form reliability analysis is to assess the consistency or stability of student scores obtained from two forms administered to the same student on the same day. Consistency in scores across the forms is important since the forms are developed from the same content requirements and to be of similar difficulty. The third method, delayed alternate form reliability, compares the similarity in scores across two forms administered during different testing windows. For each form, the total number of words read correctly within one minute is the overall score. The purpose of the delayed alternate form reliability analysis is to assess the consistency of student scores obtained from two forms administered at different points in time. This is similar to a test-retest analysis; however, the saliency of the words in the forms precludes the use of the same form during a second testing window. Consistency in scores across the forms is important since the forms are developed from the same content requirements and to be of similar difficulty.
*Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
The samples for the reliability analyses were distinct for each analysis. The sample for the coefficient alpha analyses consisted of students testing in fall 2024. These analyses did not rely on a special study for data collection; they used naturally occurring data that arose from the ordinary administration of Passage Reading Fluency in districts and schools that chose to administer it. This sample reflected all students taking Passage Reading Fluency in grade 1 who were scored through the digital administration feature. Approximately 45,938 students from public and private schools across 124 districts in 23 states across all regions were represented in our sample from grades 1-6. Concurrent alternate form reliability study data were collected through special studies during school years 2020–2021 through 2024–2025. Concurrent alternate form reliability depends on recruiting schools to participate and administer an additional form to each student. The target sample size per task was one hundred students. Educators voluntarily participating in the study were instructed to administer two forms to each student, with the alternate form administered immediately following the first form on the same day. For grades 1-4, the samples were representative of the general population, with 28 districts from 12 states representing all regions. For grades 5-6, the samples were representative of the general population, with four districts from four states representing all regions. Delayed alternate form reliability evaluates the consistency of student scores on the same task in adjacent administration windows, such as fall compared to winter or winter compared to spring. These analyses did not rely on a special study for data collection; naturally occurring data that arose from the ordinary administration of Passage Reading Fluency during fall and winter testing windows in districts and schools that chose to administer them during the 2022–2023 and 2023–2024 test administration windows were leveraged. Typically, the form administered for each task in fall is the first form, the second form is administered in winter, and the third form in spring. Delayed alternate form reliability requires matching students across administration windows who took the same task (different form) in more than one administration window within a school year. Because some students are not administered the same task multiple times, the data available to analyze may be limited. For grade 1, the delayed alternate form reliability correlations compared winter scores to spring scores for 2023–2024. Approximately 41,000 students from public and private schools across 184 districts in 24 states across all regions were represented in our sample. For grades 2-6, the delayed alternate form reliability correlations compared fall scores to spring scores for 2023–2024. Approximately 150,000 students from public and private schools across 345 districts in 30 states across all regions were represented in the sample.
*Describe the analysis procedures for each reported type of reliability.
For Passage Reading Fluency, the items represent each word on the form, and the maximum possible score is the total number of words in the passage. For each item or word, a student’s correct or incorrect response, along with their overall score, is used in the calculation. For each form, coefficient alpha is derived from the item-total correlation for each item, the average covariance between items, and the average total variance. Since Literacy Tasks are timed assessments and the formula for coefficient alpha does not account for response time with respect to accuracy, caution is recommended when interpreting the coefficients. Pearson correlations were calculated to determine concurrent alternate form reliability and delayed alternate form reliability of the Passage Reading Fluency task, as the Pearson correlation provides a measure of the direction and strength of the relationship between two variables, in this case the two forms. Because the task is used to establish performance benchmarks three times during the year, establishing the consistency with which forms measure the same construct is important. The results reported include Cronbach’s alpha, concurrent alternate form reliability, and delayed alternate form reliability coefficients with lower and upper 95% confidence intervals.
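As a rough illustration of the coefficient alpha computation described above, the sketch below applies the standard Cronbach’s alpha formula to a students-by-words matrix of 0/1 scores. This is a generic illustration, not the vendor’s analysis code, and the toy data are hypothetical.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Coefficient alpha for a students-by-items matrix of 0/1 word scores
    (1 = word read correctly): k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy example: 5 students x 6 words (hypothetical data, not study data).
scores = np.array([
    [1, 1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 0],
])
print(round(cronbach_alpha(scores), 3))
```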

*In the table(s) below, report the results of the reliability analyses described above (e.g., internal consistency or inter-rater reliability coefficients).

Type of Subgroup Informant Age / Grade Test or Criterion n Median Coefficient 95% Confidence Interval
Lower Bound
95% Confidence Interval
Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
No
Provide citations for additional published studies.
Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
No

If yes, fill in data for each subgroup with disaggregated reliability data.

Type of Subgroup Informant Age / Grade Test or Criterion n Median Coefficient 95% Confidence Interval
Lower Bound
95% Confidence Interval
Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
No
Provide citations for additional published studies.

Validity

Grade Grade 1
Grade 2
Grade 3
Grade 4
Grade 5
Grade 6
Rating Convincing evidence Convincing evidence Convincing evidence Convincing evidence Convincing evidence Convincing evidence
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
*Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
Passage reading fluency is the ability to read text accurately, at an appropriate rate, and with suitable expression (prosody). Research shows that students who read text fluently also have better reading comprehension. Fluent readers are better able to focus on constructing meaning from text because they do not need to use their working memory for decoding and word recognition. Establishing validity for a measurement instrument requires accumulating evidence to support the inferences made from the information provided by the instrument. Thus, validity is not considered a property of an assessment but rather the collection of evidence that supports the uses of its scores (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2014). Of the five categories of validity evidence, we provide concurrent and predictive analyses as evidence based on relationships with other variables. One source of evidence is the Dynamic Indicators of Basic Early Literacy Skills, 8th Edition (DIBELS 8) Oral Reading Fluency. We selected DIBELS 8 as the external measure because it is commonly used in United States elementary schools as a universal literacy screener. DIBELS 8 purports to assess component skills involved in reading; more specifically, “DIBELS 8 subtests were developed and researched as indicators of risk and progress in overall reading, as well as risk for dyslexia and other reading difficulties” (DIBELS, 2023). Concurrent analyses were conducted comparing i-Ready Literacy Task Passage Reading Fluency to DIBELS 8 Oral Reading Fluency and to the DIBELS 8 Composite Score. The second source of evidence is the i-Ready Diagnostic, including the overall score and relevant domain scores. The i-Ready Diagnostic for Reading (referred to as Diagnostic) is a valid and reliable tool aligned to rigorous state standards across the following domains: Phonological Awareness, Phonics, High-Frequency Words, Vocabulary, Comprehension of Informational Text, and Comprehension of Literature. Although both the Diagnostic and the i-Ready Literacy Tasks are provided by Curriculum Associates, the method variance and lack of item overlap are consistent with the TRC requirements for two assessments from the same vendor establishing validity evidence. While both the i-Ready Diagnostic and Passage Reading Fluency tasks are available within the i-Ready platform, they are completely separate assessments. The Diagnostic is a computer-adaptive assessment that administers on-grade and off-grade level items targeted to students’ interim proficiency. The Diagnostic scores and placement levels are modeled through item response theory, unlike Passage Reading Fluency, which is based on classical test theory. There is no overlap between items. The Diagnostic passages and items are developed to different content development standards compared to the passages for Passage Reading Fluency. There is no overlap between passages or items. Separate samples and criteria established the validity and reliability evidence for the Diagnostic compared to the validity and reliability evidence provided for Passage Reading Fluency. Both assessments are typically administered three times throughout the academic year (fall, winter, and spring). Student performance in fall on the Diagnostic provides a baseline for students’ current reading performance and is a good predictor of student performance at the end of the year.
The overall score on the Diagnostic is highly correlated with measures of reading comprehension; therefore, it was used as an external measure to demonstrate validity evidence for the Passage Reading Fluency forms. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (Eds.). (2014). Standards for educational and psychological testing. University of Oregon (2023). 8th Edition of Dynamic Indicators of Basic Early Literacy Skills (DIBELS®): Administration and Scoring Guide. Eugene, OR: University of Oregon. Available: https://dibels.uoregon.edu.
*Describe the sample(s), including size and characteristics, for each validity analysis conducted.
For grades 1–3, i-Ready Literacy Task Passage Reading Fluency and DIBELS 8 Oral Reading Fluency data were collected during a special study conducted in winter, spring, and fall 2023, whereby trained administrators at volunteer schools administered i-Ready Literacy Tasks and DIBELS 8 tasks at the same time to a sample of students. In addition, students at each school took the i-Ready Diagnostic in each administration window. Other than grade level, we did not request any personally identifiable student information or demographic characteristics about the students for whom the participating schools submitted data. Approximately 800 students in grades 1–3 participated, from four states representing the Northeast, West, Midwest, and South regions, providing representative samples. For grades 4–6, the concurrent analyses consisted of fall Passage Reading Fluency and fall i-Ready Diagnostic for Reading overall scores. Naturally occurring data that arose from the ordinary administration of Passage Reading Fluency in districts and schools that chose to administer them during academic year 2023–2024 were leveraged. In the fall, the Diagnostic and Literacy Tasks were taken within a short period of each other. For grades 4–6, students from public and private schools across 322 districts in 33 states across all regions were represented in our fall sample, with grade-level samples ranging from approximately 7,000 to 28,000. For grades 1–6, the predictive analyses relating a prior i-Ready Literacy Task Passage Reading Fluency administration to i-Ready Diagnostic for Reading overall scores did not rely on a special study for data collection. Naturally occurring data that arose from the ordinary administration of Passage Reading Fluency during the 2023–2024 test administration windows, along with administration of the i-Ready Diagnostic for Reading in the spring, were leveraged. For grade 1, winter Passage Reading Fluency scores for approximately 48,000 students from public and private schools across 252 districts in 26 states across all regions were represented in our winter-to-spring sample. For grades 2–6, students from public and private schools across 498 districts in 36 states across all regions were represented in our fall-to-spring sample, with grade-level samples ranging from approximately 6,500 to 71,000.
*Describe the analysis procedures for each reported type of validity.
For grades 1–3, concurrent analyses required a representative sample with i-Ready Literacy Task Passage Reading Fluency scores and DIBELS 8 Oral Reading Fluency scores. Pearson correlations and the lower and upper 95% confidence intervals were calculated. Given that DIBELS 8 Oral Reading Fluency measures similar content to the i-Ready Literacy Task Passage Reading Fluency task, a strong, positive correlation is expected. The concurrent analyses are expected to be higher than a predictive analysis since the Diagnostic was administered at a later time. For grades 4–6, concurrent analyses required representative samples of i-Ready Literacy Task Passage Reading Fluency scores for fall and i-Ready Diagnostic for Reading overall scores for fall. Pearson correlations and the lower and upper 95% confidence intervals were calculated. The concurrent analyses are expected to be higher than a predictive analysis since the Diagnostic was administered at a later time. For grades 1–6, predictive analyses required a representative sample with i-Ready Literacy Task Passage Reading Fluency scores for fall and i-Ready Diagnostic for Reading overall scores for winter and spring. Pearson correlations and the lower and upper 95% confidence intervals were calculated. Because the overall score assesses students across multiple domains, a moderate to moderately high correlation coefficient is expected. The results reported include concurrent and predictive analyses through Pearson correlations with lower and upper 95% confidence intervals.
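As an illustration of the correlation analyses described above, the sketch below computes a Pearson correlation and an approximate 95% confidence interval. The Fisher z transformation used for the interval is an assumption on our part (the report does not state how confidence bounds were computed), and the sample values are hypothetical.

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation between two score vectors
    (e.g., PRF WCPM scores and Diagnostic overall scale scores)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def r_confidence_interval(r: float, n: int, z_crit: float = 1.96) -> tuple[float, float]:
    """Approximate 95% CI for a correlation via the Fisher z transformation
    (an assumption here; the report does not state its CI method)."""
    z = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z = arctanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Hypothetical example: an observed r of 0.78 on a sample of 800 students.
print(r_confidence_interval(0.78, 800))
```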

*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.

Type of Subgroup Informant Age / Grade Test or Criterion n Median Coefficient 95% Confidence Interval
Lower Bound
95% Confidence Interval
Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
No
Provide citations for additional published studies.
Describe the degree to which the provided data support the validity of the tool.
Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
No

If yes, fill in data for each subgroup with disaggregated validity data.

Type of Subgroup Informant Age / Grade Test or Criterion n Median Coefficient 95% Confidence Interval
Lower Bound
95% Confidence Interval
Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
No
Provide citations for additional published studies.

Bias Analysis

Grade Grade 1
Grade 2
Grade 3
Grade 4
Grade 5
Grade 6
Rating Not Provided Not Provided Not Provided Not Provided Not Provided Not Provided
Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
No
If yes,
a. Describe the method used to determine the presence or absence of bias:
b. Describe the subgroups for which bias analyses were conducted:
c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.

Data Collection Practices

Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.