TPRI Early Reading Assessment
Reading
Summary
The TPRI Early Reading Assessment consists of a Screening Section and an Inventory Section. The Screening Section is designed to let classroom teachers quickly and effectively identify students who may be at risk of reading difficulty. The Inventory Section is the diagnostic section of the assessment and is designed to give teachers information about the specific instructional needs of their students; it is briefly described in the “Notes” area of Section 1 of the protocol. The K-3 screens are based on empirically derived predictors of reading success: measures of phonological awareness, letter-sound correspondence, and word reading that predict reading outcomes involving word recognition and comprehension skills. In Kindergarten and Grade 1, the Screening Section is administered at the beginning of the year (BOY) and again at the end of the year (EOY); in Grades 2 and 3, it is administered only at BOY. In Kindergarten at BOY and EOY, and in Grade 1 at BOY, the Screening Section identifies students through the administration of multiple tasks. Students score either Developed (D) or Still Developing (SD) on each screening task, but it is the overall screening score of D or SD that indicates whether a student is identified as at risk or likely not at risk of reading difficulty.
- Where to Obtain:
- Children's Learning Institute (TPRI materials are published and distributed by Brookes Publishing)
- P.O. Box 10624, Baltimore, MD 21285-0624
- 1-800-638-3775
- www.brookespublishing.com
- Initial Cost:
- Contact vendor for pricing details.
- Replacement Cost:
- Contact vendor for pricing details.
- Included in Cost:
- The TPRI Benchmarking Kit includes both the Screening Section and the Inventory Section (diagnostic section) of the assessment. The kit includes the TPRI Teacher’s Guide, Reading Comprehension Story Booklet, Task Cards, and a stopwatch.
- The TPRI Record Sheets Package includes 25 Student Record Sheets and 3 Class Record Sheets for recording individual and whole-class scores at the beginning, middle, and end of the year.
- The TPRI Intervention Activities Guide is sold separately as an optional product to provide additional support for teachers.
- In addition to the TPRI products published by Brookes Publishing, the TPRI is also available for purchase from two companies (“Liberty Source” and “Wireless Generation”) that offer administration via handheld devices (e.g., a Palm Pilot or iPod touch) and web-based data reporting, review, and analysis. The TPRI is offered on Tango Software by Liberty Source and on mClass Software by Wireless Generation.
- Training Requirements:
- 1-4 hours of training
- Qualified Administrators:
- Professional
- Access to Technical Support:
- TPRI.org, TPRI development team, Brookes Publishing, Wireless Generation, Tango Software
- Assessment Format:
- One-to-one
- Scoring Time:
- Scoring is automatic OR
- 1 minute per student
- Scores Generated:
- Raw score
- Administration Time:
- 3 minutes per student
- Scoring Method:
- Manually (by hand)
- Automatically (computer-scored)
- Technology Requirements:
-
- Accommodations:
- Guidelines and Accommodations for Special Needs Students: The TPRI Screening and Inventory should be administered to all K-G3 special education students at their grade-level placement for reporting purposes. Accommodations for students with special needs can be used in administering the TPRI. Decisions on accommodations should be made on an individual basis, taking into consideration the needs of the student and whether the student routinely receives the accommodation during classroom instruction. If the student has an Individualized Educational Plan (IEP) or an instructional plan developed by a Section 504 committee, it may assist you in deciding which accommodations are appropriate. The following accommodations are acceptable: • Instructions can be signed to a student with a hearing impairment. • A student can place a colored transparency over any material presented. • A student can use a place marker. • A student can use any other accommodation that is a routine part of their reading, writing or spelling instruction.
Descriptive Information
- Please provide a description of your tool:
- The TPRI Early Reading Assessment consists of a Screening Section and an Inventory Section. The Screening Section is designed to let classroom teachers quickly and effectively identify students who may be at risk of reading difficulty. The Inventory Section is the diagnostic section of the assessment and is designed to give teachers information about the specific instructional needs of their students; it is briefly described in the “Notes” area of Section 1 of the protocol. The K-3 screens are based on empirically derived predictors of reading success: measures of phonological awareness, letter-sound correspondence, and word reading that predict reading outcomes involving word recognition and comprehension skills. In Kindergarten and Grade 1, the Screening Section is administered at the beginning of the year (BOY) and again at the end of the year (EOY); in Grades 2 and 3, it is administered only at BOY. In Kindergarten at BOY and EOY, and in Grade 1 at BOY, the Screening Section identifies students through the administration of multiple tasks. Students score either Developed (D) or Still Developing (SD) on each screening task, but it is the overall screening score of D or SD that indicates whether a student is identified as at risk or likely not at risk of reading difficulty.
ACADEMIC ONLY: What skills does the tool screen?
- Please describe specific domain, skills or subtests:
- BEHAVIOR ONLY: Which category of behaviors does your tool target?
-
- BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each sub-domain or sub-construct.
Acquisition and Cost Information
Administration
- Are norms available?
- No
- Are benchmarks available?
- Yes
- If yes, how many benchmarks per year?
- 3
- If yes, for which months are benchmarks available?
- September, January, April
- BEHAVIOR ONLY: Can students be rated concurrently by one administrator?
- If yes, how many students can be rated concurrently?
Training & Scoring
Training
- Is training for the administrator required?
- Yes
- Describe the time required for administrator training, if applicable:
- 1-4 hours of training
- Please describe the minimum qualifications an administrator must possess.
- Professional
- Are training manuals and materials available?
- Yes
- Are training manuals/materials field-tested?
- No
- Are training manuals/materials included in cost of tools?
- No
- If No, please describe training costs:
- Training and training materials are considered optional and are purchased separately.
- Can users obtain ongoing professional and technical support?
- Yes
- If Yes, please describe how users can obtain support:
- TPRI.org, TPRI development team, Brookes Publishing, Wireless Generation, Tango Software
Scoring
- Do you provide basis for calculating performance level scores?
- Does your tool include decision rules?
- If yes, please describe.
- Can you provide evidence in support of multiple decision rules?
- No
- If yes, please describe.
- Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
- Students score 1 for a correct response on an item and 0 for an incorrect response. At the end of the task, the responses are tallied, producing a raw score for the task. A range of raw scores maps to a descriptive score of Developed (D) or Still Developing (SD). Students who score D on the Screening Section are highly likely not at risk for reading difficulty. Students who score SD may fail to reach grade-level performance in reading if instructional intervention is not provided. When students are unsuccessful with the Screening Section tasks, this signals a need to gather additional assessment data to determine whether they require intervention to progress.
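To make the scoring flow concrete, here is a minimal sketch in Python. The function and variable names are illustrative, not part of any TPRI software; the Kindergarten BOY cut scores (6/10 letter sounds, 4/8 blending) and the both-tasks decision rule are taken from the classification-accuracy descriptions later in this document.

```python
# Minimal sketch of TPRI-style screening-task scoring: each item response is
# recorded as 1 (correct) or 0 (incorrect), responses are tallied into a raw
# score, and a cut score maps the raw score to Developed (D) or
# Still Developing (SD). Names and structure are illustrative only.

def score_task(item_responses: list[int], cut_score: int) -> tuple[int, str]:
    """Return (raw_score, 'D' or 'SD') for one screening task."""
    raw = sum(item_responses)
    return raw, ("D" if raw >= cut_score else "SD")

def screen_kindergarten_boy(letter_sound: list[int], blending: list[int]) -> str:
    """Overall K BOY screen: per the description later in this document, a
    student must reach the cut on BOTH tasks (6/10 letter sounds AND 4/8
    blending) to score Developed overall."""
    _, ls_label = score_task(letter_sound, cut_score=6)
    _, bl_label = score_task(blending, cut_score=4)
    return "D" if ls_label == "D" and bl_label == "D" else "SD"

# Hypothetical student: 7/10 letter sounds but only 3/8 blending -> overall SD.
print(screen_kindergarten_boy([1, 1, 1, 0, 1, 1, 0, 1, 1, 0],
                              [1, 0, 1, 0, 0, 1, 0, 0]))  # prints: SD
```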
- Describe the tool’s approach to screening, samples (if applicable), and/or test format, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
- General Scoring Information:
1. Classroom teachers should administer the TPRI to their own students. If multiple teachers provide reading instruction for a student, the TPRI should be administered by the teacher most responsible for providing reading instruction.
2. If possible, teachers should administer the Screening Section to all students within a 1-week period.
3. The TPRI should be administered to only one student at a time.
4. The assessment environment should be adequately lighted, relatively quiet, and free from distractions.
5. The Teacher’s Guide should be used with every student assessed to ensure accurate and consistent administration. The materials for each task are listed at the top of the task in the Teacher’s Guide, along with directions on task administration. What teachers say to the student while testing always appears in bold print.
6. If a task includes Practice Items, teachers should always present all of them. Practice Items allow the student to gain a better understanding of what the task requires.
7. Instructions may be repeated as needed. Phonological awareness (PA) items may be repeated only in the case of noisy interference. Other assessment items may be repeated if the student requests it.
8. Teachers should record scores on the Student Record Sheet as the assessment is administered rather than waiting until they have finished. As the student responds to each item, teachers record 1 for a correct response or 0 for an incorrect response.
9. While administering the assessment, hints, clues, or other feedback about correct responses should not be provided. Teachers should be equally positive and encouraging with both correct and incorrect responses, praising effort rather than correct responses. Students should leave the administration feeling good about their performance.
For further information, please contact the vendor to obtain excerpts from the Teacher’s Guide detailing the administration procedures and instructions for each screening task at all grade levels.
Dialectal and Cultural Sensitivity: It is important to be sensitive to students’ dialectal, linguistic, and cultural diversity when administering the TPRI. When student and teacher do not share the same dialect, scoring reliability can be jeopardized. Teachers must be sensitive to a student’s dialect, accent, and speech peculiarities or impairments. Flexibility, professional judgment, and knowledge of students should always be used in scoring student responses. In general, it is better to err on the side of caution by marking an error when you are uncertain about how to score a response, whether the uncertainty relates to the student’s speech or to other concerns.
Technical Standards
Classification Accuracy & Cross-Validation Summary
Grade | Kindergarten | Grade 1 | Grade 2 | Grade 3 |
---|---|---|---|---|
Classification Accuracy Fall | | | | |
Classification Accuracy Winter | | | | |
Classification Accuracy Spring | | | | |
WJ-Broad
Classification Accuracy
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- WJ-Broad = the Woodcock-Johnson Broad Reading Cluster score. Risk was determined to be below the 20th percentile on the Broad Reading Cluster.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- Cut-points were achieved by deliberately and manually adjusting the equation (the relationship of selected screening tasks as predictors and outcomes) to establish the lowest possible false-positive error rate while keeping false-negative error rates below 10%. Cut-offs that produced the most desirable classification were selected.

Grade 2 Screen: The 2010 Grade 2 screen was revised based upon the existing TPRI second-grade screen as well as piloting of new items and measures. We employed the same logic and procedures as outlined for the Kindergarten and first-grade screens. The Grade 2 TPRI has a screen at the beginning of second grade consisting of a 12-item word reading task. Students who correctly read 9 of the 12 items are considered “Developed”; students who correctly read 8 or fewer are “Still Developing” on the screen. Of the 814 students for whom we had complete data, 727 were above the 20th percentile on the WJ Broad Reading and 87 were below. The beginning-of-year screen for second grade correctly identified 559 and misidentified 168 (false positives) of the 727 students above threshold. It correctly identified 77 and failed to identify 10 (false negatives) of the 87 students below threshold. The 11% false-negative rate is strong and comparable to prior TPRI screens. In our sample, the revised second-grade TPRI screen would have “missed” only 10 out of 814 students, using a 12-item word reading task that takes less than 3 minutes to administer.

Grade 3 Screen: The Grade 3 screen was not revised in the 2010 edition. The Grade 3 TPRI has a screen at the beginning of the year consisting of a 20-item word reading task. Students who correctly read 19 of the 20 items are considered “Developed”; students who correctly read 18 or fewer are considered “Still Developing” on the screen. Of the 739 students for whom we had complete data, 691 were above the 20th percentile on the WJ Broad Reading and 48 were below. The beginning-of-year screen for third grade correctly identified 494 and misidentified 197 (false positives) of the 691 students above threshold. It correctly identified 45 and failed to identify 3 (false negatives) of the 48 students below threshold. The 6% false-negative rate is strong. In our sample, the third-grade TPRI screen would have “missed” only 3 out of 739 students, using a 20-item word reading task that takes less than 3 minutes to administer.
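As a quick check on the arithmetic above, here is a small Python sketch that recomputes the Grade 2 error rates from the reported counts. The variable names are illustrative; the numbers come from the Grade 2 description above.

```python
# Reconstructing the Grade 2 BOY error rates from the reported counts.
tp, fp, fn, tn = 77, 168, 10, 559  # counts from the Grade 2 description above

at_risk_on_outcome = tp + fn        # 87 students below the 20th percentile
not_at_risk_on_outcome = tn + fp    # 727 students above it

false_negative_rate = fn / at_risk_on_outcome      # 10/87  ~= 0.11
false_positive_rate = fp / not_at_risk_on_outcome  # 168/727 ~= 0.23

print(f"FN rate: {false_negative_rate:.0%}, FP rate: {false_positive_rate:.0%}")
# prints: FN rate: 11%, FP rate: 23%
```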
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- No
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
Cross-Validation
- Has a cross-validation study been conducted?
- No
- If yes,
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
WJ-Broad (SCR 1 - BOY)
Classification Accuracy
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- WJ-Broad = the Woodcock-Johnson Broad Reading Cluster score. Risk was determined to be below the 20th percentile on the Broad Reading Cluster.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- The Kindergarten screen at beginning of year consists of two short tasks: a 10-item letter-sound identification task and an 8-item blending onset-rimes and phonemes task. To be “Developed” (not at risk), students must provide the correct letter sound for 6 of the 10 letters on the letter-sound identification task and blend 4 of the 8 words on the blending onset-rimes and phonemes task. Because each task on the Screening Section is evaluated separately in order to make a decision about risk status, it is not appropriate to simply add or combine the scores (neither raw scores nor z-scores) from the two separate screening tasks and then examine the combined ROC curve. Instead, we have provided a ROC curve for each screening task separately and presented the AUC and specificity information for each task individually. For this reason, these ROC curves may not be as telling as they are in other instances, namely the G1-EOY, G2-BOY, and G3-BOY screens, each of which consists of a single task. Cut-points were achieved by deliberately and manually adjusting the equation (the relationship of selected screening tasks as predictors and outcomes) to establish the lowest possible false-positive error rate while keeping false-negative error rates below 10%. Cut-offs that produced the most desirable classification were selected.

Kindergarten Screens: The 2010 Kindergarten screens were revised based upon the existing TPRI screens as well as testing of other measures for predictive utility. Performance at the beginning of the year on various screening tasks was compared against outcome measures administered in the late spring (end-of-year timeframe). We also screen students at the end of Kindergarten to help the teacher identify children who would benefit from administration of the Inventory in order to plan learning objectives for the summer and the following year. Predictors include measures of letter names, letter sounds, and phonological awareness. The first step in establishing the best set of predictors involved an examination of all possible combinations in predicting outcomes at the end of the year. To this end, a linear discriminant function analysis was conducted. We examined both the squared canonical correlation, an index of the strength of the relationship between the predictor and outcome variable(s), and the identification matrices resulting from predicting outcomes on a case-by-case basis. Variables were selected if they exhibited both (a) a high squared canonical correlation and (b) relatively low numbers of both false-positive and false-negative errors. In all instances, the prediction set that provided the best identifications using the fewest predictors was selected. Once a set of predictors was selected, a cut-point from the equation expressing the relationship of the predictors and outcomes was established. This cut-point was achieved by deliberately and manually adjusting the equation to establish the lowest possible false-positive error rate while keeping false-negative error rates below 10%. Cut-offs that produced the most desirable classification were selected. Only 6 of the 743 students who were assessed at both BOY and EOY were misclassified at BOY as not at risk. This represents a false negative, the more egregious type of error in an educational setting.

A false-positive misclassification represents students who were identified at BOY as being at risk but who were not considered at risk at end of year based upon the end-of-year outcome measures. This error is less egregious because it merely amounts to some additional assessment (i.e., gathering data from the Inventory tasks). False-negative errors, on the other hand, should be minimized in order to prevent failure to identify students who do show signs of struggle at the end of the year. The false-negative rate is the number of misclassifications (6) divided by the total number of students who were at risk on the outcome measure (94), or 6%. Only 8 of the 744 students who were assessed using both TPRI and outcome measures at EOY were misclassified as not at risk; the false-negative rate at end of year was 9%.

Grade 1 Screens: The 2010 Grade 1 screens were similarly revised based upon the existing TPRI screens as well as piloting of new items and measures. We employed the same logic and procedures as outlined above for the Kindergarten screens. The first-grade screen at beginning of year consists of three short tasks: a 10-item letter-sound identification task, an 8-item word reading task, and a 6-item blending phonemes task. The letter-sound identification task does not factor into the overall decision rule for a student to be considered “Developed” or “Still Developing” on the screen; it provides some carry-over from Kindergarten and gives first-grade teachers information about their students’ letter-sound identification abilities at the beginning of the year. To be “Developed,” students must correctly read 4 of the 8 words on the word reading task or blend 5 of the 6 words on the blending phonemes task. At the end of first grade, 129 students were below the 20th percentile on the Woodcock-Johnson Broad Reading cluster and 602 students scored above it. If we had applied the decision rule to data collected in the fall, we would have correctly identified 120 of the 129 students who ended up below the outcome criterion and 426 of the 602 who ended up above it. These decision criteria would have incorrectly identified 176 of the 602 who ended up above the outcome criterion as “Still Developing” (false positives) and would have failed to identify 9 of the 129 who fell below the outcome criterion (false negatives). The 7% false-negative rate is strong and comparable to prior TPRI screens. In our sample, the revised TPRI screens would have “missed” only 9 out of 731 students. The screen at the end of first grade performs similarly. The EOY screen is a 12-item word reading task. Students who correctly read 8 of the 12 items are considered “Developed”; students who correctly read 7 or fewer are considered “Still Developing.” Of the 735 students for whom we had complete data, 608 were above the 20th percentile on the WJ Broad Reading and 127 were below. The end-of-year screen for first grade correctly identified 466 and misidentified 142 (false positives) of the 608 students above threshold. It correctly identified 117 and failed to identify 10 (false negatives) of the 127 students below threshold. Again, the 8% false-negative rate is strong and comparable to prior TPRI screens. In our sample, the revised TPRI screens would have “missed” only 10 out of 735 students, using a 12-item word reading task that takes less than 3 minutes to administer.
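The cut-point procedure described above — push the false-positive rate as low as possible while holding the false-negative rate under 10% — can be sketched as a simple search. The sketch below is a hypothetical illustration of that constraint logic applied to a single raw score on made-up data; it is not the authors' actual analysis code, which adjusted cut-points on linear discriminant-function predictions rather than raw task scores.

```python
# Hypothetical sketch of the cut-point selection logic: among candidate cut
# scores, keep those with a false-negative rate below 10%, then pick the one
# with the fewest false positives.

def pick_cut_point(scores, at_risk, max_fn_rate=0.10):
    """scores: raw screener scores; at_risk: True if the student fell below
    the outcome criterion (e.g., 20th percentile on WJ Broad Reading)."""
    n_risk = sum(at_risk)
    best = None
    for cut in range(min(scores), max(scores) + 1):
        # Students scoring below `cut` are flagged "Still Developing" (SD).
        fn = sum(1 for s, r in zip(scores, at_risk) if r and s >= cut)
        fp = sum(1 for s, r in zip(scores, at_risk) if not r and s < cut)
        if fn / n_risk < max_fn_rate and (best is None or fp < best[1]):
            best = (cut, fp)
    return best  # (cut score, false positives at that cut), or None

# Toy data: 10 students' raw scores and outcome risk status (made up).
scores  = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
at_risk = [True, True, False, True, False, False, False, False, False, False]
print(pick_cut_point(scores, at_risk))  # prints: (6, 1)
```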
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- No
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
Cross-Validation
- Has a cross-validation study been conducted?
- No
- If yes,
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
WJ-Broad (SCR 2 - EOY)
Classification Accuracy
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- WJ-Broad = the Woodcock-Johnson Broad Reading Cluster score. Risk was determined to be below the 20th percentile on the Broad Reading Cluster.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- See the classification-analysis description under WJ-Broad (SCR 1 - BOY) above; the Kindergarten and Grade 1 analyses, cut-points, and classification results reported there, including the end-of-year (EOY) screens, apply to this criterion measure as well.
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- No
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
Cross-Validation
- Has a cross-validation study been conducted?
- No
- If yes,
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
WJ-Broad (SCR 2 - BOY)
Classification Accuracy
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- WJ-Broad = the Woodcock-Johnson Broad Reading Cluster score. Risk was determined to be below the 20th percentile on the Broad Reading Cluster.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- See the classification-analysis description under WJ-Broad (SCR 1 - BOY) above; the Kindergarten and Grade 1 analyses, cut-points, and classification results reported there, including the Grade 1 beginning-of-year (BOY) screen, apply to this criterion measure as well.
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- No
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
Cross-Validation
- Has a cross-validation study been conducted?
- No
- If yes,
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
Classification Accuracy - Fall
Evidence | Kindergarten | Grade 1 | Grade 2 | Grade 3 |
---|---|---|---|---|
Criterion measure | WJ-Broad (SCR 1 - BOY) | WJ-Broad (SCR 2 - BOY) | WJ-Broad | WJ-Broad |
Cut Points - Percentile rank on criterion measure | 20 | 20 | 20 | 20 |
Cut Points - Performance score on criterion measure | | | 8.00 | 18.00 |
Cut Points - Corresponding performance score (numeric) on screener measure | At least 6/10 letter-sound identification items correct and 4/8 items correct on blending onset-rimes/phonemes task | At least 4/8 items correct on word reading task OR 5/6 items correct on phoneme-blending task | | |
Classification Data - True Positive (a) | 88 | 120 | 77 | 45 |
Classification Data - False Positive (b) | 283 | 176 | 168 | 197 |
Classification Data - False Negative (c) | 6 | 9 | 10 | 3 |
Classification Data - True Negative (d) | 366 | 426 | 559 | 494 |
Area Under the Curve (AUC) | 0.98 | 0.96 | 0.95 | 0.91 |
AUC Estimate’s 95% Confidence Interval: Lower Bound | | | | |
AUC Estimate’s 95% Confidence Interval: Upper Bound | | | | |
Statistics | Kindergarten | Grade 1 | Grade 2 | Grade 3 |
---|---|---|---|---|
Base Rate | 0.13 | 0.18 | 0.11 | 0.06 |
Overall Classification Rate | 0.61 | 0.75 | 0.78 | 0.73 |
Sensitivity | 0.94 | 0.93 | 0.89 | 0.94 |
Specificity | 0.56 | 0.71 | 0.77 | 0.71 |
False Positive Rate | 0.44 | 0.29 | 0.23 | 0.29 |
False Negative Rate | 0.06 | 0.07 | 0.11 | 0.06 |
Positive Predictive Power | 0.24 | 0.41 | 0.31 | 0.19 |
Negative Predictive Power | 0.98 | 0.98 | 0.98 | 0.99 |
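Every statistic in the table above is a simple function of the 2×2 classification counts (a = true positives, b = false positives, c = false negatives, d = true negatives). A brief Python sketch of the definitions, using the Kindergarten Fall column as a worked check:

```python
# Deriving the Fall summary statistics from the 2x2 classification counts.
a, b, c, d = 88, 283, 6, 366  # Kindergarten column of the evidence table
n = a + b + c + d             # 743

stats = {
    "Base Rate": (a + c) / n,                    # 94/743  ~= 0.13
    "Overall Classification Rate": (a + d) / n,  # 454/743 ~= 0.61
    "Sensitivity": a / (a + c),                  # 88/94   ~= 0.94
    "Specificity": d / (b + d),                  # 366/649 ~= 0.56
    "False Positive Rate": b / (b + d),          # 283/649 ~= 0.44
    "False Negative Rate": c / (a + c),          # 6/94    ~= 0.06
    "Positive Predictive Power": a / (a + b),    # 88/371  ~= 0.24
    "Negative Predictive Power": d / (c + d),    # 366/372 ~= 0.98
}
for name, value in stats.items():
    print(f"{name}: {value:.2f}")  # matches the Kindergarten column above
```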
Sample | Kindergarten | Grade 1 | Grade 2 | Grade 3 |
---|---|---|---|---|
Date | 2008-2009 | 2008-2009 | 2008-2009 | 2008-2009 |
Sample Size | 743 | 731 | 814 | 739 |
Geographic Representation | West South Central (TX) | West South Central (TX) | West South Central (TX) | West South Central (TX) |
Male | 44.0% | 45.0% | 47.7% | |
Female | 45.9% | 46.9% | 49.6% | |
Other | | | | |
Gender Unknown | 10.1% | 8.1% | 2.7% | |
White, Non-Hispanic | 26.6% | 25.2% | 22.4% | |
Black, Non-Hispanic | 24.8% | 22.6% | 20.1% | |
Hispanic | 21.8% | 20.9% | 18.7% | |
Asian/Pacific Islander | | | | |
American Indian/Alaska Native | | | | |
Other | | | | |
Race / Ethnicity Unknown | 26.8% | 31.3% | 38.8% | |
Low SES | | | | |
IEP or diagnosed disability | | | | |
English Language Learner | | | | |
Classification Accuracy - Spring
Evidence | Kindergarten | Grade 1 |
---|---|---|
Criterion measure | WJ-Broad (SCR 2 - EOY) | WJ-Broad (SCR 2 - EOY) |
Cut Points - Percentile rank on criterion measure | 20 | 20 |
Cut Points - Performance score on criterion measure | | 7 |
Cut Points - Corresponding performance score (numeric) on screener measure | At least 8/10 letter-sound identification items correct and 6/8 items correct on blending onset-rimes/phonemes task | At least 8/12 items correct on word reading task |
Classification Data - True Positive (a) | 86 | 117 |
Classification Data - False Positive (b) | 252 | 142 |
Classification Data - False Negative (c) | 8 | 10 |
Classification Data - True Negative (d) | 398 | 466 |
Area Under the Curve (AUC) | 0.94 | 0.96 |
AUC Estimate’s 95% Confidence Interval: Lower Bound | | |
AUC Estimate’s 95% Confidence Interval: Upper Bound | | |
Statistics | Kindergarten | Grade 1 |
---|---|---|
Base Rate | 0.13 | 0.17 |
Overall Classification Rate | 0.65 | 0.79 |
Sensitivity | 0.91 | 0.92 |
Specificity | 0.61 | 0.77 |
False Positive Rate | 0.39 | 0.23 |
False Negative Rate | 0.09 | 0.08 |
Positive Predictive Power | 0.25 | 0.45 |
Negative Predictive Power | 0.98 | 0.98 |
Sample | Kindergarten | Grade 1 |
---|---|---|
Date | 2008-2009 | 2008-2009 |
Sample Size | 744 | 735 |
Geographic Representation | West South Central (TX) | West South Central (TX) |
Male | 44.0% | 44.8% |
Female | 45.8% | 46.7% |
Other | | |
Gender Unknown | 10.2% | 8.6% |
White, Non-Hispanic | 26.6% | 25.0% |
Black, Non-Hispanic | 24.7% | 22.4% |
Hispanic | 21.8% | 20.8% |
Asian/Pacific Islander | | |
American Indian/Alaska Native | | |
Other | | |
Race / Ethnicity Unknown | 26.9% | 31.7% |
Low SES | | |
IEP or diagnosed disability | | |
English Language Learner | | |
Reliability
Grade | Kindergarten | Grade 1 | Grade 2 | Grade 3 |
---|---|---|---|---|
Rating | d | d | d | d |
- *Offer a justification for each type of reliability reported, given the type and purpose of the tool.
- *Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
- *Describe the analysis procedures for each reported type of reliability.
*In the table(s) below, report the results of the reliability analyses described above (e.g., internal consistency or inter-rater reliability coefficients).
Type of Reliability | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- No
- Provide citations for additional published studies.
- Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
- Yes
If yes, fill in data for each subgroup with disaggregated reliability data.
Type of Reliability | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- No
- Provide citations for additional published studies.
Validity
Grade | Kindergarten | Grade 1 | Grade 2 | Grade 3 |
---|---|---|---|---|
Rating | | | | |
- *Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
- *Describe the sample(s), including size and characteristics, for each validity analysis conducted.
- *Describe the analysis procedures for each reported type of validity.
*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.
Type of Validity | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- No
- Provide citations for additional published studies.
- Describe the degree to which the provided data support the validity of the tool.
- Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
If yes, fill in data for each subgroup with disaggregated validity data.
Type of Validity | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- Provide citations for additional published studies.
Bias Analysis
Grade | Kindergarten | Grade 1 | Grade 2 | Grade 3 |
---|---|---|---|---|
Rating | No | No | No | No |
- Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
- No
- If yes,
- a. Describe the method used to determine the presence or absence of bias:
- b. Describe the subgroups for which bias analyses were conducted:
- c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.
Data Collection Practices
Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.