DIBELS 8th Edition
Composite

Summary

The DIBELS 8 Composite Score is a combination of scores on DIBELS 8 measures and provides an estimate of overall student literacy skill. Scores from all of the relevant subtests for a specific grade are weighted and combined to form a single Composite Score. A confirmatory factor analysis was used to determine the optimal weighting for each subtest score. The Composite Score is scaled so that 400 represents the mean at the middle-of-year assessment for each grade, with a standard deviation of 40.

Where to Obtain:
University of Oregon, Center on Teaching and Learning
support@dibels.uoregon.edu
5292 University of Oregon, Eugene, OR 97403
1-888-497-4290
https://dibels.uoregon.edu
Initial Cost:
Free
Replacement Cost:
Free
Included in Cost:
All materials required for administration are available for free download at https://dibels.uoregon.edu. Printed materials are also available at https://dibels.uoregon.edu/market for a cost of $53 to $91 for a classroom set of benchmark screening materials. The DIBELS Data System (DDS) is not required, but is available for online data entry, management, and reporting for a cost of $1.00 per student per year. A multi-year discount is currently available. The DDS is free-of-charge to schools in Oregon. For the most current pricing information see: https://dibels.uoregon.edu/help/pricing. Additional costs are associated with printing and, if also using the DIBELS Data System, with computer and internet access. Starting in the 2019-20 school year, tablet-based administration will be available from Amplify (https://www.amplify.com).
Training Requirements:
4-8 hours of training
Qualified Administrators:
Paraprofessional
Access to Technical Support:
Technical support is available from the DIBELS Data System at the University of Oregon, https://dibels.uoregon.edu (phone: 1-888-497-4290, email: support@dibels.uoregon.edu, hours of operation: 6:00am to 5:30pm Pacific Time, Monday through Friday).
Assessment Format:
  • Performance measure
Scoring Time:
  • Scoring is automatic OR
  • 5 minutes per student
Scores Generated:
  • Percentile score
  • Developmental benchmarks
  • Developmental cut points
  • Composite scores
  • Subscale/subtest scores
Administration Time:
  • 0 minutes per student
Scoring Method:
  • Manually (by hand)
  • Automatically (computer-scored)
Technology Requirements:
Accommodations:
DIBELS 8th Edition approved assessment accommodations involve minor changes to assessment procedures that are unlikely to change the meaning of the results and have been approved either by DIBELS developers or assessment professionals. They should be used only when:
  • An accurate score is unlikely to be obtained without the accommodation; and/or
  • The accommodation is specified in a student’s 504 plan or Individualized Education Program (IEP).
The accommodations approved for DIBELS 8th Edition are: quiet setting for testing; breaks in between measures; assistive technology (e.g., hearing aids, assistive listening devices, glasses); enlarged student materials; colored overlays, filters, or lighting adjustments; and marker or ruler for tracking.

Descriptive Information

Please provide a description of your tool:
The DIBELS 8 Composite Score is a combination of scores on DIBELS 8 measures and provides an estimate of overall student literacy skill. Scores from all of the relevant subtests for a specific grade are weighted and combined to form a single Composite Score. A confirmatory factor analysis was used to determine the optimal weighting for each subtest score. The Composite Score is scaled so that 400 represents the mean at the middle-of-year assessment for each grade, with a standard deviation of 40.
The tool is intended for use with the following grade(s).
not selected Preschool / Pre - kindergarten
selected Kindergarten
selected First grade
selected Second grade
selected Third grade
selected Fourth grade
selected Fifth grade
selected Sixth grade
selected Seventh grade
selected Eighth grade
not selected Ninth grade
not selected Tenth grade
not selected Eleventh grade
not selected Twelfth grade

The tool is intended for use with the following age(s).
not selected 0-4 years old
not selected 5 years old
not selected 6 years old
not selected 7 years old
not selected 8 years old
not selected 9 years old
not selected 10 years old
not selected 11 years old
not selected 12 years old
not selected 13 years old
not selected 14 years old
not selected 15 years old
not selected 16 years old
not selected 17 years old
not selected 18 years old

The tool is intended for use with the following student populations.
selected Students in general education
selected Students with disabilities
selected English language learners

ACADEMIC ONLY: What skills does the tool screen?

Reading
Phonological processing:
not selected RAN
not selected Memory
selected Awareness
selected Letter sound correspondence
selected Phonics
not selected Structural analysis

Word ID
selected Accuracy
selected Speed

Nonword
selected Accuracy
selected Speed

Spelling
not selected Accuracy
not selected Speed

Passage
selected Accuracy
selected Speed

Reading comprehension:
not selected Multiple choice questions
not selected Cloze
not selected Constructed Response
not selected Retell
selected Maze
not selected Sentence verification
not selected Other (please describe):


Listening comprehension:
not selected Multiple choice questions
not selected Cloze
not selected Constructed Response
not selected Retell
not selected Maze
not selected Sentence verification
not selected Vocabulary
not selected Expressive
not selected Receptive

Mathematics
Global Indicator of Math Competence
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Early Numeracy
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Concepts
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Computation
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Application
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Fractions/Decimals
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Algebra
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Geometry
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

not selected Other (please describe):

Please describe specific domain, skills or subtests:
BEHAVIOR ONLY: Which category of behaviors does your tool target?


BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each sub-domain or sub-construct.

Acquisition and Cost Information

Where to obtain:
Email Address
support@dibels.uoregon.edu
Address
5292 University of Oregon, Eugene, OR 97403
Phone Number
1-888-497-4290
Website
https://dibels.uoregon.edu
Initial cost for implementing program:
Cost
$0.00
Unit of cost
Replacement cost per unit for subsequent use:
Cost
$0.00
Unit of cost
Duration of license
Additional cost information:
Describe basic pricing plan and structure of the tool. Provide information on what is included in the published tool, as well as what is not included but required for implementation.
All materials required for administration are available for free download at https://dibels.uoregon.edu. Printed materials are also available at https://dibels.uoregon.edu/market for a cost of $53 to $91 for a classroom set of benchmark screening materials. The DIBELS Data System (DDS) is not required, but is available for online data entry, management, and reporting for a cost of $1.00 per student per year. A multi-year discount is currently available. The DDS is free-of-charge to schools in Oregon. For the most current pricing information see: https://dibels.uoregon.edu/help/pricing. Additional costs are associated with printing and, if also using the DIBELS Data System, with computer and internet access. Starting in the 2019-20 school year, tablet-based administration will be available from Amplify (https://www.amplify.com).
Provide information about special accommodations for students with disabilities.
DIBELS 8th Edition approved assessment accommodations involve minor changes to assessment procedures that are unlikely to change the meaning of the results and have been approved either by DIBELS developers or assessment professionals. They should be used only when:
  • An accurate score is unlikely to be obtained without the accommodation; and/or
  • The accommodation is specified in a student’s 504 plan or Individualized Education Program (IEP).
The accommodations approved for DIBELS 8th Edition are: quiet setting for testing; breaks in between measures; assistive technology (e.g., hearing aids, assistive listening devices, glasses); enlarged student materials; colored overlays, filters, or lighting adjustments; and marker or ruler for tracking.

Administration

BEHAVIOR ONLY: What type of administrator is your tool designed for?
not selected General education teacher
not selected Special education teacher
not selected Parent
not selected Child
not selected External observer
not selected Other
If other, please specify:

What is the administration setting?
not selected Direct observation
not selected Rating scale
not selected Checklist
selected Performance measure
not selected Questionnaire
not selected Direct: Computerized
not selected One-to-one
not selected Other
If other, please specify:

Does the tool require technology?
No

If yes, what technology is required to implement your tool? (Select all that apply)
not selected Computer or tablet
not selected Internet connection
not selected Other technology (please specify)

If your program requires additional technology not listed above, please describe the required technology and the extent to which it is combined with teacher small-group instruction/intervention:
Administering the measure does not require technology, but if users choose to use the DIBELS Data System for management and reporting of data, an internet-connected computer is required. Additionally, if schools choose to administer the DIBELS 8th Edition measures using a tablet, they should contact Amplify for technology requirements.

What is the administration context?
not selected Individual
not selected Small group   If small group, n=
not selected Large group   If large group, n=
not selected Computer-administered
selected Other
If other, please specify:
The Composite score is calculated based on performance on each subtest. If users choose to use the DIBELS Data System for management and reporting of data, the Composite score is calculated automatically. Otherwise, scores can be calculated using scoring worksheets.

What is the administration time?
Time in minutes
0
per (student/group/other unit)
student

Additional scoring time:
Time in minutes
5
per (student/group/other unit)
student

ACADEMIC ONLY: What are the discontinue rules?
not selected No discontinue rules provided
not selected Basals
not selected Ceilings
selected Other
If other, please specify:
Discontinue rules are specified for each subtest.


Are norms available?
Yes
Are benchmarks available?
Yes
If yes, how many benchmarks per year?
3
If yes, for which months are benchmarks available?
Benchmarks are available for the beginning, middle, and end of the school year. Beginning months are typically September, October, and November; middle months are December, January, and February; and end months are typically March, April, May, and June. Regardless of when the benchmark occurs, we recommend that all students be tested within a one-month window.
BEHAVIOR ONLY: Can students be rated concurrently by one administrator?
If yes, how many students can be rated concurrently?

Training & Scoring

Training

Is training for the administrator required?
Yes
Describe the time required for administrator training, if applicable:
4-8 hours of training
Please describe the minimum qualifications an administrator must possess.
Paraprofessional
not selected No minimum qualifications
Are training manuals and materials available?
Yes
Are training manuals/materials field-tested?
Yes
Are training manuals/materials included in cost of tools?
No
If No, please describe training costs:
Information about online training is available on the DIBELS Data System (https://dibels.uoregon.edu/training). Online training is free-of-charge for ‘early adopters’ (i.e., schools or districts that sign up for the next school year by a specified date in spring). For people not associated with the ‘early adopter’ program, the charge is $40 to $79 per person, depending on the number of people purchasing the training and whether an individual is associated with a DDS account.
Can users obtain ongoing professional and technical support?
Yes
If Yes, please describe how users can obtain support:
Technical support is available from the DIBELS Data System at the University of Oregon, https://dibels.uoregon.edu (phone: 1-888-497-4290, email: support@dibels.uoregon.edu, hours of operation: 6:00am to 5:30pm Pacific Time, Monday through Friday).

Scoring

How are scores calculated?
selected Manually (by hand)
selected Automatically (computer-scored)
not selected Other
If other, please specify:

Do you provide basis for calculating performance level scores?
Yes
What is the basis for calculating performance level and percentile scores?
not selected Age norms
selected Grade norms
not selected Classwide norms
not selected Schoolwide norms
not selected Stanines
not selected Normal curve equivalents

What types of performance level scores are available?
not selected Raw score
not selected Standard score
selected Percentile score
not selected Grade equivalents
not selected IRT-based score
not selected Age equivalents
not selected Stanines
not selected Normal curve equivalents
selected Developmental benchmarks
selected Developmental cut points
not selected Equated
not selected Probability
not selected Lexile score
not selected Error analysis
selected Composite scores
selected Subscale/subtest scores
not selected Other
If other, please specify:

Does your tool include decision rules?
Yes
If yes, please describe.
Two cut points are available for DIBELS 8th Edition Composite Scores to help educators determine where to allocate resources and how much intervention students may need. One cut point indicates that students are likely at risk for difficulty in learning to read. The other is a benchmark cut point that indicates whether students are likely to be on track. Students between the two cut points are considered to be somewhere between “at-risk” and “on track”.
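Read as a decision rule, the two cut points partition composite scores into three instructional-support bands. The following minimal Python sketch illustrates that logic; the function name and the example cut scores are hypothetical, since actual cut points vary by grade and season.

```python
# Minimal sketch of the two-cut-point decision rule (hypothetical cut values;
# actual DIBELS 8 cut points vary by grade and season).

def support_band(composite: int, at_risk_cut: int, benchmark_cut: int) -> str:
    """Map a composite score onto the three support bands described above."""
    if composite < at_risk_cut:
        return "likely at risk"                # below the at-risk cut point
    if composite >= benchmark_cut:
        return "likely on track"               # at or above the benchmark cut
    return "between at-risk and on track"      # between the two cut points

print(support_band(372, at_risk_cut=380, benchmark_cut=410))  # -> likely at risk
```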
Can you provide evidence in support of multiple decision rules?
Yes
If yes, please describe.
This application addresses the “at-risk” cut point. Information about benchmark cut points is available on the DIBELS Data System website https://dibels.uoregon.edu.
Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
Directions for calculating the DIBELS 8 Composite Score are available at https://dibels.uoregon.edu/docs/dibels_8_composite_score_calculation_guide_supplement_072020.pdf:
1. For each subtest raw score, multiply the student’s raw score by the weight listed in the table on page 6, rounding the result to the 100ths place. If a student does not have a subtest raw score due to the Discontinue or Gating Rules, use the constant from the table on page 5 for the missing subtest scores.
2. Sum the resulting weighted scores across all applicable subtests.
3. From that sum, subtract the mean for the appropriate grade from the table on page 6.
4. Divide the result by the standard deviation (SD) for the appropriate grade in the table on page 6 and round to the 100ths place.
5. Multiply the result by 40 and round to the ones place.
6. Add the scaling constant corresponding to the grade and season in which the student was tested from the table on page 6. The result is the Composite Score.
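As a worked illustration of the six steps above, here is a minimal Python sketch. Every number and subtest name in it is a hypothetical placeholder; the actual weights, missing-score constants, grade means, SDs, and scaling constants come from the tables on pages 5 and 6 of the linked guide.

```python
# Hypothetical illustration of the six-step DIBELS 8 composite calculation.
# All weights and constants below are placeholders, not the published values.

def composite_score(raw_scores, weights, missing_constants,
                    grade_mean, grade_sd, scaling_constant):
    """raw_scores maps subtest name -> raw score, or None if the subtest
    was skipped under the Discontinue or Gating Rules."""
    total = 0.0
    for subtest, weight in weights.items():
        raw = raw_scores.get(subtest)
        if raw is None:
            total += missing_constants[subtest]   # step 1, missing-score case
        else:
            total += round(raw * weight, 2)       # steps 1-2: weight, round, sum
    z = round((total - grade_mean) / grade_sd, 2) # steps 3-4
    return round(z * 40) + scaling_constant       # steps 5-6

# Hypothetical middle-of-year example:
print(composite_score(
    raw_scores={"NWF-CLS": 43, "WRF": 18, "ORF": None},
    weights={"NWF-CLS": 1.10, "WRF": 2.05, "ORF": 1.45},
    missing_constants={"NWF-CLS": 0.0, "WRF": 0.0, "ORF": 12.34},
    grade_mean=95.0, grade_sd=30.0, scaling_constant=400,
))  # -> 402
```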
Describe the tool’s approach to screening, samples (if applicable), and/or test format, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
The DIBELS 8 Composite Score represents a weighted combination of scores on all DIBELS 8 measures that are required for a particular grade and provides an estimate of overall student literacy skill. The Composite Score was developed based on data from a nationally representative sample of students in kindergarten through grade 8, using a confirmatory factor analysis (CFA) approach in which multiple theoretical one-factor reading models were constructed based on theories of literacy development and literacy assessment. Those models were then tested empirically to identify the model for each grade that best fit the data.

The reading factor models were built starting with a base model for each grade, in which all DIBELS 8 measures loaded on a common reading factor. The base model was then extended by modeling different types of covariances. In the reading factor models for grades K–3, the ORF (WRC) – ORF (ACC) and NWF (CLS) – NWF (WRC) covariances account for residuals arising when multiple scores are derived from the same subtest. The ORF – WRF covariance accounts for residuals associated with measures that share the task of reading real words, while the WRF – NWF (WRC) covariance accounts for residuals associated with measuring word blending. The ORF (WRC) – Maze covariance accounts for residuals associated with measuring reading comprehension.

The final model for each grade level was determined by comparing model fits. Fit was evaluated using the comparative fit index (CFI; Bentler, 1990; acceptable fit ≥ .95), root mean square error of approximation (RMSEA; Browne & Cudeck, 1993; acceptable fit ≤ .06), standardized root mean square residual (SRMR; Hu & Bentler, 1998; acceptable fit ≤ .10), Akaike information criterion (AIC; Burnham & Anderson; lower is better), and Bayesian information criterion (BIC; Burnham & Anderson; lower is better). Maximum likelihood was used to estimate the models. The resulting best-fitting reading factor model for grades K–3 included the available DIBELS 8 measures for each grade level and the NWF (CLS) – NWF (WRC) covariance. The best-fitting reading model for grades 4–8 included all the available DIBELS 8 measures but no covariances. Unstandardized factor loadings in the final reading models were all statistically significant.

We then used the “regression method” (Thurstone, 1935) to combine scores on the DIBELS 8 measures and compute the composite scores. The DIBELS 8 Composite Score is thus calculated as a sum of the weighted standardized observed values of each of the measures in the estimated latent reading factor, with a mean of zero and standard deviation of 1. The least squares regression method used is a multivariate procedure that accounts for the correlations among the observed variables as well as the correlations between the factors and between the factors and observed variables (DiStefano, Zhu, & Mîndrilă, 2009).
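To make the regression method concrete, the sketch below computes factor-score weights for a one-factor model as the inverse of the observed correlation matrix times the loading vector. The loadings and correlations are illustrative assumptions, not the published DIBELS 8 estimates.

```python
# Illustrative sketch of Thurstone's regression method for factor scores in a
# one-factor model: weights w = Sigma^{-1} * lambda (factor variance fixed at 1).
import numpy as np

lam = np.array([0.8, 0.7, 0.9])          # assumed standardized loadings
sigma = np.array([[1.00, 0.56, 0.72],    # assumed observed correlation matrix
                  [0.56, 1.00, 0.63],
                  [0.72, 0.63, 1.00]])

w = np.linalg.solve(sigma, lam)          # regression-method factor-score weights

z = np.array([-0.5, 0.2, 1.1])           # one student's standardized subtest scores
print(z @ w)                             # estimated latent reading score
```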

Technical Standards

Classification Accuracy & Cross-Validation Summary

Grade: Kindergarten; Grade 1; Grade 2; Grade 3; Grade 4; Grade 5; Grade 6; Grade 7; Grade 8
Classification Accuracy Fall: K, Partially convincing evidence; Grade 1, Partially convincing evidence; Grade 2, Convincing evidence; Grade 3, Partially convincing evidence; Grade 4, Partially convincing evidence; Grade 5, Partially convincing evidence; Grade 6, Partially convincing evidence; Grade 7, Convincing evidence; Grade 8, Unconvincing evidence
Classification Accuracy Winter: K, Partially convincing evidence; Grade 1, Partially convincing evidence; Grade 2, Convincing evidence; Grade 3, Partially convincing evidence; Grade 4, Convincing evidence; Grade 5, Convincing evidence; Grade 6, Unconvincing evidence; Grade 7, Partially convincing evidence; Grade 8, Unconvincing evidence
Classification Accuracy Spring: K, Convincing evidence; Grade 1, Partially convincing evidence; Grade 2, Convincing evidence; Grade 3, Partially convincing evidence; Grade 4, Partially convincing evidence; Grade 5, Partially convincing evidence; Grade 6, Unconvincing evidence; Grade 7, Partially convincing evidence; Grade 8, Data unavailable
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available

DIBELS Next Composite Score

Classification Accuracy

Select time of year
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
In kindergarten, the criterion measure was the DIBELS Next Composite score administered in the spring. The DIBELS Next Composite score in the spring of kindergarten combines scores on Letter Naming Fluency, Phoneme Segmentation Fluency, and Nonsense Word Fluency Correct Letter Sounds. Although it assesses similar constructs, DIBELS Next was developed separately from DIBELS 8th Edition using different development specifications and is not part of the same measurement system.
Do the classification accuracy analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Screening measures were administered in the fall, winter, and spring of the 2018-19 school year. The DIBELS Next Composite was administered in the spring of 2019. All else being equal, concurrent administrations are preferable because they reduce the likelihood of inflated false positives due to intervention delivery on the part of schools. Thus, all spring benchmarks predicted end-of-year performance on the concurrent spring 2019 administration. Fall and winter benchmarks predicted end-of-year performance on the available spring 2019 DIBELS Next Composite administration.
Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
DIBELS 8th Edition Composite cut scores were established by using the composite score at each time point to predict end-of-year performance on a criterion measure of reading achievement. We used a two-stage process for determining cut points for the DIBELS 8th Edition Composite score. First, we plotted a receiver operating characteristic (ROC) curve for the selected end-of-year criterion measure at each time point and grade and determined the area under the curve (AUC). Second, we conducted a diagnostic analysis of each measure at each time point (i.e., season). For each analysis, we focused on two statistics: sensitivity and specificity. We chose to focus on sensitivity and specificity (rather than PPV and NPV) because they remain stable indicators regardless of the prevalence of reading difficulties in the population (Pepe, 2003). We attempted to balance sensitivity and specificity in our analyses because of their complementary roles in a prevention model in education. Specifically, we want to be confident that as many students as possible receive the level of instructional support they require as early as possible, without overburdening teachers by asking them to deliver intervention to students who do not need additional instruction. Thus, wherever possible, the recommended cut points for DIBELS 8th Edition were determined using an optimal decision threshold that maximized sensitivity among scores with a specificity at or above .80. That is, at each time point, we selected the score with the highest sensitivity among scores with a specificity at or above .80, unless the maximum sensitivity value exceeded .90, in which case the cut point selected was the score that minimized the difference between sensitivity and specificity among scores with specificity at or above .80. For measures and periods with no cut scores that met the minimum threshold for specificity, the cut point represents the score that best balances the goals of providing additional instruction where needed and keeping demands on teachers reasonable.
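The selection rule can be expressed compactly in code. The sketch below is a hypothetical illustration (it assumes “at risk” status is coded 1 when a student falls below the criterion cut, and that lower screener scores indicate greater risk); it is not the authors’ analysis code.

```python
# Hypothetical sketch of the described cut-point rule: among cuts with
# specificity >= .80, take the highest sensitivity, unless max sensitivity
# exceeds .90, in which case take the cut minimizing |sensitivity - specificity|.
import numpy as np

def pick_cut(scores, at_risk):
    """scores: screener scores; at_risk: 1 if below the criterion cut, else 0."""
    candidates = []
    for cut in np.unique(scores):
        flagged = scores < cut                      # screener calls "at risk"
        tp = np.sum(flagged & (at_risk == 1))
        fn = np.sum(~flagged & (at_risk == 1))
        tn = np.sum(~flagged & (at_risk == 0))
        fp = np.sum(flagged & (at_risk == 0))
        sens = tp / max(tp + fn, 1)
        spec = tn / max(tn + fp, 1)
        if spec >= 0.80:
            candidates.append((cut, sens, spec))
    if not candidates:
        return None                                 # falls back to judgment
    best = max(candidates, key=lambda c: c[1])      # maximize sensitivity
    if best[1] > 0.90:                              # balance sens and spec instead
        best = min(candidates, key=lambda c: abs(c[1] - c[2]))
    return best                                     # (cut, sensitivity, specificity)
```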
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
No
If yes, please describe the intervention, what children received the intervention, and how they were chosen.

Cross-Validation

Has a cross-validation study been conducted?
No
If yes,
Select time of year.
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
Do the cross-validation analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
If yes, please describe the intervention, what children received the intervention, and how they were chosen.

Iowa Assessment Total Reading Score

Classification Accuracy

Select time of year
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
The criterion outcome measure was the Iowa Assessment Total Reading Score. The Iowa Assessment is a published, group-administered, multiple-choice, norm-referenced measure of reading achievement. It is completely independent of DIBELS 8th Edition measures.
Do the classification accuracy analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Screening measures were administered in the fall, winter, and spring of the 2018-19 school year and the fall of the 2019-20 school year. The Iowa Assessment was administered in the spring of 2019 to the full benchmarking sample and in the fall of 2019 to a subset of schools. All else being equal, concurrent administrations are preferable because they reduce the likelihood of inflated false positives due to intervention delivery on the part of schools. Thus, all spring benchmarks predicted end-of-year performance on the concurrent spring 2019 administration. Winter benchmarks predicted end-of-year performance on the spring 2019 Iowa administration because no concurrent administration was available. Fall benchmarks predicted end-of-year performance on the spring 2019 Iowa administration due to the limited sample size for the fall 2019 Iowa sample, with the exception of grade 3, where the sample size was sufficient and thus the concurrent fall 2019 administration was used.
Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
DIBELS 8th Edition Composite cut scores were established by using the composite score at each time point to predict end-of-year performance on a criterion measure of reading achievement. We used a two-stage process for determining cut points for the DIBELS 8th Edition Composite score. First, we plotted a receiver operating characteristic (ROC) curve for the selected end-of-year criterion measure at each time point and grade and determined the area under the curve (AUC). Second, we conducted a diagnostic analysis of each measure at each time point (i.e., season). For each analysis, we focused on two statistics: sensitivity and specificity. We chose to focus on sensitivity and specificity (rather than PPV and NPV) because they remain stable indicators regardless of the prevalence of reading difficulties in the population (Pepe, 2003). We attempted to balance sensitivity and specificity in our analyses because of their complementary roles in a prevention model in education. Specifically, we want to be confident that as many students as possible receive the level of instructional support they require as early as possible, without overburdening teachers by asking them to deliver intervention to students who do not need additional instruction. Thus, wherever possible, the recommended cut points for DIBELS 8th Edition were determined using an optimal decision threshold that maximized sensitivity among scores with a specificity at or above .80. That is, at each time point, we selected the score with the highest sensitivity among scores with a specificity at or above .80, unless the maximum sensitivity value exceeded .90, in which case the cut point selected was the score that minimized the difference between sensitivity and specificity among scores with specificity at or above .80. For measures and periods with no cut scores that met the minimum threshold for specificity, the cut point represents the score that best balances the goals of providing additional instruction where needed and keeping demands on teachers reasonable.
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
No
If yes, please describe the intervention, what children received the intervention, and how they were chosen.

Cross-Validation

Has a cross-validation study been conducted?
No
If yes,
Select time of year.
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
Do the cross-validation analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
If yes, please describe the intervention, what children received the intervention, and how they were chosen.

Classification Accuracy - Fall

Evidence Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Criterion measure DIBELS Next Composite Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score
Cut Points - Percentile rank on criterion measure 20 20 20 20 20 20 20 20 20
Cut Points - Performance score on criterion measure 108 140 154 158 176 185 194 202 211
Cut Points - Corresponding performance score (numeric) on screener measure 279 320 315 313 309 312 312 314 344
Classification Data - True Positive (a) 36 17 32 20 22 17 4 9 14
Classification Data - False Positive (b) 30 12 9 10 17 15 7 1 4
Classification Data - False Negative (c) 15 9 7 8 9 7 2 1 7
Classification Data - True Negative (d) 225 74 126 57 133 99 31 25 20
Area Under the Curve (AUC) 0.87 0.86 0.93 0.88 0.93 0.83 0.84 0.93 0.89
AUC Estimate’s 95% Confidence Interval: Lower Bound 0.81 0.79 0.89 0.82 0.89 0.72 0.70 0.82 0.79
AUC Estimate’s 95% Confidence Interval: Upper Bound 0.92 0.93 0.97 0.95 0.96 0.93 0.99 1.00 0.98
Statistics Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Base Rate 0.17 0.23 0.22 0.29 0.17 0.17 0.14 0.28 0.47
Overall Classification Rate 0.85 0.81 0.91 0.81 0.86 0.84 0.80 0.94 0.76
Sensitivity 0.71 0.65 0.82 0.71 0.71 0.71 0.67 0.90 0.67
Specificity 0.88 0.86 0.93 0.85 0.89 0.87 0.82 0.96 0.83
False Positive Rate 0.12 0.14 0.07 0.15 0.11 0.13 0.18 0.04 0.17
False Negative Rate 0.29 0.35 0.18 0.29 0.29 0.29 0.33 0.10 0.33
Positive Predictive Power 0.55 0.59 0.78 0.67 0.56 0.53 0.36 0.90 0.78
Negative Predictive Power 0.94 0.89 0.95 0.88 0.94 0.93 0.94 0.96 0.74
Sample Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Date Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion Fall 2019 screening; Fall 2019 criterion Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion
Sample Size 306 112 174 95 181 138 44 36 45
Geographic Representation East North Central (OH)
Middle Atlantic (PA)
West North Central (MO)
West South Central (AR, TX)
East North Central (OH)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (GA)
West North Central (MO)
East North Central (OH)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (FL, GA)
West North Central (MO)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (GA)
West North Central (MO)
East North Central (OH)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (FL, GA)
West North Central (MO)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (FL, GA)
West North Central (MO)
Mountain (AZ)
Pacific (WA)
West North Central (MO)
Mountain (AZ)
Pacific (WA)
West North Central (MO)
Mountain (AZ)
Pacific (WA)
West North Central (MO)
Male 52.6% 34.8% 28.2% 37.9% 32.0% 26.1% 43.2% 55.6% 46.7%
Female 47.1% 35.7% 26.4% 61.1% 47.5% 27.5% 56.8% 44.4% 53.3%
Other                  
Gender Unknown 0.3% 29.5% 23.0% 1.1% 20.4% 15.2%      
White, Non-Hispanic 44.8% 26.8% 30.5% 35.8% 35.4% 23.2% 65.9% 75.0% 75.6%
Black, Non-Hispanic 1.3% 32.1% 35.1% 50.5% 27.1% 26.1% 11.4% 5.6% 2.2%
Hispanic 53.3% 2.7% 2.9% 1.1% 6.1% 2.2% 15.9% 5.6%  
Asian/Pacific Islander   0.9% 2.9%   4.4%        
American Indian/Alaska Native   5.4% 2.3% 5.3% 2.2% 1.4%   2.8% 20.0%
Other 0.7% 2.7% 4.6% 6.3% 4.4% 0.7% 6.8% 11.1% 2.2%
Race / Ethnicity Unknown   29.5% 21.8% 1.1% 20.4% 15.2%      
Low SES                  
IEP or diagnosed disability                  
English Language Learner                  

Classification Accuracy - Winter

Evidence Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Criterion measure DIBELS Next Composite Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score
Cut Points - Percentile rank on criterion measure 20 20 20 20 20 20 20 20 20
Cut Points - Performance score on criterion measure 108 140 154 166 176 185 194 202 211
Cut Points - Corresponding performance score (numeric) on screener measure 355 376 372 376 379 379 369 373 390
Classification Data - True Positive (a) 37 28 22 12 29 16 7 30 31
Classification Data - False Positive (b) 25 21 14 30 17 15 19 23 7
Classification Data - False Negative (c) 14 7 5 8 5 4 2 6 6
Classification Data - True Negative (d) 233 79 108 115 131 95 20 83 31
Area Under the Curve (AUC) 0.92 0.85 0.93 0.82 0.92 0.90 0.68 0.88 0.87
AUC Estimate’s 95% Confidence Interval: Lower Bound 0.88 0.79 0.89 0.72 0.87 0.84 0.49 0.82 0.79
AUC Estimate’s 95% Confidence Interval: Upper Bound 0.95 0.91 0.97 0.91 0.98 0.96 0.86 0.94 0.95
Statistics Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Base Rate 0.17 0.26 0.18 0.12 0.19 0.15 0.19 0.25 0.49
Overall Classification Rate 0.87 0.79 0.87 0.77 0.88 0.85 0.56 0.80 0.83
Sensitivity 0.73 0.80 0.81 0.60 0.85 0.80 0.78 0.83 0.84
Specificity 0.90 0.79 0.89 0.79 0.89 0.86 0.51 0.78 0.82
False Positive Rate 0.10 0.21 0.11 0.21 0.11 0.14 0.49 0.22 0.18
False Negative Rate 0.27 0.20 0.19 0.40 0.15 0.20 0.22 0.17 0.16
Positive Predictive Power 0.60 0.57 0.61 0.29 0.63 0.52 0.27 0.57 0.82
Negative Predictive Power 0.94 0.92 0.96 0.93 0.96 0.96 0.91 0.93 0.84
Sample Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Date Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion
Sample Size 309 135 149 165 182 130 48 142 75
Geographic Representation East North Central (OH)
Middle Atlantic (PA)
West North Central (MO)
West South Central (AR, TX)
East North Central (OH)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (GA)
West North Central (MO)
East North Central (OH)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (FL, GA)
West North Central (MO)
East North Central (OH)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (FL, GA)
West North Central (MO)
East North Central (OH)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (FL, GA)
West North Central (MO)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (GA)
West North Central (MO)
Mountain (AZ)
Pacific (WA)
South Atlantic (FL, GA)
West North Central (MO)
Mountain (AZ)
Pacific (WA)
South Atlantic (FL, GA)
West North Central (MO)
Mountain (AZ)
South Atlantic (GA)
West North Central (MO)
Male 52.8% 36.3% 32.2% 38.8% 33.0% 41.5% 39.6% 45.1% 42.7%
Female 47.2% 34.1% 36.2% 37.6% 46.7% 40.8% 60.4% 54.2% 52.0%
Other                  
Gender Unknown   29.6% 31.5% 23.6% 20.3% 17.7%   0.7% 5.3%
White, Non-Hispanic 44.7% 23.0% 35.6% 40.0% 36.8% 50.8% 68.8% 50.0% 48.0%
Black, Non-Hispanic 1.3% 37.8% 18.8% 24.8% 26.4% 20.8%   40.1% 33.3%
Hispanic 53.4% 2.2% 3.4% 3.6% 4.9% 2.3% 20.8% 1.4%  
Asian/Pacific Islander   0.7% 3.4% 3.0% 4.4% 5.4%   4.2%  
American Indian/Alaska Native   4.4% 2.7% 2.4% 2.7% 1.5%   0.7% 12.0%
Other 0.6% 2.2% 4.7% 2.4% 4.4% 0.8% 10.4% 2.8% 1.3%
Race / Ethnicity Unknown   29.6% 31.5% 23.6% 20.3% 18.5%   2.8% 5.3%
Low SES                  
IEP or diagnosed disability                  
English Language Learner                  

Classification Accuracy - Spring

Evidence Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7
Criterion measure DIBELS Next Composite Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score
Cut Points - Percentile rank on criterion measure 20 20 20 20 20 20 20 20
Cut Points - Performance score on criterion measure 108 140 154 166 176 185 194 202
Cut Points - Corresponding performance score (numeric) on screener measure 405 426 420 423 420 435 418 416
Classification Data - True Positive (a) 44 25 36 16 24 10 7 30
Classification Data - False Positive (b) 19 14 15 27 21 16 18 13
Classification Data - False Negative (c) 11 7 9 7 9 4 2 9
Classification Data - True Negative (d) 247 86 127 121 126 79 21 94
Area Under the Curve (AUC) 0.94 0.87 0.92 0.85 0.90 0.87 0.67 0.88
AUC Estimate’s 95% Confidence Interval: Lower Bound 0.91 0.81 0.89 0.77 0.84 0.76 0.50 0.82
AUC Estimate’s 95% Confidence Interval: Upper Bound 0.97 0.94 0.96 0.92 0.95 0.97 0.85 0.95
Statistics Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7
Base Rate 0.17 0.24 0.24 0.13 0.18 0.13 0.19 0.27
Overall Classification Rate 0.91 0.84 0.87 0.80 0.83 0.82 0.58 0.85
Sensitivity 0.80 0.78 0.80 0.70 0.73 0.71 0.78 0.77
Specificity 0.93 0.86 0.89 0.82 0.86 0.83 0.54 0.88
False Positive Rate 0.07 0.14 0.11 0.18 0.14 0.17 0.46 0.12
False Negative Rate 0.20 0.22 0.20 0.30 0.27 0.29 0.22 0.23
Positive Predictive Power 0.70 0.64 0.71 0.37 0.53 0.38 0.28 0.70
Negative Predictive Power 0.96 0.92 0.93 0.95 0.93 0.95 0.91 0.91
Sample Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7
Date Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion
Sample Size 321 132 187 171 180 109 48 146
Geographic Representation Middle Atlantic (PA)
West North Central (MO)
West South Central (AR, TX)
East North Central (OH)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (GA)
West North Central (MO)
East North Central (OH)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (FL, GA)
West North Central (MO)
East North Central (OH)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (FL, GA)
West North Central (MO)
East North Central (OH)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (FL, GA)
West North Central (MO)
Mountain (AZ)
Pacific (OR, WA)
South Atlantic (FL, GA)
Pacific (WA)
West North Central (MO)
Pacific (WA)
West North Central (MO)
Male 52.6% 35.6% 33.7% 36.3% 33.9% 37.6% 39.6% 43.8%
Female 47.4% 28.8% 41.2% 37.4% 45.0% 42.2% 60.4% 55.5%
Other                
Gender Unknown   32.6% 25.1% 26.3% 21.1% 20.2%   1.4%
White, Non-Hispanic 44.9% 25.0% 28.3% 39.2% 36.7% 61.5% 68.8% 48.6%
Black, Non-Hispanic 1.2% 28.8% 34.2% 22.8% 25.6% 6.4%   41.8%
Hispanic 53.3% 3.0% 3.7% 3.5% 5.0% 1.8% 20.8% 1.4%
Asian/Pacific Islander   0.8% 2.7% 2.9% 4.4% 6.4%   4.1%
American Indian/Alaska Native   4.5% 2.1% 2.3% 2.8% 1.8%   0.7%
Other 0.6% 2.3% 3.7% 2.9% 4.4% 0.9% 10.4% 2.7%
Race / Ethnicity Unknown   32.6% 25.1% 26.3% 21.1% 20.2%   1.4%
Low SES                
IEP or diagnosed disability                
English Language Learner                

Reliability

Grade: Kindergarten; Grade 1; Grade 2; Grade 3; Grade 4; Grade 5; Grade 6; Grade 7; Grade 8
Rating: Convincing evidence for all grades (K through Grade 8)
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
*Offer a justification for each type of reliability reported, given the type and purpose of the tool.
To assess the reliability of DIBELS 8th Edition, we evaluated multiple forms of reliability, including model-based and delayed alternate-form reliability. We include delayed alternate-form reliability as a supplementary source of reliability evidence by reporting correlations between two or more alternate forms of the same test administered at different time points (e.g., different seasons).

Model-based reliability: Model-based estimates of coefficient omega were derived from the exploratory factor analysis used to determine composite score formulas. Coefficient omega (McDonald, 1970, 1999) is a measure of the reliability of the items in a measurement instrument as a measure of the underlying latent variable (or construct). Since this is how the composite was derived, this is the most relevant reliability evidence for composite scores.

Alternate-form reliability: Alternate-form reliability indicates the extent to which test results generalize to different item samples. To assess alternate-form reliability, students were administered multiple forms of each subtest, and scores from these forms were correlated. Concurrent alternate-form reliability of a single (i.e., benchmark) form was estimated by the correlation between the score on that form and the score on an alternate (i.e., progress monitoring) form. Delayed alternate-form reliability was estimated by correlating scores from different benchmark administrations across the year (beginning, middle, and end of year). The use of alternate-form reliability is justified because it uses different but equivalent forms, thereby avoiding the practice effects inherent in test-retest reliability, where the same form is administered twice. In addition, it is important to establish that different forms are equivalent, given the use of different forms for progress monitoring across the year.
*Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
The delayed alternate-form and exploratory factor analysis (EFA) sample consisted of 21 schools that administered DIBELS 8th Edition to 5,259 students in grades K-8. The schools were located in the Pacific, East North Central, West North Central, Mountain, and South Atlantic census divisions. Schools represent towns, large cities, suburbs, and rural areas. The sample of students was 50.6% male and 48.9% female; 1.5% American Indian or Alaskan Native, 2.5% Asian, 17.2% Black, 20.9% Hispanic, 4.1% two or more races, 0.4% Native Hawaiian/Pacific Islander, and 53.0% White. 13.9% of students had disabilities, 59.6% were eligible for free or reduced lunch, and 7.3% were English learners. The confirmatory factor analysis (CFA) sample was drawn from the DIBELS Data System sample for the 2018-19 school year and comprised 135 districts across 18 states. Using weighted district-level estimates, the sample of students was approximately 51.39% male and 48.61% female; 1.74% American Indian or Alaskan Native, 4.27% Asian/Pacific Islander, 2.55% Black or African-American, 0.04% Native Hawaiian or other Pacific Islander, 14.93% Hispanic or Latino, 3.11% two or more races, and 73.27% White. Approximately 16.05% of students had an Individualized Education Program (IEP) and 4.91% were English learners.
*Describe the analysis procedures for each reported type of reliability.
Model-based reliability: Reliability estimates were based on a factor-analytic model. Coefficient omega uses the item factor loadings and uniquenesses to estimate reliability. Multiple ways of calculating omega are available; we estimate and report omega-3, which uses the observed covariance matrix instead of the model-implied covariance matrix to calculate the observed total variance. This is the most conservative method of calculating coefficient omega.

Alternate-form reliability: Students were administered multiple forms of each subtest, and scores from these forms were correlated. Delayed alternate-form reliability was estimated by correlating scores from different benchmark administrations across the year (beginning, middle, and end of year).
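For illustration, the omega computation as described (squared sum of loadings over the observed total-score variance) can be sketched as follows; the loadings and covariance values are made up, not the published DIBELS 8 estimates.

```python
# Illustrative omega calculation using the observed covariance matrix for the
# total variance, per the description above. All numbers are hypothetical.
import numpy as np

loadings = np.array([5.0, 4.0, 6.0])       # assumed unstandardized loadings
obs_cov = np.array([[50.0, 20.0, 30.0],    # assumed observed covariance matrix
                    [20.0, 40.0, 24.0],
                    [30.0, 24.0, 60.0]])

true_var = loadings.sum() ** 2             # variance attributable to the factor
total_var = obs_cov.sum()                  # Var(total score) = sum of all entries
print(round(true_var / total_var, 3))      # omega for the composite, ~0.755
```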

*In the table(s) below, report the results of the reliability analyses described above (e.g., internal consistency or inter-rater reliability coefficients).

Type of Subgroup Informant Age / Grade Test or Criterion n Median Coefficient 95% Confidence Interval
Lower Bound
95% Confidence Interval
Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
No
Provide citations for additional published studies.
Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
No

If yes, fill in data for each subgroup with disaggregated reliability data.

Type of Subgroup Informant Age / Grade Test or Criterion n Median Coefficient 95% Confidence Interval
Lower Bound
95% Confidence Interval
Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
No
Provide citations for additional published studies.

Validity

Grade: Kindergarten; Grade 1; Grade 2; Grade 3; Grade 4; Grade 5; Grade 6; Grade 7; Grade 8
Rating: Convincing evidence (K through Grade 5); Unconvincing evidence (Grade 6); Partially convincing evidence (Grades 7 and 8)
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
*Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
The DIBELS 8th Edition Composite Score in kindergarten through grade 8 was validated against multiple criterion measures drawn from DIBELS Next and the Iowa Assessment. The DIBELS Next criterion measure was the DIBELS Next Composite Score, which provided both concurrent and predictive validity evidence in grades K-3. The DIBELS Next Composite Score comprises the individual DIBELS Next subtests administered in a given grade and time of year; it is an appropriate criterion because it assesses reading and was developed independently and administered separately from DIBELS 8th Edition. The Iowa Assessment, administered in the spring, served as an additional criterion measure. It is a published, group-administered, multiple-choice, norm-referenced test of reading and a commonly accepted measure of reading achievement; the Total Reading score assesses broad reading achievement. The Iowa Assessments are completely independent of DIBELS 8th Edition measures.
*Describe the sample(s), including size and characteristics, for each validity analysis conducted.
Concurrent fall evidence is drawn from 7 public schools that administered the Iowa Assessment to 939 students in grades 1-8 during the fall of 2019. The schools were located in the Pacific, West North Central, Mountain, and South Atlantic census divisions. Schools represent towns, large cities, and rural areas. The sample of students was 50.3% male and 49.7% female; 3.88% American Indian or Alaskan Native, 0.15% Asian, 43.15% Black, 6.69% Hispanic, 4.12% two or more races, 0.43% Native Hawaiian/Pacific Islander, and 41.44% White. 16.26% of students had disabilities, 66.91% were eligible for free or reduced lunch, and 1.36% were English learners. All other evidence was drawn from 21 schools that administered DIBELS 8th Edition to 5,259 students in grades K-8, in addition to the Iowa Assessment, which was administered in the spring of 2019. The schools were located in the Pacific, East North Central, West North Central, Mountain, and South Atlantic census divisions. Schools represent towns, large cities, suburbs, and rural areas. The sample of students was 50.6% male and 48.9% female; 1.5% American Indian or Alaskan Native, 2.5% Asian, 17.2% Black, 20.9% Hispanic, 4.1% two or more races, 0.4% Native Hawaiian/Pacific Islander, and 53.0% White. 13.9% of students had disabilities, 59.6% were eligible for free or reduced lunch, and 7.3% were English learners.
*Describe the analysis procedures for each reported type of validity.
Concurrent validity: Concurrent validity was evaluated by examining the strength of the correlation between the screening measure and the criterion measures administered at approximately the same time of the year. Predictive validity: Predictive validity was evaluated by examining the strength of the correlation between the screening measure and students’ future performance on the criterion measures (administered at least three months later).

*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.

Type of Subgroup Informant Age / Grade Test or Criterion n Median Coefficient 95% Confidence Interval
Lower Bound
95% Confidence Interval
Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
No
Provide citations for additional published studies.
Describe the degree to which the provided data support the validity of the tool.
Overall, the validity of the DIBELS 8th Edition Composite Score is well supported by a range of concurrent and predictive validity correlations across multiple criterion measures. Lower correlations reflect greater lengths of time between administrations (and thus more opportunity for student growth) and/or weaker alignment between the constructs being measured.
Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
No

If yes, fill in data for each subgroup with disaggregated validity data.

Type of Subgroup Informant Age / Grade Test or Criterion n Median Coefficient 95% Confidence Interval
Lower Bound
95% Confidence Interval
Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
No
Provide citations for additional published studies.

Bias Analysis

Grade: Kindergarten; Grade 1; Grade 2; Grade 3; Grade 4; Grade 5; Grade 6; Grade 7; Grade 8
Rating: No for all grades (K through Grade 8)
Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
No
If yes,
a. Describe the method used to determine the presence or absence of bias:
b. Describe the subgroups for which bias analyses were conducted:
c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.

Data Collection Practices

Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.