DIBELS 8th Edition
Maze

Summary

Maze is a standardized, group-administered measure of reading comprehension. Maze is administered to students in the fall of second grade through the spring of eighth grade. In Maze, the examiner presents students with a passage in which every seventh word has been removed and replaced with three options. In third through eighth grade, the first and last sentences are left intact; in second grade, the first two sentences and the last sentence are left intact. The final score is the number of maze words selected correctly within 3 minutes minus one-half of the number of errors. Skipped items are treated as errors, but items not reached are not counted as errors.

Where to Obtain:
University of Oregon, Center on Teaching and Learning
support@dibels.uoregon.edu
5292 University of Oregon, Eugene, OR 97403
1-888-497-4290
https://dibels.uoregon.edu
Initial Cost:
Free
Replacement Cost:
Free
Included in Cost:
All materials required for administration are available for free download at https://dibels.uoregon.edu. Printed materials are also available at https://dibels.uoregon.edu/market for a cost of $53 to $91 for a classroom set of benchmark screening materials. The DIBELS Data System (DDS) is not required, but is available for online data entry, management, and reporting for a cost of $1.00 per student per year. A multi-year discount is currently available, and the DDS is free of charge to schools in Oregon. For the most current pricing information, see https://dibels.uoregon.edu/help/pricing. Additional costs are associated with printing and, if also using the DIBELS Data System, with computer and internet access. Starting in the 2019-20 school year, tablet-based administration will be available from Amplify (https://www.amplify.com).
Training Requirements:
1-4 hours
Qualified Administrators:
Paraprofessional
Access to Technical Support:
Technical support is available from the DIBELS Data System at the University of Oregon, https://dibels.uoregon.edu (phone: 1-888-497-4290, email: support@dibels.uoregon.edu, hours of operation: 6:00am to 5:30pm Pacific Time, Monday through Friday).
Assessment Format:
  • Performance measure
  • Other: Small or large group administration is supported.
Scoring Time:
  • 2 minutes per student
Scores Generated:
  • Raw score
  • Percentile score
  • Developmental benchmarks
  • Developmental cut points
Administration Time:
  • 5 minutes per group
Scoring Method:
  • Manually (by hand)
Technology Requirements:
  • None for administration; an internet-connected computer is required only if using the DIBELS Data System for data entry and reporting.
Accommodations:
DIBELS 8th Edition approved assessment accommodations involve minor changes to assessment procedures that are unlikely to change the meaning of the results and have been approved either by DIBELS developers or assessment professionals. They should be used only when:
  • An accurate score is unlikely to be obtained without the accommodation; and/or
  • The accommodation is specified in a student’s 504 plan or Individualized Education Program (IEP).
The accommodations approved for DIBELS 8th Edition are: quiet setting for testing; breaks in between measures; assistive technology (e.g., hearing aids, assistive listening devices, glasses); enlarged student materials; colored overlays, filters, or lighting adjustments; and marker or ruler for tracking.

Descriptive Information

Please provide a description of your tool:
Maze is a standardized, group-administered measure of reading comprehension. Maze is administered to students in the fall of second grade through the spring of eighth grade. In Maze, the examiner presents students with a passage in which every seventh word has been removed and replaced with three options. In third through eighth grade, the first and last sentences are left intact; in second grade, the first two sentences and the last sentence are left intact. The final score is the number of maze words selected correctly within 3 minutes minus one-half of the number of errors. Skipped items are treated as errors, but items not reached are not counted as errors.
The tool is intended for use with the following grade(s).
not selected Preschool / Pre-kindergarten
not selected Kindergarten
not selected First grade
selected Second grade
selected Third grade
selected Fourth grade
selected Fifth grade
selected Sixth grade
selected Seventh grade
selected Eighth grade
not selected Ninth grade
not selected Tenth grade
not selected Eleventh grade
not selected Twelfth grade

The tool is intended for use with the following age(s).
not selected 0-4 years old
not selected 5 years old
not selected 6 years old
not selected 7 years old
not selected 8 years old
not selected 9 years old
not selected 10 years old
not selected 11 years old
not selected 12 years old
not selected 13 years old
not selected 14 years old
not selected 15 years old
not selected 16 years old
not selected 17 years old
not selected 18 years old

The tool is intended for use with the following student populations.
selected Students in general education
selected Students with disabilities
selected English language learners

ACADEMIC ONLY: What skills does the tool screen?

Reading
Phonological processing:
not selected RAN
not selected Memory
not selected Awareness
not selected Letter sound correspondence
not selected Phonics
not selected Structural analysis

Word ID
not selected Accuracy
not selected Speed

Nonword
not selected Accuracy
not selected Speed

Spelling
not selected Accuracy
not selected Speed

Passage
not selected Accuracy
not selected Speed

Reading comprehension:
not selected Multiple choice questions
not selected Cloze
not selected Constructed Response
not selected Retell
selected Maze
not selected Sentence verification
not selected Other (please describe):


Listening comprehension:
not selected Multiple choice questions
not selected Cloze
not selected Constructed Response
not selected Retell
not selected Maze
not selected Sentence verification
Vocabulary:
not selected Expressive
not selected Receptive

Mathematics
Global Indicator of Math Competence
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Early Numeracy
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Concepts
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Computation
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Application
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Fractions/Decimals
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Algebra
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Geometry
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

not selected Other (please describe):

Please describe specific domain, skills or subtests:
BEHAVIOR ONLY: Which category of behaviors does your tool target?


BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each sub-domain or sub-construct.

Acquisition and Cost Information

Where to obtain:
Email Address
support@dibels.uoregon.edu
Address
5292 University of Oregon, Eugene, OR 97403
Phone Number
1-888-497-4290
Website
https://dibels.uoregon.edu
Initial cost for implementing program:
Cost
$0.00
Unit of cost
Replacement cost per unit for subsequent use:
Cost
$0.00
Unit of cost
Duration of license
Additional cost information:
Describe basic pricing plan and structure of the tool. Provide information on what is included in the published tool, as well as what is not included but required for implementation.
All materials required for administration are available for free download at https://dibels.uoregon.edu. Printed materials are also available at https://dibels.uoregon.edu/market for a cost of $53 to $91 for a classroom set of benchmark screening materials. The DIBELS Data System (DDS) is not required, but is available for online data entry, management, and reporting for a cost of $1.00 per student per year. A multi-year discount is currently available, and the DDS is free of charge to schools in Oregon. For the most current pricing information, see https://dibels.uoregon.edu/help/pricing. Additional costs are associated with printing and, if also using the DIBELS Data System, with computer and internet access. Starting in the 2019-20 school year, tablet-based administration will be available from Amplify (https://www.amplify.com).
Provide information about special accommodations for students with disabilities.
DIBELS 8th Edition approved assessment accommodations involve minor changes to assessment procedures that are unlikely to change the meaning of the results and have been approved either by DIBELS developers or assessment professionals. They should be used only when:
  • An accurate score is unlikely to be obtained without the accommodation; and/or
  • The accommodation is specified in a student’s 504 plan or Individualized Education Program (IEP).
The accommodations approved for DIBELS 8th Edition are: quiet setting for testing; breaks in between measures; assistive technology (e.g., hearing aids, assistive listening devices, glasses); enlarged student materials; colored overlays, filters, or lighting adjustments; and marker or ruler for tracking.

Administration

BEHAVIOR ONLY: What type of administrator is your tool designed for?
not selected General education teacher
not selected Special education teacher
not selected Parent
not selected Child
not selected External observer
not selected Other
If other, please specify:

What is the administration setting?
not selected Direct observation
not selected Rating scale
not selected Checklist
selected Performance measure
not selected Questionnaire
not selected Direct: Computerized
not selected One-to-one
selected Other
If other, please specify:
Small or large group administration is supported.

Does the tool require technology?
No

If yes, what technology is required to implement your tool? (Select all that apply)
not selected Computer or tablet
not selected Internet connection
not selected Other technology (please specify)

If your program requires additional technology not listed above, please describe the required technology and the extent to which it is combined with teacher small-group instruction/intervention:
Administering the measure does not require technology, but if users choose to use the DIBELS Data System for data management and reporting, an internet-connected computer is required. Additionally, if schools choose to administer the DIBELS 8th Edition measures using a tablet, they should contact Amplify for technology requirements.

What is the administration context?
selected Individual
selected Small group   If small group, n=
selected Large group   If large group, n=30
not selected Computer-administered
not selected Other
If other, please specify:

What is the administration time?
Time in minutes
5
per (student/group/other unit)
group

Additional scoring time:
Time in minutes
2
per (student/group/other unit)
student

ACADEMIC ONLY: What are the discontinue rules?
selected No discontinue rules provided
not selected Basals
not selected Ceilings
not selected Other
If other, please specify:


Are norms available?
Yes
Are benchmarks available?
Yes
If yes, how many benchmarks per year?
3
If yes, for which months are benchmarks available?
Benchmarks are available for the beginning, middle, and end of the school year. Beginning months are typically September, October, and November; middle months are December, January, and February; and end months are typically March, April, May, and June. Regardless of when the benchmark occurs, we recommend that all students be tested within a one-month window.
BEHAVIOR ONLY: Can students be rated concurrently by one administrator?
If yes, how many students can be rated concurrently?

Training & Scoring

Training

Is training for the administrator required?
Yes
Describe the time required for administrator training, if applicable:
1-4 hours
Please describe the minimum qualifications an administrator must possess.
Paraprofessional
not selected No minimum qualifications
Are training manuals and materials available?
Yes
Are training manuals/materials field-tested?
Yes
Are training manuals/materials included in cost of tools?
Yes
If No, please describe training costs:
Information about online training is available on the DIBELS Data System (https://dibels.uoregon.edu/training). Online training is free of charge for ‘early adopters’ (i.e., schools or districts that sign up for the next school year by a specified date in spring). For people not associated with the ‘early adopter’ program, the charge is $40 to $79 per person, depending on the number of people purchasing the training and whether an individual is associated with a DDS account.
Can users obtain ongoing professional and technical support?
Yes
If Yes, please describe how users can obtain support:
Technical support is available from the DIBELS Data System at the University of Oregon, https://dibels.uoregon.edu (phone: 1-888-497-4290, email: support@dibels.uoregon.edu, hours of operation: 6:00am to 5:30pm Pacific Time, Monday through Friday).

Scoring

How are scores calculated?
selected Manually (by hand)
not selected Automatically (computer-scored)
not selected Other
If other, please specify:

Do you provide basis for calculating performance level scores?
Yes
What is the basis for calculating performance level and percentile scores?
not selected Age norms
selected Grade norms
not selected Classwide norms
not selected Schoolwide norms
not selected Stanines
not selected Normal curve equivalents

What types of performance level scores are available?
selected Raw score
not selected Standard score
selected Percentile score
not selected Grade equivalents
not selected IRT-based score
not selected Age equivalents
not selected Stanines
not selected Normal curve equivalents
selected Developmental benchmarks
selected Developmental cut points
not selected Equated
not selected Probability
not selected Lexile score
not selected Error analysis
not selected Composite scores
not selected Subscale/subtest scores
not selected Other
If other, please specify:

Does your tool include decision rules?
Yes
If yes, please describe.
DIBELS 8th Edition Maze provides two cut points to help educators determine where to allocate resources and how much intervention students may need. One cut point indicates that students are likely at risk for difficulty in learning to read. The other is a benchmark cut point that indicates whether students are likely to be on track. Students between the two cut points are considered to be somewhere between “at risk” and “on track”.
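For illustration, this decision rule reduces to a simple comparison against the two cut points. The sketch below is illustrative only: the function name is invented here, and the cut-point arguments are placeholders rather than published DIBELS 8th Edition cut scores (which vary by grade and season).

def risk_category(score, at_risk_cut, benchmark_cut):
    # Sketch of the two-cut-point decision rule described above.
    # at_risk_cut and benchmark_cut are placeholders, not published
    # DIBELS 8th Edition cut scores.
    if score < at_risk_cut:
        return "likely at risk"
    if score >= benchmark_cut:
        return "likely on track"
    return "between at-risk and on-track"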
Can you provide evidence in support of multiple decision rules?
Yes
If yes, please describe.
This application addresses the “at-risk” cut point. Information about benchmark cut points is available on the DIBELS Data System website https://dibels.uoregon.edu.
Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
Maze forms include approximately 40-70 items, depending on grade level. Scorers count the number of items answered correctly and the number answered incorrectly (including skipped items), then subtract half the number of incorrect items from the number correct. The result is the Maze adjusted score.
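Because scores are computed by hand, the arithmetic is worth spelling out. The following minimal sketch (the function and variable names are illustrative, not part of the DIBELS materials) implements the adjusted score as described above:

def maze_adjusted_score(num_correct, num_incorrect):
    # Adjusted score = correct selections minus half the errors.
    # Skipped items count as errors; items not reached within the
    # 3-minute limit are excluded from both counts.
    return num_correct - 0.5 * num_incorrect

# Example: 21 correct selections and 4 errors yield 21 - 0.5 * 4 = 19.0.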
Describe the tool’s approach to screening, samples (if applicable), and/or test format, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
Maze is a standardized, group-administered measure of reading comprehension. Maze is administered to students in the fall of second grade through the spring of eighth grade. In Maze, the examiner presents students with a passage in which every seventh word has been removed and replaced with three options. The final score is the number of maze words selected correctly within 3 minutes minus one-half of the number of errors. Skipped items are treated as errors, but items not reached are not counted as errors.

To make DIBELS Maze measures more informative, several innovations were undertaken. First, as with ORF, Maze passages are written by experienced and aspiring authors. Second, more work has gone into the selection of distractors. Third, the formatting of Maze was revised to make reading the passages easier on the eye, reflecting research suggesting that overly long lines can cause disfluency and interfere with reading comprehension for young readers (e.g., Dyson & Haselgrove, 2001; Katzir et al., 2013). Finally, all passages were reviewed by a panel of parents and former teachers for grade-level appropriateness and for adherence to principles of diversity and inclusion.

Maze passages were developed in the same manner as ORF passages but went through a few additional steps of development. Passages were lengthened to reach typical lengths found in other CBMs and in previous DIBELS editions, to allow enough items for appropriate measurement of readers with better fluency and comprehension. Following common rules, the first and last sentences of every passage were left intact, except in Grade 2, where the second sentence was also left intact to allow for better establishment of a situation model for the passage (Kintsch, 1998). Beginning with the third word of the second sentence (or third sentence in Grade 2), every seventh word was deleted, with a few caveats. If the seventh word was a proper noun or number, the eighth word was deleted instead. If the seventh word was highly specialized (e.g., an uncommon scientific term for a given grade), it was not deleted unless it had occurred previously in the passage. Also, hyphenated words were treated as one word.

The deleted word became one of the answer choices, and two distractors were written for each deleted word. Each distractor was written by a different DIBELS 8th Edition researcher according to a number of rules informed by research. Distractors could not begin with the same letter as the correct word (Conoyer et al., 2017). Distractors were also kept to within two letters in length of the correct answer, although this rule was relaxed in the upper grades (i.e., Grade 5 and beyond). When the deleted word was a noun, verb, or adjective, distractors had to be grammatically correct. For instance, if the word to be chosen followed “an”, then the distractors had to begin with a vowel. When the deleted word was a contraction, all distractors also had to be contractions, and tense agreement was deemed unimportant. Different forms of the same word were never used as distractors (e.g., “be”, “is”, and “are”). For all other parts of speech, grammatical correctness was not a requirement because it was found to result in repetitive distractors. For example, when the deleted word was an article, requiring grammatical correctness resulted in the answer choices always being “a”, “an”, and “the.” It was deemed undesirable to have answer choices repeat too frequently.

Finally, in Grade 5 and up, one of the distractors was required to have semantic similarity to the correct word; that is, it could make sense in a given sentence but not in the story as a whole. Once distractors were written, they were reviewed by another DIBELS 8th Edition researcher, who made corrections when rules were violated. If the reviewer found a particular item to be inordinately difficult, the item was brought to a subset of researchers for discussion and potential revision. Finally, the answer choices were reordered so that they were always listed alphabetically. Benchmark passages were selected from the resulting pool using rules that balanced readability, text complexity, and Lexile ratings. In order to balance these factors, readability grade levels were permitted to go above grade level in all but second grade.
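The deletion rules described above can be summarized in code. The sketch below is an illustration under stated assumptions: the predicate functions stand in for the human judgments described in the text, `start` is assumed to index the third word of the second sentence (third sentence in Grade 2), and tokenization is assumed to treat hyphenated words as single words.

def choose_maze_deletions(words, start, is_proper_or_number, is_specialized):
    # Sketch of the every-seventh-word deletion rule with its caveats.
    deletions = []
    seen = set()  # rough bookkeeping of words seen earlier in the passage
    i = start
    while i < len(words):
        seen.update(w.lower() for w in words[max(0, i - 7):i])
        j = i
        if is_proper_or_number(words[j]) and j + 1 < len(words):
            j += 1  # proper noun or number: delete the eighth word instead
        if not is_specialized(words[j]) or words[j].lower() in seen:
            deletions.append(j)  # specialized terms deleted only if repeated
        i = j + 7  # continue with every seventh word
    return deletions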

Technical Standards

Classification Accuracy & Cross-Validation Summary

Grade Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Classification Accuracy Fall Partially convincing evidence Partially convincing evidence Partially convincing evidence Partially convincing evidence Unconvincing evidence Partially convincing evidence Unconvincing evidence
Classification Accuracy Winter Convincing evidence Partially convincing evidence Convincing evidence Partially convincing evidence Convincing evidence Partially convincing evidence Unconvincing evidence
Classification Accuracy Spring Convincing evidence Convincing evidence Partially convincing evidence Partially convincing evidence Partially convincing evidence Convincing evidence Data unavailable

Iowa Assessment Total Reading Score

Classification Accuracy

Select time of year
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
The criterion outcome measure was the Iowa Assessment Total Reading Score. The Iowa Assessment is a published, group-administered, multiple-choice, norm-referenced measure of reading achievement. It is completely independent of DIBELS 8th Edition measures.
Do the classification accuracy analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Screening measures were administered in the fall, winter, and spring of the 2018-19 school year. The Iowa Assessment was administered in the spring of 2019. All else being equal, concurrent administrations are preferable because they reduce the likelihood of inflated false positives due to intervention delivery on the part of schools. Thus, all spring benchmarks predicted end of year performance on the concurrent spring 2019 administration. Fall and winter benchmarks predicted end of year performance on the spring 2019 Iowa administration because no concurrent administration was available.
Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
We used a two-stage process for determining cut points for the DIBELS 8th Edition Maze score. First, we plotted a Receiver Operating Characteristic (ROC) curve for the selected end-of-year criterion measure at each time point and grade and determined the area under the curve (AUC). Second, we conducted a diagnostic analysis of each measure at each time point (i.e., season). For each analysis, we focused on two statistics: sensitivity and specificity. We chose to focus on sensitivity and specificity (rather than PPV and NPV) because they remain stable indicators regardless of the prevalence of reading difficulties in the population (Pepe, 2003). We attempted to balance sensitivity and specificity in our analyses because of their complementary roles in a prevention model in education. Specifically, we want to be confident that as many students as possible receive the level of instructional support they require as early as possible, without overburdening teachers by asking them to deliver intervention to students who do not need additional instruction. Thus, wherever possible, the recommended cut points for DIBELS 8th Edition were determined using an optimal decision threshold that maximized sensitivity among scores with a specificity at or above .80. That is, at each time point, we selected the score with the highest sensitivity among scores with a specificity at or above .80, unless the maximum sensitivity value exceeded .90, in which case the cut point selected was the score that minimized the difference between sensitivity and specificity among scores with specificity at or above .80. For measures and periods with no cut scores that met the minimum threshold for specificity, the cut point represents the score that best balances the goals of providing additional instruction where needed while keeping demands on teachers reasonable.
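As a minimal sketch of this selection rule (the candidate list structure, holding one (score, sensitivity, specificity) tuple per possible threshold, is an assumption for illustration):

def select_cut_point(candidates):
    # candidates: list of (score, sensitivity, specificity) tuples,
    # one per possible threshold along the ROC curve.
    eligible = [c for c in candidates if c[2] >= 0.80]
    if not eligible:
        # No threshold meets the specificity floor; fall back to the
        # balanced judgment described in the text.
        return None
    max_sens = max(c[1] for c in eligible)
    if max_sens > 0.90:
        # Choose the score minimizing |sensitivity - specificity|.
        return min(eligible, key=lambda c: abs(c[1] - c[2]))[0]
    # Otherwise choose the score with the highest sensitivity.
    return max(eligible, key=lambda c: c[1])[0]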
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
No
If yes, please describe the intervention, what children received the intervention, and how they were chosen.

Cross-Validation

Has a cross-validation study been conducted?
No
If yes,
Select time of year.
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
Do the cross-validation analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
If yes, please describe the intervention, what children received the intervention, and how they were chosen.

Classification Accuracy - Fall

Evidence Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Criterion measure Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score
Cut Points - Percentile rank on criterion measure 20 20 20 20 20 20 20
Cut Points - Performance score on criterion measure 154 166 176 185 194 202 211
Cut Points - Corresponding performance score (numeric) on screener measure 2 4.5 10.5 10 12 15 16
Classification Data - True Positive (a) 36 15 25 19 6 7 16
Classification Data - False Positive (b) 22 26 21 30 12 8 3
Classification Data - False Negative (c) 14 6 7 6 1 3 6
Classification Data - True Negative (d) 117 119 129 87 82 75 21
Area Under the Curve (AUC) 0.88 0.87 0.92 0.80 0.94 0.93 0.84
AUC Estimate’s 95% Confidence Interval: Lower Bound 0.83 0.80 0.88 0.70 0.87 0.87 0.71
AUC Estimate’s 95% Confidence Interval: Upper Bound 0.93 0.93 0.96 0.91 1.00 0.99 0.96
Statistics Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Base Rate 0.26 0.13 0.18 0.18 0.07 0.11 0.48
Overall Classification Rate 0.81 0.81 0.85 0.75 0.87 0.88 0.80
Sensitivity 0.72 0.71 0.78 0.76 0.86 0.70 0.73
Specificity 0.84 0.82 0.86 0.74 0.87 0.90 0.88
False Positive Rate 0.16 0.18 0.14 0.26 0.13 0.10 0.13
False Negative Rate 0.28 0.29 0.22 0.24 0.14 0.30 0.27
Positive Predictive Power 0.62 0.37 0.54 0.39 0.33 0.47 0.84
Negative Predictive Power 0.89 0.95 0.95 0.94 0.99 0.96 0.78
Sample Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Date Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion Fall 2018 screening; Spring 2019 criterion
Sample Size 189 166 182 142 101 93 46
Geographic Representation
  Grade 2: East North Central (OH); Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA); West North Central (MO)
  Grade 3: East North Central (OH); Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA); West North Central (MO)
  Grade 4: East North Central (OH); Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA); West North Central (MO)
  Grade 5: Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA); West North Central (MO)
  Grade 6: Mountain (AZ); Pacific (WA); South Atlantic (FL, GA); West North Central (MO)
  Grade 7: Mountain (AZ); Pacific (WA); South Atlantic (FL); West North Central (MO)
  Grade 8: Mountain (AZ); Pacific (WA); West North Central (MO)
Male 33.3% 39.2% 31.9% 43.0% 45.5% 46.2% 47.8%
Female 43.9% 36.1% 47.8% 41.5% 54.5% 51.6% 52.2%
Other              
Gender Unknown 22.8% 24.7% 20.3% 15.5%   2.2%  
White, Non-Hispanic 28.6% 39.8% 35.7% 46.5% 69.3% 76.3% 76.1%
Black, Non-Hispanic 37.0% 24.7% 26.9% 28.2% 9.9% 7.5% 2.2%
Hispanic 2.6% 2.4% 6.0% 2.1% 6.9% 2.2%  
Asian/Pacific Islander 2.6% 3.0% 4.4% 4.9% 7.9% 6.5%  
American Indian/Alaska Native 2.1% 2.4% 2.2% 1.4%   1.1% 19.6%
Other 4.2% 3.0% 4.4% 0.7% 4.0% 4.3% 2.2%
Race / Ethnicity Unknown 22.8% 24.7% 20.3% 15.5%   2.2%  
Low SES              
IEP or diagnosed disability              
English Language Learner              
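The summary statistics in the tables above follow directly from the four classification counts. As a check, this minimal sketch (names are illustrative) reproduces the reported Grade 2 fall values from a = 36, b = 22, c = 14, d = 117:

def classification_stats(tp, fp, fn, tn):
    # Derive the reported summary statistics from the 2x2 counts.
    n = tp + fp + fn + tn
    return {
        "base rate": (tp + fn) / n,                    # 50/189 ~= 0.26
        "overall classification rate": (tp + tn) / n,  # 153/189 ~= 0.81
        "sensitivity": tp / (tp + fn),                 # 36/50 = 0.72
        "specificity": tn / (tn + fp),                 # 117/139 ~= 0.84
        "false positive rate": fp / (fp + tn),         # 22/139 ~= 0.16
        "false negative rate": fn / (tp + fn),         # 14/50 = 0.28
        "positive predictive power": tp / (tp + fp),   # 36/58 ~= 0.62
        "negative predictive power": tn / (tn + fn),   # 117/131 ~= 0.89
    }

print(classification_stats(36, 22, 14, 117))  # matches the Grade 2 column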

Classification Accuracy - Winter

Evidence Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Criterion measure Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score
Cut Points - Percentile rank on criterion measure 20 20 20 20 20 20 20
Cut Points - Performance score on criterion measure 154 166 176 185 194 202 211
Cut Points - Corresponding performance score (numeric) on screener measure 6.0 9.0 12.5 14.0 14.5 17.5 19.0
Classification Data - True Positive (a) 24 16 31 15 5 8 11
Classification Data - False Positive (b) 20 32 20 15 3 6 3
Classification Data - False Negative (c) 6 5 4 6 1 0 9
Classification Data - True Negative (d) 103 115 129 97 24 14 16
Area Under the Curve (AUC) 0.89 0.87 0.92 0.88 0.96 0.94 0.82
AUC Estimate’s 95% Confidence Interval: Lower Bound 0.83 0.80 0.88 0.80 0.90 0.85 0.68
AUC Estimate’s 95% Confidence Interval: Upper Bound 0.94 0.94 0.97 0.96 1.00 1.00 0.96
Statistics Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Base Rate 0.20 0.13 0.19 0.16 0.18 0.29 0.51
Overall Classification Rate 0.83 0.78 0.87 0.84 0.88 0.79 0.69
Sensitivity 0.80 0.76 0.89 0.71 0.83 1.00 0.55
Specificity 0.84 0.78 0.87 0.87 0.89 0.70 0.84
False Positive Rate 0.16 0.22 0.13 0.13 0.11 0.30 0.16
False Negative Rate 0.20 0.24 0.11 0.29 0.17 0.00 0.45
Positive Predictive Power 0.55 0.33 0.61 0.50 0.63 0.57 0.79
Negative Predictive Power 0.94 0.96 0.97 0.94 0.96 1.00 0.64
Sample Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Date Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion Winter 2018/2019 screening; Spring 2019 criterion
Sample Size 153 168 184 133 33 28 39
Geographic Representation
  Grade 2: East North Central (OH); Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA); West North Central (MO)
  Grade 3: East North Central (OH); Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA); West North Central (MO)
  Grade 4: East North Central (OH); Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA); West North Central (MO)
  Grade 5: Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA); West North Central (MO)
  Grade 6: Pacific (WA); West North Central (MO)
  Grade 7: Pacific (WA); West North Central (MO)
  Grade 8: Pacific (WA); West North Central (MO)
Male 32.0% 38.1% 33.2% 42.1% 60.6% 50.0% 43.6%
Female 36.6% 38.1% 46.7% 40.6% 39.4% 50.0% 53.8%
Other              
Gender Unknown 31.4% 23.8% 20.1% 17.3%     2.6%
White, Non-Hispanic 35.3% 39.3% 36.4% 49.6% 75.8% 82.1% 74.4%
Black, Non-Hispanic 19.0% 25.0% 27.2% 22.6%      
Hispanic 3.3% 3.6% 4.9% 2.3% 15.2%    
Asian/Pacific Islander 3.3% 3.0% 4.3% 5.3%      
American Indian/Alaska Native 2.6% 2.4% 2.7% 1.5% 3.0% 3.6% 23.1%
Other 5.2% 3.0% 4.3% 0.8% 9.1% 14.3%  
Race / Ethnicity Unknown 31.4% 23.8% 20.1% 17.3% 9.1%   2.6%
Low SES              
IEP or diagnosed disability              
English Language Learner              

Classification Accuracy - Spring

Evidence Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7
Criterion measure Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score Iowa Assessment Total Reading Score
Cut Points - Percentile rank on criterion measure 20 20 20 20 20 20
Cut Points - Performance score on criterion measure 154 166 176 185 194 202
Cut Points - Corresponding performance score (numeric) on screener measure 6.5 11.5 13.5 17.5 20.0 24.0
Classification Data - True Positive (a) 38 19 31 9 5 7
Classification Data - False Positive (b) 24 26 36 10 8 4
Classification Data - False Negative (c) 9 4 2 5 1 1
Classification Data - True Negative (d) 119 122 112 85 19 16
Area Under the Curve (AUC) 0.89 0.89 0.90 0.88 0.89 0.93
AUC Estimate’s 95% Confidence Interval: Lower Bound 0.85 0.83 0.84 0.77 0.77 0.82
AUC Estimate’s 95% Confidence Interval: Upper Bound 0.94 0.94 0.96 0.99 1.00 1.00
Statistics Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7
Base Rate 0.25 0.13 0.18 0.13 0.18 0.29
Overall Classification Rate 0.83 0.82 0.79 0.86 0.73 0.82
Sensitivity 0.81 0.83 0.94 0.64 0.83 0.88
Specificity 0.83 0.82 0.76 0.89 0.70 0.80
False Positive Rate 0.17 0.18 0.24 0.11 0.30 0.20
False Negative Rate 0.19 0.17 0.06 0.36 0.17 0.13
Positive Predictive Power 0.61 0.42 0.46 0.47 0.38 0.64
Negative Predictive Power 0.93 0.97 0.98 0.94 0.95 0.94
Sample Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7
Date Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion Spring 2019 screening; Spring 2019 criterion
Sample Size 190 171 181 109 33 28
Geographic Representation
  Grade 2: East North Central (OH); Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA)
  Grade 3: East North Central (OH); Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA)
  Grade 4: East North Central (OH); Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA)
  Grade 5: Mountain (AZ); Pacific (OR, WA); South Atlantic (FL, GA)
  Grade 6: Pacific (WA); West North Central (MO)
  Grade 7: Pacific (WA); West North Central (MO)
Male 33.7% 36.3% 33.7% 37.6% 39.4% 50.0%
Female 41.6% 37.4% 45.3% 42.2% 60.6% 50.0%
Other            
Gender Unknown 24.7% 26.3% 21.0% 20.2%    
White, Non-Hispanic 27.9% 39.2% 36.5% 61.5% 75.8% 82.1%
Black, Non-Hispanic 35.3% 22.8% 25.4% 6.4%    
Hispanic 3.7% 3.5% 5.5% 1.8% 15.2%  
Asian/Pacific Islander 2.6% 2.9% 4.4% 6.4%    
American Indian/Alaska Native 2.1% 2.3% 2.8% 1.8%   3.6%
Other 3.7% 2.9% 4.4% 0.9% 9.1% 14.3%
Race / Ethnicity Unknown 24.7% 26.3% 21.0% 21.1%    
Low SES            
IEP or diagnosed disability            
English Language Learner            

Reliability

Grade Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Rating Partially convincing evidence Convincing evidence Convincing evidence Convincing evidence Convincing evidence Partially convincing evidence Convincing evidence
*Offer a justification for each type of reliability reported, given the type and purpose of the tool.
To assess the reliability of DIBELS 8th Edition, we evaluated multiple forms of reliability, including test-retest reliability, concurrent alternate-form reliability, and delayed alternate-form reliability. We include delayed alternate-form reliability as a supplementary source of reliability evidence by reporting correlations between two or more alternate forms of the same test administered at different benchmark periods.

Test-retest reliability: Test-retest reliability was evaluated by administering the same test (i.e., set of items) to the same individuals two times and correlating scores from the two test administrations. We included test-retest reliability in cases where the only source of alternate-form reliability was delayed alternate form. In those instances, test-retest reliability provides some measure of reliability without the confound of the (expected) student growth between administrations.

Alternate-form reliability: Alternate-form reliability indicates the extent to which test results generalize to different item samples. To assess alternate-form reliability, students were administered multiple forms of each subtest, and scores from these forms were correlated. The use of alternate-form reliability is justified because it uses different but equivalent forms, thereby preventing the practice effects inherent in test-retest reliability, where the same form is administered twice. In addition, it is important to establish that different forms are equivalent given the use of different forms for progress monitoring across the year. The use of alternate-form reliability is also justified by the use of alternate forms when a benchmark administration is spoiled (e.g., an interrupted administration).
*Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
Twenty-one schools administered DIBELS 8th Edition to 5,259 students in grades K-8. The schools were located in the Pacific, East North Central, West North Central, Mountain, and South Atlantic census divisions. Schools represent towns, large cities, suburbs, and rural areas. The sample of students was 50.6% male and 48.9% female; 1.5% American Indian or Alaska Native, 2.5% Asian, 17.2% Black, 20.9% Hispanic, 4.1% two or more races, 0.4% Native Hawaiian/Pacific Islander, and 53.0% White. 13.9% of students had disabilities, 59.6% were eligible for free or reduced-price lunch, and 7.3% were English learners.
*Describe the analysis procedures for each reported type of reliability.
Test-retest reliability: Students were re-administered the same version of the test (i.e., the same item pool) at multiple benchmark assessments. Test-retest reliability was estimated as the correlation coefficient between the test and the retest. Alternate-form reliability: Students were administered multiple forms of each subtest, and scores from these forms were correlated. Concurrent alternate-form reliability of a single (i.e., benchmark) form was estimated by the correlation between the score on that form and the score on an alternate (i.e., progress monitoring) form. Delayed alternate-form reliability was estimated by correlating scores measured at different benchmark administrations across the year (beginning, middle, and end of year).
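Each of these estimates reduces to a Pearson correlation between two columns of scores. A minimal sketch, using invented paired scores purely for illustration:

import numpy as np

# Invented scores for the same students on a benchmark form and an
# alternate (progress monitoring) form; for illustration only.
benchmark_form = np.array([12, 18, 7, 22, 15, 9, 30, 11])
alternate_form = np.array([14, 17, 8, 20, 16, 10, 28, 12])

# Concurrent alternate-form reliability estimate: Pearson r between forms.
r = np.corrcoef(benchmark_form, alternate_form)[0, 1]
print(round(r, 2))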

*In the table(s) below, report the results of the reliability analyses described above (e.g., internal consistency or inter-rater reliability coefficients).

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
No
Provide citations for additional published studies.
Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
No

If yes, fill in data for each subgroup with disaggregated reliability data.

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
No
Provide citations for additional published studies.

Validity

Grade Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Rating Partially convincing evidence Unconvincing evidence Convincing evidence Partially convincing evidence Partially convincing evidence Convincing evidence Unconvincing evidence
*Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
The DIBELS 8th Edition Maze subtest in grades 2-8 was validated against the Iowa Assessment of Reading. The Iowa Assessment is a published, group-administered, multiple-choice, norm-referenced measure of reading achievement. The Total Reading measure assesses broad reading achievement.
*Describe the sample(s), including size and characteristics, for each validity analysis conducted.
Sample 1. Twenty-one schools administered DIBELS 8th Edition to 5,259 students in grades K-8. The schools were located in the Pacific, East North Central, West North Central, Mountain, and South Atlantic census divisions. Schools represent towns, large cities, suburbs, and rural areas. The sample of students was 50.6% male and 48.9% female; 1.5% American Indian or Alaska Native, 2.5% Asian, 17.2% Black, 20.9% Hispanic, 4.1% two or more races, 0.4% Native Hawaiian/Pacific Islander, and 53.0% White. 13.9% of students had disabilities, 59.6% were eligible for free or reduced-price lunch, and 7.3% were English learners.
*Describe the analysis procedures for each reported type of validity.
Concurrent validity: Concurrent validity was evaluated by examining the strength of the correlation between the screening measure and criterion measures administered at approximately the same time of year. Predictive validity: Predictive validity was evaluated by examining the strength of the correlation between the screening measure and students' future performance on the criterion measure.
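Both analyses reduce to Pearson correlations; the only difference is which administrations are paired. A minimal sketch with invented scores, purely for illustration:

import numpy as np

# Invented scores for illustration only.
fall_maze = np.array([5, 12, 3, 18, 9, 14])             # fall screener
spring_maze = np.array([9, 16, 6, 24, 13, 20])          # spring screener
spring_iowa = np.array([150, 171, 148, 186, 160, 178])  # spring criterion

# Predictive validity: fall screener vs. end-of-year criterion.
predictive_r = np.corrcoef(fall_maze, spring_iowa)[0, 1]
# Concurrent validity: spring screener vs. criterion at the same time point.
concurrent_r = np.corrcoef(spring_maze, spring_iowa)[0, 1]
print(round(predictive_r, 2), round(concurrent_r, 2))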

*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
No
Provide citations for additional published studies.
Describe the degree to which the provided data support the validity of the tool.
Overall, the validity of Maze for DIBELS 8th Edition is supported by a range of concurrent and predictive validity correlations. The majority of both concurrent and predictive correlation coefficients are .6 or above, with the majority of lower bounds of the confidence intervals at .50 or above. Somewhat lower correlations are to be expected because Maze is known to tap relatively lower-level comprehension (i.e., local coherence more so than global coherence), while a measure like the Iowa taps literal, inferential, and higher-level comprehension; the latter two question types require the reader to achieve global coherence rather than merely local coherence.
Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
No

If yes, fill in data for each subgroup with disaggregated validity data.

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
No
Provide citations for additional published studies.

Bias Analysis

Grade Grade 2
Grade 3
Grade 4
Grade 5
Grade 6
Grade 7
Grade 8
Rating No No No No No No No
Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
No
If yes,
a. Describe the method used to determine the presence or absence of bias:
b. Describe the subgroups for which bias analyses were conducted:
c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.

Data Collection Practices

Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.