Burst: Reading
Study: Dubal et al. (2012)
Summary
Burst: Reading (Burst) delivers highly differentiated reading instruction based on formative assessment data. Students attending schools that implement Burst are first screened with a multi-battery assessment, administered on mobile devices, that a) provides cross-skill information about a student’s reading ability and b) identifies students who are below expectations for specific skills at appropriate grade levels. The assessment provides information about skills that contribute to the successful development of reading comprehension and includes all of the measures from Dynamic Indicators of Basic Early Literacy Skills (DIBELS Next) that assess letter name knowledge, phonological awareness, decoding, fluency, and comprehension. The multi-battery assessment includes three additional measures: comprehension, vocabulary, and late decoding inventories. Burst uses data-analysis algorithms to generate lesson plans and engaging instructional materials for small groups. Incorporating instructional prioritization rules based on grade and time of year, the algorithm prescribes 30 minutes of small-group instruction in up to two skills (Gersten et al., 2008) to students identified as needing intervention. Teachers, coaches, specialists, and qualified volunteers deliver 10-day “Bursts” of instruction to small groups of students based on the formative assessment results for each student. Instruction is tailored to the skills identified as most critical based on students’ grade and time of year.
- Target Grades:
- K, 1, 2, 3, 4, 5, 6
- Target Populations:
-
- Any student at risk for academic failure
- Area(s) of Focus:
-
- Phonological awareness
- Phonics/word study
- Comprehension
- Fluency
- Vocabulary
- Where to Obtain:
- Amplify Education, Inc.
- 55 Washington St., Suite 900, Brooklyn, NY 11201
- (800) 823-1969
- www.amplify.com
- Initial Cost:
- $60.00 per student
- Replacement Cost:
- $60.00 per student per year
-
The annual student license fee indicated above provides access to our digital intervention program, including customized curriculum modules and reporting. Teachers will be able to use 10-day lesson sequences that are customized for their small groups of intervention students based on the formative assessment results of each student. Additional costs include per-student licenses for the formative assessment (generally $14.90 per student), teacher kits ($215 for K-3; $195 for 4-6), and professional development and implementation support (varies based on the nature of the implementation).
- Staff Qualified to Administer Include:
-
- Special Education Teacher
- General Education Teacher
- Reading Specialist
- Math Specialist
- EL Specialist
- Interventionist
- Student Support Services Personnel (e.g., counselor, social worker, school psychologist, etc.)
- Paraprofessional
- Other:
- Training Requirements:
- Training time varies; may be 8+ hours
-
New Burst instructors typically attend 1 to 2 full days of training covering:
Background/Context
1. How the Burst algorithm works
2. What areas of literacy content/skills Burst focuses on and why
Practice
1. Extensive practice administering Burst assessments during the training
Support
1. Exploration of Burst Base, the online repository for Burst materials and support
Training materials have been designed and revised to ensure that educators are prepared to faithfully implement the program. We regularly solicit feedback on our training sessions during field use and adjust our sessions and materials to continuously improve outcomes.
- Access to Technical Support:
- Various ongoing professional development sessions and coaching are available to practitioners, as well as real-time technical support at no additional cost. Amplify offers initial product and assessment training as well as follow-up Best Practice sessions, the Educational Support Team (EST), and ongoing mclasshome.com support.
- Recommended Administration Formats Include:
-
- Small group of students
- Minimum Number of Minutes Per Session:
- 30
- Minimum Number of Sessions Per Week:
- 5
- Minimum Number of Weeks:
- Detailed Implementation Manual or Instructions Available:
- Yes
- Is Technology Required?
-
- Computer or tablet
- Internet connection
Program Information
Descriptive Information
Please provide a description of program, including intended use:
Burst: Reading (Burst) delivers highly differentiated reading instruction based on formative assessment data. Students attending schools that implement Burst are first screened with a multi-battery assessment, administered on mobile devices, that a) provides cross-skill information about a student’s reading ability and b) identifies students who are below expectations for specific skills at appropriate grade levels. The assessment provides information about skills that contribute to the successful development of reading comprehension and includes all of the measures from Dynamic Indicators of Basic Early Literacy Skills (DIBELS Next) that assess letter name knowledge, phonological awareness, decoding, fluency, and comprehension. The multi-battery assessment includes three additional measures: comprehension, vocabulary, and late decoding inventories. Burst uses data-analysis algorithms to generate lesson plans and engaging instructional materials for small groups. Incorporating instructional prioritization rules based on grade and time of year, the algorithm prescribes 30 minutes of small-group instruction in up to two skills (Gersten et al., 2008) to students identified as needing intervention. Teachers, coaches, specialists, and qualified volunteers deliver 10-day “Bursts” of instruction to small groups of students based on the formative assessment results for each student. Instruction is tailored to the skills identified as most critical based on students’ grade and time of year.
The program is intended for use in the following age(s) and/or grade(s).
Age 3-5
Kindergarten
First grade
Second grade
Third grade
Fourth grade
Fifth grade
Sixth grade
Seventh grade
Eighth grade
Ninth grade
Tenth grade
Eleventh grade
Twelfth grade
The program is intended for use with the following groups.
Students with learning disabilities
Students with intellectual disabilities
Students with emotional or behavioral disabilities
English language learners
Any student at risk for academic failure
Any student at risk for emotional and/or behavioral difficulties
Other
If other, please describe:
ACADEMIC INTERVENTION: Please indicate the academic area of focus.
Early Literacy
Alphabet knowledge
Phonological awareness
Early writing
Early decoding abilities
Other
If other, please describe:
Language
Grammar
Syntax
Listening comprehension
Other
If other, please describe:
Reading
Phonics/word study
Comprehension
Fluency
Vocabulary
Spelling
Other
If other, please describe:
Mathematics
Concepts and/or word problems
Whole number arithmetic
Comprehensive: Includes computation/procedures, problem solving, and mathematical concepts
Algebra
Fractions, decimals (rational number)
Geometry and measurement
Other
If other, please describe:
Writing
Spelling
Sentence construction
Planning and revising
Other
If other, please describe:
BEHAVIORAL INTERVENTION: Please indicate the behavior area of focus.
Externalizing Behavior
Verbal Threats
Property Destruction
Noncompliance
High Levels of Disengagement
Disruptive Behavior
Social Behavior (e.g., Peer interactions, Adult interactions)
Other
If other, please describe:
Internalizing Behavior
Anxiety
Social Difficulties (e.g., withdrawal)
School Phobia
Other
If other, please describe:
Acquisition and cost information
Where to obtain:
- Address
- 55 Washington St., Suite 900, Brooklyn, NY 11201
- Phone Number
- (800) 823-1969
- Website
- www.amplify.com
Initial cost for implementing program:
- Cost
- $60.00
- Unit of cost
- student
Replacement cost per unit for subsequent use:
- Cost
- $60.00
- Unit of cost
- student
- Duration of license
- year
Additional cost information:
Describe basic pricing plan and structure of the program. Also, provide information on what is included in the published program, as well as what is not included but required for implementation (e.g., computer and/or internet access)
The annual student license fee indicated above provides access to our digital intervention program, including customized curriculum modules and reporting. Teachers will be able to use 10-day lesson sequences that are customized for their small groups of intervention students based on the formative assessment results of each student. Additional costs include per-student licenses for the formative assessment (generally $14.90 per student), teacher kits ($215 for K-3; $195 for 4-6), and professional development and implementation support (varies based on the nature of the implementation).
Program Specifications
Setting for which the program is designed.
Small group of students
BI ONLY: A classroom of students
If group-delivered, how many students compose a small group?
4-6
Program administration time
- Minimum number of minutes per session
- 30
- Minimum number of sessions per week
- 5
- Minimum number of weeks
- If the intervention program is intended to occur less frequently than 60 minutes a week for approximately 8 weeks, justify the level of intensity:
Does the program include highly specified teacher manuals or step-by-step instructions for implementation? - Yes
BEHAVIORAL INTERVENTION: Is the program affiliated with a broad school- or class-wide management program?-
If yes, please identify and describe the broader school- or class-wide management program: -
Does the program require technology? - Yes
-
If yes, what technology is required to implement your program? -
Computer or tablet
Internet connection
Other technology (please specify)
If your program requires additional technology not listed above, please describe the required technology and the extent to which it is combined with teacher small-group instruction/intervention:
Burst:Reading requires administration of DIBELS: Next and Burst assessments (vocabulary, decoding, and comprehension) and these assessments are administered using either a handheld device such as an iPad or Chrome tablet or a computer. Results are synced to the mCLASS Home database where all results, group assignments, and reports are generated and stored.
Training
- How many people are needed to implement the program?
Is training for the instructor or interventionist required? - Yes
- If yes, is the necessary training free or at-cost?
- At-cost
Describe the time required for instructor or interventionist training: - Training time varies; may be 8+ hours
Describe the format and content of the instructor or interventionist training: - New Burst instructors typically attend 1 to 2 full days of training covering:
Background/Context
1. How the Burst algorithm works
2. What areas of literacy content/skills Burst focuses on and why
Practice
1. Extensive practice administering Burst assessments during the training
Support
1. Exploration of Burst Base, the online repository for Burst materials and support
What types of professionals are qualified to administer your program?
General Education Teacher
Reading Specialist
Math Specialist
EL Specialist
Interventionist
Student Support Services Personnel (e.g., counselor, social worker, school psychologist, etc.)
Applied Behavior Analysis (ABA) Therapist or Board Certified Behavior Analyst (BCBA)
Paraprofessional
Other
If other, please describe:
- Does the program assume that the instructor or interventionist has expertise in a given area?
-
No
If yes, please describe:
Are training manuals and materials available? - Yes
-
Describe how the training manuals or materials were field-tested with the target population of instructors or interventionist and students: - Training materials have been designed and revised to ensure that educators are prepared to faithfully implement the program. We regularly solicit feedback on our training sessions during field use and adjust our sessions and materials to continuously improve outcomes.
Do you provide fidelity of implementation guidance such as a checklist for implementation in your manual? - Yes
-
Can practitioners obtain ongoing professional and technical support? -
Yes
If yes, please specify where/how practitioners can obtain support:
Various ongoing professional development sessions and coaching are available to practitioners, as well as real-time technical support at no additional cost. Amplify offers initial product and assessment training as well as follow-up Best Practice, the Educational Support Team (EST) and on-going mclasshome.com support.
Summary of Evidence Base
- Please identify, to the best of your knowledge, all the research studies that have been conducted to date supporting the efficacy of your program, including studies currently or previously submitted to NCII for review. Please provide citations only (in APA format); do not include any descriptive information on these studies. NCII staff will also conduct a search to confirm that the list you provide is accurate.
-
Dubal, M., Harnly, A., Pavlov, M., Richards, K., Yambo, D., & Gushta, M. (2012). Effects of Burst®:Reading Early Literacy Intervention on Student Performance: 2012 Report. Retrieved from www.amplify.com/redirect/pdf/general/BurstEfficacyStudy.pdf.
Study Information
Study Citations
Dubal, M., Harnly, A., Pavlov, M., Richards, K., Yambo, D., & Gushta, M. (2012). Effects of Burst®:Reading Early Literacy Intervention on Student Performance: 2012 Report. Retrieved from www.amplify.com/redirect/pdf/general/BurstEfficacyStudy.pdf.
Participants
- Describe how students were selected to participate in the study:
- Participants in this study were students from the 1,023 schools that had purchased and were using Burst: Reading (formerly Burst: Early Literacy Intervention, or Burst: ELI) in the 2010-2011 school year.
- Describe how students were identified as being at risk for academic failure (AI) or as having emotional or behavioral difficulties (BI):
- Within Burst: Reading, students are identified as being at risk of academic failure according to their performance on DIBELS at the beginning and middle of the school year (i.e., at the beginning of each semester). The Burst: ELI algorithm determines individual students’ intervention priority and creates intervention groups in the following manner:
1. Student assessment results are processed, yielding: a) a gross skill rating based on DIBELS Benchmark Status or risk category (i.e., Red, Yellow, and Green); the Burst Reading Assessment supplemental measures have comparable performance levels; b) a fine-grained skill rating that differentiates intervention priority within each risk category (e.g., students who are Red on DORF are further differentiated and prioritized for intervention based on DORF subscores); and c) up to two Zone of Proximal Development (ZPD) skills. A student’s highest-priority ZPD skill is the earliest skill in the set instructional sequence on which the student tests poorly, evidencing a need for intervention instruction.
2. Using the results from step 1, students are prioritized for intervention based on their performance relative to the rest of the students in their class or grade.
3. The teacher or other educator then selects the number of intervention groups that the Burst algorithm should create. The algorithm generates a number of possible groupings and comes to a final decision based on a social welfare function, which selects groups of students for whom the utility of the instruction to be delivered is most similar among all students in the group, yielding homogeneous groups.
The table below shows the pre-test means for the treatment and control groups on each measure by semester, along with the score ranges associated with the At Risk and Some Risk performance levels for each DIBELS measure and the score at the 25th percentile for each measure, based on a national norming study conducted by Cummings et al. (2011). While a number of the pre-test means are above the At Risk range, all score means are clearly below the 25th percentile associated with national norms.

Grade/Semester | Measure | Treatment Mean | Control Mean | At Risk | Some Risk | 25th Percentile |
---|---|---|---|---|---|---|
K Spring | PSF | 8.33 | 8.38 | 0-6 | 7-17 | 12 |
1 Fall | NWF | 9.94 | 10.00 | 0-12 | 13-23 | 20 |
1 Spring | NWF | 31.57 | 32.30 | 0-29 | 30-49 | 41 |
2 Fall | ORF | 19.33 | 19.34 | 0-25 | 26-43 | 31 |
2 Spring | ORF | 33.95 | 34.57 | 0-51 | 52-67 | 60 |
3 Fall | ORF | 39.38 | 41.07 | 0-52 | 53-76 | 59 |
3 Spring | ORF | 52.40 | 53.85 | 0-66 | 67-91 | 73 |
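The prioritization-and-grouping logic described above can be sketched in code. This is a hypothetical illustration only, not Amplify's actual algorithm: the skill sequence, the risk ordering, and the chunking stand-in for the social welfare function are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical ordered instructional sequence (earliest skill first).
SKILL_SEQUENCE = ["letter_names", "phonological_awareness", "decoding",
                  "fluency", "comprehension"]

@dataclass
class Student:
    name: str
    risk: str          # gross rating: "red", "yellow", "green" (DIBELS benchmark status)
    zpd_skill: str     # earliest skill in the sequence the student tests poorly on
    fine_score: float  # fine-grained rating; lower = higher intervention priority

RISK_ORDER = {"red": 0, "yellow": 1, "green": 2}

def prioritize(students):
    """Order students for intervention: gross risk category first,
    then the fine-grained rating within each category."""
    return sorted(students, key=lambda s: (RISK_ORDER[s.risk], s.fine_score))

def make_groups(students, n_groups):
    """Crude stand-in for the social-welfare grouping step: sort the
    prioritized students by their ZPD skill's position in the instructional
    sequence, then chunk, so each group needs similar instruction."""
    ranked = sorted(prioritize(students),
                    key=lambda s: SKILL_SEQUENCE.index(s.zpd_skill))
    size = -(-len(ranked) // n_groups)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]
```

Chunking a list sorted by ZPD skill is a simple way to obtain skill-homogeneous groups; the actual system additionally weighs fine-grained scores and group-level instructional utility.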
-
ACADEMIC INTERVENTION: What percentage of participants were at risk, as measured by one or more of the following criteria:
- below the 30th percentile on local or national norm, or
- identified disability related to the focus of the intervention?
- %
-
BEHAVIORAL INTERVENTION: What percentage of participants were at risk, as measured by one or more of the following criteria:
- emotional disability label,
- placed in an alternative school/classroom,
- non-responsive to Tiers 1 and 2, or
- designation of severe problem behaviors on a validated scale or through observation?
- %
- Specify which condition is the submitted intervention:
- Burst:Reading (formerly known as Burst:Early Literacy Intervention)
- Specify which condition is the control condition:
- Business as usual (BAU).
- If you have a third, competing condition, in addition to your control and intervention condition, identify what the competing condition is (data from this competing condition will not be used):
Using the tables that follow, provide data demonstrating comparability of the program group and control group in terms of demographics.
Grade Level
Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
---|---|---|---|
Age less than 1 | |||
Age 1 | |||
Age 2 | |||
Age 3 | |||
Age 4 | |||
Age 5 | |||
Kindergarten | 10.4% | 10.4% | 0.00 |
Grade 1 | 33.3% | 33.3% | 0.00 |
Grade 2 | 40.2% | 40.2% | 0.00 |
Grade 3 | 16.1% | 16.1% | 0.00 |
Grade 4 | |||
Grade 5 | |||
Grade 6 | |||
Grade 7 | |||
Grade 8 | |||
Grade 9 | |||
Grade 10 | |||
Grade 11 | |||
Grade 12 |
Race–Ethnicity
Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
---|---|---|---|
African American | 60.3% | 53.2% | 0.17 |
American Indian | 0.2% | 1.7% | 1.83 |
Asian/Pacific Islander | 1.0% | 0.9% | 0.00 |
Hispanic | 11.6% | 14.6% | 0.16 |
White | 22.2% | 26.4% | 0.13 |
Other | 4.6% | 3.2% | 0.32 |
Socioeconomic Status
Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
---|---|---|---|
Subsidized Lunch | 94.4% | 91.3% | 0.27 |
No Subsidized Lunch | 5.6% | 8.7% | 0.27 |
Disability Status
Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
---|---|---|---|
Speech-Language Impairments | |||
Learning Disabilities | |||
Behavior Disorders | |||
Emotional Disturbance | |||
Intellectual Disabilities | |||
Other | |||
Not Identified With a Disability |
ELL Status
Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
---|---|---|---|
English Language Learner | 14.0% | 12.7% | 0.05 |
Not English Language Learner | 86.0% | 87.3% | 0.05 |
Gender
Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
---|---|---|---|
Female | 43.4% | 41.3% | 0.05 |
Male | 56.6% | 58.7% | 0.05 |
Mean Effect Size
For any substantively (e.g., effect size ≥ 0.25 for pretest or demographic differences) or statistically significant (e.g., p < 0.05) pretest differences between groups in the descriptions below, please describe the extent to which these differences are related to the impact of the treatment. For example, if analyses were conducted to determine that outcomes from this study are due to the intervention and not demographic characteristics, please describe the results of those analyses here.
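The Cox index reported in the demographic tables above is a standard effect-size metric for differences in binary variables: the difference in log-odds between the two groups divided by 1.65. A minimal sketch:

```python
import math

def cox_index(p_program, p_control):
    """Cox index for a binary difference: (logit difference) / 1.65.
    p_program and p_control are group proportions strictly between 0 and 1."""
    logit = lambda p: math.log(p / (1.0 - p))
    return (logit(p_program) - logit(p_control)) / 1.65
```

For example, cox_index(0.603, 0.532) for the African American row above gives roughly 0.18, in line with the 0.17 reported (the source's rounding conventions may differ slightly).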
Design
- What method was used to determine students' placement in treatment/control groups?
- Systematic
- Please describe the assignment method or the process for defining treatment/comparison groups.
- This study used Propensity Score Matching (PSM) to identify control group students who could be considered equivalent to the treatment students on the basis of pretest scores and other background characteristics. Based on an iterative approach, the best-fitting PSM model was determined to be one that considered student pretest scores, ethnicity, ELL status, and free or reduced-price lunch eligibility.
-
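The matching step described above can be sketched as a greedy nearest-neighbor match on precomputed propensity scores. This is a generic illustration, not the study's actual procedure: the study estimated scores from pretest scores, ethnicity, ELL status, and lunch eligibility, while the 0.05 caliper here is invented for the example.

```python
def match_controls(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching without replacement.
    treated and controls map student id -> propensity score (the estimated
    probability of receiving the intervention, fit elsewhere, e.g. by
    logistic regression on pretest scores and demographics)."""
    available = dict(controls)
    pairs = {}
    # Match the highest-scoring (hardest-to-match) treated students first.
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs[t_id] = c_id
            del available[c_id]  # without replacement
    return pairs
```

Treated students with no control inside the caliper are left unmatched, mirroring the common PSM practice of dropping unmatchable cases.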
What was the unit of assignment? - Students
- If other, please specify:
-
Please describe the unit of assignment: -
What unit(s) were used for primary data analysis? -
Schools
Teachers
Students
Classes
Other
If other, please specify:
-
Please describe the unit(s) used for primary data analysis:
Fidelity of Implementation
- How was the program delivered?
-
Individually
Small Group
Classroom
If small group, answer the following:
- Average group size
- Minimum group size
- Maximum group size
What was the duration of the intervention (If duration differed across participants, settings, or behaviors, describe for each.)?
- Weeks
- 12.00
- Sessions per week
- 5.00
- Duration of sessions in minutes
- 30.00
- What were the background, experience, training, and ongoing support of the instructors or interventionists?
- Prior to implementing Burst:ELI, school personnel participated in a standardized training series that included a one-day on-site session to prepare teachers or interventionists, a follow-up webinar for teachers or interventionists after 6–10 weeks, and a half-day on-site session to prepare instructional leaders. This training followed a common “see one, do one” model in the class with students, so teachers could quickly learn, through context, how the Burst:ELI instruction should be delivered. Ongoing technical training was also provided to school and district staff to help them install, manage, and troubleshoot the software.
- Describe when and how fidelity of treatment information was obtained.
- Burst: Reading is a software-based intervention requiring that educators access the product website to generate student groupings, review student performance reports, and download instructional materials. Fidelity of treatment information was inferred by reviewing teachers' access to this website. Additionally, the timing of assessment data collection during intervention was compared against the expected two-week assessment intervals.
- What were the results on the fidelity-of-treatment implementation measure?
- A fidelity of implementation analysis was not conducted for the current study.
Fidelity of implementation results were mentioned briefly in the white paper but not included there. The results of the analysis, from an unpublished paper, are provided as follows.
Due to the post-hoc nature of this study, only one component of Burst: ELI intervention fidelity could be partially examined: exposure. Exposure was operationalized according to two types of implementation data that were automatically tracked by the Burst: ELI system:
· The number of instruction sessions a Burst: ELI student received in a semester or a year; and
· The timeliness with which progress monitoring assessments were delivered.
The Burst: ELI system uses these data to remind teachers via email to assess students or to begin instruction when they begin to fall behind schedule.
Only Burst:ELI students were included in fidelity of implementation analyses. Those students missing demographic data were included in this analysis, as student characteristics were not used. There were 6,584 kindergarten students, 6,369 first grade students, 4,996 second grade students, and 3,045 third grade students included in the fidelity analysis.
See attachment "BurstReading Fidelity Analyses" for additional data.
- Was the fidelity measure also used in control classrooms?
- No
Measures and Results
Measures

Targeted Measure | Reverse Coded? | Reliability | Relevance | Exposure |
---|---|---|---|---|

Broader Measure | Reverse Coded? | Reliability | Relevance | Exposure |
---|---|---|---|---|

Administrative Data Measure | Reverse Coded? | Relevance |
---|---|---|
Effect Size
Effect size represents how much performance changed because of the intervention. The larger the effect size, the greater the impact of participating in the intervention.
According to guidelines from the What Works Clearinghouse, an effect size of 0.25 or greater is “substantively important.” Additionally, effect sizes that are statistically significant are more trustworthy than effect sizes of the same magnitude that are not statistically significant.
Effect Size Dial
The purpose of the effect size dial is to help users understand the strength of a tool relative to other tools on the Tools Chart.
- The range represents where most effect sizes fall within reading or math based on effect sizes from tools on the Tools Chart.
- The orange pointer shows the average effect size for this study.
Targeted Measures (Full Sample)
Average Reading Effect Size
Measure | Sample Type | Effect Size |
---|---|---|
Average across all targeted measures | Full Sample | 0.11* |
* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes. |
Broader Measures (Full Sample)
Measure | Sample Type | Effect Size |
---|---|---|
Average across all broader measures | Full Sample | -- |
* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes. |
Administrative Measures (Full Sample)
Measure | Sample Type | Effect Size |
---|---|---|
Average across all admin measures | Full Sample | -- |
* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes. |
Targeted Measures (Subgroups)
Measure | Sample Type | Effect Size |
---|---|---|
* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes. |
Broader Measures (Subgroups)
Measure | Sample Type | Effect Size |
---|---|---|
* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes. |
Administrative Measures (Subgroups)
Measure | Sample Type | Effect Size |
---|---|---|
* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes. |
- For any substantively (e.g., effect size ≥ 0.25 for pretest or demographic differences) or statistically significant (e.g., p < 0.05) pretest differences, please describe the extent to which these differences are related to the impact of the treatment. For example, if analyses were conducted to determine that outcomes from this study are due to the intervention and not pretest characteristics, please describe the results of those analyses here.
- Please explain any missing data or instances of measures with incomplete pre- or post-test data.
- If you have excluded a variable or data that are reported in the study being submitted, explain the rationale for exclusion:
- Describe the analyses used to determine whether the intervention produced changes in student outcomes:
- Subsequent to matching treatment and control students according to Propensity Score Matching values, a differences-in-differences approach was applied to the analysis of student performance on the DIBELS measures appropriate for each grade and semester. Difference scores for each grade and semester were calculated by subtracting pre-scores from post-scores for each group. The control group's difference score was then subtracted from the treatment group's difference score, and that value was submitted to a cluster-corrected t-test to account for the nesting of students within schools. In addition, Student Growth Percentiles (SGPs; Betebenner, 2009) were calculated for each grade and semester, placing student progress into a normative context. Median treatment and control group SGPs were calculated and compared to assess differences in student growth conditional on performance at the beginning of the semester. A median SGP of 50 or greater indicates overall growth better than that of the student sample in the study; median SGPs for the treatment group that exceed those of the control group suggest that Burst: Reading had an impact on student growth. The Mann-Whitney U test was employed to evaluate the hypothesis that the distribution of SGPs in the treatment group was shifted up with respect to the distribution of SGPs in the control group.
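The difference-in-differences step described above reduces to simple arithmetic on group means; a minimal sketch (the cluster-corrected t-test and the SGP/Mann-Whitney comparison are omitted):

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences on group means:
    (treatment gain) minus (control gain)."""
    mean = lambda xs: sum(xs) / len(xs)
    treat_gain = mean(treat_post) - mean(treat_pre)
    ctrl_gain = mean(ctrl_post) - mean(ctrl_pre)
    return treat_gain - ctrl_gain
```

A positive value indicates that the treatment group gained more from pre-test to post-test than the matched control group did.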
Additional Research
- Is the program reviewed by WWC or E-ESSA?
- E-ESSA
- Summary of WWC / E-ESSA Findings :
What Works Clearinghouse Review
This program was not reviewed by What Works Clearinghouse.
Evidence for ESSA
Program Outcomes: One study evaluated Burst: Reading with students in grades K-3 in 57 schools across multiple states. Positive effects compared to the control group were found on DIBELS Next and STAR Early Literacy measures (effect size = +0.10). Results were not significant at the level of randomization (school) but were significant at the student level, qualifying Burst for the ESSA “Promising” category.
Number of Studies: 1
Average Effect Size: 0.10
Full Report
- How many additional research studies are potentially eligible for NCII review?
- 0
- Citations for Additional Research Studies :
Data Collection Practices
Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.