Pirate Math Equation Quest
Study: Powell et al. (2020)
Summary
Pirate Math Equation Quest is a word-problem intervention designed to help elementary students increase their knowledge of total, difference, and change problems (individual intervention) or total, difference, change, and equal groups problems (small-group intervention). The intervention is designed to be implemented by educators in school settings. We tested the efficacy of the intervention in third grade for students experiencing word-problem difficulty (<25th percentile on a word-problem screener).
- Target Grades:
- 3
- Target Populations:
-
- Students with learning disabilities
- Any student at risk for academic failure
- Area(s) of Focus:
-
- Concepts and/or word problems
- Whole number arithmetic
- Algebra
- Where to Obtain:
- Sarah Powell, Katherine Berry, Lynn Fuchs
- http://www.piratemathequationquest.com/
- Initial Cost:
- Free
- Replacement Cost:
- Free
-
There is no cost to download the Pirate Math Equation Quest materials. Educators may incur copying costs (for copying flashcards or other materials), but these costs should be minimal.
- Staff Qualified to Administer Include:
-
- Special Education Teacher
- General Education Teacher
- Math Specialist
- Interventionist
- Training Requirements:
- Training not required
-
On the Pirate Math Equation Quest website, we provide videos that show how to teach the different word-problem schemas and how to use the activities within the intervention. For additional details, please visit Fuchs Tutoring Professional Learning at https://www.air.org/fuchs-tutoring-professional-learning.
Each year for three years, we recruited a group of ~15 interventionists. We trained them to implement Pirate Math Equation Quest using the intervention materials available to all educators.
- Access to Technical Support:
- Educators can reach out to the Pirate Math Equation Quest authors if they have any questions about implementation of the intervention.
- Recommended Administration Formats Include:
-
- Individual students
- Minimum Number of Minutes Per Session:
- 30
- Minimum Number of Sessions Per Week:
- 3
- Minimum Number of Weeks:
- 15
- Detailed Implementation Manual or Instructions Available:
- Yes
- Is Technology Required?
- No technology is required.
Program Information
Descriptive Information
Please provide a description of program, including intended use:
Pirate Math Equation Quest is a word-problem intervention designed to help elementary students increase their knowledge of total, difference, and change problems (individual intervention) or total, difference, change, and equal groups problems (small-group intervention). The intervention is designed to be implemented by educators in school settings. We tested the efficacy of the intervention in third grade for students experiencing word-problem difficulty (<25th percentile on a word-problem screener).
The program is intended for use in the following age(s) and/or grade(s).
Age 3-5
Kindergarten
First grade
Second grade
Third grade
Fourth grade
Fifth grade
Sixth grade
Seventh grade
Eighth grade
Ninth grade
Tenth grade
Eleventh grade
Twelfth grade
The program is intended for use with the following groups.
Students with learning disabilities
Students with intellectual disabilities
Students with emotional or behavioral disabilities
English language learners
Any student at risk for academic failure
Any student at risk for emotional and/or behavioral difficulties
Other
If other, please describe:
ACADEMIC INTERVENTION: Please indicate the academic area of focus.
Early Literacy
Alphabet knowledge
Phonological awareness
Early writing
Early decoding abilities
Other
If other, please describe:
Language
Grammar
Syntax
Listening comprehension
Other
If other, please describe:
Reading
Phonics/word study
Comprehension
Fluency
Vocabulary
Spelling
Other
If other, please describe:
Mathematics
Concepts and/or word problems
Whole number arithmetic
Comprehensive: Includes computation/procedures, problem solving, and mathematical concepts
Algebra
Fractions, decimals (rational number)
Geometry and measurement
Other
If other, please describe:
Writing
Spelling
Sentence construction
Planning and revising
Other
If other, please describe:
BEHAVIORAL INTERVENTION: Please indicate the behavior area of focus.
Externalizing Behavior
Verbal Threats
Property Destruction
Noncompliance
High Levels of Disengagement
Disruptive Behavior
Social Behavior (e.g., Peer interactions, Adult interactions)
Other
If other, please describe:
Internalizing Behavior
Anxiety
Social Difficulties (e.g., withdrawal)
School Phobia
Other
If other, please describe:
Acquisition and cost information
Where to obtain:
- Address
- Phone Number
- Website
- http://www.piratemathequationquest.com/
Initial cost for implementing program:
- Cost
- $0.00
- Unit of cost
Replacement cost per unit for subsequent use:
- Cost
- $0.00
- Unit of cost
- Duration of license
Additional cost information:
Describe basic pricing plan and structure of the program. Also, provide information on what is included in the published program, as well as what is not included but required for implementation (e.g., computer and/or internet access)
There is no cost to download the Pirate Math Equation Quest materials. Educators may incur copying costs (for copying flashcards or other materials), but these costs should be minimal.
Program Specifications
Setting for which the program is designed.
Small group of students
BI ONLY: A classroom of students
If group-delivered, how many students compose a small group?
Program administration time
- Minimum number of minutes per session
- 30
- Minimum number of sessions per week
- 3
- Minimum number of weeks
- 15
- If the intervention program is intended to occur less frequently than 60 minutes a week for approximately 8 weeks, justify the level of intensity:
Does the program include highly specified teacher manuals or step-by-step instructions for implementation? - Yes
BEHAVIORAL INTERVENTION: Is the program affiliated with a broad school- or class-wide management program?-
If yes, please identify and describe the broader school- or class-wide management program: -
Does the program require technology? - No
-
If yes, what technology is required to implement your program? -
Computer or tablet
Internet connection
Other technology (please specify)
If your program requires additional technology not listed above, please describe the required technology and the extent to which it is combined with teacher small-group instruction/intervention:
Training
- How many people are needed to implement the program?
- 1
Is training for the instructor or interventionist required? - No
- If yes, is the necessary training free or at-cost?
- Free
Describe the time required for instructor or interventionist training: - The interventionist should read the Pirate Math Equation Quest teacher manual and become familiar with the intervention materials.
Describe the format and content of the instructor or interventionist training: - On the Pirate Math Equation Quest website, we provide videos that show how to teach the different word-problem schemas and how to use the activities within the intervention. For additional details, please visit Fuchs Tutoring Professional Learning at https://www.air.org/fuchs-tutoring-professional-learning.
What types of professionals are qualified to administer your program?
General Education Teacher
Reading Specialist
Math Specialist
EL Specialist
Interventionist
Student Support Services Personnel (e.g., counselor, social worker, school psychologist, etc.)
Applied Behavior Analysis (ABA) Therapist or Board Certified Behavior Analyst (BCBA)
Paraprofessional
Other
If other, please describe:
- Does the program assume that the instructor or interventionist has expertise in a given area?
-
Yes
If yes, please describe:
This program assumes the interventionist has a foundational understanding of the word-problem schemas of total, difference, and change. This knowledge can be learned through interaction with the Pirate Math Equation Quest intervention.
Are training manuals and materials available? - Yes
-
Describe how the training manuals or materials were field-tested with the target population of instructors or interventionists and students: - Each year for three years, we recruited a group of ~15 interventionists. We trained them to implement Pirate Math Equation Quest using the intervention materials available to all educators.
Do you provide fidelity of implementation guidance such as a checklist for implementation in your manual? - Yes
-
Can practitioners obtain ongoing professional and technical support? -
Yes
If yes, please specify where/how practitioners can obtain support:
Educators can reach out to the Pirate Math Equation Quest authors if they have any questions about implementation of the intervention.
Summary of Evidence Base
- Please identify, to the best of your knowledge, all the research studies that have been conducted to date supporting the efficacy of your program, including studies currently or previously submitted to NCII for review. Please provide citations only (in APA format); do not include any descriptive information on these studies. NCII staff will also conduct a search to confirm that the list you provide is accurate.
-
Powell, S. R., Berry, K. A., Fall, A.-M., Roberts, G., Fuchs, L. S., & Barnes, M. A. (in press). Alternative paths to improved word-problem performance: An advantage for embedding pre-algebraic reasoning instruction within word-problem intervention. Journal of Educational Psychology. https://doi.org/10.1037/edu0000513
Study Information
Study Citations
Powell, S. R., Berry, K. A., Fall, A.-M., Roberts, G., Fuchs, L. S., & Barnes, M. A. (2020). Alternative paths to improved word-problem performance: An advantage for embedding prealgebraic reasoning instruction within word-problem intervention. Journal of Educational Psychology.
Participants
- Describe how students were selected to participate in the study:
- We recruited two cohorts of students for project participation across two years. During the 2016-2017 school year, for cohort 1, we recruited 37 third-grade teachers from 13 elementary schools. Several schools used departmentalization (i.e., the same teacher taught multiple mathematics classes), which accounted for the different numbers of teachers and classes. These 37 third-grade teachers taught 52 separate mathematics classes. From these 52 classrooms, we screened 916 third-grade students. During the 2017-2018 school year, for cohort 2, we recruited 44 teachers from 13 schools who taught 51 classrooms of students. We screened 818 third-grade students in the second cohort. In this study, we combined the data from cohorts 1 and 2 for a total of 1,734 third-grade students who participated in screening.
- Describe how students were identified as being at risk for academic failure (AI) or as having emotional or behavioral difficulties (BI):
- We screened all students using a measure of Single-Digit Word Problems (Jordan & Hanich, 2000). We used this measure to screen for mathematics difficulty (MD) in the area of word problems because word-problem solving was the primary focus of the intervention. For study eligibility, we identified students who answered 7 or fewer items correctly (out of 14) as experiencing MD. This cut-off score of 7 represented performance at or below the 25th percentile, a common cut-off score in research related to MD (Geary et al., 2012; Hecht & Vagi, 2010; Locuniak & Jordan, 2008). Based on the initial screening, we identified 472 students with MD. Of these, we did not pretest students for the following reasons: no parent consent or student assent (n = 28); limited English proficiency of student (i.e., student experienced great difficulty with testing in English and teacher agreed student was not ready to participate in word-problem intervention; n = 49); student had a disability and received too many other services (n = 9); student moved before pretesting finished (n = 7); student had behavior issues identified by teacher (n = 12); teacher had too many students with MD in one classroom (n = 35; in these cases, we randomly selected four students from each classroom to participate); student's parent opted the student out of the study (n = 7); or we could not schedule pretesting (n = 21). After completion of the pretest battery, we identified 304 third-grade students with MD across the two cohorts.
-
ACADEMIC INTERVENTION: What percentage of participants were at risk, as measured by one or more of the following criteria:
- below the 30th percentile on local or national norm, or
- identified disability related to the focus of the intervention?
- 27.2%
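The reported 27.2% is consistent with the screening figures given elsewhere in this summary (472 of 1,734 screened students identified with mathematics difficulty). A quick check of that arithmetic, under the assumption that the percentage refers to the screened sample:

```python
# Assumption: 27.2% = students identified with MD out of all students screened.
identified_md = 472
screened = 1734  # 916 (cohort 1) + 818 (cohort 2)

print(round(100 * identified_md / screened, 1))  # 27.2
```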
-
BEHAVIORAL INTERVENTION: What percentage of participants were at risk, as measured by one or more of the following criteria:
- emotional disability label,
- placed in an alternative school/classroom,
- non-responsive to Tiers 1 and 2, or
- designation of severe problem behaviors on a validated scale or through observation?
- %
- Specify which condition is the submitted intervention:
- The submitted intervention is named Pirate Math Equation Quest (PMEQ).
- Specify which condition is the control condition:
- The control condition is named BaU (business-as-usual).
- If you have a third, competing condition, in addition to your control and intervention condition, identify what the competing condition is (data from this competing condition will not be used):
- A competing condition is named Pirate Math-alone (PM-alone).
Using the tables that follow, provide data demonstrating comparability of the program group and control group in terms of demographics.
Grade Level
| Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
|---|---|---|---|
| Age less than 1 | | | |
| Age 1 | | | |
| Age 2 | | | |
| Age 3 | | | |
| Age 4 | | | |
| Age 5 | | | |
| Kindergarten | | | |
| Grade 1 | | | |
| Grade 2 | | | |
| Grade 3 | 100.0% | 100.0% | 0.00 |
| Grade 4 | | | |
| Grade 5 | | | |
| Grade 6 | | | |
| Grade 7 | | | |
| Grade 8 | | | |
| Grade 9 | | | |
| Grade 10 | | | |
| Grade 11 | | | |
| Grade 12 | | | |
Race–Ethnicity
| Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
|---|---|---|---|
| African American | 12.4% | 10.4% | 0.12 |
| American Indian | | | |
| Asian/Pacific Islander | 1.9% | 2.6% | 0.25 |
| Hispanic | 62.9% | 71.3% | 0.22 |
| White | 6.7% | 5.2% | 0.22 |
| Other | 2.9% | 3.5% | 0.00 |
Socioeconomic Status
| Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
|---|---|---|---|
| Subsidized Lunch | | | |
| No Subsidized Lunch | | | |
Disability Status
| Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
|---|---|---|---|
| Speech-Language Impairments | | | |
| Learning Disabilities | | | |
| Behavior Disorders | | | |
| Emotional Disturbance | | | |
| Intellectual Disabilities | | | |
| Other | 11.4% | 9.6% | 0.06 |
| Not Identified With a Disability | 88.6% | 90.4% | 0.06 |
ELL Status
| Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
|---|---|---|---|
| English Language Learner | 61.9% | 59.1% | 0.08 |
| Not English Language Learner | 38.1% | 40.9% | 0.08 |
Gender
| Demographic | Program Number | Control Number | Effect Size: Cox Index for Binary Differences |
|---|---|---|---|
| Female | 59.0% | 58.3% | 0.02 |
| Male | 41.0% | 41.7% | 0.02 |
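The Cox index reported in the demographic tables above can be computed directly from the two group proportions. A minimal sketch (the 1.65 divisor follows the What Works Clearinghouse convention for converting a logit difference to a standardized effect size):

```python
import math

def cox_index(p_program: float, p_control: float) -> float:
    """Cox index for a binary difference: logit difference divided by 1.65."""
    logit = lambda p: math.log(p / (1.0 - p))
    return (logit(p_program) - logit(p_control)) / 1.65

# African American row from the Race-Ethnicity table: 12.4% vs. 10.4%
print(round(cox_index(0.124, 0.104), 2))  # 0.12, matching the table
```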
Mean Effect Size
For any substantively (e.g., effect size ≥ 0.25 for pretest or demographic differences) or statistically significant (e.g., p < 0.05) pretest differences between groups in the descriptions below, please describe the extent to which these differences are related to the impact of the treatment. For example, if analyses were conducted to determine that outcomes from this study are due to the intervention and not demographic characteristics, please describe the results of those analyses here.
Design
- What method was used to determine students' placement in treatment/control groups?
- Random
- Please describe the assignment method or the process for defining treatment/comparison groups.
- We randomly assigned the 304 students to three conditions: Pirate Math Equation Quest (PMEQ) intervention; Pirate Math without Equation Quest (PM-alone) intervention; and business-as-usual (BaU) comparison. Students were allocated to condition within teachers, and teachers were nested in schools.
-
What was the unit of assignment? - Students
- If other, please specify:
-
Please describe the unit of assignment: - We randomly assigned the 304 students to three conditions: Pirate Math Equation Quest (PMEQ) intervention; Pirate Math without Equation Quest (PM-alone) intervention; and business-as-usual (BaU) comparison. Students were allocated to condition within teachers, and teachers were nested in schools.
-
What unit(s) were used for primary data analysis? -
Schools
Teachers
Students
Classes
Other
If other, please specify:
Interventionist -
Please describe the unit(s) used for primary data analysis: - We assigned students in the two active treatments (i.e., PMEQ or PM-alone) to an interventionist for purposes of delivering the intervention sessions. Placement with an interventionist was driven by students' existing and likely school schedules and by interventionists' availability; students were not randomized or "tracked" into interventionists, nor were interventionists randomized to treatment conditions. Students randomized to BaU were not assigned to interventionists because BaU students did not receive supplemental intervention from the research team. This arrangement, in which only a subset of a multilevel sample is nested (here, nested in interventionists), is commonly described as a partially nested randomized design (Lohr et al., 2014). When units are randomized within blocks (teachers, in this case), the design is a blocked partially nested randomized design. The nesting is partial because only a subset of students is subject to a given layer of nesting: all students in our study were nested in teachers (and teachers in schools), but only students in the two treatment conditions were also nested in interventionists. Additionally, when comparing more than two conditions, the design can be described as a multi-arm blocked partially nested design. In the multi-arm scenario, some considerations favor random allocation at the partially nested level (i.e., randomized assignment to interventionists; see Lohr et al., 2014). However, randomizing to interventionists in school settings is, in our experience, rarely feasible. That said, assignment to interventionists was systematic (non-random) only to the extent that placing students according to student and interventionist schedule openings introduced patterned data, which is relatively unlikely.
Fidelity of Implementation
- How was the program delivered?
-
Individually
Small Group
Classroom
If small group, answer the following:
- Average group size
- Minimum group size
- Maximum group size
What was the duration of the intervention (if duration differed across participants, settings, or behaviors, describe for each)?
- Weeks
- 15.00
- Sessions per week
- 3.00
- Duration of sessions in minutes
- 30.00
- What were the background, experience, training, and ongoing support of the instructors or interventionists?
- We recruited 27 research assistants to act as examiners for pre- and posttesting and as interventionists during tutoring. All research staff were pursuing or had obtained a Master's or doctoral degree in an education-related field. During the 2016-2017 school year (cohort 1), research staff (n = 15) were predominantly female (n = 13), with 53% identifying as Caucasian (n = 8), 27% as Hispanic (n = 4), 13% as Asian American (n = 2), and 7% as African American (n = 1). During the 2017-2018 school year (cohort 2), all research staff were female (n = 15), with 73% (n = 11) identifying as Caucasian, 13% as Hispanic (n = 2), 7% as American Indian (n = 1), and 7% as African American (n = 1). Only 10% (n = 3) of research staff were the same from cohort 1 to cohort 2. Throughout the year, research staff participated in trainings to ensure strong preparation for all aspects of the intervention. In late August and early September, examiners participated in three 3-hr pretesting trainings. In early October, the team participated in two 1.5-hr tutoring trainings about the content of the intervention and Total problems. Two subsequent 1.5-hr tutoring trainings followed in November to introduce Difference problems and in January to introduce Change problems. Lastly, examiners participated in one 1.5-hr posttesting training meeting.
- Describe when and how fidelity of treatment information was obtained.
- We collected fidelity-of-implementation data in several ways. First, for pretesting and posttesting, interventionists recorded all testing sessions. We randomly selected 472 of 2,366 (19.9%) audio recordings for analysis, evenly distributed across interventionists, and measured fidelity to testing procedures against detailed fidelity checklists. We measured pretesting fidelity at 98.5% (SD = 0.024) and posttesting fidelity at 98.8% (SD = 0.031).
Second, we measured fidelity of implementation of the interventions. The Project Manager conducted in-person fidelity observations once every three weeks for every interventionist. We also measured fidelity of intervention implementation through analysis of audio-recorded sessions. We audio-recorded every intervention session and randomly selected 1,632 of 8,160 (20.0%) audio-recorded sessions for analysis, evenly distributed across interventionists. Fidelity averaged 98% (SD = 0.037) for in-person supervisory observations and 98% (SD = 0.03) for audio-recorded intervention sessions.
Third, all interventionists tracked the number of sessions for their PMEQ and PM-alone students. We designed the intervention for students to finish at least 45 sessions with a maximum number of 51 sessions. The average PMEQ student completed 47.7 sessions of intervention (range 41 to 50; SD = 1.2), and the average PM-alone student completed 47.4 sessions of intervention (range 38 to 50; SD = 1.9). All sessions lasted 30 min, meaning PMEQ students received approximately 23.8 hours of intervention and PM-alone students received approximately 23.7 hours of intervention.
- What were the results on the fidelity-of-treatment implementation measure?
- We collected fidelity-of-implementation data in several ways. First, for pretesting and posttesting, interventionists recorded all testing sessions. We randomly selected 472 of 2,366 (19.9%) audio recordings for analysis, evenly distributed across interventionists, and measured fidelity to testing procedures against detailed fidelity checklists. We measured pretesting fidelity at 98.5% (SD = 0.024) and posttesting fidelity at 98.8% (SD = 0.031).
Second, we measured fidelity of implementation of the interventions. The Project Manager conducted in-person fidelity observations once every three weeks for every interventionist. We also measured fidelity of intervention implementation through analysis of audio-recorded sessions. We audio-recorded every intervention session and randomly selected 1,632 of 8,160 (20.0%) audio-recorded sessions for analysis, evenly distributed across interventionists. Fidelity averaged 98% (SD = 0.037) for in-person supervisory observations and 98% (SD = 0.03) for audio-recorded intervention sessions.
Third, all interventionists tracked the number of sessions for their PMEQ and PM-alone students. We designed the intervention for students to finish at least 45 sessions with a maximum number of 51 sessions. The average PMEQ student completed 47.7 sessions of intervention (range 41 to 50; SD = 1.2), and the average PM-alone student completed 47.4 sessions of intervention (range 38 to 50; SD = 1.9). All sessions lasted 30 min, meaning PMEQ students received approximately 23.8 hours of intervention and PM-alone students received approximately 23.7 hours of intervention.
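The sampling rates and dosage figures above follow from simple arithmetic; a quick check:

```python
# Share of audio recordings randomly selected for fidelity coding.
testing_rate = 472 / 2366   # ~19.9% of testing recordings
session_rate = 1632 / 8160  # 20.0% of intervention recordings

# Dosage: mean completed sessions x 30 minutes per session, converted to hours.
pmeq_hours = 47.7 * 30 / 60      # 23.85, reported as approximately 23.8 hours
pm_alone_hours = 47.4 * 30 / 60  # 23.7 hours

print(round(testing_rate * 100, 1), round(session_rate * 100, 1))  # 19.9 20.0
```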
- Was the fidelity measure also used in control classrooms?
- No.
Measures and Results
Measures

| Targeted Measure | Reverse Coded? | Reliability | Relevance | Exposure |
|---|---|---|---|---|

| Broader Measure | Reverse Coded? | Reliability | Relevance | Exposure |
|---|---|---|---|---|

| Administrative Data Measure | Reverse Coded? | Relevance |
|---|---|---|
Effect Size
Effect size represents how much performance changed because of the intervention. The larger the effect size, the greater the impact of participating in the intervention.
According to guidelines from the What Works Clearinghouse, an effect size of 0.25 or greater is “substantively important.” Additionally, effect sizes that are statistically significant are more trustworthy than effect sizes of the same magnitude that are not statistically significant.
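The study reports Hedges' g for its baseline comparisons. As an illustration of how a standardized mean-difference effect size of this kind is computed — a sketch with hypothetical numbers, not the study's actual data; the small-sample correction shown is the one commonly used in WWC-style reviews:

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with the small-sample correction
    omega = 1 - 3 / (4N - 9)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    omega = 1 - 3 / (4 * (n1 + n2) - 9)
    return omega * d

# Hypothetical posttest summary statistics (not the study's values):
print(round(hedges_g(m1=20.0, m2=15.0, sd1=5.0, sd2=5.0, n1=100, n2=100), 2))
```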
Effect Size Dial
The purpose of the effect size dial is to help users understand the strength of a tool relative to other tools on the Tools Chart.
- The range represents where most effect sizes fall in reading or math, based on effect sizes from tools on the Tools Chart.
- The orange pointer shows the average effect size for this study.
Targeted Measures (Full Sample)
Average Math Effect Size
| Measure | Sample Type | Effect Size |
|---|---|---|
| Average across all targeted measures | Full Sample | 0.99* |

* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes.
Broader Measures (Full Sample)
| Measure | Sample Type | Effect Size |
|---|---|---|
| Average across all broader measures | Full Sample | -- |

* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes.
Administrative Measures (Full Sample)
| Measure | Sample Type | Effect Size |
|---|---|---|
| Average across all admin measures | Full Sample | -- |

* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes.
Targeted Measures (Subgroups)
| Measure | Sample Type | Effect Size |
|---|---|---|

* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes.
Broader Measures (Subgroups)
| Measure | Sample Type | Effect Size |
|---|---|---|

* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes.
Administrative Measures (Subgroups)
| Measure | Sample Type | Effect Size |
|---|---|---|

* = p ≤ 0.05; † = Vendor did not provide necessary data for NCII to calculate effect sizes.
- For any substantively (e.g., effect size ≥ 0.25 for pretest or demographic differences) or statistically significant (e.g., p < 0.05) pretest differences, please describe the extent to which these differences are related to the impact of the treatment. For example, if analyses were conducted to determine that outcomes from this study are due to the intervention and not pretest characteristics, please describe the results of those analyses here.
- Please explain any missing data or instances of measures with incomplete pre- or post-test data.
- Overall, 20 students (6.7% of the 304 randomized students) did not complete the intervention because they (a) left the participating school prior to treatment's end (n = 16), (b) were discontinued from intervention due to disruptive behavior (n = 2), or (c) went into protective custody (n = 2). Attrition rates varied across treatment conditions. In BaU, 1% of students did not complete the posttest battery; 6% of students in the PM-alone condition did not complete the intervention and/or posttesting; and 15% of students in PMEQ failed to finish the intervention or complete posttesting. An overall attrition rate of 6.7% combined with differential attrition of 14% represents bias, which poses a threat to the study's internal validity. However, despite the differential attrition, the three groups were very similar on important baseline characteristics (Hedges' g ranged from 0.01 to 0.11; p-values ranged from .25 to .97; see Tables 1 and 2), suggesting equivalence prior to the onset of treatment (What Works Clearinghouse, 2017). There was no attrition among teachers or schools.
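A quick sketch of the attrition arithmetic described above (note that 20/304 computes to roughly 6.6%, close to the reported 6.7%; the small gap presumably reflects rounding in the underlying counts):

```python
# Overall attrition: non-completers out of all randomized students.
overall = 20 / 304          # ~0.066; reported as 6.7% in the study
# Differential attrition: highest minus lowest condition attrition rate.
differential = 0.15 - 0.01  # PMEQ (15%) minus BaU (1%) = 14 percentage points

print(round(overall * 100, 1), round(differential * 100))  # 6.6 14
```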
- If you have excluded a variable or data that are reported in the study being submitted, explain the rationale for exclusion:
- None.
- Describe the analyses used to determine whether the intervention produced changes in student outcomes:
- We assigned students in the two active treatments (i.e., PMEQ or PM-alone) to an interventionist for purposes of delivering the intervention sessions. Placement with an interventionist was driven by students' existing and likely school schedules and by interventionists' availability; students were not randomized or "tracked" into interventionists, nor were interventionists randomized to treatment conditions. Students randomized to BaU were not assigned to interventionists because BaU students did not receive supplemental intervention from the research team. This arrangement, in which only a subset of a multilevel sample is nested (here, nested in interventionists), is commonly described as a partially nested randomized design (Lohr et al., 2014). When units are randomized within blocks (teachers, in this case), the design is a blocked partially nested randomized design. The nesting is partial because only a subset of students is subject to a given layer of nesting: all students in our study were nested in teachers (and teachers in schools), but only students in the two treatment conditions were also nested in interventionists. Additionally, when comparing more than two conditions, the design can be described as a multi-arm blocked partially nested design. In the multi-arm scenario, some considerations favor random allocation at the partially nested level (i.e., randomized assignment to interventionists; see Lohr et al., 2014). However, randomizing to interventionists in school settings is, in our experience, rarely feasible. That said, assignment to interventionists was systematic (non-random) only to the extent that placing students according to student and interventionist schedule openings introduced patterned data, which is relatively unlikely.
Ignoring the asymmetry suggested by the partial nesting of students in interventionists is problematic because the different data structures for the treatment and the BaU groups imply different variance components. In our case, outcomes for treatment-assigned cases may vary by interventionists, unlike students assigned to BaU. We modeled the effect of interventionists in the treatment conditions only, following recommendations of Bauer et al. (2008) and Lohr et al. (2014). We modeled this effect as random, and allowed different variance estimates for treatment groups and BaU under the assumption that errors are independent.
These data were cross-classified in addition to being partially nested. In multilevel data, cross-classification occurs when cases from different levels of the model are not completely nested. For example, in our case, teacher and interventionist were crossed because students from the same teacher may have worked with different interventionists and because students from different teachers may have worked with the same interventionist (similar to Fuchs, Schumacher, et al., 2014). Note as well that cross-classification only occurs in the two treatment conditions because interventionists did not interact with students in the BaU. In this partially cross-classified structure (Luo et al., 2015), cases in one condition are nested under one random factor whereas cases in the other conditions are cross-classified across two random factors. The nested and cross-classified model components differ in their random effects. The random effect of interventionists exists only for students in one of the two treatment conditions, conditional on teacher effects. We modeled the data accordingly. Treatment-assigned cases were crossed on teacher and interventionist; BaU-assigned students were nested in teachers (see Luo et al., 2015).
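The heterogeneous-variance structure described above can be written compactly. A simplified two-condition sketch in our own notation (following the general form in Bauer et al., 2008, not the authors' exact specification), where T_ij = 1 for students assigned to treatment and 0 for BaU:

```latex
y_{ij} = \beta_0 + \beta_1 T_{ij} + T_{ij}\, u_j + e_{ij},
\qquad u_j \sim N(0, \tau^2),
\qquad e_{ij} \sim N\!\big(0,\ T_{ij}\,\sigma^2_T + (1 - T_{ij})\,\sigma^2_C\big)
```

Here treatment students share a random interventionist effect u_j, while BaU students contribute only a residual with its own variance; teacher and school effects, and the cross-classification of teacher with interventionist, extend this skeleton with additional crossed random effects.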
Additional Research
- Is the program reviewed by WWC or E-ESSA?
- No
- Summary of WWC / E-ESSA Findings:
What Works Clearinghouse Review
This program was not reviewed by the What Works Clearinghouse.
Evidence for ESSA
This program was not reviewed by Evidence for ESSA.
- How many additional research studies are potentially eligible for NCII review?
- 0
- Citations for Additional Research Studies:
Data Collection Practices
Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.