Hot Math Tutoring
Study: Fuchs et al. (2008)

Summary

Hot Math Tutoring is a third-grade small-group tutoring program designed to enhance at-risk (AR) students’ word-problem performance. Based on schema theory, Hot Math Tutoring provides explicit instruction on (a) solution strategies for four word-problem types and (b) how to transfer those solution strategies to word problems with unexpected features, such as problems that include irrelevant information, present a novel question requiring an extra step, present relevant information in charts or graphs, or combine problem types. Hot Math Tutoring centers on four word-problem types, chosen from common third-grade curricula: “shopping list” word problems (e.g., Joe needs supplies for the science project. He needs 2 batteries, 3 wires, and 1 board. Batteries cost $4 each, wires cost $2 each, and boards cost $6 each. How much money does he need to buy supplies?), “half” problems (e.g., Marcy will buy 14 baseball cards. She’ll give her brother half the cards. How many cards will Marcy have?), step-up function or “buying bags” problems (e.g., Jose needs 32 party hats for his party. Party hats come in bags of 4. How many bags of party hats does Jose need?), and 2-step “pictograph” problems (e.g., Mary keeps track of the number of chores she does on this chart [pictograph is shown with label: each picture stands for 3 chores]. She also took her grandmother to the market 3 times last week. How many chores has Mary done?). The content of Hot Math Tutoring mirrors a companion classroom Hot Math program, but randomized controlled trial data support the efficacy of Hot Math Tutoring whether it is used with or without the classroom Hot Math program. Hot Math Tutoring relies on explicit instruction and self-regulated learning strategies. The program is divided into 3-week units (three 20- to 30-minute sessions per week); one unit is devoted to each of the four word-problem types, and a 1-week review is conducted following winter break.
Frequent cumulative review across word-problem types is incorporated. During the first 5 sessions of each unit, problem-solution instruction is delivered. In Session 1, tutors address the underlying concepts and structural features of the problem type using concrete objects. Together with the students, tutors work several examples and, as they refer to the poster and the concrete objects, explain why and how each step of the solution method was applied in the examples. Next, students respond frequently to questions as they work 2-4 problems together with the tutor. Beginning in Session 2, students complete one problem independently, which the tutor reviews and scores. Sessions 6-9 in each unit are designed to teach students to transfer the solution strategy to problems with unexpected questions or irrelevant information. Tutors first teach the meaning of the word transfer and then teach about unexpected questions and irrelevant information. In Sessions 6-9, students continue to work on transfer and to complete problems independently. Throughout each tutoring session, tutors award points for attention to task and correct work. Once per week, students have an opportunity to trade points for small prizes. Tutors follow scripts to ensure consistency but are not permitted to read or recite the scripts verbatim.

Target Grades:
3
Target Populations:
  • Students with disabilities only
  • Students with learning disabilities
  • Students with intellectual disabilities
  • Students with emotional or behavioral disabilities
  • Any student at risk for academic failure
Area(s) of Focus:
  • Concepts and/or word problems
Where to Obtain:
Lynn Fuchs and Doug Fuchs
228 Peabody, Vanderbilt University, Nashville, TN 37220
615-343-4782
www.peerassistedlearningstrategies.net
Initial Cost:
$80.00 per tutor
Replacement Cost:
$25.00 per student per year

Initial cost for implementing the program: $80 per tutor plus approximately $25 per student in copying. Replacement cost per student for subsequent use: approximately $25. The manual provides all information necessary for implementation and includes masters of all materials. Schools need to make copies of materials (we recommend lamination for posters and reusable materials). INCLUDED: manual ($40) and masters of all materials ($40). NOT INCLUDED: individual student copies of materials and concrete reinforcers.

Staff Qualified to Administer Include:
  • Special Education Teacher
  • General Education Teacher
  • Reading Specialist
  • Math Specialist
  • EL Specialist
  • Interventionist
  • Student Support Services Personnel (e.g., counselor, social worker, school psychologist, etc.)
  • Paraprofessional
  • Other:
Training Requirements:
1 day of training plus follow up by school or district staff

Tutors are trained in one full-day session, in which they are introduced to the program and its goals and provided instruction, demonstrations, and scripted materials. They are paired to practice the program. Then, they conduct one lesson for a trainer and are judged on a point-by-point system for fidelity to treatment. A tutor who achieves 95% fidelity is considered reliable. A tutor who scores lower than 95% fidelity is coached on the points he/she missed, asked to practice more, and then re-rated at a later time on another lesson. At weekly meetings, tutors meet with a trainer to solve problems that arise. At the beginning of each unit, a 3-hour training session is conducted to orient tutors and distribute supporting materials.


The manuals have already been used widely, and users report high levels of satisfaction.

Access to Technical Support:
Contact Flora.Murray@vanderbilt.edu for information on how to arrange a 1-day workshop.
Recommended Administration Formats Include:
  • Individual students
  • Small group of students
Minimum Number of Minutes Per Session:
20
Minimum Number of Sessions Per Week:
3
Minimum Number of Weeks:
13
Detailed Implementation Manual or Instructions Available:
Yes
Is Technology Required?
No technology is required.

Program Information

Descriptive Information

Please provide a description of program, including intended use:

Hot Math Tutoring is a third-grade small-group tutoring program designed to enhance at-risk (AR) students’ word-problem performance. Based on schema theory, Hot Math Tutoring provides explicit instruction on (a) solution strategies for four word-problem types and (b) how to transfer those solution strategies to word problems with unexpected features, such as problems that include irrelevant information, present a novel question requiring an extra step, present relevant information in charts or graphs, or combine problem types. Hot Math Tutoring centers on four word-problem types, chosen from common third-grade curricula: “shopping list” word problems (e.g., Joe needs supplies for the science project. He needs 2 batteries, 3 wires, and 1 board. Batteries cost $4 each, wires cost $2 each, and boards cost $6 each. How much money does he need to buy supplies?), “half” problems (e.g., Marcy will buy 14 baseball cards. She’ll give her brother half the cards. How many cards will Marcy have?), step-up function or “buying bags” problems (e.g., Jose needs 32 party hats for his party. Party hats come in bags of 4. How many bags of party hats does Jose need?), and 2-step “pictograph” problems (e.g., Mary keeps track of the number of chores she does on this chart [pictograph is shown with label: each picture stands for 3 chores]. She also took her grandmother to the market 3 times last week. How many chores has Mary done?). The content of Hot Math Tutoring mirrors a companion classroom Hot Math program, but randomized controlled trial data support the efficacy of Hot Math Tutoring whether it is used with or without the classroom Hot Math program. Hot Math Tutoring relies on explicit instruction and self-regulated learning strategies. The program is divided into 3-week units (three 20- to 30-minute sessions per week); one unit is devoted to each of the four word-problem types, and a 1-week review is conducted following winter break.
Frequent cumulative review across word-problem types is incorporated. During the first 5 sessions of each unit, problem-solution instruction is delivered. In Session 1, tutors address the underlying concepts and structural features of the problem type using concrete objects. Together with the students, tutors work several examples and, as they refer to the poster and the concrete objects, explain why and how each step of the solution method was applied in the examples. Next, students respond frequently to questions as they work 2-4 problems together with the tutor. Beginning in Session 2, students complete one problem independently, which the tutor reviews and scores. Sessions 6-9 in each unit are designed to teach students to transfer the solution strategy to problems with unexpected questions or irrelevant information. Tutors first teach the meaning of the word transfer and then teach about unexpected questions and irrelevant information. In Sessions 6-9, students continue to work on transfer and to complete problems independently. Throughout each tutoring session, tutors award points for attention to task and correct work. Once per week, students have an opportunity to trade points for small prizes. Tutors follow scripts to ensure consistency but are not permitted to read or recite the scripts verbatim.
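The arithmetic behind three of the four example problems above can be sketched directly (the 2-step pictograph example depends on the chart, which is not reproduced here). This is an illustrative sketch, not program material; the variable names are ours:

```python
import math

# "Shopping list" type: total cost = sum of quantity x unit price,
# using the items and prices from the Joe example above.
items = {"batteries": (2, 4), "wires": (3, 2), "boards": (1, 6)}
total_cost = sum(qty * price for qty, price in items.values())  # 2*4 + 3*2 + 1*6 = 20

# "Half" type: Marcy keeps half of 14 cards.
cards_kept = 14 // 2  # 7

# "Buying bags" (step-up function) type: round UP to whole bags.
bags_needed = math.ceil(32 / 4)  # 8

print(total_cost, cards_kept, bags_needed)  # 20 7 8
```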

The program is intended for use in the following age(s) and/or grade(s).

not selected Age 0-3
not selected Age 3-5
not selected Kindergarten
not selected First grade
not selected Second grade
selected Third grade
not selected Fourth grade
not selected Fifth grade
not selected Sixth grade
not selected Seventh grade
not selected Eighth grade
not selected Ninth grade
not selected Tenth grade
not selected Eleventh grade
not selected Twelfth grade


The program is intended for use with the following groups.

selected Students with disabilities only
selected Students with learning disabilities
selected Students with intellectual disabilities
selected Students with emotional or behavioral disabilities
not selected English language learners
selected Any student at risk for academic failure
not selected Any student at risk for emotional and/or behavioral difficulties
not selected Other
If other, please describe:

ACADEMIC INTERVENTION: Please indicate the academic area of focus.

Early Literacy

not selected Print knowledge/awareness
not selected Alphabet knowledge
not selected Phonological awareness
not selected Early writing
not selected Early decoding abilities
not selected Other

If other, please describe:

Language

not selected Expressive and receptive vocabulary
not selected Grammar
not selected Syntax
not selected Listening comprehension
not selected Other
If other, please describe:

Reading

not selected Phonological awareness
not selected Phonics/word study
not selected Comprehension
not selected Fluency
not selected Vocabulary
not selected Spelling
not selected Other
If other, please describe:

Mathematics

not selected Computation
selected Concepts and/or word problems
not selected Whole number arithmetic
not selected Comprehensive: Includes computation/procedures, problem solving, and mathematical concepts
not selected Algebra
not selected Fractions, decimals (rational number)
not selected Geometry and measurement
not selected Other
If other, please describe:

Writing

not selected Handwriting
not selected Spelling
not selected Sentence construction
not selected Planning and revising
not selected Other
If other, please describe:

BEHAVIORAL INTERVENTION: Please indicate the behavior area of focus.

Externalizing Behavior

not selected Physical Aggression
not selected Verbal Threats
not selected Property Destruction
not selected Noncompliance
not selected High Levels of Disengagement
not selected Disruptive Behavior
not selected Social Behavior (e.g., Peer interactions, Adult interactions)
not selected Other
If other, please describe:

Internalizing Behavior

not selected Depression
not selected Anxiety
not selected Social Difficulties (e.g., withdrawal)
not selected School Phobia
not selected Other
If other, please describe:

Acquisition and cost information

Where to obtain:

Address
228 Peabody, Vanderbilt University, Nashville, TN 37220
Phone Number
615-343-4782
Website
www.peerassistedlearningstrategies.net

Initial cost for implementing program:

Cost
$80.00
Unit of cost
tutor

Replacement cost per unit for subsequent use:

Cost
$25.00
Unit of cost
student
Duration of license
year

Additional cost information:

Describe basic pricing plan and structure of the program. Also, provide information on what is included in the published program, as well as what is not included but required for implementation (e.g., computer and/or internet access)

Initial cost for implementing the program: $80 per tutor plus approximately $25 per student in copying. Replacement cost per student for subsequent use: approximately $25. The manual provides all information necessary for implementation and includes masters of all materials. Schools need to make copies of materials (we recommend lamination for posters and reusable materials). INCLUDED: manual ($40) and masters of all materials ($40). NOT INCLUDED: individual student copies of materials and concrete reinforcers.

Program Specifications

Setting for which the program is designed.

selected Individual students
selected Small group of students
not selected BI ONLY: A classroom of students

If group-delivered, how many students compose a small group?

   2-4

Program administration time

Minimum number of minutes per session
20
Minimum number of sessions per week
3
Minimum number of weeks
13
not selected N/A (implemented until effective)

If the intervention program is intended to occur less than 60 minutes a week for approximately 8 weeks, justify the level of intensity:

Does the program include highly specified teacher manuals or step by step instructions for implementation?
Yes

BEHAVIORAL INTERVENTION: Is the program affiliated with a broad school- or class-wide management program?

If yes, please identify and describe the broader school- or class-wide management program:

Does the program require technology?
No

If yes, what technology is required to implement your program?
not selected Computer or tablet
not selected Internet connection
not selected Other technology (please specify)

If your program requires additional technology not listed above, please describe the required technology and the extent to which it is combined with teacher small-group instruction/intervention:

Training

How many people are needed to implement the program?

Is training for the instructor or interventionist required?
Yes
If yes, is the necessary training free or at-cost?
At-cost

Describe the time required for instructor or interventionist training:
1 day of training plus follow up by school or district staff

Describe the format and content of the instructor or interventionist training:
Tutors are trained in one full-day session, in which they are introduced to the program and its goals and provided instruction, demonstrations, and scripted materials. They are paired to practice the program. Then, they conduct one lesson for a trainer and are judged on a point-by-point system for fidelity to treatment. A tutor who achieves 95% fidelity is considered reliable. A tutor who scores lower than 95% fidelity is coached on the points he/she missed, asked to practice more, and then re-rated at a later time on another lesson. At weekly meetings, tutors meet with a trainer to solve problems that arise. At the beginning of each unit, a 3-hour training session is conducted to orient tutors and distribute supporting materials.

What types of professionals are qualified to administer your program?

selected Special Education Teacher
selected General Education Teacher
selected Reading Specialist
selected Math Specialist
selected EL Specialist
selected Interventionist
selected Student Support Services Personnel (e.g., counselor, social worker, school psychologist, etc.)
not selected Applied Behavior Analysis (ABA) Therapist or Board Certified Behavior Analyst (BCBA)
selected Paraprofessional
not selected Other

If other, please describe:

Does the program assume that the instructor or interventionist has expertise in a given area?
No   

If yes, please describe: 


Are training manuals and materials available?
Yes

Describe how the training manuals or materials were field-tested with the target population of instructors or interventionist and students:
The manuals have already been used widely, and users report high levels of satisfaction.

Do you provide fidelity of implementation guidance such as a checklist for implementation in your manual?

Can practitioners obtain ongoing professional and technical support?
Yes

If yes, please specify where/how practitioners can obtain support:

Contact Flora.Murray@vanderbilt.edu for information on how to arrange a 1-day workshop.

Summary of Evidence Base

Please identify, to the best of your knowledge, all the research studies that have been conducted to date supporting the efficacy of your program, including studies currently or previously submitted to NCII for review. Please provide citations only (in APA format); do not include any descriptive information on these studies. NCII staff will also conduct a search to confirm that the list you provide is accurate.

Study Information

Study Citations

Fuchs, L. S., Fuchs, D., Craddock, C., Hollenbeck, K. N., Hamlett, C. L., & Schatschneider, C. (2008). Effects of small-group tutoring with and without validated classroom instruction on at-risk students' math problem solving: Are two tiers of prevention better than one? Journal of Educational Psychology, 100, 491-509.

Participants: Full Bubble

Describe how students were selected to participate in the study:
This study was conducted across four years, with 30 classrooms participating each year, for a total of 120 third-grade classrooms. We refer to each year’s sample as a “cohort.” Stratifying so that each condition was represented approximately equally in each school, we randomly assigned 40 classrooms to control (i.e., teacher-designed math problem-solving instruction) and 80 classrooms to Hot Math SBI (i.e., researcher-designed SBI). One Hot Math SBI teacher dropped out for personal reasons during the first month of the study. To obtain a representative sample, we screened 2,023 students for whom we had consent. From these, in the 119 remaining third-grade classrooms, we randomly sampled 1,200 students for participation, blocking within classroom and within three strata on the Test of Computational Fluency (Fuchs, Hamlett, & Fuchs, 1990): (a) 25% of students with scores more than 1 SD below the mean of the entire distribution; (b) 50% of students with scores within 1 SD of the mean; and (c) 25% of students with scores more than 1 SD above the mean. Of these 1,200 students, 59 moved prior to posttesting (including 45 AR students); the children who moved were demographically comparable to the pupils who remained in the study. From the 1,200 sampled students, we identified 288 as at risk (AR) for poor problem-solving outcomes. To derive a parsimonious equation for predicting problem-solving outcomes, we conducted regression analyses on a previous database (Fuchs, Fuchs, Prentice, Hamlett, Finelli, & Courey, 2004) of third-grade students who had received Hot Math SBI. The final prediction equation included pretest performance on the immediate transfer problem-solving measure (see Measures) and pretest performance on the Test of Computational Fluency (Fuchs et al., 1990).
For each cohort, we rank-ordered students on the predicted score and selected the lowest 72 students in that year’s sample. All of these students scored below the district criterion designating risk for math learning disabilities on the Test of Computational Fluency. These students were randomly assigned to tutoring conditions, stratifying by classroom condition. In this way, some AR students received neither classroom nor tutoring Hot Math SBI; some received classroom but not tutoring Hot Math SBI; some received tutoring but not classroom Hot Math SBI; and some received both classroom and tutoring Hot Math SBI. Of the 288 AR students, 45 moved prior to posttesting. On demographic and pretest performance variables, the 45 children who moved prior to posttesting were comparable to the 243 pupils who remained in the study, and there were no significant interactions between AR students’ tutoring condition and attrition status. See Table 1 for student demographics and pretreatment intelligence, reading, and math standard scores by group for the “program” and “control” groups. The demographics of the two groups relevant to the TRC review were comparable. (NOTE: In the research report, classrooms were randomly assigned to control (teacher-designed word-problem instruction; one-third of classrooms) or Classroom Hot Math (two-thirds of classrooms). Within these whole-class conditions, at-risk students were randomly assigned to control (no tutoring; one-third of students within each classroom condition) or Hot Math Tutoring (two-thirds of students within each classroom condition).
In this way, AR students received one of four conditions: (1) no Classroom Hot Math and no Hot Math Tutoring (we refer to this as “control” in this protocol), (2) Classroom Hot Math and no Hot Math Tutoring (not addressed in this protocol), (3) no Classroom Hot Math with Hot Math Tutoring (referred to as the “program” condition in this protocol), and (4) Classroom Hot Math and Hot Math Tutoring (not addressed in this protocol). In the tables in the attached research report, look under “Class-level Condition: Control: Tutoring-level condition” (i.e., left side of tables). Then look at “AR tutor” for the program condition (second column) and at “AR control” for the control condition (third column). This contrasts AR students who did not receive Hot Math Tutoring against AR students who did receive Hot Math Tutoring. None of the students in these two conditions received Classroom Hot Math.)
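The 25/50/25 stratified sampling procedure described above can be sketched as follows. This is a minimal illustration under assumed data (simulated screening scores); the function name and the normal score distribution are ours, not the study's:

```python
import random
import statistics

def stratified_sample(scores, n, rng):
    """Sample 25% of n from students scoring more than 1 SD below the mean,
    50% from within 1 SD of the mean, and 25% from more than 1 SD above."""
    values = list(scores.values())
    mean, sd = statistics.mean(values), statistics.stdev(values)
    low = [s for s, x in scores.items() if x < mean - sd]
    mid = [s for s, x in scores.items() if mean - sd <= x <= mean + sd]
    high = [s for s, x in scores.items() if x > mean + sd]
    return (rng.sample(low, min(len(low), n // 4)) +
            rng.sample(mid, min(len(mid), n // 2)) +
            rng.sample(high, min(len(high), n // 4)))

# Simulated screening scores for one cohort's consented students.
rng = random.Random(0)
scores = {f"student_{i}": rng.gauss(100, 15) for i in range(500)}
sample = stratified_sample(scores, 40, rng)
print(len(sample))  # 40 when every stratum is large enough
```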

Describe how students were identified as being at risk for academic failure (AI) or as having emotional or behavioral difficulties (BI):
All of these students scored below the district criterion designating risk for math learning disabilities on the Test of Computational Fluency. The at-risk sample was at the 24th percentile (the lowest 72 of each cohort’s 300 students). The 300 students were a representative sample on a combination of the pretest immediate transfer measure of math problem solving (a reliable index that correlates well with commercial measures of math problem solving) and pretest performance on the Test of Computational Fluency, a reliable and widely used measure of mathematics skill. We use the term “representative sample” in the research-design sense, i.e., representing the full range of performance (not a sample selected to be low or high performing). The students in this study were in a metropolitan area with a high proportion of students receiving subsidized lunch, so in demographic terms it is safe to assume the sample falls below the 25th percentile of a nationally representative sample.

ACADEMIC INTERVENTION: What percentage of participants were at risk, as measured by one or more of the following criteria:
  • below the 30th percentile on local or national norm, or
  • identified disability related to the focus of the intervention?
%

BEHAVIORAL INTERVENTION: What percentage of participants were at risk, as measured by one or more of the following criteria:
  • emotional disability label,
  • placed in an alternative school/classroom,
  • non-responsive to Tiers 1 and 2, or
  • designation of severe problem behaviors on a validated scale or through observation?
%

Specify which condition is the submitted intervention:
The submitted intervention is Hot Math Tutoring without Classroom Hot Math. In the research report, classrooms were randomly assigned to control (teacher-designed word-problem instruction; one-third of classrooms) or Classroom Hot Math (two-thirds of classrooms). Within these whole-class conditions, at-risk students were randomly assigned to control (no tutoring; one-third of students within each classroom condition) or Hot Math Tutoring (two-thirds of students within each classroom condition). The program condition in this protocol comprises AR students who received Hot Math Tutoring but not Classroom Hot Math. In the tables in the attached research report, look under “Class-level Condition: Control: Tutoring-level condition” (i.e., left side of tables) at the “AR tutor” column (second column).

Specify which condition is the control condition:
The control condition comprises AR students who received neither Classroom Hot Math nor Hot Math Tutoring. In the tables in the attached research report, look under “Class-level Condition: Control: Tutoring-level condition” (i.e., left side of tables) at the “AR control” column (third column). The contrast addressed in this protocol compares these AR control students with AR students who received Hot Math Tutoring; none of the students in either condition received Classroom Hot Math.

If you have a third, competing condition, in addition to your control and intervention condition, identify what the competing condition is (data from this competing condition will not be used):

Using the tables that follow, provide data demonstrating comparability of the program group and control group in terms of demographics.

Grade Level

Demographic                        Program Number   Control Number   Effect Size: Cox Index for Binary Differences
Age less than 1
Age 1
Age 2
Age 3
Age 4
Age 5
Kindergarten
Grade 1
Grade 2
Grade 3                            56               28               0.78
Grade 4
Grade 5
Grade 6
Grade 7
Grade 8
Grade 9
Grade 10
Grade 11
Grade 12

Race–Ethnicity

Demographic                        Program Number   Control Number   Effect Size: Cox Index for Binary Differences
African American                   35               17               0.12
American Indian                    0                0                0.00
Asian/Pacific Islander             0                0                0.00
Hispanic                           2                3                0.78
White                              16               6                0.31
Other                              3                3                0.45

Socioeconomic Status

Demographic                        Program Number   Control Number   Effect Size: Cox Index for Binary Differences
Subsidized Lunch                   43               21               0.18
No Subsidized Lunch                13               7                0.03

Disability Status

Demographic                        Program Number   Control Number   Effect Size: Cox Index for Binary Differences
Speech-Language Impairments
Learning Disabilities              6                4                0.18
Behavior Disorders
Emotional Disturbance
Intellectual Disabilities
Other
Not Identified With a Disability   50               24               0.37

ELL Status

Demographic                        Program Number   Control Number   Effect Size: Cox Index for Binary Differences
English Language Learner           0                1                2.08
Not English Language Learner       56               27               0.95

Gender

Demographic                        Program Number   Control Number   Effect Size: Cox Index for Binary Differences
Female                             26               16               0.17
Male                               30               12               0.32

Mean Effect Size

0.45
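For reference, the Cox index converts a difference in binary proportions to an approximate standardized mean difference: the difference in log odds divided by 1.65. A minimal sketch follows; note that values computed this way from the raw counts in the tables above need not reproduce the reported figures exactly, since the study may have pooled or adjusted the proportions differently:

```python
import math

def cox_index(p_program, p_control):
    """Cox effect-size index for a binary variable:
    difference in log odds divided by 1.65.
    Undefined when either proportion is exactly 0 or 1."""
    def logit(p):
        return math.log(p / (1 - p))
    return (logit(p_program) - logit(p_control)) / 1.65

# Example with the Male row counts above: 30/56 program, 12/28 control.
d = cox_index(30 / 56, 12 / 28)
print(round(d, 2))  # 0.26
```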

For any substantively (e.g., effect size ≥ 0.25 for pretest or demographic differences) or statistically significant (e.g., p < 0.05) pretest differences between groups in the descriptions below, please describe the extent to which these differences are related to the impact of the treatment. For example, if analyses were conducted to determine that outcomes from this study are due to the intervention and not demographic characteristics, please describe the results of those analyses here.

Design: Full Bubble

What method was used to determine students' placement in treatment/control groups?
Random
Please describe the assignment method or the process for defining treatment/comparison groups.
In a southeastern metropolitan school district, 120 third-grade classrooms participated in this study. Stratifying so that each condition was represented approximately comparably in each school, we randomly assigned 40 classrooms to the control condition (i.e., 3 weeks of researcher-designed general math problem-solving instruction) and 80 classrooms to the Hot Math SBI condition (i.e., 3 weeks of researcher-designed general math problem-solving instruction plus 13 weeks of researcher-designed SBI). The study occurred over 4 school years during a time when the school district was relatively stable; one quarter of the sample entered the study each year. During the first 3 years, SBI classrooms were randomly assigned to Hot Math SBI or to a variant designed to strengthen Hot Math SBI. In the first three cohorts, the effects of the two SBI conditions were not statistically significantly different, but both were reliably better than control. Therefore, we did not test a variant in Cohort 4 (all Cohort 4 teachers were randomly assigned to control or to the standard version of SBI). We considered all students in the SBI classrooms to have participated in one SBI condition; however, to assess SBI variants, we included cohort effects in the analytic model. One Cohort 3 classroom in the SBI condition left the study during the first month of participation because of the classroom teacher’s personal reasons.

What was the unit of assignment?
Teachers
If other, please specify:

Please describe the unit of assignment:

What unit(s) were used for primary data analysis?
not selected Schools
selected Teachers
not selected Students
not selected Classes
not selected Other
If other, please specify:

Please describe the unit(s) used for primary data analysis:

Fidelity of Implementation: Full Bubble

How was the program delivered?
not selected Individually
selected Small Group
not selected Classroom

If small group, answer the following:

Average group size
3
Minimum group size
2
Maximum group size
4

What was the duration of the intervention (If duration differed across participants, settings, or behaviors, describe for each.)?

Weeks
13.00
Sessions per week
3.00
Duration of sessions in minutes
25.00
What were the background, experience, training, and ongoing support of the instructors or interventionists?
None of the tutors was a certified teacher; only one tutor had previous tutoring experience. Tutors were trained in one full-day session, in which they were introduced to the program and its goals and provided instruction, demonstrations, and scripted materials. They were paired to practice the program. Then, they conducted one lesson for a trainer and were judged on a point-by-point system for fidelity to treatment. A tutor who achieved 95% fidelity was considered reliable. A tutor who scored lower than 95% fidelity was coached on the points he/she missed, asked to practice more, and then re-rated at a later time on another lesson. At weekly meetings, tutors met with a trainer to solve problems that arose. At the beginning of each unit, a 3-hour training session was conducted to orient tutors and distribute supporting materials. Across the four years of the study, the typical tutor was one to two years beyond undergraduate education and studying for a graduate degree in education, special education, counseling, or education policy. The majority of tutors worked for the project one year, with three tutors working for more than one year. Each year of the study, two full-time project coordinators, typically with bachelor's- or master's-level degrees outside of education, also tutored. Each year, five or six tutors were needed. (None of the tutors conducted both Classroom Hot Math and Hot Math Tutoring.)

Describe when and how fidelity of treatment information was obtained.
Each tutoring session was audiotaped. At the study’s end, four research assistants independently listened to tapes while completing a checklist to identify the percentage of points addressed. We sampled tapes so that, within conditions, tutors, groups, and session numbers were sampled equitably. For each of 64 tutoring small groups, 20% of sessions were sampled (7-8 tapes distributed equally across the four units). Intercoder agreement, calculated on 20% of the sampled tapes, was 96.4%.

What were the results on the fidelity-of-treatment implementation measure?
The mean percentage of points addressed across all units was 98.12 (SD = 1.28).

Was the fidelity measure also used in control classrooms?

Measures and Results

Measures Targeted: Full Bubble
Measures Broader: Full Bubble

Study measures are classified as targeted, broader, or administrative data according to the following definitions:

  • Targeted measures
    Assess outcomes, such as competencies or skills that the program was directly targeted to improve.
    • In the academic domain, targeted measures typically are not the very items taught but rather novel items structured similarly to the content addressed in the program. For example, if a program taught word-attack skills, a targeted measure would be decoding of pseudo words. If a program taught comprehension of cause-effect passages, a targeted measure would be answering questions about cause-effect passages structured similarly to those used during intervention, but not including the very passages used for intervention.
    • In the behavioral domain, targeted measures evaluate aspects of external or internal behavior the program was directly targeted to improve and are operationally defined.
  • Broader measures
    Assess outcomes that are related to the competencies or skills targeted by the program but not directly taught in the program.
    • In the academic domain, if a program taught word-level reading skill, a broader measure would be answering questions about passages the student reads. If a program taught calculation skill, a broader measure would be solving word problems that require the same kinds of calculation skill taught in the program.
    • In the behavioral domain, if a program taught a specific skill like on-task behavior in one classroom, a broader measure would be academic performance in that setting or on-task behavior in another setting.
  • Administrative data measures apply only to behavioral intervention tools and are measures such as office discipline referrals (ODRs) and graduation rates which do not have psychometric properties as do other, more traditional targeted or broader measures.



What populations are you submitting outcome data for?
selected Full sample
not selected Students at or below the 20th percentile
not selected English language learners
not selected Racial/ethnic subgroups
not selected Economically disadvantaged students (low socioeconomic status)
Targeted Measure Reverse Coded? Reliability Relevance Exposure
Broader Measure Reverse Coded? Reliability Relevance Exposure
Administrative Data Measure Reverse Coded? Relevance

Posttest Data

Targeted Measures (Full Sample)

Measure Sample Type Effect Size P

Broader Measures (Full Sample)

Measure Sample Type Effect Size P

Administrative Measures (Full Sample)

Measure Sample Type Effect Size P

Targeted Measures (Subgroups)

Measure Sample Type Effect Size P

Broader Measures (Subgroups)

Measure Sample Type Effect Size P

Administrative Measures (Subgroups)

Measure Sample Type Effect Size P
For any substantively (e.g., effect size ≥ 0.25 for pretest or demographic differences) or statistically significant (e.g., p < 0.05) pretest differences, please describe the extent to which these differences are related to the impact of the treatment. For example, if analyses were conducted to determine that outcomes from this study are due to the intervention and not pretest characteristics, please describe the results of those analyses here.
Please explain any missing data or instances of measures with incomplete pre- or post-test data.
If you have excluded a variable or data that are reported in the study being submitted, explain the rationale for exclusion:
Describe the analyses used to determine whether the intervention produced changes in student outcomes:
We converted scores on the three problem-solving measures to percentage correct so that performance on the three measures could be compared. To examine how much of the total variance in improvement on the three problem-solving measures was explained by the clustering of children in classrooms and in tutoring groups, we estimated variance components with SAS PROC MIXED (Littell, 2006). The resulting intraclass correlations showed that classroom clustering explained 16.10% of the variance (p < .001) and tutoring-group clustering explained 4.10% of the variance (p = .006). We therefore incorporated each as a random effect into our model, which also included four fixed effects: one within-subjects factor (problem-solving measure) and three between-subjects factors (classroom condition, tutoring condition, and cohort).

To assess pretreatment comparability, we fit a full model that included all main effects and all 2-way, 3-way, and 4-way interactions, and estimated classroom as a random effect. To index learning as a function of study condition, we used improvement on the three problem-solving measures. (Fitting a model using improvement scores produces effects identical to those obtained by modeling the interaction between test occasion [pre vs. posttest] and study condition; we opted for improvement scores because their interpretation is more straightforward.) In this full model, the variance component for tutoring group decreased to zero, indicating that all of the variance associated with tutoring-group clusters was explained by the model. We therefore fixed the random effect of tutoring group to zero and eliminated from the final model all higher-order interactions that were not statistically significant. To follow up significant effects, we Bonferroni-corrected p values by the number of follow-up tests run for that effect.
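The variance-partitioning step above reduces to simple arithmetic once the variance components are estimated. The following is an illustrative sketch with made-up values, not the study's SAS PROC MIXED code:

```python
def intraclass_correlation(var_between: float, var_within: float) -> float:
    # ICC: proportion of total variance attributable to cluster membership
    return var_between / (var_between + var_within)

# Hypothetical components: 4.0 variance units between classrooms and
# 21.0 within, so classroom clustering explains 4 / 25 = 16% of the
# total variance (comparable in magnitude to the 16.10% reported).
classroom_icc = intraclass_correlation(4.0, 21.0)
```

An ICC near zero (as the tutoring-group component became in the full model) indicates that cluster membership adds no explanatory variance, which is why that random effect could be fixed to zero.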
To compute effect sizes (ESs) for pre- and posttreatment scores, we divided the difference between means by the pooled SD (Hedges & Olkin, 1985). To compute ESs for improvement scores, we corrected for the correlation between pre- and posttest: the difference between improvement means was divided by the pooled SD of improvement over the square root of 2(1 - rxy) (Glass, McGaw, & Smith, 1981).
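The two ES formulas described above can be sketched as follows, assuming equal-variance pooling across two independent groups; function and variable names are illustrative, not from the study:

```python
import math

def pooled_sd(sd1: float, n1: int, sd2: float, n2: int) -> float:
    # Pooled standard deviation across two independent groups
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def es_posttest(m1, sd1, n1, m2, sd2, n2):
    # Hedges & Olkin (1985): mean difference over pooled SD
    return (m1 - m2) / pooled_sd(sd1, n1, sd2, n2)

def es_improvement(gain_m1, gain_sd1, n1, gain_m2, gain_sd2, n2, r_xy):
    # Glass, McGaw, & Smith (1981): the pooled SD of improvement (gain)
    # scores is rescaled to the raw-score metric by dividing it by
    # sqrt(2 * (1 - r_xy)), where r_xy is the pre-post correlation
    sd_gain = pooled_sd(gain_sd1, n1, gain_sd2, n2)
    return (gain_m1 - gain_m2) / (sd_gain / math.sqrt(2 * (1 - r_xy)))
```

Note that when the pre-post correlation is .50, sqrt(2(1 - rxy)) = 1 and the improvement ES reduces to the simple gain-score ES; higher correlations shrink the gain-score SD and the correction compensates accordingly.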

Additional Research

Is the program reviewed by WWC or E-ESSA?
WWC & E-ESSA
Summary of WWC / E-ESSA Findings :

What Works Clearinghouse


WWC only reviewed the report “Effects of small-group tutoring with and without validated classroom instruction on at-risk students’ math problem solving: Are two tiers of prevention better than one?” The findings from this review do not reflect the full body of research evidence on Hot Math Tutoring.


WWC Rating: Meets WWC standards without reservations.


Full Report

Evidence for ESSA

No studies considered met Evidence for ESSA's inclusion requirements.

How many additional research studies are potentially eligible for NCII review?
0
Citations for Additional Research Studies :

Disclaimer

Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.