Opportunities to Respond

Study: Lambert, Cartledge, Heward, & Lo (2006)

Study Type: Single-Subject Design


Opportunities to Respond (OtR) is an intervention that involves providing all students in a group or classroom with the means (e.g., dry erase board, response cards) to respond to all questions posed by the teacher. The intent is to increase engagement by giving students the opportunity to respond to academic questions at a higher rate than the traditional form of hand raising provides. 

Opportunities to Respond is intended for use in Kindergarten through high school. It is intended for use with students with disabilities, learning disabilities, emotional or behavioral disabilities, English Language Learners, and any student at risk for emotional and/or behavioral difficulties.

The areas of focus are externalizing behavior (including high levels of disengagement, and disruptive behavior) and internalizing behavior (including anxiety). 

Opportunities to Respond is a non-commercial intervention and, therefore, does not have a formal pricing plan. The only materials required for implementation are supplies for responding (e.g., cards, whiteboards, and markers); beyond these, no costs are associated with implementation.

Opportunities to Respond is designed for use with small groups or whole classrooms of students. Only one interventionist is needed to implement the program.

The length and frequency of program administration vary depending on the procedures used; the intervention should be implemented until it is effective.

The program includes highly specified teacher manuals or instructions for implementation.

The program is not affiliated with a broad school or class wide management program.

Technology is not required for implementation. 

Training is not required for the interventionist, though, if needed, it can likely be completed in less than one hour.

The interventionist must at a minimum be a paraprofessional.

Training manuals and materials are not available although the intervention is clearly described in published research. There is no ongoing support available for practitioners. 

 

Participants: Unconvincing Evidence

Risk Status: Students are not described as having special education labels.

Demographics: Nominated students were verified by the first author through three sessions of direct observation. Eight students were African American and one was Caucasian; they ranged in age from 9 years 4 months to 10 years 8 months. All students received free or reduced-price lunch.

Training of Instructors: Both classroom teachers were Caucasian, certified in elementary education, and had two years of teaching experience in their current classrooms. Teacher A was a male in his first teaching assignment, and Teacher B was a female who had previously taught for 1.5 years in a middle school. Both had previously participated in in-service training provided by the authors as part of a larger project on academic and behavioral interventions, but neither had used response cards.

Design: Convincing Evidence

Does the study include three data points or sufficient number to document a stable performance within that phase? Yes

Is there opportunity for at least three demonstrations of experimental control? Yes

If the study is an alternating treatment design, are there five repetitions of the alternating sequence? Not applicable

If the study is a multiple baseline, is it concurrent? Not applicable

Implemented with Fidelity: Partially Convincing Evidence

Description of when and how fidelity of treatment information was obtained: Integrity checks were conducted for 30% of sessions within each condition using a checklist for each condition.

Results on the fidelity of treatment implementation measure: Across both classrooms, the teachers averaged 95.7% (range 91%-99%) compliance with steps during single-student responding and 97.5% (range 96%-98%) compliance during response cards.

Measures Targeted: Convincing Evidence

Targeted Measure | Reliability statistics | Relevance to program focus | Exposure to related support among control group
Disruptive behavior | Interobserver agreement | |
Hand raise | Interobserver agreement | |
Academic response | Interobserver agreement | |
Correct response | Not described | |

 

Broader Measure | Reliability statistics | Relevance to program focus | Exposure to related support among control group
N/A | | |

Mean ES Targeted Outcomes: N/A

Mean ES Administrative Outcomes: N/A

Effect Size:

Visual Analysis (Single-Subject Designs): Convincing Evidence

Description of the method of analysis used to determine whether the intervention condition improved relative to the baseline phase (e.g., visual analysis, computation of change score, mean difference): Visual analysis and mean difference.

Results in terms of within and between phase patterns: When analyzing disruptive behavior between conditions (single-student responding [SSR] and response cards [RC]) for Classroom A, there was an immediate effect from SSR 1 to RC 1, with only 1 overlapping data point across all 4 participants. There was an immediate increase in disruptive behaviors from RC 1 to SSR 2, again with only 1 overlapping data point. In the last phase change, from SSR 2 to RC 2, there was also an immediate reduction in disruptive behavior, but a higher rate of overlap (5 overlapping data points).

Similar results were found for Classroom B in terms of immediacy: only 1 of 5 students had overlapping data points from SSR 1 to RC 1 (4/5 overlapping points for that student). When returning to SSR 2, however, 4 of 5 students had overlapping data points (2/7 for B1, 0/6 for B2, 1/7 for B3, 6/7 for B4, and 4/6 for B5). When returning to RC 2, effects were immediate, with 3 of 5 students having overlapping data points (3/11 for B1, 0 for B2, 0 for B3, 8/8 for B4, and 10/10 for B5).

Mean data for disruptive behavior showed that the mean number of disruptive behaviors was lower during the RC conditions (Classroom A: M = 1.1 for RC 1 and 2.1 for RC 2; Classroom B: M = 1.0 for both RC 1 and RC 2) than during the SSR conditions (Classroom A: M = 7.4 for SSR 1 and 8.2 for SSR 2; Classroom B: M = 6.7 and 5.3, respectively).

A visual analysis of academic responses per minute showed an immediate increase in the rate of academic responding when moving from each SSR condition to each RC condition, with no overlapping data points in any phase for any student in either classroom.

The mean academic responses per minute and mean percentage of response accuracy for each student across conditions were also analyzed. In both classrooms, students' rates of academic responding were lower during the SSR conditions than during the RC conditions: Classroom A (M = 0.7 for SSR 1, 0.7 for SSR 2, 1.03 for RC 1, and 0.90 for RC 2) and Classroom B (M = 0.13 for SSR 1, 0.08 for SSR 2, 0.92 for RC 1, and 0.91 for RC 2) both had more responses during the RC conditions.

There did not appear to be much difference in academic accuracy. Targeted students in Classroom A averaged 89.7% accuracy during SSR 1, 88.3% during RC 1, 100% during SSR 2, and 90% during RC 2. In Classroom B, targeted students averaged 87.4% accuracy during SSR 1, 90.3% during RC 1, 97.4% during SSR 2, and 92.0% during RC 2.
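For readers who want to see how the overlap and mean-difference comparisons above are computed, the short Python sketch below counts overlapping data points between adjacent phases and calculates the mean difference in disruptive behaviors. The session values are hypothetical placeholders chosen for illustration; they are not the study's raw data, which the report summarizes only at the level of means and overlap counts.

# Minimal sketch of the overlap and mean-difference comparisons described above.
# Session values are hypothetical placeholders, not data from Lambert et al. (2006).

def overlapping_points(ssr_phase, rc_phase):
    # For disruptive behavior (lower is better), an RC data point "overlaps"
    # the preceding SSR phase if it is not below the lowest SSR value.
    best_ssr = min(ssr_phase)
    return sum(1 for value in rc_phase if value >= best_ssr)

def mean_difference(ssr_phase, rc_phase):
    # Mean disruptive behaviors in SSR minus mean in RC (positive = improvement).
    return sum(ssr_phase) / len(ssr_phase) - sum(rc_phase) / len(rc_phase)

# Hypothetical counts of disruptive behaviors per session for one student.
ssr_1 = [8, 7, 9, 6, 7]
rc_1 = [2, 1, 0, 1, 2]

print(overlapping_points(ssr_1, rc_1))         # 0 overlapping data points
print(round(mean_difference(ssr_1, rc_1), 1))  # 6.2 fewer behaviors per session during RC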

Disaggregated Outcome Data Available for Demographic Subgroups: No

Target Behavior(s): Externalizing, Internalizing

Delivery: Small Groups (n = 3-30)

Fidelity of Implementation Check List Available: No

Minimum Interventionist Requirements: Paraprofessionals, 0-1 hour of training

Intervention Reviewed by What Works Clearinghouse: No

What Works Clearinghouse Review

This program was not reviewed by What Works Clearinghouse.

Other Research: Potentially Eligible for NCII Review: 0 studies