Learning Strategies Curriculum: Test-Taking Strategy
Study: Hughes & Schumaker (1991)
Summary
The purpose of this instructional program is to teach secondary students how to take tests. Research has shown that the majority of secondary students’ course grades are derived from their test scores. Such tests are composed of an average of 32 questions requiring 40 answers, which translates into students being required to make about 1.3 responses per minute in a 50-minute class period. Additionally, these tests usually have about four sections, each with instructions that are about three sentences long. Thus, students must be strategic in order to respond to these tests quickly and efficiently. The Test-Taking Strategy is designed to help students markedly improve their performance on these tests. By using the strategy, they (a) allocate time and order of importance to each section of a test, (b) carefully read and focus on important elements of test instructions, (c) quickly progress through the test by selectively answering or abandoning questions, (d) make well-informed guesses, and (e) talk themselves through the process to stay calm and keep using the strategy steps. In other words, students take an active role during each testing situation. Research has shown that instruction in this strategy results in significant gains in student performance on tests and in improved course grades.
- Target Grades:
- 4, 5, 6, 7, 8, 9, 10, 11, 12
- Target Populations:
-
- Students with learning disabilities
- Students with emotional or behavioral disabilities
- English language learners
- Any student at risk for academic failure
- Any student at risk for emotional and/or behavioral difficulties
- Other: Any student having difficulty taking tests or failing tests in required courses
- Area(s) of Focus:
-
- Other: This program helps students with all aspects of taking tests, including expressing knowledge when writing essays.
- Where to Obtain:
- Edge Enterprises, Inc. (publisher); Charles Hughes, Jean Schumaker, Donald Deshler, & Cecil Mercer (authors/developers)
- Edge Enterprises, Inc., 708 W. 9th St. Suite 104, Lawrence, KS 66044
- 785-749-1473; FAX: 785-749-0207
- www.edgeenterprisesinc.com
- Initial Cost:
- $15.00 per Teacher
- Replacement Cost:
- $15.00 per Teacher
-
The instructor's (teacher's) manual costs $15. It includes step-by-step instructions for teaching the Test-Taking Strategy to students. It also contains all the materials needed for teaching the strategy, including visual aids, progress charts, score sheets, a pretest and posttest, practice tests, maintenance tests, and answer keys. The teacher has permission to copy any of these materials for use with students. A CD is also available to accompany the instructor's manual. The CD program contains the majority of the instruction in the Test-Taking Strategy for students. Students hear and see other students talking about and using parts of the strategy, and they can practice using the steps of the strategy. After working through the CD, students receive some final instruction from the instructor along with final practice activities using practice tests. The CD gives both students and teachers added flexibility, increasing students' opportunities to practice the strategy and to observe how other students use it successfully. The instructor’s manual must be purchased with the CD; they come as a set, which costs $41. Additional CDs can be purchased for $30 each (teachers often want multiple copies for their rooms).
- Staff Qualified to Administer Include:
-
- Special Education Teacher
- General Education Teacher
- Reading Specialist
- Math Specialist
- EL Specialist
- Interventionist
- Training Requirements:
- Approximately 2 to 3 hours
-
The training involves lecture, discussion, question/answer, cooperative-group activities, practice activities, scoring activities, and planning for implementation.
The training manual and associated materials were field-tested by the International Network of Certified Professional Developers associated with the University of Kansas Center for Research on Learning. These professionals provided feedback and the materials were refined and revised accordingly.
- Access to Technical Support:
- Initial training, coaching, and ongoing support can be provided by members of the International Network of Professional Developers associated with the University of Kansas Center for Research on Learning.
- Recommended Administration Formats Include:
-
- Individual students
- Small group of students
- Minimum Number of Minutes Per Session:
- 30
- Minimum Number of Sessions Per Week:
- 5
- Minimum Number of Weeks:
- 4
- Detailed Implementation Manual or Instructions Available:
- Yes
- Is Technology Required?
- No technology is required.
Program Information
Descriptive Information
Please provide a description of program, including intended use:
The purpose of this instructional program is to teach secondary students how to take tests. Research has shown that the majority of secondary students’ course grades are derived from their test scores. Such tests are composed of an average of 32 questions requiring 40 answers, which translates into students being required to make about 1.3 responses per minute in a 50-minute class period. Additionally, these tests usually have about four sections, each with instructions that are about three sentences long. Thus, students must be strategic in order to respond to these tests quickly and efficiently. The Test-Taking Strategy is designed to help students markedly improve their performance on these tests. By using the strategy, they (a) allocate time and order of importance to each section of a test, (b) carefully read and focus on important elements of test instructions, (c) quickly progress through the test by selectively answering or abandoning questions, (d) make well-informed guesses, and (e) talk themselves through the process to stay calm and keep using the strategy steps. In other words, students take an active role during each testing situation. Research has shown that instruction in this strategy results in significant gains in student performance on tests and in improved course grades.
The program is intended for use in the following age(s) and/or grade(s).
Age 3-5
Kindergarten
First grade
Second grade
Third grade
Fourth grade
Fifth grade
Sixth grade
Seventh grade
Eighth grade
Ninth grade
Tenth grade
Eleventh grade
Twelfth grade
The program is intended for use with the following groups.
Students with learning disabilities
Students with intellectual disabilities
Students with emotional or behavioral disabilities
English language learners
Any student at risk for academic failure
Any student at risk for emotional and/or behavioral difficulties
Other
If other, please describe:
Any student having difficulty taking tests or failing tests in required courses
ACADEMIC INTERVENTION: Please indicate the academic area of focus.
Early Literacy
Alphabet knowledge
Phonological awareness
Early writing
Early decoding abilities
Other
If other, please describe:
Language
Grammar
Syntax
Listening comprehension
Other
If other, please describe:
Reading
Phonics/word study
Comprehension
Fluency
Vocabulary
Spelling
Other
If other, please describe:
Mathematics
Concepts and/or word problems
Whole number arithmetic
Comprehensive: Includes computation/procedures, problem solving, and mathematical concepts
Algebra
Fractions, decimals (rational number)
Geometry and measurement
Other
If other, please describe:
Writing
Spelling
Sentence construction
Planning and revising
Other
If other, please describe:
This program helps students with all aspects of taking tests, including expressing knowledge when writing essays.
BEHAVIORAL INTERVENTION: Please indicate the behavior area of focus.
Externalizing Behavior
Verbal Threats
Property Destruction
Noncompliance
High Levels of Disengagement
Disruptive Behavior
Social Behavior (e.g., Peer interactions, Adult interactions)
Other
If other, please describe:
Internalizing Behavior
Anxiety
Social Difficulties (e.g., withdrawal)
School Phobia
Other
If other, please describe:
Acquisition and cost information
Where to obtain:
- Address
- Edge Enterprises, Inc., 708 W. 9th St. Suite 104, Lawrence, KS 66044
- Phone Number
- 785-749-1473; FAX: 785-749-0207
- Website
- www.edgeenterprisesinc.com
Initial cost for implementing program:
- Cost
- $15.00
- Unit of cost
- Teacher
Replacement cost per unit for subsequent use:
- Cost
- $15.00
- Unit of cost
- Teacher
- Duration of license
- N/A
Additional cost information:
Describe basic pricing plan and structure of the program. Also, provide information on what is included in the published program, as well as what is not included but required for implementation (e.g., computer and/or internet access)
The instructor's (teacher's) manual costs $15. It includes step-by-step instructions for teaching the Test-Taking Strategy to students. It also contains all the materials needed for teaching the strategy, including visual aids, progress charts, score sheets, a pretest and posttest, practice tests, maintenance tests, and answer keys. The teacher has permission to copy any of these materials for use with students. A CD is also available to accompany the instructor's manual. The CD program contains the majority of the instruction in the Test-Taking Strategy for students. Students hear and see other students talking about and using parts of the strategy, and they can practice using the steps of the strategy. After working through the CD, students receive some final instruction from the instructor along with final practice activities using practice tests. The CD gives both students and teachers added flexibility, increasing students' opportunities to practice the strategy and to observe how other students use it successfully. The instructor’s manual must be purchased with the CD; they come as a set, which costs $41. Additional CDs can be purchased for $30 each (teachers often want multiple copies for their rooms).
Program Specifications
Setting for which the program is designed.
Small group of students
BI ONLY: A classroom of students
If group-delivered, how many students compose a small group?
4 to 6
Program administration time
- Minimum number of minutes per session
- 30
- Minimum number of sessions per week
- 5
- Minimum number of weeks
- 4
- If the intervention program is intended to occur for less than 60 minutes a week over approximately 8 weeks, justify the level of intensity:
- This program is to be taught until students master using and generalizing the strategy. Instruction can be discontinued after mastery is attained.
Does the program include highly specified teacher manuals or step-by-step instructions for implementation?- Yes
BEHAVIORAL INTERVENTION: Is the program affiliated with a broad school- or class-wide management program?- No
-
If yes, please identify and describe the broader school- or class-wide management program: -
Does the program require technology? - No
-
If yes, what technology is required to implement your program? -
Computer or tablet
Internet connection
Other technology (please specify)
If your program requires additional technology not listed above, please describe the required technology and the extent to which it is combined with teacher small-group instruction/intervention:
No technology is required. However, a multimedia program is available for use in teaching individual students the Test-Taking Strategy. This program requires teacher supervision and occasional teacher participation in providing feedback to a student. Otherwise, students can work through the program at their own pace. It is available through Edge Enterprises, Inc. for $30 per cd or $36 for a flash drive. Teachers will also need a copy of the instructor's manual.
Training
- How many people are needed to implement the program?
- 1
Is training for the instructor or interventionist required?- Yes
- If yes, is the necessary training free or at-cost?
- At-cost
Describe the time required for instructor or interventionist training:- Approximately 2 to 3 hours
Describe the format and content of the instructor or interventionist training:- The training involves lecture, discussion, question/answer, cooperative-group activities, practice activities, scoring activities, and planning for implementation.
What types of professionals are qualified to administer your program?
General Education Teacher
Reading Specialist
Math Specialist
EL Specialist
Interventionist
Student Support Services Personnel (e.g., counselor, social worker, school psychologist, etc.)
Applied Behavior Analysis (ABA) Therapist or Board Certified Behavior Analyst (BCBA)
Paraprofessional
Other
If other, please describe:
- Does the program assume that the instructor or interventionist has expertise in a given area?
-
No
If yes, please describe:
Are training manuals and materials available?- Yes
-
Describe how the training manuals or materials were field-tested with the target population of instructors or interventionist and students: - The training manual and associated materials were field-tested by the International Network of Certified Professional Developers associated with the University of Kansas Center for Research on Learning. These professionals provided feedback and the materials were refined and revised accordingly.
Do you provide fidelity of implementation guidance such as a checklist for implementation in your manual?- Yes
-
Can practitioners obtain ongoing professional and technical support? -
Yes
If yes, please specify where/how practitioners can obtain support:
Initial training, coaching, and ongoing support can be provided by members of the International Network of Professional Developers associated with the University of Kansas Center for Research on Learning.
Summary of Evidence Base
- Please identify, to the best of your knowledge, all the research studies that have been conducted to date supporting the efficacy of your program, including studies currently or previously submitted to NCII for review. Please provide citations only (in APA format); do not include any descriptive information on these studies. NCII staff will also conduct a search to confirm that the list you provide is accurate.
-
Hughes, C. A., & Schumaker, J. B. (1991). Test-taking strategy instruction for adolescents with learning disabilities. Exceptionality, 2, 205-221. https://doi.org/10.1080/09362839109524784
Hughes, C. A., & Schumaker, J. B. (1991). Reflections on "Test-taking strategy instruction for adolescents with learning disabilities." Exceptionality, 2, 237-242.
Hughes, C. A., Deshler, D. D., Ruhl, K. L., & Schumaker, J. B. (1993). Test-taking strategy instruction for adolescents with emotional and behavioral disorders. Journal of Emotional and Behavioral Disorders, 1(3), 189-198. https://doi.org/10.1177/106342669300100307
Study Information
Study Citations
Hughes, C. A., & Schumaker, J. B. (1991). Test-taking strategy instruction for adolescents with learning disabilities. Exceptionality, 2, 205-221.
Participants
- Describe how students were selected to participate in the study:
- Six students, five eighth graders and one seventh grader (five males and one female), served as subjects. Half the students were white; half were black. Ages ranged from 13.1 to 17.2 years (M = 15.1 years). Full-scale IQ scores (derived from the Wechsler Intelligence Scale for Children-Revised) ranged from 80 to 101 (M = 88), and their reading grade-level scores (derived from the Metropolitan Achievement Test) ranged from 4.0 to 7.1 (M = 5.2). All the students, who were enrolled in a resource class for one or two periods a day, had been formally classified as having learning disabilities according to state of Florida guidelines. To qualify for participation in this study, students had to be (a) classified as having a learning disability; (b) enrolled in a mainstream science or social studies course; (c) capable of reading at or above the fourth-grade level; and (d) earning below-average or failing test scores in at least one science or one social studies course.
-
Describe how students were identified as being at risk for academic failure (AI) or as having emotional/behavioral difficulties (BI): - Guidelines in the state of Florida at the time of the study specified that students had to be achieving two or more standard deviations below the mean in one or more core academic skill areas to be classified as students with a learning disability. Additionally, both general education and special education teachers nominated the students who participated in the study as meeting the learning disability criteria.
-
ACADEMIC INTERVENTION: What percentage of participants were at risk, as measured by one or more of the following criteria:- below the 30th percentile on local or national norm, or
- identified disability related to the focus of the intervention?
- 100.0%
-
BEHAVIORAL INTERVENTION: What percentage of participants were at risk, as measured by one or more of the following criteria:- emotional disability label,
- placed in an alternative school/classroom,
- non-responsive to Tiers 1 and 2, or
- designation of severe problem behaviors on a validated scale or through observation?
- %
Provide a description of the demographic and other relevant characteristics of the case used in your study (e.g., student(s), classroom(s)).
Case (Name or number) | Age/Grade | Gender | Race / Ethnicity | Socioeconomic Status | Disability Status | ELL status | Other Relevant Descriptive Characteristics |
---|---|---|---|---|---|---|---|
Design
- Please describe the study design:
- Multiple Probe Design Across Three Participants Replicated Once. NOTE: The DESIGN was repeated once. Each design contained three participants for a total of six participants.
Clarify and provide a detailed description of the treatment in the submitted program/intervention:- The overriding goal of this intervention was to teach students a strategy for taking the tests most commonly administered in general education classrooms (e.g., end-of-chapter or unit tests), which contain multiple-choice, fill-in-the-blank, matching, and short-essay questions. A set of steps was designed based on the following factors: (a) the most frequently cited test-taking principles and strategies reported in the literature, (b) an analysis of the nature and extent of cued items in content tests, and (c) a task analysis of test-taking behaviors. The identified behaviors were placed in sequence. To help students remember the behaviors, a first-letter mnemonic device (“PIRATES”) was designed to increase the likelihood that students would remember the strategy steps in a testing situation. In the Prepare to Succeed step, students say a positive statement to themselves to affirm their belief in themselves as test takers, scan the test to determine sections and types of questions, consider the difficulty of each section and rank the sections, and begin working on the first-ranked section. In the Inspect the Instructions step, students read the instructions for the first-ranked section, underlining key words that suggest how and where to respond. In the Read, Remember, Reduce step, students read the item, remember what they have learned/studied, and eliminate obviously wrong response options. In the Answer or Abandon step, students either answer the item or abandon it for the moment by marking it with a symbol to remind them to answer it later. Students continue cycling through the third and fourth steps until they reach the end of a section. Then they apply the Inspect the Instructions step to the next set of instructions and repeat the other steps until they reach the end of the test.
At this point, they complete the Turn Back step by going back to the beginning of the test to answer abandoned items. If they don’t know an answer, they apply the Estimate step to help them make an informed guess. They are taught specific guessing techniques based on the empirical literature on test taking (e.g., avoid absolutes). Finally, they Survey the test by looking over the entire test to ensure that they have not left any question unanswered. To standardize the instructional procedures, an instructional manual was written (Hughes, Schumaker, Deshler, & Mercer, 1988). This manual contains very detailed scripts for teachers to follow to ensure both accuracy and consistency of instruction. The instructional methodology comprised seven stages similar to those described in Ellis, Deshler, Lenz, Schumaker, and Clark (1991) for promoting strategy acquisition and generalization in adolescents with LD. The first three stages were Describe (the purpose of the Test-Taking Strategy was introduced, along with rationales for using it and a description of each strategy step); Model (the researcher modeled how to complete each strategy step in sequence while thinking aloud so students could witness all the cognitive processes involved in performing the steps); and Verbal Rehearsal (the researcher led students in a rapid-fire exercise to rehearse the steps of the strategy; this practice occurred initially as a group and then individually to ensure that each student met mastery in naming the steps). In the fourth stage, Initial Practice, students applied the steps of the test-taking strategy to practice tests constructed at their instructional reading level and received individual elaborated feedback after each practice attempt. Students continued to practice until they met the mastery criterion of 90% of the required responses.
Next, they engaged in Advanced Practice, in which students were administered practice tests constructed at their grade level. After completing each practice test, they were given individual elaborated feedback until they reached a mastery criterion of 90%. In the Generalization stage, students were encouraged to use the Test-Taking Strategy each time they took a test in their general education classes. To assist themselves, they made cue cards listing the strategy steps to carry with them. After they took a test in a general education class, the researcher held an individual meeting with each student to review how they had applied the strategy and to provide feedback and suggestions for what they could do differently on the next test. During the Maintenance stage, approximately every two weeks, students completed a practice test to check their maintenance of the strategy steps.
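The PIRATES step sequence described above can be sketched as a simple checklist. The step names and descriptions are paraphrased from the study; the Python data structure and the `mnemonic` helper are illustrative assumptions, not part of the published program.

```python
# Illustrative sketch of the PIRATES step sequence (not the published
# program's materials). Step names follow the study's description.
PIRATES_STEPS = [
    ("P", "Prepare to succeed",
     "Say a positive self-statement, scan and rank the test's sections, "
     "and begin with the first-ranked section."),
    ("I", "Inspect the instructions",
     "Read the section's instructions, underlining key words about how "
     "and where to respond."),
    ("R", "Read, remember, reduce",
     "Read the item, recall what was studied, and eliminate obviously "
     "wrong response options."),
    ("A", "Answer or abandon",
     "Answer the item, or mark it with a symbol and return to it later."),
    ("T", "Turn back",
     "Return to the beginning of the test and answer abandoned items."),
    ("E", "Estimate",
     "Make an informed guess on unknown items (e.g., avoid absolutes)."),
    ("S", "Survey",
     "Look over the entire test to confirm no question is left unanswered."),
]

def mnemonic(steps):
    """Recover the first-letter mnemonic from the ordered step list."""
    return "".join(letter for letter, _, _ in steps)

print(mnemonic(PIRATES_STEPS))  # prints PIRATES
```

Note that the third and fourth steps (Read, Remember, Reduce and Answer or Abandon) are applied in a loop within each section, which a linear list does not capture; the sketch only records the ordered step names.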
Clarify what procedures occurred during the control/baseline condition (third, competing conditions are not considered; if you have a third, competing condition [e.g., multi-element single subject design with a third comparison condition], in addition to your control condition, identify what the competing condition is [data from this competing condition will not be used]):- During baseline, separate tests were administered to the students on different days. For each administration, the test was distributed, and the students were told that they would have 25 minutes to complete the test. Further, they were instructed to answer each question as well as possible and to do everything they normally would to earn the best grade possible on the test. They were informed that the purpose of the test was to determine how they take tests. They were also told that they might not know the answers to several items on the test, but that they should make their "best guess" in such instances. Finally, they were reassured that the grade they would earn on the test would not be included in their grade for a course. The researcher administered all of the practice tests.
Please describe how replication of treatment effect was demonstrated (e.g., reversal or withdrawal of intervention, across participants, across settings)- A multiple-probe design (Horner & Baer, 1978) was employed. Three students participated in each of two applications of the design. All students received at least two practice tests during baseline. When the baselines of the first two students in each design were stable, they received instruction that was followed by at least two practice tests and then by at least one maintenance-practice test. Once the first student within each multiple-probe design met mastery on two practice tests, the second student in that design was administered at least one more baseline practice test in addition to the two practice tests that had already been administered. When his or her baseline was stable, instruction began. After the second student in a design met mastery on two practice tests, the third student in that design was administered at least one more baseline practice test. Again, the instructional methodology was implemented when the third student’s baseline was stable. All students were administered at least one maintenance practice test. Thus, all students experienced three conditions: baseline, instruction, and maintenance.
-
Please indicate whether (and how) the design contains at least three demonstrations of experimental control (e.g., ABAB design, multiple baseline across three or more participants). - It was replicated across participants four times – twice in each of the two designs.
If the study is a multiple baseline, is it concurrent or non-concurrent?- Concurrent
Fidelity of Implementation
- How was the program delivered?
-
Individually
Small Group
Classroom
If small group, answer the following:
- Average group size
- 4
- Minimum group size
- 3
- Maximum group size
- 5
What was the duration of the intervention (If duration differed across participants, settings, or behaviors, describe for each.)?
- Weeks
- 4.00
- Sessions per week
- 5.00
- Duration of sessions in minutes
- 30.00
- Weeks
- Sessions per week
- Duration of sessions in minutes
- Weeks
- Sessions per week
- Duration of sessions in minutes
- What were the background, experience, training, and ongoing support of the instructors or interventionists?
- The instructor (the first author), who was a graduate student at the time, administered all practice tests. He (a) was certified to teach students with LD; (b) had 4 years' experience teaching students with learning and behavioral disabilities; and (c) was certified to provide learning strategies instruction and instruction related to the Strategies Intervention Model (Deshler & Schumaker, 1988).
Describe when and how fidelity of treatment information was obtained.- A trained observer randomly attended the instructional sessions. Using a checklist that listed all the components of the instruction, the observer recorded whether the instructional procedures were followed as described and whether the instructor provided help to the students while they were taking the initial and advanced practice tests.
What were the results on the fidelity-of-treatment implementation measure?- Observations showed that the instruction was provided consistently across all students. Specifically, the observer's records indicated that the instructor followed the prescribed procedures 100% of the time in all observed sessions. Additionally, the observer's records revealed that the instructor did not provide cues or assistance to the students as they were engaged in the practice activities.
Was the fidelity measure also used in baseline or comparison conditions?- No fidelity of instruction data were collected during baseline since no test-taking instruction was taking place. The researcher was the only person who provided test-taking instruction, and that took place at the required times during the design only. Fidelity of instruction checklists would not have been relevant to a class where, essentially, homework assistance was taking place.
Measures and Results
Measures Broader :
Study measures are classified as targeted, broader, or administrative data according to the following definitions:
-
Targeted measures
Assess outcomes, such as competencies or skills, that the program was directly targeted to improve.- In the academic domain, targeted measures typically are not the very items taught but rather novel items structured similarly to the content addressed in the program. For example, if a program taught word-attack skills, a targeted measure would be decoding of pseudo words. If a program taught comprehension of cause-effect passages, a targeted measure would be answering questions about cause-effect passages structured similarly to those used during intervention, but not including the very passages used for intervention.
- In the behavioral domain, targeted measures evaluate aspects of external or internal behavior the program was directly targeted to improve and are operationally defined.
-
Broader measures
Assess outcomes that are related to the competencies or skills targeted by the program but not directly taught in the program.- In the academic domain, if a program taught word-level reading skill, a broader measure would be answering questions about passages the student reads. If a program taught calculation skill, a broader measure would be solving word problems that require the same kinds of calculation skill taught in the program.
- In the behavioral domain, if a program taught a specific skill like on-task behavior in one classroom, a broader measure would be on-task behavior in another setting.
- Administrative data measures apply only to behavioral intervention tools and are measures such as office discipline referrals (ODRs) and graduation rates, which do not have psychometric properties as do other, more traditional targeted or broader measures.
Targeted Measure | Reverse Coded? | Evidence | Relevance |
---|---|---|---|
Targeted Measure 1 | Yes | A1 | A2 |
Broader Measure | Reverse Coded? | Evidence | Relevance |
---|---|---|---|
Broader Measure 1 | Yes | A1 | A2 |
Administrative Data Measure | Reverse Coded? | Relevance |
---|---|---|
Admin Measure 1 | Yes | A2 |
- If you have excluded a variable or data that are reported in the study being submitted, explain the rationale for exclusion:
- No variables have been excluded.
Results
- Describe the method of analyses you used to determine whether the intervention condition improved relative to baseline phase (e.g., visual inspection, computation of change score, mean difference):
- Visual inspection and the Non-overlap of All Pairs (NAP) statistic (Parker & Vannest, 2009) were used to analyze the data.
Please present results in terms of within and between phase patterns. Data on the following data characteristics must be included: level, trend, variability, immediacy of the effect, overlap, and consistency of data patterns across similar conditions. Submitting only means and standard deviations for phases is not sufficient. Data must be included for each outcome measure (targeted, broader, and administrative if applicable) that was described above.- Probe test results. During baseline, students made between 22% and 36% of the required strategic responses on Probe Tests. They performed a mean of 30% of the strategic responses during the baseline condition. All of the students' baselines were stable and indicate that the students did not use many of the test-taking skills sampled by the measurement system. During baseline, the students typically earned points by following some of the instructions correctly (e.g., they placed their names and the date on the test in the appropriate place). When training was implemented, the students reached mastery with regard to performing the required strategic behaviors (at or above the 90% criterion) within an average of two initial practice attempts and within an average of two advanced practice attempts. The mean percentage of appropriate strategic responses on initial practice attempts was 84%; on advanced practice attempts, it was 91%. When the posttests were administered in the posttest stage of instruction, the students completed a mean of 90% of the strategic responses. During the maintenance condition, the students performed a mean of 85% of the test-taking strategy responses for as long as 11 weeks after instruction had been terminated. When the multiple-probe design data were examined for the percentage of non-overlap related to the Probe Test scores across the conditions, the percentage of non-overlapping pairs was 100%. Thus, Tau = 1.0, which is a very large effect size. 
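As a rough illustration of how the non-overlap statistics reported above are computed, the sketch below implements the pairwise comparison behind NAP and Tau (Parker & Vannest, 2009): every baseline point is compared with every intervention point, and full non-overlap yields NAP = 1.0 and Tau = 1.0. The score values are hypothetical, shaped like the ranges reported above; they are not the study's raw data.

```python
from itertools import product

def nap_and_tau(baseline, treatment):
    """Non-overlap of All Pairs (NAP) and the Tau non-overlap index.

    Every baseline point is paired with every treatment point. Pairs
    where the treatment point is higher count as improvement; ties
    count half toward NAP and zero toward Tau.
    """
    pairs = list(product(baseline, treatment))
    pos = sum(1 for a, b in pairs if b > a)
    neg = sum(1 for a, b in pairs if b < a)
    ties = len(pairs) - pos - neg
    nap = (pos + 0.5 * ties) / len(pairs)
    tau = (pos - neg) / len(pairs)
    return nap, tau

# Hypothetical scores shaped like the reported ranges: baseline probes
# around 22-36% strategic responses, post-instruction probes near 90%.
baseline = [22, 30, 36, 28]
treatment = [90, 92, 88, 91]
nap, tau = nap_and_tau(baseline, treatment)
print(nap, tau)  # prints 1.0 1.0 (complete non-overlap)
```

Because every post-instruction probe exceeds every baseline probe, all pairs favor the intervention, matching the 100% non-overlap and Tau = 1.0 reported for the study's probe-test data.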
Generalization of the strategy to other settings. During baseline, the average unit test grade for two of the students in mainstream courses was "D;” the average unit test grade for four of the students was an "F." Overall, the average unit test score for the students during baseline was 57%, a failing grade. After instruction in the Test-taking Strategy, all the students' average scores improved. Specifically, two of the students' average test scores were in the "D" range, and four students' average test scores were in the "C" range. In summary, four students' unit test score averages improved by one letter grade, and two of the students' averages improved by two letter grades. This is a socially important outcome: All of the students were earning passing grades on their tests in mainstream courses at the end of the study.
Additional Research
- Is the program reviewed by WWC or E-ESSA?
- E-ESSA
- Summary of WWC / E-ESSA Findings :
What Works Clearinghouse Review
This program was not reviewed by the What Works Clearinghouse.
Evidence for ESSA*
Program Outcomes: A total of six studies met standards. Five involved targeted forms of SIM and one involved CLC. Outcomes were remarkably consistent, with four of the six effect sizes falling in the range from +0.07 to +0.15, with an average of +0.10. Several of the outcomes were statistically significant, qualifying SIM for the ESSA “Strong” category.
Number of Studies: 6
Average Effect Size: 0.10
*Evidence for ESSA evaluated the Strategic Instruction Model, which encompasses Learning Strategies Curriculum.
- How many additional research studies are potentially eligible for NCII review?
- 0
- Citations for Additional Research Studies :
Data Collection Practices
Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.