Learning Strategies Curriculum: The Word Identification Strategy
Study: Lenz & Hughes (1990)

Summary

The Word Identification Strategy is a functional and efficient strategy that helps challenged readers successfully decode and identify unknown words in their reading materials. It is based on the premise that most words in the English language can be pronounced by identifying prefixes, suffixes, and stems and by following three short syllabication rules. In a research study, students made an average of 20 errors in a 400-word passage before learning the strategy; after learning it, they reduced their errors to an average of three per 400 words. Reading comprehension increased from 40 percent on the pretest to 70 percent on grade-level passages. Please note that professional development, coaching, and infrastructure support are essential components of the effective implementation of SIM instructional tools and interventions. It is highly recommended that you work with a SIM professional developer. Please email simpd@ku.edu to learn more.

Target Grades:
4, 5, 6, 7, 8, 9, 10, 11, 12
Target Populations:
  • Students with learning disabilities
  • Students with intellectual disabilities
  • Students with emotional or behavioral disabilities
  • English language learners
  • Any student at risk for academic failure
  • Any student at risk for emotional and/or behavioral difficulties
Area(s) of Focus:
  • Comprehension
  • Fluency
  • Other: Decoding
Where to Obtain:
B. Keith Lenz, Jean B. Schumaker, Donald D. Deshler, and Victoria L. Beals; University of Kansas
1122 West Campus Road, Rm. 732, Lawrence, KS 66045 • PHONE: (785) 864-4780 • FAX: (785) 864-5728 • EMAIL: orderkucrl@ku.edu • WEBSITE: kucrl.ku.edu
785-864-4780
kucrl.ku.edu
Initial Cost:
$22.00 per teacher
Replacement Cost:
$22.00 per teacher

There is an optional manual of student practice materials available through Edge Enterprises, Inc. for $12 per teacher and $7 per student booklet, but it is not required.

Staff Qualified to Administer Include:
  • Special Education Teacher
  • General Education Teacher
  • Reading Specialist
  • EL Specialist
  • Interventionist
Training Requirements:
Typically, a three-hour workshop plus in-class coaching and ongoing follow-up to determine the fidelity of implementation are required.

Training consists of description and modeling of the strategy, student evaluation practice, scoring practice, and instructor and peer evaluation of live or video instruction, followed by ongoing coaching.


The two initial studies used single-subject multiple-baseline designs. Later studies integrated this intervention into a broader reading program; that set of interventions was validated through the federally funded Striving Readers project in an experimental randomized controlled trial (RCT), was judged effective, and is included in the What Works Clearinghouse under the SIM and Xtreme Reading reviews.

Access to Technical Support:
Contact the KU Center for Research on Learning in Lawrence, Kansas:
Mona Tipton, Assistant Director of Professional Development
The University of Kansas Center for Research on Learning
1122 West Campus Road, Lawrence, KS 66045
(785) 864-0626
Recommended Administration Formats Include:
  • Individual students
  • Small group of students
  • BI ONLY: A classroom of students
Minimum Number of Minutes Per Session:
45
Minimum Number of Sessions Per Week:
5
Minimum Number of Weeks:
8
Detailed Implementation Manual or Instructions Available:
Yes
Is Technology Required?
No technology is required.

Program Information

Descriptive Information

Please provide a description of program, including intended use:

The Word Identification Strategy is a functional and efficient strategy that helps challenged readers successfully decode and identify unknown words in their reading materials. It is based on the premise that most words in the English language can be pronounced by identifying prefixes, suffixes, and stems and by following three short syllabication rules. In a research study, students made an average of 20 errors in a 400-word passage before learning the strategy; after learning it, they reduced their errors to an average of three per 400 words. Reading comprehension increased from 40 percent on the pretest to 70 percent on grade-level passages. Please note that professional development, coaching, and infrastructure support are essential components of the effective implementation of SIM instructional tools and interventions. It is highly recommended that you work with a SIM professional developer. Please email simpd@ku.edu to learn more.

The program is intended for use in the following age(s) and/or grade(s).

not selected Age 0-3
not selected Age 3-5
not selected Kindergarten
not selected First grade
not selected Second grade
not selected Third grade
selected Fourth grade
selected Fifth grade
selected Sixth grade
selected Seventh grade
selected Eighth grade
selected Ninth grade
selected Tenth grade
selected Eleventh grade
selected Twelfth grade


The program is intended for use with the following groups.

not selected Students with disabilities only
selected Students with learning disabilities
selected Students with intellectual disabilities
selected Students with emotional or behavioral disabilities
selected English language learners
selected Any student at risk for academic failure
selected Any student at risk for emotional and/or behavioral difficulties
not selected Other
If other, please describe:

ACADEMIC INTERVENTION: Please indicate the academic area of focus.

Early Literacy

not selected Print knowledge/awareness
not selected Alphabet knowledge
not selected Phonological awareness
not selected Early writing
not selected Early decoding abilities
not selected Other

If other, please describe:

Language

not selected Expressive and receptive vocabulary
not selected Grammar
not selected Syntax
not selected Listening comprehension
not selected Other
If other, please describe:

Reading

not selected Phonological awareness
not selected Phonics/word study
selected Comprehension
selected Fluency
not selected Vocabulary
not selected Spelling
selected Other
If other, please describe:
Decoding

Mathematics

not selected Computation
not selected Concepts and/or word problems
not selected Whole number arithmetic
not selected Comprehensive: Includes computation/procedures, problem solving, and mathematical concepts
not selected Algebra
not selected Fractions, decimals (rational number)
not selected Geometry and measurement
not selected Other
If other, please describe:

Writing

not selected Handwriting
not selected Spelling
not selected Sentence construction
not selected Planning and revising
not selected Other
If other, please describe:

BEHAVIORAL INTERVENTION: Please indicate the behavior area of focus.

Externalizing Behavior

not selected Physical Aggression
not selected Verbal Threats
not selected Property Destruction
not selected Noncompliance
not selected High Levels of Disengagement
not selected Disruptive Behavior
not selected Social Behavior (e.g., Peer interactions, Adult interactions)
not selected Other
If other, please describe:

Internalizing Behavior

not selected Depression
not selected Anxiety
not selected Social Difficulties (e.g., withdrawal)
not selected School Phobia
not selected Other
If other, please describe:

Acquisition and cost information

Where to obtain:

Address
1122 West Campus Road, Rm. 732, Lawrence, KS 66045 • PHONE: (785) 864-4780 • FAX: (785) 864-5728 • EMAIL: orderkucrl@ku.edu • WEBSITE: kucrl.ku.edu
Phone Number
785-864-4780
Website
kucrl.ku.edu

Initial cost for implementing program:

Cost
$22.00
Unit of cost
teacher

Replacement cost per unit for subsequent use:

Cost
$22.00
Unit of cost
teacher
Duration of license
NA

Additional cost information:

Describe basic pricing plan and structure of the program. Also, provide information on what is included in the published program, as well as what is not included but required for implementation (e.g., computer and/or internet access)

There is an optional manual of student practice materials available through Edge Enterprises, Inc. for $12 per teacher and $7 per student booklet, but it is not required.

Program Specifications

Setting for which the program is designed.

selected Individual students
selected Small group of students
selected BI ONLY: A classroom of students

If group-delivered, how many students compose a small group?

   8

Program administration time

Minimum number of minutes per session
45
Minimum number of sessions per week
5
Minimum number of weeks
8
not selected N/A (implemented until effective)

If the intervention program is intended to occur less frequently than 60 minutes a week for approximately 8 weeks, justify the level of intensity:

Does the program include highly specified teacher manuals or step by step instructions for implementation?
Yes

BEHAVIORAL INTERVENTION: Is the program affiliated with a broad school- or class-wide management program?
No

If yes, please identify and describe the broader school- or class-wide management program:

Does the program require technology?
No

If yes, what technology is required to implement your program?
not selected Computer or tablet
not selected Internet connection
not selected Other technology (please specify)

If your program requires additional technology not listed above, please describe the required technology and the extent to which it is combined with teacher small-group instruction/intervention:

Training

How many people are needed to implement the program?
1

Is training for the instructor or interventionist required?
Yes
If yes, is the necessary training free or at-cost?
At-cost

Describe the time required for instructor or interventionist training:
Typically, a three-hour workshop plus in-class coaching and ongoing follow-up to determine the fidelity of implementation are required.

Describe the format and content of the instructor or interventionist training:
Training consists of description and modeling of the strategy, student evaluation practice, scoring practice, and instructor and peer evaluation of live or video instruction, followed by ongoing coaching.

What types of professionals are qualified to administer your program?

selected Special Education Teacher
selected General Education Teacher
selected Reading Specialist
not selected Math Specialist
selected EL Specialist
selected Interventionist
not selected Student Support Services Personnel (e.g., counselor, social worker, school psychologist, etc.)
not selected Applied Behavior Analysis (ABA) Therapist or Board Certified Behavior Analyst (BCBA)
not selected Paraprofessional
not selected Other

If other, please describe:

Person who has some literacy background
Does the program assume that the instructor or interventionist has expertise in a given area?
No   

If yes, please describe: 


Are training manuals and materials available?
Yes

Describe how the training manuals or materials were field-tested with the target population of instructors or interventionist and students:
The two initial studies used single-subject multiple-baseline designs. Later studies integrated this intervention into a broader reading program; that set of interventions was validated through the federally funded Striving Readers project in an experimental randomized controlled trial (RCT), was judged effective, and is included in the What Works Clearinghouse under the SIM and Xtreme Reading reviews.

Do you provide fidelity of implementation guidance such as a checklist for implementation in your manual?
Yes

Can practitioners obtain ongoing professional and technical support?
Yes

If yes, please specify where/how practitioners can obtain support:

Contact the KU Center for Research on Learning in Lawrence, Kansas:
Mona Tipton, Assistant Director of Professional Development
The University of Kansas Center for Research on Learning
1122 West Campus Road, Lawrence, KS 66045
(785) 864-0626

Summary of Evidence Base

Please identify, to the best of your knowledge, all the research studies that have been conducted to date supporting the efficacy of your program, including studies currently or previously submitted to NCII for review. Please provide citations only (in APA format); do not include any descriptive information on these studies. NCII staff will also conduct a search to confirm that the list you provide is accurate.

These are the early studies in which The Word Identification Strategy was evaluated alone and not as a part of Xtreme Reading.

Woodruff, S., Schumaker, J. B., & Deshler, D. D. (2002). The effects of an intensive reading intervention on the decoding skills of high school students with reading deficits. (Research Report No. 15). Lawrence, KS: University of Kansas Center for Research on Learning.

Lenz, B.K., & Hughes, C.A. (1990). A word identification strategy for adolescents with learning disabilities. Journal of Learning Disabilities, 23(3), 149-158, 163.

Schumaker, J.B., Deshler, D.D., Woodruff, S.K., Hock, M.F., Bulgren, J.A., & Lenz, B.K. (2006). Reading strategy interventions: Can literacy outcomes be enhanced for at-risk adolescents? Teaching Exceptional Children, 38(3), 64-68.

Study Information

Study Citations

Lenz, B. K., & Hughes, C. A. (1990). A word identification strategy for adolescents with learning disabilities. Journal of Learning Disabilities, 23(3), 149-158, 163.

Participants Full Bubble

Describe how students were selected to participate in the study:
The 12 subjects were seventh-, eighth-, and ninth-grade students who met the state of Florida requirements for being classified as learning disabled. The requirements at the time of this study were as specified by Florida State Board of Education Rule 6A-6.3018 and included (a) evidence of a basic psychological process disorder as documented by a standardized instrument selected by the school district; (b) evidence of academic achievement that is significantly below the student's level of intellectual functioning as documented (for ages 11 and above) by a discrepancy of 1½ standard deviations or more between an intellectual standard score and academic standard score in reading, writing, arithmetic, or spelling; (c) evidence that the learning problems are not due primarily to other handicapping conditions; and (d) evidence that indicates that general education alternatives have been attempted and found to be ineffective in meeting the student's educational needs. Subjects attended two different schools (a middle school and a high school) and were served in classes for students with learning disabilities for no more than two periods per day. In addition, subjects were only selected if they met the following criteria: (a) read at or above the third-grade level, (b) had knowledge of phonic sounds, and (c) could find words in the dictionary.

Describe how students were identified as being at risk for academic failure (AI) or as having emotional/behavioral difficulties (BI):
A total of 21 students met the selection criteria. Twelve students were randomly selected as subjects for this study. Three females and 9 males composed the student sample. Eight of the students were white and 4 of the students were black. Student ages ranged from 13 to 15 years (M = 13.2 years); 4 were seventh graders, 2 were eighth graders, and 6 were ninth graders. IQ scores, obtained within the previous 3 years on the Wechsler Intelligence Scale for Children-Revised (WISC-R; Wechsler, 1974), ranged from 82 to 113 (M = 94.3, SD = 11.3). Grade-level reading scores, as measured by the Woodcock-Johnson Psycho-Educational Battery (Woodcock & Johnson, 1977), ranged from 3.5 to 7.0; reading percentile scores ranged from 7 to 32 (M = 15.7, SD = 8.8); standard scores ranged from 78 to 93 (M = 84.17, SD = 9.17).

ACADEMIC INTERVENTION: What percentage of participants were at risk, as measured by one or more of the following criteria:
  • below the 30th percentile on local or national norm, or
  • identified disability related to the focus of the intervention?
100.0%

BEHAVIORAL INTERVENTION: What percentage of participants were at risk, as measured by one or more of the following criteria:
  • emotional disability label,
  • placed in an alternative school/classroom,
  • non-responsive to Tiers 1 and 2, or
  • designation of severe problem behaviors on a validated scale or through observation?
%

Provide a description of the demographic and other relevant characteristics of the case used in your study (e.g., student(s), classroom(s)).

Case (Name or number) Age/Grade Gender Race / Ethnicity Socioeconomic Status Disability Status ELL status Other Relevant Descriptive Characteristics
test test test test test test test test

Design Full Bubble

Please describe the study design:
A multiple-baseline across subjects design (Baer, Wolf, & Risley, 1968) was employed and replicated three times. Three students participated in each design. The first subject in each group of three students received a minimum of two pretests on materials written at ability and grade levels prior to instruction. The second subject in each group received a minimum of four pretests. The third subject in each group received a minimum of five pretests. Training was instituted only after baseline data were stable.

Clarify and provide a detailed description of the treatment in the submitted program/intervention:
The treatment was instruction in the Word Identification Strategy (Lenz, Schumaker, Deshler, & Beals, 1984), a systematic cognitive process through which multisyllabic words can be recognized in reading assignments in content areas such as science and social studies. The intervention consisted of training the student in a general problem-solving strategy in which specific substrategies are applied for the quick identification of difficult words. The substrategies, based in part on work presented by Forgan and Mangrum (1976) and Wolf (1974), follow the premise that most words in the English language can be pronounced by identifying prefixes, suffixes, and stems and by following three short syllabication rules. These rules, combined with several other practical approaches to word attack, are embedded in a general problem-solving procedure and make up the seven-step strategy. The seven steps of the strategy required the student to focus on the context surrounding the word, dissect the word into component parts using simple rules, and use available resources (e.g., teacher, dictionary) if needed. The keywords used to teach the steps of the strategy form a first-letter mnemonic device, DISSECT, that can be used to facilitate student memorization of the steps. The first step, Discover the Context, requires the student to skip a difficult word, read to the end of the sentence, and then use the meaning of the sentence to guess the best word that fits in the place of the word in question. If the guessed word does not match the difficult word, the student proceeds to the next step, Isolate the Prefix. In this step, the student is taught to look at the beginning of the word to see if the first several letters create a phoneme that the student can pronounce. A list of prefixes is taught to the student to facilitate recognition. If a prefix is recognized, it is isolated by boxing it off (e.g., hyper | sonic).
Using similar procedures and a list of suffixes, students then Separate the Suffix. Whether or not the word contains a suffix, students proceed to the fourth step and Say the Stem. Students are taught that the stem is what is left after the prefix is isolated and the suffix is separated. If the stem is recognized, the student says the prefix, stem, and suffix together. (Note: For the purposes of this strategy, the terms prefix and suffix are broadly defined as any recognizable group of letters at the beginning or end of a word that the student can identify and pronounce correctly.) If the stem cannot be named, the student proceeds to Examine the Stem. This step involves dissecting the stem into easy-to-pronounce word parts using the Rules of Twos and Threes. The Word Identification Strategy was taught to the students by their teachers using an eight-step instructional sequence that was originally described by Deshler, Alley, Warner, and Schumaker (1981) for promoting strategy acquisition and generalization. The procedures were as follows. Step 1: Pretest and Obtain Commitment to Learn. During this step, each subject's word identification skills were measured using 400-word passages from Timed Readings (Spargo & Williston, 1980). Subjects' reading skills were measured at both ability and grade level. After the oral reading and comprehension tests were scored, the results were discussed with each student individually and a written commitment to learn the strategy was obtained from the student. Step 2: Describe the Strategy. Next, each student participated in a goal-setting discussion and specified goal dates for completing various phases of the strategy training. The strategy steps were then described in detail. In addition, the general characteristics of situations where the strategy could be used, as well as the types of benefits students could expect if they learned and applied the strategy, were described. 
Also described were general guidelines or cautions related to the use of the strategy, such as (a) the strategy works best on reading assignments that follow a teacher's description of the content in class, (b) the first five steps of the strategy usually will not work on vocabulary words to which the student has not been introduced, and (c) the strategy should be learned to such a level of fluency that no more than 10 seconds are required to complete the first five steps. Step 3: Model the Strategy. In this instructional step, the strategy was demonstrated in its entirety, with the teacher thinking aloud so subjects could witness all of the processes involved. A script was used to ensure appropriate teacher modeling. However, the teacher was encouraged to expand on the script if additional modeling was necessary. Once the teacher modeled the strategy, the students were then enlisted in the modeling process. Students were asked to demonstrate the strategy, using materials written slightly above the students' reading level. The teacher guided and prompted the students to think aloud, demonstrate appropriate self-instruction behaviors, ask and answer appropriate "what next" questions, and explore solutions to the word identification problems they encountered. Step 4: Verbal Rehearsal of Strategy Steps. During this step, subjects verbally rehearsed the strategy steps (including the Rules of Twos and Threes). First, each student described the general nature of the strategy and the problem-solving process in his or her own words. Second, each student described each step of the strategy and what was involved in each step of the problem-solving process. Third, after each student demonstrated an understanding of the strategy steps, a rapid-fire oral practice of the steps was led by the teacher to assist students in memorizing the steps. A criterion of 100% correct had to be reached in order to proceed to Step 5: Controlled Practice and Feedback. 
Also, subjects were required to pronounce correctly at least 80% of the prefixes and suffixes provided to them in a list of 56 prefixes and 54 suffixes. Students had been introduced to the list of prefixes and suffixes in their language arts classes earlier in the year, and mastery of the list was demonstrated very quickly. Step 5: Controlled Practice and Feedback. Subjects practiced the strategy while orally reading into a tape recorder passages from the Timed Readings series. Passages used during this step were written at each subject's reading ability level. During the early stages of controlled practice, assistance was provided to the student on up to five of the initial trials to insure that the strategy was being applied correctly. During these practice sessions, each student was prompted to think aloud, demonstrate self-instruction behaviors, and ask and answer "What do I do next?" and "Does this work?" types of questions. The teacher would provide assistance and feedback to students on an individual basis to insure that each student was using the strategy in a problem-solving fashion. These guided practice sessions were not considered independent practice trials and were not graphed as progress data. When subjects could independently read a passage with six or fewer oral errors (i.e., 99% correct), they began Step 6. (Note: Throughout the testing and practice sessions, the student was allowed to appropriately ask for help on up to three different words in order to learn use of all the steps of the strategy. If the student requested help on three words, these words were not included in the computation of the percentage of words correctly read.) After each practice attempt, the student was provided with corrective feedback related to how he or she did and how to correct specific errors and improve general performance. Step 6: Grade-Appropriate Practice and Feedback.
During this step, subjects practiced the strategy while orally reading Timed Readings passages written at the grade level in which they were enrolled. During the early stages of grade-appropriate practice, assistance was provided to the student on up to five of the initial trials to insure that the strategy was being applied correctly to these more difficult reading materials. During these guided practice sessions, the difficulty of the reading passages was gradually increased until the student was practicing the strategy on grade level materials. These guided practice sessions were not considered independent practice trials and were not graphed as progress data. Practice was infused within the context of daily lessons and other assignments in the language arts and English classes. When subjects could independently read a passage with six or fewer oral errors (i.e., 99% correct) in the grade level passages, they were given the posttest. Specific corrective feedback was provided to the student after each practice attempt. Feedback consisted of information related to adherence to the problem-solving process as well as to overall word identification performance. Step 7: Posttest and Obtain Commitment to Generalize. The final grade level practice attempt was used as the posttest for the intervention, since the procedures for the posttest paralleled the pretest and grade-level practice procedures. Once all students had met the mastery criterion (had independently read a passage with six or fewer oral errors in the grade level passages), a written commitment was obtained from each student to generalize the Word Identification Strategy to school and home situations. Step 8: Generalization. Generalization activities were organized into three phases. 
The first phase, called Orientation, involved teachers leading the students in identifying settings where the Word Identification Strategy could be used and then helping students to plan ways to remember the use of the strategy in those settings. The second phase, called Activation, consisted of trial attempts and reports of strategy usage across various settings, situations, and materials. The third phase, called Maintenance, consisted of planned use of the strategy over time in classroom situations. Generalization probes of student performance were taken intermittently and only during the Maintenance phase. A generalization probe consisted of the subject orally reading two 400-word passages into a tape recorder. The first passage was selected from the Timed Readings series written at the student's grade level. The second passage consisted of an unfamiliar selection from the textbook used in the student's mainstream science class. A comprehension measure was obtained only on the Timed Readings selection. Comprehension measures were not taken from the grade level text because of the difficulties in standardizing the measures across teachers and texts. The probes were conducted 1 week, 3 weeks, and 5 weeks after completion of the strategy posttest on grade level materials. The students were informed of the schedule for the probes and expectations as soon as the Activation phase activities were completed, and then they were reminded of the probe the day before each probe was given. In general, the average amount of time spent on strategy instruction throughout the instructional sequence (Step 1 through Step 7) was approximately 20 to 25 minutes per day for a 6-week period. Instructional time for the strategy was allocated in the language arts or English classes at least 3 days each week, but was usually provided daily in the context of a classroom situation when other types of instructional activities were required.
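The DISSECT steps described above amount to a sequential decision procedure: try the context, then strip a recognized prefix, then a recognized suffix, then pronounce the stem. A minimal sketch of that ordering follows; the word lists and function name are illustrative stand-ins, not the taught lists of 56 prefixes and 54 suffixes, and the later steps (examining the stem with the Rules of Twos and Threes, checking with someone, trying the dictionary) are only noted in comments.

```python
# Illustrative sketch of the DISSECT decision sequence; lists are hypothetical.
KNOWN_PREFIXES = {"hyper", "un", "re", "pre"}    # stand-in for the taught list of 56
KNOWN_SUFFIXES = {"sonic", "ing", "ed", "tion"}  # stand-in for the taught list of 54

def dissect(word, context_guess=None):
    """Return the parts a reader would pronounce, following steps 1-4."""
    # Step 1: Discover the Context -- if a context-based guess matches, stop.
    if context_guess == word:
        return [word]
    parts = []
    stem = word
    # Step 2: Isolate the Prefix (box it off, e.g., hyper | sonic).
    for p in sorted(KNOWN_PREFIXES, key=len, reverse=True):
        if stem.startswith(p) and len(stem) > len(p):
            parts.append(p)
            stem = stem[len(p):]
            break
    # Step 3: Separate the Suffix.
    suffix = None
    for s in sorted(KNOWN_SUFFIXES, key=len, reverse=True):
        if stem.endswith(s) and len(stem) > len(s):
            suffix = s
            stem = stem[:-len(s)]
            break
    # Step 4: Say the Stem. (Steps 5-7 -- Examine the stem, Check with
    # someone, Try the dictionary -- would follow if the stem were still
    # unpronounceable.)
    parts.append(stem)
    if suffix:
        parts.append(suffix)
    return parts
```

The strategy is, of course, applied mentally by the student; the sketch only makes the ordering of the substrategies concrete.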

Clarify what procedures occurred during the control/baseline condition (third, competing conditions are not considered; if you have a third, competing condition [e.g., multi-element single subject design with a third comparison condition], in addition to your control condition, identify what the competing condition is [data from this competing condition will not be used]):
During baseline, the student was introduced to and taught how to use an audio recorder and was given practice using it to reduce a technology novelty effect. When the student was at ease using the recorder, the student was given a 400-word Timed Readings passage and was asked to turn on the audio recorder and read the passage aloud. The student was told to turn in the passage and audiotape to the teacher. The audiotape was later scored for oral-reading errors on ability-level passages. Errors were counted according to the scoring procedures in the Word Identification Strategy manual. The following day, the student was given a ten-question multiple-choice comprehension test included in Timed Readings, designed to measure comprehension of the passage that the student had read the day before. Each day of baseline, this process was repeated with 400-word Timed Readings passages written at the student's grade level. The student was not given the opportunity to look at or review the passage before the comprehension test was administered. The teacher then scored the number of comprehension questions that were answered correctly. The oral reading and comprehension scores were then visually depicted on a graph by the student so the student could see his or her performance. This process was repeated for each data point entered on the graph. The oral reading results collected during baseline indicated that the number of oral reading errors was higher for grade-level materials than ability-level materials for all subjects. Four subjects (6, 7, 10, and 12) made six or fewer errors (99% correct oral-reading criterion) in ability-level materials. The remaining subjects' mean baseline performances ranged from 6.3 to 20.5 errors on the ability-level materials. The baseline performance of all subjects on grade-level materials ranged from 12.6 to 37.0 errors.
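The scoring rule described above (errors counted per 400-word passage, with up to three requested-help words excluded from the computation) reduces to simple percentage arithmetic. A small sketch under those assumptions; the function name is illustrative:

```python
def percent_correct(total_words, errors, help_words=0):
    """Percent of words read correctly in a passage.

    Words the student asked for help on (up to three per passage in the
    study) were excluded from the computation, so they shrink the
    denominator rather than counting as errors.
    """
    scored = total_words - help_words
    return 100.0 * (scored - errors) / scored

# Baseline pretest average of 20 errors per 400-word passage:
pretest = percent_correct(400, 20)   # 95.0
# Posttest average of three errors per 400 words:
posttest = percent_correct(400, 3)   # 99.25
```

By this arithmetic, the mastery criterion of six or fewer errors in 400 words corresponds to at least 98.5 percent of words read correctly, which the study reports as the 99% correct oral-reading criterion.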

Please describe how replication of treatment effect was demonstrated (e.g., reversal or withdrawal of intervention, across participants, across settings)
A multiple-baseline across subjects design (Baer, Wolf, & Risley, 1968) was employed and replicated three times. Replication occurred in two ways. First, in each instance of the design, replication occurred across three students. Second, the whole design was conducted four times in all (3 students per design × 4 designs = 12 students). A total of 12 students participated for a total of 12 replications. The design included a baseline phase, a phase of instruction and practice in ability-level materials, a phase of instruction and practice in grade-level materials, and a maintenance phase to determine strategy generalization beyond the instruction and practice phases. The maintenance phase consisted of planned use of the strategy over time in classroom situations. Generalization probes of student performance were taken intermittently and only during the Maintenance phase. A generalization probe consisted of the subject orally reading two 400-word passages into a tape recorder. The first passage was selected from the Timed Readings series written at the student's grade level. The second passage consisted of an unfamiliar selection from the textbook used in the student's mainstream science class. A comprehension measure was obtained only on the Timed Readings selection. Comprehension measures were not taken from grade-level texts because of the difficulties in standardizing the measures across teachers and texts. The maintenance probes were conducted 1 week, 3 weeks, and 5 weeks after completion of the strategy posttest on grade-level materials. The students were informed of the schedule for the probes and expectations as soon as the Activation phase activities were completed, and then they were reminded of the probe the day before each probe was given. In general, the average amount of time spent on strategy instruction throughout the instructional sequence (Step 1 through Step 7) was approximately 20 to 25 minutes per day for a 6-week period.
Instructional time for the strategy was allocated in the language arts or English classes at least 3 days each week, and instruction was usually provided daily in the context of a classroom situation in which other types of instructional activities were also required.

Please indicate whether (and how) the design contains at least three demonstrations of experimental control (e.g., ABAB design, multiple baseline across three or more participants).
A multiple-baseline across three subjects design (Baer, Wolf, & Risley, 1968) was employed and replicated three times. Multiple-baseline graphs showing the decoding and comprehension scores of the three students in each of the four multiple-baseline designs were developed to show student performance across the four phases of the study. The results thus demonstrated the functional relation of the strategy instruction to student decoding performance three times in the first design (once for each of its three students). Because the design was replicated three times, this functional relation was demonstrated nine more times with nine more students. Thus, twelve students in all demonstrated the functional relation of the instruction to the decreased level of student reading errors.

If the study is a multiple baseline, is it concurrent or non-concurrent?
Concurrent

Fidelity of Implementation: Full Bubble

How was the program delivered?
  • Individually: not selected
  • Small Group: selected
  • Classroom: not selected

If small group, answer the following:

Average group size
4
Minimum group size
3
Maximum group size
4

What was the duration of the intervention (If duration differed across participants, settings, or behaviors, describe for each.)?

Condition A
Weeks: 6.00
Sessions per week: 5.00
Duration of sessions in minutes: 25.00
Condition B: no values reported
Condition C: no values reported
What were the background, experience, training, and ongoing support of the instructors or interventionists?
The Word Identification Strategy intervention was delivered to the subjects by one middle school teacher and one high school teacher, both certified as learning disabilities teachers, who had been trained to implement the strategy instruction by the investigators. Each teacher received 3 hours of training in an overview of the learning strategy instructional approach as operationalized in the Strategies Intervention Model (Deshler & Schumaker, 1988) and 6 hours of specific training and scoring practice in the Word Identification Strategy. In addition, as part of the training process, both teachers had previously taught the strategy to students with learning disabilities to mastery and had been observed and provided feedback on critical teaching behaviors involved in the delivery of the steps of the strategy and application of the instructional procedures.

Describe when and how fidelity of treatment information was obtained.
Both teachers had previously taught the strategy to students with learning disabilities to mastery. At that time, they had been observed and were given feedback on critical teaching behaviors involved in the description of the steps of the strategy and application of the instructional procedures. While they were teaching the strategy during the study reported here, SIM professional developers observed the teachers and evaluated teacher adherence to the scripted manual for Steps 2, 3, and 4 of the strategy instruction using a checklist listing the behaviors in each instructional step. Both the teachers and the SIM professional developers scored the students' work. Their scores were compared item-by-item. Feedback was provided by the SIM Professional Developer after each observation of each of the observed instructional steps.

What were the results on the fidelity-of-treatment implementation measure?
Fidelity of instructional implementation was determined by creating a measure of adherence to the intervention script. Two measures were taken. One measure, called the Verbal Presentation Measure, was the degree to which the teacher's verbal presentation matched the bolded words in the script. The other measure, the Directions Measure, was the number of bracketed directions in the script completed by the teacher (e.g., "Prompt the students to write the information on their cue card," "Present the next cue card," "Distribute the prefix worksheet," "Prompt students to separate the suffix."). These measures were taken for the first four chapters of The Word Identification Manual script (each manual chapter corresponds to one step in the instructional sequence) when the strategy was presented to the students: (Step 1) Pretest and Obtain Commitment to Learn, (Step 2) Describe the Strategy, (Step 3) Model the Strategy, and (Step 4) Verbal Rehearsal of Strategy Steps. For the Verbal Presentation Measure, two observers highlighted in yellow every word spoken by the teacher that matched the words in the script, regardless of order. Repetitions, expanded explanations, or paraphrases of the script were not scored, but were informally recorded to determine the degree to which the original script was modified. For the Directions Measure, observers underlined each bracketed direction in the script that the teacher carried out. Observers were trained to highlight words and underline directions in the script prior to the study in a laboratory setting, in which one of the researchers presented components of each step of the intervention and the two observers highlighted or underlined matches against the script. Across three trials, interscorer agreement for highlighted words averaged 96%; interscorer agreement for bracketed directions averaged 100%.
The total number of words highlighted by each observer was counted and compared to the number of bolded words printed in the script. The fidelity score during the study was determined by dividing the number of observer-highlighted words by the number of bolded scripted words and multiplying by 100. For example, in Step 1, the Pretest and Make Commitment step, there were 504 bolded words in the script compared to 482 words highlighted by the observer, so fidelity was calculated to be 95.6% (482/504 x 100). Likewise, the number of bracketed directions underlined by each observer was compared to the number of bracketed directions in the script. In the Pretest and Make Commitment step, there were 24 bracketed directions in the script compared to 23 bracketed directions underlined by the observer, so fidelity was calculated to be 95.8% (23/24 x 100). This process was repeated for the first four steps of instruction. All fidelity (i.e., bolded script/bracketed directions) scores for the implementation of each step ranged from 90% to 100%. For Step 1, the two scores were 95.6% and 95.8%; Step 2 scores ranged from 92.1% to 100%; Step 3 scores ranged from 93% to 98.1%; Step 4 scores ranged from 90.3% to 93.6%. These results indicated that there was minimal variance between the instructors' presentations and the scripted manual.
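The fidelity calculation described above is simple enough to sketch in code. The following Python snippet is illustrative only (it is not part of the study); it reproduces the Step 1 figures using the counts reported above, with a hypothetical `fidelity_percent` helper name.

```python
def fidelity_percent(observed, scripted):
    """Percent of scripted items (bolded words or bracketed directions)
    that the observer marked as actually delivered by the teacher."""
    return round(observed / scripted * 100, 1)

# Step 1 (Pretest and Make Commitment) counts reported in the study:
print(fidelity_percent(482, 504))  # Verbal Presentation Measure -> 95.6
print(fidelity_percent(23, 24))    # Directions Measure -> 95.8
```

The same division-and-multiply computation was repeated for Steps 2 through 4 to produce the per-step ranges reported above.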

Was the fidelity measure also used in baseline or comparison conditions?
No. There was no instruction provided during the baseline condition. Thus, there was no target available for fidelity of instruction observations.

Measures and Results

Measures Targeted: Full Bubble
Measures Broader: Full Bubble

Study measures are classified as targeted, broader, or administrative data according to the following definitions:

  • Targeted measures
    Assess outcomes, such as competencies or skills, that the program was directly targeted to improve.
    • In the academic domain, targeted measures typically are not the very items taught but rather novel items structured similarly to the content addressed in the program. For example, if a program taught word-attack skills, a targeted measure would be decoding of pseudo words. If a program taught comprehension of cause-effect passages, a targeted measure would be answering questions about cause-effect passages structured similarly to those used during intervention, but not including the very passages used for intervention.
    • In the behavioral domain, targeted measures evaluate aspects of external or internal behavior the program was directly targeted to improve and are operationally defined.
  • Broader measures
    Assess outcomes that are related to the competencies or skills targeted by the program but not directly taught in the program.
    • In the academic domain, if a program taught word-level reading skill, a broader measure would be answering questions about passages the student reads. If a program taught calculation skill, a broader measure would be solving word problems that require the same kinds of calculation skill taught in the program.
    • In the behavioral domain, if a program taught a specific skill like on-task behavior in one classroom, a broader measure would be on-task behavior in another setting.
  • Administrative data measures apply only to behavioral intervention tools and are measures such as office discipline referrals (ODRs) and graduation rates, which do not have psychometric properties as do other, more traditional targeted or broader measures.
Targeted Measure     | Reverse Coded? | Evidence | Relevance
Targeted Measure 1   | Yes            | A1       | A2

Broader Measure      | Reverse Coded? | Evidence | Relevance
Broader Measure 1    | Yes            | A1       | A2

Administrative Data Measure | Reverse Coded? | Relevance
Admin Measure 1             | Yes            | A2
If you have excluded a variable or data that are reported in the study being submitted, explain the rationale for exclusion:
NA

Results: Full Bubble

Describe the method of analyses you used to determine whether the intervention condition improved relative to baseline phase (e.g., visual inspection, computation of change score, mean difference):
Four multiple-baseline graphs were created to allow for visual inspection of the decoding and comprehension data. Data from all five measures were visually represented across the four graphic figures. Figures 1 through 4 showed baseline, training, and maintenance results on ability-level and grade-level materials for all 12 subjects. Oral reading errors on ability-level Timed Readings passages were depicted with closed circles, and the corresponding comprehension scores for those passages were depicted with open circles. Oral reading errors on grade-level Timed Readings passages were depicted with closed squares, and the corresponding comprehension scores for those passages were depicted with open squares. Oral reading scores for the grade-level science textbook passages were depicted with triangles. The scale on the left side of each figure indicates the number of reading errors; the scale on the right side indicates the percentage of comprehension questions answered correctly. Each graph therefore allowed for visual inspection of both the oral reading errors made on and the comprehension score earned for the same 400-word passage.

Please present results in terms of within and between phase patterns. Data on the following data characteristics must be included: level, trend, variability, immediacy of the effect, overlap, and consistency of data patterns across similar conditions. Submitting only means and standard deviations for phases is not sufficient. Data must be included for each outcome measure (targeted, broader, and administrative if applicable) that was described above.
Visual inspection of the word identification data showed that the number of oral reading errors during baseline was higher for grade-level materials than for ability-level materials for all subjects. Four subjects (6, 7, 10, and 12) made six or fewer errors (the 99% correct oral-reading criterion) in ability-level materials during baseline. The remaining subjects' mean baseline performances ranged from 6.3 to 20.5 errors on the ability-level materials; mean baseline performances for all subjects on grade-level materials ranged from 12.6 to 37.0 errors. The frequency of oral reading errors on ability-level materials decreased after strategy instruction for the subjects who did not meet the 99% criterion during baseline (Subjects 1, 2, 3, 4, 5, 8, 9, and 11). The four subjects who had met the 99% criterion during baseline demonstrated no errors in ability-level materials during the ability-level training condition. All subjects reached the mastery criterion (6.0 or fewer errors) in ability-level materials within five independent practice attempts; the mean number of errors across subjects ranged from 0 to 6.4 errors in the ability-level materials. The four multiple-baseline graphs showed the four phases of this study, and after the intervention began in each multiple-baseline design, all subjects met the mastery criterion for use of the strategy. The first phase depicted the baseline oral reading and comprehension scores for both ability-level and grade-level materials. The mean comprehension scores obtained during baseline ranged from 50% to 100% with a mean of 83.13% on ability-level materials and from 0% to 80% with a mean of 38.72% on grade-level materials. The mean comprehension scores for the ability-level training condition ranged from 60% to 100% with a mean of 88.21% on the ability-level materials.
The second phase depicted in the graphs showed each student's oral reading and reading comprehension scores when the strategy was applied only in ability-level reading materials. All but two students increased their scores well over baseline in one to three trials; the remaining two students required four and five trials to reach mastery. Phase three represented implementing the strategy and collecting data in reading materials written at grade level. When training was instituted using grade-level materials, a decrease in errors occurred; in fact, all students increased their scores well above the grade-level scores obtained during the baseline phase. In this third phase, students met the mastery criterion (6 errors or fewer) in grade-level materials within nine independent practice attempts. The mean number of errors across subjects ranged from 2.9 to 8.3 errors for the grade-level materials. The mean comprehension scores for the grade-level training condition ranged from 20% to 74.3% with a mean of 58.31%. Finally, this study represented one of the few research efforts designed to determine the degree to which students maintained their use of the learned strategy long after the direct-instruction intervention period ended. In the fourth phase depicted in each multiple-baseline graph, during which students were participating in general education science courses, the results of the three generalization probes (based on 400-word passages taken from grade-level science textbooks) showed maintenance of student performance at levels consistent with those achieved during grade-level training on both the oral reading and comprehension measures obtained with the Timed Readings selections. All oral reading scores collected during the maintenance phase were better than the scores obtained before training.
Also, all grade-level comprehension scores collected during the maintenance phase were better than those earned before training, with the exception of Subject 7's scores, which were at the 80% level in grade-level materials during baseline. In addition, scores based on oral reading performance in the students' actual classroom textbooks were similar to the scores from the Timed Readings selections. To provide additional information regarding the percentage of non-overlapping data points for reading comprehension, additional analyses were conducted. The multiple-probe comprehension data were examined using the Non-overlap of All Pairs (NAP) statistic (Parker & Vannest, 2009) to evaluate the difference between the baseline phase and the intervention phases (grade-level and maintenance phases), with Tau-U as the resulting effect size. In the first multiple-probe design (three students), the percentage of non-overlap was 84.6% (weighted average Tau = .94). In the second replication, the percentage of non-overlap was 76.7% (weighted average Tau = .89). The percentage of non-overlap in the third replication was 66.6% (weighted average Tau = .39). In the fourth replication, the percentage of non-overlap was 72.5% (weighted average Tau = .66). Across all 12 students, the percentage of non-overlap was 75.1%; individual percentages ranged from 9.1% for Student 7 to 100% for Student 2. As mentioned earlier, Student 7 was the only student whose comprehension scores stayed consistent across the study; comprehension appeared not to be influenced by the reduction in word-identification errors made by that student. Except for Student 7, effect sizes related to an increase in passage reading comprehension ranged from Tau = .50 to Tau = 1.0 (Students 2 and 4). Tau-U scores below .65 show weak effects; scores between .66 and .92 show moderate effects; scores above .93 show strong effects.
Therefore, the comprehension scores of nine of the 12 students showed moderate to high effect sizes. This is notable, since the focus of the study was primarily on reducing word-identification errors.

Parker, R. I., & Vannest, K. (2009). An improved effect size for single-case research: Nonoverlap of all pairs. Behavior Therapy, 40, 357-367.
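For readers unfamiliar with the NAP statistic, the following Python sketch shows how it is computed from a baseline phase and an intervention phase: every (baseline, intervention) pair of data points is compared, improvements count as 1, ties as 0.5, and the sum is divided by the number of pairs; Tau rescales NAP to the -1 to +1 range. The data values below are hypothetical, not the study's, and the `nap` and `tau` function names are our own.

```python
def nap(baseline, intervention):
    """Non-overlap of All Pairs: share of (baseline, intervention) pairs
    in which the intervention score exceeds the baseline score
    (ties count half), for an outcome expected to increase."""
    wins = 0.0
    for b in baseline:
        for t in intervention:
            if t > b:
                wins += 1.0
            elif t == b:
                wins += 0.5
    return wins / (len(baseline) * len(intervention))

def tau(baseline, intervention):
    """Rescale NAP to the -1..+1 effect-size range: Tau = 2 * NAP - 1."""
    return 2 * nap(baseline, intervention) - 1

# Hypothetical comprehension percentages for one student:
base = [40, 50, 30]        # baseline probes
interv = [70, 80, 60, 90]  # grade-level and maintenance probes
print(nap(base, interv))   # 1.0 -> 100% non-overlap
print(tau(base, interv))   # 1.0 -> a strong effect by the benchmarks above
```

With partial overlap, e.g. baseline [50, 60] and intervention [55, 70], three of the four pairs improve, giving NAP = 0.75 and Tau = 0.5 (a weak effect under the benchmarks cited above).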

Additional Research

Is the program reviewed by WWC or E-ESSA?
E-ESSA
Summary of WWC / E-ESSA Findings:

What Works Clearinghouse Review

This program was not reviewed by the What Works Clearinghouse.

Evidence for ESSA*

Program Outcomes: A total of six studies met standards. Five involved targeted forms of SIM and one involved CLC. Outcomes were remarkably consistent, with four of the six effect sizes falling in the range from +0.07 to +0.15, with an average of +0.10. Several of the outcomes were statistically significant, qualifying SIM for the ESSA “Strong” category.

Number of Studies: 6

Average Effect Size: 0.10

Full Report

*Evidence for ESSA evaluated the Strategic Instruction Model, which encompasses Learning Strategies Curriculum.

How many additional research studies are potentially eligible for NCII review?
2
Citations for Additional Research Studies :

Schumaker, J.B., Deshler, D.D., Woodruff, S.K., Hock, M.F., Bulgren, J.A., & Lenz, B.K. (2006). Reading strategy interventions: Can literacy outcomes be enhanced for at-risk adolescents? Teaching Exceptional Children, 38(3), 64-68.

Woodruff, S., Schumaker, J. B., & Deshler, D. D. (2002). The effects of an intensive reading intervention on the decoding skills of high school students with reading deficits (Research Report No. 15). Lawrence, KS: University of Kansas Center for Research on Learning.

Data Collection Practices

Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.