iSTEEP
Letter Sound Fluency

Summary

The iSTEEP Letter Sound assessment consists of printed letters and letter blends. The student is asked to say the sound of each letter or group of letters. In this task, the student must decode the letters and blends.

Where to Obtain:
iSTEEP
support@isteep.com
800.881.9142
www.isteep.com
Initial Cost:
$2.00 per student
Replacement Cost:
$2.00 per student per year
Included in Cost:
iSTEEP offers pricing options that range from $2.00 per student for early literacy screening up to $8.00 per student for a comprehensive “Pro” package. The “Pro” package includes access to the full iSTEEP program: benchmarking assessments, screening assessments, an adaptive diagnostic, and progress monitoring for both reading and math. Writing and behavior components are also included. All assessments are computer-based, meaning the computer automatically times the assessments, calculates the scores, and enters the scores into the system.
Training Requirements:
Minimal time is required. There are quick demo videos and coach cards available to help walk users through the process.
Qualified Administrators:
Access to Technical Support:
Assessment Format:
  • One-to-one
Scoring Time:
  • Scoring is automatic
Scores Generated:
  • Raw score
  • Percentile score
Administration Time:
  • 1 minute per student
Scoring Method:
  • Automatically (computer-scored)
Technology Requirements:
  • Computer or tablet
  • Internet connection
Accommodations:

Descriptive Information

Please provide a description of your tool:
The iSTEEP Letter Sound assessment consists of printed letters and letter blends. The student is asked to say the sound of each letter or group of letters. In this task, the student must decode the letters and blends.
The tool is intended for use with the following grade(s).
not selected Preschool / Pre-kindergarten
selected Kindergarten
selected First grade
not selected Second grade
not selected Third grade
not selected Fourth grade
not selected Fifth grade
not selected Sixth grade
not selected Seventh grade
not selected Eighth grade
not selected Ninth grade
not selected Tenth grade
not selected Eleventh grade
not selected Twelfth grade

The tool is intended for use with the following age(s).
not selected 0-4 years old
not selected 5 years old
not selected 6 years old
not selected 7 years old
not selected 8 years old
not selected 9 years old
not selected 10 years old
not selected 11 years old
not selected 12 years old
not selected 13 years old
not selected 14 years old
not selected 15 years old
not selected 16 years old
not selected 17 years old
not selected 18 years old

The tool is intended for use with the following student populations.
selected Students in general education
selected Students with disabilities
selected English language learners

ACADEMIC ONLY: What skills does the tool screen?

Reading
Phonological processing:
not selected RAN
not selected Memory
not selected Awareness
selected Letter sound correspondence
selected Phonics
not selected Structural analysis

Word ID
not selected Accuracy
not selected Speed

Nonword
not selected Accuracy
not selected Speed

Spelling
not selected Accuracy
not selected Speed

Passage
not selected Accuracy
not selected Speed

Reading comprehension:
not selected Multiple choice questions
not selected Cloze
not selected Constructed Response
not selected Retell
not selected Maze
not selected Sentence verification
not selected Other (please describe):


Listening comprehension:
not selected Multiple choice questions
not selected Cloze
not selected Constructed Response
not selected Retell
not selected Maze
not selected Sentence verification
not selected Vocabulary
not selected Expressive
not selected Receptive

Mathematics
Global Indicator of Math Competence
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Early Numeracy
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Concepts
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Computation
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Application
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Fractions/Decimals
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Algebra
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Geometry
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

not selected Other (please describe):

Please describe specific domain, skills or subtests:
BEHAVIOR ONLY: Which category of behaviors does your tool target?


BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each sub-domain or sub-construct.

Acquisition and Cost Information

Where to obtain:
Email Address
support@isteep.com
Address
Phone Number
800.881.9142
Website
www.isteep.com
Initial cost for implementing program:
Cost
$2.00
Unit of cost
student
Replacement cost per unit for subsequent use:
Cost
$2.00
Unit of cost
student
Duration of license
year
Additional cost information:
Describe basic pricing plan and structure of the tool. Provide information on what is included in the published tool, as well as what is not included but required for implementation.
iSTEEP offers pricing options that range from $2.00 per student for early literacy screening up to $8.00 per student for a comprehensive “Pro” package. The “Pro” package includes access to the full iSTEEP program: benchmarking assessments, screening assessments, an adaptive diagnostic, and progress monitoring for both reading and math. Writing and behavior components are also included. All assessments are computer-based, meaning the computer automatically times the assessments, calculates the scores, and enters the scores into the system.
Provide information about special accommodations for students with disabilities.

Administration

BEHAVIOR ONLY: What type of administrator is your tool designed for?
not selected General education teacher
not selected Special education teacher
not selected Parent
not selected Child
not selected External observer
not selected Other
If other, please specify:

What is the administration setting?
not selected Direct observation
not selected Rating scale
not selected Checklist
not selected Performance measure
not selected Questionnaire
not selected Direct: Computerized
selected One-to-one
not selected Other
If other, please specify:

Does the tool require technology?

If yes, what technology is required to implement your tool? (Select all that apply)
selected Computer or tablet
selected Internet connection
not selected Other technology (please specify)

If your program requires additional technology not listed above, please describe the required technology and the extent to which it is combined with teacher small-group instruction/intervention:

What is the administration context?
selected Individual
not selected Small group   If small group, n=
not selected Large group   If large group, n=
selected Computer-administered
not selected Other
If other, please specify:

What is the administration time?
Time in minutes
1
per (student/group/other unit)
student

Additional scoring time:
Time in minutes
0
per (student/group/other unit)
student

ACADEMIC ONLY: What are the discontinue rules?
not selected No discontinue rules provided
selected Basals
not selected Ceilings
not selected Other
If other, please specify:


Are norms available?
Yes
Are benchmarks available?
Yes
If yes, how many benchmarks per year?
3
If yes, for which months are benchmarks available?
Fall, Winter, Spring
BEHAVIOR ONLY: Can students be rated concurrently by one administrator?
If yes, how many students can be rated concurrently?

Training & Scoring

Training

Is training for the administrator required?
Yes
Describe the time required for administrator training, if applicable:
Minimal time is required. There are quick demo videos and coach cards available to help walk users through the process.
Please describe the minimum qualifications an administrator must possess.
not selected No minimum qualifications
Are training manuals and materials available?
Yes
Are training manuals/materials field-tested?
Yes
Are training manuals/materials included in cost of tools?
Yes
If No, please describe training costs:
Can users obtain ongoing professional and technical support?
Yes
If Yes, please describe how users can obtain support:

Scoring

How are scores calculated?
not selected Manually (by hand)
selected Automatically (computer-scored)
not selected Other
If other, please specify:

Do you provide basis for calculating performance level scores?
Yes
What is the basis for calculating performance level and percentile scores?
not selected Age norms
selected Grade norms
selected Classwide norms
selected Schoolwide norms
not selected Stanines
not selected Normal curve equivalents

What types of performance level scores are available?
selected Raw score
not selected Standard score
selected Percentile score
not selected Grade equivalents
not selected IRT-based score
not selected Age equivalents
not selected Stanines
not selected Normal curve equivalents
not selected Developmental benchmarks
not selected Developmental cut points
not selected Equated
not selected Probability
not selected Lexile score
not selected Error analysis
not selected Composite scores
not selected Subscale/subtest scores
not selected Other
If other, please specify:

Does your tool include decision rules?
Yes
If yes, please describe.
Decision rules are available for screening with the iSTEEP assessment and for determining the need for Tier 1, Tier 2, or Tier 3 intervention. Beyond that, an optional protocol is offered for deeper data analysis and decision making. In the optional process, screening is the first step in a multiple-gating sequence. After screening, students receive a second assessment to determine whether a student's deficit reflects a skill problem or a performance problem (can't do vs. won't do). This assessment provides an additional check on the student's initial screening score; conceptually, it can be construed as a type of test-retest reliability check for students with skill deficits. The goal is to identify students with skill deficits; those students then move on to the next step, a survey-level assessment to determine grade and skill level in reading (this latter step is not considered screening but is part of intervention planning).
Further, the STEEP process recommends that initial selection of students in the screening process be based on a dual standard: in addition to being "low" with respect to benchmarks, students should also be in the lowest X% of the class. We typically recommend X = 16%. This helps districts begin with the students most in need and helps ensure that only true positives become the target of intervention. Districts, depending on their intervention resources and goals, can set their own percentage of students for initial intervention, and this percentage can be changed as a school becomes able to accommodate fewer or more students. Over-identifying students for intervention can be a significant problem for districts that lack the resources to deliver interventions to large numbers of students who may not truly need them. The STEEP data management system automatically lists students who meet the dual criteria of bottom X% (user specifies X) and below benchmark to facilitate decision making. A sketch of this dual-criterion rule appears below.
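Below is a minimal sketch of the dual-criterion selection rule described above. The function name, data, and benchmark value are hypothetical illustrations, not iSTEEP's implementation.

```python
# Sketch of the STEEP dual-criterion screening rule (hypothetical, for illustration).
# A student is flagged for initial intervention only if they are BOTH below the
# benchmark AND in the bottom X% of their class (X = 16 by default).

def flag_for_intervention(scores, benchmark, class_pct=16):
    """scores: dict mapping student name -> screening score."""
    ranked = sorted(scores, key=scores.get)           # lowest scores first
    cutoff_n = max(1, round(len(ranked) * class_pct / 100))
    bottom_x = set(ranked[:cutoff_n])                 # bottom X% of the class
    return [s for s in ranked
            if scores[s] < benchmark and s in bottom_x]

# Example: a class of five, with a hypothetical benchmark of 16 sounds per minute.
scores = {"A": 4, "B": 12, "C": 18, "D": 25, "E": 30}
print(flag_for_intervention(scores, benchmark=16))    # -> ['A']
```

Note that in this toy example student B is below the benchmark but is not in the bottom 16% of the class, so only student A is flagged; this is the over-identification safeguard the rule is designed to provide.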
Can you provide evidence in support of multiple decision rules?
If yes, please describe.
The STEEP protocol has been evaluated in various research studies, including the following article: VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. A. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225-256. This article provides a comprehensive evaluation of the various decision rules. Other research has been conducted on individual decision rules, such as the process for determining whether a low score is the result of a skill or a motivation issue.
Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
This assessment yields a score representing the number of correct letter sounds produced in one minute. The score is calculated automatically by the system by subtracting the number of responses with errors from the total number of sounds produced. A small worked illustration of this calculation appears below.
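As a worked illustration of the raw-score arithmetic described above (the numbers are hypothetical, not taken from the tool):

```python
# Raw score = total sounds produced in one minute minus responses with errors.
total_sounds_produced = 38   # hypothetical count of sounds attempted in one minute
errors = 5                   # hypothetical count of incorrect responses
raw_score = total_sounds_produced - errors
print(raw_score)             # -> 33 correct letter sounds per minute
```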
Describe the tool’s approach to screening, samples (if applicable), and/or test format, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
The assessment contains representative exemplars for the skill. Test stimuli are reviewed by content experts to ensure the items are well suited for this skill and do not contain irrelevant difficulty. The probes have been reviewed for ethnic and gender bias.

Technical Standards

Classification Accuracy & Cross-Validation Summary

Grade Kindergarten
Classification Accuracy Fall Data unavailable
Classification Accuracy Winter Partially convincing evidence
Classification Accuracy Spring Data unavailable
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available

Words Their Way Inventory

Classification Accuracy

Select time of year
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
The criterion was the Words Their Way Inventory (WTW; Pearson, 2012), which purports to have adequate reliability and validity. According to Sterbinsky (2007), the assessment has reliability coefficients in the high .80s to low .90s, and concurrent and predictive validity coefficients in the upper .60s to mid .70s; its concurrent validity with the California Standards Test was .74. The criterion measure is not published by iSTEEP and is a completely independent assessment method. WTW assesses word analysis and spelling. Spelling has received increased attention as an indicator of the acquisition of key skills related to reading, including phonemic awareness and the alphabetic principle. As Berninger (2019) has pointed out, spelling requires bringing to mind the sounds within a word, matching letters with sounds, and finally writing the letters. As the student becomes more sophisticated, s/he sounds out the final spelled words and self-checks by blending the letters into a word. Spelling, then, is the application and integration of phonological information (i.e., analyzing the word at the subword level, which includes phonemes, rimes, or syllables), orthographic information (i.e., the retrieval of a whole word, a letter cluster unit, or a component letter), and morphological information (i.e., whether a word is composed of smaller meaning units). Spelling skills have been shown to correlate highly with some reading skills (Berninger, 2019). More specifically, Sterbinsky (2007) indicated that the concurrent validity of the WTW with the Word Analysis portion of the California Standards Test was .74. Because the iSTEEP Letter Sound Fluency assessment requires the student to look at printed letters and say their sounds, the WTW appeared to be an appropriate criterion measure. It has the additional advantage of mitigating method variance. References: Berninger, V. (2019). Reading and writing acquisition: A developmental neuropsychological perspective. New York: Routledge. Pearson Education. (2012). Words Their Way Inventory. Upper Saddle River, NJ: Pearson Education. Sterbinsky, A. (2007). Words Their Way Inventories: Reliability and validity analyses. Center for Research in Educational Policy, University of Memphis.
Do the classification accuracy analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Here we offer an explanation of how the WTW helps us build a case for the validity of the assessment. A fundamental component of the validity of an assessment is accuracy as defined in behavior analysis: accuracy refers to the extent to which the observed value, the quantitative label produced by measuring an event, matches the true state, or true value. For example, if you run a known five-mile course, your GPS watch will provide a quantitative label for the distance that is sometimes more or less than the "true" distance of five miles. A good GPS will yield a measurement that is more accurate, meaning the observed value is closer to the true value. In classical measurement theory, this lack of accuracy might be labeled measurement error. From a behavioral perspective, it is simply bad measurement, because the focus is not a theoretical true score; the focus is on actual observed behavior. Literacy skills, such as letter-sound associations, are discrete behaviors that can be measured and taught.
Our concept of validity includes the supposition that assessment has little utility to schools if it is not linked to instruction. During screening, we want to identify students who are "low" on a screening measure. Yes, we tie this to national norms. However, we also situate the score in the context of instruction. This begins with the accuracy of the screened score and includes follow-up diagnostic testing to understand the conditions that have led to a low score on this assessment. The contributing conditions may include a lack of prerequisite skills, a lack of effective instruction, a lack of student motivation, and/or problems in how a student learns. Most frequently there is an issue with instruction and/or the skills the student learned previously.
We use two frameworks for understanding instructional influences on student learning. First, students learn to read following a sequence (i.e., starting with phonological awareness, then letter-sound relations, etc.). The skill sequence we follow is based on the IES Practice Guide, Foundational Skills to Support Reading for Understanding in Kindergarten Through 3rd Grade (Foorman, 2016), which builds on the National Reading Panel report and summarizes 20 years of research on reading. This IES framework addresses what to teach and in what sequence. The second framework we use is the Instructional Hierarchy (Haring, Lovitt, Eaton, & Hansen, 1978), which addresses how to teach. Teaching a specific skill begins with establishing the skill through direct teaching, modeling, tell-show-do activities, etc. After the skill is established, fluency building is initiated using practice with feedback. Finally, as the student becomes fluent, teaching can focus on generalizing and using the skill in more complex comprehension activities. The Instructional Hierarchy has been well established through research on behavioral instructional design. Given these frameworks for what to teach and how to teach, we need our assessment to be accurate in determining which skills need attention, which do not, and where instruction should begin. Given this context and our goals for assessment, we use a broader view of validity than some other assessments.
We want our assessments to identify the correct students. We also want our assessments to have what has been called treatment validity. Treatment validity refers to an empirical demonstration that the use of an assessment leads to more accurately determined treatments (in this case, instruction), which in turn should produce superior student outcomes. In other words, if an assessment provides accurate information, it should lead to the design of more effective instruction. Hence, in building a case for validity, we include some criterion-referenced assessments. The Words Their Way Inventory (WTW) requires students to spell carefully selected words. Each word can be scored with respect to at least seven skills, beginning with initial sounds, then letter-sound relationships, then blending, etc. This skill sequence is very similar to the sequence recommended by IES for teaching reading. Low scores indicate students are just beginning to learn the beginning skills; higher scores mean students have acquired some beginning skills and have moved on to more advanced skills. WTW has been shown to correlate with several measures of reading and early literacy. While we do not use WTW to the exclusion of other, more traditional assessments, we find it provides some indication that our assessments are accurately aligned with skills. We do additional analyses, beyond the correlation coefficient, to see whether there is agreement between our measures of literacy and WTW.
In addition to the criterion-referenced foundation for WTW, some standard research has also been conducted. This assessment is considered an appropriate criterion because it measures early literacy skills that overlap with the skills assessed by the iSTEEP assessment. The WTW (Pearson, 2012) purports to have data showing adequate reliability and validity. According to Sterbinsky (2007), the assessment has reliability coefficients in the high .80s to low .90s, and concurrent and predictive validity coefficients in the upper .60s to mid .70s; its concurrent validity with the California Standards Test was .74. The criterion measure is not published by iSTEEP and is a completely independent assessment method. WTW assesses word analysis and orthographic knowledge. Spelling and orthographic knowledge have received increased attention as indicators of the acquisition of key skills related to reading, including phonemic awareness and the alphabetic principle. As Berninger (2019) has pointed out, orthographic knowledge requires bringing to mind the sounds within a word, matching letters with sounds and, in some cases, writing the letters. As the student becomes more sophisticated, s/he sounds out words and self-checks by blending the letters into a word. Orthographic knowledge ultimately is the application and integration of phonological information (i.e., analyzing the word at the subword level, which includes phonemes, rimes, or syllables), orthographic information (i.e., the retrieval of a whole word, a letter cluster unit, or a component letter), and morphological information (i.e., whether a word is composed of smaller meaning units). Skills related to orthographic knowledge have been shown to correlate highly with other reading skills (Berninger, 2019). WTW has the additional advantage of mitigating the method-variance problem.
References: Berninger, V. (2019). Reading and writing acquisition: A developmental neuropsychological perspective. New York: Routledge. Pearson Education. (2012). Words Their Way Inventory. Upper Saddle River, NJ: Pearson Education. Sterbinsky, A. (2007). Words Their Way Inventories: Reliability and validity analyses. Center for Research in Educational Policy, University of Memphis.
Given that part of our goal for this assessment is a behaviorally accurate skill assessment, why is it more valuable to correlate a reading assessment with a computer-adaptive assessment that takes 45 minutes to yield a score combining hundreds of items into a single latent factor that correlates with just about everything but has little utility for instructional design? If criterion-referenced assessments are prohibited by the TRC, this should be stated clearly in the FAQ for NCII. If so, it would signal that we have come full circle in screening on the idiographic/nomothetic dimension. Deno, the person credited with starting CBM in the 1970s, was very much focused on instruction, and his measures were brief and criterion-referenced. There is value in this idiographic approach, and value as well in a nomothetic approach measuring broad achievement with tests requiring 45 minutes to measure a single factor. Which method one values depends on the purpose of the assessment, and different purposes require different types of validity documentation. The TRC clearly recognizes this in its FAQ for screening tools.
Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
We used the 20th percentile on the criterion and on the predictor measure as the cut points. This cut point was chosen because there is wide agreement that students below the 20th percentile need intensive intervention; without such intervention, these students are unlikely to accomplish subsequent literacy goals. This cut point also appears to align with the goals of NCII. We contrasted only two groups: students at high risk vs. low risk. The analyses were performed using ROC analysis, and crosstabs were used to generate a 2 x 2 table (confusion matrix) yielding the classification data (a sketch of the cross-tabulation step appears below). The performance level descriptors for the iSTEEP assessments were as follows: (a) below the 20th percentile: Needs Intervention; (b) between the 20th and 40th percentiles: Below Benchmark, may need individual intervention; (c) above the 40th percentile: Above Benchmark, unlikely to need individual intervention. The percentage of students at each performance level was as follows: Needs Intervention, 20% of students; Below Benchmark, 23% of students; Above Benchmark, 57% of students.
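Below is a minimal sketch of the cut-point cross-tabulation described above, under stated assumptions: the score arrays are simulated stand-ins, and the 20th percentile is applied to both measures to form the 2 x 2 table. It illustrates the procedure only; the ROC analysis itself is not reproduced here, and this is not the study's code.

```python
import numpy as np

# Illustrative sketch: a student is "at risk" if below the 20th percentile,
# on the screener (predictor) and on the criterion; then cross-tabulate.
rng = np.random.default_rng(0)
screener = rng.normal(20, 6, size=229)               # stand-in screening scores
criterion = 0.8 * screener + rng.normal(0, 4, 229)   # stand-in criterion scores

screen_cut = np.percentile(screener, 20)
crit_cut = np.percentile(criterion, 20)
at_risk_screen = screener < screen_cut
at_risk_crit = criterion < crit_cut

tp = np.sum(at_risk_screen & at_risk_crit)     # a: true positives
fp = np.sum(at_risk_screen & ~at_risk_crit)    # b: false positives
fn = np.sum(~at_risk_screen & at_risk_crit)    # c: false negatives
tn = np.sum(~at_risk_screen & ~at_risk_crit)   # d: true negatives
print(tp, fp, fn, tn)
```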
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
Yes
If yes, please describe the intervention, what children received the intervention, and how they were chosen.
While many of the students were subsequently placed in intervention, this was the beginning of the year in kindergarten; hence, none had begun formal intervention.

Cross-Validation

Has a cross-validation study been conducted?
No
If yes,
Select time of year.
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
Do the cross-validation analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
If yes, please describe the intervention, what children received the intervention, and how they were chosen.

Classification Accuracy - Winter

Evidence Kindergarten
Criterion measure Words Their Way Inventory
Cut Points - Percentile rank on criterion measure 20
Cut Points - Performance score on criterion measure 21
Cut Points - Corresponding performance score (numeric) on screener measure 16
Classification Data - True Positive (a) 35
Classification Data - False Positive (b) 26
Classification Data - False Negative (c) 11
Classification Data - True Negative (d) 157
Area Under the Curve (AUC) 0.89
AUC Estimate’s 95% Confidence Interval: Lower Bound 0.85
AUC Estimate’s 95% Confidence Interval: Upper Bound 0.94
Statistics Kindergarten
Base Rate 0.20
Overall Classification Rate 0.84
Sensitivity 0.76
Specificity 0.86
False Positive Rate 0.14
False Negative Rate 0.24
Positive Predictive Power 0.57
Negative Predictive Power 0.93
Sample Kindergarten
Date January
Sample Size 229
Geographic Representation East North Central (IN)
Male  
Female  
Other  
Gender Unknown  
White, Non-Hispanic  
Black, Non-Hispanic  
Hispanic  
Asian/Pacific Islander  
American Indian/Alaska Native  
Other  
Race / Ethnicity Unknown  
Low SES  
IEP or diagnosed disability  
English Language Learner  
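The summary statistics reported above follow arithmetically from the 2 x 2 classification data. As a worked check using the table's own values (TP = 35, FP = 26, FN = 11, TN = 157):

```python
# Worked check of the reported Kindergarten winter statistics from the 2 x 2 table.
tp, fp, fn, tn = 35, 26, 11, 157
n = tp + fp + fn + tn                 # 229, matching the reported sample size

base_rate = (tp + fn) / n             # 0.20
overall = (tp + tn) / n               # 0.84 overall classification rate
sensitivity = tp / (tp + fn)          # 0.76
specificity = tn / (tn + fp)          # 0.86
false_positive_rate = fp / (fp + tn)  # 0.14
false_negative_rate = fn / (fn + tp)  # 0.24
ppp = tp / (tp + fp)                  # 0.57 positive predictive power
npp = tn / (tn + fn)                  # 0.93 negative predictive power
print(round(sensitivity, 2), round(specificity, 2), round(ppp, 2), round(npp, 2))
```

Each rounded value matches the figure reported in the Statistics block above.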

Reliability

Grade Kindergarten
Rating Convincing evidence
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
*Offer a justification for each type of reliability reported, given the type and purpose of the tool.
Study 1 (Alternate Form) Justification: Alternate-form reliability provides an indication of the consistency of a student's score at two different points in time. It also provides an indicator of the consistency of responses to different items, which is partially dependent on the equivalence of the forms. Study 2 (Inter-Rater) Justification: The consistency of student scores can be influenced by examiner error. Inter-rater reliability provides an estimate of the extent to which student scores contain error related to the examiner.
*Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
The sample included all students in the participating classes. The performance level descriptors for the iSTEEP assessments were as follows: (a) below the 20th percentile: Needs Intervention; (b) between the 20th and 40th percentiles: Below Benchmark, may need individual intervention; (c) above the 40th percentile: Above Benchmark, unlikely to need individual intervention. Across all reliability analyses, the median percentage of students at each performance level for the various samples was as follows: Needs Intervention, 21% of students; Below Benchmark, 24% of students; Above Benchmark, 55% of students.
*Describe the analysis procedures for each reported type of reliability.
Study 1: Two alternate forms were administered in a single setting, and the scores were used within a correlational analysis. Study 2: Audio recordings were made of student responses during a single assessment. Two different experienced assessors then independently scored each recording, and the two scoring protocols were examined for agreement on a word-by-word basis. Agreement was calculated by dividing the total number of agreements by the number of agreements plus disagreements (see the sketch below).
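Below is a minimal sketch of the word-by-word agreement calculation described for Study 2, using hypothetical ratings rather than the study data:

```python
# Inter-rater agreement = agreements / (agreements + disagreements),
# computed item by item across two raters' scoring protocols (illustrative data).
rater_1 = [1, 1, 0, 1, 1, 0, 1, 1]   # 1 = scored correct, 0 = scored error
rater_2 = [1, 1, 0, 1, 0, 0, 1, 1]

agreements = sum(a == b for a, b in zip(rater_1, rater_2))
agreement_rate = agreements / len(rater_1)
print(agreement_rate)  # -> 0.875 (7 agreements out of 8 items)
```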

*In the table(s) below, report the results of the reliability analyses described above (e.g., internal consistency or inter-rater reliability coefficients).

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
Provide citations for additional published studies.
Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
No

If yes, fill in data for each subgroup with disaggregated reliability data.

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
Provide citations for additional published studies.

Validity

Grade Kindergarten
Rating Convincing evidence
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
*Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
Grade K Concurrent Validity. For concurrent validity, Words Their Way was selected as the criterion measure. This assessment is considered an appropriate criterion because it measures early literacy skills that overlap with the skills assessed by the iSTEEP LSF. The Words Their Way Inventory (WTW; Pearson, 2012) purports to have adequate reliability and validity. According to Sterbinsky (2007), the assessment has reliability coefficients in the high .80s to low .90s, and concurrent and predictive validity coefficients in the upper .60s to mid .70s; its concurrent validity with the California Standards Test was .74. The criterion measure is not published by iSTEEP and is a completely independent assessment method. WTW assesses word analysis and spelling. Spelling has received increased attention as an indicator of the acquisition of key skills related to reading, including phonemic awareness and the alphabetic principle. As Berninger (2019) has pointed out, spelling requires bringing to mind the sounds within a word, matching letters with sounds, and finally writing the letters. As the student becomes more sophisticated, s/he sounds out the final spelled words and self-checks by blending the letters into a word. Spelling, then, is the application and integration of phonological information (i.e., analyzing the word at the subword level, which includes phonemes, rimes, or syllables), orthographic information (i.e., the retrieval of a whole word, a letter cluster unit, or a component letter), and morphological information (i.e., whether a word is composed of smaller meaning units). Spelling skills have been shown to correlate highly with some reading skills (Berninger, 2019). More specifically, Sterbinsky (2007) indicated that the concurrent validity of the WTW with the Word Analysis portion of the California Standards Test was .74. WTW has the additional advantage of mitigating the method-variance problem.
Predictive Validity. The criterion used for the predictive validity study was the Benchmark Assessment System (BAS; Fountas & Pinnell, 2016). This was an appropriate measure because letter sound fluency is an important foundational skill as students learn to read; hence, both theoretically and empirically, the STEEP LSF would be expected to be predictive of end-of-year reading expectations on the BAS. Studies on the reliability of this measure indicate a median reliability of .94, and the authors report concurrent validity coefficients with external measures ranging from the mid .60s to the mid .90s. Compton et al. (2010) reported concurrent validity coefficients with WIF and ORF measures in the .70s and .80s.
References: Berninger, V. (2019). Reading and writing acquisition: A developmental neuropsychological perspective. New York: Routledge. Compton, D. L., Fuchs, D., Fuchs, L. S., Bouton, B., Gilbert, J. K., Barquero, L. A., Cho, E., & Crouch, R. C. (2010). Selecting at-risk first-grade readers for early intervention: Eliminating false positives and exploring the promise of a two-stage gated screening process. Journal of Educational Psychology, 102(2), 327. Fountas, I. C., & Pinnell, G. S. (2016). Field study of reliability and validity of the Benchmark Assessment Systems 1 and 2. Portsmouth, NH: Heinemann. Pearson Education. (2012). Words Their Way Inventory. Upper Saddle River, NJ: Pearson Education. Sterbinsky, A. (2007). Words Their Way Inventories: Reliability and validity analyses. Center for Research in Educational Policy, University of Memphis.
Additional justification for WTW and BAS, based on the interim review:
Justification for WTW. In the interim review, the TRC questioned the use of WTW as an appropriate measure. Here we offer an explanation of how WTW helps us build a case for the validity of the assessment. A fundamental component of the validity of an assessment is accuracy as defined in behavior analysis: accuracy refers to the extent to which the observed value, the quantitative label produced by measuring an event, matches the true state, or true value. For example, if you run a known five-mile course, your GPS watch will provide a quantitative label for the distance that is sometimes more or less than the "true" distance of five miles. A good GPS will yield a measurement that is more accurate, meaning the observed value is closer to the true value. In classical measurement theory, this lack of accuracy might be labeled measurement error. From a behavioral perspective, it is simply bad measurement, because the focus is not a theoretical true score; the focus is on actual observed behavior. Literacy skills, such as letter-sound associations, are discrete behaviors that can be measured and taught.
Our concept of validity includes the supposition that assessment has little utility to schools if it is not linked to instruction. During screening, we want to identify students who are "low" on a screening measure. Yes, we tie this to national norms. However, we also situate the score in the context of instruction. This begins with the accuracy of the screened score and includes follow-up diagnostic testing to understand the conditions that have led to a low score on this assessment. The contributing conditions may include a lack of prerequisite skills, a lack of effective instruction, a lack of student motivation, and/or problems in how a student learns. Most frequently there is an issue with instruction and/or the skills the student learned previously. We use two frameworks for understanding instructional influences on student learning. First, students learn to read following a sequence (i.e., starting with phonological awareness, then letter-sound relations, etc.). The skill sequence we follow is based on the IES Practice Guide, Foundational Skills to Support Reading for Understanding in Kindergarten Through 3rd Grade (Foorman, 2016), which builds on the National Reading Panel report and summarizes 20 years of research on reading. This IES framework addresses what to teach and in what sequence. The second framework we use is the Instructional Hierarchy (Haring, Lovitt, Eaton, & Hansen, 1978), which addresses how to teach. Teaching a specific skill begins with establishing the skill through direct teaching, modeling, tell-show-do activities, etc. After the skill is established, fluency building is initiated using practice with feedback. Finally, as the student becomes fluent, teaching can focus on generalizing and using the skill in more complex comprehension activities. The Instructional Hierarchy has been well established through research on behavioral instructional design. Given these frameworks for what to teach and how to teach, we need our assessment to be accurate in determining which skills need attention, which do not, and where instruction should begin. Given this context and our goals for assessment, we use a broader view of validity than some other assessments.
We want our assessments to identify the correct students, and we also want them to have what has been called treatment validity: an empirical demonstration that use of an assessment leads to more accurately determined treatments (in this case, instruction), which in turn should produce superior student outcomes. In other words, if an assessment provides accurate information, it should lead to the design of more effective instruction. Hence, in building a case for validity, we include some criterion-referenced assessments. The WTW requires students to spell carefully selected words, each of which can be scored with respect to at least seven skills, beginning with initial sounds, then letter-sound relationships, then blending, etc. This skill sequence is very similar to the sequence recommended by IES for teaching reading. Low scores indicate students are just beginning to learn the beginning skills; higher scores mean students have acquired some beginning skills and moved on to more advanced ones. WTW has been shown to correlate with several measures of reading and early literacy. While we do not use WTW to the exclusion of other, more traditional assessments, we find it provides some indication that our assessments are accurately aligned with skills, and we conduct additional analyses, beyond the correlation coefficient, to see whether there is agreement between our measures of literacy and WTW. In addition to this criterion-referenced foundation, standard research has also been conducted on WTW, as summarized above (Sterbinsky, 2007). WTW assesses word analysis and orthographic knowledge; orthographic knowledge requires bringing to mind the sounds within a word, matching letters with sounds and, in some cases, writing the letters (Berninger, 2019), and it ultimately reflects the application and integration of phonological, orthographic, and morphological information. Skills related to orthographic knowledge have been shown to correlate highly with other reading skills (Berninger, 2019).
Given that part of our goal for this assessment is a behaviorally accurate skill assessment, why is it more valuable to correlate a reading assessment with a computer-adaptive assessment that takes 45 minutes to yield a score combining hundreds of items into a single latent factor that correlates with just about everything but has little utility for instructional design? If criterion-referenced assessments are prohibited by the TRC, this should be stated clearly in the FAQ for NCII. If so, it would signal that we have come full circle in screening on the idiographic/nomothetic dimension. Deno, the person credited with starting CBM in the 1970s, was very much focused on instruction, and his measures were brief (one to three minutes) and criterion-referenced. There is value in this idiographic approach, and value as well in a nomothetic approach measuring broad achievement with tests requiring 45 minutes to measure a single factor. Which method one values depends on the purpose of the assessment, and different purposes require different types of validity documentation. The TRC clearly recognizes this in its FAQ for screening tools.
Justification for BAS. In the interim review, the TRC questioned the use of the Benchmark Assessment System (BAS) as an appropriate measure. Here we offer an explanation of how the BAS helps us build a case for the validity of the assessment. The BAS is a criterion-referenced assessment focused on accurately assessing early reading skills. It takes a comprehensive approach to early literacy assessment that begins with the content validity of the passages and stimuli. Passages are divided into 26 levels of difficulty by evaluating texts on ten characteristics: (1) genre/form; (2) text structure; (3) content; (4) themes and ideas; (5) language and literary features; (6) sentence complexity; (7) vocabulary; (8) word difficulty; (9) illustrations/graphics; and (10) book and print features. Texts are leveled by teams of trained teachers who arrive at a consensus on the level of each text. This method differs markedly from that of other publishers, who typically rely on readability formulas to assign text difficulty. Research by Ardoin, Suldo, Witt, Aldrich, and McDonald (2005) has indicated that these formulas are inaccurate, do not agree with each other and, importantly, do not predict reading outcomes such as words correct per minute. The largest problem with the validity of readability formulas is that they do not take into consideration the learning history of the student. By relying on experienced grade-level teachers, the BAS improves on content validity because passages are leveled based on those teachers' knowledge of what students at specific grades know in terms of vocabulary, background knowledge, etc. The criterion-referenced character of the BAS helps us document the validity of our assessments for instruction; as explained in the WTW justification above, our view of validity emphasizes behavioral accuracy, linkage to instruction through the IES skill sequence and the Instructional Hierarchy, and treatment validity.
The BAS requires students to read, and assessors can score 32 different aspects of reading, ranging from simple errors to prosody and comprehension. It thus takes oral reading, a keystone skill in early literacy, and provides a much more detailed analysis of that skill than can be found elsewhere. The BAS has been shown to correlate with several measures of reading and early literacy. While we do not use the BAS to the exclusion of other, more traditional assessments, we find it provides some indication that our assessments are accurately aligned with skills, and we conduct additional analyses, beyond the correlation coefficient, to see whether there is agreement between our measures of literacy and the BAS. In addition to the criterion-referenced foundation of the BAS, standard research studies on the BAS have also been conducted. The BAS is considered an appropriate criterion because it measures early literacy skills that overlap with the skills assessed by the iSTEEP assessment. We begin our review of research on the BAS with the study by Compton et al. (2010), because of who authored the study and where it appeared: the authors included Lynn and Doug Fuchs, who were instrumental in founding what is now called NCII, and the study appeared in the Journal of Educational Psychology, one of the more respected journals in educational research. These authors reported concurrent validity coefficients with WIF and ORF measures in the .70s and .80s. In other research, Klingbeil, McComas, Burns, and Helman (2015), as well as Burns, Pulles, Maki, Kanive, Hodgson, Helman, and Preast (2015), reported moderate to strong correlations (.70s and .80s) between the BAS and AIMSweb ORF and NWEA MAP; adding the BAS to ORF also increased the variance accounted for from 40% to 54%. The authors of the BAS (Fountas & Pinnell, 2016) report reliability studies indicating a median reliability of .94, with concurrent validity coefficients against external measures ranging from the mid .60s to the mid .90s. In summary, the BAS has a strong approach to content validity, which helps us support the accuracy of the iSTEEP assessment, and it has internal and external research supporting its reliability and validity. As noted above, whether one values the idiographic or the nomothetic approach depends on the purpose of the assessment, and different purposes require different types of validity documentation.
*Describe the sample(s), including size and characteristics, for each validity analysis conducted.
Grade K Concurrent Validity Sample: The sample included a diverse group of 229 students from one midwestern state, representative of students across all performance levels. Predictive Validity Sample: The sample included a diverse group of 232 students from rural and suburban schools in one midwestern state, also representative of students across all performance levels. The performance level descriptors for the iSTEEP assessments were as follows: (a) below the 20th percentile: Needs Intervention; (b) between the 20th and 40th percentiles: Below Benchmark, may need individual intervention; (c) above the 40th percentile: Above Benchmark, unlikely to need individual intervention. Across all validity analyses, the median percentage of students at each performance level for the various samples was as follows: Needs Intervention, 20% of students; Below Benchmark, 28% of students; Above Benchmark, 52% of students.
*Describe the analysis procedures for each reported type of validity.
For both the concurrent and predictive validity samples, the scores from the iSTEEP screener and the criterion were analyzed using bivariate correlation (see the sketch below).
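Below is a minimal sketch of a bivariate (Pearson) correlational analysis of the kind described above; the score arrays are hypothetical stand-ins, not the study data.

```python
import numpy as np

# Bivariate (Pearson) correlation between screener and criterion scores.
# The arrays below are illustrative stand-ins, not the study data.
screener = np.array([12, 18, 25, 9, 30, 22, 15, 27])
criterion = np.array([14, 20, 24, 10, 33, 19, 16, 29])

r = np.corrcoef(screener, criterion)[0, 1]   # validity coefficient
print(round(r, 2))
```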

*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
Provide citations for additional published studies.
Describe the degree to which the provided data support the validity of the tool.
The validity coefficients for kindergarten provide moderate support for the use of the iSTEEP LSF assessment for early literacy screening.
Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
No

If yes, fill in data for each subgroup with disaggregated validity data.

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
No
Provide citations for additional published studies.

Bias Analysis

Grade Kindergarten
Rating No
Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
If yes,
a. Describe the method used to determine the presence or absence of bias:
b. Describe the subgroups for which bias analyses were conducted:
c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.

Data Collection Practices

Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.