i-Ready Diagnostic and Growth Monitoring
Mathematics

Summary

i-Ready Growth Monitoring is a brief, computer-delivered, periodic adaptive assessment in mathematics for students in grades K–8, assessing Number & Operations/The Number System, Algebra & Algebraic Thinking, Geometry, and Measurement & Data. Growth Monitoring is part of the i-Ready Diagnostic & Instruction suite and is designed to be used jointly with i-Ready Diagnostic to allow for progress monitoring throughout the year and to determine whether students are on track for appropriate growth. Growth Monitoring is designed to be administered monthly but may be administered as frequently as every week in which the i-Ready Diagnostic is not administered. i-Ready Growth Monitoring is a general outcome measure form of progress monitoring. The reports show whether students are on track for their target growth by projecting where their ability level will likely be at the end of the school year and comparing that projected growth to the growth targets. For students who are below level, Growth Monitoring can be used as a tool for Response to Intervention (RTI) programs. Curriculum Associates designed and developed i-Ready specifically to assess student mastery of state and Common Core State Standards (CCSS), and the program has been shown to be valid and reliable. The Growth Monitoring assessment takes approximately 15 minutes and may be conducted with all students or with specific groups of students who have been identified as at risk of academic failure. i-Ready's sophisticated adaptive algorithm automatically selects from thousands of multiple-choice and technology-enhanced items to get to the core of each student's strengths and challenges, regardless of the grade level at which he or she is performing. The depth of the item bank enables the assessment to pinpoint each student's ability and ensures the accuracy of results. The system automatically analyzes and scores student responses. Available as soon as a student completes the assessment, i-Ready's intuitive Growth Monitoring reports (available at the student and class levels) focus solely on how students are tracking toward their end-of-year growth.

Where to Obtain:
Curriculum Associates, LLC
info@cainc.com
153 Rangeway Road, N. Billerica MA 01862
800-225-0248
www.curriculumassociates.com
Initial Cost:
$6.00 per student
Replacement Cost:
$6.00 per student per year
Included in Cost:
$6.00/student/year for i-Ready Diagnostic for mathematics, which includes Growth Monitoring. The license fee includes online student access to the assessment, plus staff access to the management and reporting suite, downloadable lesson plans, and user resources including the i-Ready Central® support website; account set-up and secure hosting; all program maintenance/updates/enhancements during the active license term; and unlimited user access to U.S.-based service and support via toll-free phone and email during business hours. Professional development is required and available at an additional cost ($2,000 per session of up to six hours). Site license pricing is also available.
i-Ready is a fully web-based, vendor-hosted, Software-as-a-Service application. The per-student or site-based license fee includes account set-up and management; unlimited access to i-Ready's assessment, management, and reporting functionality; plus unlimited access to U.S.-based customer service/technical support and all program maintenance, updates, and enhancements for as long as the license remains active. The license fee also includes hosting, data storage, and data security. Via the i-Ready teacher and administrator dashboards and the i-Ready Central support website, educators may access comprehensive user guides and downloadable lesson plans, as well as implementation tips, best practices, video tutorials, and more to supplement onsite, fee-based professional development. These online resources are self-paced and available 24/7. Curriculum Associates engaged an independent consultant to thoroughly evaluate i-Ready Diagnostic's accessibility and provide recommendations on how best to support the broadest possible range of student learners. Overall, the report found that i-Ready "materials included significant functionality that indirectly supports… students with disabilities." The report also indicated ways to support these groups of students more directly, which we are in the process of prioritizing for future development. We are committed to meaningful ongoing enhancement and expansion of the program's accessibility. Diverse student groups experience success with the program largely due to its adaptive nature and program design. All items in i-Ready Diagnostic are designed to be accessible for most students. In a majority of cases, students who require accommodations (e.g., large print, extra time) will not require additional help during administration. The thoughtful planning Curriculum Associates invested in the general assessment design ensures that a large percentage of students requiring accommodations will have the necessary adjustments without compromising the interpretation or purpose of the test. To address the elements of Universal Design as they apply to large-scale assessment (http://www.cehd.umn.edu/nceo/onlinepubs/Synthesis44.html), Curriculum Associates considered several issues related to accommodations in developing i-Ready. Most may be grouped into the following general categories that i-Ready addresses:
• Timing and Flexible Scheduling: Students may need extra time to complete the task. The Growth Monitoring assessment may be stopped and started as needed to allow students needing extra time to finish. Growth Monitoring is untimed and can be administered in multiple test sessions. In fact, to ensure accurate results, a time limit is not recommended for any student, though administration must be completed within a period of no longer than 22 days.
• Accommodated Presentation of Material: All i-Ready items are presented in a large, easily legible format specifically chosen for its readability. i-Ready currently offers the ability to change the screen size; with the HTML5 items slated for a future release, users will also be able to adjust the font size. Only one item appears on the screen at a time. Most items for grade levels K–5 mathematics have optional audio support.
• Setting: Students may need to complete the task in a quiet room to minimize distraction. This can easily be done, as i-Ready is available on any computer with internet access that meets the technical requirements. Furthermore, all students are encouraged to use quality headphones in order to hear the audio portion of the items. Headphones also help to cancel out peripheral noise, which can be distracting to students.
• Response Accommodation: Students should be able to control a mouse; they only need to be able to move the cursor and to point, click, and drag. We are moving toward iPad® compatibility (see updates at www.i-Ready.com/support), which would add touchscreen input, potentially easier for students with motor impairments. Some schools report that they have successfully used i-Ready with a screen reader or other assistive technologies, but we cannot certify those applications at this time.
Training Requirements:
4–8 hours of training.
Qualified Administrators:
Paraprofessional or professional
Access to Technical Support:
Dedicated account manager plus unlimited access to in-house technical support during business hours.
Assessment Format:
  • Individual
  • Computer-administered
Scoring Time:
  • Scoring is automatic
  • 0 minutes of additional scoring time
Scores Generated:
  • Percentile score
  • IRT-based score
  • Developmental benchmarks
  • Other: on-grade achievement level placements
Administration Time:
  • 15 minutes per student
Scoring Method:
  • Automatically (computer-scored)
Technology Requirements:
  • Computer or tablet
  • Internet connection

Tool Information

Descriptive Information

Please provide a description of your tool:
i-Ready Growth Monitoring is a brief, computer-delivered, periodic adaptive assessment in mathematics for students in grades K–8, assessing Number & Operations/The Number System, Algebra & Algebraic Thinking, Geometry, and Measurement & Data. Growth Monitoring is part of the i-Ready Diagnostic & Instruction suite and is designed to be used jointly with i-Ready Diagnostic to allow for progress monitoring throughout the year and to determine whether students are on track for appropriate growth. Growth Monitoring is designed to be administered monthly but may be administered as frequently as every week in which the i-Ready Diagnostic is not administered. i-Ready Growth Monitoring is a general outcome measure form of progress monitoring. The reports show whether students are on track for their target growth by projecting where their ability level will likely be at the end of the school year and comparing that projected growth to the growth targets. For students who are below level, Growth Monitoring can be used as a tool for Response to Intervention (RTI) programs. Curriculum Associates designed and developed i-Ready specifically to assess student mastery of state and Common Core State Standards (CCSS), and the program has been shown to be valid and reliable. The Growth Monitoring assessment takes approximately 15 minutes and may be conducted with all students or with specific groups of students who have been identified as at risk of academic failure. i-Ready's sophisticated adaptive algorithm automatically selects from thousands of multiple-choice and technology-enhanced items to get to the core of each student's strengths and challenges, regardless of the grade level at which he or she is performing. The depth of the item bank enables the assessment to pinpoint each student's ability and ensures the accuracy of results. The system automatically analyzes and scores student responses. Available as soon as a student completes the assessment, i-Ready's intuitive Growth Monitoring reports (available at the student and class levels) focus solely on how students are tracking toward their end-of-year growth.
Is your tool designed to measure progress towards an end-of-year goal (e.g., oral reading fluency) or progress towards a short-term skill (e.g., letter naming fluency)?
not selected
selected
The tool is intended for use with the following grade(s).
not selected Preschool / Pre - kindergarten
selected Kindergarten
selected First grade
selected Second grade
selected Third grade
selected Fourth grade
selected Fifth grade
selected Sixth grade
selected Seventh grade
selected Eighth grade
not selected Ninth grade
not selected Tenth grade
not selected Eleventh grade
not selected Twelfth grade

The tool is intended for use with the following age(s).
not selected 0-4 years old
not selected 5 years old
not selected 6 years old
not selected 7 years old
not selected 8 years old
not selected 9 years old
not selected 10 years old
not selected 11 years old
not selected 12 years old
not selected 13 years old
not selected 14 years old
not selected 15 years old
not selected 16 years old
not selected 17 years old
not selected 18 years old

The tool is intended for use with the following student populations.
selected Students in general education
selected Students with disabilities
selected English language learners

ACADEMIC ONLY: What dimensions does the tool assess?

Reading
not selected Global Indicator of Reading Competence
not selected Listening Comprehension
not selected Vocabulary
not selected Phonemic Awareness
not selected Decoding
not selected Passage Reading
not selected Word Identification
not selected Comprehension

Spelling & Written Expression
not selected Global Indicator of Spelling Competence
not selected Global Indicator of Written Expression Competence

Mathematics
selected Global Indicator of Mathematics Comprehension
selected Early Numeracy
selected Mathematics Concepts
selected Mathematics Computation
selected Mathematics Application
selected Fractions
selected Algebra

Other
Please describe specific domain, skills or subtests:
Four domains are assessed within i-Ready Growth Monitoring for mathematics; each domain has corresponding sub-domains. The topics addressed in the Number and Operations/The Number System domain are: counting and cardinality; base ten—whole numbers and decimals (place value, compare, add, subtract, multiply, divide); fractions (model, compare, add, subtract, multiply, divide); rational numbers (model, compare, add, subtract, multiply, divide); and real and complex numbers (model, compare, add, subtract, multiply, divide). The topics addressed in the Algebra and Algebraic Thinking domain are: operations and algebraic thinking (fluency, number relationships, properties, solving word problems); expressions and equations (variables, exponents, solving word problems); ratio and proportional relationships (percent, rate, lines, and slope); functions (linear, exponential, quadratic, polynomial, logarithmic, trigonometric, rational, and interpreting functions); building functions; and systems of equations and inequalities. The topics addressed in the Geometry domain are: two-dimensional shapes; three-dimensional shapes; lines, segments, points, rays, and angles; symmetry and transformations; congruence and similarity; coordinate geometry; Pythagorean theorem; circles; and proofs. The topics addressed in the Measurement and Data domain are: measurement units and tools - customary and metric (time, money, length, capacity, weight, and mass); geometric measurement; area, perimeter, surface area, volume; creating and interpreting graphs; and statistics and probability (randomness, probability distributions, collecting and analyzing data, making inferences and conclusions based on probability and expected values, and correlations).

BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each sub-domain or sub-construct.
BEHAVIOR ONLY: Which category of behaviors does your tool target?

Acquisition and Cost Information

Where to obtain:
Email Address
info@cainc.com
Address
153 Rangeway Road, N. Billerica MA 01862
Phone Number
800-225-0248
Website
www.curriculumassociates.com
Initial cost for implementing program:
Cost
$6.00
Unit of cost
student
Replacement cost per unit for subsequent use:
Cost
$6.00
Unit of cost
student
Duration of license
year
Additional cost information:
Describe basic pricing plan and structure of the tool. Provide information on what is included in the published tool, as well as what is not included but required for implementation.
$6.00/student/year for i-Ready Diagnostic for mathematics, which includes Growth Monitoring. The license fee includes online student access to the assessment, plus staff access to the management and reporting suite, downloadable lesson plans, and user resources including the i-Ready Central® support website; account set-up and secure hosting; all program maintenance/updates/enhancements during the active license term; and unlimited user access to U.S.-based service and support via toll-free phone and email during business hours. Professional development is required and available at an additional cost ($2,000 per session of up to six hours). Site license pricing is also available.
Provide information about special accommodations for students with disabilities.
i-Ready is a fully web-based, vendor-hosted, Software-as-a-Service application. The per-student or site-based license fee includes account set-up and management; unlimited access to i-Ready's assessment, management, and reporting functionality; plus unlimited access to U.S.-based customer service/technical support and all program maintenance, updates, and enhancements for as long as the license remains active. The license fee also includes hosting, data storage, and data security. Via the i-Ready teacher and administrator dashboards and the i-Ready Central support website, educators may access comprehensive user guides and downloadable lesson plans, as well as implementation tips, best practices, video tutorials, and more to supplement onsite, fee-based professional development. These online resources are self-paced and available 24/7. Curriculum Associates engaged an independent consultant to thoroughly evaluate i-Ready Diagnostic's accessibility and provide recommendations on how best to support the broadest possible range of student learners. Overall, the report found that i-Ready "materials included significant functionality that indirectly supports… students with disabilities." The report also indicated ways to support these groups of students more directly, which we are in the process of prioritizing for future development. We are committed to meaningful ongoing enhancement and expansion of the program's accessibility. Diverse student groups experience success with the program largely due to its adaptive nature and program design. All items in i-Ready Diagnostic are designed to be accessible for most students. In a majority of cases, students who require accommodations (e.g., large print, extra time) will not require additional help during administration. The thoughtful planning Curriculum Associates invested in the general assessment design ensures that a large percentage of students requiring accommodations will have the necessary adjustments without compromising the interpretation or purpose of the test. To address the elements of Universal Design as they apply to large-scale assessment (http://www.cehd.umn.edu/nceo/onlinepubs/Synthesis44.html), Curriculum Associates considered several issues related to accommodations in developing i-Ready. Most may be grouped into the following general categories that i-Ready addresses:
• Timing and Flexible Scheduling: Students may need extra time to complete the task. The Growth Monitoring assessment may be stopped and started as needed to allow students needing extra time to finish. Growth Monitoring is untimed and can be administered in multiple test sessions. In fact, to ensure accurate results, a time limit is not recommended for any student, though administration must be completed within a period of no longer than 22 days.
• Accommodated Presentation of Material: All i-Ready items are presented in a large, easily legible format specifically chosen for its readability. i-Ready currently offers the ability to change the screen size; with the HTML5 items slated for a future release, users will also be able to adjust the font size. Only one item appears on the screen at a time. Most items for grade levels K–5 mathematics have optional audio support.
• Setting: Students may need to complete the task in a quiet room to minimize distraction. This can easily be done, as i-Ready is available on any computer with internet access that meets the technical requirements. Furthermore, all students are encouraged to use quality headphones in order to hear the audio portion of the items. Headphones also help to cancel out peripheral noise, which can be distracting to students.
• Response Accommodation: Students should be able to control a mouse; they only need to be able to move the cursor and to point, click, and drag. We are moving toward iPad® compatibility (see updates at www.i-Ready.com/support), which would add touchscreen input, potentially easier for students with motor impairments. Some schools report that they have successfully used i-Ready with a screen reader or other assistive technologies, but we cannot certify those applications at this time.

Administration

BEHAVIOR ONLY: What type of administrator is your tool designed for?
not selected
not selected
not selected
not selected
not selected
not selected
If other, please specify:

BEHAVIOR ONLY: What is the administration format?
not selected
not selected
not selected
not selected
not selected
If other, please specify:

BEHAVIOR ONLY: What is the administration setting?
not selected
not selected
not selected
not selected
not selected
not selected
not selected
If other, please specify:

Does the program require technology?

If yes, what technology is required to implement your program? (Select all that apply)
selected
selected
not selected

If your program requires additional technology not listed above, please describe the required technology and the extent to which it is combined with teacher small-group instruction/intervention:

What is the administration context?
selected
not selected    If small group, n=
not selected    If large group, n=
selected
not selected
If other, please specify:

What is the administration time?
Time in minutes
15
per (student/group/other unit)
student

Additional scoring time:
Time in minutes
0
per (student/group/other unit)

How many alternate forms are available, if applicable?
Number of alternate forms
per (grade/level/unit)

ACADEMIC ONLY: What are the discontinue rules?
selected
not selected
not selected
not selected
If other, please specify:

BEHAVIOR ONLY: Can multiple students be rated concurrently by one administrator?
If yes, how many students can be rated concurrently?

Training & Scoring

Training

Is training for the administrator required?
Yes
Describe the time required for administrator training, if applicable:
4–8 hours of training.
Please describe the minimum qualifications an administrator must possess.
Paraprofessional or professional
not selected No minimum qualifications
Are training manuals and materials available?
Yes
Are training manuals/materials field-tested?
No
Are training manuals/materials included in cost of tools?
Yes
If No, please describe training costs:
Can users obtain ongoing professional and technical support?
Yes
If Yes, please describe how users can obtain support:
Dedicated account manager plus unlimited access to in-house technical support during business hours.

Scoring

BEHAVIOR ONLY: What types of scores result from the administration of the assessment?
Score
Observation Behavior Rating
not selected Frequency
not selected Duration
not selected Interval
not selected Latency
not selected Raw score
Conversion
Observation Behavior Rating
not selected Rate
not selected Percent
not selected Standard score
not selected Subscale/ Subtest
not selected Composite
not selected Stanine
not selected Percentile ranks
not selected Normal curve equivalents
not selected IRT based scores
Interpretation
Observation Behavior Rating
not selected Error analysis
not selected Peer comparison
not selected Rate of change
not selected Dev. benchmarks
not selected Age-Grade equivalent
How are scores calculated?
not selected Manually (by hand)
selected Automatically (computer-scored)
not selected Other
If other, please specify:

Do you provide basis for calculating performance level scores?
Yes

What is the basis for calculating performance level and percentile scores?
not selected Age norms
selected Grade norms
not selected Classwide norms
not selected Schoolwide norms
not selected Stanines
not selected Normal curve equivalents

What types of performance level scores are available?
not selected Raw score
not selected Standard score
selected Percentile score
not selected Grade equivalents
selected IRT-based score
not selected Age equivalents
not selected Stanines
not selected Normal curve equivalents
selected Developmental benchmarks
not selected Developmental cut points
not selected Equated
not selected Probability
not selected Lexile score
not selected Error analysis
not selected Composite scores
not selected Subscale/subtest scores
selected Other
If other, please specify:
on-grade achievement level placements

Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
i-Ready scale scores are linear transformations of logit values. Logits, also known as "log odds units," are measurement units for logarithmic probability models such as the Rasch model. Logits are used to determine both student ability and item difficulty. Within the Rasch model, if the ability matches the item difficulty, then the person has a .50 chance of answering the item correctly. For i-Ready, student ability and item logit values generally range from around -6 to 6. When the i-Ready vertical scale was updated in August 2016, the equipercentile equating method was applied to the updated logit scale. The appropriate scaling constant and slope were applied to the logit value to convert to scale score values between 100 and 800 (Kolen and Brennan, 2014). This scaling is accomplished by converting the estimated logit values with the following equation: Scale Value = 499.38 + 37.81 × Logit Value. Once this conversion is made, floor and ceiling values are imposed to keep the scores within the 100–800 scale range. This is achieved by simply recoding all values below 100 up to 100 and all values above 800 down to 800. The scale score range, mean, and standard deviation on the updated scale are either exactly the same as (range) or very similar (mean and standard deviation) to those from the scale prior to the August 2016 scale update, which generally allows year-over-year comparisons of i-Ready scale scores. Additional information on the formulas used to derive raw scores is available from the Center upon request. i-Ready is a computer-adaptive test that uses Item Response Theory (IRT) to estimate a student's score. In addition to the measurement model used to provide student scores, i-Ready Growth Monitoring also has a projection model that yields projected scores, which are particularly useful to educators interested in progress monitoring. The Growth Monitoring projection model was developed after the first full-year implementation of the assessment. Several models were evaluated in an extensive research study, in collaboration with independent researchers from Harvard University. The model that had the best psychometric characteristics (e.g., low residual, low residual bias, consistent projection precision across the school year) and was operationally feasible was selected. The final projection model has the following key structural features:
• Projection is based on a weighted combination of two values:
  o The average across all test scores a student receives during the academic year, including Diagnostic and Growth Monitoring (grand mean, or GM)
  o The predicted end-of-year scale score based on a simple linear regression (linear prediction, or LP)
• Weighting of the GM and the LP is determined by fitting multiple linear regression models to the preceding year's assessment data on the relationship between GM and LP and the actual end-of-year Diagnostic test scores students obtained in the previous year. A set of multiple regression intercept and weighting factors is derived for each of the nine grades (K–8), two subjects, three ability groups based on fall percentile rank (bottom 25%, middle 50%, and top 25%), and eight months (October to May). Thus, a total of 432 (9 × 2 × 3 × 8) sets of model parameters are developed.
These structural features of the projection model have several advantages:
• Because model parameters are obtained from operational data, they can be updated yearly with the most current growth pattern from the past academic year.
• Because model parameters are obtained for three ability groups, the differential growth rate for students at the high and low ends of the ability spectrum is taken into consideration.
• Because model parameters are obtained for each month, the projection error stays low even at the beginning of the school year, when the number of data points is small.
To illustrate the accuracy of the Growth Monitoring projection model, all students from the 2014–2015 school year were randomly assigned into one of two samples: the training sample or the validation sample. The training sample was used to derive weighting parameters for each of the 432 models. These parameters were then applied to the validation sample. Figure 4 of the Technical Manual shows the normalized root-mean-square error (NRMSE) from the validation sample. NRMSE is zero when the prediction matches the actual test score perfectly; an NRMSE of less than .10 is considered adequate fit. Figure 4 of the Technical Manual shows that, while the prediction error is relatively higher in October, when only three months of test data are available and the projection is more than six months out, it quickly drops to a lower level (i.e., most values are below .10) in November and stays low and stable across the rest of the year. Section 2.2 of the i-Ready Technical Manual provides more details about the projection model. The methodology for setting growth targets is described in Chapter 6 of the i-Ready Technical Manual. Consumers interested in more detailed information should contact the publisher of the i-Ready Technical Manual, Curriculum Associates.
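To make the two calculations described above concrete, the following is a minimal, illustrative sketch in Python; it is not Curriculum Associates' production code. The 499.38 and 37.81 constants and the 100–800 floor/ceiling come from the passage above, while the regression weights, the 36-week year length, and the function names are hypothetical placeholders standing in for the parameters of one of the 432 grade-by-subject-by-ability-group-by-month models.

```python
# Illustrative sketch only. The scaling constants and clamping rule are quoted
# from the text above; the regression weights and year length are hypothetical.

def logit_to_scale(logit: float) -> int:
    """Apply the published linear transformation, then clamp to the 100-800 range."""
    scale = 499.38 + 37.81 * logit
    return int(round(min(max(scale, 100), 800)))

def projected_eoy_score(scores: list[float], weeks: list[float],
                        intercept: float, w_gm: float, w_lp: float) -> float:
    """Weighted combination of the grand mean (GM) and a simple linear prediction (LP).

    scores: all Diagnostic and Growth Monitoring scale scores so far
    weeks:  time of each score, in weeks from the start of the year
    intercept, w_gm, w_lp: hypothetical regression parameters for one
        grade x subject x ability-group x month cell
    """
    n = len(scores)
    gm = sum(scores) / n                      # grand mean of all scores to date

    # Simple linear regression of score on time, extrapolated to end of year.
    mean_w = sum(weeks) / n
    slope_num = sum((w - mean_w) * (s - gm) for w, s in zip(weeks, scores))
    slope_den = sum((w - mean_w) ** 2 for w in weeks) or 1.0
    slope = slope_num / slope_den
    end_of_year_week = 36                     # assumed school-year length
    lp = gm + slope * (end_of_year_week - mean_w)

    return intercept + w_gm * gm + w_lp * lp
```

The weighted-combination form mirrors the structural description above: when few data points are available early in the year, the fitted weights can lean on the grand mean, and later in the year they can lean on the linear prediction.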
Do you provide basis for calculating slope (e.g., amount of improvement per unit in time)?
ACADEMIC ONLY: Do you provide benchmarks for the slopes?
Yes
ACADEMIC ONLY: Do you provide percentile ranks for the slopes?
No
What is the basis for calculating slope and percentile scores?
not selected Age norms
selected Grade norms
not selected Classwide norms
not selected Schoolwide norms
not selected Stanines
not selected Normal curve equivalents

Describe the tool’s approach to progress monitoring, behavior samples, test format, and/or scoring practices, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
i-Ready Growth Monitoring is a brief, computer-delivered, periodic adaptive assessment in mathematics for students in grades K–8. Growth Monitoring is part of the i-Ready Diagnostic & Instruction suite and is designed to be used jointly with i-Ready Diagnostic to allow for progress monitoring throughout the year to determine whether students are on track for appropriate growth. Growth Monitoring is a periodic assessment that may be administered as frequently as every week in which the i-Ready Diagnostic is not administered. The reports for these brief assessments (an average duration of 15 minutes or less) show whether students are on track for their target growth by projecting where their ability level will likely be at the end of the school year and comparing that projected growth to the growth targets. For students who are below level, Growth Monitoring can be used as a tool for RTI programs. i-Ready Growth Monitoring is a general outcome measure form of progress monitoring. The reports associated with Growth Monitoring (available at the student and class levels) focus solely on how students are tracking toward their end-of-year growth. Curriculum Associates is committed to fair and unbiased product development. i-Ready is developmentally, linguistically, and culturally appropriate for a wide range of students at each of the assessed grades. For instance, the names, characters, and scenarios used within the program are ethnically and culturally diverse. We developed all items and passages in i-Ready to be accessible for all students regardless of their need for accommodation. In most cases, students who require accommodations (e.g., large print or extra time) will not require additional help to complete an i-Ready assessment. The design of the assessment emphasizes making necessary adjustments to the items so that a large percentage of students requiring accommodations will be able to take the test in a standard manner and the interpretation or purpose of the test is not compromised. According to the Standards (AERA, APA, NCME, 2014), "Universal Design processes strive to minimize access challenges by taking into account test characteristics that may impede access to the construct for certain test takers." i-Ready was developed with the universal principles of design for assessment in mind and followed the seven elements of Universal Design for large-scale assessments recommended by NCEO (2002):
1. Inclusive assessment population
2. Precisely defined constructs
3. Accessible, non-biased items
4. Amenable to accommodations
5. Simple, clear, and intuitive instructions and procedures
6. Maximum readability and comprehensibility
7. Maximum legibility
Curriculum Associates periodically runs differential item functioning (DIF) analysis to ensure that items are operating properly and to identify items that need to go through additional review by subject matter experts and key stakeholders to determine whether they should be removed from the item pool for further editing or replaced. Items with moderate and large DIF are subjected to this extensive review to identify the potential sources of differential functioning. We then determine whether each item should remain in the operational pool, be removed from the item pool, or be revised and resubmitted for field-testing.
DIF analysis and subsequent item reviews are important quality assurance procedures that support the validity of the items in the item pool, and they are carried out annually by Curriculum Associates following best practices in the field of educational measurement. Validity refers to the degree to which evidence and theory support the interpretations of scores used for the assessment (AERA, APA, NCME, 2014). Under the Rasch item response theory (IRT) model, the probability of a correct response to an item depends only on the item difficulty and the person's ability level. If an item favors one group of students over another based on the test taker's characteristics (e.g., gender, ethnicity), then the assumption of IRT is violated, and the item is considered biased and unfair. A biased item will exhibit DIF. DIF analysis is a procedure used to determine whether items are fair and appropriate for assessing the knowledge of various subgroups (e.g., gender and ethnicity groups) while controlling for ability. However, it should be noted that the presence of DIF alone is not evidence of item bias. Differences in item responses are expected when student groups differ in the knowledge or ability level being measured. Consequently, a difference in item performance obtained from groups of students with different ability levels does not represent item bias. The determination of bias, therefore, should be based not only on DIF analysis, but also on content experts' comprehensive review. The following describes the latest DIF analysis conducted on the i-Ready items. DIF was investigated using WINSTEPS® by comparing the item difficulty measure for two demographic categories in a pairwise comparison through a combined calibration analysis. The essence of this methodology is to investigate the interaction of the person-groups with each item, while fixing all other item and person measures to those from the combined calibration. The method used to detect DIF is based on the Mantel-Haenszel (MH) procedure and the work of Linacre and Wright (1989) and Linacre (2012). Typically, the group representing test takers in a specific demographic group is referred to as the focal group. The group made up of test takers from outside this group is referred to as the reference group. For example, for gender, Female is the focal group, and Male is the reference group. More information is provided in section 3.4 of the i-Ready Technical Manual. Consumers interested in more detailed information should contact the publisher of the i-Ready Technical Manual, Curriculum Associates.
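As a point of reference for the Rasch assumption described above, the sketch below shows the standard Rasch model probability, in which the chance of a correct response depends only on the difference between student ability and item difficulty (both in logits). The function name and example values are illustrative only and are not drawn from i-Ready materials.

```python
import math

def rasch_p_correct(theta: float, b: float) -> float:
    """Standard Rasch model: P(correct) depends only on ability (theta) minus
    item difficulty (b), both in logits; when theta equals b, P = .50."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Illustration: a student whose ability is 1 logit above an item's difficulty
# answers that item correctly about 73% of the time.
print(round(rasch_p_correct(0.5, -0.5), 2))  # 0.73
```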

Rates of Improvement and End of Year Benchmarks

Is minimum acceptable growth (slope of improvement or average weekly increase in score by grade level) specified in your manual or published materials?
Yes
If yes, specify the growth standards:
For grades K–8, respectively, our mathematics growth targets over a 30-week period are 29, 28, 26, 26, 23, 18, 13, 11, and 10.
Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?
Yes
If yes, specify the end-of-year performance standards:
This information is provided directly to districts and schools as part of our support process.
What is the basis for specifying minimum acceptable growth and end of year benchmarks?
not selected
selected
not selected Other
If other, please specify:

If norm-referenced, describe the normative profile.

National representation (check all that apply):
Northeast:
not selected New England
not selected Middle Atlantic
Midwest:
not selected East North Central
not selected West North Central
South:
not selected South Atlantic
not selected East South Central
not selected West South Central
West:
not selected Mountain
not selected Pacific

Local representation (please describe, including number of states)
Date
Size
Gender (Percent)
Male
Female
Unknown
SES indicators (Percent)
Eligible for free or reduced-price lunch
Other SES Indicators
Race/Ethnicity (Percent)
White, Non-Hispanic
Black, Non-Hispanic
Hispanic
American Indian/Alaska Native
Asian/Pacific Islander
Other
Unknown
Disability classification (Please describe)


First language (Please describe)


Language proficiency status (Please describe)
Do you provide, in your user’s manual, norms which are disaggregated by race or ethnicity? If so, for which race/ethnicity?
not selected White, Non-Hispanic
not selected Black, Non-Hispanic
not selected Hispanic
not selected American Indian/Alaska Native
not selected Asian/Pacific Islander
not selected Other
not selected Unknown

If criterion-referenced, describe procedure for specifying criterion for adequate growth and benchmarks for end-of-year performance levels.
The setting of the Diagnostic performance levels in each grade was based on four years of research on data collected from national panels of accomplished teachers and from statewide testing programs. These performance levels reflect the knowledge and skill levels of students who are "early on grade level" and "mid on grade level" in each grade and subject area. The i-Ready growth targets in each grade and subject area stem from these performance levels and reflect the levels of progress expected with respect to the knowledge and skills targeted by i-Ready Diagnostic and the CCSS in each grade level. Specifically, a modified Bookmark standard setting was used to determine criterion-referenced growth targets, which were launched in the system in 2013. Appendix L in the i-Ready Technical Manual provides information on how the historical criterion-referenced growth targets were calculated. Because i-Ready Diagnostic underwent a recalibration for the 2014–2015 school year and a new Contrasting Groups standard setting was conducted in spring 2014, a rigorous review of the growth targets was conducted in summer 2015 to determine if changes to these growth targets should be made. The detailed descriptions of the standard-setting process and setting the criterion-referenced growth targets are provided in Chapter 6 of the i-Ready Technical Manual.

Describe any other procedures for specifying adequate growth and minimum acceptable end of year performance.

Performance Level

Reliability

Grade Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Rating Convincing evidence Convincing evidence Convincing evidence Convincing evidence Convincing evidence Convincing evidence
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
*Offer a justification for each type of reliability reported, given the type and purpose of the tool.
For the i-Ready Diagnostic, Curriculum Associates reports the IRT-based marginal reliability as well as the standard error of measurement (SEM). Given that the i-Ready Diagnostic is a computer-adaptive assessment that does not have a fixed form, some traditional reliability estimates, such as Cronbach's alpha, are inappropriate for quantifying the consistency of student scores. The IRT analogue to classical reliability is called marginal reliability, and it operates on the variance of the theta scores (i.e., proficiency) and the average of the expected error variance. Marginal reliability uses the classical definition of reliability as the proportion of variance in the total observed score due to true score under an IRT model (specifically, the i-Ready Diagnostic uses a Rasch model). In addition to marginal reliability, SEMs are also important for quantifying the precision of scores. In an IRT model, SEMs are affected by factors such as how well the data fit the underlying model, student response consistency, student location on the ability continuum, match of items to student ability, and test length. Given the adaptive nature of i-Ready and the wide difficulty range in the item bank, standard errors are expected to be low and very close to the theoretical minimum for tests of similar length. The theoretical minimum would be reached if each interim estimate of student ability were assessed by an item whose difficulty perfectly matches the student's ability as estimated from previous items. Theoretical minimums are restricted by the number of items served in the assessment: the more items that are served, the lower the SEM can potentially be. For mathematics, the minimum SEM for overall scores is 6.00. In addition to the mean SEM by subject and grade, graphical representations of the conditional standard errors of measurement (CSEM) provide additional evidence of the precision with which i-Ready measures student ability across the operational score scale. In the context of model-based reliability analyses for computer-adaptive tests such as i-Ready, CSEM plots permit test users to judge the relative precision of the estimate. These figures are available from the Center upon request.
*Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
Data for obtaining the marginal reliability and SEM were from the August and September 2016 administrations of the i-Ready Diagnostic (reported in Table 4.4 of the i-Ready Diagnostic Technical Manual). All students tested within that time frame were included, and this period was selected because it coincides with most districts' first administration of the i-Ready Diagnostic. Sample sizes by grade are presented in the table shown under question #4 on the next page.
*Describe the analysis procedures for each reported type of reliability.
This marginal reliability uses the classical definition of reliability as the proportion of variance in the total observed score due to true score. The true score variance is computed as the observed score variance minus the error variance: ρ_θ = (σ_θ² − σ̄_E²) / σ_θ², where ρ_θ is the marginal reliability estimate, σ_θ² is the observed variance of the ability estimates, and σ̄_E² is the observed average conditional error variance. Similar to a classical reliability coefficient, the marginal reliability estimate increases as the standard error decreases; it approaches 1 when the standard error approaches 0. The observed score variance, the error variance, and the SEM (the square root of the error variance) are obtained through WINSTEPS calibrations. A separate calibration was conducted for each grade.
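The formula above reduces to a one-line computation. The sketch below restates it in Python with hypothetical variance values, purely to illustrate how a marginal reliability coefficient is derived from the observed variance of the ability estimates and the average conditional error variance; the numbers are not figures from the i-Ready calibrations.

```python
def marginal_reliability(theta_variance: float, mean_error_variance: float) -> float:
    """Marginal reliability as defined above: the proportion of observed score
    variance attributable to true score, (sigma_theta^2 - mean sigma_E^2) / sigma_theta^2."""
    return (theta_variance - mean_error_variance) / theta_variance

# Hypothetical illustration: if the ability estimates have a variance of 1.40
# (logits squared) and the average conditional error variance is 0.10, the
# marginal reliability is about 0.93.
print(round(marginal_reliability(1.40, 0.10), 2))  # 0.93
```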

*In the table(s) below, report the results of the reliability analyses described above (e.g., model-based evidence, internal consistency or inter-rater reliability coefficients). Include detail about the type of reliability data, statistic generated, and sample size and demographic information.

Type of | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
No
Provide citations for additional published studies.
Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
No

If yes, fill in data for each subgroup with disaggregated reliability data.

Type of | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
No
Provide citations for additional published studies.

Validity

Grade Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Rating Convincing evidence Convincing evidence Convincing evidence Convincing evidence Convincing evidence Convincing evidence
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
*Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
The North Carolina End-of-Grade (NC EOG) mathematics tests measure student performance on the grade-level competencies specified by North Carolina Public Schools. Ohio's State Tests (OST) in mathematics measure the knowledge and skills specified by Ohio's Learning Standards. The Mississippi Academic Assessment Program (MAAP) measures student achievement in relation to the Mississippi College and Career Readiness Standards for Mathematics. The Florida Standards Assessments (FSA) in mathematics measure student achievement in relation to the education standards outlined by the Florida Department of Education. These criteria are appropriate because they measure the knowledge and skills specified by the educational standards of four different states.
*Describe the sample(s), including size and characteristics, for each validity analysis conducted.
The samples described in this section were selected specifically to be representative of their states in terms of urbanicity; district size; proportion of English language learners and students with disabilities; and proportion of students eligible for free or reduced-price lunch. The North Carolina sample consisted of 38,049 students from 12 school districts and 202 schools across the state of North Carolina. The Ohio sample consisted of 10,315 students from 10 school districts and 62 schools across the state of Ohio. The Mississippi sample consisted of 20,545 students from 13 school districts and 78 schools across the state of Mississippi. The Florida sample consisted of 222,686 students from 13 school districts and 816 schools across the state of Florida.
*Describe the analysis procedures for each reported type of validity.
For the North Carolina and Ohio studies, correlations were calculated between the given state assessment (administered in spring 2016) and the last i-Ready Diagnostic administration in spring 2016. The state assessments were administered within 1–3 months of the i-Ready Diagnostic. For the Mississippi and Florida studies, correlations were calculated between the given state assessment (administered in spring 2017) and the first i-Ready Diagnostic administration in fall 2016. The state assessments were administered 4–10 months after the i-Ready Diagnostic. Fisher's r-to-z transformation was used to obtain the 95% confidence interval for the correlation coefficients in all studies.
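For readers who want to reproduce the confidence-interval step, the sketch below shows a generic Fisher r-to-z interval for a correlation coefficient. The correlation and sample size in the example are hypothetical and are not figures from the studies above.

```python
import math

def fisher_ci(r: float, n: int, z_crit: float = 1.96) -> tuple[float, float]:
    """95% confidence interval for a correlation via Fisher's r-to-z transformation."""
    z = math.atanh(r)                    # Fisher z transform of the correlation
    se = 1.0 / math.sqrt(n - 3)          # standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

# Hypothetical example: r = 0.85 with n = 10,315 examinees yields a tight interval.
print([round(x, 3) for x in fisher_ci(0.85, 10315)])  # approximately [0.845, 0.855]
```

With samples of this size, the interval width is driven almost entirely by n, which is why the large state samples described above support precise correlation estimates.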

*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.

Type of | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published reliability studies:
No
Provide citations for additional published studies.
Describe the degree to which the provided data support the validity of the tool.
The data show that the i-Ready Diagnostic is highly correlated with both near-term and future state assessment scores. The inclusion of four different state assessments shows that i-Ready is a general measure of students’ knowledge and skills in mathematics standards across states.
Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?

If yes, fill in data for each subgroup with disaggregated validity data.

Type of | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published reliability studies:
Provide citations for additional published studies.

Bias Analysis

Grade Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Rating Yes Yes Yes Yes Yes Yes
Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
Yes
If yes,
a. Describe the method used to determine the presence or absence of bias:
DIF was investigated using WINSTEPS® (Version 3.92) by comparing item difficulty for pairs of demographic subgroups through a combined calibration analysis. This methodology evaluates the interaction of the person-level subgroups with each item, while fixing all other item and person measures to those from the combined calibration. The method used to detect DIF is based on the Mantel-Haenszel (MH) procedure and the work of Linacre and Wright (1989) and Linacre (2012). Typically, the groups of test takers are referred to as the "reference" and "focal" groups. For example, for the analysis of gender bias, Female test takers are the focal group and Male test takers are the reference group. More information is provided in section 3.4 of the i-Ready Technical Manual. Consumers interested in more detailed information should contact the publisher of the i-Ready Technical Manual, Curriculum Associates.
b. Describe the subgroups for which bias analyses were conducted:
The latest large-scale DIF analysis included a random sample (20%) of students from the 2015–2016 i-Ready operational data. Given the large size of the 2015–2016 i-Ready student population, it is practical to carry out the calibration analysis with a random sample. The following demographic categories were compared: Female vs. Male; African American and Hispanic vs. Caucasian; English Learner vs. non-English Learner; Special Education vs. General Education; Economically Disadvantaged vs. Not Economically Disadvantaged. In each pairwise comparison, estimates of item difficulty for each category in the comparison were calculated. The table below presents the total number and percentage of students included in the DIF analysis.
Subgroup | n | Percent
Male | 267,200 | 52
Female* | 247,000 | 48
White | 126,400 | 34.1
African American or Hispanic* | 244,100 | 65.9
Non-EL | 262,700 | 80.8
EL* | 62,400 | 19.2
General Education | 181,000 | 85.1
Special Education* | 31,600 | 14.9
Not Economically Disadvantaged | 192,100 | 67.1
Economically Disadvantaged* | 94,100 | 32.9
*Denotes the focal group
c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.
All active items in the current item pool for the 2015–2016 school year were included in the DIF analysis; the total number of items is 3,103 for mathematics. WINSTEPS was used to conduct the calibration for the DIF analysis by grade. To help interpret the results, the Educational Testing Service (ETS) criteria using the delta method were used to categorize DIF (Zwick, Thayer, & Lewis, 1999), as shown below.
ETS DIF Category
A (negligible): |DIF| < 0.43
B (moderate): 0.43 ≤ |DIF| < 0.64
C (large): |DIF| ≥ 0.64
B- or C- suggests DIF against the focal group; B+ or C+ suggests DIF against the reference group.
Tables reporting the numbers and percentages of items exhibiting DIF for each of the demographic categories are available, upon request, from the Center. The majority of items showed negligible DIF (at least 90 percent), and for very few categories did more than 3 percent of items show large DIF (level C) by grade.
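A minimal sketch of the ETS delta-method categorization described above follows. The thresholds are the ones cited in the text; the sign handling (a negative contrast meaning DIF against the focal group) is an assumption for illustration only, since the direction depends on how the DIF contrast is computed.

```python
def ets_dif_category(dif_contrast: float) -> str:
    """Classify a DIF contrast (in delta units) using the ETS A/B/C thresholds.

    Sign handling is an illustrative assumption: a negative contrast is treated
    here as DIF against the focal group ('-'), a positive contrast as DIF
    against the reference group ('+').
    """
    size = abs(dif_contrast)
    if size < 0.43:
        return "A (negligible)"
    level = "B (moderate)" if size < 0.64 else "C (large)"
    direction = "-" if dif_contrast < 0 else "+"
    return level + direction

# Example: a contrast of -0.50 delta units falls in category B-.
print(ets_dif_category(-0.50))  # B (moderate)-
```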

Growth Standards

Sensitivity: Reliability of Slope

Grade Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Rating Data unavailable Data unavailable Data unavailable Data unavailable Data unavailable Data unavailable
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
Describe the sample, including size and characteristics. Please provide documentation showing that the sample was composed of students in need of intensive intervention. A sample of students with intensive needs should satisfy one of the following criteria: (1) all students scored below the 30th percentile on a local or national norm, or the sample mean on a local or national test fell below the 25th percentile; (2) students had an IEP with goals consistent with the construct measured by the tool; or (3) students were non-responsive to Tier 2 instruction. Evidence based on an unknown sample, or a sample that does not meet these specifications, may not be considered.
Describe the frequency of measurement (for each student in the sample, report how often data were collected and over what span of time).
Describe the analysis procedures.

In the table below, report reliability of the slope (e.g., ratio of true slope variance to total slope variance) by grade level (if relevant).

Type of | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
Provide citations for additional published studies.
Do you have reliability of the slope data that is disaggregated by subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)?

If yes, fill in data for each subgroup with disaggregated reliability of the slope data.

Type of | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
Provide citations for additional published studies.

Sensitivity: Validity of Slope

Grade Grade 3 Grade 4 Grade 5 Grade 6 Grade 7 Grade 8
Rating Data unavailable Data unavailable Data unavailable Data unavailable Data unavailable Data unavailable
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
Describe the sample(s), including size and characteristics. Please provide documentation showing that the sample was composed of students in need of intensive intervention. A sample of students with intensive needs should satisfy one of the following criteria: (1) all students scored below the 30th percentile on a local or national norm, or the sample mean on a local or national test fell below the 25th percentile; (2) students had an IEP with goals consistent with the construct measured by the tool; or (3) students were non-responsive to Tier 2 instruction. Evidence based on an unknown sample, or a sample that does not meet these specifications, may not be considered.
Describe the frequency of measurement (for each student in the sample, report how often data were collected and over what span of time).
Describe the analysis procedures for each reported type of validity.

In the table below, report predictive validity of the slope (correlation between the slope and achievement outcome) by grade level (if relevant).
NOTE: The TRC suggests controlling for initial level when the correlation for slope without such control is not adequate.

Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
Provide citations for additional published studies.
Describe the degree to which the provided data support the validity of the tool.
Do you have validity of the slope data that is disaggregated by subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)?

If yes, fill in data for each subgroup with disaggregated validity of the slope data.

Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
Provide citations for additional published studies.

Alternate Forms

Grade:  Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 | Grade 8
Rating: Convincing evidence | Convincing evidence | Convincing evidence | Convincing evidence | Convincing evidence | Convincing evidence
Legend
Full Bubble = Convincing evidence
Half Bubble = Partially convincing evidence
Empty Bubble = Unconvincing evidence
Null Bubble = Data unavailable
d = Disaggregated data available
Describe the sample for these analyses, including size and characteristics:
The i Ready assessment forms are assembled automatically by Curriculum Associates’ computer-adaptive testing (CAT) algorithm, subject to objective content and other constraints described in section 2.1.3 in Chapter 2 of the i Ready Technical Manual. As such, the sample size per form that would be applicable to linear (i.e., non-adaptive) assessments does not directly apply to the i Ready Diagnostic assessment. Note that many analyses that Curriculum Associates conducts (e.g., to estimate growth targets) are based on normative samples, which, for the 2015–2016 school year, included 3.9 million i Ready Diagnostic assessments taken by more than one million students from over 4,000 schools. The demographics of the normative sample at each grade closely match those of the national student population. Tables 7.3 and 7.4 of the Technical Manual present the sample sizes for each normative sample and the demographics of the samples compared with the latest population target, as reported by the National Center for Education Statistics. Consumers interested in more detailed information should contact the publisher of the i Ready Technical Manual, Curriculum Associates.
What is the number of alternate forms of equal and controlled difficulty?
Virtually infinite. Because i Ready is a computer-adaptive test, all administrations are equivalent forms. Each student receives an individualized testing experience in which test items are served based on his or her responses to previous questions. In essence, this provides a virtually infinite number of test forms, because individual students’ testing experiences are largely unique.
If IRT based, provide evidence of item or ability invariance
Section 2.1.3 in Chapter 2 of the i Ready Technical Manual describes the adaptive nature of the tests and how the item selection process works. The i Ready Growth Monitoring assessments are a general outcome measure of student ability and measure a subset of the skills tested on the Diagnostic. Growth Monitoring items are drawn from the same domain item pool as the Diagnostic, and items are served using the same IRT ability estimate and item selection logic.

Test developers often want to show that the items in their measure are invariant, meaning the items measure different groups similarly. To illustrate item invariance across i Ready test takers in need of intensive intervention (i.e., below the national norming sample’s 30th percentile rank in overall mathematics scale score) and those without such need (i.e., at or above the 30th percentile rank), a special set of item calibrations was prepared. Items were calibrated independently within each subgroup, and correlations between the two sets of item difficulty estimates were computed to demonstrate the extent to which i Ready parameter estimates are appropriate for use with both groups. For each correlation, a corresponding confidence interval was constructed using Fisher’s r-to-z transformation (Fisher, R. A. [1915]. Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population. Biometrika, 10[4], 507–521). The correlations and their confidence intervals serve as a measure of the consistency between the item difficulty estimates for the two groups.

Student response data for the item invariance analyses came from the August and September 2017 administrations of the i Ready Diagnostic. Students tested within this timeframe were subject to the same inclusion rules that Curriculum Associates uses for new item calibration (i.e., embedded field testing). This administration window was selected because it coincides with most districts’ first administration of the i Ready Diagnostic. To ensure appropriately precise item parameter estimates, the analysis was restricted to items answered by at least 300 students in each group (below and at or above the 30th percentile rank). Subgroup sample sizes, item counts, and correlation results by grade for mathematics are presented in the table below.

Analysis        | Grade | n (< 30th %ile) | n (≥ 30th %ile) | Items | Coefficient | Confidence Interval
Item Invariance | K     | 75,436          | 136,444         | 227   | 0.886       | [0.854, 0.911]
Item Invariance | 1     | 106,874         | 263,264         | 383   | 0.832       | [0.798, 0.860]
Item Invariance | 2     | 146,696         | 277,506         | 470   | 0.861       | [0.836, 0.883]
Item Invariance | 3     | 167,020         | 315,559         | 467   | 0.849       | [0.821, 0.872]
Item Invariance | 4     | 160,444         | 338,955         | 540   | 0.826       | [0.798, 0.851]
Item Invariance | 5     | 163,664         | 328,824         | 603   | 0.825       | [0.798, 0.849]
Item Invariance | 6     | 146,499         | 247,250         | 623   | 0.797       | [0.767, 0.824]
Item Invariance | 7     | 121,737         | 215,261         | 655   | 0.788       | [0.757, 0.815]
Item Invariance | 8     | 116,054         | 185,534         | 679   | 0.787       | [0.756, 0.814]

Note: Counts of students include all measurement occasions and hence may include the same unique student tested more than once.
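To make the reported computation concrete, the following is a minimal sketch — not Curriculum Associates’ code — of how a correlation between two sets of independently calibrated item difficulties, and an approximate 95% confidence interval via Fisher’s r-to-z transformation, might be computed. The input arrays of difficulty estimates are hypothetical.

```python
# Minimal sketch (not the vendor's implementation): correlate item difficulty
# estimates calibrated separately for two subgroups and build an approximate 95%
# confidence interval using Fisher's r-to-z transformation. Inputs are hypothetical.
import numpy as np

def invariance_correlation(b_below, b_at_or_above):
    b1 = np.asarray(b_below, dtype=float)        # difficulties, below-30th-percentile group
    b2 = np.asarray(b_at_or_above, dtype=float)  # difficulties, at-or-above group
    n = len(b1)                                  # number of items calibrated in both groups
    r = float(np.corrcoef(b1, b2)[0, 1])         # Pearson correlation of difficulty estimates
    z = np.arctanh(r)                            # Fisher r-to-z transformation
    se = 1.0 / np.sqrt(n - 3)                    # large-sample standard error of z
    lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
    return r, (float(lo), float(hi))

# Example with made-up difficulty estimates for ten items:
rng = np.random.default_rng(0)
b_true = rng.normal(size=10)
r, ci = invariance_correlation(b_true + rng.normal(scale=0.1, size=10),
                               b_true + rng.normal(scale=0.1, size=10))
print(round(r, 3), [round(v, 3) for v in ci])
```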
If computer administered, how many items are in the item bank for each grade level?
For grades 1-8, typical item pool sizes are 1670, 1864, 2087, 2311, 2554, 2665, 2794, and 2913, respectively. Students who perform at an extremely high level will be served with items from grade levels higher than the grade level restriction.
If your tool is computer administered, please note how the test forms are derived instead of providing alternate forms:
The i Ready Diagnostic and Growth Monitoring tests are computer adaptive, meaning the items presented to each student vary depending on how the student has responded to the previous items. The first item is randomly selected from a set of five items around a predetermined starting difficulty level; after each item is completed, the interim ability estimate is updated, and the next item is chosen relative to that new estimate. Thus, the items better target the estimated student ability, and more information is obtained from each item presented.
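For illustration only, the sketch below shows a generic computer-adaptive loop of the kind described above: a first item drawn at random from the five items nearest a starting difficulty, followed by items chosen to be most informative at the updated interim ability estimate. The Rasch-style information function and the simple step-based ability update are generic stand-ins, not i Ready’s proprietary algorithm.

```python
# Illustrative sketch of a generic computer-adaptive loop, NOT i Ready's algorithm.
# Assumes a Rasch-style item bank where each item is represented by a difficulty value.
import math
import random

def item_information(theta, difficulty):
    """Rasch item information at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - difficulty)))
    return p * (1.0 - p)

def run_adaptive_test(item_bank, administer, start_difficulty=0.0, test_length=15):
    """item_bank: list of item difficulties; administer(i): returns 1 (correct) or 0."""
    theta = start_difficulty
    available = list(range(len(item_bank)))
    # First item: random pick from the five items nearest the starting difficulty.
    nearest = sorted(available, key=lambda i: abs(item_bank[i] - start_difficulty))[:5]
    current = random.choice(nearest)
    responses = []
    for _ in range(test_length):
        available.remove(current)
        correct = administer(current)
        responses.append((current, correct))
        # Stub interim ability update: step up after a correct response, down otherwise.
        theta += 0.5 if correct else -0.5
        if not available:
            break
        # Next item: maximize information at the new interim ability estimate.
        current = max(available, key=lambda i: item_information(theta, item_bank[i]))
    return theta, responses
```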

Decision Rules: Setting & Revising Goals

Grade:  Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 | Grade 8
Rating: Data unavailable | Data unavailable | Data unavailable | Data unavailable | Data unavailable | Data unavailable
Legend
Full Bubble = Convincing evidence
Half Bubble = Partially convincing evidence
Empty Bubble = Unconvincing evidence
Null Bubble = Data unavailable
d = Disaggregated data available
In your manual or published materials, do you specify validated decision rules for how to set and revise goals?
No
If yes, specify the decision rules:
What is the evidentiary basis for these decision rules?
NOTE: The TRC expects evidence for this standard to include an empirical study that compares a treatment group to a control and evaluates whether student outcomes increase when decision rules are in place.

Decision Rules: Changing Instruction

Grade:  Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 | Grade 8
Rating: Data unavailable | Data unavailable | Data unavailable | Data unavailable | Data unavailable | Data unavailable
Legend
Full Bubble = Convincing evidence
Half Bubble = Partially convincing evidence
Empty Bubble = Unconvincing evidence
Null Bubble = Data unavailable
d = Disaggregated data available
In your manual or published materials, do you specify validated decision rules for when changes to instruction need to be made?
No
If yes, specify the decision rules:
What is the evidentiary basis for these decision rules?
NOTE: The TRC expects evidence for this standard to include an empirical study that compares a treatment group to a control and evaluates whether student outcomes increase when decision rules are in place.

Data Collection Practices

Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.