i-Ready Diagnostic and Growth Monitoring

Mathematics

Cost

Technology, Human Resources, and Accommodations for Special Needs

Service and Support

Purpose and Other Implementation Information

Usage and Reporting

Initial Cost:

$6.00 per student

 

Replacement Cost:

$6.00 per student per year. Annual license renewal fee subject to change.

 

Included in Cost:

The license fee includes online student access to the assessment; staff access to the management and reporting suite, downloadable lesson plans, and user resources, including the i-Ready Central® support website; account set-up and secure hosting; all program maintenance, updates, and enhancements during the active license term; and unlimited user access to U.S.-based service and support via toll-free phone and email during business hours. The license fee also covers hosting, data storage, and data security.

 

Via the i-Ready teacher and administrator dashboards and i-Ready Central support website, educators may access comprehensive user guides and downloadable lesson plans, as well as implementation tips, best practices, video tutorials, and more to supplement onsite, fee-based professional development. These online resources are self-paced and available 24/7.

 

Professional development is required and is available at an additional cost ($2,000 per session of up to six hours).

Technology Requirements:

  • Computer or tablet
  • Internet connection

 

Training Requirements:

  • 4-8 hours of training

 

Qualified Administrators:

  • Paraprofessionals
  • Professionals

 

Accommodations:

Curriculum Associates engaged an independent consultant to thoroughly evaluate i‑Ready Diagnostic’s accessibility and provide recommendations on how best to support the broadest possible range of student learners.

Overall, the report found that i‑Ready “materials included significant functionality that indirectly supports… students with disabilities.” The report also indicated ways to support these groups of students more directly, which we are in the process of prioritizing for future development. We are committed to meaningful ongoing enhancement and expansion of the program’s accessibility.

 

Diverse student groups experience success with the program largely due to its adaptive nature and program design. All items in i‑Ready Diagnostic are designed to be accessible for most students. In a majority of cases, students who require accommodations (e.g., large print, extra time) will not require additional help during administration.

 

To address the elements of Universal Design as they apply to large-scale assessment (http://www.cehd.umn.edu/nceo/onlinepubs/Synthesis44.html), Curriculum Associates considered several accommodation-related issues in developing i‑Ready. Most fall into the following general categories, each of which i‑Ready addresses:

 

Timing and Flexible Scheduling—The Growth Monitoring assessment may be stopped and started as needed to allow students needing extra time to finish. Growth Monitoring is untimed and can be administered in multiple test sessions.

 

Accommodated Presentation of Material—All i‑Ready items are presented in a large, easily legible format specifically chosen for its readability. i‑Ready currently offers the ability to change the screen size. There is only one item on the screen at a time. Most items for grade levels K–5 mathematics have optional audio support.

 

Setting—Students may need to complete the task in a quiet room to minimize distraction. This can easily be done, as i‑Ready is available on any computer with internet access that meets the technical requirements.

 

Response Accommodation—Students need only be able to control a mouse: move the cursor, point, click, and drag.

Where to Obtain:

Website:
www.curriculumassociates.com

Address:
153 Rangeway Road, N. Billerica, MA 01862

Phone Number:
800-225-0248

Email: info@cainc.com


Access to Technical Support:

Dedicated account manager plus unlimited access to in-house technical support during business hours.

 

i-Ready Growth Monitoring is a brief, computer-delivered, periodic adaptive assessment in mathematics for students in grades K–8, drawing on the same mathematics domains as the i-Ready Diagnostic. Growth Monitoring is part of the i-Ready Diagnostic & Instruction suite and is designed to be used jointly with the i-Ready Diagnostic to allow for progress monitoring throughout the year and to determine whether students are on track for appropriate growth. Growth Monitoring is designed to be administered monthly but may be administered as frequently as every week in which the i-Ready Diagnostic is not administered.

 

Curriculum Associates designed and developed i‑Ready, an evidence-based assessment shown to be valid and reliable, specifically to assess student mastery of state standards and the Common Core State Standards (CCSS). The Growth Monitoring assessment takes approximately 15 minutes and may be administered to all students or to specific groups of students identified as at risk of academic failure.

 

Assessment Format:

  • Individual
  • Computer-administered

 

Administration Time:

  • 15 minutes per student

 

Scoring Time:

  • Scoring is automatic

 

Scoring Method:

  • Calculated automatically

 

Scores Generated:

  • Percentile Score
  • IRT-based Score
  • Developmental Benchmarks
  • Lexile Score
  • On-grade Achievement Level Placements

 

 

Reliability

Grade | 3 | 4 | 5 | 6 | 7 | 8
Rating | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble

Justify the appropriateness of each type of reliability reported:

For the i-Ready Diagnostic, Curriculum Associates reports the IRT-based marginal reliability as well as the standard error of measurement (SEM).

Because the i-Ready Diagnostic is a computer-adaptive assessment without a fixed form, some traditional reliability estimates, such as Cronbach’s alpha, are inappropriate for quantifying the consistency of student scores. The IRT analogue to classical reliability is marginal reliability, which operates on the variance of the theta scores (i.e., proficiency) and the average of the expected error variance. Marginal reliability uses the classical definition of reliability as the proportion of variance in the total observed score that is due to true score under an IRT model (specifically, the i-Ready Diagnostic uses a Rasch model).

In addition to marginal reliability, SEMs are important for quantifying the precision of scores. In an IRT model, SEMs are affected by factors such as how well the data fit the underlying model, student response consistency, student location on the ability continuum, match of items to student ability, and test length. Given the adaptive nature of i-Ready and the wide difficulty range of the item bank, standard errors are expected to be low and very close to the theoretical minimum for tests of similar length.

The theoretical minimum would be reached if each interim estimate of student ability were assessed by an item whose difficulty perfectly matched the ability estimated from the previous items. Theoretical minimums are restricted by the number of items served in the assessment: the more items served, the lower the SEM can be. For mathematics, the minimum SEM for overall scores is 6.00.
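To make the relationship between item targeting, test length, and the SEM concrete, the sketch below computes a Rasch-model SEM from item information. It is an illustrative approximation, not Curriculum Associates’ implementation; the item difficulties, ability value, and test length are hypothetical, and the values are in logits rather than i-Ready scale-score units (the reported minimum of 6.00 is on the i-Ready scale).

```python
import math

def rasch_sem(theta, item_difficulties):
    """SEM of a Rasch ability estimate: 1 / sqrt(total test information).

    Each item contributes information p * (1 - p), which is largest (0.25)
    when the item's difficulty exactly matches the examinee's ability.
    """
    info = 0.0
    for b in item_difficulties:
        p = 1.0 / (1.0 + math.exp(-(theta - b)))  # Rasch probability of a correct response
        info += p * (1.0 - p)
    return 1.0 / math.sqrt(info)

# Hypothetical 30-item test: the SEM shrinks as items are better targeted.
theta = 0.5
perfectly_targeted = [theta] * 30        # every item matches ability: theoretical minimum
poorly_targeted = [theta + 2.0] * 30     # every item two logits too hard
print(round(rasch_sem(theta, perfectly_targeted), 3))  # ~0.365 logits (= 2 / sqrt(30))
print(round(rasch_sem(theta, poorly_targeted), 3))     # ~0.563 logits, noticeably larger
```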

In addition to providing the mean SEM by subject and grade, graphical representations of the conditional standard errors of measurement (CSEM) provide additional evidence of the precision with which i-Ready measures student ability across the operational score scale. In the context of model-based reliability analyses for computer-adaptive tests such as i-Ready, CSEM plots permit test users to judge the relative precision of the estimate. These figures are available from the Center upon request.

 

Describe the sample characteristics for each reliability analysis conducted:

Data for estimating the marginal reliability and SEM came from the August and September 2016 administrations of the i-Ready Diagnostic (reported in Table 4.4 of the i-Ready Diagnostic Technical Manual). All students tested within that window were included; this period was selected because it coincides with most districts’ first administration of the i-Ready Diagnostic.

 

Describe the analysis procedures for each reported type of reliability:

Marginal reliability uses the classical definition of reliability as the proportion of variance in the total observed score that is due to true score. The true-score variance is computed as the observed-score variance minus the error variance. As with a classical reliability coefficient, the marginal reliability estimate increases as the standard error decreases and approaches 1 as the standard error approaches 0. The observed-score variance, the error variance, and the SEM (the square root of the error variance) are obtained through WINSTEPS calibrations. A separate calibration was conducted for each grade.
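The computation just described can be illustrated in a few lines. The sketch below is not the WINSTEPS procedure itself; the score and SEM arrays are hypothetical stand-ins for calibration output, chosen only to show how the pieces combine.

```python
import numpy as np

def marginal_reliability(theta_hat, sem):
    """Marginal reliability: true-score variance over observed-score variance.

    True-score variance is estimated as the observed variance of the ability
    estimates minus the average error variance (the mean of the squared SEMs).
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    sem = np.asarray(sem, dtype=float)
    observed_var = theta_hat.var(ddof=1)   # variance of student scores
    error_var = np.mean(sem ** 2)          # average expected error variance
    return (observed_var - error_var) / observed_var

# Hypothetical grade-level data: reliability approaches 1 as the SEMs shrink.
rng = np.random.default_rng(0)
scores = rng.normal(500, 34, size=10_000)  # made-up scale scores
sems = np.full(scores.shape, 6.4)          # roughly the mean SEM reported below
print(round(marginal_reliability(scores, sems), 2))  # ~0.96
```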

 

Type of Reliability | Age or Grade | n | Coefficient | Confidence Interval
Marginal | Kindergarten | 191,221 | 0.92 | –
Marginal | Grade 1 | 298,476 | 0.93 | –
Marginal | Grade 2 | 334,238 | 0.94 | –
Marginal | Grade 3 | 376,087 | 0.95 | –
Marginal | Grade 4 | 366,044 | 0.96 | –
Marginal | Grade 5 | 366,142 | 0.96 | –
Marginal | Grade 6 | 276,255 | 0.96 | –
Marginal | Grade 7 | 254,216 | 0.97 | –
Marginal | Grade 8 | 238,758 | 0.97 | –
SEM | Kindergarten | 191,221 | 6.48 | –
SEM | Grade 1 | 298,476 | 6.45 | –
SEM | Grade 2 | 334,238 | 6.43 | –
SEM | Grade 3 | 376,087 | 6.43 | –
SEM | Grade 4 | 366,044 | 6.43 | –
SEM | Grade 5 | 366,142 | 6.43 | –
SEM | Grade 6 | 276,255 | 6.43 | –
SEM | Grade 7 | 254,216 | 6.43 | –
SEM | Grade 8 | 238,758 | 6.44 | –

 

 

Validity

Grade | 3 | 4 | 5 | 6 | 7 | 8
Rating | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble

Describe and justify the criterion measures used to demonstrate validity:

The North Carolina End-of-Grade (NC EOG) mathematics tests measure student performance on the grade-level competencies specified by North Carolina Public Schools. Ohio’s State Tests (OST) in mathematics measure the knowledge and skills specified by Ohio’s Learning Standards. The Mississippi Academic Assessment Program (MAAP) measures student achievement in relation to the Mississippi College and Career Readiness Standards for Mathematics.

The Florida Standards Assessments (FSA) in mathematics measure student achievement relative to the education standards adopted by the Florida Department of Education. These criteria are appropriate because they measure the knowledge and skills specified by the educational standards of four different states.

 

Describe the sample characteristics for each validity analysis conducted:

The samples described in this section were selected specifically to be representative of their states in terms of urbanicity; district size; proportion of English language learners and students with disabilities; and proportion of students eligible for free or reduced-price lunch. The North Carolina sample consisted of 38,049 students from 12 school districts and 202 schools across the state of North Carolina. The Ohio sample consisted of 10,315 students from 10 school districts and 62 schools across the state of Ohio. The Mississippi sample consisted of 20,545 students from 13 school districts and 78 schools across the state of Mississippi. The Florida sample consisted of 222,686 students from 13 school districts and 816 schools across the state of Florida.

 

Describe the analysis procedures for each reported type of validity:

For the North Carolina and Ohio studies, correlations were calculated between the given state assessment (administered in spring 2016) and the last i-Ready Diagnostic administration in spring 2016; the state assessments were administered within 1–3 months of the i-Ready Diagnostic. For the Mississippi and Florida studies, correlations were calculated between the given state assessment (administered in spring 2017) and the first i-Ready Diagnostic administration in fall 2016; the state assessments were administered 4–10 months after the i-Ready Diagnostic. Fisher’s r-to-z transformation was used to obtain the 95% confidence interval for the correlation coefficients in all studies.
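For reference, the Fisher r-to-z interval mentioned above can be reproduced with the standard formula. The sketch below is generic, not the authors’ code, and the example values are taken from the first row of the table that follows.

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a Pearson r via Fisher's r-to-z."""
    z = math.atanh(r)                     # transform r to z
    se = 1.0 / math.sqrt(n - 3)           # standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)   # transform the bounds back to r

# Grade 3 NC EOG row below: r = 0.83, n = 7,662.
print(tuple(round(v, 2) for v in fisher_ci(0.83, 7662)))
# ~(0.82, 0.84); small differences from the reported interval reflect rounding of r
```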

Type of Validity | Age or Grade | Test or Criterion | n | Coefficient | Confidence Interval
Concurrent/Construct | Grade 3 | 2016 North Carolina End-of-Grade (NC EOG) Tests | 7,662 | 0.83 | [0.82, 0.83]
Concurrent/Construct | Grade 4 | 2016 NC EOG Tests | 7,686 | 0.83 | [0.82, 0.83]
Concurrent/Construct | Grade 5 | 2016 NC EOG Tests | 7,208 | 0.83 | [0.82, 0.83]
Concurrent/Construct | Grade 6 | 2016 NC EOG Tests | 4,829 | 0.83 | [0.82, 0.84]
Concurrent/Construct | Grade 7 | 2016 NC EOG Tests | 5,578 | 0.82 | [0.81, 0.83]
Concurrent/Construct | Grade 8 | 2016 NC EOG Tests | 5,086 | 0.81 | [0.81, 0.82]
Concurrent/Construct | Grade 3 | 2016 Ohio’s State Tests (OST) | 2,451 | 0.79 | [0.77, 0.80]
Concurrent/Construct | Grade 4 | 2016 OST | 2,166 | 0.81 | [0.79, 0.82]
Concurrent/Construct | Grade 5 | 2016 OST | 2,204 | 0.83 | [0.82, 0.84]
Concurrent/Construct | Grade 6 | 2016 OST | 1,257 | 0.85 | [0.83, 0.86]
Concurrent/Construct | Grade 7 | 2016 OST | 1,141 | 0.81 | [0.79, 0.83]
Concurrent/Construct | Grade 8 | 2016 OST | 1,096 | 0.77 | [0.75, 0.79]
Predictive | Grade 3 | 2017 Mississippi Academic Assessment Program (MAAP) | 3,246 | 0.75 | [0.73, 0.76]
Predictive | Grade 4 | 2017 MAAP | 3,881 | 0.78 | [0.76, 0.79]
Predictive | Grade 5 | 2017 MAAP | 3,665 | 0.80 | [0.79, 0.81]
Predictive | Grade 6 | 2017 MAAP | 3,561 | 0.80 | [0.79, 0.81]
Predictive | Grade 7 | 2017 MAAP | 3,178 | 0.81 | [0.80, 0.82]
Predictive | Grade 8 | 2017 MAAP | 3,014 | 0.81 | [0.80, 0.82]
Predictive | Grade 3 | 2017 Florida Standards Assessments (FSA) | 49,942 | 0.75 | [0.75, 0.75]
Predictive | Grade 4 | 2017 FSA | 45,495 | 0.78 | [0.77, 0.78]
Predictive | Grade 5 | 2017 FSA | 47,866 | 0.80 | [0.80, 0.80]
Predictive | Grade 6 | 2017 FSA | 31,954 | 0.82 | [0.82, 0.82]
Predictive | Grade 7 | 2017 FSA | 28,160 | 0.79 | [0.78, 0.79]
Predictive | Grade 8 | 2017 FSA | 19,269 | 0.69 | [0.68, 0.69]

 

Describe the degree to which the provided data support the validity of the tool:

The data show that the i-Ready Diagnostic is highly correlated with both near-term and future state assessment scores. The inclusion of four different state assessments shows that i-Ready is a general measure of students’ knowledge and skills in mathematics standards across states.

Bias Analysis Conducted

Grade | 3 | 4 | 5 | 6 | 7 | 8
Rating | Yes | Yes | Yes | Yes | Yes | Yes

Have additional analyses been conducted to establish whether the tool is or is not biased against demographic subgroups (e.g., students who vary by race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)?

Bias Analysis Method:

DIF was investigated using WINSTEPS® (Version 3.92) by comparing item difficulty for pairs of demographic subgroups through a combined calibration analysis. This methodology evaluates the interaction of the person-level subgroups with each item, while fixing all other item and person measures to those from the combined calibration. The method used to detect DIF is based on the Mantel-Haenszel procedure (MH), and the work of Linacre & Wright (1989) and Linacre (2012). Typically, the groups of test takers are referred to as “reference” and “focal” groups. For example, for analysis of gender bias, Female test takers are the focal group, and Male test takers are the reference group. More information is provided in section 3.4 of the i‑Ready Technical Manual.
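As background, a bare-bones version of the Mantel-Haenszel DIF statistic is sketched below. It is not the WINSTEPS implementation; the 2x2 tables, the stratification, and the sign convention (positive meaning relatively harder for the focal group) are assumptions of this illustration. The resulting log-odds ratio is on the logit scale used by the ETS thresholds in the table below.

```python
import math

def mh_dif_contrast(strata):
    """DIF size for one item as a Mantel-Haenszel common log-odds ratio (logits).

    `strata` lists one 2x2 table per ability stratum:
    (ref_correct, ref_incorrect, focal_correct, focal_incorrect).
    Positive values mean the item is relatively harder for the focal group.
    """
    num = den = 0.0
    for a, b, c, d in strata:       # a, b = reference group; c, d = focal group
        n = a + b + c + d
        if n:
            num += a * d / n        # reference correct x focal incorrect
            den += b * c / n        # reference incorrect x focal correct
    return math.log(num / den)

# Hypothetical item on which the focal group does worse at every ability level.
strata = [(80, 20, 70, 30), (60, 40, 50, 50), (40, 60, 30, 70)]
print(round(mh_dif_contrast(strata), 2))  # ~0.46 logits
```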

 

Subgroups Included:

The latest large-scale DIF analysis included a random sample (20%) of students from the 2015–2016 i‑Ready operational data. Given the large size of the 2015–2016 i‑Ready student population, it is practical to carry out the calibration analysis with a random sample. The following demographic categories were compared: Female vs. Male; African American and Hispanic vs. Caucasian; English Learner vs. non–English Learner; Special Ed vs. General Ed; Economically Disadvantaged vs. Not Economically Disadvantaged.

 

Bias Analysis Results:

All active items in the item pool for the 2015–2016 school year were included in the DIF analysis; the total number of mathematics items is 3,103. WINSTEPS was used to conduct the calibrations for the DIF analysis by grade. To help interpret the results, the Educational Testing Service (ETS) delta-method criteria (Zwick, Thayer, & Lewis, 1999) were used to categorize DIF, as presented in the table below.

ETS DIF Category | Criterion
A (negligible) | |DIF| < 0.43
B (moderate) | |DIF| ≥ 0.43 and |DIF| < 0.64
C (large) | |DIF| ≥ 0.64

B- or C- suggests DIF against the focal group; B+ or C+ suggests DIF against the reference group.
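A small helper (illustrative only) shows how a DIF contrast on the logit scale maps onto these categories; the sign convention, positive meaning harder for the focal group, is an assumption carried over from the sketch above.

```python
def ets_dif_category(dif_logits):
    """Bucket a DIF contrast (in logits) into the ETS A/B/C categories above."""
    size = abs(dif_logits)
    if size < 0.43:
        return "A (negligible)"
    letter = "B (moderate)" if size < 0.64 else "C (large)"
    # Positive contrast = harder for the focal group, flagged with a minus sign.
    return letter + ("-" if dif_logits > 0 else "+")

print(ets_dif_category(0.20))   # A (negligible)
print(ets_dif_category(0.46))   # B (moderate)-
print(ets_dif_category(-0.71))  # C (large)+
```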

 

The numbers and percentages of items exhibiting DIF for each of the demographic categories are reported in the table below. Most mathematics items show negligible DIF (in most grade-by-subgroup comparisons, at least 90 percent of items fall in category A), and in only a few comparisons do more than 3 percent of items show large DIF (level C).

Grade | ETS DIF Category | Gender N (%) | Ethnicity N (%) | ELL N (%) | Special Education N (%) | Economically Disadvantaged N (%)
K | A | 602 (98.4) | 579 (94.6) | 560 (98.6) | 424 (97.5) | 582 (98.6)
K | B+ | 4 (0.7) | 16 (2.6) | 4 (0.7) | 2 (0.5) | 2 (0.3)
K | B- | 4 (0.7) | 9 (1.5) | 3 (0.5) | 9 (2.1) | 5 (0.8)
K | C+ | 0 (0.0) | 7 (1.1) | 1 (0.2) | 0 (0.0) | 0 (0.0)
K | C- | 2 (0.3) | 1 (0.2) | 0 (0.0) | 0 (0.0) | 1 (0.2)
K | Total | 612 (100.0) | 612 (100.0) | 568 (100.0) | 435 (100.0) | 590 (100.0)
1 | A | 895 (97.8) | 841 (94.0) | 817 (96.5) | 733 (98.1) | 861 (98.6)
1 | B+ | 10 (1.1) | 30 (3.4) | 19 (2.2) | 4 (0.5) | 1 (0.1)
1 | B- | 5 (0.5) | 9 (1.0) | 4 (0.5) | 10 (1.3) | 5 (0.6)
1 | C+ | 4 (0.4) | 12 (1.3) | 7 (0.8) | 0 (0.0) | 5 (0.6)
1 | C- | 1 (0.1) | 3 (0.3) | 0 (0.0) | 0 (0.0) | 1 (0.1)
1 | Total | 915 (100.0) | 895 (100.0) | 847 (100.0) | 747 (100.0) | 873 (100.0)
2 | A | 1,160 (97.3) | 1,062 (93.9) | 1,095 (96.8) | 1,000 (97.9) | 1,134 (99.0)
2 | B+ | 24 (2.0) | 42 (3.7) | 20 (1.8) | 10 (1.0) | 5 (0.4)
2 | B- | 4 (0.3) | 16 (1.4) | 9 (0.8) | 8 (0.8) | 5 (0.4)
2 | C+ | 4 (0.3) | 10 (0.9) | 7 (0.6) | 1 (0.1) | 1 (0.1)
2 | C- | 0 (0.0) | 1 (0.1) | 0 (0.0) | 2 (0.2) | 0 (0.0)
2 | Total | 1,192 (100.0) | 1,131 (100.0) | 1,131 (100.0) | 1,021 (100.0) | 1,145 (100.0)
3 | A | 1,576 (96.2) | 1,434 (91.7) | 1,396 (94.3) | 1,297 (95.9) | 1,509 (97.0)
3 | B+ | 29 (1.8) | 51 (3.3) | 45 (3.0) | 21 (1.6) | 8 (0.5)
3 | B- | 20 (1.2) | 46 (2.9) | 25 (1.7) | 26 (1.9) | 24 (1.5)
3 | C+ | 8 (0.5) | 13 (0.8) | 9 (0.6) | 5 (0.4) | 1 (0.1)
3 | C- | 5 (0.3) | 19 (1.2) | 6 (0.4) | 4 (0.3) | 14 (0.9)
3 | Total | 1,638 (100.0) | 1,563 (100.0) | 1,481 (100.0) | 1,353 (100.0) | 1,556 (100.0)
4 | A | 1,812 (95.1) | 1,610 (90.6) | 1,588 (91.0) | 1,467 (95.0) | 1,759 (96.3)
4 | B+ | 44 (2.3) | 66 (3.7) | 52 (3.0) | 26 (1.7) | 18 (1.0)
4 | B- | 37 (1.9) | 69 (3.9) | 66 (3.8) | 37 (2.4) | 36 (2.0)
4 | C+ | 9 (0.5) | 20 (1.1) | 20 (1.1) | 5 (0.3) | 4 (0.2)
4 | C- | 3 (0.2) | 12 (0.7) | 20 (1.1) | 9 (0.6) | 10 (0.5)
4 | Total | 1,905 (100.0) | 1,777 (100.0) | 1,746 (100.0) | 1,544 (100.0) | 1,827 (100.0)
5 | A | 2,113 (93.7) | 1,779 (89.4) | 1,677 (89.6) | 1,488 (92.6) | 2,039 (94.5)
5 | B+ | 62 (2.7) | 79 (4.0) | 63 (3.4) | 42 (2.6) | 41 (1.9)
5 | B- | 51 (2.3) | 88 (4.4) | 86 (4.6) | 58 (3.6) | 50 (2.3)
5 | C+ | 18 (0.8) | 28 (1.4) | 18 (1.0) | 10 (0.6) | 14 (0.6)
5 | C- | 11 (0.5) | 17 (0.9) | 28 (1.5) | 9 (0.6) | 13 (0.6)
5 | Total | 2,255 (100.0) | 1,991 (100.0) | 1,872 (100.0) | 1,607 (100.0) | 2,157 (100.0)
6 | A | 2,169 (91.3) | 1,717 (89.6) | 1,483 (86.3) | 1,420 (88.8) | 2,081 (93.5)
6 | B+ | 73 (3.1) | 95 (5.0) | 70 (4.1) | 53 (3.3) | 47 (2.1)
6 | B- | 84 (3.5) | 70 (3.7) | 118 (6.9) | 76 (4.8) | 58 (2.6)
6 | C+ | 28 (1.2) | 20 (1.0) | 23 (1.3) | 20 (1.3) | 13 (0.6)
6 | C- | 21 (0.9) | 15 (0.8) | 25 (1.5) | 30 (1.9) | 27 (1.2)
6 | Total | 2,375 (100.0) | 1,917 (100.0) | 1,719 (100.0) | 1,599 (100.0) | 2,226 (100.0)
7 | A | 2,296 (92.5) | 1,796 (85.2) | 1,474 (84.5) | 1,359 (88.3) | 2,158 (93.5)
7 | B+ | 77 (3.1) | 126 (6.0) | 77 (4.4) | 63 (4.1) | 48 (2.1)
7 | B- | 76 (3.1) | 123 (5.8) | 114 (6.5) | 75 (4.9) | 67 (2.9)
7 | C+ | 20 (0.8) | 20 (0.9) | 29 (1.7) | 12 (0.8) | 19 (0.8)
7 | C- | 12 (0.5) | 43 (2.0) | 51 (2.9) | 30 (1.9) | 16 (0.7)
7 | Total | 2,481 (100.0) | 2,108 (100.0) | 1,745 (100.0) | 1,539 (100.0) | 2,308 (100.0)
8 | A | 2,289 (92.1) | 1,804 (86.6) | 1,348 (81.7) | 1,326 (88.1) | 2,182 (93.5)
8 | B+ | 108 (4.3) | 102 (4.9) | 86 (5.2) | 59 (3.9) | 52 (2.2)
8 | B- | 54 (2.2) | 101 (4.8) | 114 (6.9) | 76 (5.0) | 46 (2.0)
8 | C+ | 20 (0.8) | 26 (1.2) | 44 (2.7) | 13 (0.9) | 30 (1.3)
8 | C- | 14 (0.6) | 51 (2.4) | 57 (3.5) | 31 (2.1) | 24 (1.0)
8 | Total | 2,485 (100.0) | 2,084 (100.0) | 1,649 (100.0) | 1,505 (100.0) | 2,334 (100.0)

 

Sensitivity: Reliability of the Slope

Grade | 3 | 4 | 5 | 6 | 7 | 8
Rating | Dash | Dash | Dash | Dash | Dash | Dash

Describe the sample used for analyses, including size and characteristics:

No qualifying evidence provided.

 

Describe the frequency of measurement:

No qualifying evidence provided.

 

Describe reliability of the slope analyses conducted with a population of students in need of intensive intervention:

No qualifying evidence provided.

Sensitivity: Validity of the Slope

Grade | 3 | 4 | 5 | 6 | 7 | 8
Rating | Dash | Dash | Dash | Dash | Dash | Dash

Describe and justify the criterion measures used to demonstrate validity:

No qualifying evidence provided.

 

Describe the sample used for analyses, including size and characteristics:

No qualifying evidence provided.

 

Describe predictive validity of the slope of improvement analyses conducted with a population of students in need of intensive intervention:

No qualifying evidence provided.

 

Describe the degree to which the provided data support the validity of the tool:

No qualifying evidence provided.

Alternate Forms

Grade | 3 | 4 | 5 | 6 | 7 | 8
Rating | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble

Describe the sample for these analyses, including size and characteristics:

The i-Ready assessment forms are assembled automatically by Curriculum Associates’ computer-adaptive testing (CAT) algorithm, subject to objective content and other constraints described in section 2.1.3 in Chapter 2 of the attached i-Ready Technical Manual. As such, the sample size per form that would apply to linear (i.e., non-adaptive) assessments does not directly apply to the i-Ready Diagnostic.

Note that many analyses Curriculum Associates conducts (e.g., to estimate growth targets) are based on normative samples, which, for the 2015–2016 school year, included 3.9 million i-Ready Diagnostic assessments taken by more than one million students from over 4,000 schools. The demographics of the normative sample at each grade closely match those of the national student population.

Tables 7.3 and 7.4 of the Technical Manual present the sample sizes for each normative sample and the demographics of the samples compared with the latest population target, as reported by the National Center for Education Statistics.

 

Evidence that alternate forms are of equal and controlled difficulty or, if IRT based, evidence of item or ability invariance:

Section 2.1.3 in Chapter 2 of the attached i‑Ready Technical Manual describes the adaptive nature of the tests and how the item selection process works. The i‑Ready Growth Monitoring assessments are a general outcome measure of student ability and measure a subset of the skills tested on the Diagnostic. Items on Growth Monitoring are drawn from the same domain item pools as the Diagnostic, and test items are served using the same IRT ability estimate and item selection logic.

Test developers often want to show that the items in their measure are invariant, meaning the items measure both groups similarly. To illustrate the property of item invariance across i-Ready test takers in need of intensive intervention (i.e., below the national norming sample’s 30th percentile rank in overall mathematics scale score) and those without such need (i.e., at or above the 30th percentile rank), a special set of item calibrations was prepared. Correlations between independent item calibrations for the subgroups of students below and at or above the 30th percentile rank were computed to demonstrate the extent to which i-Ready parameter estimates are appropriate for use with both groups.

To demonstrate comparable item parameter estimates, correlations between the item difficulty estimates from the below-30th-percentile and at-or-above-30th-percentile calibrations were provided, along with corresponding confidence intervals constructed using Fisher’s r-to-z transformation (Fisher, R. A., 1915, “Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population,” Biometrika, 10(4), 507–521). These correlations and confidence intervals serve as a measure of the consistency between the item difficulty estimates.

Student response data used for the item invariance analyses came from the August and September 2017 administrations of the i-Ready Diagnostic. Students tested within this timeframe were subject to the same inclusion rules that Curriculum Associates uses for new item calibration (i.e., embedded field tests). This administration window was selected because it coincides with most districts’ first administration of the i-Ready Diagnostic. To ensure appropriately precise item parameter estimates, the sample was restricted to items answered by at least 300 students from each group (those below and those at or above the 30th percentile rank). Subgroup sample sizes and the counts of items included, by grade, for mathematics are presented in the table below.
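The invariance check described above reduces to correlating two vectors of item difficulty estimates after applying the 300-student inclusion rule. The sketch below illustrates that step with entirely hypothetical data; it is not the analysis code behind the table that follows.

```python
import numpy as np

def invariance_correlation(b_below, b_above, n_below, n_above, min_n=300):
    """Correlate item difficulties calibrated separately in the two groups.

    Only items answered by at least `min_n` students in each group are kept,
    mirroring the inclusion rule described above.
    """
    keep = (np.asarray(n_below) >= min_n) & (np.asarray(n_above) >= min_n)
    r = np.corrcoef(np.asarray(b_below)[keep], np.asarray(b_above)[keep])[0, 1]
    return r, int(keep.sum())

# Hypothetical pool of 500 items, each calibrated once per subgroup.
rng = np.random.default_rng(1)
true_b = rng.normal(0, 1, 500)
b_below = true_b + rng.normal(0, 0.3, 500)   # noisy estimates, below-30th group
b_above = true_b + rng.normal(0, 0.3, 500)   # noisy estimates, at-or-above group
counts = rng.integers(100, 2000, size=(2, 500))
r, kept = invariance_correlation(b_below, b_above, counts[0], counts[1])
print(kept, round(r, 3))   # number of items retained and their difficulty correlation
```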

Type of Analysis | Age or Grade | Students Below 30th Percentile | Students At or Above 30th Percentile | # Items | Correlation Coefficient | Confidence Interval
Item Invariance | K | 75,436 | 136,444 | 227 | 0.886 | [0.854, 0.911]
Item Invariance | 1 | 106,874 | 263,264 | 383 | 0.832 | [0.798, 0.860]
Item Invariance | 2 | 146,696 | 277,506 | 470 | 0.861 | [0.836, 0.883]
Item Invariance | 3 | 167,020 | 315,559 | 467 | 0.849 | [0.821, 0.872]
Item Invariance | 4 | 160,444 | 338,955 | 540 | 0.826 | [0.798, 0.851]
Item Invariance | 5 | 163,664 | 328,824 | 603 | 0.825 | [0.798, 0.849]
Item Invariance | 6 | 146,499 | 247,250 | 623 | 0.797 | [0.767, 0.824]
Item Invariance | 7 | 121,737 | 215,261 | 655 | 0.788 | [0.757, 0.815]
Item Invariance | 8 | 116,054 | 185,534 | 679 | 0.787 | [0.756, 0.814]

Note: Counts of students include all measurement occasions and hence may include the same unique student tested more than once.

 

The i‑Ready Diagnostic and Growth Monitoring tests are computer adaptive, meaning the items presented to each student vary depending upon how the student has responded to the previous items. Upon completion of an item randomly selected from a set of five items around a predetermined starting difficulty level, interim ability estimates are updated, and the next item is chosen relative to the new interim ability estimate. Thus, the items can better target the estimated student ability, and more information is obtained from each item presented.
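A highly simplified sketch of that select-administer-update cycle is shown below. The item bank, the starting difficulty, the crude estimate update, and the fixed test length are all stand-ins for illustration; the actual i-Ready algorithm and its constraints are described in the Technical Manual.

```python
import math
import random

def simulate_cat(bank, true_theta, start_difficulty=0.0, n_items=20):
    """Toy adaptive test: serve items near the current ability estimate."""
    theta = start_difficulty                 # interim ability estimate
    available = list(bank)
    for position in range(n_items):
        available.sort(key=lambda b: abs(b - theta))
        # First item: random pick from the five items nearest the starting difficulty;
        # later items: the item nearest the current interim estimate.
        item = random.choice(available[:5]) if position == 0 else available[0]
        available.remove(item)
        p = 1.0 / (1.0 + math.exp(-(true_theta - item)))  # Rasch response probability
        correct = random.random() < p
        # Crude interim update toward the evidence (stands in for a real
        # maximum-likelihood or Bayesian ability update).
        theta += 0.6 * ((1.0 if correct else 0.0) - p)
    return theta

bank = [i / 10 for i in range(-40, 41)]               # hypothetical difficulties, -4 to 4 logits
print(round(simulate_cat(bank, true_theta=1.2), 2))   # estimate drifts toward 1.2
```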

 

Number of alternate forms of equal and controlled difficulty:

Virtually infinite. Because i‑Ready is a computer-adaptive test, all administrations are equivalent forms. Each student receives an individualized testing experience in which test items are served based on responses to previous questions. In essence, this provides a virtually infinite number of test forms, because individual student testing experiences are largely unique. For grades 1–8, typical item pool sizes are 1,670; 1,864; 2,087; 2,311; 2,554; 2,665; 2,794; and 2,913, respectively. Students who perform at an extremely high level may be served items from grade levels above the grade-level restriction.

Decision Rules: Setting and Revising Goals

Grade | 3 | 4 | 5 | 6 | 7 | 8
Rating | Dash | Dash | Dash | Dash | Dash | Dash

Specification of validated decision rules for when goals should be set or revised:

No qualifying evidence provided.

 

Evidentiary basis for these rules:

No qualifying evidence provided.

Decision Rules: Changing Instruction

Grade | 3 | 4 | 5 | 6 | 7 | 8
Rating | Dash | Dash | Dash | Dash | Dash | Dash

Specification of validated decision rules for when changes to instruction should be made:

No qualifying evidence provided.

 

Evidentiary basis for these rules:

No qualifying evidence provided.

Administration Format

Grade | 3 | 4 | 5 | 6 | 7 | 8
Data | Individual; Computer-administered | Individual; Computer-administered | Individual; Computer-administered | Individual; Computer-administered | Individual; Computer-administered | Individual; Computer-administered

Administration & Scoring Time

Grade | 3 | 4 | 5 | 6 | 7 | 8
Data | 12 minutes | 12 minutes | 12 minutes | 12 minutes | 12 minutes | 12 minutes

Scoring Format

Grade | 3 | 4 | 5 | 6 | 7 | 8
Data | Computer-scored | Computer-scored | Computer-scored | Computer-scored | Computer-scored | Computer-scored

ROI & EOY Benchmarks

Grade | 3 | 4 | 5 | 6 | 7 | 8
Data | ROI & EOY Benchmarks Available | ROI & EOY Benchmarks Available | ROI & EOY Benchmarks Available | ROI & EOY Benchmarks Available | ROI & EOY Benchmarks Available | ROI & EOY Benchmarks Available

Specify the minimum acceptable rate of growth/improvement:

For grades K–8, the tool's mathematics growth targets over a 30-week period are 29, 28, 26, 26, 23, 18, 13, 11, and 10, respectively.
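Assuming these targets are expressed in i-Ready scale-score points, they translate directly into a minimum weekly rate of improvement (ROI); for example, the grade 3 target of 26 points over 30 weeks corresponds to a slope of roughly 26 ÷ 30 ≈ 0.9 points per week.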

     

Specify the benchmarks for minimum acceptable end-of-year performance:

This information is provided directly to districts and schools as part of i-Ready’s support process.