Spring Math

Mathematics

Cost

Technology, Human Resources, and Accommodations for Special Needs

Service and Support

Purpose and Other Implementation Information

Usage and Reporting

Initial Cost:

$9-$15 per student depending upon enrollment.

 

Replacement Cost:

$9-$15 per student depending upon enrollment.

Annual license renewal fee subject to change.

 

Included in Cost:

Spring Math provides extensive implementation support at no additional cost through a support portal to which all users have access. Support materials include how-to videos, brief how-to documents, access to all assessments and acquisition lesson plans for 130 skills, and live and archived webinars. Sites that wish to purchase additional coaching support can do so through our network of trained coaches, who have expertise in RtI/MTSS leadership and specific training in Spring Math; see http://www.springmath.com/training-support.

 

Sites must have one computer per teacher, an internet connection, and the ability to print in black and white.

 

Technology Requirements:

  • Computer or tablet
  • Internet connection
  • Printer

 

Training Requirements:

  • Less than 1 hour of training

 

Qualified Administrators:

  • Trained educators

 

Accommodations:

Assessments are standardized but very brief in duration. If a student requires accommodations, the intervention allows for oral and written responding; the use of individual rewards for “beating the last best score”; a range of concrete, representational, and abstract understanding activities; and individualized modeling with immediate corrective feedback.

 

Where to Obtain:

Website: www.springmath.com

Address: 2340 Energy Park Drive, Suite 200, St. Paul, MN 55108

Phone: (651) 999-6100                         
Email: sales@springmath.com


Access to Technical Support:

Support materials, organized by user role (teacher, coach, data administrator), are provided via the online support portal under the drop-down menu below the log-in icon. Spring Math provides free webinars throughout the year for users and hosts free training institutes at least annually. Spring Math also provides a systematic onboarding process to help new users get underway. If users encounter technical difficulties, they can submit a request for help directly from their account, which generates a support ticket with the tech support team. Support tickets are monitored during business hours and answered the same day.

 

Spring Math is a comprehensive RtI system that includes screening, progress monitoring, class-wide and individual math intervention, and implementation and decision-making support. Assessments are generated within the tool when needed, and Spring Math uses student data to customize class-wide and individual intervention plans. Clear, easy-to-understand graphs and reports are provided within the teacher and coach dashboards. Spring Math uses gated screening: CBMs are administered to the class as a whole, followed by class-wide intervention, to identify students in need of intensive intervention.

 

Spring Math assesses 130 skills in Grades K-8 and addresses gaps in learning for Grades K-12. The skills offer comprehensive but strategic coverage of the Common Core State Standards. Spring Math assesses mastery of number operations, pre-algebraic thinking, and mathematical logic. It also measures understanding of “tool skills,” which provide the foundation a child needs to question, speculate, reason, solve, and explain real-world problems. Spring Math emphasizes tool skills across grades with grade-appropriate techniques and materials.

Assessment Format:

  • Performance measure

 

Administration Time:

  • K: 4 minutes (per student or group)
  • Grades 1-6: 6-8 minutes (per student or group)

 

Scoring Time:

  • 1 minute per student
  • 20-30 minutes per group

 

Scoring Method:

  • Calculated manually

 

Scores Generated:

  • Raw score

 

 

Classification Accuracy

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Criterion 1 Fall | Half-filled bubble | Half-filled bubble | Half-filled bubble | Half-filled bubble | Full bubble |
| Criterion 1 Winter | Half-filled bubble | Full bubble | Half-filled bubble | Empty bubble | Full bubble |
| Criterion 1 Spring | Empty bubble | Full bubble | Half-filled bubble | Half-filled bubble | dash |
| Criterion 2 Fall | dash | dash | dash | dash | dash |
| Criterion 2 Winter | dash | dash | dash | dash | dash |
| Criterion 2 Spring | dash | dash | dash | dash | dash |

Primary Sample

 

Criterion 1, Fall

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Criterion | Winter Composite Score | Winter Composite Score | Year-End State Mathematics Assessment in AZ | Year-End State Mathematics Assessment in AZ | Year-End State Mathematics Assessment in AZ |
| Cut points: Percentile rank on criterion measure | 20th | 20th | 20th | 20th | 20th |
| Cut points: Performance score (numeric) on criterion measure | 17 | 59 | 3515 | 3582 | 3633 |
| Cut points: Corresponding performance score (numeric) on screener measure | 12 | 39 | 24 | 35 | 13 |
| Base rate in the sample for children requiring intensive intervention | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 |
| False Positive Rate | 0.17 | 0.15 | 0.19 | 0.21 | 0.18 |
| False Negative Rate | 0.30 | 0.29 | 0.18 | 0.27 | 0.23 |
| Sensitivity | 0.83 | 0.71 | 0.82 | 0.73 | 0.78 |
| Specificity | 0.70 | 0.85 | 0.81 | 0.79 | 0.82 |
| Positive Predictive Power | 0.56 | 0.58 | 0.39 | 0.42 | 0.51 |
| Negative Predictive Power | 0.90 | 0.91 | 0.97 | 0.94 | 0.94 |
| Overall Classification Rate | 0.80 | 0.82 | 0.81 | 0.78 | 0.81 |
| Area Under the Curve (AUC) | 0.85 | 0.85 | 0.86 | 0.77 | 0.91 |
| AUC 95% Confidence Interval Lower Bound | 0.76 | 0.76 | 0.73 | 0.63 | 0.87 |
| AUC 95% Confidence Interval Upper Bound | 0.94 | 0.93 | 0.99 | 0.91 | 0.96 |
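For readers less familiar with these statistics, the minimal sketch below shows how the 2x2 classification metrics reported in these tables relate to one another. The counts are hypothetical (chosen to give a 0.20 base rate), not the study's raw data.

```python
# Minimal sketch: how the 2x2 screening metrics reported above relate.
# tp/fp/fn/tn are hypothetical counts, not Spring Math's data.

def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Screening statistics from a 2x2 table of decisions vs. true status."""
    n = tp + fp + fn + tn
    return {
        "base_rate": (tp + fn) / n,              # proportion truly at risk
        "false_positive_rate": fp / (fp + tn),   # 1 - specificity
        "false_negative_rate": fn / (tp + fn),   # 1 - sensitivity
        "sensitivity": tp / (tp + fn),           # at-risk students flagged
        "specificity": tn / (tn + fp),           # not-at-risk students cleared
        "positive_predictive_power": tp / (tp + fp),
        "negative_predictive_power": tn / (tn + fn),
        "overall_classification_rate": (tp + tn) / n,
    }

# 200 hypothetical students, 40 of them truly at risk (base rate 0.20):
print(classification_metrics(tp=33, fp=26, fn=7, tn=134))
```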

 

Criterion 1, Winter

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Criterion | Spring Composite Score | Spring Composite Score | Year-End State Mathematics Assessment in AZ | Year-End State Mathematics Assessment in AZ | Year-End State Mathematics Assessment in AZ |
| Cut points: Percentile rank on criterion measure | 20th | 20th | 20th | 20th | 20th |
| Cut points: Performance score (numeric) on criterion measure | 29 | 37 | 3515 | 3582 | 3633 |
| Cut points: Corresponding performance score (numeric) on screener measure | 21 | 26 | 31 | 31 | 26 |
| Base rate in the sample for children requiring intensive intervention | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 |
| False Positive Rate | 0.18 | 0.19 | 0.28 | 0.47 | 0.19 |
| False Negative Rate | 0.24 | 0.14 | 0.28 | 0.26 | 0.19 |
| Sensitivity | 0.76 | 0.86 | 0.72 | 0.74 | 0.81 |
| Specificity | 0.82 | 0.81 | 0.72 | 0.53 | 0.81 |
| Positive Predictive Power | 0.48 | 0.55 | 0.37 | 0.29 | 0.51 |
| Negative Predictive Power | 0.94 | 0.94 | 0.92 | 0.89 | 0.95 |
| Overall Classification Rate | 0.81 | 0.82 | 0.72 | 0.57 | 0.81 |
| Area Under the Curve (AUC) | 0.87 | 0.91 | 0.79 | 0.74 | 0.91 |
| AUC 95% Confidence Interval Lower Bound | 0.79 | 0.86 | 0.67 | 0.62 | 0.87 |
| AUC 95% Confidence Interval Upper Bound | 0.95 | 0.97 | 0.91 | 0.86 | 0.95 |

 

Criterion 1, Spring

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Criterion | Spring Composite Score | Spring Composite Score | Year-End State Mathematics Assessment in AZ | Year-End State Mathematics Assessment in AZ | Not Provided |
| Cut points: Percentile rank on criterion measure | 20th | 20th | 20th | 20th | Not Provided |
| Cut points: Performance score (numeric) on criterion measure | 29 | 37 | 3515 | 3582 | Not Provided |
| Cut points: Corresponding performance score (numeric) on screener measure | -0.06 | -0.08 | -0.11 | -0.05 | Not Provided |
| Base rate in the sample for children requiring intensive intervention | 0.20 | 0.20 | 0.20 | 0.20 | Not Provided |
| False Positive Rate | 0.35 | 0.14 | 0.20 | 0.14 | Not Provided |
| False Negative Rate | 0.18 | 0.14 | 0.20 | 0.23 | Not Provided |
| Sensitivity | 0.82 | 0.90 | 0.80 | 0.77 | Not Provided |
| Specificity | 0.65 | 0.86 | 0.80 | 0.86 | Not Provided |
| Positive Predictive Power | 0.33 | 0.63 | 0.50 | 0.61 | Not Provided |
| Negative Predictive Power | 0.94 | 0.96 | 0.94 | 0.93 | Not Provided |
| Overall Classification Rate | 0.68 | 0.87 | 0.80 | 0.84 | Not Provided |
| Area Under the Curve (AUC) | 0.79 | 0.95 | 0.86 | 0.85 | Not Provided |
| AUC 95% Confidence Interval Lower Bound | 0.67 | 0.90 | 0.78 | 0.78 | Not Provided |
| AUC 95% Confidence Interval Upper Bound | 0.91 | 0.99 | 0.96 | 0.93 | Not Provided |

 

Reliability

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Rating | Empty bubble | Empty bubble | Empty bubble | Empty bubble | Empty bubble |
  1. Justification for each type of reliability reported, given the type and purpose of the tool: Probes are generated following a set of programmed parameters that were built and tested in a development phase. To determine measure equivalence, problem sets were generated, and each problem within a problem set was scored for possible digits correct. The digits correct metric comes from the curriculum-based measurement literature (Deno & Mirkin, 1977) and allows for sensitive measurement of child responding. Typically, each digit that appears in the correct place-value position to arrive at the correct final answer is counted as a digit correct. Generally, digits correct are counted for all work that occurs below the problem (in the answer) but not for work that may appear above the problem, for example, when composing or decomposing hundreds or tens while regrouping.

A standard response format was selected for all measures, reflecting the relevant response steps needed to arrive at a correct and complete answer. Potential digits correct was the unit of analysis used to test the equivalence of generated problem sets. For example, in scoring adding and subtracting fractions with unlike denominators, we counted all digits correct in generating fractions with equivalent denominators, then the digits correct in combining or taking away the fraction quantity, and finally the digits correct in simplifying the final fraction. The number of problems generated depended upon the task difficulty of the measure. If the measure assessed an easier skill (defined as having fewer potential digits correct), then more problems were generated than for harder skills, for which the possible digits correct scores were much higher. Problems generated for equivalence testing ranged from 80 to 480 problems per measure.

A total of 46,022 problems were generated and scored for possible digits correct to test the equivalence of generated problem sets. Problem sets ranged from 8-48 problems. Most problem sets contained 30 problems. For each round of testing, 10 problem sets were generated per measure. The mean possible digits correct per problem was computed for each problem set for each measure. The standard deviation of possible digits correct across the ten generated problem sets was computed and was required to be less than 10% of the mean possible digits correct to establish equivalence.

Spring Math has 130 measures. Thirty-eight measures were not tested for equivalence because there was no variation in possible digits correct per problem type; these measures all had single-digit answers and included measures like Sums to 6, Subtraction 0-5, and Number Names. Eighty-three measures met equivalence standards on the first round of testing, with a standard deviation of possible digits correct per problem per problem set that was on average 4% of the mean possible digits correct per problem. Seven measures required revision and a second round of testing: Mixed Fraction Operations, Multiply Fractions, Convert Improper to Mixed, Solve 2-Step Equations, Solve Equations with Percentages, Convert Fractions to Decimals, and Collect Like Terms. After revision and re-testing, the standard deviation was again on average 4% of the mean. One measure, Order of Operations, required a third round of revision and re-testing; on the third round, it met the equivalence criterion, with the standard deviation representing on average 10% of the mean possible digits correct per problem across generated problem sets. In this section, we report the results from a year-long study in Louisiana during which screening measures were generated and administered to classes of children with a 1-week interval between assessment occasions. Measures were administered by researchers with rigorous integrity and inter-rater reliability controls in place.
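A minimal sketch of the equivalence criterion described above: across the ten generated problem sets for a measure, the standard deviation of the mean possible digits correct per problem must be less than 10% of the mean. The per-set means below are hypothetical, not the study's data.

```python
# Minimal sketch of the 10%-of-the-mean equivalence criterion described above.
import statistics

def problem_sets_equivalent(set_means: list[float]) -> bool:
    """set_means: mean possible digits correct per problem, one value per
    generated problem set (ten sets were generated per measure per round)."""
    grand_mean = statistics.mean(set_means)
    sd = statistics.stdev(set_means)
    return sd < 0.10 * grand_mean  # SD must stay under 10% of the mean

# Ten generated sets for one hypothetical measure (SD is ~3% of the mean):
print(problem_sets_equivalent([4.1, 4.3, 4.0, 4.2, 4.4, 4.1, 4.2, 4.3, 4.0, 4.2]))
```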

 

  2. Description of the sample(s), including size and characteristics, for each reliability analysis conducted: Reliability data were collected in three schools in southeastern Louisiana with appropriate procedural controls. Researchers administered the screening measures for the reliability study following an administration script. On 25% of testing occasions, balanced across times (time 1 and time 2), grades, and classrooms, a second trained observer documented the percentage of correctly completed steps during screening administration. Average integrity (percentage of steps correctly conducted) was 99.36%, with a less-than-perfect integrity score on only 4 occasions (one administrator missed the sentence in the protocol telling students not to skip around, one exceeded the 2-minute timing interval for one measure by 5 seconds, and on two occasions students turned their papers over before being told to do so). Demographic data for the reliability sample are provided in the table below.

 

| Sample | n | Ethnicity | Sex | Percent Students with Disabilities |
|---|---|---|---|---|
| Kindergarten Fall | 86 | 76% White, 20% African American, 5% Hispanic | 50% Male | 17% |
| Kindergarten Winter | 79 | 71% White, 25% African American, 4% Hispanic | 51% Male | 14% |
| Grade 1 Fall | 79 | 67% White, 27% Black, 5% Hispanic, 1% Native American | 49% Male | 22% |
| Grade 1 Winter | 75 | 75% White, 19% African American, 5% Hispanic, 1% Native American | 45% Male | 19% |
| Grade 3 Fall | 93 | 68% White, 29% African American, 3% Hispanic | 51% Male | 8% |
| Grade 3 Winter | 91 | 66% White, 30% African American, 4% Hispanic | 43% Male | 11% |
| Grade 5 Fall | 48 | 98% White, 2% African American | 62% Male | 4% |
| Grade 5 Winter | 48 | 93% White, 4% African American, 3% Hispanic | 62% Male | 4% |
| Grade 7 Fall | 41 | 98% White, 2% Hispanic | 61% Male | 15% |
| Grade 7 Winter | 38 | 100% White | 63% Male | 13% |

 

  3. Description of the analysis procedures for each reported type of reliability: Spring Math uses 3-4 timed measures per screening occasion. The initial risk decision, and the subsequent risk decision during class-wide intervention, is based on the set of measures as a whole. For these reliability analyses, we combined the measures at each testing occasion (fall and winter) to yield a composite score. We report the Pearson r correlation coefficient (with 95% CI) between time 1 and time 2 composite scores for the generated (i.e., alternate-form) measures administered 1 week apart. Because the Spring Math screening measures are rigorous by design (i.e., sensitivity is emphasized), a restricted range of scores was not unexpected and certainly impacted the individual CBM reliability values. To test that possibility, we examined the Pearson r values for measures for which we had two consecutive weeks of class-wide intervention scores for classes of students within a school in Arizona where we collected validity data. Pearson r values were much higher once score ranges increased with only one week of intervention. For example, in Grade 3, the time 1 to time 2 correlation for Division 0-9 increased from r = .61 to r = .92, and Multiply 1-digit by 2-3 digits with and without regrouping increased from r = .45 to r = .91. This pattern was replicated across grade levels, with stronger correlation values in the setting where range restriction was not an issue. We did not provide these data in the table below because not all of the subtests were accessed during the class-wide intervention within the school (i.e., some subtests would have had missing data).
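For reference, a minimal sketch of the statistic reported in the table below: a Pearson r between time 1 and time 2 composites with a 95% confidence interval. The Fisher z construction is a standard choice but an assumption here, since the report does not state which interval method was used; the data are placeholders, and SciPy is assumed.

```python
# Minimal sketch: Pearson r between time-1 and time-2 composite scores,
# with a 95% CI via the Fisher z transformation (an assumed, standard method).
import math
from scipy import stats

def pearson_with_ci(x, y, alpha=0.05):
    r, _ = stats.pearsonr(x, y)
    z = math.atanh(r)                      # Fisher z transform of r
    se = 1.0 / math.sqrt(len(x) - 3)       # approximate standard error of z
    crit = stats.norm.ppf(1 - alpha / 2)   # ~1.96 for a 95% interval
    return r, math.tanh(z - crit * se), math.tanh(z + crit * se)

# Hypothetical composite scores for eight students at each occasion:
t1 = [12, 18, 25, 31, 40, 22, 15, 28]
t2 = [14, 17, 27, 30, 38, 25, 13, 26]
print(pearson_with_ci(t1, t2))
```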

 

  4. Reliability of performance level score (e.g., model-based, internal consistency, inter-rater reliability).

| Type of Reliability | Age or Grade | n | Coefficient | 95% CI: Lower Bound | 95% CI: Upper Bound |
|---|---|---|---|---|---|
| Alternate Form | K Fall | 86 | 0.79 | 0.69 | 0.86 |
| Alternate Form | K Winter | 79 | 0.80 | 0.70 | 0.86 |
| Alternate Form | Grade 1 Fall | 79 | 0.85 | 0.78 | 0.90 |
| Alternate Form | Grade 1 Winter | 75 | 0.86 | 0.78 | 0.91 |
| Alternate Form | Grade 3 Fall | 93 | 0.82 | 0.74 | 0.88 |
| Alternate Form | Grade 3 Winter | 91 | 0.84 | 0.77 | 0.89 |
| Alternate Form | Grade 5 Fall | 48 | 0.77 | 0.62 | 0.86 |
| Alternate Form | Grade 5 Winter | 45 | 0.87 | 0.77 | 0.93 |
| Alternate Form | Grade 7 Fall | 41 | 0.80 | 0.66 | 0.89 |
| Alternate Form | Grade 7 Winter | 38 | 0.88 | 0.78 | 0.94 |

 

Disaggregated Reliability

The following disaggregated reliability data are provided for context and did not factor into the Reliability rating.

| Type of Reliability | Subgroup | Age or Grade | n | Coefficient | 95% CI: Lower Bound | 95% CI: Upper Bound |
|---|---|---|---|---|---|---|
| None | | | | | | |

Validity

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Rating | Empty bubble | Empty bubble | Empty bubble | Empty bubble | Empty bubble |
  1. Description of each criterion measure used and explanation as to why each measure is appropriate, given the type and purpose of the tool: The criterion measure in Grades 3, 5, and 7 was AzMERIT, the statewide achievement test in Arizona. In Grades K and 1, the criterion was the subsequent seasonal screening composite (winter or spring), as shown in the table below.

 

  2. Description of the sample(s), including size and characteristics, for each validity analysis conducted: Sample sizes are included in the table. The demographics are similar to those described in the reliability section.

 

  3. Description of the analysis procedures for each reported type of validity: We report the Pearson r correlation for theoretically anticipated convergent measures and theoretically anticipated discriminant measures.

 

  4. Validity for the performance level score (e.g., concurrent, predictive, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.

| Type of Validity | Age or Grade | Test or Criterion | n | Coefficient | 95% CI: Lower Bound | 95% CI: Upper Bound |
|---|---|---|---|---|---|---|
| Predictive Convergent | Fall K | Winter Composite | 85 | 0.64 | 0.49 | 0.75 |
| Predictive Convergent | Winter K | Spring Composite | 97 | 0.72 | 0.61 | 0.81 |
| Predictive Convergent | Fall Grade 1 | Winter Composite | 94 | 0.65 | 0.52 | 0.76 |
| Predictive Convergent | Winter Grade 1 | Spring Composite | 102 | 0.80 | 0.72 | 0.86 |
| Predictive Convergent | Fall Grade 3 | State Year-End Math Score | 86 | 0.65 | 0.51 | 0.76 |
| Predictive Discriminant | Fall Grade 3 | State Year-End Reading Score | 86 | 0.57 | 0.41 | 0.70 |
| Predictive Convergent | Winter Grade 3 | State Year-End Math Score | 96 | 0.58 | 0.43 | 0.70 |
| Predictive Discriminant | Winter Grade 3 | State Year-End Reading Score | 96 | 0.52 | 0.36 | 0.65 |
| Predictive Convergent | Fall Grade 5 | State Year-End Math Score | 88 | 0.66 | 0.53 | 0.77 |
| Predictive Discriminant | Fall Grade 5 | State Year-End Reading Score | 88 | 0.38 | 0.19 | 0.55 |
| Predictive Convergent | Winter Grade 5 | State Year-End Math Score | 94 | 0.63 | 0.49 | 0.74 |
| Predictive Discriminant | Winter Grade 5 | State Year-End Reading Score | 94 | 0.38 | 0.19 | 0.54 |
| Predictive Convergent | Fall Grade 7 | State Year-End Math Score | 48 | 0.73 | 0.56 | 0.84 |
| Predictive Discriminant | Fall Grade 7 | State Year-End Reading Score | 48 | 0.59 | 0.36 | 0.75 |
| Predictive Convergent | Winter Grade 7 | State Year-End Math Score | 49 | 0.67 | 0.48 | 0.80 |
| Predictive Discriminant | Winter Grade 7 | State Year-End Reading Score | 49 | 0.57 | 0.34 | 0.73 |

 

  5. Results for other forms of validity (e.g., factor analysis) not conducive to the table format: Not provided.

 

  6. Describe the degree to which the provided data support the validity of the tool: We see a pattern of correlations that supports multi-trait, multi-method logic (Campbell & Fiske, 1959): at every grade and season, the convergent (math) correlations are stronger than the corresponding discriminant (reading) correlations.

 

 

Disaggregated Validity

The following disaggregated validity data are provided for context and did not factor into the Validity rating.

| Type of Validity | Subgroup | Age or Grade | Test or Criterion | n | Coefficient | 95% CI: Lower Bound | 95% CI: Upper Bound |
|---|---|---|---|---|---|---|---|
| None | | | | | | | |

Sample Representativeness

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Data | Local without Cross-Validation | Local without Cross-Validation | Local without Cross-Validation | Local without Cross-Validation | Local without Cross-Validation |

Primary Classification Accuracy Sample

Criterion 1, Fall

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Criterion | Winter Composite | Winter Composite | Year-End State Test in AZ | Year-End State Test in AZ | Year-End State Test in AZ |
| National/Local Representation | AZ | AZ | AZ | AZ | AZ |
| Date | 9/1/17 | 9/1/17 | 9/1/17 | 9/1/17 | 9/1/17 |
| Sample Size | 85 | 95 | 86 | 88 | 210 |
| Male | 58% | 56% | 57% | 51% | 54% |
| Female | 35% | 44% | 43% | 49% | 46% |
| Gender Unknown | 7% | 0% | 0% | 0% | 0% |
| Free or Reduced-price Lunch Eligible | 18% | 34% | 28% | 39% | 22% |
| White, Non-Hispanic | 44% | 50% | 52% | 49% | 70% |
| Black, Non-Hispanic | 4% | 4% | 0% | 0% | 0% |
| Hispanic | 33% | 40% | 41% | 43% | 27% |
| American Indian/Alaska Native | 0% | 1% | 3.5% | 0% | 0.9% |
| Other | 13% | 4% | 3.5% | 8% | 2.6% |
| Race/Ethnicity Unknown | 7% | 1% | 0% | 0% | 0% |
| Disability Classification | 8% | 9% | 7% | 13% | 9% |
| First Language | 1% | 1% | 0% | 2% | 0.5% |
| Language Proficiency Status | N/A | N/A | N/A | N/A | N/A |

     

Criterion 2, Winter

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Criterion | Spring Composite | Spring Composite | Year-End State Test in AZ | Year-End State Test in AZ | Year-End State Test in AZ |
| National/Local Representation | AZ | AZ | AZ | AZ | AZ |
| Date | 1/5/18 | 1/5/18 | 1/5/18 | 1/5/18 | 1/5/18 |
| Sample Size | 96 | 101 | 96 | 94 | 215 |
| Male | 55% | 54% | 53% | 53% | 53% |
| Female | 38% | 45% | 47% | 47% | 47% |
| Gender Unknown | 7% | 2% | 0% | 0% | 0% |
| Free or Reduced-price Lunch Eligible | 19% | 36% | 32% | 38% | 22% |
| White, Non-Hispanic | 45% | 49% | 52% | 50% | 70% |
| Black, Non-Hispanic | 3% | 5% | 0% | 0% | 0% |
| Hispanic | 34% | 38% | 42% | 42% | 27% |
| American Indian/Alaska Native | 0% | 1% | 3% | 0% | 0.5% |
| Other | 12% | 5% | 3% | 8% | 3% |
| Race/Ethnicity Unknown | 7% | 3% | 0% | 0% | 0% |
| Disability Classification | 7% | 9% | 8% | 13% | 9% |
| First Language | 1% | 1% | 0% | 3% | 0.5% |
| Language Proficiency Status | N/A | N/A | N/A | N/A | N/A |

     

Bias Analysis Conducted

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Rating | Yes | Yes | Yes | Yes | Yes |
  1. Description of the method used to determine the presence or absence of bias: We conducted a series of binary logistic regression analyses using Stata. Scoring below the 20th percentile on the Arizona year-end state test was the outcome criterion. The interaction term for each subgroup with the fall composite screening score, winter composite screening score, and class-wide intervention risk is provided in the table below; a brief sketch of this analysis follows item 3.

     

  2. Description of the subgroups for which bias analyses were conducted: Gender, Students with Disabilities, Ethnicity, and SES.

     

  3. Description of the results of the bias analyses conducted, including data and interpretative statements: None of the interactions were statistically significant. Thus, screening accuracy did not differ across subgroups in a way that was statistically significant.
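The minimal sketch below illustrates the analysis just described. The authors ran these models in Stata; the statsmodels translation is an assumed stand-in, and the data-frame columns (at_risk, screen, group) are hypothetical names.

```python
# Minimal sketch of the bias test described above: a binary logistic
# regression predicting below-20th-percentile status from the screening
# score, a subgroup indicator, and their interaction. Stata was the
# actual tool; statsmodels is an assumed stand-in here.
import pandas as pd
import statsmodels.formula.api as smf

def interaction_test(df: pd.DataFrame) -> tuple[float, float]:
    """df needs hypothetical columns: at_risk (1 = below the 20th percentile
    on the criterion), screen (composite screening score), group (0/1)."""
    model = smf.logit("at_risk ~ screen * group", data=df).fit(disp=False)
    # A non-significant screen:group coefficient corresponds to the
    # interaction terms in the table below: no detectable difference in
    # screening accuracy across the subgroup.
    return model.params["screen:group"], model.pvalues["screen:group"]
```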

     

| Grade | Interaction Tested | Fall Composite | Winter Composite | Classwide Intervention Risk |
|---|---|---|---|---|
| K | Gender | 0.201 (p = 0.252, n = 79) | -0.134 (p = 0.340, n = 89) | -18.06 (p = 0.172, n = 89) |
| K | Students with Disabilities | -5.22 (p = 0.995, n = 79) | model doesn't converge | -55.30 (p = 0.524, n = 89) |
| K | Ethnicity | 0.039 (p = 0.426, n = 79) | -0.190 (p = 0.853, n = 89) | 2.912 (p = 0.602, n = 89) |
| K | SES | 0.140 (p = 0.265, n = 79) | 0.174 (p = 0.415, n = 89) | -40.39 (p = 0.171, n = 89) |
| 1 | Gender | 0.012 (p = 0.777, n = 95) | -0.003 (p = 0.907, n = 99) | -7.29 (p = 0.627, n = 99) |
| 1 | Students with Disabilities | model doesn't converge | 0.007 (p = 0.851, n = 99) | -411.70 (p = 0.993, n = 99) |
| 1 | Ethnicity | 0.722 (p = 0.230, n = 95) | -1.04 (p = 0.269, n = 99) | 2.08 (p = 0.723, n = 99) |
| 1 | SES | -0.102 (p = 0.259, n = 95) | -0.457 (p = 0.820, n = 99) | -21.60 (p = 0.265, n = 99) |
| 3 | Gender | 7.838 (p = 0.994, n = 86) | 0.065 (p = 0.140, n = 96) | 9.822 (p = 0.166, n = 99) |
| 3 | Students with Disabilities | -2.838 (p = 0.993, n = 86) | 0.010 (p = 0.785, n = 96) | -2.51 (p = 0.815, n = 99) |
| 3 | Ethnicity | -0.008 (p = 0.814, n = 86) | -0.006 (p = 0.729, n = 96) | -4.68 (p = 0.850, n = 99) |
| 3 | SES | 0.124 (p = 0.297, n = 88) | 0.126 (p = 0.061, n = 94) | 5.96 (p = 0.404, n = 101) |
| 5 | Gender | -0.04 (p = 0.257, n = 88) | -0.06 (p = 0.150, n = 94) | 1.46 (p = 0.883, n = 101) |
| 5 | Students with Disabilities | -0.037 (p = 0.601, n = 88) | 0.023 (p = 0.216, n = 94) | 6.91 (p = 0.560, n = 101) |
| 5 | Ethnicity | 0.007 (p = 0.640, n = 88) | 0.023 (p = 0.216, n = 94) | 2.88 (p = 0.463, n = 101) |
| 5 | SES | -0.015 (p = 0.698, n = 86) | -0.027 (p = 0.535, n = 96) | -15.431 (p = 0.208, n = 99) |
| 7 | Gender | 0.045 (p = 0.734, n = 210) | -0.014 (p = 0.809, n = 215) | -12.293 (p = 0.336, n = 50) |
| 7 | Students with Disabilities | 0.026 (p = 0.908, n = 210) | 0.045 (p = 0.686, n = 215) | 3.01 (p = 0.839, n = 50) |
| 7 | Ethnicity | -0.021 (p = 0.664, n = 210) | 0.013 (p = 0.648, n = 215) | -0.106 (p = 0.989, n = 50) |
| 7 | SES | -0.445 (p = 0.124, n = 210) | 0.006 (p = 0.926, n = 215) | 16.678 (p = 0.181, n = 50) |
     

Administration Format

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Data | Individual, Group | Individual, Group | Individual, Group | Individual, Group | Individual, Group |

Administration & Scoring Time

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Data | 5-13 minutes | 5-13 minutes | 5-13 minutes | 5-13 minutes | 5-13 minutes |

Scoring Format

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Data | Manual | Manual | Manual | Manual | Manual |

Types of Decision Rules

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Data | Student & Class-wide Intervention Rules | Student & Class-wide Intervention Rules | Student & Class-wide Intervention Rules | Student & Class-wide Intervention Rules | Student & Class-wide Intervention Rules |

Evidence Available for Multiple Decision Rules

| Grade | K | 1 | 3 | 5 | 7 |
|---|---|---|---|---|---|
| Data | Yes | Yes | Yes | Yes | Yes |