FAST

CBMReading - English


Initial Cost:

FAST™ assessments are accessed through an annual subscription offered by FastBridge Learning, priced on a “per student assessed” model. The subscription rate for school year 2017–18 is $7.00 per student. There are no additional fixed costs. FAST subscriptions are all-inclusive, providing access to: all FAST reading and math assessments for universal screening, progress monitoring, and diagnostic purposes, including Computer Adaptive Testing and Curriculum-Based Measurement; Behavior and Developmental Milestones assessment tools; the FAST data management and reporting system; embedded online system training for staff; and basic implementation and user support.

 

In addition to the online training modules embedded within the FAST application, FastBridge Learning offers onsite training options. One-, two-, and three-day packages are available. Packages are determined by implementation size and which FAST assessments (e.g., reading, math, and/or behavior) a district intends to use: 1-day package: $3,000.00; 2-day package: $6,000.00; 3-day package: $9,000.00. Any onsite training purchase also includes a complimentary online Admin/Manager training session (2 hours) for users who will be designated as District Managers and/or School Managers in FAST. Additionally, FastBridge offers web-based consultation and training delivered by certified FAST trainers. The web-based consultation and training rate is $200.00/hour.

 

Replacement Cost:

Annual rates subject to change.

 

Included in Cost:

  • The FAST™ application is a fully cloud-based system, so computer and Internet access are required for full use of the application. Teachers require less than one hour of training to administer the assessment. A paraprofessional can administer the assessment as a Group Proctor in the FAST application.

Technology Requirements:

  • Computer or tablet
  • Internet connection

 

Training Requirements:

  • Less than 1 hour of training

 

Qualified Administrators:

No minimum requirements

 

Accommodations:

The application allows for the following accommodations to support accessibility, including for culturally and linguistically diverse populations:

  • Enlarged and printed paper materials are available upon request.
  • Extra breaks as needed.
  • Preferential seating and use of quiet space.
  • Proxy responses.
  • Use of scratch paper.
  • As part of item development, all items were reviewed for bias and fairness.

 

Where to Obtain:

Website: www.fastbridge.org

Address: 520 Nicollet Mall, Suite 910, Minneapolis, MN 55402

Phone number: 612.254.2534

Email address: info@fastbridge.org


Access to Technical Support:

Users have access to professional development technicians, as well as ongoing technical support.

FAST™ CBMreading is a version of Curriculum-Based Measurement of Oral Reading (CBM-R), which was originally developed to index the level and rate of reading achievement. FAST™ CBMreading is used to screen and monitor student progress in reading competency in grades 1-8. Students read aloud for one minute from grade-level or instructional-level passages (three passages per assessment). The number of words read correctly per minute functions as a robust indicator of reading proficiency and a sensitive indicator of intervention effects.

 

 

Assessment Format:

  • Direct: Computerized
  • One-to-one

 

Administration Time:

  • 1-5 minutes per student

 

Scoring Time:

  • Scoring is automatic

 

Scoring Method:

Three raw scores are calculated for FAST™ CBMreading: (a) total words read, the total number of words the student read, including both correct and incorrect responses; (b) number of errors, the total number of errors the student made during the one-minute administration; and (c) words read correct per minute, calculated as the total number of words read in one minute minus the number of errors made during that minute.
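As a concrete illustration of how the three raw scores relate, the sketch below computes them for a single one-minute administration. This is a minimal, hypothetical example; the function name and inputs are assumptions, not FastBridge code.

def cbmreading_raw_scores(words_attempted, errors):
    """Illustrative calculation of the three CBMreading raw scores for one
    one-minute administration (hypothetical helper, not FastBridge's code)."""
    total_words_read = words_attempted                      # correct + incorrect responses
    words_read_correct_per_minute = total_words_read - errors
    return {
        "total_words_read": total_words_read,
        "number_of_errors": errors,
        "words_read_correct_per_minute": words_read_correct_per_minute,
    }

# Example: a student attempts 112 words and makes 7 errors in one minute.
print(cbmreading_raw_scores(112, 7))
# {'total_words_read': 112, 'number_of_errors': 7, 'words_read_correct_per_minute': 105}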


Scores Generated:

  • Raw score
  • Percentile score
  • Developmental benchmarks
  • Error analysis
  • Words read correct per minute      

 

 

 

Classification Accuracy

Grade               | 1                  | 2           | 3           | 4           | 5                  | 6
Criterion 1 Fall    | Half-filled bubble | Full bubble | Full bubble | Full bubble | Half-filled bubble | Half-filled bubble
Criterion 1 Winter  | -                  | -           | -           | -           | -                  | -
Criterion 1 Spring  | -                  | -           | -           | -           | -                  | -
Criterion 2 Fall    | -                  | -           | -           | -           | -                  | -
Criterion 2 Winter  | -                  | -           | -           | -           | -                  | -
Criterion 2 Spring  | -                  | -           | -           | -           | -                  | -

Time of Year: Fall

Statistic | Grade 1 | Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6
Cut points | 16.5 (20th percentile) | 42.5 (20th percentile) | 75.5 (20th percentile) | 108.5 (20th percentile) | 117.5 (20th percentile) | 118.5 (20th percentile)
Base rate in the sample for children requiring intensive intervention | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided
Base rate in the sample for children considered at-risk, including those with the most intensive needs | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided
False Positive Rate | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided
False Negative Rate | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided
Sensitivity | 0.75 | 0.88 | 0.84 | 0.78 | 0.84 | 0.92
Specificity | 0.63 | 0.87 | 0.83 | 0.82 | 0.79 | 0.72
Positive Predictive Power | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided
Negative Predictive Power | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided
Overall Classification Rate | 0.71 | 0.88 | 0.84 | 0.79 | 0.83 | 0.88
Area Under the Curve (AUC) | 0.81 | 0.93 | 0.89 | 0.87 | 0.90 | 0.90
AUC 95% Confidence Interval Lower | 0.76 | 0.90 | 0.86 | 0.83 | 0.87 | 0.87
AUC 95% Confidence Interval Upper | 0.86 | 0.96 | 0.92 | 0.91 | 0.93 | 0.93
At 90% Sensitivity, specificity equals | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided
At 80% Sensitivity, specificity equals | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided
At 70% Sensitivity, specificity equals | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided | Not Provided

 

 

Reliability

Grade  | 1                  | 2                  | 3                  | 4           | 5           | 6
Rating | Half-filled bubble | Half-filled bubble | Half-filled bubble | Full bubble | Full bubble | Empty bubble
  1. Justification for each type of reliability reported, given the type and purpose of the tool:

The first type of reliability evidence we present is inter-rater reliability. Inter-rater reliability is an appropriate measure of reliability for the use of FAST CBMreading because teachers listen to students and evaluate their oral reading fluency, including accuracy, so consistency across teachers (raters) is important.

 

The second type of reliability evidence we present is alternate-form reliability. Alternate-form reliability is an appropriate measure of reliability for FAST CBMreading as a screening tool because students take alternate forms (i.e., passages) at each screening time point, so consistency in the rank order of scores across forms (passages) is important. The results presented below are median correlations between students’ scores on multiple passages (39 passages in first grade and 60 in each of the other grades). The maximum amount of time between administrations of the passages was two weeks.

 

  2. Description of the sample(s), including size and characteristics, for each reliability analysis conducted:

Inter-rater: Approximately 1,900 students in grades 1-6 (see table below for student N by grade level). Students came from three samples, one from Minnesota, one from Georgia, and one from New York.

 

Alternate forms: Approximately 150 students in each of grades 1-5. Students came from three samples: one from Minnesota, one from Georgia, and another from New York.

 

  3. Description of the analysis procedures for each reported type of reliability:

Inter-rater: Inter-rater reliability coefficients were estimated by calculating the median percent agreement between two teachers’ scores for each student. Reported confidence intervals are 95% confidence intervals.

 

Alternate forms: Students were tested on multiple passages within a two-week window. Alternate-form reliability coefficients were estimated by calculating Pearson product-moment correlations between scores for each pair of passages; the coefficients below are the medians of those correlations. Reported confidence intervals are 95% confidence intervals.
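A minimal sketch of the alternate-form procedure described above: correlate every pair of passages and take the median correlation. This is an illustration of the general approach, not FastBridge's analysis code; the data layout and function names are assumptions.

from itertools import combinations
from statistics import median

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def median_alternate_form_reliability(scores_by_passage):
    """scores_by_passage maps passage id -> list of student scores, with the
    same student order in every list. Returns the median Pearson correlation
    over all pairs of passages."""
    correlations = [pearson_r(scores_by_passage[a], scores_by_passage[b])
                    for a, b in combinations(scores_by_passage, 2)]
    return median(correlations)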

 

  4. Reliability of performance level score (e.g., model-based, internal consistency, inter-rater reliability).

Type of Reliability | Age or Grade | n   | Coefficient | Confidence Interval
Inter-rater         | 1            | 146 | 0.97        | 0.96, 0.98
Inter-rater         | 2            | 695 | 0.97        | 0.97, 0.97
Inter-rater         | 3            | 698 | 0.97        | 0.97, 0.97
Inter-rater         | 4            | 465 | 0.98        | 0.98, 0.98
Inter-rater         | 5            | 459 | 0.98        | 0.98, 0.98
Inter-rater         | 6            | 462 | 0.98        | 0.98, 0.98
Alternate-form      | 1            | 206 | 0.74        | 0.64, 0.80
Alternate-form      | 2            | 179 | 0.75        | 0.68, 0.81
Alternate-form      | 3            | 126 | 0.75        | 0.66, 0.82
Alternate-form      | 4            | 156 | 0.83        | 0.77, 0.87
Alternate-form      | 5            | 140 | 0.83        | 0.77, 0.88

 

Disaggregated Reliability

The following disaggregated reliability data are provided for context and did not factor into the Reliability rating.

None

Validity

Grade  | 1           | 2           | 3           | 4           | 5           | 6
Rating | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble
  1. Description of each criterion measure used and explanation as to why each measure is appropriate, given the type and purpose of the tool:

The criterion measure for both types of validity analyses (concurrent and predictive) is the oral reading fluency measure that is part of the AIMSWEB system. The measure is an appropriate criterion because it measures a construct hypothesized to be related to FAST™ CBMreading.

 

  2. Description of the sample(s), including size and characteristics, for each validity analysis conducted:

Concurrent and predictive analyses with AIMSWEB oral reading fluency measure were conducted on a sample of students from Minnesota. There were approximately 220 students in each of grades 1-6.

 

  3. Description of the analysis procedures for each reported type of validity:

Validity coefficients were calculated by computing Pearson product-moment correlations between FAST™ CBMreading and the criterion measure. Reported confidence intervals are 95% confidence intervals.
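One common way to construct a 95% confidence interval around a Pearson correlation is the Fisher r-to-z transformation; the sketch below uses that approach for illustration. The report does not state which interval method FastBridge used, so treat this as an assumption.

import math

def pearson_95_ci(r, n):
    """Approximate 95% confidence interval for a Pearson correlation r based
    on n cases, using the Fisher r-to-z transformation (illustrative only)."""
    z = 0.5 * math.log((1 + r) / (1 - r))   # Fisher transform of r
    se = 1 / math.sqrt(n - 3)               # standard error of z
    lower_z, upper_z = z - 1.96 * se, z + 1.96 * se

    def back(v):
        # inverse Fisher transform (z back to r)
        return (math.exp(2 * v) - 1) / (math.exp(2 * v) + 1)

    return round(back(lower_z), 2), round(back(upper_z), 2)

# Example: the grade 1 concurrent coefficient in the table below (r = 0.97, n = 215)
print(pearson_95_ci(0.97, 215))   # approximately (0.96, 0.98)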

 

  4. Validity for the performance level score (e.g., concurrent, predictive, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.

Type of Validity | Age or Grade | Test or Criterion | n   | Coefficient | Confidence Interval
Concurrent       | 1            | AIMSWEB           | 215 | 0.97        | 0.96, 0.98
Concurrent       | 2            | AIMSWEB           | 245 | 0.97        | 0.96, 0.98
Concurrent       | 3            | AIMSWEB           | 245 | 0.95        | 0.94, 0.96
Concurrent       | 4            | AIMSWEB           | 247 | 0.97        | 0.96, 0.98
Concurrent       | 5            | AIMSWEB           | 224 | 0.96        | 0.95, 0.97
Concurrent       | 6            | AIMSWEB           | 220 | 0.95        | 0.93, 0.96
Predictive       | 1            | AIMSWEB           | 208 | 0.91        | 0.88, 0.93
Predictive       | 2            | AIMSWEB           | 230 | 0.92        | 0.90, 0.94
Predictive       | 3            | AIMSWEB           | 220 | 0.90        | 0.87, 0.92
Predictive       | 4            | AIMSWEB           | 242 | 0.92        | 0.90, 0.94
Predictive       | 5            | AIMSWEB           | 223 | 0.92        | 0.90, 0.94
Predictive       | 6            | AIMSWEB           | 220 | 0.94        | 0.92, 0.95

 

  5. Results for other forms of validity (e.g., factor analysis) not conducive to the table format:

None

 

  6. Describe the degree to which the provided data support the validity of the tool:

The validity coefficients provide moderate to strong evidence for the use of FAST™ CBMreading as a measure of CBM-R.

 

 

Disaggregated Validity

The following disaggregated validity data are provided for context and did not factor into the Validity rating.

None

Results for other forms of disaggregated validity (e.g. factor analysis) not conducive to the table format:

None

 

Sample Representativeness

Grade  | 1            | 2            | 3            | 4            | 5            | 6
Rating | Empty bubble | Empty bubble | Empty bubble | Empty bubble | Empty bubble | Empty bubble

Primary Classification Accuracy Sample

Representation: Large local sample from a single state (Minnesota).

Date: 2012-13

Size: 1,153

Male: 45%

Female: 55%

Unknown: NA

Free or reduced-price lunch: 20%

Other SES Indicators: Not provided

White, Non-Hispanic: 52%

Black, Non-Hispanic: 12%

Hispanic: 30%

American Indian/Alaska Native: 1%

Asian/Pacific Islander: 4%

Other: Not provided

Unknown: Not provided

Disability classification: 15% special education

First language: Not provided

Language proficiency status: All students were English proficient

 

Bias Analysis Conducted

Grade  | 1  | 2  | 3  | 4  | 5  | 6
Rating | No | No | No | No | No | No
  1. Description of the method used to determine the presence or absence of bias:

None

 

  2. Description of the subgroups for which bias analyses were conducted:

None

 

  3. Description of the results of the bias analyses conducted, including data and interpretative statements:

None

 

Administration Format

Grade | 1          | 2          | 3          | 4          | 5          | 6
Data  | Individual | Individual | Individual | Individual | Individual | Individual

Administration & Scoring Time

Grade | 1           | 2           | 3           | 4           | 5           | 6
Data  | 1-5 minutes | 1-5 minutes | 1-5 minutes | 1-5 minutes | 1-5 minutes | 1-5 minutes

Scoring Format

Grade | 1         | 2         | 3         | 4         | 5         | 6
Data  | Automatic | Automatic | Automatic | Automatic | Automatic | Automatic

Types of Decision Rules

Grade | 1    | 2    | 3    | 4    | 5    | 6
Data  | None | None | None | None | None | None

Evidence Available for Multiple Decision Rules

Grade | 1  | 2  | 3  | 4  | 5  | 6
Data  | No | No | No | No | No | No