FAST CBMReading

Reading

 

Sections covered: Cost; Technology, Human Resources, and Accommodations for Special Needs; Service and Support; Purpose and Other Implementation Information; Usage and Reporting

Initial Cost:

$7.00 per student

 

Replacement Cost:

$7.00 per student (for 2018-19 school year; annual license renewal fee subject to change)

 

Included in Cost:

FAST™ assessments are accessed through an annual subscription offered by FastBridge Learning, priced on a “per student assessed” model. The subscription rate for the 2018–19 school year is $7.00 per student. There are no additional fixed costs. FAST™ subscriptions are all-inclusive, providing access to: all FAST™ reading and math assessments for universal screening, progress monitoring, and diagnostic purposes, including Computer Adaptive Testing and Curriculum-Based Measurement; Behavior and Developmental Milestones assessment tools; the FAST™ data management and reporting system; embedded online system training for staff; and basic implementation and user support.

 

In addition to the online training modules embedded within the FAST™ application, FastBridge Learning offers onsite training options. One, two, and three-day packages are available. Packages are determined by implementation size and which FAST™ assessments (e.g., reading, math, and/or behavior) a district intends to use: 1-day package: $3,000.00; 2-day package: $5,750.00; 3-day package: $8,500.00. Any onsite training purchase also includes a complimentary online Admin/Manager training session (2 hours) for users who will be designated as District Managers and/or School Managers in FAST. Additionally, FastBridge offers web-based consultation and training delivered by certified FAST™ trainers. The web-based consultation and training rate is $175.00/hour.

Technology Requirements:

  • Computer or tablet
  • Internet connection

 

Training Requirements:

  • Less than 1 hour of training

 

Qualified Administrators:

  • No minimum qualifications specified

 

Accommodations:

The application allows for the following accommodations to support accessibility for culturally and linguistically diverse populations:

  • Enlarged and printed paper materials are available upon request.
  • Extra breaks as needed.
  • Preferential seating and use of quiet space.
  • Proxy responses.
  • Use of scratch paper.

Where to Obtain:

Website:

www.fastbridge.org

Address:

FastBridge Learning, LLC

520 Nicollet Mall, Suite 910, Minneapolis, MN 55402

Phone:
612-254-2534

Email:
sales@fastbridge.org


Access to Technical Support:

Users have access to ongoing technical support.

FAST™ CBMreading (English) is a version of Curriculum-Based Measurement of Oral Reading (CBM-R), which was originally developed to index the level and rate of reading achievement. FAST™ CBMreading is used to screen and monitor student progress in reading competency in grades 1-8. Students read aloud for one minute from grade-level or instructional-level passages (three passages per assessment). The number of words read correctly per minute functions as a robust indicator of overall reading achievement and a sensitive indicator of intervention effects.
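For illustration only, the sketch below shows how a words-read-correct-per-minute (WRC/min) score and a session score across the three passages might be computed. The function names and data are hypothetical and are not part of the FAST™ system; taking the median of three passages is a common CBM-R convention, not necessarily the FAST™ scoring rule.

```python
from statistics import median

def words_read_correct_per_minute(words_attempted: int, errors: int, seconds: float = 60.0) -> float:
    """WRC/min for a single timed passage: correct words scaled to a one-minute rate."""
    return (words_attempted - errors) * 60.0 / seconds

# Three one-minute passage readings for one student (hypothetical data):
# (words attempted, errors) per passage.
passages = [(112, 4), (105, 6), (118, 3)]
scores = [words_read_correct_per_minute(w, e) for w, e in passages]
print(scores)          # per-passage WRC/min
print(median(scores))  # median across passages, a common CBM-R session score
```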

 

Assessment Format:

  • Individual

 

Administration Time:

  • 1 - 5 minutes per student (depending on number of passages read)

 

Scoring Time:

  • Less than 1 minute

 

Scoring Method:

  • Calculated automatically

 

Scores Generated:

  • Percentile Score
  • Raw Score
  • Developmental Benchmarks
  • Error Analysis
  • Words Read Correct per Minute

 

 

Reliability

Grade  | 1 | 2 | 3 | 4 | 5 | 6
Rating | Half-filled bubble | Half-filled bubble | Half-filled bubble | Full bubble | Full bubble | Empty bubble

Justify the appropriateness of each type of reliability reported:

The first type of reliability evidence presented is inter-rater reliability. Inter-rater reliability is an appropriate measure of reliability for the use of FAST™ CBMreading because teachers listen to students and evaluate their oral reading fluency, including accuracy, so consistency across teachers (raters) is important.

The second type of reliability evidence presented is alternate-form reliability. Alternate-form reliability is an appropriate measure of reliability for FAST™ CBMreading as a screening tool because students take alternate forms (i.e., different passages) at each screening time point, so consistency in the rank order of scores across forms (passages) is important. The results presented below are median correlations between students’ scores on multiple passages (39 in first grade and 60 in the other grades). The maximum amount of time between administrations of the passages was two weeks.

 

Describe the sample characteristics for each reliability analysis conducted:

The first sample comprised approximately 1,900 students in grades 1-6. Students came from three samples: one from Minnesota, one from Georgia, and one from New York.

The second sample comprised approximately 150 students in each of grades 1-5. Students came from three samples: one from Minnesota, one from Georgia, and another from New York.

 

Describe the analysis procedures for each reported type of reliability:

Inter-rater reliability coefficients were estimated by calculating the median percent agreement between two teachers’ scores for each student. Confidence intervals represent 95% confidence intervals.

Students were tested on multiple passages in two weeks or less. Alternate-form reliability coefficients were estimated by calculating the Pearson product moment correlations between scores for each combination of passages. The coefficients below represent the median of those correlations. Confidence intervals represent 95% confidence intervals.
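As a worked illustration of the alternate-form procedure described above (not FastBridge's actual analysis code), the sketch below computes pairwise Pearson correlations between passages, takes their median, and builds an approximate 95% confidence interval via the Fisher z transformation. All scores, passage labels, and function names are hypothetical.

```python
import math
from itertools import combinations
from statistics import median

import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of WRC/min scores."""
    return float(np.corrcoef(x, y)[0, 1])

def fisher_ci_95(r, n):
    """Approximate 95% CI for a correlation via the Fisher z transformation."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)

# WRC/min scores for the same eight students on three alternate passages (hypothetical).
scores = {
    "Passage A": [52, 88, 67, 101, 73, 45, 96, 60],
    "Passage B": [49, 91, 70, 99, 75, 41, 92, 64],
    "Passage C": [55, 85, 62, 104, 70, 48, 98, 58],
}

pairwise = [pearson_r(scores[a], scores[b]) for a, b in combinations(scores, 2)]
r_median = median(pairwise)
n_students = len(next(iter(scores.values())))
print(round(r_median, 2), [round(v, 2) for v in fisher_ci_95(r_median, n_students)])
```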

Type of Reliability | Age or Grade | n   | Coefficient | Confidence Interval
Inter-rater         | Grade 1      | 146 | 0.97        | 0.96, 0.98
Inter-rater         | Grade 2      | 695 | 0.97        | 0.97, 0.97
Inter-rater         | Grade 3      | 698 | 0.97        | 0.97, 0.97
Inter-rater         | Grade 4      | 465 | 0.98        | 0.98, 0.98
Inter-rater         | Grade 5      | 459 | 0.98        | 0.98, 0.98
Inter-rater         | Grade 6      | 462 | 0.98        | 0.98, 0.98
Alternate Form      | Grade 1      | 206 | 0.74        | 0.64, 0.80
Alternate Form      | Grade 2      | 179 | 0.75        | 0.68, 0.81
Alternate Form      | Grade 3      | 126 | 0.75        | 0.66, 0.82
Alternate Form      | Grade 4      | 156 | 0.83        | 0.77, 0.87
Alternate Form      | Grade 5      | 140 | 0.83        | 0.77, 0.88

 

Validity

Grade  | 1 | 2 | 3 | 4 | 5 | 6
Rating | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble | Full bubble

Describe and justify the criterion measures used to demonstrate validity:

The criterion measure for both types of validity analyses (concurrent and predictive) is the oral reading fluency measure that is part of the AIMSWEB system. The measure is an appropriate criterion because it measures a construct hypothesized to be related to FAST™ CBMreading.

 

Describe the sample characteristics for each validity analysis conducted:

Concurrent and predictive analyses with AIMSWEB oral reading fluency measure were conducted on a sample of students from Minnesota. There were approximately 220 students in each of grades 1-6.

 

Describe the analysis procedures for each reported type of validity:

Validity coefficients were calculated by computing Pearson product moment correlations between FAST™ CBMreading and the criterion measure. Confidence intervals represent 95% confidence intervals.
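A minimal sketch of the validity computation described above, assuming simple paired score vectors; the data, variable names, and the use of a Fisher z interval are illustrative assumptions, not FastBridge's published code.

```python
import math
import numpy as np

# Hypothetical paired scores for one grade: FAST CBMreading vs. the AIMSWEB criterion.
fast_wrc = np.array([34, 58, 72, 91, 45, 66, 80, 103, 55, 70], dtype=float)
aimsweb_wrc = np.array([30, 61, 70, 95, 42, 69, 77, 108, 52, 74], dtype=float)

r = float(np.corrcoef(fast_wrc, aimsweb_wrc)[0, 1])        # validity coefficient
z, se = math.atanh(r), 1.0 / math.sqrt(len(fast_wrc) - 3)  # Fisher z and its standard error
ci_95 = (math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se))
print(round(r, 2), tuple(round(v, 2) for v in ci_95))
```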

Type of Validity | Age or Grade | Test or Criterion | n   | Coefficient | Confidence Interval
Concurrent       | Grade 1      | AIMSWEB           | 215 | 0.97        | 0.96, 0.98
Concurrent       | Grade 2      | AIMSWEB           | 245 | 0.97        | 0.96, 0.98
Concurrent       | Grade 3      | AIMSWEB           | 245 | 0.95        | 0.94, 0.96
Concurrent       | Grade 4      | AIMSWEB           | 247 | 0.97        | 0.96, 0.98
Concurrent       | Grade 5      | AIMSWEB           | 224 | 0.96        | 0.95, 0.97
Concurrent       | Grade 6      | AIMSWEB           | 220 | 0.95        | 0.93, 0.96
Predictive       | Grade 1      | AIMSWEB           | 208 | 0.91        | 0.88, 0.93
Predictive       | Grade 2      | AIMSWEB           | 230 | 0.92        | 0.90, 0.94
Predictive       | Grade 3      | AIMSWEB           | 220 | 0.90        | 0.87, 0.92
Predictive       | Grade 4      | AIMSWEB           | 242 | 0.92        | 0.90, 0.94
Predictive       | Grade 5      | AIMSWEB           | 223 | 0.92        | 0.90, 0.94
Predictive       | Grade 6      | AIMSWEB           | 220 | 0.94        | 0.92, 0.95

 

Describe the degree to which the provided data support the validity of the tool:

The validity coefficients provide moderate to strong evidence for the use of FAST™ CBMreading as a CBM-R measure of oral reading.

Bias Analysis Conducted

Grade  | 1 | 2 | 3 | 4 | 5 | 6
Rating | No | No | No | No | No | No

Have additional analyses been conducted to establish whether the tool is or is not biased against demographic subgroups (e.g., students who vary by race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)?

Bias Analysis Method: No qualifying evidence provided.

Subgroups Included: No qualifying evidence provided.

Bias Analysis Results: No qualifying evidence provided.

Sensitivity: Reliability of the Slope

Grade  | 1 | 2 | 3 | 4 | 5 | 6
Rating | dash | dash | dash | dash | dash | dash

Describe the sample used for analyses, including size and characteristics:

No qualifying evidence provided.

 

Describe the frequency of measurement:

No qualifying evidence provided.

 

Describe reliability of the slope analyses conducted with a population of students in need of intensive intervention:

No qualifying evidence provided.

 

Sensitivity: Validity of the Slope

Grade  | 1 | 2 | 3 | 4 | 5 | 6
Rating | dash | dash | dash | dash | dash | dash

Describe and justify the criterion measures used to demonstrate validity:

No qualifying evidence provided.

 

Describe the sample used for analyses, including size and characteristics:

No qualifying evidence provided.

 

Describe predictive validity of the slope of improvement analyses conducted with a population of students in need of intensive intervention:

No qualifying evidence provided.

 

Describe the degree to which the provided data support the validity of the tool:

No qualifying evidence provided.

 

Alternate Forms

Grade  | 1 | 2 | 3 | 4 | 5 | 6
Rating | dash | dash | dash | dash | dash | dash

Describe the sample for these analyses, including size and characteristics:

No qualifying evidence provided.

 

Evidence that alternate forms are of equal and controlled difficulty or, if IRT based, evidence of item or ability invariance:

No qualifying evidence provided.

 

Number of alternate forms of equal and controlled difficulty:

The number of alternate forms is 20. 

Decision Rules: Setting and Revising Goals

Grade  | 1 | 2 | 3 | 4 | 5 | 6
Rating | dash | dash | dash | dash | dash | dash

Specification of validated decision rules for when goals should be set or revised:

No qualifying evidence provided.

 

Evidentiary basis for these rules:

No qualifying evidence provided.

Decision Rules: Changing Instruction

Grade  | 1 | 2 | 3 | 4 | 5 | 6
Rating | dash | dash | dash | dash | dash | dash

Specification of validated decision rules for when changes to instruction should be made:

No qualifying evidence provided.

 

Evidentiary basis for these rules:

No qualifying evidence provided.

Administration Format

Grade | 1 | 2 | 3 | 4 | 5 | 6
Data  | Individual | Individual | Individual | Individual | Individual | Individual

Administration & Scoring Time

Grade | 1 | 2 | 3 | 4 | 5 | 6
Data  | 1-5 minutes | 1-5 minutes | 1-5 minutes | 1-5 minutes | 1-5 minutes | 1-5 minutes

Scoring Format

Grade | 1 | 2 | 3 | 4 | 5 | 6
Data  | Computer-scored | Computer-scored | Computer-scored | Computer-scored | Computer-scored | Computer-scored

ROI & EOY Benchmarks

Grade | 1 | 2 | 3 | 4 | 5 | 6
Data  | Available | Available | Available | Available | Available | Available
Specify the minimum acceptable rate of growth/improvement:

Grade | Fall-to-Spring Screening ROIs (Norms) | Weekly Progress Monitoring ROIs (Norms) | Progress Monitoring Levels (Recommendations) | Minimal/Ambitious ROIs (Recommendations) | End of Year Levels (Recommendations)
1     | 1.91 | 1.47 | < 18       | 1.5 / 2.0  | > 70
2     | 1.36 | 1.22 | 40 to 59   | 1.5 / 2.0  | > 105
3     | 1.13 | 0.98 | 60 to 91   | 1.5 / 2.0  | > 130
4     | 1.01 | 0.90 | 92 to 133  | 1.0 / 1.5  | > 150
5     | 0.89 | 0.91 | 133 to 141 | 0.90 / 1.0 | > 161
6     | 0.96 | 0.89 | > 141      | 0.90 / 1.0 | > 171

Note. Normative and criterion standards were used to set the recommendations for ROI.

Norms for the Fall-to-Spring Screening ROI were derived from large, nationally representative samples (range, 6,485 to 44,102) of student performances during the fall and spring screening periods. The norms depict average growth in the typically developing population, which here includes cases with a fall score between the 30th and 85th percentiles.

Norms for the Weekly Progress Monitoring ROI were derived from modest, nationally representative samples of ROIs (range, 640 to 2,982) with 10 to 30 weeks of data from weekly monitoring. These are presented alongside the Fall-to-Spring norms to illustrate that normative growth in the population is often greater than normative growth among students who receive intervention and weekly monitoring, which is not an acceptable outcome.

The recommended Progress Monitoring Levels and End of Year Levels correspond with research-based benchmark estimates. Students within the Progress Monitoring Levels are at risk for reading deficits and are likely to perform below the 40th percentile on nationally normed assessments and below proficiency on national standards. Those students should be considered for intervention and monitoring to accelerate their ROI and meet the End of Year benchmarks.

The recommended Minimal/Ambitious ROIs are benchmark levels for the minimal acceptable ROI by grade level and recommended Progress Monitoring Level. The minimal acceptable standards are presented along with recommendations for more ambitious ROIs (Deno et al., 2001; Fuchs et al., 1993). These recommendations are useful for setting expectations for ROI when evidence-based interventions are implemented with adequate intensity (4 days a week for 20 minutes each day). The goal ROI should be sufficiently ambitious to approximate the End of Year Level, but not so ambitious as to make the goal unattainable.
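As one way to apply these recommendations, the sketch below projects an end-of-year goal from a baseline WRC/min score and a weekly ROI, and keeps the goal from exceeding the recommended End of Year Level. The dictionaries mirror the table above; the function, the cap, and the example student are illustrative assumptions, not part of the FAST™ system.

```python
# Grade: (minimal ROI, ambitious ROI) in WRC/min gained per week, from the table above.
ROI_RECOMMENDATIONS = {
    1: (1.5, 2.0), 2: (1.5, 2.0), 3: (1.5, 2.0),
    4: (1.0, 1.5), 5: (0.90, 1.0), 6: (0.90, 1.0),
}
# Recommended End of Year Levels ("greater than" targets) from the table above.
END_OF_YEAR_LEVELS = {1: 70, 2: 105, 3: 130, 4: 150, 5: 161, 6: 171}

def end_of_year_goal(grade: int, baseline_wrc: float, weeks_remaining: int,
                     ambitious: bool = False) -> float:
    """Project a goal as baseline + ROI * weeks, capped at the End of Year Level
    so the goal approximates that level without becoming unattainable."""
    minimal_roi, ambitious_roi = ROI_RECOMMENDATIONS[grade]
    roi = ambitious_roi if ambitious else minimal_roi
    return min(baseline_wrc + roi * weeks_remaining, END_OF_YEAR_LEVELS[grade])

# Example: a grade 3 student reading 55 WRC/min with 30 weeks of intervention remaining.
print(end_of_year_goal(grade=3, baseline_wrc=55, weeks_remaining=30, ambitious=True))  # 115.0
```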

     

Specify the benchmarks for minimum acceptable end-of-year performance:

Words Read Correct per Minute

Grade | Fall      | Winter    | Spring
1     | 15        | (14) 24   | (36) 56
2     | (40) 56   | (59) 78   | (70) 95
3     | (60) 76   | (74) 93   | (86) 108
4     | (94) 114  | (103) 128 | (120) 140
5     | (108) 123 | (113) 131 | (117) 140
6     | (100) 119 | (102) 131 | (120) 140

Note: The benchmark standards are the unparenthesized values; performance at or above that level indicates that a student is 80% likely to be on track, and students below those standards are less likely to be on track. High-risk indicators appear in parentheses; students at or below those levels are less than 20% likely to be on track.
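For illustration, the sketch below encodes the benchmark table above as a lookup and classifies a student's score against the standard and high-risk cut points; the classification labels and function are hypothetical and do not reproduce FastBridge's scoring logic.

```python
# Grade -> season -> (high-risk cutoff or None, benchmark standard), from the table above.
BENCHMARKS = {
    1: {"fall": (None, 15), "winter": (14, 24), "spring": (36, 56)},
    2: {"fall": (40, 56), "winter": (59, 78), "spring": (70, 95)},
    3: {"fall": (60, 76), "winter": (74, 93), "spring": (86, 108)},
    4: {"fall": (94, 114), "winter": (103, 128), "spring": (120, 140)},
    5: {"fall": (108, 123), "winter": (113, 131), "spring": (117, 140)},
    6: {"fall": (100, 119), "winter": (102, 131), "spring": (120, 140)},
}

def risk_status(grade: int, season: str, wrc_per_min: float) -> str:
    """Compare a student's WRC/min score with the published cut points."""
    high_risk, standard = BENCHMARKS[grade][season]
    if wrc_per_min >= standard:
        return "on track (about 80% likely)"
    if high_risk is not None and wrc_per_min <= high_risk:
        return "high risk (less than 20% likely to be on track)"
    return "below standard (less likely to be on track)"

print(risk_status(grade=2, season="winter", wrc_per_min=65))  # below standard
```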