FAST

Adaptive Reading (aReading)

Cost

Technology, Human Resources, and Accommodations for Special Needs

Service and Support

Purpose and Other Implementation Information

Usage and Reporting

Initial Cost:

FAST™ assessments are accessed through an annual subscription offered by FastBridge Learning, priced on a “per student assessed” model. The subscription rate for school year 2017–18 is $7.00 per student. There are no additional fixed costs. FAST subscriptions are all-inclusive, providing access to: all FAST reading and math assessments for universal screening, progress monitoring, and diagnostic purposes, including Computer Adaptive Testing and Curriculum-Based Measurement; Behavior and Developmental Milestones assessment tools; the FAST data management and reporting system; embedded online system training for staff; and basic implementation and user support.

 

In addition to the online training modules embedded within the FAST application, FastBridge Learning offers onsite training options. One-, two-, and three-day packages are available. Packages are determined by implementation size and which FAST assessments (e.g., reading, math, and/or behavior) a district intends to use: 1-day package: $3,000.00; 2-day package: $6,000.00; 3-day package: $9,000.00. Any onsite training purchase also includes a complimentary online Admin/Manager training session (2 hours) for users who will be designated as District Managers and/or School Managers in FAST. Additionally, FastBridge offers web-based consultation and training delivered by certified FAST trainers. The web-based consultation and training rate is $200.00/hour.

 

Replacement Cost:

Annual rates subject to change.

 

Included in Cost:

The FAST™ application is a fully cloud-based system, and therefore computer and Internet access are required for full use of the application. Teachers will require less than one hour of training on the administration of the assessment. A paraprofessional can administer the assessment as a Group Proctor in the FAST application.

Technology Requirements:

  • Computer or tablet
  • Internet connection

 

Training Requirements:

  • Less than 1 hour of training

 

Qualified Administrators:

No minimum qualifications

 

Accommodations:

The application allows for the following accommodations to support accessibility for culturally and linguistically diverse populations:

  • Text magnification.
  • Sound amplification.
  • Enlarged and printed paper materials are available upon request.
  • Students with differing needs or disabilities may take computer-adaptive tests such as aReading via a tablet-type device to facilitate screen optimization, magnification, sound amplification, and standard accommodations.
  • Extended time in aReading and untimed portions of earlyReading.
  • Extra breaks as needed.
  • Preferential seating and use of quiet space.
  • Proxy responses.
  • Use of scratch paper.
  • As part of item development, all items were reviewed for bias and fairness.

 

Where to Obtain:

Website: www.fastbridge.org

Address: 520 Nicollet Mall, Suite 910, Minneapolis, MN 55402

Phone number: 612-254-2534

Email address: info@fastbridge.org


Access to Technical Support:

Users have access to professional development technicians, as well as ongoing technical support.

FAST™ Adaptive Reading (FAST™ aReading) is a fully automated, computer-adaptive measure of broad reading ability that is individualized for each student. FAST™ aReading provides a useful estimate of broad reading achievement from kindergarten through twelfth grade. The assessment is administered online to groups in 15-30 minutes. The question and response formats used in FAST aReading are substantially similar to those of many statewide standardized assessments, and the test assesses Common Core State Standards skills and domains, including concepts of print, phonological awareness, phonics, vocabulary, comprehension, orthography, and morphology. Browser-based software adapts and individualizes the assessment for each child so that it essentially functions at the child’s developmental and skill level. The adaptive nature of the test makes it more efficient and more precise than paper-and-pencil assessments.
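To illustrate the adaptive logic described above, the sketch below implements a generic maximum-information adaptive loop under the Rasch (1PL) model. The item bank, ability-update routine, stopping rule, and all parameter values are illustrative assumptions; FastBridge does not publish the details of the aReading engine, and this is not its actual implementation.

```python
import math

def p_correct(theta: float, b: float) -> float:
    """Rasch-model probability that a student at ability theta answers an
    item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta: float, bank: list, used: list) -> int:
    """Maximum-information selection: under the Rasch model the most
    informative remaining item is the one whose difficulty is closest to
    the current ability estimate."""
    remaining = [i for i in range(len(bank)) if i not in used]
    return min(remaining, key=lambda i: abs(bank[i] - theta))

def update_theta(theta, used, answers, bank, steps=10):
    """Newton-Raphson maximum-likelihood update of the ability estimate."""
    info = 0.0
    for _ in range(steps):
        probs = [p_correct(theta, bank[i]) for i in used]
        grad = sum(x - p for x, p in zip(answers, probs))
        info = sum(p * (1 - p) for p in probs)
        if info <= 0:
            break
        theta = max(-4.0, min(4.0, theta + grad / info))  # keep the estimate bounded
    return theta, info

def administer(bank, respond, max_items=30, se_target=0.30):
    """Run the adaptive loop until the ability estimate is precise enough
    (standard error below se_target) or max_items items have been given."""
    theta, used, answers = 0.0, [], []
    for _ in range(max_items):
        i = next_item(theta, bank, used)
        used.append(i)
        answers.append(respond(i))  # respond() returns 1 (correct) or 0
        theta, info = update_theta(theta, used, answers, bank)
        if info > 0 and 1.0 / math.sqrt(info) < se_target:
            break
    return theta
```

A calibrated item bank and a selection rule of this general shape are what make an adaptive test shorter and more precise than a fixed form: each student only sees items near their current ability estimate.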

 

 

Assessment Format:

  • Direct: Computerized
  • One-to-one

 

Administration Time:

  • 15-30 minutes per student

 

Scoring Time:

  • Scoring is automatic

 

Scoring Method:

FAST™ aReading is a computer-adaptive test (CAT), and therefore yields scores based on an IRT logit scale. This type of scale is not useful to most school professionals; in addition, it is difficult to interpret scores on a scale for which everything below the mean value yields a negative number. Therefore, it was necessary to create a FAST aReading scale more similar to existing educational measures. The FAST aReading scale yields scores that are transformed from logits using the following formula:

y = 500 + (50*Logit Score)

where y is the new FAST aReading scaled score, and Logit Score is the initial FAST aReading theta estimate. Scores were scaled with a lower bound of 350 and an upper bound of 650. The mean value is 500 and the standard deviation is 50.
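As a worked example of the scaling described above, the helper below converts a logit (theta) estimate to the reported scale. The integer rounding and the clamping of out-of-range estimates to the published 350/650 bounds are assumptions about presentation, not documented FastBridge behavior.

```python
def areading_scaled_score(logit_score: float) -> int:
    """Transform a FAST aReading theta (logit) estimate to the reported
    scale: y = 500 + 50 * logit, mean 500, SD 50, bounded at 350 and 650.
    Clamping and rounding are illustrative assumptions."""
    y = 500 + 50 * logit_score
    y = max(350, min(650, y))
    return round(y)

# Example: a theta estimate of -1.2 logits maps to 500 + 50 * (-1.2) = 440.
print(areading_scaled_score(-1.2))  # 440
```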

 

Scores Generated:

  • Percentile score
  • IRT-based score            
  • Developmental benchmarks

 

 

Classification Accuracy

Grade                1            2            3            4            5
Criterion 1 Fall     dash         dash         dash         dash         dash
Criterion 1 Winter   dash         dash         dash         dash         dash
Criterion 1 Spring   Full bubble  Full bubble  Full bubble  Full bubble  Full bubble
Criterion 2 Fall     dash         dash         dash         dash         dash
Criterion 2 Winter   dash         dash         dash         dash         dash
Criterion 2 Spring   dash         dash         dash         dash         dash

Primary Sample

 

Criterion 1: GMRT-4th

Time of Year: Spring

 

                                                   Grade 1   Grade 2   Grade 3   Grade 4   Grade 5
Cut points (20th percentile)                       317       381       418       439       439
Base rate in the sample for children requiring
  intensive intervention                           0.20      0.21      0.18      0.20      0.20
Base rate in the sample for children considered
  at-risk, including those with the most
  intensive needs                                  Unknown   Unknown   Unknown   Unknown   Unknown
False Positive Rate                                0.13      0.20      0.20      0.14      0.17
False Negative Rate                                0.09      0.26      0.14      0.13      0.16
Sensitivity                                        0.91      0.74      0.86      0.87      0.84
Specificity                                        0.87      0.80      0.80      0.86      0.84
Positive Predictive Power                          0.64      0.49      0.49      0.60      0.56
Negative Predictive Power                          0.98      0.92      0.96      0.96      0.96
Overall Classification Rate                        0.88      0.79      0.81      0.86      0.83
Area Under the Curve (AUC)                         0.94      0.88      0.92      0.94      0.87
95% Confidence Interval Lower                      0.91      0.84      0.89      0.91      0.83
95% Confidence Interval Upper                      0.97      0.92      0.95      0.97      0.91
At 90% sensitivity, specificity equals             0.87      0.65      0.82      0.82      0.67
At 80% sensitivity, specificity equals             0.89      0.77      0.82      0.90      0.84
At 70% sensitivity, specificity equals             0.89      0.84      0.90      0.94      0.90
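For readers who want to connect the rows of the table above, the snippet below applies the standard 2x2 classification identities: the false positive rate is 1 minus specificity, the false negative rate is 1 minus sensitivity, and predictive power and the overall classification rate follow from sensitivity, specificity, and the base rate. It is a worked check against the reported values, not the analysis code used to produce them.

```python
def derived_classification_metrics(sensitivity: float, specificity: float,
                                   base_rate: float) -> dict:
    """Standard 2x2 classification identities, given sensitivity,
    specificity, and the base rate of students needing intervention."""
    p = base_rate
    fpr = 1 - specificity                                             # False Positive Rate
    fnr = 1 - sensitivity                                             # False Negative Rate
    ppp = sensitivity * p / (sensitivity * p + fpr * (1 - p))         # Positive Predictive Power
    npp = specificity * (1 - p) / (specificity * (1 - p) + fnr * p)   # Negative Predictive Power
    overall = sensitivity * p + specificity * (1 - p)                 # Overall Classification Rate
    return {"FPR": fpr, "FNR": fnr, "PPP": ppp, "NPP": npp, "Overall": overall}

# Worked check against the Grade 1 column above (sensitivity 0.91,
# specificity 0.87, base rate 0.20): PPP ~ 0.64, NPP ~ 0.98, overall ~ 0.88.
print(derived_classification_metrics(0.91, 0.87, 0.20))
```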

 

Reliability

Grade    1            2            3            4            5
Rating   Full bubble  Full bubble  Full bubble  Full bubble  Full bubble
  1. Justification for each type of reliability reported, given the type and purpose of the tool:

FAST aReading is an IRT-based computer-adaptive test (CAT); as such, a single model-based approach to reliability is presented.

 

  2. Description of the sample(s), including size and characteristics, for each reliability analysis conducted:

The reliability results presented below are based on the 2017-2018 norming sample.  

 

  3. Description of the analysis procedures for each reported type of reliability:

Given the adaptive nature of FAST™ aReading tests, a model-based reliability estimate, based on the standard error of measurement and the test information function of the instrument, was computed following Samejima (1994).
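A minimal sketch of what such a model-based estimate can look like is given below: the marginal reliability is one minus the ratio of the average error variance (squared conditional SEMs, each equal to one over the test information at an examinee's theta) to the variance of the ability estimates. The specific formula, the simulated values, and the function name are assumptions for illustration, not FastBridge's calibration code.

```python
import numpy as np

def marginal_reliability(theta_hats: np.ndarray, sems: np.ndarray) -> float:
    """Model-based (marginal) reliability: 1 - mean error variance / score
    variance, where each SEM is 1 / sqrt(test information at that theta)."""
    error_var = np.mean(np.square(sems))
    score_var = np.var(theta_hats, ddof=1)
    return 1.0 - error_var / score_var

# Illustrative values only: ability estimates with SD near 1 and conditional
# SEMs near 0.3 give a reliability of roughly 1 - 0.09 / 1.0 = 0.91, in the
# same range as the coefficients reported in the table below.
rng = np.random.default_rng(0)
theta_hats = rng.normal(0.0, 1.0, size=10_000)
sems = np.full(10_000, 0.3)
print(round(marginal_reliability(theta_hats, sems), 2))
```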

 

  4. Reliability of performance level score (e.g., model-based, internal consistency, inter-rater reliability):

Type of Reliability   Age or Grade   n        Coefficient   Confidence Interval
Model-based           K              10,000   0.96          0.95, 0.97
Model-based           1              10,000   0.95          0.94, 0.96
Model-based           2              10,000   0.93          0.92, 0.94
Model-based           3              10,000   0.91          0.90, 0.92
Model-based           4              10,000   0.91          0.90, 0.92
Model-based           5              10,000   0.91          0.90, 0.92

 

Disaggregated Reliability

The following disaggregated reliability data are provided for context and did not factor into the Reliability rating.

Type of Reliability   Subgroup   Age or Grade   n   Coefficient   Confidence Interval
None

Validity

Grade    1                   2            3            4            5
Rating   Half-filled bubble  Full bubble  Full bubble  Full bubble  Half-filled bubble

1. Description of each criterion measure used and explanation as to why each measure is appropriate, given the type and purpose of the tool:

The criterion measure for the first type of validity analysis (predictive validity) is the Gates-MacGinitie Reading Tests, 4th Edition (GMRT-4th). The GMRT-4th is a norm-referenced, group-administered measure of reading achievement distributed by Riverside Publishing Company. It is designed to provide guidance in planning instruction and intervention and is typically used as a diagnostic tool for general reading achievement, which makes it an appropriate criterion for FAST aReading. Like FAST aReading, the GMRT-4th was normed with students from the pre-reading stages through high school levels. The GMRT-4th was also selected because of its strong criterion validity. Correlations between GMRT composite scores and the comprehension and vocabulary subtests of the Iowa Test of Basic Skills are high across grades (.76 and .78, respectively; Morsy, Kieffer, & Snow, 2010). A similar pattern of results was observed between the GMRT and subscales of the California Tests of Basic Skills (.84 and .81, respectively; Morsy et al., 2010). GMRT scores also correlate highly with Comprehensive Tests of Basic Skills vocabulary, comprehension, and composite scores (.72, .79, and .83, respectively; Morsy et al., 2010). Further, the correlation between GMRT composite scores and reading scores on the Basic Academic Skills Samples was strong as well (.79; Jenkins & Jewell, 1992).

 

The criterion measure for the second type of validity analysis (construct validity) is the Measures of Academic Progress (MAP). MAP is a diagnostic, computer-adaptive assessment designed to measure reading ability and progress, which makes it an appropriate criterion for FAST aReading when considering construct validity. In addition, MAP is a psychometrically sound assessment.

 

2. Description of the sample(s), including size and characteristics, for each validity analysis conducted:

Validity analyses were conducted on a sample of 1,382 students in grades 1-5 from two Minnesota school districts. Students were 70% White, 5% Black, 8% Hispanic, 15% Asian, and 2% other ethnicities. Approximately 16% of students were eligible for free or reduced-price lunch, 14% were English language learners, and 10% were receiving special education services.

 

3. Description of the analysis procedures for each reported type of validity:

Validity coefficients were calculated by computing Pearson product-moment correlations between FAST aReading and each criterion measure. Reported confidence intervals are 95% confidence intervals.
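The computation described above can be sketched as follows. The Fisher z transformation is a standard way to attach a 95% confidence interval to a Pearson correlation; the report does not state which interval method was used, so treat this as an assumed, generic implementation rather than the authors' analysis code.

```python
import math
import numpy as np

def pearson_with_ci(x, y, z_crit: float = 1.96):
    """Pearson product-moment correlation with a Fisher z-based 95% CI."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    r = float(np.corrcoef(x, y)[0, 1])
    z = math.atanh(r)                     # Fisher z transform of r
    se = 1.0 / math.sqrt(n - 3)           # standard error of z
    lower, upper = math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)
    return r, (lower, upper)
```

For example, r = 0.83 with n = 125 yields an interval of roughly (0.77, 0.88) under this method, close to the 0.76, 0.88 reported for the grade 1 predictive row in the table below.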

 

4. Validity for the performance level score (e.g., concurrent, predictive, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures:

Type of Validity   Age or Grade   Test or Criterion   n     Coefficient   Confidence Interval
Predictive         1              GMRT-4th            125   0.83          0.76, 0.88
Predictive         2              GMRT-4th            215   0.75          0.68, 0.80
Predictive         3              GMRT-4th            165   0.84          0.79, 0.88
Predictive         4              GMRT-4th            175   0.78          0.71, 0.83
Predictive         5              GMRT-4th            181   0.58          0.47, 0.67
Construct          1              MAP                 55    0.69          0.52, 0.81
Construct          2              MAP                 302   0.83          0.79, 0.86
Construct          3              MAP                 391   0.83          0.80, 0.86
Construct          4              MAP                 398   0.77          0.73, 0.81
Construct          5              MAP                 376   0.73          0.68, 0.77

 

5. Results for other forms of validity (e.g., factor analysis) not conducive to the table format:

None provided

 

6. Describe the degree to which the provided data support the validity of the tool:

The validity coefficients provide moderate to strong evidence for the use of FAST aReading as a measure of broad reading ability.

 

 

Disaggregated Validity

The following disaggregated validity data are provided for context and did not factor into the Validity rating.

Type of Validity   Subgroup   Age or Grade   Test or Criterion   n   Coefficient   Confidence Interval
None

Results for other forms of disaggregated validity (e.g., factor analysis) not conducive to the table format:

None provided

Sample Representativeness

Grade    1             2             3             4             5
Rating   Empty bubble  Empty bubble  Empty bubble  Empty bubble  Empty bubble

Size: 2,333

Male:                            Not reported
Female:                          Not reported
Unknown:                         100%
SES (FRPL):                      19%
White:                           70%
Black or African American:       6%
Hispanic:                        7%
American Indian/Alaska Native:   1%
Asian/Pacific Islander:          16%
Other:                           Unknown
Unknown:                         Unknown
Disability classification:       11%
First language:                  Unknown
ELL:                             14%

 

Bias Analysis Conducted

Grade    1    2    3    4    5
Rating   No   No   No   No   No
  1. Description of the method used to determine the presence or absence of bias:

None provided

 

  2. Description of the subgroups for which bias analyses were conducted:

None provided

 

  3. Description of the results of the bias analyses conducted, including data and interpretative statements:

None provided

 

Administration Format

Grade    1           2           3           4           5
Data     Individual  Individual  Individual  Individual  Individual

Administration & Scoring Time

Grade    1              2              3              4              5
Data     15-30 minutes  15-30 minutes  15-30 minutes  15-30 minutes  15-30 minutes

Scoring Format

Grade    1          2          3          4          5
Data     Automatic  Automatic  Automatic  Automatic  Automatic

Types of Decision Rules

Grade    1     2     3     4     5
Data     None  None  None  None  None

Evidence Available for Multiple Decision Rules

Grade    1    2    3    4    5
Data     No   No   No   No   No