FAST™
Social, Academic, & Emotional Behavior Risk Screener (SAEBRS)

Summary

The FAST™ SAEBRS is a brief, efficient tool for universally screening students individually, or by class, grade, or school, for risk of social-emotional and behavioral problems. The FAST™ SAEBRS evaluates general student behavior, as well as behavior within the social, academic, and emotional domains. It is a brief behavior rating scale comprising 19 items; to screen, a teacher completes the scale once for each student in his/her classroom. The 19 items span three domains, defined as follows. Social Behavior (6 items): behaviors that promote (e.g., social skills) or limit (e.g., externalizing problems) one's ability to maintain age-appropriate relationships with peers and adults. Academic Behavior (6 items): behaviors that promote (e.g., academic enablers) or limit (e.g., attentional problems) one's ability to be prepared for, participate in, and benefit from academic instruction. Emotional Behavior (7 items): actions that promote (e.g., social-emotional competencies) or limit (e.g., internalizing problems) one's ability to regulate internal states, adapt to change, and respond to stressful or challenging events.

Where to Obtain:
Stephen Kilgus, Ph.D. & Nathaniel von der Embse, Ph.D., Publisher: FastBridge Learning, LLC
info@fastbridge.org
520 Nicollet Mall, Suite #910, Minneapolis, MN 55402
612.254.2534
www.fastbridge.org
Initial Cost:
$6.00 per student
Replacement Cost:
Contact vendor for pricing details.
Included in Cost:
FAST™ assessments are accessed through an annual subscription offered by FastBridge Learning, priced on a “per student assessed” model. The subscription rate for the 2017–18 school year is $6.00 per student, with no additional fixed costs. FAST subscriptions are all-inclusive, providing access to: all FAST reading and math assessments for universal screening, progress monitoring, and diagnostic purposes, including Computer Adaptive Testing and Curriculum-Based Measurement; Behavior and Developmental Milestones assessment tools; the FAST data management and reporting system; embedded online system training for staff; and basic implementation and user support. In addition to the online training modules embedded within the FAST application, FastBridge Learning offers onsite training options. One-, two-, and three-day packages are available; packages are determined by implementation size and by which FAST assessments (e.g., reading, math, and/or behavior) a district intends to use: 1-day package, $3,000.00; 2-day package, $5,750.00; 3-day package, $8,500.00. Any onsite training purchase also includes a complimentary two-hour online Admin/Manager training session for users who will be designated as District Managers and/or School Managers in FAST. FastBridge also offers web-based consultation and training delivered by certified FAST trainers at a rate of $175.00/hour. The FAST™ application is a fully cloud-based system, so computer and Internet access are required for full use of the application. Teachers require less than one hour of training on administration of the assessment, and a paraprofessional can administer the assessment as a Group Proctor in the FAST application. As part of item development, all items were reviewed for bias and fairness.
Training Requirements:
Less than 1 hour of training
Qualified Administrators:
Must have been able to observe and interact with the student over the past month.
Access to Technical Support:
Users have access to professional development technicians, as well as ongoing technical support.
Assessment Format:
  • Rating scale
Scoring Time:
  • Scoring is automatic
Scores Generated:
  • Raw score
  • Percentile score
  • Composite scores
  • Subscale/subtest scores
Administration Time:
  • 2 minutes per student
Scoring Method:
  • Manually (by hand)
  • Automatically (computer-scored)
Technology Requirements:
  • Computer or tablet
  • Internet connection
Accommodations:

Descriptive Information

Please provide a description of your tool:
The FAST™ SAEBRS is a brief, efficient tool for universally screening students individually, or by class, grade, or school, for risk of social-emotional and behavioral problems. The FAST™ SAEBRS evaluates general student behavior, as well as behavior within the social, academic, and emotional domains. It is a brief behavior rating scale comprising 19 items; to screen, a teacher completes the scale once for each student in his/her classroom. The 19 items span three domains, defined as follows. Social Behavior (6 items): behaviors that promote (e.g., social skills) or limit (e.g., externalizing problems) one's ability to maintain age-appropriate relationships with peers and adults. Academic Behavior (6 items): behaviors that promote (e.g., academic enablers) or limit (e.g., attentional problems) one's ability to be prepared for, participate in, and benefit from academic instruction. Emotional Behavior (7 items): actions that promote (e.g., social-emotional competencies) or limit (e.g., internalizing problems) one's ability to regulate internal states, adapt to change, and respond to stressful or challenging events.
The tool is intended for use with the following grade(s).
not selected Preschool / Pre-kindergarten
selected Kindergarten
selected First grade
selected Second grade
selected Third grade
selected Fourth grade
selected Fifth grade
selected Sixth grade
selected Seventh grade
selected Eighth grade
selected Ninth grade
selected Tenth grade
selected Eleventh grade
selected Twelfth grade

The tool is intended for use with the following age(s).
not selected 0-4 years old
selected 5 years old
selected 6 years old
selected 7 years old
selected 8 years old
selected 9 years old
selected 10 years old
selected 11 years old
selected 12 years old
selected 13 years old
selected 14 years old
selected 15 years old
selected 16 years old
selected 17 years old
selected 18 years old

The tool is intended for use with the following student populations.
not selected Students in general education
not selected Students with disabilities
not selected English language learners

ACADEMIC ONLY: What skills does the tool screen?

Reading
Phonological processing:
not selected RAN
not selected Memory
not selected Awareness
not selected Letter sound correspondence
not selected Phonics
not selected Structural analysis

Word ID
not selected Accuracy
not selected Speed

Nonword
not selected Accuracy
not selected Speed

Spelling
not selected Accuracy
not selected Speed

Passage
not selected Accuracy
not selected Speed

Reading comprehension:
not selected Multiple choice questions
not selected Cloze
not selected Constructed Response
not selected Retell
not selected Maze
not selected Sentence verification
not selected Other (please describe):


Listening comprehension:
not selected Multiple choice questions
not selected Cloze
not selected Constructed Response
not selected Retell
not selected Maze
not selected Sentence verification

Vocabulary
not selected Expressive
not selected Receptive

Mathematics
Global Indicator of Math Competence
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Early Numeracy
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Concepts
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Computation
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Mathematics Application
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Fractions/Decimals
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Algebra
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

Geometry
not selected Accuracy
not selected Speed
not selected Multiple Choice
not selected Constructed Response

not selected Other (please describe):

Please describe specific domain, skills or subtests:
BEHAVIOR ONLY: Which category of behaviors does your tool target?


BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each sub-domain or sub-construct.
Broad Domain = Total Behavior
Sub-Domains = Social Behavior, Academic Behavior, and Emotional Behavior

Acquisition and Cost Information

Where to obtain:
Email Address
info@fastbridge.org
Address
520 Nicollet Mall, Suite #910, Minneapolis, MN 55402
Phone Number
612.254.2534
Website
www.fastbridge.org
Initial cost for implementing program:
Cost
$6.00
Unit of cost
student
Replacement cost per unit for subsequent use:
Cost
Unit of cost
Duration of license
year
Additional cost information:
Describe basic pricing plan and structure of the tool. Provide information on what is included in the published tool, as well as what is not included but required for implementation.
FAST™ assessments are accessed through an annual subscription offered by FastBridge Learning, priced on a “per student assessed” model. The subscription rate for the 2017–18 school year is $6.00 per student, with no additional fixed costs. FAST subscriptions are all-inclusive, providing access to: all FAST reading and math assessments for universal screening, progress monitoring, and diagnostic purposes, including Computer Adaptive Testing and Curriculum-Based Measurement; Behavior and Developmental Milestones assessment tools; the FAST data management and reporting system; embedded online system training for staff; and basic implementation and user support. In addition to the online training modules embedded within the FAST application, FastBridge Learning offers onsite training options. One-, two-, and three-day packages are available; packages are determined by implementation size and by which FAST assessments (e.g., reading, math, and/or behavior) a district intends to use: 1-day package, $3,000.00; 2-day package, $5,750.00; 3-day package, $8,500.00. Any onsite training purchase also includes a complimentary two-hour online Admin/Manager training session for users who will be designated as District Managers and/or School Managers in FAST. FastBridge also offers web-based consultation and training delivered by certified FAST trainers at a rate of $175.00/hour. The FAST™ application is a fully cloud-based system, so computer and Internet access are required for full use of the application. Teachers require less than one hour of training on administration of the assessment, and a paraprofessional can administer the assessment as a Group Proctor in the FAST application. As part of item development, all items were reviewed for bias and fairness.
Provide information about special accommodations for students with disabilities.

Administration

BEHAVIOR ONLY: What type of administrator is your tool designed for?
selected General education teacher
selected Special education teacher
not selected Parent
not selected Child
not selected External observer
selected Other
If other, please specify:
Paraprofessional

What is the administration setting?
not selected Direct observation
selected Rating scale
not selected Checklist
not selected Performance measure
not selected Questionnaire
not selected Direct: Computerized
not selected One-to-one
not selected Other
If other, please specify:

Does the tool require technology?
Yes

If yes, what technology is required to implement your tool? (Select all that apply)
selected Computer or tablet
selected Internet connection
not selected Other technology (please specify)

If your program requires additional technology not listed above, please describe the required technology and the extent to which it is combined with teacher small-group instruction/intervention:

What is the administration context?
selected Individual
not selected Small group   If small group, n=
not selected Large group   If large group, n=
selected Computer-administered
not selected Other
If other, please specify:

What is the administration time?
Time in minutes
2
per (student/group/other unit)
student

Additional scoring time:
Time in minutes
per (student/group/other unit)

ACADEMIC ONLY: What are the discontinue rules?
not selected No discontinue rules provided
not selected Basals
not selected Ceilings
not selected Other
If other, please specify:


Are norms available?
Yes
Are benchmarks available?
Yes
If yes, how many benchmarks per year?
1 set, which is applicable to three time points
If yes, for which months are benchmarks available?
September, December, and May
BEHAVIOR ONLY: Can students be rated concurrently by one administrator?
No
If yes, how many students can be rated concurrently?

Training & Scoring

Training

Is training for the administrator required?
Yes
Describe the time required for administrator training, if applicable:
Less than 1 hour of training
Please describe the minimum qualifications an administrator must possess.
Must have been able to observe and interact with the student over the past month.
not selected No minimum qualifications
Are training manuals and materials available?
Yes
Are training manuals/materials field-tested?
Yes
Are training manuals/materials included in cost of tools?
Yes
If No, please describe training costs:
Can users obtain ongoing professional and technical support?
Yes
If Yes, please describe how users can obtain support:
Users have access to professional development technicians, as well as ongoing technical support.

Scoring

How are scores calculated?
selected Manually (by hand)
selected Automatically (computer-scored)
not selected Other
If other, please specify:

Do you provide a basis for calculating performance level scores?
Yes
What is the basis for calculating performance level and percentile scores?
not selected Age norms
selected Grade norms
not selected Classwide norms
not selected Schoolwide norms
not selected Stanines
not selected Normal curve equivalents

What types of performance level scores are available?
selected Raw score
not selected Standard score
selected Percentile score
not selected Grade equivalents
not selected IRT-based score
not selected Age equivalents
not selected Stanines
not selected Normal curve equivalents
not selected Developmental benchmarks
not selected Developmental cut points
not selected Equated
not selected Probability
not selected Lexile score
not selected Error analysis
selected Composite scores
selected Subscale/subtest scores
not selected Other
If other, please specify:

Does your tool include decision rules?
No
If yes, please describe.
Can you provide evidence in support of multiple decision rules?
No
If yes, please describe.
Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
To score the FAST™ SAEBRS, negatively worded items are first reverse-scored. Item scores are then summed within each subscale and across the overall scale.
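The scoring arithmetic can be sketched in a few lines of code. The following is a minimal illustration only: the 0–3 response scale, the item-to-subscale mapping, and the set of negatively worded items are hypothetical placeholders, not FastBridge's published item specification.

```python
# Hypothetical SAEBRS-style scoring sketch. The response scale, subscale
# mapping, and negatively worded item set are illustrative assumptions,
# not the published FastBridge specification.
MAX_RATING = 3  # assumed 4-point Likert scale scored 0-3

SUBSCALES = {
    "Social Behavior": range(0, 6),       # 6 items
    "Academic Behavior": range(6, 12),    # 6 items
    "Emotional Behavior": range(12, 19),  # 7 items
}
NEGATIVE_ITEMS = {0, 2, 4, 7, 9, 13, 15}  # hypothetical negatively worded items

def score_saebrs(responses):
    """Reverse-score negatively worded items, then sum each subscale and the total."""
    if len(responses) != 19:
        raise ValueError("expected 19 item responses")
    adjusted = [MAX_RATING - r if i in NEGATIVE_ITEMS else r
                for i, r in enumerate(responses)]
    scores = {name: sum(adjusted[i] for i in items)
              for name, items in SUBSCALES.items()}
    scores["Total Behavior"] = sum(adjusted)  # 0-57 under the assumed 0-3 scale
    return scores

print(score_saebrs([1] * 19))
```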
Describe the tool’s approach to screening, samples (if applicable), and/or test format, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
The FAST™ SAEBRS was originally subjected to expert content validation, within which experts considered the extent to which items corresponded to the various constructs, were relevant to those constructs, and were fair, appropriate, and unbiased indicators of each construct. The FAST™ SAEBRS has since been refined and validated with large, heterogeneous samples that are representative of the broader U.S. population.

Usability

Has a usability study been conducted on your tool (i.e., a study that examines the extent to which the tool is convenient and practicable for use)?
No
Has a social validity study been conducted on your tool (i.e., a study that examines the significance of goals, appropriateness of procedures (e.g., ethics, cost, practicality), and the importance of treatment effects)?

Technical Standards

Classification Accuracy & Cross-Validation Summary

Age / Grade: Grades K-5
Informant: Teacher
Classification Accuracy Fall: Convincing evidence
Classification Accuracy Winter: Convincing evidence
Classification Accuracy Spring: Convincing evidence
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available

BASC-2 BESS

Classification Accuracy

Select time of year
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
The BESS was factor analytically derived from the broader BASC-2 measure, which is considered a gold standard of behavioral assessment. Research suggests that BESS scores are valid and diagnostically accurate indicators of the BASC-2, as well as other alternative measures. Within this study, teachers completed the FAST™ SAEBRS and the BESS independent of one another. Furthermore, the order in which the measures were completed was counterbalanced across students.
Do the classification accuracy analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
Classification analyses were performed using receiver operating characteristic (ROC) curve analysis. More specifically, ROC curve analyses conformed to an exploratory approach, wherein conditional probability statistics were calculated for each possible cut score within the FAST SAEBRS Total Behavior scale. A two-step approach was applied to the selection of cut scores within each FAST SAEBRS scale. First, cut scores associated with specificity (SP) greater than .90 were identified. Second, of these identified cut scores, we selected those with the smallest difference between sensitivity (SE) and SP, where SP still exceeded SE. Justification for prioritizing cut scores with high SP was founded in the high-stakes nature of the high-risk screening decision. To identify a student as being at “high risk” for behavioral concerns is to apply a potentially stigmatizing label with significant intervention implications. For instance, schools might choose to forgo Tier 2 intervention for high-risk students, moving directly to more restrictive Tier 3 strategies. Selecting cut scores with SP of at least .90 represented an attempt to limit the proportion of false positive (1 − SP) decisions. The outcome corresponded to the overall Behavioral and Emotional Risk index, dichotomously scored as 0 = Normal and 1 = Extremely Elevated Risk. Note that students falling within the “Elevated” risk range on the BESS were excluded from the present analyses.
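The two-step cut-score selection lends itself to a short sketch. The code below uses simulated data rather than study data, and it assumes (for simplicity) that higher screener scores indicate greater risk; the threshold it prints is therefore illustrative only.

```python
# Sketch of the exploratory ROC approach and two-step cut-score rule described
# above, on simulated data. `y_true` stands in for the dichotomized BESS risk
# index; `scores` stand in for screener scores (higher = greater risk assumed).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.10, size=875)           # ~10% base rate, as in fall
scores = rng.normal(loc=10 + 8 * y_true, scale=4)  # simulated screener scores

fpr, tpr, thresholds = roc_curve(y_true, scores)
sens, spec = tpr, 1 - fpr

# Step 1: retain cut scores with specificity (SP) above .90.
candidates = [i for i in range(len(thresholds)) if spec[i] > 0.90]
# Step 2: among those, minimize SP - SE while requiring SP to exceed SE.
best = min((i for i in candidates if spec[i] > sens[i]),
           key=lambda i: spec[i] - sens[i])
print(f"cut = {thresholds[best]:.1f}, SE = {sens[best]:.2f}, SP = {spec[best]:.2f}")
```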
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
Yes
If yes, please describe the intervention, what children received the intervention, and how they were chosen.
It is assumed students received some form of intervention during this time. However, intervention implementation was not documented. Rather, all intervention was a part of normal educational practice and was thus not carried out as part of this investigation.

Cross-Validation

Has a cross-validation study been conducted?
Yes
If yes,
Select time of year.
Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
The BASC-2 BESS was used as the criterion measure. The BESS was factor analytically derived from the broader BASC-2 measure, which is considered a gold standard of behavioral assessment. Research suggests that BESS scores are valid and diagnostically accurate indicators of the BASC-2, as well as other alternative measures. Within this study, teachers completed the FAST™ SAEBRS and the BESS independent of one another. Furthermore, the order in which the measures were completed was counterbalanced across students.
Do the cross-validation analyses examine concurrent and/or predictive classification?

Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
The cut points evaluated as part of this analysis were those identified through the initial classification accuracy study mentioned above.
Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
Yes
If yes, please describe the intervention, what children received the intervention, and how they were chosen.
It is assumed students received some form of intervention during this time. However, intervention implementation was not documented. Rather, all intervention was a part of normal educational practice and was thus not carried out as part of this investigation.

Classification Accuracy - Fall

Informant: Teacher

Evidence Grades K-5
Criterion measure BASC-2 BESS
Cut Points - Percentile rank on criterion measure
Cut Points - Performance score on criterion measure 1
Cut Points - Corresponding performance score (numeric) on screener measure 28
Classification Data - True Positive (a) 73
Classification Data - False Positive (b) 51
Classification Data - False Negative (c) 15
Classification Data - True Negative (d) 736
Area Under the Curve (AUC) 0.96
Statistics Grades K-5
Base Rate 0.10
Overall Classification Rate 0.92
Sensitivity 0.83
Specificity 0.94
False Positive Rate 0.06
False Negative Rate 0.17
Positive Predictive Power 0.59
Negative Predictive Power 0.98
Sample Grades K-5
Date Collected during the 2015-2016 school year
Sample Size 875
Geographic Representation  
Male  
Female  
Other  
Gender Unknown  
White, Non-Hispanic  
Black, Non-Hispanic  
Hispanic  
Asian/Pacific Islander  
American Indian/Alaska Native  
Other  
Race / Ethnicity Unknown  
Low SES  
IEP or diagnosed disability  
English Language Learner  
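Except for the AUC, which is estimated from the full ROC curve, every statistic in the table above follows directly from the four classification counts. A quick consistency check in code:

```python
# Recompute the fall statistics from the reported 2x2 classification data.
def classification_stats(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    return {
        "Base Rate": (tp + fn) / n,
        "Overall Classification Rate": (tp + tn) / n,
        "Sensitivity": tp / (tp + fn),
        "Specificity": tn / (tn + fp),
        "False Positive Rate": fp / (fp + tn),
        "False Negative Rate": fn / (fn + tp),
        "Positive Predictive Power": tp / (tp + fp),
        "Negative Predictive Power": tn / (tn + fn),
    }

for name, value in classification_stats(tp=73, fp=51, fn=15, tn=736).items():
    print(f"{name}: {value:.2f}")
# Reproduces the reported values, e.g., Sensitivity 0.83 and Specificity 0.94.
```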

Classification Accuracy - Winter

Informant: Teacher

Evidence Grades K-5
Criterion measure BASC-2 BESS
Cut Points - Percentile rank on criterion measure
Cut Points - Performance score on criterion measure 1
Cut Points - Corresponding performance score (numeric) on screener measure 28.00
Classification Data - True Positive (a) 112
Classification Data - False Positive (b) 65
Classification Data - False Negative (c) 37
Classification Data - True Negative (d) 661
Area Under the Curve (AUC) 0.93
Statistics Grades K-5
Base Rate 0.17
Overall Classification Rate 0.88
Sensitivity 0.75
Specificity 0.91
False Positive Rate 0.09
False Negative Rate 0.25
Positive Predictive Power 0.63
Negative Predictive Power 0.95
Sample Grades K-5
Date 2015-16 school year
Sample Size 875
Geographic Representation  
Male  
Female  
Other  
Gender Unknown  
White, Non-Hispanic  
Black, Non-Hispanic  
Hispanic  
Asian/Pacific Islander  
American Indian/Alaska Native  
Other  
Race / Ethnicity Unknown  
Low SES  
IEP or diagnosed disability  
English Language Learner  

Classification Accuracy - Spring

Informant: Teacher

Evidence Grades K-5
Criterion measure BASC-2 BESS
Cut Points - Percentile rank on criterion measure
Cut Points - Performance score on criterion measure 1
Cut Points - Corresponding performance score (numeric) on screener measure 28
Classification Data - True Positive (a) 139
Classification Data - False Positive (b) 7
Classification Data - False Negative (c) 7
Classification Data - True Negative (d) 722
Area Under the Curve (AUC) 0.99
Statistics Grades K-5
Base Rate 0.17
Overall Classification Rate 0.98
Sensitivity 0.95
Specificity 0.99
False Positive Rate 0.01
False Negative Rate 0.05
Positive Predictive Power 0.95
Negative Predictive Power 0.99
Sample Grades K-5
Date Collected during the 2015-2016 school year
Sample Size 875
Geographic Representation  
Male  
Female  
Other  
Gender Unknown  
White, Non-Hispanic  
Black, Non-Hispanic  
Hispanic  
Asian/Pacific Islander  
American Indian/Alaska Native  
Other  
Race / Ethnicity Unknown  
Low SES  
IEP or diagnosed disability  
English Language Learner  

Cross-Validation - Fall

Informant: Teacher

Evidence Grades K-5 Grades K-5
Criterion measure BASC-2 BESS BASC-2 BESS
Cut Points - Percentile rank on criterion measure
Cut Points - Performance score on criterion measure 1 1
Cut Points - Corresponding performance score (numeric) on screener measure 28.00 28.00
Classification Data - True Positive (a) 22 32
Classification Data - False Positive (b) 24 35
Classification Data - False Negative (c) 5 1
Classification Data - True Negative (d) 516 644
Area Under the Curve (AUC) 0.97 0.99
Statistics Grades K-5 Grades K-5
Base Rate 0.05 0.05
Overall Classification Rate 0.95 0.95
Sensitivity 0.81 0.97
Specificity 0.96 0.95
False Positive Rate 0.04 0.05
False Negative Rate 0.19 0.03
Positive Predictive Power 0.48 0.48
Negative Predictive Power 0.99 1.00
Sample Grades K-5 Grades K-5
Date Winter 2014 Fall 2014
Sample Size 567 712
Geographic Representation    
Male    
Female    
Other    
Gender Unknown    
White, Non-Hispanic    
Black, Non-Hispanic    
Hispanic    
Asian/Pacific Islander    
American Indian/Alaska Native    
Other    
Race / Ethnicity Unknown    
Low SES    
IEP or diagnosed disability    
English Language Learner    

Reliability

Age / Grade: Grades K-5
Informant: Teacher
Rating: Convincing evidence
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
*Offer a justification for each type of reliability reported, given the type and purpose of the tool.
Internal reliability: A series of omega coefficients was used as part of a model-based approach to the evaluation of FAST™ SAEBRS Total Behavior scale internal reliability. Internal reliability is considered relevant given the presumption that all FAST™ SAEBRS items are related to the broader construct of general behavioral functioning. A model-based coefficient like omega is considered particularly relevant given the presumption that the FAST™ SAEBRS is founded upon a bifactor model, wherein all items are related to both the general behavior factor and one of three narrow factors.

Test-retest reliability: A series of correlation analyses was used to evaluate the association between FAST™ SAEBRS data collected across three time points. Interest in test-retest reliability was founded in the assumption that the majority of students within a school should maintain their social-emotional and behavioral risk status across the school year (Dever, Dowdy, Raines, & Carnazzo, 2015). Accordingly, some degree of consistency in scores was anticipated, although it was expected that such consistency would be tempered by the inherent variability of behavior and by the delivery of intervention and supports to a subsample of students in the school.

Generalizability coefficients: Generalizability theory (GT) analyses were used to evaluate the influence of multiple measurement facets on FAST™ SAEBRS Total Behavior reliability. Via “D studies” completed as part of the GT analyses, it was possible to evaluate the circumstances under which FAST™ SAEBRS Total Behavior scores achieved acceptable reliability. Measurement facets of interest included students nested within rater (s:r) and occasion (o).
*Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
Internal reliability was evaluated as part of Kilgus, Bonifay, von der Embse, Allen, and Eklund (in press). The study was conducted in four urban elementary schools (K-5) located in the Midwestern United States. All general education teachers in each school chose to participate, and each teacher screened all students in his or her classroom, resulting in a sample of 68 teacher participants and 1,243 students. The sample was ethnically diverse, including sizeable subsamples of White (54.5%), Black (28.6%), Hispanic (5.3%), and Multiracial (8.4%) students. The free/reduced-price lunch rate across the four schools was 65.1%.

Test-retest reliability was evaluated as part of Kilgus, Kilpatrick, von der Embse, and Eklund (in preparation). The sample considered in this study was the same as that described immediately above for Kilgus et al. (in press).

Generalizability coefficients were evaluated as part of Tanner, Eklund, Kilgus, and Johnson (in press). Primary participants included three teacher pairs (n = 6) from a suburban middle school in the southwestern United States. The school was selected due to its use of a co-teaching model, in which a special education teacher and a general education teacher share teaching responsibilities for a single classroom. This school contained one co-taught language arts classroom at each grade level (i.e., sixth, seventh, and eighth grades). All teachers were female and had between 1.5 and 14 years of teaching experience (M = 8.42, SD = 4.86). Participants also included 82 students (47.6% female) in grades six, seven, and eight. The mean size of each classroom was 26.86 students (SD = 0.94), ranging from 26 to 28 students. The mean student age was 12.4 years (SD = 1.05), ranging from 11 to 14 years. Approximately 64% of students were White/Caucasian, 20% Hispanic/Latino, 5% Black/African American, 3% Asian, and 3% American Indian/Native American. Finally, 31% of students were eligible for free or reduced-cost lunch.
*Describe the analysis procedures for each reported type of reliability.
Internal reliability: Omega (ω) coefficients represent the proportion of variance attributable to all factors common to an item set of interest. Hierarchical omega (ωH) coefficients represent the proportion of variance attributable to a particular factor after controlling for all other factors. These latter statistics are particularly informative when examining measures corresponding to bifactor structures (such as the FAST™ SAEBRS), as items are presumed to be multidimensional and driven by both general and specific factors.

Test-retest reliability: A series of Pearson product-moment correlation (r) coefficients was used to evaluate the association between fall, winter, and spring FAST™ SAEBRS data. Three sets of correlations were computed: fall with winter, fall with spring, and winter with spring.

Generalizability coefficients: G studies were conducted to determine the proportion of variance in screening scores attributable to student differences, teachers, the passage of time, and the interactions between these factors. Variance components were calculated using the ANOVA procedure (i.e., repeated-measures factorial ANOVA with Type III sums of squares) in SPSS. Following the second set of G studies, the calculated sums of squares were imported into the EduG software program to serve as the basis of D studies (EduG, 2013). D studies were used to determine how altering the number of raters and occasions affected the generalizability of universal screening scores; each D study permitted examination of the impact of increasing or decreasing the number of occasions and raters on the reliability of the resulting data. Within GT analyses, reliability can be evaluated via two statistics: (a) generalizability coefficients (Ep̂2), which speak to the reliability of relative or inter-individual decisions (e.g., screening), and (b) dependability coefficients (Φ), which speak to the reliability of absolute or intra-individual decisions (e.g., progress monitoring). Given this study's focus on universal screening, reported findings were limited to generalizability coefficients. D studies were conducted over a series of steps. First, the number of screening occasions, raters, and methods was reduced to one, as this represents use of a single screening measure during a single administration. Of interest was the effect of increasing (1) the number of occasions while holding the number of raters at one, and (2) the number of raters while holding the number of occasions at one, for each method.
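As a concrete illustration of the test-retest computation, the sketch below simulates three waves of scores that share a stable student-level component and correlates each pair of waves. The sample size matches the study described above, but the data and noise level are invented for the example.

```python
# Simulated test-retest correlations across fall, winter, and spring waves.
from itertools import combinations

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 1243                    # sample size from the Kilgus et al. study above
trait = rng.normal(size=n)  # stable risk component per student
waves = {
    season: trait + rng.normal(scale=0.6, size=n)  # occasion-specific noise
    for season in ("fall", "winter", "spring")
}

for a, b in combinations(waves, 2):  # fall-winter, fall-spring, winter-spring
    r, p = pearsonr(waves[a], waves[b])
    print(f"r({a}, {b}) = {r:.2f} (p = {p:.3g})")
```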

*In the table(s) below, report the results of the reliability analyses described above (e.g., internal consistency or inter-rater reliability coefficients).

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Reliability Type | Informant | Age | n | Coefficient
Generalizability* | Teacher | Middle | 82 | 0.72–0.73
*Generalizability model used 1 rater and 1 rating occasion
Manual cites other published reliability studies:
Yes
Provide citations for additional published studies.
Eklund, K., Kilgus, S. P., von der Embse, N., Beardmore, M., & Tanner, N. (2017). Use of universal screening scores to predict distal academic and behavioral outcomes: A multi-level approach. Psychological Assessment, 29, 486-499.
Kilgus, S. P., Bonifay, W., von der Embse, N. P., Allen, A. N., & Eklund, K. (in press). Evidence for the interpretation of Social, Academic, and Emotional Behavior Risk Screener (FAST SAEBRS) scores: An argument-based approach to screener validation. Journal of School Psychology.
Kilgus, S. P., Eklund, K., von der Embse, N. P., Taylor, C., & Sims, W. A. (2016). Psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener (FAST SAEBRS) teacher rating scale and multiple gating procedure within elementary and middle school samples. Journal of School Psychology, 58, 21-39.
Tanner, N., Eklund, K., Kilgus, S. P., & Johnson, A. H. (in press). Generalizability of universal screening measures for behavioral and emotional risk. School Psychology Review.
Kilgus, S. P., Sims, W. A., von der Embse, N. P., & Taylor, C. N. (2016). Psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener (FAST SAEBRS) within an elementary sample. Assessment for Effective Intervention, 42, 46-59.
Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
No

If yes, fill in data for each subgroup with disaggregated reliability data.

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of reliability analysis not compatible with above table format:
Manual cites other published reliability studies:
No
Provide citations for additional published studies.

Validity

Age / Grade: Grades K-5
Informant: Teacher
Rating: Convincing evidence
Legend
Full Bubble: Convincing evidence
Half Bubble: Partially convincing evidence
Empty Bubble: Unconvincing evidence
Null Bubble: Data unavailable
d: Disaggregated data available
*Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
SSIS: The Social Skills Improvement System (SSIS; Gresham & Elliott, 2008) is a comprehensive teacher rating scale (83 items) used to assess the broad areas of student Social Skills, Problem Behaviors, and Academic Competence. The FAST™ SAEBRS was compared to the SSIS as part of its initial validation study, during which the FAST™ SAEBRS included only the Social Behavior (6 items) and Academic Behavior (6 items) scales. The SSIS was considered a particularly important criterion given the scale's assessment of social and academic functioning, as well as its pertinence to problem behaviors, which are also assessed within each FAST™ SAEBRS subscale.

BASC-2 BESS: The BASC-2 BESS was factor analytically derived from the broader BASC-2 measure, which is considered a gold standard of behavioral assessment. Research suggests that BASC-2 BESS scores are valid and diagnostically accurate indicators of the BASC-2, as well as other alternative measures. Like the FAST™ SAEBRS Total Behavior scale, the sole BASC-2 BESS scale score represents broad and general functioning within the social-emotional and behavioral domain.

SRSS: The Student Risk Screening Scale (SRSS; Drummond, 1994) is a brief, 7-item teacher rating scale, which research suggests is an indicator of student externalizing behavior. The SRSS is completed in a manner similar to the FAST™ SAEBRS, with a classroom teacher completing the scale once for each student in his or her classroom. The SRSS is considered relevant to the FAST™ SAEBRS given the latter scale's incorporation of items either directly or indirectly related to externalizing behavior, particularly as it relates to social functioning.

SIBS: The Student Internalizing Behavior Screener (SIBS; Cook et al., 2011) is a brief, 7-item teacher rating scale used to assess student internalizing behavior. The SIBS is completed in a manner nearly identical to the SRSS. The SIBS is considered relevant to the FAST™ SAEBRS given the latter scale's incorporation of items either directly or indirectly related to internalizing behavior, particularly as it relates to emotional functioning.
*Describe the sample(s), including size and characteristics, for each validity analysis conducted.
SSIS: The FAST™ SAEBRS was compared to the SSIS within Kilgus, Chafouleas, and Riley-Tillman (2013). This study was conducted across three public schools within a single school district in the southeastern United States. During the 2010-2011 school year, 4% of the district's students identified as English language learners, and 40% qualified for free or reduced-price lunch. Across the three schools, 56 K-5 teachers agreed to participate. Using a random number generator, researchers randomly selected five students for participation in each classroom, resulting in the identification of 276 student participants (four teachers requested to rate only four randomly selected students because of time constraints). With regard to ethnicity, 50.6% of students were White, 32.5% Black/African American, 10.3% Hispanic/Latino(a), 2.1% Asian, and 3.7% other.

BASC-2 BESS: The FAST™ SAEBRS was compared to the BASC-2 BESS within Kilgus, Eklund, von der Embse, Taylor, and Sims (2016). Participants included 34 elementary teachers and their 567 students (52.9% female). With regard to ethnicity, 50.1% of students were White, 34.4% Black/African American, 11.3% Hispanic/Latino(a), 0.5% Asian, and 3.7% multi-racial. Overall, 61.9% of students qualified for free/reduced-price lunch.

SRSS and SIBS: The FAST™ SAEBRS was compared to the SRSS and SIBS as part of Kilgus, Sims, von der Embse, and Taylor (2016). Participants included 17 teachers and 346 students, comprising all the teachers and students from a single rural Midwestern elementary school enrolling third-, fourth-, and fifth-grade students (teacher and student participation rate = 100%). The student body was characterized by a relatively even gender split, a homogeneous ethnicity profile (i.e., 95% White/Caucasian), a 20% free/reduced-price lunch rate, and low English language learner enrollment. Students included in the analysis were both regular and special education students. The 17 teachers in the study were all Caucasian females, with teaching experience ranging from 1 to more than 25 years.
*Describe the analysis procedures for each reported type of validity.
Concurrent validity: The FAST™ SAEBRS was compared to each of the aforementioned criteria via Pearson product-moment correlation (r) coefficients.
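Because the validity tables request a median coefficient with a 95% confidence interval, a minimal sketch of how such an interval is conventionally obtained for a Pearson r (via the Fisher z transformation) may be useful. The data below are simulated; the sample size merely mirrors the SRSS/SIBS study described above.

```python
# Pearson r with a Fisher-z 95% confidence interval, on simulated data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 346                         # mirrors the SRSS/SIBS sample size
criterion = rng.normal(size=n)  # stand-in criterion scores
screener = 0.7 * criterion + rng.normal(scale=0.7, size=n)

r, _ = pearsonr(screener, criterion)
z = np.arctanh(r)               # Fisher z transform
se = 1 / np.sqrt(n - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```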

*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
Yes
Provide citations for additional published studies.
Eklund, K., Kilgus, S. P., von der Embse, N., Beardmore, M., & Tanner, N. (2017). Use of universal screening scores to predict distal academic and behavioral outcomes: A multi-level approach. Psychological Assessment, 29, 486-499.
Kilgus, S. P., Bonifay, W., von der Embse, N. P., Allen, A. N., & Eklund, K. (in press). Evidence for the interpretation of Social, Academic, and Emotional Behavior Risk Screener (FAST SAEBRS) scores: An argument-based approach to screener validation. Journal of School Psychology.
Kilgus, S. P., Bowman, N. A., Christ, T. J., & Taylor, C. N. (2017). Predicting academics via behavior within an elementary sample: An evaluation of the Social, Academic, and Emotional Behavior Risk Screener (FAST SAEBRS). Psychology in the Schools, 54, 246-260.
Kilgus, S. P., Eklund, K., von der Embse, N. P., Taylor, C., & Sims, W. A. (2016). Psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener (FAST SAEBRS) teacher rating scale and multiple gating procedure within elementary and middle school samples. Journal of School Psychology, 58, 21-39.
Kilgus, S. P., Sims, W. A., von der Embse, N. P., & Taylor, C. N. (2016). Psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener (FAST SAEBRS) within an elementary sample. Assessment for Effective Intervention, 42, 46-59.
Kilgus, S. P., von der Embse, N. P., Allen, A. N., Taylor, C. N., & Eklund, K. (in press). Examining FAST SAEBRS technical adequacy and the moderating influence of criterion type on cut score performance. Remedial and Special Education.
Describe the degree to which the provided data support the validity of the tool.
The presented data speak to the validity of the nomological net upon which the FAST™ SAEBRS is founded. The FAST™ SAEBRS theoretical framework specifies that the measure should be capable of predicting a student's broader social-emotional and behavioral functioning, as well as the student's behavior within the social, academic, and emotional domains. Evidence to date has supported this framework, as evidenced by: (a) correlations with the BESS, an indicator of broad and general functioning; (b) correlations with the SSIS-Social Skills and SRSS, indicators of student social competence and externalizing behavior, respectively (both of which are theoretically captured through the FAST™ SAEBRS Social Behavior subscale); (c) correlations with the SSIS-Academic Competence scale, an indicator of student academic functioning (which is theoretically captured through the FAST™ SAEBRS Academic Behavior subscale); and (d) correlations with the SIBS, an indicator of student internalizing behavior (which is theoretically captured through the FAST™ SAEBRS Emotional Behavior subscale). Taken together, existing validity evidence supports all elements of the theoretical framework upon which the FAST™ SAEBRS is founded.
Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
No

If yes, fill in data for each subgroup with disaggregated validity data.

Type | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval: Lower Bound | 95% Confidence Interval: Upper Bound
Results from other forms of validity analysis not compatible with above table format:
Manual cites other published validity studies:
No
Provide citations for additional published studies.

Bias Analysis

Age / Grade: Grades K-5
Informant: Teacher
Rating: Yes
Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
Yes
If yes,
a. Describe the method used to determine the presence or absence of bias:
Multi-group confirmatory factor analysis (MG-CFA) was used to examine measurement equivalence/invariance (ME/I; Pendergast, von der Embse, Kilgus, & Eklund, 2017). Specifically, analyses considered the extent to which the FAST™ SAEBRS (inclusive of only the Social Behavior and Academic Behavior items) was invariant across ethnic categories.
b. Describe the subgroups for which bias analyses were conducted:
Participants from two racial groups (White n = 412, and Black n = 323) were included in analyses of SABRS ME/I across race. Participants from other racial groups were excluded because the sample sizes were too small (<100).
c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.
CFAs were conducted in Mplus 6.2 using WLSMV estimation (Beauducel & Herzberg, 2006). Overall model fit was evaluated based on the RMSEA and the CFI (Kline, 2010; Tanaka, 1993). Criteria for evaluating minimally acceptable model fit were established a priori: RMSEA values ≤ 0.08 and CFI values ≥ 0.90 (Browne & Cudeck, 1993; Hu & Bentler, 1995; Markland, 2007).

The analyses in this study focused on examining ME/I across race. The ME/I of the two-factor SABRS structure was assessed by applying increasingly restrictive equality constraints across groups to examine (a) configural invariance, (b) metric invariance, and (c) scalar/threshold invariance. Nested models (i.e., models with increasingly restrictive invariance constraints) were compared using the change in SB χ2 (ΔSB χ2), change in CFI (ΔCFI), and change in RMSEA (ΔRMSEA) values; each nested model was compared to its less restrictive parent model. As the models grew more restrictive, a non-significant Δχ2 (p > 0.05), ΔCFI < 0.01 (Cheung & Rensvold, 2002), and ΔRMSEA < 0.015 indicated that the more restrictive model fit the data comparably to the less restrictive one (Byrne, 2011; Meade et al., 2008; Satorra & Bentler, 2001).

In the first step, configural invariance was established; fit indices for the configural model fell within the specified ranges (CFI = 0.993; RMSEA = 0.077). Next, a metric invariance model was tested wherein factor loadings were constrained to be equal across racial groups. The model had adequate fit based on the aforementioned criteria (CFI = 0.994; RMSEA = 0.069), and the change-in-model-fit indices suggested that the fit of the metric invariance model was not significantly worse, and in fact was slightly better, relative to the configural invariance model (Δχ2 was non-significant, ΔCFI was < 0.01, and ΔRMSEA was < 0.015). Subsequently, a scalar/threshold invariance model was tested whereby factor loadings and thresholds were constrained to be equal across groups. The model had adequate fit (CFI = 0.994; RMSEA = 0.062), and the change-in-model-fit indices indicated that the fit of the scalar/threshold model was not significantly different from that of the metric model (Δχ2 was non-significant, ΔCFI was < 0.01, ΔRMSEA was < 0.015). Therefore, scalar/threshold invariance was supported.
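The nested-model comparisons reduce to simple threshold checks on the change-in-fit indices. The sketch below applies the ΔCFI < .01 and ΔRMSEA < .015 criteria to the fit indices reported above; the Δχ2 test is omitted because the underlying SB χ2 values are not reproduced here.

```python
# Apply the reported invariance decision rules (Cheung & Rensvold, 2002;
# Meade et al., 2008) to the fit indices from Pendergast et al. (2017).
def invariance_holds(parent, nested, d_cfi=0.01, d_rmsea=0.015):
    """True if the more restrictive (nested) model fits comparably to its parent."""
    return (abs(parent["cfi"] - nested["cfi"]) < d_cfi
            and abs(parent["rmsea"] - nested["rmsea"]) < d_rmsea)

configural = {"cfi": 0.993, "rmsea": 0.077}
metric = {"cfi": 0.994, "rmsea": 0.069}
scalar = {"cfi": 0.994, "rmsea": 0.062}

print(invariance_holds(configural, metric))  # True -> metric invariance supported
print(invariance_holds(metric, scalar))      # True -> scalar/threshold invariance
```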

Data Collection Practices

Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.