BASC-3 Flex Monitor
Disruptive Behavior
Summary
The BASC–3 Flex Monitor was developed to provide an efficient alternative for monitoring the status of behavioral and emotional functioning; it is an efficacious online tool (also offering paper form options) that can be used to measure the effectiveness of intervention programs at a group or individual level. The BASC–3 Flex Monitor includes teacher, parent, and self-report forms that are to be used in conjunction with Q-global®, a secure online system for administering, scoring, and reporting test results. The BASC–3 Flex Monitor offers standard forms to measure each of the following behavioral/emotional domains: Inattention/Hyperactivity, Internalizing Problems, Disruptive Behaviors, Developmental Social Disorders, and School Problems. In addition, custom forms can be developed from an item bank of more than 700 items across teacher, parent, and student forms. For each custom form, a standardized total score (in T score units) is provided that is based on a nationally representative normative sample. When developing a form, a reliability coefficient can also be generated based on the same normative sample, providing an indication of the quality of the form being developed prior to its use in monitoring behavioral and emotional functioning. Spanish-language versions are available for all parent and student forms. The Disruptive Behaviors form measures a variety of disruptive behaviors, including hitting others, annoying others, arguing, bullying, making threats, and a lack of empathy for others.
- Where to Obtain:
- Cecil R. Reynolds and Randy W. Kamphaus / Pearson
- https://support.pearson.com/getsupport/s/ClinicalProductSupportForm (online contact only)
- Pearson, Attn: Inbound Sales & Customer Support, P.O. Box 599700, San Antonio, TX 78259
- 1-800-627-7271
- https://www.pearsonclinical.com/education/products/100001542/basc3-flexmonitor.html
- Initial Cost:
- $1.25 per completed form
- Replacement Cost:
- Contact vendor for pricing details.
- Included in Cost:
- The program requires a digital manual, which can be purchased for $55. Administration and scoring cost $1.25 per completed form.
- The BASC–3 Flex Monitor Disruptive Behaviors form includes teacher and parent forms that are to be used in conjunction with Q-global®, a secure online system for administering, scoring, and reporting test results. The forms can be administered digitally using a smartphone, tablet device, or computer. The forms may also be printed on paper, and responses can be entered into Q-global for immediate scoring and reporting. Reports provide a raw score and standardized T score, a graph that is used to track scores over repeated administrations, a summary of the change in scores over repeated administrations, and a summary of item responses.
- Training Requirements:
- Less than one hour of training.
- Qualified Administrators:
- Those interpreting BASC-3 Flex Monitor scores should be qualification level B professionals who have completed formal coursework in the administration and interpretation of psychological tests and measurements and who understand the basic psychometrics that underlie test use and development. It is also recommended that these individuals have coursework in areas related to the emotional and behavioral development of children. Finally, these individuals should be familiar with the principles presented in the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014), or more recent updates, and should endorse standards for the ethical use of educational and psychological tests. The manual should be reviewed in full before using the BASC–3 Flex Monitor components. Information about the qualification level B required to purchase materials is available at: https://www.pearsonclinical.com/education/qualifications.html
- Access to Technical Support:
- Assessment Format:
- Rating scale
- Scoring Time:
- Scoring is automatic, or 5 minutes per student
- Scores Generated:
- Raw score
- Standard score
- Developmental benchmarks
- Administration Time:
- 5 minutes per student
- Scoring Method:
- Automatically (computer-scored)
- Technology Requirements:
- Computer or tablet
- Internet connection
- Other technology: Progress monitoring forms are to be used in conjunction with Q-global®, a secure online system for administering, scoring, and reporting test results.
Tool Information
Descriptive Information
- Please provide a description of your tool:
- The BASC–3 Flex Monitor was developed to provide an efficient alternative for monitoring the status of behavioral and emotional functioning; it is an efficacious online tool (also offering paper form options) that can be used to measure the effectiveness of intervention programs at a group or individual level. The BASC–3 Flex Monitor includes teacher, parent, and self-report forms that are to be used in conjunction with Q-global®, a secure online system for administering, scoring, and reporting test results. The BASC–3 Flex Monitor offers standard forms to measure each of the following behavioral/emotional domains: Inattention/Hyperactivity, Internalizing Problems, Disruptive Behaviors, Developmental Social Disorders, and School Problems. In addition, custom forms can be developed from an item bank of more than 700 items across teacher, parent, and student forms. For each custom form, a standardized total score (in T score units) is provided that is based on a nationally representative normative sample. When developing a form, a reliability coefficient can also be generated based on the same normative sample, providing an indication of the quality of the form being developed prior to its use in monitoring behavioral and emotional functioning. Spanish-language versions are available for all parent and student forms. The Disruptive Behaviors form measures a variety of disruptive behaviors, including hitting others, annoying others, arguing, bullying, making threats, and a lack of empathy for others.
- Is your tool designed to measure progress towards an end-of-year goal (e.g., oral reading fluency) or progress towards a short-term skill (e.g., letter naming fluency)?
- ACADEMIC ONLY: What dimensions does the tool assess?
- BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each sub-domain or sub-construct.
- Disruptive Behaviors: Measures a variety of disruptive behaviors, including hitting others, annoying others, arguing, bullying, making threats, and a lack of empathy.
- BEHAVIOR ONLY: Which category of behaviors does your tool target?
Externalizing
Acquisition and Cost Information
Administration
Training & Scoring
Training
- Is training for the administrator required?
- Yes
- Describe the time required for administrator training, if applicable:
- Less than one hour of training.
- Please describe the minimum qualifications an administrator must possess.
- Those interpreting BASC-3 Flex Monitor scores should be qualification level B professionals who have completed formal coursework in the administration and interpretation of psychological tests and measurements and who understand the basic psychometrics that underlie test use and development. It is also recommended that these individuals have coursework in areas related to the emotional and behavioral development of children. Finally, these individuals should be familiar with the principles presented in the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014), or more recent updates, and should endorse standards for the ethical use of educational and psychological tests. The manual should be reviewed in full before using the BASC–3 Flex Monitor components. Information about the qualification level B required to purchase materials is available at: https://www.pearsonclinical.com/education/qualifications.html
- No minimum qualifications
- Are training manuals and materials available?
- Yes
- Are training manuals/materials field-tested?
- No
- Are training manuals/materials included in cost of tools?
- Yes
- If No, please describe training costs:
- Can users obtain ongoing professional and technical support?
- Yes
- If Yes, please describe how users can obtain support:
Scoring
- Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
- There are 8 to 13 items across the preschool, child, and adolescent levels of the teacher and parent forms (Teacher: Preschool [8], Child [13], Adolescent [13]; Parent: Preschool [8], Child [13], Adolescent [13]). Items are scored from 0 to 3 points, and item scores are summed to form an overall raw score, which is then converted to a standardized T score (as sketched below). The T scores are based on nationally representative, age-based standardization samples (ages 2-3, 4-5, 6-7, 8-11, 12-14, and 15-18), with sample sizes varying by age band.
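As a concrete illustration of this scoring flow, the sketch below sums 0-3 item ratings into a raw score and converts the raw score to a T score through a norm lookup. The item responses and the lookup table are placeholders invented for demonstration; real conversions come from the BASC-3 Flex Monitor's age-based normative tables in Q-global.

```python
# Hypothetical illustration of the raw-score-to-T-score flow described above.
# ITEM_RESPONSES and FAKE_NORM_TABLE are made-up placeholders, not BASC-3 data.

ITEM_RESPONSES = [2, 1, 3, 0, 2, 1, 1, 0, 2, 1, 0, 1, 2]  # 13 items, each rated 0-3

# Placeholder norm lookup: raw score -> T score for one hypothetical age band.
FAKE_NORM_TABLE = {raw: round(50 + (raw - 10) * 1.5) for raw in range(0, 40)}


def raw_score(responses):
    """Sum the 0-3 item ratings into an overall raw score."""
    return sum(responses)


def t_score(raw, norm_table):
    """Convert a raw score to a standardized T score via a norm lookup table."""
    return norm_table[raw]


raw = raw_score(ITEM_RESPONSES)
print(f"Raw score: {raw}, T score: {t_score(raw, FAKE_NORM_TABLE)}")
```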
- Do you provide basis for calculating slope (e.g., amount of improvement per unit in time)?
- Yes
- ACADEMIC ONLY: Do you provide benchmarks for the slopes?
- ACADEMIC ONLY: Do you provide percentile ranks for the slopes?
- Describe the tool’s approach to progress monitoring, behavior samples, test format, and/or scoring practices, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
Levels of Performance and Usability
- Date
- April 2013 through November 2014
- Size
- 4,400
- Male
- 50
- Female
- 50
- Unknown
- Eligible for free or reduced-price lunch
- Other SES Indicators
- Parent education level (i.e., the highest school grade completed by the child’s mother or female guardian, or the child’s father if the mother’s education level was unavailable): grade 11 or less (1), high school graduate (2), 1 to 3 years of college or technical school (3), and 4 years of college or more (4)
- White, Non-Hispanic
- 49.3–55.7
- Black, Non-Hispanic
- Hispanic
- American Indian/Alaska Native
- Asian/Pacific Islander
- Other
- Unknown
- Disability classification (Please describe)
- First language (Please describe)
- Language proficiency status (Please describe)
Performance Level
Reliability
| Age / Grade, Informant | Age 2-18, Parent | Age 2-18, Teacher |
|---|---|---|
| Rating | | |
- *Offer a justification for each type of reliability reported, given the type and purpose of the tool.
- Internal Consistency and Standard Error of Measurement: Internal consistency (represented by coefficient alpha) indicates whether the items in a scale largely reflect the same underlying dimension. Such information is important for establishing that the items contributing to the overall score are indicative of a single overall construct. Test-Retest Reliability: Test-retest reliability reflects the consistency of ratings from the same teacher, parent, or student over a brief time interval. This metric is especially important for measures that are administered repeatedly.
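For reference, coefficient alpha is computed from the item variances and the total-score variance; the expression below is the conventional definition (with k items, item variances \(\sigma_i^2\), and total-score variance \(\sigma_X^2\)), not a publisher-specific variant.

$$ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right) $$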
- *Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
- Internal consistency and standard error of measurement. Samples included 1,700 participants for the teacher forms and 1,800 for the parent forms. Overall, these samples indicate a close correspondence between the BASC–3 Flex Monitor standardization sample and the 2013 census proportions across most of the forms and age bands. Test-retest reliability. Sample sizes ranged from 69 participants (parent form, child level) to 126 participants (parent form, adolescent level) for the test-retest studies. The samples include a variety of demographic groups across socioeconomic status (SES), race/ethnicity, and geographic region, as well as a reasonable split between the number of males and females. The race/ethnicity composition (%) of each sample is shown below.

Teacher forms, coefficient alpha reliability (ages 2–18):

| Ages | African American | Asian | Hispanic | Other | White |
|---|---|---|---|---|---|
| 2–3 | 13.5 | 3.5 | 26.5 | 6.0 | 50.5 |
| 4–5 | 13.7 | 4.3 | 26.3 | 6.3 | 49.3 |
| 6–7 | 13.7 | 4.7 | 25.0 | 5.3 | 51.3 |
| 8–11 | 13.3 | 5.0 | 24.0 | 5.3 | 52.3 |
| 12–14 | 13.7 | 4.7 | 23.0 | 5.0 | 53.7 |
| 15–18 | 13.7 | 4.3 | 22.7 | 4.3 | 55.0 |

Parent forms, coefficient alpha reliability (ages 2–18):

| Ages | African American | Asian | Hispanic | Other | White |
|---|---|---|---|---|---|
| 2–3 | 13.7 | 3.3 | 25.7 | 6.0 | 51.3 |
| 4–5 | 13.7 | 4.7 | 26.0 | 6.0 | 49.7 |
| 6–7 | 13.7 | 4.7 | 24.7 | 4.7 | 52.3 |
| 8–11 | 13.3 | 4.7 | 24.0 | 5.7 | 52.3 |
| 12–14 | 14.3 | 4.7 | 22.7 | 4.7 | 53.7 |
| 15–18 | 13.7 | 4.3 | 22.0 | 4.3 | 55.7 |

Teacher forms, test-retest reliability (ages 2–18):

| Ages | African American | Asian | Hispanic | Other | White |
|---|---|---|---|---|---|
| 2–5 | 6.9 | 6.9 | 11.1 | 11.1 | 63.9 |
| 6–11 | 4.9 | 3.7 | 30.9 | 1.2 | 59.3 |
| 12–18 | 13.7 | -- | 15.8 | 2.1 | 68.4 |

Parent forms, test-retest reliability (ages 2–18):

| Ages | African American | Asian | Hispanic | Other | White |
|---|---|---|---|---|---|
| 2–5 | 11.4 | -- | 7.1 | 10.0 | 71.4 |
| 6–11 | 5.8 | 2.9 | 24.6 | 2.9 | 63.8 |
| 12–18 | 5.6 | 0.8 | 17.5 | 4.0 | 72.2 |
- *Describe the analysis procedures for each reported type of reliability.
- Standard coefficient alpha procedures were used. Test-retest correlations are reported as both unadjusted values and values adjusted for restriction of range, which can bias correlation coefficients in either a positive direction (when sample variability exceeds the population estimate) or a negative direction (when sample variability is smaller than the population estimate).
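The manual excerpt does not state the exact adjustment formula, so the expression below is offered only as the widely used textbook correction for range restriction (Thorndike's Case 2), with \(r\) the observed correlation, \(s\) the restricted-sample standard deviation, and \(S\) the unrestricted (population) standard deviation; the publisher's actual procedure may differ.

$$ r_{c} = \frac{r\,(S/s)}{\sqrt{\,1 - r^{2} + r^{2}(S/s)^{2}\,}} $$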
*In the table(s) below, report the results of the reliability analyses described above (e.g., model-based evidence, internal consistency or inter-rater reliability coefficients). Include detail about the type of reliability data, statistic generated, and sample size and demographic information.
| Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
|---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- *Only one teacher was asked to complete a Teacher Rating Scale for each child. Teachers, however, were allowed to participate in the study for more than one student. Thus, ns for raters are not provided for the teacher forms.
- Manual cites other published reliability studies:
- No
- Provide citations for additional published studies.
- Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
- No
If yes, fill in data for each subgroup with disaggregated reliability data.
| Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
|---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- No
- Provide citations for additional published studies.
Validity
| Age / Grade, Informant | Age 2-18, Parent | Age 2-18, Teacher |
|---|---|---|
| Rating | | |
- *Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
- To support the constructs being measured by the BASC–3 Flex Monitor forms, a series of correlational analyses were performed between the Total Scores obtained on the BASC–3 Flex Monitor forms and the composite scale scores from the BASC–3 TRS, PRS, and SRP. These analyses were performed using the standardization sample used to develop the BASC–3 Flex Monitor norms. The results of the analyses provide evidence of concurrent and discriminant validity of the Flex Monitor Total Scores with a well-established measure of behavioral and emotional functioning.
- *Describe the sample(s), including size and characteristics, for each validity analysis conducted.
- The sample included 4,400 children: 1,700 for the Teacher Form, 1,800 for the Parent Form, and 900 for the Self-Report Form. The samples consist of an equal number of male and female children in each age grouping. Overall, these samples indicate a close correspondence between the BASC–3 Flex Monitor standardization sample and the 2013 census proportions across most of the forms and age bands. When creating the general norms, attention was given to the presence of emotional, behavioral, or physical diagnoses or classifications reported for the child. The race/ethnicity composition (%) of the correlational validity samples is shown below.

Teacher form (ages 2–5: n = 500; n not reported for the older age ranges):

| Ages | African American | Asian | Hispanic | Other | White |
|---|---|---|---|---|---|
| 2–3 | 13.5 | 3.5 | 26.5 | 6.0 | 50.5 |
| 4–5 | 13.7 | 4.3 | 26.3 | 6.3 | 49.3 |
| 6–7 | 13.7 | 4.7 | 25.0 | 5.3 | 51.3 |
| 8–11 | 13.3 | 5.0 | 24.0 | 5.3 | 52.3 |
| 12–14 | 13.7 | 4.7 | 23.0 | 5.0 | 53.7 |
| 15–18 | 13.7 | 4.3 | 22.7 | 4.3 | 55.0 |

Parent form (n = 600 for each of ages 2–5, 6–11, and 12–18):

| Ages | African American | Asian | Hispanic | Other | White |
|---|---|---|---|---|---|
| 2–3 | 13.7 | 3.3 | 25.7 | 6.0 | 51.3 |
| 4–5 | 13.7 | 4.7 | 26.0 | 6.0 | 49.7 |
| 6–7 | 13.7 | 4.7 | 24.7 | 4.7 | 52.3 |
| 8–11 | 13.3 | 4.7 | 24.0 | 5.7 | 52.3 |
| 12–14 | 14.3 | 4.7 | 22.7 | 4.7 | 53.7 |
| 15–18 | 13.7 | 4.3 | 22.0 | 4.3 | 55.7 |
- *Describe the analysis procedures for each reported type of validity.
- Correlational analyses were performed to establish the relationship between Flex Monitor Total Scores and BASC-3 Teacher/Parent/Self-Report Form Composite Scales.
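For reference, the analyses described above are presumably standard product-moment correlations between each Flex Monitor Total Score (X) and each BASC-3 composite scale score (Y); the formula below is the conventional Pearson coefficient, stated only as a reminder of what is being estimated, since the manual excerpt does not name a different estimator.

$$ r_{XY} = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i}(x_i - \bar{x})^{2}}\,\sqrt{\sum_{i}(y_i - \bar{y})^{2}}} $$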
*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.
| Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
|---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- *Only one teacher was asked to complete a Teacher Rating Scale for each child. Teachers, however, were allowed to participate in the study for more than one student. Thus, ns for raters are not provided for the teacher forms.
- Manual cites other published validity studies:
- Provide citations for additional published studies.
- Describe the degree to which the provided data support the validity of the tool.
- Across all of the forms and levels, BASC–3 Flex Monitor Total Scores correlated with BASC–3 composite scale scores in a predictable fashion. Correlations between the ADHD and Disruptive Behaviors Total Scores and the Externalizing Problems composite were consistently high. Scores from all BASC–3 Flex Monitor forms exhibited moderate to high correlations with scores from the Behavioral Symptoms Index, a global indicator of problems with behavioral/emotional functioning.
- Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
- No
If yes, fill in data for each subgroup with disaggregated validity data.
| Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
|---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- No
- Provide citations for additional published studies.
Bias Analysis
| Age / Grade: Informant | Age 2-18, Parent | Age 2-18, Teacher |
|---|---|---|
| Rating | Yes | Yes |
- Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
- Yes
- If yes,
- a. Describe the method used to determine the presence or absence of bias:
- During the development of the BASC–3 items, items were arranged into scales based on the standardization data and were then evaluated to determine whether they functioned in the same way for females and males and for African American, Hispanic, and White children. This was done using two Differential Item Functioning (DIF) methods: Rasch-based and Mantel–Haenszel. The Rasch-based method follows the work of Mellenbergh (1982): person ability is first estimated using all of the data; person abilities are then fixed at those values, and item difficulty parameters are estimated separately for each group and compared. If the difference between the estimates for the two groups is larger than .50 logits and the t test is significant at the .01 level, the item is considered potentially biased (Draba, 1977). The Mantel–Haenszel method estimates DIF from a cross-tabulation of item responses by group, stratified on the measure of the trait; an absolute DIF size larger than .64 logits indicates moderate to large bias, corresponding to 1.5 delta units. Items were considered for removal when a consistent pattern emerged across forms and levels. Only a small number of items were removed based on these criteria.
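To make the thresholds above concrete: the Rasch-based DIF statistic for an item is the difference between its difficulty estimates in the two groups, and the delta metric cited for the Mantel–Haenszel criterion is, by the usual ETS convention, a rescaling of the logit difference by a factor of roughly 2.35, which is why a 0.64-logit DIF corresponds to about 1.5 delta units. The notation below is a generic sketch, not the manual's own symbols.

$$ \mathrm{DIF}_i = \hat b_i^{\,\mathrm{(focal)}} - \hat b_i^{\,\mathrm{(reference)}}, \qquad |\Delta_i| \approx 2.35 \times |\mathrm{DIF}_i|, \qquad 2.35 \times 0.64 \approx 1.5 $$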
- b. Describe the subgroups for which bias analyses were conducted:
- Females and males and African American, Hispanic and White children.
- c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.
- A small number of items were removed based on these criteria.
Growth Standards
Sensitivity to Behavior Change
| Age / Grade: Informant | Age 2-18, Parent | Age 2-18, Teacher |
|---|---|---|
| Rating | | |
- Describe evidence that the monitoring system produces data that are sensitive to detect incremental change (e.g., small behavior change in a short period of time such as every 20 days, or more frequently depending on the purpose of the construct). Evidence should be drawn from samples targeting the specific population that would benefit from intervention. Include in this example a hypothetical illustration (with narrative and/or graphics) of how these data could be used to monitor student performance frequently enough and with enough sensitivity to accurately assess change:
- Items included on the BASC–3 TRS, PRS, and SRP standardization forms were based on items from the BASC–2 TRS, PRS, and SRP, as well as new items that were created based on behaviors reported by teachers, parents, and students (see Reynolds & Kamphaus, 2015 for a detailed discussion). The items reflect a comprehensive view of behavioral and emotional functioning across a wide domain corresponding to the BASC–3 scales. These items were used to form the initial pool of BASC–3 Flex Monitor items. Items were reviewed during several iterations for their appropriateness in monitoring change in behavioral and emotional functioning; items that were not considered appropriate for monitoring change were removed from the item pool. This process resulted in over 700 items remaining in the BASC–3 Flex Monitor item pool.
Reliability (Intensive Population): Reliability for Students in Need of Intensive Intervention
| Age / Grade, Informant | Age 2-18, Parent | Age 2-18, Teacher |
|---|---|---|
| Rating | | |
- Offer a justification for each type of reliability reported, given the type and purpose of the tool:
- Describe the sample(s), including size and characteristics, for each reliability analysis conducted:
- Describe the analysis procedures for each reported type of reliability:
In the table(s) below, report the results of the reliability analyses described above (e.g., model-based evidence, internal consistency or inter-rater reliability coefficients). Report results by age range or grade level (if relevant) and include detail about the type of reliability data, statistic generated, and sample size and demographic information.
| Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
|---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- No
- Provide citations for additional published studies.
- Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
- No
- If yes, fill in data for each subgroup with disaggregated reliability data.
| Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
|---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- No
- Provide citations for additional published studies.
Validity (Intensive Population): Validity for Students in Need of Intensive Intervention
| Age / Grade, Informant | Age 2-18, Parent | Age 2-18, Teacher |
|---|---|---|
| Rating | | |
- Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
- Describe the sample(s), including size and characteristics, for each validity analysis conducted.
- Describe the analysis procedures for each reported type of validity.
- In the table(s) below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.
| Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
|---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- No
- Provide citations for additional published studies.
- Describe the degree to which the provided data support the validity of the tool.
- Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
- No
- If yes, fill in data for each subgroup with disaggregated validity data.
| Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
|---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- No
- Provide citations for additional published studies.
Decision Rules: Data to Support Intervention Change
| Age / Grade: Informant | Age 2-18, Parent | Age 2-18, Teacher |
|---|---|---|
| Rating | | |
- Are validated decision rules for when changes to the intervention need to be made specified in your manual or published materials?
- Yes
- If yes, specify the decision rules:
- There are several ways to evaluate the scores provided on BASC–3 Flex Monitor forms. First, T scores can be evaluated according to the classification categories. These categories can be helpful when the primary question of interest is how the individual's score compares to a representative population of the same age cohort; by itself, however, this information might not meet the primary need of most progress monitoring situations. A more traditional way of evaluating scores is to compare score changes across time. The BASC–3 Flex Monitor reports offer comparisons between a score and the score obtained during the initial form administration, as well as comparisons between a score and the score that directly precedes it. When comparing scores, the standard error of the difference is used to test for statistically significant differences between scale scores (using the formula provided in Anastasi & Urbina, 1997, p. 111); the standard error value used in this test is based on the test-retest reliability coefficients. Both methods of comparison are valuable when interpreting BASC–3 Flex Monitor results. Any formalized intervention strategy requires a commitment of time and resources from those involved in implementing it. The intervention should result in improved behavioral and emotional functioning (i.e., improved Total Scores on the monitoring form), as indicated by the change in T-score comparisons. However, intervention efforts should also result in functioning levels that are considered acceptable. For example, consider an intervention strategy designed to reduce disruptive behaviors. A child receives an average T score of 95 during a baseline, pre-intervention period. After 6 weeks of 30-minute one-on-one sessions three times a week, the child's monitoring form score is 75, a 20-point difference. Undoubtedly, such a large difference would be statistically significant. However, a T score of 75 is still very extreme compared to the general population and lies in the Clinically Significant range. As such, serious consideration would need to be given to changing the intervention approach to something that might result in a further reduction in behavioral problems. A worked illustration of the significance test appears below.
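The significance test referenced above compares an observed score change against the standard error of the difference between two scores. The form below is the standard one given in sources such as Anastasi and Urbina; the reliability value in the worked example is an assumed placeholder for illustration, not a figure taken from the BASC–3 Flex Monitor manual.

$$ SE_{\mathrm{diff}} = SD\,\sqrt{2 - r_{11} - r_{22}} $$

With T scores (SD = 10) and an assumed test-retest reliability of .90 for both administrations, \(SE_{\mathrm{diff}} = 10\sqrt{2 - .90 - .90} \approx 4.5\), so a change would need to exceed roughly 1.96 × 4.5 ≈ 8.8 T-score points to be significant at the .05 level. Under that assumption, the 20-point drop in the example above would be statistically significant, even though the resulting T score of 75 still falls in the Clinically Significant range.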
- What is the evidentiary basis for these decision rules?
Decision Rules: Data to Support Intervention Selection
| Age / Grade: Informant | Age 2-18, Parent | Age 2-18, Teacher |
|---|---|---|
| Rating | | |
- Are validated decision rules for what intervention(s) to select specified in your manual or published materials?
- No
- If yes, specify the decision rules:
- What is the evidentiary basis for these decision rules?
Data Collection Practices
Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.