MyDBRConnect

Disruptive Behavior

 

Cost

Initial Cost:

  • For 1-99 students, the full year purchase cost is $400, and the mid-year purchase cost is $240.
  • For 100-499 students, the full year purchase cost is $600, and the mid-year purchase cost is $360.
  • For 500-1,499 students, the full year purchase cost is $1,000, and the mid-year purchase cost is $600.
  • For 1,500-2,999 students, the full year purchase cost is $1,500, and the mid-year purchase cost is $900.
  • For 3,000-9,999 students, the full year purchase cost is $1,800, and the mid-year purchase cost is $1,080.
  • For 10,000-49,999 students, the full year purchase cost is $5,000, and the mid-year purchase cost is $3,000.
  • For 50,000-99,999 students, the full year purchase cost is $11,000, and the mid-year purchase cost is $6,600.
  • For 100,000-299,999 students, the full year purchase cost is $14,000, and the mid-year purchase cost is $8,400.
  • For 300,000+ students, the full year purchase cost is $20,000, and the mid-year purchase cost is $12,000.

 

Replacement Cost:

Yearly subscription pricing (based on an August 1-July 31 school year) is tied to the student population of the school(s). Customers can also purchase half-year subscriptions.

 

Included in Cost:
Training is free of charge via the online training module: http://dbrtraining.education.uconn.edu/

 

Technology, Human Resources, and Accommodations for Special Needs

Technology Requirements:

  • Computer or tablet
  • Internet connection

 

Training Requirements:

  • Less than 1 hour of training

 

Qualified Administrators:

  • No minimum qualifications specified

 

Accommodations:

No information provided; contact vendor for details.

 

Service and Support

Where to Obtain:

Website:

http://www.mydbrconnect.com/

Address:

16204 N. Florida Avenue, Lutz, FL 33549

Phone Number:
1.866.727.2884

Email:

cs@parinc.com


Access to Technical Support:

Technical support is provided with purchase of the DBR-SIS software from http://www.mydbrconnect.com/

 

Purpose and Other Implementation Information

As a behavioral assessment methodology, DBR combines characteristics of systematic direct observations and behavioral rating scales. Specifically, DBR-SIS reflects a teacher’s rating of the proportion of time in which a target student was observed to engage in a specific behavior, on a scale from 0 (never) to 10 (always), during a specified observation period.

 

For example, if a student received a score of 8 out of 10 on a DBR-SIS form while being observed for Academic Engagement over a 20-minute period, this score would be interpreted as the student being academically engaged during 80% of the period. While observation periods and settings may vary depending on student- and behavior-specific factors, DBR-SIS forms reflecting student behaviors are always completed immediately following the observation.
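
To make the score-to-percentage interpretation concrete, here is a minimal Python sketch of the conversion described above; the function name is ours for illustration, not part of MyDBRConnect:

```python
def dbr_sis_percent(rating: int) -> float:
    """Convert a 0-10 DBR-SIS rating into the implied percentage of the
    observation period during which the behavior was displayed."""
    if not 0 <= rating <= 10:
        raise ValueError("DBR-SIS ratings range from 0 (never) to 10 (always)")
    return rating * 10.0

# Example from the text: a rating of 8 for Academic Engagement over a
# 20-minute period implies engagement during roughly 80% of the period.
assert dbr_sis_percent(8) == 80.0
```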

 

Usage and Reporting

Assessment Format:

  • Individual
  • Group
  • Computer-administered

 

Administration Time:

  • 15 minutes per student

 

Scoring Time:

  • Scoring is automatic

 

Scoring Method:

  • Calculated automatically

 

Scores Generated:

  • Duration
  • Raw Score
  • Percent
  • Peer Comparison
  • Rate of Change
  • Developmental Benchmarks

 

 

Reliability

Age/Grade:   Grades 6-8    Grades K-5
Informant:   Teacher       Teacher
Rating:      Full bubble   Full bubble

Justify the appropriateness of each type of reliability reported:

DBR-SIS was originally developed with the intent to mirror the formative data streams provided by systematic direct observation. As such, issues of reliability (particularly which types to emphasize) can be openly discussed and debated. Reliability evidence was gathered using a two-pronged approach that includes (a) intraclass correlations and (b) generalizability theory.

The first approach estimates reliability by converting intraclass correlation coefficients to reliability coefficients using the approach suggested by Shrout & Fleiss (1979), with data obtained from studies designed to examine DBR-SIS for screening purposes. These reliability estimates are based on large samples of students across a diverse range of general classroom settings; they address a wide range of grade levels and ultimately consider the variability between students and within observations. These data provide insight into the consistency of student ratings across observation periods and indicate that ratings are very stable across observation periods.

Using generalizability theory, reliability data are calculated through dependability studies to demonstrate how reliability varies based on the number of observations and days observed. This approach is appropriate because DBR-SIS data are rating-scale data, and the ability to generalize scores (i.e., to assume a student would receive a similar rating from a different observer) is of key concern. Generalizability studies allow for reliability estimates across several thresholds of ratings, in this case determining how many observations are needed to obtain various estimates. This provides practitioners with a range of administration options depending on the type of decision to be made (e.g., low-stakes intervention, high-stakes intervention). Reliability coefficients under differing assessment considerations (number of observations, type of rater scoring students) are discussed in the sources listed below, which purposely sampled from classrooms in which variability in student behavior was expected (e.g., inclusive classrooms with intensive intervention needs).

 

Describe the sample characteristics for each reliability analysis conducted:

Sample information for the Johnson et al. (2016) study:

Sample Demographics Table (percentage by time point)

Category                               Fall     Winter   Spring
Gender: Male                           52.0%    52.1%    51.7%
Gender: Female                         48.0%    47.9%    48.3%
Race: White, Non-Hispanic              81.4%    82.8%    82.3%
Race: Black, Non-Hispanic              12.2%    11.0%    11.3%
Race: American Indian/Alaska Native    -        -        -
Race: Asian/Pacific Islander           1.7%     1.7%     1.7%
Race: Other                            3.7%     3.5%     3.6%
Race: Multiracial                      1.0%     1.0%     0.9%
Ethnicity: Non-Hispanic                92.5%    92.4%    92.6%
Ethnicity: Hispanic                    7.5%     7.6%     7.4%

 

Sample information for the Chafouleas et al. (2013) study:

Grades K-5: 51.7% female; White, Non-Hispanic (N = 553; 89.6%); White, Hispanic (N = 12; 1.9%); Black (N = 9; 1.5%); American Indian or Alaska Native (N = 2; 0.3%); Asian (N = 13; 2.1%); Other (N = 8; 1.3%); missing (N = 20; 3.2%).

Grades 6-8: 46.3% female; 89.7% White, non-Hispanic.

Sample information for the Chafouleas et al. (2010) study:

Seven 8th-grade students attending an inclusive language arts classroom. Student demographics: 3 boys/4 girls; 6 Hispanic/1 African American; 4 receiving special education services. Raters included the classroom teacher, a special education teacher who provided services in the classroom, and two research assistants. In the study, raters observed students three times a day over six consecutive days, for periods of 45-60 minutes. The reliability coefficients below present reliability separately for classroom teachers and research assistants, across a variety of total observation counts.

 

Describe the analysis procedures for each reported type of reliability:

Analysis procedures for the Johnson et al. (2016) study:

Average DBR-SIS AE scores across 6-10 observations per student were used for analysis. Specifically, reliability for AE was calculated from a one-way intraclass correlation coefficient (ICC) that examined variability between students and within observations, corresponding to ICC(1,k), using a formula proposed by Shrout and Fleiss (1979). The average ICC (k = 6-10) was selected.
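
As a rough illustration of that computation, the sketch below derives ICC(1,k) from the one-way ANOVA mean squares per Shrout and Fleiss (1979); the data are simulated for demonstration and do not come from the study:

```python
import numpy as np

def icc_1k(ratings: np.ndarray) -> float:
    """ICC(1,k) per Shrout & Fleiss (1979): reliability of the MEAN of k ratings
    under a one-way random-effects model. `ratings` is (n_students, k) shaped."""
    n, k = ratings.shape
    row_means = ratings.mean(axis=1)
    # One-way ANOVA mean squares: between students and within students.
    ms_between = k * np.sum((row_means - ratings.mean()) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / ms_between

# Simulated example: 200 students, each rated on 8 occasions.
rng = np.random.default_rng(0)
stable_tendency = rng.uniform(0, 10, size=(200, 1))
observed = np.clip(stable_tendency + rng.normal(0, 1.5, size=(200, 8)), 0, 10)
print(round(icc_1k(observed), 2))  # high value: ratings are consistent across occasions
```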

Analysis procedures for the Chafouleas et al. (2013) study:

Students were rated on DB by teachers across 5-10 data points, and these scores were averaged to obtain a mean value. ICCs were then calculated using a formula in accordance with Shrout and Fleiss's (1979) recommendations. Intraclass correlation coefficients were examined for each DBR-SIS behavior target to assess the appropriateness of this within-student DBR-SIS data aggregation.

Analysis procedures for the Chafouleas et al. (2010) study:

Four primary facets of interest were identified (i.e., person, rater, day, and rating occasion). Every student was rated on every occasion by every rater and, given that the goal was to generalize results beyond the specific students, raters, and rating occasions examined, all facets were considered random. An ANOVA with Type III sums of squares was used to derive all variance components (Chafouleas et al., 2010).
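
For intuition on how the generalizability (Eρ²) and dependability (Φ) coefficients in the tables below grow with the number of observations, here is a simplified sketch for a person x occasion design (the actual study used four facets); the variance components are hypothetical, chosen only to mimic the shape of the published values:

```python
def g_coefficients(var_person: float, var_occasion: float, var_residual: float, n_obs: int):
    """Generalizability (E-rho-squared) and dependability (Phi) coefficients for
    a simple person x occasion design, averaging over n_obs rating occasions."""
    rel_error = var_residual / n_obs                   # relative error (rank-ordering decisions)
    abs_error = (var_occasion + var_residual) / n_obs  # absolute error (criterion-referenced decisions)
    e_rho2 = var_person / (var_person + rel_error)
    phi = var_person / (var_person + abs_error)
    return e_rho2, phi

# Hypothetical variance components; both coefficients rise as observations accumulate.
for n in (1, 5, 10, 15, 20):
    g, d = g_coefficients(var_person=1.0, var_occasion=0.4, var_residual=3.5, n_obs=n)
    print(f"{n:2d} obs: generalizability={g:.2f}, dependability={d:.2f}")
```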

Form: Classroom Teachers    Age Range: Lower Elementary – Middle School

Type of Reliability          | Age or Grade                                                        | n (raters) | n (examinees)                           | Coefficient | Confidence Interval
Inter-rater reliability      | Lower Elementary (1-2), Upper Elementary (4-5), Middle School (7-8) | -          | Fall: 1863; Winter: 1798; Spring: 1791  | 0.91-0.97*  | -
Inter-rater reliability: ICC | Elementary School                                                   | 44         | 617                                     | 0.92        | -
Inter-rater reliability: ICC | Middle School                                                       | 17         | 214                                     | 0.78        | -

*Note: reliability estimates were not calculated for each grade separately, but rather were summated by target behavior (DB).

Form: Classroom Teachers    Age Range: Eighth Grade

Type of Reliability*     | 1 obs | 5 obs | 10 obs | 15 obs | 20 obs
Generalizability (Eρ²)   | 0.22  | 0.56  | 0.70   | 0.77   | 0.80
Dependability (Φ)        | 0.18  | 0.49  | 0.63   | 0.70   | 0.73

*Note. Teachers in this study were not exposed to the complete recommended training components. A brief introduction/overview only was provided, with no additional feedback.

Form: Research Assistants    Age Range: Eighth Grade

Type of Reliability*     | 1 obs | 5 obs | 10 obs | 15 obs | 20 obs
Generalizability (Eρ²)   | 0.36  | 0.74  | 0.85   | 0.90   | 0.92
Dependability (Φ)        | 0.29  | 0.63  | 0.74   | 0.79   | 0.81

*Note. Teachers in this study were not exposed to the complete recommended training components. A brief introduction/overview only was provided, with no additional feedback.

 

Validity

Age/Grade:   Grades 6-8     Grades K-5
Informant:   Teacher        Teacher
Rating:      Empty bubble   Empty bubble

Describe and justify the criterion measures used to demonstrate validity:

Concurrent validity serves as the primary source of validity data for DBR-SIS. As described, the intended purpose of DBR-SIS is formative use. As such, a primary source of validity data comes from concurrent comparisons with a variety of behavior assessment measures. While there is no single behavior assessment method that combines both teacher ratings and formative assessment, comparisons with the Behavioral and Emotional Screening System and the Student Risk Screening Scale (teacher ratings), both established and technically sound screening measures, provide information about the validity of DBR-SIS.

 

Describe the sample characteristics for each validity analysis conducted:

Sample information for the Johnson et al. (2016) study:

Sample Demographics Table (percentage by time point)

Category                               Fall     Winter   Spring
Gender: Male                           52.0%    52.1%    51.7%
Gender: Female                         48.0%    47.9%    48.3%
Race: White, Non-Hispanic              81.4%    82.8%    82.3%
Race: Black, Non-Hispanic              12.2%    11.0%    11.3%
Race: American Indian/Alaska Native    -        -        -
Race: Asian/Pacific Islander           1.7%     1.7%     1.7%
Race: Other                            3.7%     3.5%     3.6%
Race: Multiracial                      1.0%     1.0%     0.9%
Ethnicity: Non-Hispanic                92.5%    92.4%    92.6%
Ethnicity: Hispanic                    7.5%     7.6%     7.4%

 

Sample information for the Kilgus et al. (2014) study:

The sample consisted of 1,108 students in the 1st, 4th, and 7th grades, sampled from 13 schools across three geographic regions (Northeast, Southeast, Midwest). Specifically, the sample consisted of 410 first-grade students (31 teachers), 354 fourth-grade students (25 teachers), and 344 seventh-grade students (23 teachers). Regarding region, the sample consisted of 28 teachers at the Northeast site (first grade n = 8, fourth grade n = 9, seventh grade n = 11), 29 teachers at the Southeast site (first grade n = 14, fourth grade n = 10, seventh grade n = 5), and 22 teachers at the Midwest site (first grade n = 9, fourth grade n = 6, seventh grade n = 7). The majority of students were identified as White, non-Hispanic (n = 536; 48.38%); 141 as White, Hispanic (12.73%); 297 as Black or African American (26.81%); 20 as American Indian or Alaskan Native (1.81%); 45 as Asian American (4.06%); and 32 as Other (2.89%). Race/ethnicity data were not provided for 37 students (3.33%). A review of the data indicated that the student sample at each geographic site was representative of its corresponding state population with regard to gender and race/ethnicity, with a slight underrepresentation of White, non-Hispanic students.

Sample information for the Chafouleas et al. (2013) study:

Elementary (K-5):
  • 617 elementary students (K: 90; 1st: 116; 2nd: 106; 3rd: 92; 4th: 122; 5th: 91)
  • Lower Elementary (K-2): 312; Upper Elementary (3-5): 305
  • Female: 51.7%
  • White, Non-Hispanic (N = 553; 89.6%); White, Hispanic (N = 12; 1.9%); Black (N = 9; 1.5%); American Indian or Alaska Native (N = 2; 0.3%); Asian (N = 13; 2.1%); Other (N = 8; 1.3%); missing (N = 20; 3.2%)

Middle School (6-8):
  • 214 middle school students (6th: 18; 7th: 155; 8th: 41)
  • Female: 46.3%
  • White, non-Hispanic: 89.7%

 

Describe the analysis procedures for each reported type of validity:

Analysis procedures for the Johnson et al. (2016) study:

Correlation coefficients were calculated between BESS T-scores and mean DBR-SIS DB scores.

Analysis procedures for the Kilgus et al. (2014) study:

Pearson product-moment bivariate correlations between screening scores (i.e., DBR-SIS DB, BESS, and SRSS) were calculated across grades.

Analysis procedures for the Chafouleas et al. (2013) study:

Concurrent validity was evaluated by calculating Pearson product-moment correlation coefficients (r) between mean DBR-SIS DB scores and computed SRSS summed scores and BESS T-scores.
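
A minimal sketch of this kind of concurrent-validity computation, using simulated stand-ins for the mean DBR-SIS DB scores and BESS T-scores (none of these values come from the studies above):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# Simulated mean DBR-SIS DB score per student (0-10) and a noisy, positively
# related BESS T-score (higher = greater risk on both measures).
dbr_db_mean = rng.uniform(0, 10, 200)
bess_t = 50 + 2.0 * (dbr_db_mean - dbr_db_mean.mean()) + rng.normal(0, 8, 200)

r, p = pearsonr(dbr_db_mean, bess_t)
print(f"r = {r:.2f}, p = {p:.2g}")  # expect a clear positive correlation
```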

 

Form: Classroom Teachers    Age Range: Lower Elementary – Middle School

BESS = Behavioral and Emotional Screening System (Kamphaus & Reynolds, 2007); SRSS = Student Risk Screening Scale (Drummond, 1994).

Type of Validity    | Age or Grade                        | Test or Criterion | n (examinees) | n (raters) | Coefficient | Confidence Interval
Concurrent Validity | Lower Elementary (1-2)              | BESS              | 614           | 61         | 0.59        | -
Concurrent Validity | Upper Elementary (4-5)              | BESS              | 672           | 70         | 0.57        | -
Concurrent Validity | Middle School (7-8)                 | BESS              | 530           | 61         | 0.56        | -
Concurrent Validity | 1st Grade Students                  | BESS              | 410           | 31         | 0.59        | -
Concurrent Validity | 1st Grade Students                  | SRSS              | 410           | 31         | 0.55        | -
Concurrent Validity | 4th Grade Students                  | BESS              | 354           | 25         | 0.45        | -
Concurrent Validity | 4th Grade Students                  | SRSS              | 354           | 25         | 0.54        | -
Concurrent Validity | 7th Grade Students                  | BESS              | 344           | 23         | 0.62        | -
Concurrent Validity | 7th Grade Students                  | SRSS              | 344           | 23         | 0.68        | -
Concurrent Validity | Elementary Students (Grades K-5)    | BESS              | 617           | 44         | 0.63        | -
Concurrent Validity | Elementary Students (Grades K-5)    | SRSS              | 617           | 44         | 0.69        | -
Concurrent Validity | Middle School Students (Grades 6-8) | BESS              | 214           | 17         | 0.37        | -
Concurrent Validity | Middle School Students (Grades 6-8) | SRSS              | 214           | 17         | 0.39        | -

 

Results from other forms of validity analyses not compatible with the above table format:

The following steps were taken to protect against threats to internal validity: (a) counterbalancing of measure presentation, (b) random ordering of students on individual measures, and (c) random selection of students within classrooms. Counterbalancing of presentation order took place by measure through the random assignment of conditions to teacher participants, with corrections made after random assignment to ensure even distribution of conditions within site and grade group.

 

Describe the degree to which the provided data support the validity of the tool:

Results of the Johnson et al. (2016) study:

The DB scale, on which lower scores indicate less risk (e.g., less disruption), was positively correlated with the BESS T-scale, on which lower scores also indicate less risk, across all grades and time points. All correlations were statistically significantly different from 0 at the p < .001 level, using the Holm–Bonferroni correction for Type I error inflation (Holm, 1979). These results, in addition to the steps taken to protect against threats to internal validity (see above), provide evidence strengthening the validity of DBR-SIS DB scores.
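
For readers unfamiliar with the Holm–Bonferroni step-down procedure mentioned above, this is a small illustrative implementation (the p-values are invented):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm (1979) step-down correction: test p-values from smallest to largest
    against alpha/(m - rank); stop rejecting at the first failure."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # all larger p-values are also retained
    return reject

# Invented p-values for three correlation tests:
print(holm_bonferroni([0.001, 0.03, 0.06]))  # [True, False, False]
```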

Results of the Kilgus et al. (2014) study:

Bivariate correlations between BESS and DBR-SIS DB scores and between SRSS and DBR-SIS DB scores were all in the expected direction (i.e., DB scores were positively correlated with higher-risk scores on the BESS and SRSS) and were statistically significant at the p < .001 level.

Results of the Chafouleas et al. (2013) study:

All correlations between DBR-SIS DB and BESS T-scores and between DBR-SIS DB and SRSS scores were statistically significant at the .001 level and in the expected direction. Additionally, the influence of subgroup size (e.g., ratings of students within two vs. three subgroups) was taken into consideration, and no differences in correlations were found.

Bias Analysis Conducted

Age/Grade:   Grades 6-8   Grades K-5
Informant:   Teacher      Teacher
Rating:      No           No

Have additional analyses been conducted to establish whether the tool is or is not biased against demographic subgroups (e.g., students who vary by race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)?

Bias Analysis Method:

No qualifying evidence provided.

 

Subgroups Included:

No qualifying evidence provided.

 

Bias Analysis Results:

No qualifying evidence provided.

Sensitivity

Age/Grade:   Grades 6-8           Grades K-5
Informant:   Teacher              Teacher
Rating:      Half-filled bubble   Full bubble

Describe evidence that the monitoring system produces data that are sensitive to detect incremental change (i.e., small behavior change in a short period of time):

Evidence that DBR-SIS can produce data sensitive enough to detect incremental change (i.e., small behavior change in a short period of time) is provided in the three studies below. Actual, not hypothetical, data are available to demonstrate how DBR-SIS has been used to monitor student performance on a frequent basis to inform decisions about student performance. The studies below represent a continuum from classwide (middle school, elementary) to individual (elementary) student focus. Graphs are provided in two of the three manuscripts (Journal of Behavioral Education; Assessment for Effective Intervention) to illustrate how the data are sensitive enough to assess change; the third manuscript (Exceptional Children) presents aggregated information in table format only, given the volume of data.

Chafouleas, S. M., Sanetti, L. M. H., Kilgus, S. P., & Maggin, D. M. (2012). Evaluating sensitivity to behavioral change across consultation cases using Direct Behavior Rating Single-Item Scales (DBR-SIS). Exceptional Children, 78, 491-505.

Abstract: In this study, the sensitivity of Direct Behavior Rating Single Item Scales (DBR-SIS) for assessing behavior change in response to an intervention was evaluated. Data from 20 completed behavioral consultation cases involving a diverse sample of elementary participants and contexts utilizing a common intervention in an A-B design were included in analyses. Secondary purposes of the study were to investigate the utility of five metrics proposed for understanding behavioral response, as well as the correspondence among these metrics and teachers’ ratings of intervention acceptability. Overall, results suggest that DBR-SIS demonstrated sensitivity to behavior change regardless of the metric used. Furthermore, there was limited association between student change and teachers’ ratings of acceptability.

Chafouleas, S. M., Sanetti, L. M. H., Jaffery, R., & Fallon, L. (2012). Research to practice: An evaluation of a class-wide intervention package involving self-management and a group contingency on behavior of middle school students. Journal of Behavioral Education, 21, 34-57. doi:10.1007/s10864-011-9135-8

Abstract: The effectiveness of an intervention package involving self-management and a group contingency at increasing appropriate classroom behaviors was evaluated in a sample of middle school students. Participants included all students in each of the 3 eighth-grade general education classrooms and their teachers. The intervention package included strategies recommended as part of best practice in classroom management to involve both building skill (self-management) and reinforcing appropriate behavior (group contingency). Data sources involved assessment of targeted behaviors using Direct Behavior Rating—single item scales completed by students and systematic direct observations completed by external observers. Outcomes suggested that, on average, student behavior moderately improved during intervention as compared to baseline when examining observational data for off-task behavior. Results for Direct Behavior Rating data were not as pronounced across all targets and classrooms in suggesting improvement for students. Limitations and future directions, along with implications for school-based practitioners working in middle school general education settings, are discussed.

Riley-Tillman, T.C., Methe, S.A., & Weegar, K. (2009). Examining the use of Direct Behavior Rating methodology on classwide formative assessment: A case study. Assessment for Effective Intervention, 34, 242-250. doi:10.1177/1534508409333879

Abstract: High-quality formative assessment data are critical to the successful application of any problem-solving model (e.g., response to intervention). Formative data available for a wide variety of outcomes (academic, behavior) and targets (individual, class, school) facilitate effective decisions about needed intervention supports and responsiveness to those supports. The purpose of the current case study is to provide preliminary examination of direct behavior rating methods in class-wide assessment of engagement. A class-wide intervention is applied in a single-case design (B-A-B-A), and both systematic direct observation and direct behavior rating are used to evaluate effects. Results indicate that class-wide direct behavior rating data are consistent with systematic direct observation across phases, suggesting that in this case study, direct behavior rating data are sensitive to classroom-level intervention effects. Implications for future research are discussed.

In addition, the following study provides evidence that DBR-SIS for both Academic Engagement and Disruptive Behavior is also sensitive to change in an intensive-need population. In this study, all students had demonstrated weaknesses in social competence, and a majority were formally diagnosed with autism or emotional disturbance. It was demonstrated that when behavior changed over time, both DBR and systematic direct observation changed accordingly. In this case, SDO was used as a marker to document DBR sensitivity to change for both academic engagement and disruptive behavior.

Kilgus, S. P., Riley-Tillman, T. C., Stichter, J. P., Schoemann, A., & Owens, S. (in press). Examining the concurrent criterion-related validity of Direct Behavior Rating Single Item Scales (DBR-SIS) with students with high functioning autism. Assessment for Effective Intervention.

Abstract: A line of research has supported the development and validation of Direct Behavior Rating – Single Item Scales (DBR-SIS) for use in progress monitoring. Yet this research was largely conducted within the general education setting with typically developing children. It is unknown whether the tool may be defensibly used with students exhibiting more substantial concerns, including students with social competence difficulties. The purpose of this investigation was to examine the concurrent validity of DBR-SIS in a middle school sample of students exhibiting substantial social competence concerns (n = 58). Students were assessed using both DBR-SIS and systematic direct observation (SDO) across three target behaviors. Each student was enrolled in one of two interventions: the Social Competence Intervention or a business-as-usual control condition. Students were assessed across three time points: baseline, mid-intervention, and post-intervention. A review of across-time correlations indicated small to moderate correlations between DBR-SIS and SDO data (r = .25-.45). Results further suggested that the relationships between DBR-SIS and SDO targets were small to large at baseline. Correlations attenuated over time, though differences across time points were not statistically significant, with the exception of academic engagement correlations, which remained moderate to high across all time points.

Reliability: Intensive Population

Age/Grade:   Grades 6-8   Grades K-5
Informant:   Teacher      Teacher
Rating:      -            -

Justify the appropriateness of each type of reliability reported:

No qualifying evidence provided.

 

Describe the sample characteristics for each reliability analysis conducted:

No qualifying evidence provided.

 

Describe reliability of the slope analyses conducted with a population of students in need of intensive intervention:

No qualifying evidence provided.

Validity: Intensive Population

Age/Grade:   Grades 6-8     Grades K-5
Informant:   Teacher        Teacher
Rating:      Empty bubble   Empty bubble

Describe and justify the criterion measures used to demonstrate validity:

Concurrent validity serves as the primary source of validity data for DBR-SIS. As described, the intended purpose of DBR-SIS is formative use. As such, a primary source of validity data comes from concurrent comparisons with systematic direct observation.

Kilgus, S. P., Riley-Tillman, T. C., Stichter, J. P., Schoemann, A., & Owens, S. (in press). Examining the concurrent criterion-related validity of Direct Behavior Rating Single Item Scales (DBR-SIS) with students with high functioning autism. Assessment for Effective Intervention.

SDO data were collected across 15-minute observation sessions, each of which was divided into 30-second intervals. The SDO employed both partial-interval recording and momentary time sampling to estimate the percentage of time target students engaged in relevant classroom behaviors. Three OCF behaviors were considered as part of this study. Academic engagement (SDO-AE) was defined as physical orientation to the teacher or current stimuli, or active participation in the lesson or social interaction. Disruptive behavior (SDO-DB) was defined as purposeful engagement in behavior that interrupts the natural flow of academic instruction or classroom functioning. Noncompliance (SDO-NC) was defined as failure to follow or complete verbal or gestural behavioral directions provided by the teacher to a group or target student within 5 seconds. Note that SDO-DB and SDO-NC were coded using partial-interval recording (where a behavior was marked as having occurred if it was observed at any point within each 30-second interval), whereas SDO-AE was coded using momentary time sampling (where a behavior was marked as having occurred if it was observed at the end of each 30-second interval). Partial-interval recording was deemed appropriate given the typically irregular and brief, albeit still interruptive, nature of both DB and NC. Momentary time sampling was also considered appropriate given the expectation of frequent and nearly continuous AE within the classroom.
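
The difference between the two coding schemes can be sketched as follows; the interval data are invented for illustration:

```python
def partial_interval_percent(any_occurrence):
    """Partial-interval recording: an interval counts if the behavior occurred
    at ANY point within it (one bool per 30-second interval)."""
    return 100.0 * sum(any_occurrence) / len(any_occurrence)

def momentary_sample_percent(at_interval_end):
    """Momentary time sampling: an interval counts only if the behavior was
    occurring AT THE END of the interval (one bool per interval)."""
    return 100.0 * sum(at_interval_end) / len(at_interval_end)

# A 15-minute session = 30 intervals of 30 seconds each.
disruptive = [False] * 24 + [True] * 6   # brief disruptions observed in 6 intervals
engaged = [True] * 27 + [False] * 3      # engaged at 27 of 30 end-of-interval checks
print(partial_interval_percent(disruptive))  # 20.0 -> SDO-DB estimate
print(momentary_sample_percent(engaged))     # 90.0 -> SDO-AE estimate
```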

 

Describe the sample characteristics for each validity analysis conducted:

Participants met the following inclusion criteria: (a) student age of 11 to 14, (b) diagnosis of ASD, Special Education eligibility criteria of autism, or school-identified social need, and (c) cognitive functioning (i.e., full-scale IQ) within 2.0 standard deviations of the mean. A sample of 33 students at six schools constituted the SCI-A group, and 30 students at six schools constituted the BAU group, for a total of 63 participants. Two students were dropped from analyses because of misreported IQ scores, and one additional student was dropped because of a lack of data on outcome measures. The resulting sample included 60 students (29 SCI-A and 31 BAU). Parent consent and student assent were obtained before the start of the study. Across all student participants, 55 students were male and 5 were female. The majority of participants met criteria for special education services: 43.33% in the Autism category, 25% in the Emotional Disturbance category, and 20% in the Other Health Impairment category. Two students met eligibility for Specific Learning Disability, and one student met eligibility for Speech/Language Impairment. Four students did not have a current individualized education plan (IEP), and one student had a Section 504 Plan without an IEP.

 

Describe predictive validity of the slope of improvement analyses conducted with a population of students in need of intensive intervention:

Correlation coefficients were calculated to examine the relationship between each DBR and SDO target within each time point (i.e., pre, mid, post). Hypothesized convergent relations corresponded to the pairings of (a) DBR-AE and SDO-AE, (b) DBR-DB and SDO-DB, and (c) DBR-RS and SDO-NC. All other DBR-SDO pairings were hypothesized to be discriminant relations and were thus expected to be lower in magnitude relative to convergent relations. We followed Cohen's (1988) guidelines for effect size interpretations of correlation magnitudes, where r ≥ .10 was considered small, r ≥ .30 medium, and r ≥ .50 large. In the interest of limiting over-interpretation of spurious or non-meaningful relations, conclusions regarding the presence of concurrent criterion-related validity were limited to medium and large correlations.

Next, correlation coefficients were compared across time points within each DBR-SDO pairing to examine the extent to which correlation magnitude varied over time. This testing was accomplished via chi-square nested model comparisons between a model with correlations freely estimated across time and a model that specified correlational equivalence (H0: ρ1 = ρ2 = ρ3). Finally, a single overall correlation was estimated and evaluated within each DBR-SDO pair to evaluate the relationship between the measures across all time points. All correlations were estimated with Mplus v. 7.11 (Muthén & Muthén, 1998–2013).

 

Type of Validity | Age or Grade | Test or Criterion             | n (examinees) | n (raters) | Coefficient | Confidence Interval
Concurrent       | Ages 11-14   | Systematic Direct Observation | 63            | 23         | 0.32        | -

 

Describe the degree to which the provided data support the validity of the tool:

The results of this study are consistent with all other validity research on DBR-AE. In addition, other research has included students with high needs. This combination extends support for the validity of DBR as a progress monitoring tool for children with intensive needs.

Decision Rules: Changing Intervention

Age/Grade:   Grades 6-8   Grades K-5
Informant:   Teacher      Teacher
Rating:      -            -

Specification of validated decision rules for when changes to the intervention should be made:

No qualifying evidence provided.

 

Evidentiary basis for these rules:

No qualifying evidence provided.

Decision Rules: Choosing Intervention

Age/Grade:   Grades 6-8   Grades K-5
Informant:   Teacher      Teacher
Rating:      -            -

Specification of validated decision rules to inform intervention selection:

No qualifying evidence provided.

 

Evidentiary basis for these rules:

No qualifying evidence provided.

Administration Format

Age/Grade:   Grades 6-8                          Grades K-5
Informant:   Teacher                             Teacher
Data:        Direct Observation; Rating Scale    Direct Observation; Rating Scale

Admin & Scoring Time

Age/Grade:   Grades 6-8   Grades K-5
Informant:   Teacher      Teacher
Data:        Variable     Variable

Scoring Format

Age/Grade:   Grades 6-8       Grades K-5
Informant:   Teacher          Teacher
Data:        Computer-scored  Computer-scored

Levels of Performance

Age/Grade:   Grades 6-8              Grades K-5
Informant:   Teacher                 Teacher
Data:        At-risk / Not-at-risk   At-risk / Not-at-risk

Specify the levels of performance and how they are used for progress monitoring:

Chafouleas, S. M., Kilgus, S. P., Jaffery, R., Riley-Tillman, T. C., & Welsh, M. (2013). Direct Behavior Rating as a school-based behavior screener for elementary and middle grades. Journal of School Psychology, 51(3), 367-385.

 

Johnson, A. H., Miller, F. G., Chafouleas, S. M., Welsh, M. E., Riley-Tillman, T. C., & Fabiano, G. (2016). Evaluating the technical adequacy of DBR-SIS in tri-annual behavioral screening: A multisite investigation. Journal of School Psychology, 54, 39-57.

The manuscripts cited above describe how levels of performance were obtained through ROC analyses. These analyses produce conditional probability indices that can be used to determine an optimal cut score for identifying risk. This cut score serves as the level of performance against which an individual student can be compared. These publications established cut scores with relatively small confidence intervals. Findings indicated that the established cuts were much more accurate in identifying at-risk students than would be expected by chance alone. The following cut scores were established for various grade groups:

  • Early Elementary (K-2): DB = 2
  • Upper Elementary (3-5): DB = 1
  • Middle School (6-8): DB = 1

This information is presented as preliminary and with a few important caveats. For example, subsequent analyses using data collected from a different sample suggest that it may not be appropriate to set uniform cuts across a grade group. In particular, cuts are not consistent across grade levels for upper elementary students, and different cuts may be needed for different portions of the school year (fall, winter, spring).
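
To illustrate the kind of ROC-based cut-score selection described above, here is a schematic sketch using scikit-learn; the data are simulated, and Youden's J is one common optimality criterion, not necessarily the exact index used in the cited studies:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
# Simulated screening data: a binary at-risk flag from an established criterion
# measure, and mean DBR-SIS DB scores that run higher for at-risk students.
at_risk = rng.random(500) < 0.15
db_score = np.where(at_risk,
                    rng.normal(4.0, 1.5, 500),
                    rng.normal(1.0, 1.0, 500)).clip(0, 10)

fpr, tpr, thresholds = roc_curve(at_risk, db_score)
best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"optimal DB cut score ~ {thresholds[best]:.1f}")
```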

Usability Study

Age/Grade:   Grades 6-8   Grades K-5
Informant:   Teacher      Teacher
Data:        Yes          Yes

If a usability study has been conducted on your tool, describe the results of the study:

Riley-Tillman, T. C., Chafouleas, S. M., Briesch, A. M., & Eckert, T. (2008). Daily Behavior Report Cards and Systematic Direct Observation: An investigation of the acceptability, reported training and use, and decision reliability among school psychologists. Journal of Behavioral Education, 17, 313-327. doi:10.1007/s10864-008-9070-5

Abstract: More than ever, educators require assessment procedures and instrumentation that are technically adequate as well as efficient, to guide data-based decision making. Thus, there is a need to understand perceptions of available tools, and the decisions made when using collected data, by the primary users of those data. In this paper, two studies that surveyed members of the National Association of School Psychologists with regard to two procedures useful in formative assessment (i.e., Daily Behavior Report Cards and Systematic Direct Observation) are presented. Participants reported greater overall levels of training and use of Systematic Direct Observation than Daily Behavior Report Cards, yet both techniques were rated as equally acceptable for use in formative assessment. Furthermore, findings supported that school psychologists tend to make similar intervention decisions when presented with both types of data. Implications, limitations, and future directions are discussed.

Miller, F. G., Neugebauer, S. R., Chafouleas, S. M., Briesch, A. M., Welsh, M. E., Riley-Tillman, T. C., & Fabiano, G. A. (2012, August). Teacher perceptions of behavior screening assessments. Poster presentation at the American Psychological Association Annual Convention, Orlando, FL.

Abstract: This study aimed to investigate teachers’ perceptions of the usability (acceptability, understanding, feasibility, home-school collaboration, systems climate, and systems support) of three school-based behavior assessments. Public school teachers in grades 1, 2, 4, 5, 7, and 8, located across three geographic regions (N = 133), served as participants. Overall, teachers rated the three behavioral assessments positively, with perceived greater understanding of DBR-SIS than of the other measures. One possible reason for this greater perceived understanding is that DBR-SIS may be more easily interpreted than other rating scale formats, because DBR-SIS ratings are intended to reflect the percentage of time a student engaged in a target behavior. Understanding teacher perceptions of behavioral rating scales is important; such assessments can be used to identify barriers to implementation, for the purpose of either removing those barriers or selecting an alternative option with a greater likelihood of success.

 

If a social validity study has been conducted on your tool, describe the results of the study:

No qualifying evidence provided.