Classworks Progress Monitoring
Reading
Summary
Classworks Progress Monitoring is a web-based assessment and reporting system that includes both easy-to-administer progress monitoring assessments and real-time data and reports. Classworks Progress Monitoring assessments are formal assessments used to monitor student performance and measure growth on key strands. Classworks Progress Monitoring allows teachers to track progress and retention of gains with assessments for grades 1-8, given online for immediate and automatic scoring and reporting. Administered weekly, the assessments monitor progress throughout the intervention period and indicate student rate of improvement.
- Where to Obtain:
- Classworks
- hello@classworks.com
- 3470 McClure Bridge Road #3242, Duluth, GA 30096
- 770-355-5555
- www.classworks.com
- Initial Cost:
- $5.00 per student
- Replacement Cost:
- Contact vendor for pricing details.
- Included in Cost:
- Classworks professional development includes free online training and virtual training sessions. Training days can be purchased at $995-$2,500 per day, depending on the type of training and the number of days purchased.
- Classworks assessments and instruction are designed to be accessible for most students. Accessible design features should aid most students who typically require testing accommodations such as large print, audio support, or extra time. The accommodations included are as follows:
  - Extra time: The assessment is untimed, may be stopped and started as needed, and can be administered in multiple test sessions, allowing students who need extra time to finish.
  - Administrations: Students may be given as many days as necessary to complete the test.
  - Presentation: Classworks diagnostic items are presented in a large, easily legible format chosen for its readability. With HTML5, the screen size and font size can be changed.
  - Audio support: Audio support is available for all grades.
  - Setting: Classworks assessments are web-based and can be completed on any device with internet access that meets technical requirements. Students may use headphones to benefit from audio support.
  - Response: Classworks assessments are easily completed on a computer using point and click, or on a tablet using the touch screen for students with motor impairments.
- Training Requirements:
- Less than one hour
- Qualified Administrators:
- No minimum qualifications specified.
- Access to Technical Support:
- Assessment Format:
- Computer-administered
- Scoring Time:
- Scoring is automatic (0 minutes)
- Scores Generated:
- Raw score
- Grade equivalents
- IRT-based score
- Developmental benchmarks
- Equated
- Administration Time:
- 20 minutes per student
- Scoring Method:
- Automatically (computer-scored)
- Technology Requirements:
- Computer or tablet
- Internet connection
Tool Information
Descriptive Information
- Please provide a description of your tool:
- Classworks Progress Monitoring is a web-based assessment and reporting system that includes both easy-to-administer progress monitoring assessments and real-time data and reports. Classworks Progress Monitoring assessments are formal assessments used to monitor student performance and measure growth on key strands. Classworks Progress Monitoring allows teachers to track progress and retention of gains with assessments for grades 1-8, given online for immediate and automatic scoring and reporting. Administered weekly, the assessments monitor progress throughout the intervention period and indicate student rate of improvement.
- Is your tool designed to measure progress towards an end-of-year goal (e.g., oral reading fluency) or progress towards a short-term skill (e.g., letter naming fluency)?
- ACADEMIC ONLY: What dimensions does the tool assess?
- BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each sub-domain or sub-construct.
- BEHAVIOR ONLY: Which category of behaviors does your tool target?
Acquisition and Cost Information
Administration
Training & Scoring
Training
- Is training for the administrator required?
- Yes
- Describe the time required for administrator training, if applicable:
- Less than one hour
- Please describe the minimum qualifications an administrator must possess.
- No minimum qualifications
- Are training manuals and materials available?
- Yes
- Are training manuals/materials field-tested?
- Yes
- Are training manuals/materials included in cost of tools?
- Yes
- If No, please describe training costs:
- Can users obtain ongoing professional and technical support?
- Yes
- If Yes, please describe how users can obtain support:
Scoring
- Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
- Do you provide basis for calculating slope (e.g., amount of improvement per unit in time)?
- Yes
- ACADEMIC ONLY: Do you provide benchmarks for the slopes?
- ACADEMIC ONLY: Do you provide percentile ranks for the slopes?
- Yes
- Describe the tool’s approach to progress monitoring, behavior samples, test format, and/or scoring practices, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
Rates of Improvement and End of Year Benchmarks
- Is minimum acceptable growth (slope of improvement or average weekly increase in score by grade level) specified in your manual or published materials?
- Yes
- If yes, specify the growth standards:
- The Rate of Improvement is calculated for each student, and the resulting target line is used to monitor the progress of the intervention. The student's trend line is a prediction of how the student will progress during the intervention period, based on his or her CBM results; it is first calculated after three probes have been completed. Classworks computes the trend line, also known as the rate of improvement or slope, using an ordinary least squares regression. In essence, the slope is the number of points the student is expected to improve each week, which varies depending on how much progress each student needs to make.
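To make the slope calculation concrete, the following is a minimal sketch of an ordinary least squares trend line fit to weekly probe scores. The function name and data layout are illustrative assumptions, not Classworks' actual implementation.

```python
# Illustrative sketch of the trend-line (rate of improvement) calculation
# described above: an ordinary least squares slope fit to weekly probe
# scores. Function name and data layout are assumptions, not Classworks' API.

def rate_of_improvement(probes: list[tuple[float, float]]) -> float:
    """Return the OLS slope (points per week) for (week, score) pairs."""
    if len(probes) < 3:
        raise ValueError("A trend line needs at least three probes.")
    n = len(probes)
    mean_week = sum(w for w, _ in probes) / n
    mean_score = sum(s for _, s in probes) / n
    covariance = sum((w - mean_week) * (s - mean_score) for w, s in probes)
    variance = sum((w - mean_week) ** 2 for w, _ in probes)
    return covariance / variance

# Example: probes in weeks 1-4 scoring 10, 12, 15, and 17
# yield a slope of 2.4 points per week.
print(rate_of_improvement([(1, 10), (2, 12), (3, 15), (4, 17)]))
```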
- Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?
- No
- If yes, specify the end-of-year performance standards:
- Norming sample table (date; size; gender; SES indicators; race/ethnicity; disability classification; first language; language proficiency status): no data provided.
Performance Level
Reliability
Grade | Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 |
---|---|---|---|---|---|---|
Rating | | | | | | |
- *Offer a justification for each type of reliability reported, given the type and purpose of the tool.
- The Classworks reading assessment affords the means to screen students on multiple occasions (e.g., Fall, Winter, Spring) during the school year. Thus, test-retest reliability is necessary, and we estimate it via the Pearson correlation between Classworks Screener scores of students taking the test in two terms within the school year (Fall/Winter and Winter/Spring). The second reliability statistic is Cronbach's alpha, a measure of internal consistency. This analysis was conducted on a sample of students who had posted scores for three sets of Classworks Screener questions, all of which aimed to measure a single construct: students' proficiency in reading.
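As an illustration of the internal-consistency statistic described above, here is a minimal sketch of a Cronbach's alpha computation, assuming item-level scores are available for each student. The data layout and function name are hypothetical, not Classworks' internals.

```python
# Hypothetical sketch of Cronbach's alpha, the internal-consistency
# statistic described above. Layout: item_scores[i][s] is student s's
# score on item i. Not Classworks' actual code.
from statistics import pvariance

def cronbach_alpha(item_scores: list[list[float]]) -> float:
    """Return Cronbach's alpha for a set of items measuring one construct."""
    k = len(item_scores)                                  # number of items
    item_var_sum = sum(pvariance(item) for item in item_scores)
    totals = [sum(per_student) for per_student in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Example: three items scored 0/1 for five students (alpha is about 0.79).
items = [[1, 0, 1, 1, 0],
         [1, 0, 1, 0, 0],
         [1, 1, 1, 1, 0]]
print(f"alpha = {cronbach_alpha(items):.2f}")
```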
- *Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
- See the sample size information in the data tables below.
- *Describe the analysis procedures for each reported type of reliability.
- See the response to the first question above.
*In the table(s) below, report the results of the reliability analyses described above (e.g., model-based evidence, internal consistency or inter-rater reliability coefficients). Include detail about the type of reliability data, statistic generated, and sample size and demographic information.
Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- No
- Provide citations for additional published studies.
- Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
- No
If yes, fill in data for each subgroup with disaggregated reliability data.
Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- Provide citations for additional published studies.
Validity
Grade | Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 |
---|---|---|---|---|---|---|
Rating | | | | | | |
- *Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
- The validity evidence for the Classworks assessments comes from the relationships of Classworks test scores to NWEA MAP Growth test scores. These relationships include (a) the concurrent performance of students on Classworks tests and their performance on MAP Growth tests, and (b) the predictive relationship between students' performance on Classworks tests and their performance, two testing terms later, on MAP Growth tests. The Measures of Academic Progress (MAP) Growth assessment is used as the outcome measure. Published by NWEA, MAP Growth is regarded as a highly valid and reliable measure of broad reading ability. The NWEA website states, "Our tools are trusted by educators in 140 countries and more than half the schools in the US," which indicates it can be considered an excellent outcome measure for classification studies.
- *Describe the sample(s), including size and characteristics, for each validity analysis conducted.
- See the data chart for sample size information.
- *Describe the analysis procedures for each reported type of validity.
- For the validity analysis, we examined concurrent and predictive validity. Concurrent validity was estimated as the Pearson correlation coefficient between students' fall 2017 Classworks scores and the same students' total scale scores on the MAP Growth assessment, also administered in fall 2017. Predictive validity was estimated as the Pearson correlation coefficient between students' Classworks scores from a given term (fall 2017) and the same students' total scale scores on the MAP Growth assessment administered in winter 2017-18.
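A minimal sketch of the Pearson-correlation computation used for the concurrent and predictive validity estimates described above; the scores shown are made-up placeholders, not actual study data.

```python
# Hypothetical sketch of the concurrent/predictive validity estimates
# described above: Pearson correlations between Classworks scores and
# MAP Growth scale scores for the same students. All scores are made up.
from statistics import correlation  # Python 3.10+

classworks_fall = [412.0, 389.0, 445.0, 401.0, 430.0]  # placeholder scores
map_fall        = [205.0, 198.0, 214.0, 200.0, 209.0]  # same term: concurrent
map_winter      = [210.0, 201.0, 219.0, 206.0, 212.0]  # later term: predictive

concurrent_r = correlation(classworks_fall, map_fall)
predictive_r = correlation(classworks_fall, map_winter)
print(f"concurrent r = {concurrent_r:.2f}, predictive r = {predictive_r:.2f}")
```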
*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.
Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- No
- Provide citations for additional published studies.
- Describe the degree to which the provided data support the validity of the tool.
- Concurrent and predictive validity coefficients, for each grade and each time of year, were consistently in the mid-0.60s to 0.70s. This validity evidence demonstrates a strong relationship between the Classworks reading assessment and the MAP Growth assessments across the grades and times of year reported.
- Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
- No
If yes, fill in data for each subgroup with disaggregated validity data.
Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- Provide citations for additional published studies.
Bias Analysis
Grade | Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 |
---|---|---|---|---|---|---|
Rating | No | No | No | No | No | No |
- Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
- No
- If yes,
- a. Describe the method used to determine the presence or absence of bias:
- b. Describe the subgroups for which bias analyses were conducted:
- c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.
Growth Standards
Sensitivity: Reliability of Slope
Grade | Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 |
---|---|---|---|---|---|---|
Rating | | | | | | |
- Describe the sample, including size and characteristics. Please provide documentation showing that the sample was composed of students in need of intensive intervention. A sample of students with intensive needs should satisfy one of the following criteria: (1) all students scored below the 30th percentile on a local or national norm, or the sample mean on a local or national test fell below the 25th percentile; (2) students had an IEP with goals consistent with the construct measured by the tool; or (3) students were non-responsive to Tier 2 instruction. Evidence based on an unknown sample, or a sample that does not meet these specifications, may not be considered.
- Describe the frequency of measurement (for each student in the sample, report how often data were collected and over what span of time).
- Describe the analysis procedures.
In the table below, report reliability of the slope (e.g., ratio of true slope variance to total slope variance) by grade level (if relevant).
Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- Provide citations for additional published studies.
- Do you have reliability of the slope data that is disaggregated by subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)?
If yes, fill in data for each subgroup with disaggregated reliability of the slope data.
Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- Provide citations for additional published studies.
Sensitivity: Validity of Slope
Grade | Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 |
---|---|---|---|---|---|---|
Rating | | | | | | |
- Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
- Describe the sample(s), including size and characteristics. Please provide documentation showing that the sample was composed of students in need of intensive intervention. A sample of students with intensive needs should satisfy one of the following criteria: (1) all students scored below the 30th percentile on a local or national norm, or the sample mean on a local or national test fell below the 25th percentile; (2) students had an IEP with goals consistent with the construct measured by the tool; or (3) students were non-responsive to Tier 2 instruction. Evidence based on an unknown sample, or a sample that does not meet these specifications, may not be considered.
- Describe the frequency of measurement (for each student in the sample, report how often data were collected and over what span of time).
- Describe the analysis procedures for each reported type of validity.
In the table below, report predictive validity of the slope (correlation between the slope and achievement outcome) by grade level (if relevant).
NOTE: The TRC suggests controlling for initial level when the correlation for slope without such control is not adequate.
Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- Provide citations for additional published studies.
- Describe the degree to which the provided data support the validity of the tool.
- Do you have validity of the slope data that is disaggregated by subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)?
If yes, fill in data for each subgroup with disaggregated validity of the slope data.
Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- Provide citations for additional published studies.
Alternate Forms
Grade | Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 |
---|---|---|---|---|---|---|
Rating | | | | | | |
- Describe the sample for these analyses, including size and characteristics:
- What is the number of alternate forms of equal and controlled difficulty?
- If IRT based, provide evidence of item or ability invariance
- If computer administered, how many items are in the item bank for each grade level?
- If your tool is computer administered, please note how the test forms are derived instead of providing alternate forms:
Decision Rules: Setting & Revising Goals
Grade | Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 |
---|---|---|---|---|---|---|
Rating | | | | | | |
- In your manual or published materials, do you specify validated decision rules for how to set and revise goals?
- If yes, specify the decision rules:
What is the evidentiary basis for these decision rules?
NOTE: The TRC expects evidence for this standard to include an empirical study that compares a treatment group to a control and evaluates whether student outcomes increase when decision rules are in place.
Decision Rules: Changing Instruction
Grade | Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 |
---|---|---|---|---|---|---|
Rating | | | | | | |
- In your manual or published materials, do you specify validated decision rules for when changes to instruction need to be made?
- If yes, specify the decision rules:
What is the evidentiary basis for these decision rules?
NOTE: The TRC expects evidence for this standard to include an empirical study that compares a treatment group to a control and evaluates whether student outcomes increase when decision rules are in place.
Data Collection Practices
Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.