mCLASS
Mathematics
Summary
mCLASS:Math is a set of screening and progress monitoring measures for grades K-3. Measures in grades K and 1 are administered individually by a teacher using a handheld computer. While the student performs an assessment task using paper-based assessment materials or verbally presented prompts, the teacher follows along on the handheld, tapping with the stylus to record the student’s performance. The handheld software offers a pre-loaded class list indicating required assessment tasks, provides the teacher with directions and prompts to ensure standardized, accurate administration, and automates the precise timing requirements. Upon completion of each task, the handheld automatically calculates the student’s score and provides a risk evaluation. Measures in grades 2 and 3 are group-administered on paper, with online entry of scores at www.mclassmath.com. Student performance data can then be securely and immediately transferred to the Web-based mCLASS reporting system; all that is needed is a single Internet-connected computer, from which users push one button to “sync” the assessment data to the reporting system. The mCLASS:Math Web site offers a range of reports at the district, school, class, and individual student levels for further analysis. The set of measures in the screening battery is designed to be administered at the beginning, middle, and end of the year, with alternate forms of all measures available for progress monitoring between benchmark windows.
- Where to Obtain:
- Dr. Herbert Ginsburg of Teachers College, Columbia University / Wireless Generation, Inc.
- 55 Washington Street Suite 800 Brooklyn, NY 11201-1071
- 800-823-1969, option 1
- www.wirelessgeneration.com
- Initial Cost:
- $13.90 per student
- Replacement Cost:
- $13.90 per student per year
- Included in Cost:
- $400 start-up per campus for Remote Installation (one per campus): telephone guidance through the installation of mCLASS software on teacher handhelds and desktop computers, including a step-by-step walkthrough of the install process, troubleshooting, and verification of installation success. Onsite Installation, which covers onsite installation of mCLASS software onto teacher handhelds, setup of up to three central sync stations per campus, and preparation of a training room (typically a computer lab), is available for $1,400 per campus. The basic pricing plan is an annual per-student license of $13.90. For users already using an mCLASS assessment product, the cost to add mCLASS:Math is $5 per student. Each teacher administering mCLASS:Math needs:
  - A handheld computer (typically less than $200 from national resellers)
  - A kit (contents described above): $35
  - An Internet-connected computer for synchronization and viewing reports
- Training Requirements:
- 4-8 hours of training
- Qualified Administrators:
- Paraprofessional
- Access to Technical Support:
- Wireless Generation’s Customer Care Center offers complete user-level support from 7:00 a.m. to 7:00 p.m. EST, Monday through Friday. Customers may contact a customer support representative via telephone, e-mail, or electronically through the mCLASS Website. Calls to the Customer Care Center’s toll-free number are answered immediately by an automated attendant and routed to customer support agents according to regional expertise. Additionally, customers have self-service access to instructions, documents, and frequently asked questions on our Website. The research staff and product teams are available to answer questions about the content within the assessments.
- Assessment Format:
- One-to-one
- Scoring Time:
- 10 minutes per 2-3 students
- Scores Generated:
- Raw score
- Percentile score
- Developmental benchmarks
- Composite scores
- Subscale/subtest scores
- Administration Time:
- 2 minutes per student
- Scoring Method:
- Manually (by hand)
- Technology Requirements:
- Computer or tablet
- Internet connection
- Accommodations:
Descriptive Information
- Please provide a description of your tool:
- mCLASS:Math is a set of screening and progress monitoring measures for grades K-3. Measures in grades K and 1 are administered individually by a teacher using a handheld computer. While the student performs an assessment task using paper-based assessment materials or verbally presented prompts, the teacher follows along on the handheld, tapping with the stylus to record the student’s performance. The handheld software offers a pre-loaded class list indicating required assessment tasks, provides the teacher with directions and prompts to ensure standardized, accurate administration, and automates the precise timing requirements. Upon completion of each task, the handheld automatically calculates the student’s score and provides a risk evaluation. Measures in grades 2 and 3 are group-administered on paper, with online entry of scores at www.mclassmath.com. Student performance data can then be securely and immediately transferred to the Web-based mCLASS reporting system; all that is needed is a single Internet-connected computer, from which users push one button to “sync” the assessment data to the reporting system. The mCLASS:Math Web site offers a range of reports at the district, school, class, and individual student levels for further analysis. The set of measures in the screening battery is designed to be administered at the beginning, middle, and end of the year, with alternate forms of all measures available for progress monitoring between benchmark windows.
ACADEMIC ONLY: What skills does the tool screen?
- Please describe specific domain, skills or subtests:
- BEHAVIOR ONLY: Which category of behaviors does your tool target?
- BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each sub-domain or sub-construct.
Acquisition and Cost Information
Administration
- Are norms available?
- Yes
- Are benchmarks available?
- Yes
- If yes, how many benchmarks per year?
- 3
- If yes, for which months are benchmarks available?
- August-October, January-February, March-April
- BEHAVIOR ONLY: Can students be rated concurrently by one administrator?
- If yes, how many students can be rated concurrently?
Training & Scoring
Training
- Is training for the administrator required?
- Yes
- Describe the time required for administrator training, if applicable:
- 4-8 hours of training
- Please describe the minimum qualifications an administrator must possess.
- Paraprofessional
- No minimum qualifications
- Are training manuals and materials available?
- Yes
- Are training manuals/materials field-tested?
- Yes
- Are training manuals/materials included in cost of tools?
- Yes
- If No, please describe training costs:
- Can users obtain ongoing professional and technical support?
- Yes
- If Yes, please describe how users can obtain support:
- Wireless Generation’s Customer Care Center offers complete user-level support from 7:00 a.m. to 7:00 p.m. EST, Monday through Friday. Customers may contact a customer support representative via telephone, e-mail, or electronically through the mCLASS Website. Calls to the Customer Care Center’s toll-free number are answered immediately by an automated attendant and routed to customer support agents according to regional expertise. Additionally, customers have self-service access to instructions, documents, and frequently asked questions on our Website. The research staff and product teams are available to answer questions about the content within the assessments.
Scoring
- Do you provide basis for calculating performance level scores?
- Yes
- Does your tool include decision rules?
- If yes, please describe.
- Can you provide evidence in support of multiple decision rules?
- No
- If yes, please describe.
- Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
- Raw scores on each measure are the number of correct responses within the time limit (1 or 2 minutes). For Computation in grades 2-3, the raw score is the number of correct digits in the student’s responses (e.g., since the correct answer to 25 + 8 is 33, the response 23 is worth 1 point for the correct ones digit, while 33 is worth 2 points). Percentile ranks for each measure and time of year are calculated at the district level. Developmental benchmarks for each measure, grade, and time of year (beginning, middle, end) report each score as deficit, emerging, or established. Composite score: based on the number of measures on which the student performs at the deficit, emerging, or established level for a given time of year, an overall instructional support recommendation is reported for each student. Slopes of student performance over the course of a year on any measure are automatically graphed in Web-based reports at www.mclassmath.com. An aimline is charted from the student’s screening score to an end-of-year goal.
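To make the correct-digit scoring rule concrete, here is a minimal sketch in Python of one common convention: align the student’s response with the correct answer from the ones place and count matching digits. The function is hypothetical and the right-alignment rule is an assumption; consult the published mCLASS:Math scoring guidelines for the authoritative rules and edge cases.

```python
def correct_digits(response: str, answer: str) -> int:
    """Count digits in `response` that match `answer`, aligning both from
    the rightmost (ones) place. Hypothetical helper: the actual
    mCLASS:Math rules may treat extra or missing digits differently."""
    return sum(r == a for r, a in zip(reversed(response), reversed(answer)))

# The worked example from the scoring description: the answer to 25 + 8 is 33.
print(correct_digits("23", "33"))  # 1 point: only the ones digit matches
print(correct_digits("33", "33"))  # 2 points: both digits match
```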
- Describe the tool’s approach to screening, samples (if applicable), and/or test format, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
- In each grade, a set of measures (4 in K, 6 in G1, 5 in G2-3) makes up the screening battery that is given to all students three times per year. In grades K-1, each measure is timed at one minute and individually administered using the handheld software. In grades 2-3, the measures are timed at two minutes and group-administered. Progress monitoring is designed to be administered as often as bi-weekly for students who are identified by the screening as needing intensive instructional support, or monthly for those at lesser risk. The measures can be administered in English or Spanish. The language and reading requirements for completing the measures have been kept to a minimum so that they address mathematics ability with as little interference as possible from language or cultural factors. mCLASS:Math handheld-to-Web software also enables teachers to administer individual Diagnostic Interviews to observe students’ problem-solving, probe students’ mathematical thinking, and receive tailored instructional guidance.
Technical Standards
Classification Accuracy & Cross-Validation Summary
Grade | Kindergarten | Grade 1 | Grade 2 | Grade 3
---|---|---|---|---
Classification Accuracy Fall | | | |
Classification Accuracy Winter | | | |
Classification Accuracy Spring | | | |
Woodcock-Johnson III
Classification Accuracy
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- The Woodcock-Johnson III (WJ-III) was administered to students in Kindergarten through Grade 3 within days of the administration of mCLASS:Math, and its Broad Math standard score was used to define the “risk” and “no-risk” categories. “Risk” was assigned to students with a WJ-III Broad Math standard score below 100, the normative average; “no risk” was assigned to students with WJ-III Broad Math standard scores above 100.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- Student performance on individual measures is classified as “Deficit”, “Emerging”, or “Established” based on previous analysis employing a norm-referenced achievement measure (the Woodcock-Johnson III). Overall status is further described as “Intensive”, “Strategic”, or “Benchmark”, indicating a decreasing need for instructional support. The determination of overall status levels is the result of expert review of the possible patterns of classification resulting from all individual measures at each grade level. “Benchmark” as an overall status is intended to indicate students at very low risk of later mathematics difficulty. Conversely, “Intensive” as an overall status is intended to indicate students who are currently experiencing, and are very likely to continue to experience, mathematics difficulty, barring intensive instructional support above and beyond the typical classroom experience. An overall status of “Strategic” indicates students who may be at some risk for mathematics difficulty but who may also turn out to be on track with less intensive instructional supports. The three-tiered delineation of risk is designed to identify the students most in need of instructional support (“Intensive”) and those who, we can say with greater confidence, are on track. For the current analysis, results are reported using first “Benchmark” (low risk) and then “Intensive” and below (at risk) as the cut-points. ROC curves were calculated in the R software package (R Development Core Team, 2011) using the “ROCR” library; the outcome measure was the dichotomized criterion variable described previously, and the predictor measure was the observed overall status.
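To illustrate the shape of this analysis, the sketch below dichotomizes a criterion score at 100 and computes an AUC from an ordinal risk status, as described above. It uses Python and scikit-learn as a stand-in for the R/ROCR workflow, and every data value and the status-to-rank mapping are hypothetical.

```python
# Illustrative stand-in for the R/ROCR analysis described above.
from sklearn.metrics import roc_auc_score

# Hypothetical WJ-III Broad Math standard scores (criterion measure).
wj_broad_math = [92, 104, 88, 115, 99, 101, 96, 110]

# Hypothetical mCLASS:Math overall status for the same students (predictor).
overall_status = ["Intensive", "Benchmark", "Intensive", "Benchmark",
                  "Strategic", "Strategic", "Intensive", "Benchmark"]

# Dichotomize the criterion: standard scores below 100 count as "risk".
at_risk = [int(score < 100) for score in wj_broad_math]

# Encode the ordinal status so a higher value means higher predicted risk.
risk_rank = {"Benchmark": 0, "Strategic": 1, "Intensive": 2}
predicted_risk = [risk_rank[status] for status in overall_status]

print(f"AUC = {roc_auc_score(at_risk, predicted_risk):.2f}")
```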
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
Cross-Validation
- Has a cross-validation study been conducted?
- No
- If yes,
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
Classification Accuracy - Fall
Evidence | Kindergarten | Grade 1 | Grade 2 | Grade 3
---|---|---|---|---
Criterion measure | Woodcock-Johnson III | Woodcock-Johnson III | Woodcock-Johnson III | Woodcock-Johnson III
Cut Points - Percentile rank on criterion measure | | | |
Cut Points - Performance score on criterion measure | 99.00 | 99.00 | 99.00 | 99.00
Cut Points - Corresponding performance score (numeric) on screener measure | | | |
Classification Data - True Positive (a) | 19 | 2 | 66 | 54
Classification Data - False Positive (b) | 59 | 76 | 52 | 54
Classification Data - False Negative (c) | 8 | 0 | 18 | 19
Classification Data - True Negative (d) | 112 | 147 | 108 | 73
Area Under the Curve (AUC) | 0.68 | 0.66 | 0.73 | 0.67
AUC Estimate’s 95% Confidence Interval: Lower Bound | | | |
AUC Estimate’s 95% Confidence Interval: Upper Bound | | | |
Statistics | Kindergarten | Grade 1 | Grade 2 | Grade 3 |
---|---|---|---|---|
Base Rate | 0.14 | 0.01 | 0.34 | 0.37 |
Overall Classification Rate | 0.66 | 0.66 | 0.71 | 0.64 |
Sensitivity | 0.70 | 1.00 | 0.79 | 0.74 |
Specificity | 0.65 | 0.66 | 0.68 | 0.57 |
False Positive Rate | 0.35 | 0.34 | 0.33 | 0.43 |
False Negative Rate | 0.30 | 0.00 | 0.21 | 0.26 |
Positive Predictive Power | 0.24 | 0.03 | 0.56 | 0.50 |
Negative Predictive Power | 0.93 | 1.00 | 0.86 | 0.79 |
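As a cross-check on the statistics above, the following sketch recomputes the fall kindergarten column directly from the 2x2 classification counts (true positive a, false positive b, false negative c, true negative d) reported in the evidence table. This is illustrative arithmetic only, not part of the published analysis.

```python
# Fall kindergarten classification counts from the evidence table above.
a, b, c, d = 19, 59, 8, 112   # TP, FP, FN, TN
n = a + b + c + d             # 198, matching the reported sample size

print(f"Base rate:                   {(a + c) / n:.2f}")  # 0.14
print(f"Overall classification rate: {(a + d) / n:.2f}")  # 0.66
print(f"Sensitivity:                 {a / (a + c):.2f}")  # 0.70
print(f"Specificity:                 {d / (b + d):.2f}")  # 0.65
print(f"False positive rate:         {b / (b + d):.2f}")  # 0.35
print(f"False negative rate:         {c / (a + c):.2f}")  # 0.30
print(f"Positive predictive power:   {a / (a + b):.2f}")  # 0.24
print(f"Negative predictive power:   {d / (c + d):.2f}")  # 0.93
```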
Sample | Kindergarten | Grade 1 | Grade 2 | Grade 3
---|---|---|---|---
Date | 2008 | 2008 | 2008 | 2008
Sample Size | 198 | 225 | 244 | 200
Geographic Representation | Middle Atlantic (NY), West North Central (MO) | Middle Atlantic (NY), West North Central (MO) | Middle Atlantic (NY), West North Central (MO) | Middle Atlantic (NY), West North Central (MO)
Male | | | |
Female | | | |
Other | | | |
Gender Unknown | | | |
White, Non-Hispanic | | | |
Black, Non-Hispanic | | | |
Hispanic | | | |
Asian/Pacific Islander | | | |
American Indian/Alaska Native | | | |
Other | | | |
Race / Ethnicity Unknown | | | |
Low SES | | | |
IEP or diagnosed disability | | | |
English Language Learner | | | |
Reliability
Grade | Kindergarten | Grade 1 | Grade 2 | Grade 3
---|---|---|---|---
Rating | | | |
- *Offer a justification for each type of reliability reported, given the type and purpose of the tool.
- Not Provided
- *Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
- Not Provided
- *Describe the analysis procedures for each reported type of reliability.
- Not Provided
*In the table(s) below, report the results of the reliability analyses described above (e.g., internal consistency or inter-rater reliability coefficients).
Type of Reliability | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
---|---|---|---|---|---|---|---|---
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- Provide citations for additional published studies.
- Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
If yes, fill in data for each subgroup with disaggregated reliability data.
Type of Reliability | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
---|---|---|---|---|---|---|---|---
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- Provide citations for additional published studies.
Validity
Grade | Kindergarten | Grade 1 | Grade 2 | Grade 3
---|---|---|---|---
Rating | | | |
- *Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
- Not Provided
- *Describe the sample(s), including size and characteristics, for each validity analysis conducted.
- Not Provided
- *Describe the analysis procedures for each reported type of validity.
- Not Provided
*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.
Type of Validity | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
---|---|---|---|---|---|---|---|---
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- Provide citations for additional published studies.
- Describe the degree to which the provided data support the validity of the tool.
- Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
If yes, fill in data for each subgroup with disaggregated validity data.
Type of Validity | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
---|---|---|---|---|---|---|---|---
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- Provide citations for additional published studies.
Bias Analysis
Grade | Kindergarten | Grade 1 | Grade 2 | Grade 3
---|---|---|---|---
Rating | No | No | No | No
- Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
- No
- If yes,
- a. Describe the method used to determine the presence or absence of bias:
- b. Describe the subgroups for which bias analyses were conducted:
- c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.
Data Collection Practices
Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.