Achieve3000's LevelSet
Reading
Summary
Developed in collaboration with MetaMetrics®, Inc., the makers of the Lexile Framework for Reading®, the LevelSet™ universal screener establishes each student’s initial Lexile reading level in English or in Spanish. LevelSet is the only assessment of its kind that measures a student’s ability to comprehend informational text and provides a scale score that matches reading ability with text complexity. It can be administered up to three times per year: first as a pre-test to establish a baseline Lexile level, forecast readiness for college and career benchmarks, match students with differentiated text, and identify the solution and implementation that will best promote accelerated growth for every student. Interim and post-test administrations provide a summative measure of student growth. LevelSet can be used as a stand-alone assessment or in conjunction with Achieve3000 differentiated instruction. During the test, students read a series of approximately 30 paragraph-long passages and answer a cloze-style question about each one.
- Where to Obtain:
- Achieve3000® and MetaMetrics®, Inc.
- orders@achieve3000.com
- 1985 Cedar Bridge Avenue, Suite 3, Lakewood, NJ 08701
- 732.367.5505
- www.achieve3000.com
- Initial Cost:
- $11.00 per student
- Replacement Cost:
- Contact vendor for pricing details.
- Included in Cost:
- LevelSet can be licensed for $11 per student per year. Online training is available at $440 per session, and onsite training is $2,300 per day. Included in the licenses are LevelSet pre-, interim, and post-test administrations in English and Spanish for grades 2-12 and adult learners, with 3 equivalent, alternate forms per grade. As a cloud-based solution, LevelSet can be used on any device with Internet connectivity.
- The Achieve3000 platform uses design principles that meet ADA and Section 508 requirements. Scaffolds are not provided during the LevelSet assessment for students with disabilities.
- Training Requirements:
- Less than 1 hr of training
- Qualified Administrators:
- No minimum qualifications specified.
- Access to Technical Support:
- On-demand online resources in Ask Achieve3000 provide step-by-step instructions for teachers administering the LevelSet assessment, and student-facing videos introduce students to the assessment, its purpose, administration tips, and preparation guidelines. In addition to these on-demand assets, Achieve3000 curriculum and implementation managers (through onsite or live online training) and the customer support department (by phone or email) can respond to any questions that arise as the assessment is administered.
- Assessment Format:
- Direct: Computerized
- Scoring Time:
- Scoring is automatic
- Scores Generated:
- Percentile score
- Normal curve equivalents
- Lexile score
- Administration Time:
- 15 minutes per assessment
- Scoring Method:
- Automatically (computer-scored)
- Technology Requirements:
- Computer or tablet
- Internet connection
- Accommodations:
- The Achieve3000 platform uses design principles that meet ADA and Section 508 requirements. Scaffolds are not provided during the LevelSet assessment for students with disabilities.
Descriptive Information
- Please provide a description of your tool:
- Developed in collaboration with MetaMetrics®, Inc., the makers of the Lexile Framework for Reading®, the LevelSet™ universal screener establishes each student’s initial Lexile reading level in English or in Spanish. LevelSet is the only assessment of its kind that measures a student’s ability to comprehend informational text and provides a scale score that matches reading ability with text complexity. It can be administered up to three times per year: first as a pre-test to establish a baseline Lexile level, forecast readiness for college and career benchmarks, match students with differentiated text, and identify the solution and implementation that will best promote accelerated growth for every student. Interim and post-test administrations provide a summative measure of student growth. LevelSet can be used as a stand-alone assessment or in conjunction with Achieve3000 differentiated instruction. During the test, students read a series of approximately 30 paragraph-long passages and answer a cloze-style question about each one.
ACADEMIC ONLY: What skills does the tool screen?
- Please describe specific domain, skills or subtests:
- BEHAVIOR ONLY: Which category of behaviors does your tool target?
- BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each sub-domain or sub-construct.
Acquisition and Cost Information
Administration
- Are norms available?
- Yes
- Are benchmarks available?
- Yes
- If yes, how many benchmarks per year?
- 1
- If yes, for which months are benchmarks available?
- spring
- BEHAVIOR ONLY: Can students be rated concurrently by one administrator?
- If yes, how many students can be rated concurrently?
Training & Scoring
Training
- Is training for the administrator required?
- Yes
- Describe the time required for administrator training, if applicable:
- Less than 1 hr of training
- Please describe the minimum qualifications an administrator must possess.
- No minimum qualifications
- Are training manuals and materials available?
- Yes
- Are training manuals/materials field-tested?
- Yes
- Are training manuals/materials included in cost of tools?
- Yes
- If No, please describe training costs:
- Can users obtain ongoing professional and technical support?
- Yes
- If Yes, please describe how users can obtain support:
- On-demand online resources in Ask Achieve3000 provide step-by-step instructions for teachers administering the LevelSet assessment, and student-facing videos introduce students to the assessment, its purpose, administration tips, and preparation guidelines. In addition to these on-demand assets, Achieve3000 curriculum and implementation managers (through onsite or live online training) and the customer support department (by phone or email) can respond to any questions that arise as the assessment is administered.
Scoring
- Do you provide basis for calculating performance level scores?
- Yes
- Does your tool include decision rules?
- Yes
- If yes, please describe.
- During the first administration of LevelSet, a student’s first 5 or 10 questions are examined, and the test administration can be stopped if warranted. The rules are as follows: (a) if the student answers all of the first five items incorrectly, the administration is stopped and the student is presented with a lower-level version of the assessment to complete; (b) if the student answers more than five of the first 10 items incorrectly, the administration is stopped and the student is presented with a lower-level version of the assessment to complete. Students receive a raw score and Lexile measure based on performance on the test level completed. (A minimal sketch of these rules in code appears at the end of this section.)
- Can you provide evidence in support of multiple decision rules?
- No
- If yes, please describe.
- Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
- Describe the tool’s approach to screening, samples (if applicable), and/or test format, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
- LevelSet is appropriate for use with student subgroups, including English learners and students with disabilities. The following criteria were established to ensure no bias was exhibited in the identification and development of passages for the Achieve3000 LevelSet reading assessments:
• Reading passages should be age-appropriate for the grade with which the passage is intended to be used, according to typical reading levels.
• Reading passages should use standard English conventions appropriate for students at the targeted grade level.
• All passages and items should be free from bias based on race, gender, age, ethnicity, religion, disability, sexual orientation, or socioeconomic status. No group should have an advantage over another because of values, vocabulary, phrasing, or assumptions in a passage, and passages and items should avoid stereotypes of ethnic or gender groups.
• To the degree possible, prior knowledge should not be required for the examinee to understand or appreciate the passage. References to events, people, and places should be explained within the passage unless considered common knowledge, and figurative language should be explained within the passage or defined through context.
• All passages should avoid topics that may be offensive to, or induce an emotional reaction from, an examinee, parent, or citizen group (e.g., violence, abuse, terminal illness, poverty).
In addition, item writers received training on sensitivity issues, including identifying areas to avoid when selecting passages and developing items. Training materials were developed based on material published by CTB/McGraw-Hill (Guidelines for Bias-Free Publishing) on universal design and fair access: equal treatment of the sexes, fair representation of minority groups, and fair representation of individuals with disabilities. Finally, all items went through a two-stage internal review process prior to completion.
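The early-stop decision rules described under Scoring above are simple enough to express directly in code. The following is a minimal sketch, assuming per-item correctness is available in administration order; the function name and data representation are hypothetical, not part of the LevelSet product.

```python
# Hypothetical sketch of the LevelSet early-stop decision rules described
# above. Names and data representation are illustrative, not Achieve3000's.

def should_stop_and_lower(responses: list[bool]) -> bool:
    """Return True if administration should stop and the student should be
    routed to a lower-level form of the assessment.

    `responses` holds per-item correctness (True = correct) in the order
    the items were answered.
    """
    # Rule (a): all of the first five items answered incorrectly.
    if len(responses) >= 5 and not any(responses[:5]):
        return True
    # Rule (b): more than five of the first ten items answered incorrectly.
    if len(responses) >= 10 and responses[:10].count(False) > 5:
        return True
    return False


# Example: a student who misses the first five items outright is rerouted.
print(should_stop_and_lower([False] * 5))         # True -> lower-level form
print(should_stop_and_lower([True, False] * 5))   # False -> continue testing
```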
Technical Standards
Classification Accuracy & Cross-Validation Summary
Grade | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 | Grade 8 | Grade 9 |
---|---|---|---|---|---|---|---|
Classification Accuracy Fall | |||||||
Classification Accuracy Winter | |||||||
Classification Accuracy Spring | |||||||
California Assessment of Student Performance and Progress (CAASPP)
Classification Accuracy
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- The CAASPP is the California state assessment of ELA and is administered during the spring to all students in Grades 3-6.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- For the CAASPP, the 20th percentile cut from SBAC (http://www.smarterbalanced.org/assessments/development/percentiles/) was used. For the Achieve3000 program, the Level 1 cut in the fall (the spring Level 1 cut minus 40L) was used, where Level 1 performance (i.e., reading grade-level-appropriate text) indicates that the student did not exhibit sufficient mastery of the knowledge and skills needed to be successful at the next grade level and demonstrated an insufficient understanding of the knowledge and skills measured at this grade level. (A sketch of this cut-point logic in code follows this list.)
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
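As an illustration of the cut-point logic described above, here is a minimal sketch. The spring Level 1 cut is back-derived from the Grade 3 fall screener cut (227.5L) plus 40L, per the stated "spring Level 1 cut minus 40L" rule; that derivation and the function names are assumptions made for illustration.

```python
# Minimal sketch of the risk cuts described above, using the Grade 3
# CAASPP figures reported in the Classification Accuracy tables below.
# The spring-cut derivation is an assumption based on the stated rule;
# function names are illustrative.

GRADE3_FALL_LEXILE_CUT = 227.5                           # reported fall screener cut (L)
GRADE3_SPRING_LEVEL1_CUT = GRADE3_FALL_LEXILE_CUT + 40   # implied spring Level 1 cut
GRADE3_CAASPP_P20_CUT = 2338                             # CAASPP scale score at 20th percentile

def flagged_by_screener(fall_lexile: float) -> bool:
    """At risk on the fall LevelSet screener."""
    return fall_lexile < GRADE3_FALL_LEXILE_CUT

def at_risk_on_criterion(caaspp_score: float) -> bool:
    """Below the 20th percentile on the spring CAASPP criterion."""
    return caaspp_score < GRADE3_CAASPP_P20_CUT

# A student at 210L in the fall is flagged; a CAASPP score of 2400 is not at risk.
print(flagged_by_screener(210.0), at_risk_on_criterion(2400))
```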
Cross-Validation
- Has a cross-validation study been conducted?
- No
- If yes,
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
State of Texas Assessment of Academic Readiness (STAAR)
Classification Accuracy
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- The STAAR is the Texas state assessment of ELA and is administered during the spring to all students in Grades 3-8 and English I (Grade 9).
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the classification analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- For the STAAR, the 20th percentile cut (https://tea.texas.gov/student.assessment/staar/frequency-distributions/) was used. For the Achieve3000 program, the Level 1 cut in the fall (the spring Level 1 cut minus 40L) was used, where Level 1 performance (i.e., reading grade-level-appropriate text) indicates that the student did not exhibit sufficient mastery of the knowledge and skills needed to be successful at the next grade level and demonstrated an insufficient understanding of the knowledge and skills measured at this grade level. (The same cut-point logic sketched under the CAASPP section applies.)
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
Cross-Validation
- Has a cross-validation study been conducted?
- No
- If yes,
- Describe the criterion (outcome) measure(s) including the degree to which it/they is/are independent from the screening measure.
- Describe when screening and criterion measures were administered and provide a justification for why the method(s) you chose (concurrent and/or predictive) is/are appropriate for your tool.
- Describe how the cross-validation analyses were performed and cut-points determined. Describe how the cut points align with students at-risk. Please indicate which groups were contrasted in your analyses (e.g., low risk students versus high risk students, low risk students versus moderate risk students).
- Were the children in the study/studies involved in an intervention in addition to typical classroom instruction between the screening measure and outcome assessment?
- If yes, please describe the intervention, what children received the intervention, and how they were chosen.
Classification Accuracy - Fall
Evidence | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 | Grade 8 | Grade 9 |
---|---|---|---|---|---|---|---|
Criterion measure | California Assessment of Student Performance and Progress (CAASPP) | California Assessment of Student Performance and Progress (CAASPP) | California Assessment of Student Performance and Progress (CAASPP) | State of Texas Assessment of Academic Readiness (STAAR) | State of Texas Assessment of Academic Readiness (STAAR) | State of Texas Assessment of Academic Readiness (STAAR) | State of Texas Assessment of Academic Readiness (STAAR) |
Cut Points - Percentile rank on criterion measure | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
Cut Points - Performance score on criterion measure | 2338 | 2375 | 2415 | 1450 | 1511 | 1558 | 3440 |
Cut Points - Corresponding performance score (numeric) on screener measure | 227.5 | 347.5 | 462.5 | 517.5 | 587.5 | 622.5 | 737.5 |
Classification Data - True Positive (a) | |||||||
Classification Data - False Positive (b) | |||||||
Classification Data - False Negative (c) | |||||||
Classification Data - True Negative (d) | |||||||
Area Under the Curve (AUC) | 0.92 | 0.94 | 0.93 | 0.89 | 0.88 | 0.89 | 0.87 |
AUC Estimate’s 95% Confidence Interval: Lower Bound | 0.91 | 0.93 | 0.92 | 0.88 | 0.87 | 0.88 | 0.86 |
AUC Estimate’s 95% Confidence Interval: Upper Bound | 0.93 | 0.95 | 0.94 | 0.90 | 0.89 | 0.90 | 0.88 |
Statistics | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 | Grade 8 | Grade 9 |
---|---|---|---|---|---|---|---|
Base Rate | |||||||
Overall Classification Rate | |||||||
Sensitivity | |||||||
Specificity | |||||||
False Positive Rate | |||||||
False Negative Rate | |||||||
Positive Predictive Power | |||||||
Negative Predictive Power |
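The statistics named in the table above all derive from the 2x2 classification counts (a-d) against the criterion measure, while AUC summarizes the screener's discrimination across all possible cut points. Below is a minimal sketch with invented counts; the actual cell counts are not reported in these tables.

```python
# Minimal sketch of the screening statistics named above, computed from
# 2x2 classification counts. The counts in the example call are invented;
# the actual cell counts (a-d) are not reported in the tables here.

def classification_stats(a: int, b: int, c: int, d: int) -> dict:
    """a = true positives, b = false positives,
    c = false negatives, d = true negatives."""
    total = a + b + c + d
    return {
        "base_rate": (a + c) / total,                # proportion truly at risk
        "overall_classification_rate": (a + d) / total,
        "sensitivity": a / (a + c),                  # at-risk students correctly flagged
        "specificity": d / (b + d),                  # not-at-risk students correctly passed
        "false_positive_rate": b / (b + d),
        "false_negative_rate": c / (a + c),
        "positive_predictive_power": a / (a + b),
        "negative_predictive_power": d / (c + d),
    }

print(classification_stats(a=120, b=60, c=30, d=790))

# AUC, by contrast, is computed from the continuous screener scores, e.g.
# sklearn.metrics.roc_auc_score(at_risk, -fall_lexile) (negated because
# lower Lexile measures indicate higher risk).
```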
Sample | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 | Grade 8 | Grade 9 |
---|---|---|---|---|---|---|---|
Date | Spring 2017 | Spring 2017 | Spring 2017 | Spring 2017 | Spring 2017 | Spring 2017 | Spring 2017 |
Sample Size | |||||||
Geographic Representation | Pacific (CA) | Pacific (CA) | Pacific (CA) | West South Central (TX) | West South Central (TX) | West South Central (TX) | West South Central (TX) |
Male | |||||||
Female | |||||||
Other | |||||||
Gender Unknown | |||||||
White, Non-Hispanic | |||||||
Black, Non-Hispanic | |||||||
Hispanic | |||||||
Asian/Pacific Islander | |||||||
American Indian/Alaska Native | |||||||
Other | |||||||
Race / Ethnicity Unknown | |||||||
Low SES | |||||||
IEP or diagnosed disability | |||||||
English Language Learner |
Reliability
Grade | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 | Grade 8 | Grade 9 |
---|---|---|---|---|---|---|---|
Rating | |||||||
- *Offer a justification for each type of reliability reported, given the type and purpose of the tool.
- • Internal Consistency: Internal-consistency reliability examines the extent to which a test measures a single underlying construct. One procedure for determining the internal consistency of a test is coefficient alpha, which sets an upper limit to the reliability of tests constructed in terms of the domain-sampling model.
• Test-Retest: Test-retest reliability examines the stability of test scores over time. When the same test is administered twice within a reasonable interval, the correlation of the results provides evidence of test-retest reliability; the closer the results, the greater the test-retest reliability of the assessment.
• Alternate Form: Alternate-form reliability examines the consistency of test scores sampled from the same domain of items. When two forms considered parallel, or interchangeable (e.g., LevelSet Forms D and E), are administered to the same group of students, the correlation coefficient provides information about how well the two parallel forms yield the same results for students; it is often referred to as a coefficient of stability and equivalence.
- *Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
- • Internal Consistency: Internal reliability coefficients were calculated for Forms D, E, and F at each grade level from a sample of 9,922 students from 9 districts in 7 states (CA, HI, IL, IN, LA, NJ, OK).
• Test-Retest: Test-retest reliabilities were examined for a sample of 3,384 students who were administered Form D, E, or F in the fall of 2014 and then the same form again within a two-week window. This sample was a subset of the sample used to calculate internal reliabilities for the test forms and included students from 9 districts in 7 states (CA, HI, IL, IN, LA, NJ, OK).
• Alternate Form: Alternate-form reliability was examined for a sample of 6,529 students who were administered two different forms (of Forms D, E, and F) within two weeks in the fall of 2014. This sample was a subset of the sample used to calculate internal reliabilities for the test forms and included students from 9 districts in 7 states (CA, HI, IL, IN, LA, NJ, OK).
- *Describe the analysis procedures for each reported type of reliability.
- • Internal Consistency: Coefficient alpha for each of the three test forms (D, E, and F); Form D information is presented in the table.
• Test-Retest: Pearson product-moment correlations between test scores; Form D information is presented in the table.
• Alternate Form: Pearson product-moment correlations between test scores; Form D/E information is presented in the table.
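To make the three procedures concrete, here is a minimal sketch using hypothetical data (a students-by-items score matrix and paired administrations); none of the variable names or values come from the LevelSet studies.

```python
# Minimal sketch of the three reliability procedures described above,
# using invented data. Requires numpy and scipy.
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Coefficient alpha for a (students x items) matrix of item scores."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(200, 30))   # 200 students, 30 scored items

# Internal consistency for one form:
print(cronbach_alpha(scores))

# Test-retest / alternate form: Pearson r between two score vectors for the
# same students (same form twice, or Forms D and E respectively).
first = scores.sum(axis=1).astype(float)
second = first + rng.normal(0, 2, size=first.size)   # fake second administration
r, _ = pearsonr(first, second)
print(r)
```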
*In the table(s) below, report the results of the reliability analyses described above (e.g., internal consistency or inter-rater reliability coefficients).
Type of Reliability | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- No
- Provide citations for additional published studies.
- Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
- No
If yes, fill in data for each subgroup with disaggregated reliability data.
Type of Reliability | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|
- Results from other forms of reliability analysis not compatible with above table format:
- Manual cites other published reliability studies:
- No
- Provide citations for additional published studies.
Validity
Grade | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 | Grade 8 | Grade 9 |
---|---|---|---|---|---|---|---|
Rating | |||||||
- *Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
- CAASPP. The CAASPP is the California state assessment of ELA and is administered during the spring to all students in Grades 3-8 and 11. STAAR. The STAAR is the Texas state assessment of ELA and is administered during the spring to all students in Grades 3-8, English I, and English II. Data for the validity analyses came from students who were administered the two assessments within 3 weeks of each other.
- *Describe the sample(s), including size and characteristics, for each validity analysis conducted.
- CAASPP. The sample consisted of 14,831 students in Grades 3-6, of whom 45.97% were female and 52.57% were male; 10.26% were Filipino, 67.99% were Hispanic or Latino, and 11.16% were White (not Hispanic); and 51.53% were classified as economically disadvantaged. STAAR. The sample consisted of 41,148 students in Grades 3-8, English I, and English II, of whom 49.04% were female and 50.95% were male; 22.56% were Black/African American, 64.52% were Hispanic, and 9.76% were White (not Hispanic); and 80.77% were classified as eligible for free or reduced-price lunch.
- *Describe the analysis procedures for each reported type of validity.
- CAASPP. Correlation between interim or state assessment scale scores in the spring (prior grade level) and the Achieve3000 Lexile measure in the fall. STAAR. Correlation between interim or state assessment scale scores in the spring (prior grade level) and the Achieve3000 Lexile measure in the fall.
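A minimal sketch of this predictive-validity analysis follows, including the Fisher z confidence interval that corresponds to the CI columns in the table below. The data here are simulated for illustration, not the CAASPP/STAAR samples.

```python
# Minimal sketch of the predictive-validity analysis described above:
# correlating prior-spring state scale scores with fall LevelSet Lexile
# measures, with a 95% CI for r via the Fisher z transform. Data invented.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
spring_state_score = rng.normal(2400, 80, size=500)           # fake scale scores
fall_lexile = 0.9 * (spring_state_score - 2400) + rng.normal(0, 40, 500) + 600

r, _ = pearsonr(spring_state_score, fall_lexile)

# Fisher z 95% confidence interval for r.
n = spring_state_score.size
z = np.arctanh(r)
se = 1 / np.sqrt(n - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```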
*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.
Type of Validity | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- No
- Provide citations for additional published studies.
- In the LevelSet Technical Manual: Study 1. NWEA MAP is an interim assessment of reading comprehension, typically administered three times per year to all students in a school. The ISTEP+ was the Indiana state summative assessment of ELA, administered to all students in Grades 3-8 and 10. The HSA was the Hawaii state summative assessment of ELA, administered to all students in Grades 3-10. Data from Fall 2014 administrations of LevelSet from five school districts across the United States were included in this validation study. This sample was a subset of the sample collected for the reliability studies. These school districts provided Achieve3000 with data from LevelSet administrations from their KidBiz3000, TeenBiz3000, and Empower3000 programs. In addition, scores from another test of reading comprehension administered during Spring 2014 were provided to serve as a criterion measure of reading comprehension. Study 2. The Gates-MacGinitie Reading Test is a group-administered, norm-referenced assessment that yields scores for Vocabulary, Reading Comprehension, and Total Reading. The test was administered as a pre-test and a post-test. The sample for the study was selected from four school districts located in three regions of the United States (the West South Central region, the East North Central region, and the Pacific region). Two districts were classified as large suburb and two as large city. Within each grade in the study, teachers were randomly assigned to the treatment or control group. Only treatment teachers implemented the Achieve3000 program, while both groups used their usual ELA materials. A total of 512 students were in the treatment group, with 127 (24.8%) in Grade 3, 263 (51.4%) in Grade 6, and 122 (23.8%) in Grade 9. The treatment group consisted of 222 (43.4%) females and 290 (56.6%) males; 178 (34.8%) students classified as Hispanic and 334 (65.2%) not classified as Hispanic; and 329 (64.3%) students classified as White, 116 (22.7%) classified as Black or African American, 26 (5.1%) classified as Asian, and 36 (7.0%) classified as other. Of the students in the treatment group, 41 (8.0%) were classified as needing special education services, 183 (35.7%) received free or reduced-price lunch, and 59 (11.5%) were classified as English language learners (ELL).
- Describe the degree to which the provided data support the validity of the tool.
- When the scores from two tests that have been developed to assess the same construct (i.e., reading comprehension) are highly correlated, this supports the validity argument for the use of the test scores as measures of that construct. Correlation coefficients showing the relationship between LevelSet test scores and state or nationally normed reading tests provide evidence of criterion-related validity for the Achieve3000 LevelSet tests. The correlations shown indicate that the two tests measure a similar construct: reading comprehension.
- Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
- No
If yes, fill in data for each subgroup with disaggregated validity data.
Type of Validity | Subgroup | Informant | Age / Grade | Test or Criterion | n | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound |
---|---|---|---|---|---|---|---|---|
- Results from other forms of validity analysis not compatible with above table format:
- Manual cites other published validity studies:
- No
- Provide citations for additional published studies.
Bias Analysis
Grade | Grade 3 | Grade 4 | Grade 5 | Grade 6 | Grade 7 | Grade 8 | Grade 9 |
---|---|---|---|---|---|---|---|
Rating | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
- Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
- Yes
- If yes,
- a. Describe the method used to determine the presence or absence of bias:
- The Mantel-Haenszel (MH) log odds ratio statistic, or estimated effect size, is used to determine the direction of differential item functioning (SAS Institute Inc., 1985). This measure is obtained by combining the odds ratios across matched ability levels using the formula for weighted averages. Educational Testing Service (ETS) classifies DIF based on the MH D-DIF statistic (Zwick, 2012), developed by Holland and Thayer. Within Winsteps (Linacre, 2011), items are classified according to the ETS DIF categories. (A minimal sketch of this procedure appears after this list.)
- b. Describe the subgroups for which bias analyses were conducted:
- • Gender – 1,310 items (96.3% of items in the study): Male (N = 207,716) and Female (N = 195,174);
• Race – 1,070 items (78.7% of items in the study): Non-White (N = 20,778) and White (N = 16,977) – optional reporting field;
• Ethnicity – 506 items (37.2% of items in the study): Non-Hispanic (N = 5,954) and Hispanic (N = 32,227) – optional reporting field; and
• SES Status (Free and Reduced-Price Lunch) – 893 items (66.0% of items in the study): No (N = 8,474) and Yes (N = 14,162) – optional reporting field.
- c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.
- Across the 1,360 LevelSet (version 2) items and Form B items in the field study, 42 items (3.28%) showed Class C DIF in relation to gender, 95 items (8.88%) showed Class C DIF in relation to race, 32 items (6.32%) showed Class C DIF in relation to ethnicity (Hispanic/non-Hispanic) status, and 82 items (9.18%) showed Class C DIF in relation to socioeconomic status.
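As a concrete illustration of the method described in (a), the sketch below computes the MH common odds ratio across matched ability strata, converts it to the ETS delta metric (MH D-DIF = -2.35 ln(alpha_MH)), and applies the usual A/B/C thresholds. The strata, counts, and the omission of the significance tests that accompany the ETS classification are all simplifications for illustration, not Achieve3000's actual analysis pipeline.

```python
# Minimal sketch of Mantel-Haenszel DIF with ETS classification.
# Strata are matched ability levels (e.g., total-score bands); each
# stratum contributes a 2x2 table of (reference/focal) x (right/wrong).
import math

def mh_d_dif(strata) -> float:
    """strata: list of (Rr, Wr, Rf, Wf) tuples per ability level, where
    Rr/Wr = reference-group right/wrong and Rf/Wf = focal-group right/wrong."""
    num = den = 0.0
    for Rr, Wr, Rf, Wf in strata:
        n = Rr + Wr + Rf + Wf
        num += Rr * Wf / n            # weighted odds favoring the reference group
        den += Rf * Wr / n
    alpha_mh = num / den              # MH common odds ratio
    return -2.35 * math.log(alpha_mh) # ETS delta metric (MH D-DIF)

def ets_category(d: float) -> str:
    """ETS A/B/C classification by |MH D-DIF| (significance tests omitted)."""
    a = abs(d)
    return "A" if a < 1.0 else ("B" if a < 1.5 else "C")

# Invented counts for three ability strata.
strata = [(40, 10, 30, 20), (60, 15, 45, 25), (80, 10, 70, 15)]
d = mh_d_dif(strata)
print(round(d, 2), ets_category(d))
```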
Data Collection Practices
Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.