Acadience Math
Concepts and Applications
Summary
 Concepts and Applications is a standardized measure designed to assess students’ progress in understanding math concepts and vocabulary and in applying that knowledge to solve problems. It can be administered individually or to groups. Students write their answers to the problems under standardized conditions and time limits; the time limit varies from five to sixteen minutes, depending on grade level. The total score is the number of points the student earned on the problems answered correctly on the worksheet. The points awarded for an individual problem are based on either the number of correct digits in the final answer or the exact final answer, depending on the problem. An optional response pattern analysis can also be completed to provide additional instructional information by analyzing the student's response patterns.
 Where to Obtain:
 Acadience Learning Inc.
 info@acadiencelearning.org
 Acadience Learning 859 Willamette Street, Suite 320, Eugene, OR 97401
 (541) 431-6931, (888) 943-1240
 https://acadiencelearning.org/
 Initial Cost:
 Free
 Replacement Cost:
 Free
 Included in Cost:
 All materials are available for download for free at https://acadiencelearning.org/acadiencemath.html, including progress monitoring worksheets for each grade, assessor scoring booklets and keys for each grade, the Acadience Math Assessment Manual, and the Acadience Math Technical Brief.
 Accommodations:
 Approved accommodations are any accommodations that will not alter the standardization of the assessment. Specific approved accommodations include, but are not limited to:
 1. The use of colored overlays, filters, or lighting adjustments for students with visual impairments.
 2. The use of student materials that have been enlarged or printed in larger type for students with visual impairments.
 3. The use of assistive technology, such as hearing aids and assistive listening devices (ALDs), for students with hearing impairments.
 4. The use of a marker or ruler to focus student attention on the materials for students who are not able to demonstrate their skills adequately without one.
 Training Requirements:
 One to two hours of training covering the foundations of Acadience Math as well as administration and scoring of the measure.
 Qualified Administrators:
 Paraprofessional-level training and adequate training on the administration and scoring of Concepts and Applications.
 Access to Technical Support:
 Customer support is available from 8:00am to 5:00pm PST, Monday through Friday by phone, email, or through Acadience Learning's website.
 Assessment Format:

 Individual
 Small group
 Large group
 Scoring Time:

 1 minute per worksheet
 Scores Generated:

 Raw score
 Percentile score
 Developmental benchmarks
 Developmental cut points
 Administration Time:

 5 to 16 minutes per worksheet, depending on grade level
 Scoring Method:

 Manually (by hand)
 Technology Requirements:

Tool Information
Descriptive Information
 Please provide a description of your tool:
 Concepts and Applications is a standardized measure designed to assess students’ progress in understanding math concepts and vocabulary and in applying that knowledge to solve problems. It can be administered individually or to groups. Students write their answers to the problems under standardized conditions and time limits; the time limit varies from five to sixteen minutes, depending on grade level. The total score is the number of points the student earned on the problems answered correctly on the worksheet. The points awarded for an individual problem are based on either the number of correct digits in the final answer or the exact final answer, depending on the problem. An optional response pattern analysis can also be completed to provide additional instructional information by analyzing the student's response patterns.
 Is your tool designed to measure progress towards an end-of-year goal (e.g., oral reading fluency) or progress towards a short-term skill (e.g., letter naming fluency)?

ACADEMIC ONLY: What dimensions does the tool assess?
 BEHAVIOR ONLY: Please identify which broad domain(s)/construct(s) are measured by your tool and define each subdomain or subconstruct.
 BEHAVIOR ONLY: Which category of behaviors does your tool target?
Acquisition and Cost Information
Administration
Training & Scoring
Training
 Is training for the administrator required?
 Yes
 Describe the time required for administrator training, if applicable:
 One to two hours of training covering the foundations of Acadience Math as well as administration and scoring of the measure.
 Please describe the minimum qualifications an administrator must possess.
 Paraprofessional-level training and adequate training on the administration and scoring of Concepts and Applications.
 Are training manuals and materials available?
 Yes
 Are training manuals/materials fieldtested?
 Yes
 Are training manuals/materials included in cost of tools?
 Yes
 If No, please describe training costs:
 Can users obtain ongoing professional and technical support?
 Yes
 If Yes, please describe how users can obtain support:
 Customer support is available from 8:00am to 5:00pm PST, Monday through Friday by phone, email, or through Acadience Learning's website.
Scoring
 Please describe the scoring structure. Provide relevant details such as the scoring format, the number of items overall, the number of items per subscale, what the cluster/composite score comprises, and how raw scores are calculated.
 The Concepts and Applications total score is based on the number of points earned on the problems completed within the time limit. Problems are scored by evaluating the correct digits in the final answer, or the exact answer, per line, segment, or box. Each correct digit or exact answer is associated with a specific number of points, as indicated by a legend on the teacher key. There are between 16 and 20 problems on each worksheet, spread across pages as necessary. The number of items on each worksheet is as follows: Grade 2: 16 problems; Grade 3: 20 problems; Grade 4: 20 problems; Grade 5: 16 problems; Grade 6: 20 problems. For each problem that the student completed or attempted, the number of points earned is written next to that problem. The points are totaled for each page and then summed to calculate the student’s total score, which is recorded at the top of the front page of the worksheet in the space provided. The final score for a progress monitoring assessment is the score from one student worksheet.
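 Below is a minimal sketch of the scoring arithmetic just described: points earned per problem are summed by page, and the page totals are summed into the worksheet total. The per-problem point values and page layout shown are hypothetical; actual point values come from the legend on the teacher scoring key.

```python
# Minimal sketch of the scoring arithmetic described above. Per-problem point
# values and the page layout are hypothetical; actual point values come from
# the legend on the teacher scoring key.
points_by_page = {
    1: [3, 0, 2, 1],   # page 1: points earned on each attempted problem
    2: [2, 2, 0, 4],   # page 2
}

# Sum each page, then sum the page totals to get the worksheet total score,
# which is recorded at the top of the front page of the worksheet.
page_totals = {page: sum(points) for page, points in points_by_page.items()}
total_score = sum(page_totals.values())

print(page_totals)   # {1: 6, 2: 8}
print(total_score)   # 14
```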
 Do you provide basis for calculating slope (e.g., amount of improvement per unit in time)?
 No
 ACADEMIC ONLY: Do you provide benchmarks for the slopes?
 No
 ACADEMIC ONLY: Do you provide percentile ranks for the slopes?
 No
 Describe the tool’s approach to progress monitoring, behavior samples, test format, and/or scoring practices, including steps taken to ensure that it is appropriate for use with culturally and linguistically diverse populations and students with disabilities.
 The Acadience Math measures were designed to be economical and efficient indicators of a student's progress toward achieving a general outcome such as math and to be used for both benchmark assessment and progress monitoring. Progress monitoring refers to the more frequent testing of students who may be at risk for future math difficulty on the skill areas in which they are receiving instruction, to ensure that they are making adequate progress. Progress monitoring can be conducted using grade-level or out-of-grade materials, depending on the student's needs. Decisions about the skill areas and levels to monitor are made at the individual student level. Students who are receiving additional support should be monitored for progress more frequently to ensure that the instructional support being provided is helping them get back on track. Monitoring may occur once per month, once every two weeks, or as often as once per week. In general, students who need the most intensive instruction are monitored for progress most frequently.

 Progress monitoring materials contain alternate forms of the same measures administered during benchmark assessment, and each alternate form is of equivalent difficulty. Not all students will need progress monitoring. Progress monitoring materials are organized by measure, since students who need progress monitoring will typically be monitored on specific measures related to the instruction they are receiving, rather than on every measure for that grade. Material selected for progress monitoring must be sensitive to growth, yet still represent an ambitious goal. The standardized procedures for administering an Acadience Math measure still apply when using Acadience Math for progress monitoring.

 Progress monitoring data should be graphed and readily available to those who teach the student. An aimline should be drawn from the student's current skill level (which may be the most recent benchmark assessment score) to the goal. Progress monitoring scores can then be plotted over time and examined to determine whether they indicate that the student is making adequate progress (i.e., whether they fall above or below the aimline); a sketch of this aimline logic follows below.

 The Acadience Math assessments were designed to support students of varied backgrounds. Questions were written with names that represent diverse cultural, racial, and ethnic groups. Approved accommodations are allowed for any student and consist of accommodations that will not alter the standardization of the assessment.
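 The aimline comparison described above can be illustrated with a short sketch. All numbers (weeks to goal, starting score, goal score, and progress monitoring scores) are hypothetical, and the simple linear aimline and weekly comparison are illustrative rather than a published Acadience procedure.

```python
# Sketch of the aimline logic described above, using hypothetical numbers. The
# aimline runs from the student's current level (e.g., the most recent
# benchmark score) to the goal; each progress monitoring score is compared
# with the aimline value for its week.
weeks_to_goal = 20    # hypothetical number of weeks until the goal date
start_score = 18      # hypothetical current (most recent benchmark) score
goal_score = 47       # hypothetical goal score

weekly_gain = (goal_score - start_score) / weeks_to_goal

def aimline(week: int) -> float:
    """Expected score at a given week along the aimline."""
    return start_score + weekly_gain * week

observed = {2: 20, 4: 21, 6: 27, 8: 26}   # hypothetical progress monitoring scores

for week, score in observed.items():
    status = "above" if score >= aimline(week) else "below"
    print(f"week {week}: score {score} is {status} the aimline ({aimline(week):.1f})")
```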
Rates of Improvement and End of Year Benchmarks
 Is minimum acceptable growth (slope of improvement or average weekly increase in score by grade level) specified in your manual or published materials?
 Yes
 If yes, specify the growth standards:
 Based on the student's initial level of performance on the Acadience Math measures, growth standards are set so that students scoring in the well-below-benchmark range will be scoring in at least the below-benchmark range by the next benchmark period, students scoring in the below-benchmark range will be scoring at or above benchmark by the next benchmark period, and students scoring in the at-or-above-benchmark range will remain in that range at the next benchmark period.
 Are benchmarks for minimum acceptable end-of-year performance specified in your manual or published materials?
 Yes
 If yes, specify the end-of-year performance standards:
 Three primary end-of-year performance standards are specified: Well Below Benchmark, Below Benchmark, and At or Above Benchmark. These standards are used to indicate increasing odds of achieving At or Above Benchmark status at the next benchmark administration or of meeting goals on external measures of math achievement. End-of-year benchmark goals and cut points for risk are: Grade 2: benchmark goal 35, cut point 23; Grade 3: benchmark goal 47, cut point 32; Grade 4: benchmark goal 71, cut point 46; Grade 5: benchmark goal 62, cut point 40; Grade 6: benchmark goal 67, cut point 49.
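 A small sketch of how these published end-of-year benchmark goals and cut points might be applied to classify a total score is shown below. The goal and cut-point values are taken from the text above; the classify() helper and its treatment of scores exactly at a boundary are illustrative assumptions, not part of the published materials.

```python
# Sketch of how the published end-of-year benchmark goals and cut points could
# be used to classify a total score. Goal and cut-point values come from the
# text above; the classify() helper and its treatment of scores exactly at the
# cut point are illustrative assumptions.
EOY_BENCHMARKS = {
    2: {"goal": 35, "cut": 23},
    3: {"goal": 47, "cut": 32},
    4: {"goal": 71, "cut": 46},
    5: {"goal": 62, "cut": 40},
    6: {"goal": 67, "cut": 49},
}

def classify(grade: int, score: int) -> str:
    b = EOY_BENCHMARKS[grade]
    if score >= b["goal"]:
        return "At or Above Benchmark"
    if score >= b["cut"]:
        return "Below Benchmark"
    return "Well Below Benchmark"

print(classify(3, 50))   # At or Above Benchmark
print(classify(3, 40))   # Below Benchmark
print(classify(3, 20))   # Well Below Benchmark
```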
 Date: 2019
 Size: 228,779
 Male: 51%
 Female: 49%
 Unknown: 0%
 Eligible for free or reduced-price lunch: 57%
 Other SES Indicators:
 White, Non-Hispanic: 70.11%
 Black, Non-Hispanic:
 Hispanic:
 American Indian/Alaska Native:
 Asian/Pacific Islander:
 Other:
 Unknown:
 Disability classification (Please describe):
 First language (Please describe):
 Language proficiency status (Please describe):
Performance Level
Reliability
Grade: Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6
Rating:
 *Offer a justification for each type of reliability reported, given the type and purpose of the tool.
 Reliability refers to the relative stability with which a test measures the same skills across minor differences in conditions. Three types of reliability are reported in the table below: alternate-form reliability, coefficient alpha, and inter-rater reliability. Alternate-form reliability is the correlation between different forms of the Concepts and Applications measure. High alternate-form reliability coefficients suggest that these multiple forms are measuring the same construct. Coefficient alpha is a measure of reliability that is widely used in education research and represents the proportion of true score variance to total variance. Alpha incorporates information about the average inter-test correlation as well as the number of tests. Inter-rater reliability indicates the extent to which results generalize across assessors scoring the measure.
 *Describe the sample(s), including size and characteristics, for each reliability analysis conducted.
 Alternate-form reliability was collected for 724 students, and inter-rater reliability was collected for 617 students in grades 2–6. These data were collected as part of a larger reliability study conducted during the 2014–2015, 2015–2016, 2016–2017, and 2018–2019 school years with a sample size of 1,810. Participants were from 17 schools in 14 districts in 10 U.S. states. Demographic information is not available for this sample.
 *Describe the analysis procedures for each reported type of reliability.
 Alternate-form reliability is reported as the correlation between two alternate forms of Concepts and Applications. Coefficient alpha treats the two tests as separate indicators and is calculated using the alternate-form reliability, where the number of tests is equal to two. To calculate inter-rater reliability, photocopies were made of unscored student worksheets. The two copies (original and photocopy) were scored separately and independently. The inter-rater reliability coefficient is the correlation between the total scores from these two independently scored assessments.
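 As a sketch of the alpha calculation described above, standardized alpha can be computed from the average inter-test correlation r and the number of tests k as k·r / (1 + (k − 1)·r); with two alternate forms, k = 2. The correlation value in the example below is hypothetical.

```python
# Sketch of the coefficient-alpha calculation described above: standardized
# alpha from the average inter-test correlation r with k tests (here k = 2,
# the two alternate forms). The correlation value is hypothetical.
def alpha_from_alternate_form(r: float, k: int = 2) -> float:
    """Standardized alpha: k*r / (1 + (k - 1)*r)."""
    return (k * r) / (1 + (k - 1) * r)

r_alternate_form = 0.80   # hypothetical alternate-form reliability coefficient
print(round(alpha_from_alternate_form(r_alternate_form), 3))   # 0.889
```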
*In the table(s) below, report the results of the reliability analyses described above (e.g., model-based evidence, internal consistency or inter-rater reliability coefficients). Include detail about the type of reliability data, statistic generated, and sample size and demographic information.
Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
 Results from other forms of reliability analysis not compatible with above table format:
 Manual cites other published reliability studies:
 Yes
 Provide citations for additional published studies.
 Gray, J. S., Warnock, A. N., Dewey, E. N., Latimer, R., & Wheeler, C. E. (2019). Acadience™ Math Technical Adequacy Brief. Eugene, OR: Acadience Learning Inc.
 Do you have reliability data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
 No
If yes, fill in data for each subgroup with disaggregated reliability data.
Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
 Results from other forms of reliability analysis not compatible with above table format:
 Manual cites other published reliability studies:
 No
 Provide citations for additional published studies.
Validity
Grade: Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6
Rating:
 *Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
 The Stanford Achievement Test Series, Tenth Edition–Total Math score (SAT-10; Pearson, 2003) was used as the external criterion. The SAT-10 is a widely used, timed, group-administered, norm-referenced achievement test appropriate for children in kindergarten through grade 12. In second through sixth grade, the SAT-10 Total Math score includes scores from the subtests of Mathematics Problem Solving and Mathematics Procedures. Students are given 80 minutes in total to complete both subtests. The SAT-10 Total Math score was compared to all Acadience Math measures given during the year, providing both predictive criterion-related validity correlations for beginning- and middle-of-year measures and concurrent criterion-related validity data for end-of-year measures.
 *Describe the sample(s), including size and characteristics, for each validity analysis conducted.
 Validity data were collected during the 2017–2018 school year for second through sixth grade. This sample included 537 students across five schools in four districts in four states in the Pacific West and West North Central Midwest regions of the United States. Demographic data for four of the five participating schools were gathered at the school level from the NCES website (http://nces.ed.gov/). One school, a private Catholic school in the Midwest, was not listed on NCES. Across the four schools that reported demographic information, 10% of the student population was reported as American Indian or Alaska Native, 8% as Asian or Native Hawaiian/Pacific Islander, 2% as Black or African American, 39% as Hispanic, 37% as White, and 4% as Two or More Races. Forty-six percent of the student population was female, and 39% of all students were eligible for free/reduced lunch. Research sites were recruited from schools that were actively using the measures during the 2018–2019 school year and planned on collecting data at three benchmark periods (fall, winter, and spring, as per school/district practice) and entering those data into Acadience Data Management (ADM). Recruitment targeted schools that had average scores for each grade level within the upper 1/3 and lower 1/3 of all schools that enter their data in ADM. The purpose of this recruitment strategy was to include participants who represented a full range of student performance. Schools also needed to have a sufficient sample size of students within their 2nd, 3rd, 4th, 5th, and/or 6th grade classrooms, with the minimum goal of having 50 students per grade participate. Students in general education classrooms in 2nd, 3rd, 4th, 5th, and/or 6th grades in participating schools who were receiving mathematics instruction were invited to participate, including students with disabilities, provided the students had the response capabilities to participate. Both students who were struggling in mathematics and those who were typically achieving were included in this study, provided that parental consent was obtained.
 *Describe the analysis procedures for each reported type of validity.
 Predictive validity is the correlation between Concepts and Applications at the beginning of the year and the SAT-10 score at the end of the school year. This coefficient represents the extent to which Concepts and Applications can predict later math outcomes. Concurrent validity is the correlation between the Concepts and Applications score and the SAT-10 measure, both at the end of the year. This coefficient represents the extent to which the Concepts and Applications score is related to important math outcomes.
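 A minimal sketch of these two coefficients follows, using hypothetical score vectors: predictive validity correlates beginning-of-year Concepts and Applications scores with end-of-year SAT-10 scores, and concurrent validity correlates the two end-of-year scores.

```python
# Sketch of the two validity coefficients described above, with hypothetical
# score vectors: predictive validity correlates beginning-of-year Concepts and
# Applications scores with end-of-year SAT-10 scores; concurrent validity
# correlates the two end-of-year scores.
import numpy as np

ca_boy = np.array([12, 18, 25, 9, 30, 22])              # beginning-of-year C&A (hypothetical)
ca_eoy = np.array([28, 35, 47, 20, 55, 41])             # end-of-year C&A (hypothetical)
sat10_eoy = np.array([520, 560, 610, 480, 650, 590])    # end-of-year SAT-10 (hypothetical)

predictive_r = np.corrcoef(ca_boy, sat10_eoy)[0, 1]
concurrent_r = np.corrcoef(ca_eoy, sat10_eoy)[0, 1]
print(f"predictive r = {predictive_r:.2f}, concurrent r = {concurrent_r:.2f}")
```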
*In the table below, report the results of the validity analyses described above (e.g., concurrent or predictive validity, evidence based on response processes, evidence based on internal structure, evidence based on relations to other variables, and/or evidence based on consequences of testing), and the criterion measures.
Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
 Results from other forms of validity analysis not compatible with above table format:
 Manual cites other published validity studies:
 Yes
 Provide citations for additional published studies.
 Gray, J. S., Warnock, A. N., Dewey, E. N., Latimer, R., & Wheeler, C. E. (2019). Acadience™ Math Technical Adequacy Brief. Eugene, OR: Acadience Learning Inc.
 Describe the degree to which the provided data support the validity of the tool.
 Both the concurrent and predictive correlations are generally high. These strong correlations suggest that the Acadience Math Concepts and Applications measure is assessing skills relevant to math outcomes. Given the wide range of skills assessed on the SAT-10, these data support the conclusion that the Concepts and Applications measure is an excellent indicator of math proficiency.
 Do you have validity data that are disaggregated by gender, race/ethnicity, or other subgroups (e.g., English language learners, students with disabilities)?
 No
If yes, fill in data for each subgroup with disaggregated validity data.
Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
 Results from other forms of validity analysis not compatible with above table format:
 Manual cites other published validity studies:
 No
 Provide citations for additional published studies.
Bias Analysis
Grade: Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6
Rating: No | No | No | No | No
 Have you conducted additional analyses related to the extent to which your tool is or is not biased against subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)? Examples might include Differential Item Functioning (DIF) or invariance testing in multiple-group confirmatory factor models.
 No
 If yes,
 a. Describe the method used to determine the presence or absence of bias:
 b. Describe the subgroups for which bias analyses were conducted:
 c. Describe the results of the bias analyses conducted, including data and interpretative statements. Include magnitude of effect (if available) if bias has been identified.
Growth Standards
Sensitivity: Reliability of Slope
Grade: Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6
Rating:
 Describe the sample, including size and characteristics. Please provide documentation showing that the sample was composed of students in need of intensive intervention. A sample of students with intensive needs should satisfy one of the following criteria: (1) all students scored below the 30th percentile on a local or national norm, or the sample mean on a local or national test fell below the 25th percentile; (2) students had an IEP with goals consistent with the construct measured by the tool; or (3) students were nonresponsive to Tier 2 instruction. Evidence based on an unknown sample, or a sample that does not meet these specifications, may not be considered.
 The sample consisted of students who were identified as being "Well Below Benchmark" using the benchmark assessment of Acadience Math at the beginning of the year. Being Well Below Benchmark corresponds to being below the 26th, 25th, 26th, 24th, and 22nd percentiles for second, third, fourth, fifth, and sixth grades, respectively. Students were selected only if they had a minimum of 15 observations.
 Describe the frequency of measurement (for each student in the sample, report how often data were collected and over what span of time).
 Progress monitoring data were collected throughout the school year at the discretion of the administering school, but not more frequently than once per week. Any student who had fewer than fifteen progress monitoring assessments was excluded from the analysis.
 Describe the analysis procedures.
 Reliability of slope was calculated as the ratio of true score variance to observed total variance. The true score variance estimate came from a hierarchical linear model estimate of the variance in progress monitoring slopes (estimated using the R package lme4); the observed score variance was calculated as the variance of the ordinary least squares slopes estimated for each student who met the aforementioned inclusion criteria. Confidence intervals were calculated using bootstrap estimation.
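 The following is an illustrative Python analogue of the analysis described above (the reported analysis used the R package lme4, and bootstrap confidence intervals are omitted here); the data, column names, and model specification are assumptions for demonstration. The true slope variance is taken from the random-slope variance of a mixed model, and the observed variance is the variance of per-student OLS slopes.

```python
# Illustrative Python analogue of the slope-reliability calculation described
# above (the reported analysis used the R package lme4; bootstrap confidence
# intervals are omitted here). All data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated long-format progress monitoring data: 16 occasions per student.
rows = []
for student in range(40):
    weekly_growth = rng.normal(1.0, 0.3)          # this student's true slope
    for week in range(16):
        score = 10 + weekly_growth * week + rng.normal(0, 2)
        rows.append({"student": student, "week": week, "score": score})
df = pd.DataFrame(rows)

# Mixed model with random intercepts and slopes; the random-slope variance is
# the "true score" variance of the slopes.
fit = sm.MixedLM.from_formula("score ~ week", data=df,
                              groups="student", re_formula="~week").fit()
true_slope_var = fit.cov_re.iloc[1, 1]            # [intercept, slope] -> slope variance

# Observed variance of per-student ordinary least squares slopes.
ols_slopes = df.groupby("student").apply(
    lambda g: np.polyfit(g["week"], g["score"], 1)[0])
observed_slope_var = ols_slopes.var()

print(f"reliability of slope ~= {true_slope_var / observed_slope_var:.2f}")
```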
In the table below, report reliability of the slope (e.g., ratio of true slope variance to total slope variance) by grade level (if relevant).
Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
 Results from other forms of reliability analysis not compatible with above table format:
 Manual cites other published reliability studies:
 No
 Provide citations for additional published studies.
 Do you have reliability of the slope data that is disaggregated by subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)?
 No
If yes, fill in data for each subgroup with disaggregated reliability of the slope data.
Type of Reliability | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
 Results from other forms of reliability analysis not compatible with above table format:
 Manual cites other published reliability studies:
 No
 Provide citations for additional published studies.
Sensitivity: Validity of Slope
Grade: Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6
Rating:
 Describe each criterion measure used and explain why each measure is appropriate, given the type and purpose of the tool.
 For the Acadience Math Concepts and Applications progress monitoring assessment, we used the Acadience Math Computation score at the end of the subsequent year as the outcome measure for the validity of slope. Computation is an appropriate criterion for Concepts and Applications due to the computational component of a number of the applications items on the assessment. While the criterion is internal in the sense that both the progress monitoring assessment and the criterion are Acadience Math measures, the criterion is external in the sense that it is distinct and separate from the Concepts and Applications progress monitoring system. We believe that using both an alternative measure of math skills (Computation vs. Concepts and Applications) and the length of time between the end of progress monitoring and the criterion (an entire year between the last progress monitoring occasion and the criterion) provides a sufficiently powerful examination of the validity of slope. There is no overlap of item samples: the items for the Concepts and Applications assessment are completely different and share no overlap with the items used for the Computation assessment. These requirements (external measures and no overlap of item samples) serve to ensure a conceptual distance between the slope of Concepts and Applications and the criterion. In the reported analysis we increased the length of time between the slope of Concepts and Applications and the criterion measure by examining outcomes at the end of the subsequent academic year. So, for example, the validity of slope of progress on the second-grade Concepts and Applications assessment was examined with respect to end-of-third-grade Computation. Validity of slope was not calculated for grade 6.
 Describe the sample(s), including size and characteristics. Please provide documentation showing that the sample was composed of students in need of intensive intervention. A sample of students with intensive needs should satisfy one of the following criteria: (1) all students scored below the 30th percentile on a local or national norm, or the sample mean on a local or national test fell below the 25th percentile; (2) students had an IEP with goals consistent with the construct measured by the tool; or (3) students were nonresponsive to Tier 2 instruction. Evidence based on an unknown sample, or a sample that does not meet these specifications, may not be considered.
 The sample consisted of students who were identified as being "Well Below Benchmark" using the benchmark assessment of Acadience Math at the beginning of the year. Being Well Below Benchmark corresponds to being below the 26th, 25th, 26th, 24th, and 22nd percentiles for second, third, fourth, fifth, and sixth grades, respectively. Students were selected only if they had a minimum of 15 observations.
 Describe the frequency of measurement (for each student in the sample, report how often data were collected and over what span of time).
 Progress monitoring data were collected throughout the school year at the discretion of the administering school, but not more frequently than once per week. Any student who had fewer than fifteen progress monitoring assessments was excluded from the analysis.
 Describe the analysis procedures for each reported type of validity.
 Validity of slope was assessed using the partial correlation between the students' ordinary least squares slopes and the criterion, controlling for the students' ordinary least squares intercepts.
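 A minimal sketch of this partial-correlation computation follows, with hypothetical arrays: both the OLS slopes and the criterion are residualized on the OLS intercepts, and the correlation of the residuals is the partial correlation of slope with the criterion controlling for intercept.

```python
# Minimal sketch of the validity-of-slope calculation described above: the
# partial correlation between OLS slopes and a later criterion, controlling
# for OLS intercepts. All arrays are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 60
intercepts = rng.normal(15, 5, n)                        # per-student OLS intercepts
slopes = rng.normal(1.0, 0.3, n) + 0.02 * intercepts     # per-student OLS slopes
criterion = 20 + 8 * slopes + 0.5 * intercepts + rng.normal(0, 3, n)  # next-year Computation score

def residualize(y, x):
    """Residuals of y after removing a linear effect of x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (intercept + slope * x)

slope_resid = residualize(slopes, intercepts)
criterion_resid = residualize(criterion, intercepts)
partial_r = np.corrcoef(slope_resid, criterion_resid)[0, 1]
print(f"partial correlation (slope, criterion | intercept) = {partial_r:.2f}")
```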
In the table below, report predictive validity of the slope (correlation between the slope and achievement outcome) by grade level (if relevant).
NOTE: The TRC suggests controlling for initial level when the correlation for slope without such control is not adequate.
Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
 Results from other forms of validity analysis not compatible with above table format:
 Manual cites other published validity studies:
 No
 Provide citations for additional published studies.
 Describe the degree to which the provided data support the validity of the tool.
 The moderate to strong partial correlations between the OLS slopes and a criterion that is both separated by an entire year and a conceptually different measure of math skills provide strong evidence of validity.
 Do you have validity of the slope data that is disaggregated by subgroups (e.g., race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)?
 No
If yes, fill in data for each subgroup with disaggregated validity of the slope data.
Type of Validity | Subscale | Subgroup | Informant | Age / Grade | Test or Criterion | n (sample/examinees) | n (raters) | Median Coefficient | 95% Confidence Interval Lower Bound | 95% Confidence Interval Upper Bound
 Results from other forms of validity analysis not compatible with above table format:
 Manual cites other published validity studies:
 No
 Provide citations for additional published studies.
Alternate Forms
Grade: Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6
Rating:
 Describe the sample for these analyses, including size and characteristics:
 What is the number of alternate forms of equal and controlled difficulty?
 If IRT based, provide evidence of item or ability invariance
 If computer administered, how many items are in the item bank for each grade level?
 If your tool is computer administered, please note how the test forms are derived instead of providing alternate forms:
Decision Rules: Setting & Revising Goals
Grade: Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6
Rating:
 In your manual or published materials, do you specify validated decision rules for how to set and revise goals?
 If yes, specify the decision rules:

What is the evidentiary basis for these decision rules?
NOTE: The TRC expects evidence for this standard to include an empirical study that compares a treatment group to a control and evaluates whether student outcomes increase when decision rules are in place.
Decision Rules: Changing Instruction
Grade: Grade 2 | Grade 3 | Grade 4 | Grade 5 | Grade 6
Rating:
 In your manual or published materials, do you specify validated decision rules for when changes to instruction need to be made?
 If yes, specify the decision rules:

What is the evidentiary basis for these decision rules?
NOTE: The TRC expects evidence for this standard to include an empirical study that compares a treatment group to a control and evaluates whether student outcomes increase when decision rules are in place.
Data Collection Practices
Most tools and programs evaluated by the NCII are branded products which have been submitted by the companies, organizations, or individuals that disseminate these products. These entities supply the textual information shown above, but not the ratings accompanying the text. NCII administrators and members of our Technical Review Committees have reviewed the content on this page, but NCII cannot guarantee that this information is free from error or reflective of recent changes to the product. Tools and programs have the opportunity to be updated annually or upon request.