aimswebPlus Math
Math Facts Fluency–1 Digit
Cost 
Technology, Human Resources, and Accommodations for Special Needs 
Service and Support 
Purpose and Other Implementation Information 
Usage and Reporting 
Initial Cost: aimswebPlus is a subscription-based tool. There are three subscription types available for customers*:
*Current aimsweb customers upgrading to aimswebPlus receive a $2/student discount off the subscription.
Replacement Cost: $8.50 per student per year. Annual license fee subject to change.
Included in Cost: Complete Kit: aimswebPlus is an online solution that includes digital editions of training manuals and testing materials within the application.

Technology Requirements:
Training Requirements:
Qualified Administrators:
Accommodations: Test accommodations that are documented in a student’s Individualized Education Program (IEP) are permitted with aimswebPlus. However, not all measures allow for accommodations.
Math Facts Fluency–1 Digit is an individually administered, timed test that employs strict time limits to generate rate-based scores. As such, valid interpretation of the national norms, an essential aspect of decision-making during benchmark testing, depends on strict adherence to the standard administration procedures.
The following accommodations are allowed for Math Facts Fluency–1 Digit during screening and progress monitoring:

Where to Obtain:
Website:
Address: San Antonio Office, 19500 Bulverde Road, #201, San Antonio, TX 78259
Phone Number:
Email:
Pearson provides phone and email-based support, as well as a user group forum that facilitates the asking and answering of questions.

aimswebPlus is a brief assessment system for screening and monitoring reading and math skills for all students in Kindergarten through Grade 8. Normative data were collected in 2013–14 on a combination of fluency measures that are sensitive to growth as well as new standards-based assessments of classroom skills. The resulting scores and reports inform instruction and help improve student performance.
Math Facts Fluency–1 Digit is individually administered, with a teacher/examiner recording student data during the test session. Once testing is complete, summary and detailed reports for students, classrooms, and districts can be generated immediately.

Assessment Format:
*Examiners use digital record form.
Administration Time:
Scoring Time:
Scoring Method:
Scores Generated:

Reliability
Grade  1 

Rating 
Justify the appropriateness of each type of reliability reported:
Alternate-form reliability, in which equivalent forms are administered close together in time, is highly appropriate for progress monitoring CBM measures because it shows the consistency of scores from independently timed administrations with different content. Internal consistency reliability is not appropriate for speeded CBM measures.
The stability coefficient, in which equivalent forms are administered with an interval of several months, reflects additional measurement error due to true change over time. As a result, these reliabilities are generally lower. The alternate-form stability coefficient is based on correlations between fall–winter and winter–spring benchmark scores.
Describe the sample characteristics for each reliability analysis conducted:
The concurrent alternateform reliability sample is based on 10 schools from across the U.S. representing each of three SES levels (described above). Participating schools administered the alternate forms to all Kindergarten students in the school, with few exceptions for moderate to severe intellectual disabilities. Each student completed 2 of 3 alternate forms with forms administered in pairs: 1, 2; 1, 3; and 2, 3. The number of students completing each pair ranged from 206–223.
The stability coefficient is derived from the national norm sample described above.
Describe the analysis procedures for each reported type of reliability:
Pearson correlation coefficients of the scores from alternate forms.
Type of Reliability 
Age or Grade 
n 
Coefficient 
Confidence Interval 

Alternate form (concurrent) 
Grade 1 
206 
0.91 
0.89–0.93 
Stability 
Grade 1 
2000 
0.72 
0.71–0.73 
Validity
Grade  1 

Rating 
Describe and justify the criterion measures used to demonstrate validity:
Four validity studies are reported: a concurrent study and a predictive study for each of two outcome criteria. Both criteria are independent of the Math Facts Fluency–1 Digit measure and are unspeeded power tests rather than speeded fluency tests. Neither is used for progress monitoring. One criterion is the Math score from the Tennessee Comprehensive Assessment Program (TCAP; Tennessee’s end-of-year state assessment), administered in the spring.
The other is the aimswebPlus Concepts & Applications (CA), a standards-based interim assessment of math skills that is administered at the beginning, middle, and end of the school year. This assessment consists of 25 math concepts and problem-solving items aligned to Grade 1 Common Core State Standards (CCSS) in mathematics and includes at least three items from each of the Grade 1 CCSS math domains. It is an individually administered power test in which students are given the time they need to complete each item. Its content differs from and has no overlap with Math Facts Fluency–1 Digit.
Describe the sample characteristics for each validity analysis conducted:
For each criterion, the same sample was used for both the concurrent and predictive validity studies. The SES index is the percentage of students at the student’s school eligible for free/reduced lunch, divided into three ranges of approximately equal size in the national student population.

Criterion 

Tennessee Comprehensive Assessment Program (N = 55) 
aimswebPlus (N = 801) 

Gender (%) 

Female 
53% 
50% 
Male 
47% 
50% 
Race/ethnicity (%) 

African American 
2% 
13% 
Hispanic 
25% 
25% 
White 
73% 
51% 
Other 
0% 
10% 
ELL (%) 
24% 
9% 
Free/reduced lunch 

68%–100% (school %) 
0% 
36% 
34%–67% (school %) 
100% 
33% 
0%–33% (school %) 
0% 
32% 
Describe the analysis procedures for each reported type of validity:
Both criterion measures were administered in the Spring. The concurrent studies are correlations between Spring MFF–1D scores and the criteria, and the predictive studies are correlations between Fall MFF–1D scores and the criteria.
Type of Validity 
Age or Grade 
Test or Criterion 
n 
Coefficient 
Confidence Interval 

Concurrent criterion-related 
Grade 1 
Tennessee Comprehensive Assessment Program (TCAP) 
55 
0.66 
0.57–0.73 
Concurrent criterion-related 
Grade 1 
aimswebPlus Concepts & Applications (CA) 
801 
0.54 
0.51–0.56 
Predictive criterion-related 
Grade 1 
TCAP 
55 
0.76 
0.69–0.81 
Predictive criterion-related 
Grade 1 
aimswebPlus CA 
801 
0.60 
0.58–0.62 
Describe the degree to which the provided data support the validity of the tool:
Math Facts Fluency–1 Digit (MFF–1D) is designed to measure fluency with one-digit addition and subtraction, a foundational skill considered important for success and included as a learning standard in the Common Core State Standards. These validity studies support the interpretation of MFF–1D scores as foundational for success in the general math domain. Furthermore, they demonstrate that performance on mental computational fluency has a moderately strong relationship with end-of-year math achievement.
Bias Analysis Conducted
Grade  1 

Rating  No 
Have additional analyses been conducted to establish whether the tool is or is not biased against demographic subgroups (e.g., students who vary by race/ethnicity, gender, socioeconomic status, students with disabilities, English language learners)?
Bias Analysis Method: No qualifying evidence provided.
Subgroups Included: No qualifying evidence provided.
Bias Analysis Results: No qualifying evidence provided.
Sensitivity: Reliability of the Slope
Grade  1 

Rating 
Describe the sample used for analyses, including size and characteristics:
The sample consisted of 2,701 Grade 1 students who scored below the 25th national percentile on the fall Math Facts Fluency–1 Digit (MFF–1D) benchmark, were assigned a math performance goal, and were receiving frequent progress monitoring with MFF–1D. All progress monitoring schedules were at least 20 weeks in duration during the 2016–17 school year.
Describe the frequency of measurement:
The interval between the first and last administration was a minimum of 20 weeks. Most administrations occurred weekly, with a small percentage conducted twice monthly.
Number of Weeks Between First and Last Progress Monitoring Administration

Quartile 1 
Median 
Quartile 3 
Range 
Weeks 
32 
34 
35 
20–42 
Describe reliability of the slope analyses conducted with a population of students in need of intensive intervention:
Each student’s progress monitoring administrations were sequenced by date and divided into two groups: odd-numbered administrations (e.g., 1, 3, 5, etc.) and even-numbered administrations (e.g., 2, 4, 6, etc.). Linear regression was used to compute the slope for each student by group. The following model was used:
Score_{i} = Intercept + Slope × Date_{i} + e_{i}
where Date_{i} is the amount of time since the start of progress monitoring, e_{i} is the residual, and i ranges from 1 to the number of administrations.
The correlation between odd-group and even-group slopes across all students was computed and converted to a split-half reliability coefficient using the Spearman–Brown formula: 2r/(1 + r).
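The odd/even split-half procedure above can be sketched in Python (a minimal illustration under the stated assumptions, not Pearson's implementation; the data structures are hypothetical):

```python
import numpy as np

def slope(weeks, scores):
    """Ordinary least-squares slope of scores regressed on time."""
    return np.polyfit(weeks, scores, 1)[0]

def split_half_slope_reliability(schedules):
    """schedules: list of (weeks, scores) sequences, one per student,
    ordered by administration date. Returns the Spearman-Brown-corrected
    split-half reliability of the progress monitoring slope."""
    odd_slopes, even_slopes = [], []
    for weeks, scores in schedules:
        weeks = np.asarray(weeks, float)
        scores = np.asarray(scores, float)
        # Odd-numbered administrations (1st, 3rd, ...) vs. even-numbered (2nd, 4th, ...)
        odd_slopes.append(slope(weeks[0::2], scores[0::2]))
        even_slopes.append(slope(weeks[1::2], scores[1::2]))
    r = np.corrcoef(odd_slopes, even_slopes)[0, 1]  # half-length reliability
    return 2 * r / (1 + r)                          # Spearman-Brown correction
```

Each half-schedule slope is based on independently timed administrations with different forms, so their correlation estimates the reliability of a half-length schedule; the Spearman–Brown step projects it back to the full schedule length.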
Type of Reliability 
Age or Grade 
n 
Coefficient 
Confidence Interval 

Split-half 
Grade 1 
2,701 
0.74 
0.73–0.75 
Sensitivity: Validity of the Slope
Grade  1 

Rating 
Describe and justify the criterion measures used to demonstrate validity:
Math Concepts & Applications (CA) was used as the criterion measure. CA is a standards-based interim assessment administered as a separate test in the aimswebPlus Fall, Winter, and Spring benchmark math assessment battery. This assessment consists of 25 math concepts and problem-solving items aligned to Grade 1 Common Core State Standards (CCSS) in mathematics and includes at least three items from each of the Grade 1 CCSS math domains. It is an individually administered test in which students respond orally and are given the time they need to complete each item. The math CA content and approach differs from and does not overlap with MFF–1D.
According to the CCSS for Mathematics, key skill and conceptual development in Grade 1 includes:
● an understanding of addition and subtraction through 20 and
● an understanding of whole number relationships and place value.
MFF–1D measures a student’s speed and accuracy in mentally adding and subtracting one-digit numbers. It represents a CCSS Math Standard (1.OA.6), and it is a critical foundational skill that, when mastered, should improve a student’s understanding of combining and separating numbers (addition and subtraction) and of the fundamentals of place value, which form the basis for most number and operations knowledge through Grade 8. It is expected that students who improve the most on MFF–1D from Fall to Spring should have greater proficiency in the Spring with number concepts and problem-solving skills, as measured by CA.
Describe the sample used for analyses, including size and characteristics:
The sample is the same as that used to compute the reliability of the slope.
Describe predictive validity of the slope of improvement analyses conducted with a population of students in need of intensive intervention:
Spring CA scores were regressed onto the Fall to Spring PM slope for MFF–1D and the Fall MFF–1D scores. Including Fall MFF–1D scores controls for differences in initial performance, thus removing its effect on the relationship between slope and outcome.
Model 1a: CA Spring Score = Intercept + b_{1}(MFF–1D Slope) + b_{2}(MFF–1D Fall Score)
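The structure of Model 1a can be sketched as follows. This is an illustrative reading, assuming the reported coefficient is the standardized regression weight for the slope term; the function and variable names are hypothetical:

```python
import numpy as np

def slope_validity_coefficient(ca_spring, mff_slope, mff_fall):
    """Regress Spring CA scores on the MFF-1D progress-monitoring slope
    while controlling for Fall MFF-1D scores (the structure of Model 1a).
    Returns the standardized regression weight for the slope term."""
    def z(x):
        # Standardize so coefficients are comparable across predictors
        x = np.asarray(x, float)
        return (x - x.mean()) / x.std()
    y, s, f = z(ca_spring), z(mff_slope), z(mff_fall)
    X = np.column_stack([np.ones_like(s), s, f])  # intercept, slope, fall score
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # weight for the slope, with Fall score held constant
```

Including the Fall score as a covariate removes the effect of initial performance, so the returned weight reflects only the slope–outcome relationship, matching the rationale stated above.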
Type of Validity 
Age or Grade 
Test or Criterion 
n 
Coefficient 
Confidence Interval 

Predictive 
Grade 1 
aimswebPlus CA 
2,701 
0.30 
0.29–0.31 
Describe the degree to which the provided data support the validity of the tool:
These results support the validity of the inference that growth in the MFF–1D score reflects growth in math proficiency more generally because growth in the criterion construct contributes to higher criterion scores in the Spring. Because MFF–1D is different in content from both criteria and different in administration format from CA, one would not expect a high correlation between MFF–1D growth and Spring criterion performance. Therefore, moderate correlations such as these are good supporting evidence.
Alternate Forms
Grade  1 

Rating 
Describe the sample for these analyses, including size and characteristics:
The sample consisted of 3,839 students from 250 schools, each with a math performance goal and a progress monitoring schedule, who scored at or below the 30th national percentile on the spring MFF–1D benchmark form. Each student completed at least one of the alternate MFF–1D PM forms within a window from 5 to 35 days after benchmark testing. Forms were randomly assigned to students.
Evidence that alternate forms are of equal and controlled difficulty or, if IRT based, evidence of item or ability invariance:
The average performance on the forms administered in a 30-day window is the basis of form comparability. To demonstrate comparability, we provide the effect size as the mean difference between each form and the average difficulty across all forms, in standard deviation units.
The mean ES is 0.13, and 19 of 20 effect sizes are 0.30 or lower, which is considered small. Comparability of the entire set of 20 forms is also summarized using analysis of variance, in which Form is treated as a fixed factor. The results indicate that Form accounts for only 3.13% of the total score variance. This is a very small percentage and will have a trivial effect on the growth slope over the 20 or so administrations that are common for progress monitoring.
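The two comparability summaries described above, per-form effect sizes and the percentage of score variance accounted for by Form, can be sketched as follows (a minimal illustration; the input structure is hypothetical):

```python
import numpy as np

def form_comparability(scores_by_form):
    """scores_by_form: dict mapping form id -> sequence of student scores.
    Returns (effect size per form, percent of variance explained by Form)."""
    groups = {f: np.asarray(s, float) for f, s in scores_by_form.items()}
    all_scores = np.concatenate(list(groups.values()))
    grand_mean, sd = all_scores.mean(), all_scores.std(ddof=1)
    # Effect size: each form's mean difficulty vs. the average across forms,
    # expressed in standard deviation units
    es = {f: (s.mean() - grand_mean) / sd for f, s in groups.items()}
    # Variance accounted for by Form (eta-squared from a one-way ANOVA)
    ss_total = ((all_scores - grand_mean) ** 2).sum()
    ss_between = sum(len(s) * (s.mean() - grand_mean) ** 2 for s in groups.values())
    return es, 100.0 * ss_between / ss_total
```

Forms of equal difficulty produce effect sizes near zero and a Form variance percentage near zero, which is the pattern the reported results (mean ES of 0.13; 3.13% of variance) approximate.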
Number of alternate forms of equal and controlled difficulty:
20
Decision Rules: Setting and Revising Goals
Grade  1 

Rating 
Specification of validated decision rules for when goals should be set or revised:
To get the most value from progress monitoring, aimswebPlus recommends the following: (1) establish a time frame, (2) determine the level of performance expected, and (3) determine the criterion for success. Typical time frames include the duration of the intervention or the end of the school year. An annual time frame is typically used when IEP goals are written for students who are receiving special education services. For example, aimswebPlus goals can be written as follows: In 34 weeks, the student will compare numbers and answer computational problems to earn a score of 30 points on Grade 4 Number Sense Fluency forms.
aimswebPlus provides several ways to define a level of expected performance. The goal can be based on:
 well-established performance benchmarks that can be linked to aimswebPlus measures via national percentiles (e.g., the link to state test performance levels) or total score (e.g., words read per minute in Grade 2);
 a national performance norm benchmark (e.g., the 50th national percentile is often used to indicate ongrade level performance);
 a local performance norm benchmark;
 or an expected or normative rate of improvement (ROI), such as the 85th national student growth percentile.
To use this last method (student growth percentile), the user begins by selecting the measure and baseline score, the goal date, the monitoring frequency (default is weekly), and a tentative goal score. The system automatically labels the ambitiousness of the goal as Insufficient (SGP below 50), Closes the Gap (SGP between 50 and 85), Ambitious (86 to 97), or Overly Ambitious (above 97). The user can then adjust the goal (or the goal date) in light of this feedback.
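The SGP-to-label mapping described above can be expressed as a simple function (a sketch of the stated cutoffs; the handling of scores exactly at 50, 85, and 97 is an assumption):

```python
def label_goal_ambitiousness(sgp):
    """Map a student growth percentile (SGP) to the aimswebPlus
    ambitiousness label for a tentative goal."""
    if sgp < 50:
        return "Insufficient"
    if sgp <= 85:
        return "Closes the Gap"
    if sgp <= 97:
        return "Ambitious"
    return "Overly Ambitious"
```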
For students in need of intensive intervention, aimswebPlus recommends setting performance goals that represent rates of growth between the 86^{th} and 97^{th} SGP (Ambitious). An SGP of 86 represents a growth rate achieved by just 15% of the national sample, which is why it is considered ambitious. However, it is reasonable to expect significantly higher than average growth when implementing effective, intensive intervention.
If the goal is set according to a benchmark based on raw scores or national or local norms, the aimswebPlus system still labels the ambitiousness of the goal in one of the four levels described above. If the goal corresponds to an Insufficient or Overly Ambitious rate of growth, users are advised to consider adjusting the goal. However, the user ultimately determines what growth rate is required on an individual basis.
With respect to the decision to revise a goal, aimswebPlus provides empirically based feedback about the student’s progress relative to the initial goal using the statistical tool described in our response to question B5 below. If the projected score at the goal date is fully Above Target (i.e., the 75% confidence interval for the student’s projected score at the goal date is entirely above the goal score), we recommend that the user consider raising the goal if the goal date is at least 12 weeks out. Otherwise, we recommend not changing the goal. On the other hand, if the upper end of the confidence interval on the projected score lies Below Target, we recommend either changing the intervention, increasing its intensity, or lowering the goal if the initial goal was Overly Ambitious.
Evidentiary basis for these rules:
As described above, users have flexibility in the method they use to set and revise goals in aimswebPlus. The SGP-based labeling of goals as Overly Ambitious, Ambitious, Closes the Gap, or Insufficient is intended to assist the user in choosing a goal, but is not an automatic goal-setting system. Likewise, the analytical system that generates a confidence interval for the student’s predicted performance at the goal date helps the user manage progress monitoring but does not make a decision about revising the goal. Certainly, a decision to lower a goal would rely primarily on the educator’s judgment, since the first consideration would be to change the intervention. No experiment has been conducted in which the aimswebPlus information related to setting and revising goals was provided for some students receiving intensive intervention but not others.
Decision Rules: Changing Instruction
Grade  1 

Rating 
Specification of validated decision rules for when changes to instruction should be made:
aimswebPlus applies a statistical procedure, based on linear regression, to the student’s progress monitoring scores in order to provide empiricallybased guidance about whether the student is likely to meet, fall short of, or exceed his/her goal. The calculation procedure (presented below) is fully described in the aimswebPlus Progress Monitoring Guide (Pearson, 2017). aimswebPlus users will not have to do any calculations—the online system does this automatically.
The decision rule is based on a 75% confidence interval for the student’s predicted score at the goal date. This confidence interval is studentspecific and takes into account the number and variability of progress monitoring scores and the duration of monitoring. Starting at the sixth week of monitoring (when there are at least four monitoring scores), the aimswebPlus report following each progress monitoring administration includes one of the following statements:
A. Below Target. Projected to not meet the goal. This statement appears if the confidence interval is completely below the goal score.
B. Above Target. Projected to meet or exceed the goal. This statement appears if the confidence interval is completely above the goal score.
C. Near Target. Projected score at goal date: Between (X) and (Y). This statement appears if the confidence interval includes the goal score, with X and Y indicating the bottom and top of the confidence interval, respectively.
If Statement A appears, the user has a sound basis for deciding that the current intervention is not sufficient and a change to instruction should be made. If Statement B appears, there is an empirical basis for deciding that the goal is not sufficiently challenging and should be increased. If Statement C appears, the student’s progress is not clearly different from the aimline, so there is not a compelling reason to change the intervention or the goal; however, the presentation of the confidence-interval range enables the user to see whether the goal is near the upper limit or lower limit of the range, which would signal that the student’s progress is trending below or above the goal.
A 75% confidence interval was chosen for this application because it balances the costs of the two types of decision errors. Incorrectly deciding that the goal will not be reached (when in truth it will be reached) has a moderate cost: an intervention that is working will be replaced by a different intervention. Incorrectly deciding that the goal may be reached (when in truth it will not be reached) also has a moderate cost: an ineffective intervention will be continued rather than being replaced. Because both kinds of decision errors have costs, it is appropriate to use a modest confidence level.
Calculation of the 75% confidence interval for the score at the goal date
 Calculate the trend line. This is the ordinary leastsquares regression line through the student’s monitoring scores.
 Calculate the projected score at the goal date. This is the value of the trend line at the goal date.
 Calculate the standard error of estimate (SEE) of the projected score at the goal date.
The means and sums are calculated across all of the completed monitoring administrations up to that date. Add 1.25 times the SEE to, and subtract it from, the projected score, and round to the nearest whole numbers.
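The calculation steps above can be sketched in Python. The exact SEE formula is not reproduced in this excerpt, so the sketch below assumes the standard OLS prediction-error formula for a new observation, which is consistent with the description of means and sums computed across completed administrations:

```python
import math

def projected_score_interval(weeks, scores, goal_week, z=1.25):
    """OLS trend line through the monitoring scores, projected to the goal
    date, with the 75% confidence interval (projected score +/- 1.25 SEE),
    rounded to whole numbers as described above."""
    n = len(scores)
    mx, my = sum(weeks) / n, sum(scores) / n
    sxx = sum((x - mx) ** 2 for x in weeks)
    sxy = sum((x - mx) * (y - my) for x, y in zip(weeks, scores))
    slope = sxy / sxx
    intercept = my - slope * mx
    projected = intercept + slope * goal_week
    # Residual variance around the trend line (n - 2 degrees of freedom)
    s2 = sum((y - (intercept + slope * x)) ** 2
             for x, y in zip(weeks, scores)) / (n - 2)
    # Assumed: standard error of a predicted value at the goal date
    see = math.sqrt(s2 * (1 + 1 / n + (goal_week - mx) ** 2 / sxx))
    return round(projected - z * see), round(projected + z * see)
```

The interval widens when monitoring scores are noisy, when few administrations have been completed, or when the goal date is far from the monitoring period, which matches the document's statement that the interval is student-specific.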
Evidentiary basis for these rules:
The decision rules are statistically rather than empirically based. The guidance statements that result from applying the 75% confidence interval to the projected score are correct probabilistic statements, under the assumption that the student’s progress to date can be described by a linear trend line. If the pattern of the student’s monitoring scores is obviously curvilinear, then the projected score based on a linear trend will likely be misleading. We provide training in the aimswebPlus Progress Monitoring Guide about the need for users to take nonlinearity into account when interpreting progress-monitoring data. Another assumption is that the student will continue to progress at the same rate as they have been progressing to date. This is an unavoidable assumption for a decision system based on extrapolating from past growth.
No controlled experimental study has been conducted to support the decision rules; however, an empirical study of actual progress monitoring results was undertaken to evaluate the accuracy of the decision rules at various points during the progress monitoring schedule. aimswebPlus Number Sense Fluency (NSF) and Oral Reading Fluency (ORF) progress monitoring data collected during the 2016–17 school year were used to evaluate the accuracy of the decision feedback. All students on a PM schedule who scored below the 30th national percentile on the fall benchmark and who had at least 20 PM administrations were included. Grades 2 and 3 were chosen. More than 1,000 students’ scores were used in each grade. Most administrations were conducted approximately weekly.
Because we did not have the student’s actual goal score, we generated a goal score based on the ROI that corresponds to a student growth percentile of 55. This level was chosen because it represents an average rate of improvement and it resulted in about 50% of the students meeting the goal. The goal score was computed as follows: Fall Benchmark Score + ROI_{55} × Weeks, where ROI_{55} is the ROI associated with an SGP of 55 and Weeks is the number of weeks between the baseline score (Fall Benchmark) and the Spring Benchmark. For each student, beginning with the 8th score and going through the last score, we computed the score feedback based on the rules described in the previous section. If the student was projected to be below target, an intervention change was deemed necessary and coded 1. Otherwise, the student was assigned a score of zero for that administration (no change needed).
We computed the accuracy of the decision to change interventions by comparing the decision to whether the student ultimately did not meet the goal score by the Spring Benchmark. Accuracy was computed as the percentage of decisions to change the intervention among all students who did not ultimately meet the goal. The results showed that decision accuracy improved with each successive administration, reaching 70%–75% by the 8th administration, 75%–80% by the 15th administration, and 90% by the 20th administration. This trend was replicated in each sample, providing evidence that the decision rules validly indicate when a change in intervention should be made because the student is unlikely to achieve the goal at the current rate of improvement.
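The goal-score formula and the accuracy computation described above can be sketched as follows (the data structures are hypothetical; the study's actual coding was performed within the aimswebPlus system):

```python
def goal_score(fall_benchmark, roi_55, weeks):
    """Goal score used in the accuracy study: Fall benchmark score plus
    the ROI at the 55th student growth percentile times the number of
    weeks from the Fall to the Spring benchmark."""
    return fall_benchmark + roi_55 * weeks

def decision_accuracy_at(admin_index, flagged, missed_goal):
    """flagged[s][k] is 1 if student s was projected Below Target at the
    k-th evaluated administration (0 otherwise); missed_goal[s] is True
    if the student did not reach the goal by the Spring benchmark.
    Accuracy is the percentage of goal-missers who had been flagged for
    an intervention change at that administration."""
    missers = [s for s, m in enumerate(missed_goal) if m]
    hits = sum(flagged[s][admin_index] for s in missers)
    return 100.0 * hits / len(missers)
```

Under this definition, accuracy rises as later administrations accumulate more data for the trend-line projection, which is the pattern the study reports.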
Administration Format
Grade  1 

Data 
Individual
Computer-administered*
*Examiner uses digital record form.
Administration & Scoring Time
Grade  1 

Data 
Administration Time:
1 minute
Scoring Time:
Scoring is automatically completed by system when student responses for a given measure are submitted by the examiner.
Scoring Format
Grade  1 

Data 
Scoring Format:
Computer-scored
ROI & EOY Benchmarks
Grade  1 

Data 
Specify the minimum acceptable rate of growth/improvement:
aimswebPlus provides student growth percentiles (SGP) by grade and initial performance level (Fall and Winter) for establishing growth standards. An SGP indicates the percentage of students in the national sample whose seasonal (or annual) rate of improvement (ROI) fell at or below a specified ROI. Separate SGP distributions are computed for each of five levels of initial (Fall or Winter) performance.
Goals are set in the system by selecting the measure and baseline score, the goal date, the monitoring frequency (default is weekly), and the goal score. When the user defines the goal score, the system automatically labels the ambitiousness of the goal. The rate of improvement needed to achieve the goal is computed and translated into an SGP. An SGP < 50 is considered Insufficient; an SGP between 50 and 85 is considered Closes the Gap; an SGP between 85 and 97 is considered Ambitious; and an SGP > 97 is considered Overly Ambitious. aimswebPlus recommends setting performance goals that represent rates of growth between the 85^{th} and 97^{th} SGP. However, the user ultimately determines what growth rate is appropriate on an individual basis.
Specify the benchmarks for minimum acceptable endofyear performance:
aimswebPlus allows users to select, from a range of end-of-year targets, the one that is most appropriate for their instructional needs.
aimswebPlus defines a meaningful target as one that is objective, quantifiable, and can be linked to a criterion that has inherent meaning for teachers. To establish a meaningful performance target using aimswebPlus tiers, the account manager (e.g., a school/district administrator) is advised to choose a target that:
● is linked to a criterion,
● is challenging and achievable,
● closes the achievement gap, and
● reflects historical performance results (when available).
Customers are also advised to give consideration to the availability of resources to achieve the goal.
The targets are based on spring reading or math composite score national percentiles. Twelve national percentile targets, ranging from the 15^{th} through the 70^{th} percentile in increments of 5, are provided. This range was chosen because it covers the breadth of passing rates on state assessments and the historical range of targets our customers typically use. The system provides a default spring performance target of the 30th national percentile. Targets can be set separately for Reading and Math.
The aimswebPlus Tiers Guide provides more detail to help customers define a high quality performance target. It also provides a stepbystep method to align spring performance targets to performance levels on state accountability tests.
Once a target is selected, the aimswebPlus system automatically identifies the fall (or winter) cut score that divides the score distribution into three instructional Tiers. Students above the highest cut score are in Tier 1 and have a high probability (80%–95%) of meeting the performance target; students between the upper and lower cut scores are in Tier 2 and have a moderate probability (40%–70%) of meeting the performance target; and students below the lower cut score are in Tier 3 and have a low probability (10%–40%) of meeting the performance target.
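The tier assignment described above can be expressed as a simple function (the cut-score values are inputs here, since the system derives them from the selected spring target; handling of scores exactly at a cut is an assumption):

```python
def tier_for_score(score, lower_cut, upper_cut):
    """Assign an instructional tier from the fall (or winter) cut scores
    that the system identifies for the selected spring target."""
    if score > upper_cut:
        return 1  # high probability (80%-95%) of meeting the target
    if score >= lower_cut:
        return 2  # moderate probability (40%-70%) of meeting the target
    return 3      # low probability (10%-40%) of meeting the target
```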
The system recommends that a progress monitoring schedule be defined for any student below the 25th national percentile in a given season, or in Tiers 2 or 3.