New Student Growth Measure For Accountability in Michigan

March 25, 2015
Photo Courtesy of Giulia Forsythe. Artwork by students in a workshop sponsored by the Brock University Centre for Pedagogical Innovation and directed by Dr. Joe Norris.

By Adrienne Hu

As testing season begins in April, the Michigan Department of Education (MDE) has introduced a new measure of student growth (see a related Green & Write post) to be used alongside the transitional M-STEP assessment for accountability purposes. The new measure, Student Growth Percentiles (SGPs), will replace the mix of measures the state has been using in its accountability system to gauge individual students' learning across grade levels over one or more years.

Old Measures In Michigan

Michigan has had no uniform measure of student growth across grade levels and subjects in the past. One reason is that adjacent-grade assessments are not always available to calculate the same group of students' growth in grades 3-8. For students whose previous year's data are not available, MDE has used school-wide measures of improvement from one cohort of students to the next to extrapolate student growth at a given grade level over time. These measures were used in two places in the state accountability system: 1) as one of the three indicators in calculating the Top-to-Bottom Rankings; and 2) to calculate Performance Level Change (PLC) on a 4-point scale, which determines districts' eligibility for Performance-Based Bonuses.
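
As a rough illustration of that cohort-to-cohort idea, the short Python sketch below compares a hypothetical school's grade-4 results across successive test years. The numbers and the use of mean scale scores are assumptions for illustration only; MDE's actual improvement and PLC calculations are more involved.

```python
# A minimal sketch of the cohort-to-cohort improvement idea: each year's
# grade-4 cohort is compared with the previous year's (different) cohort.
# All figures are hypothetical.

grade4_mean_score = {2011: 412.0, 2012: 415.5, 2013: 414.0, 2014: 419.2}

years = sorted(grade4_mean_score)
for prev, curr in zip(years, years[1:]):
    change = grade4_mean_score[curr] - grade4_mean_score[prev]
    print(f"{prev} -> {curr}: change = {change:+.1f}")
```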

High schools, by comparison, use a slightly different measure of student learning because students there are tested less frequently. For high schools, MDE has used slope calculations to estimate the annual improvement of successive cohorts of students at a given grade level over the most recent four years.
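
To make the slope calculation concrete, here is a minimal sketch that fits a least-squares line to four years of cohort-level mean scores and reads the estimated annual improvement off the slope. The data are hypothetical, and MDE's exact procedure is not spelled out in public documents, so treat this as an assumption-laden sketch.

```python
# Fit a line to four years of cohort means; the slope estimates annual
# improvement. Scores are illustrative, not real Michigan data.
import numpy as np

years = np.array([2011, 2012, 2013, 2014])
mean_scores = np.array([1088.0, 1092.5, 1091.0, 1097.5])

slope, intercept = np.polyfit(years, mean_scores, deg=1)
print(f"Estimated annual improvement: {slope:.2f} scale-score points/year")
```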

According to MDE, these measures of student growth work as long as assessments remain stable. When the state transitions to new assessments aligned to different standards, however, a new student growth measure is needed.

Student Growth Percentiles

The idea behind SGPs is to measure an individual student's growth relative to a group of students across the state who had comparable scores on previous tests in the same subject. Within each academic peer group, students are ranked in percentiles from 0 to 99, with 99 meaning that the student has demonstrated growth in that content area equal to or greater than 99% of peers with similar past test score patterns.

The calculation of SGPs requires two components: a student's current test score in the subject, and at least one previous Michigan test score for that student. Previous scores are used to group students into academic peer groups; current scores are used to rank students within each group and compute each student's percentile of relative learning gain.
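
The following Python sketch shows the group-then-rank mechanics in miniature. The data, the fixed prior-score bands, and the simple rank-based percentile are all illustrative assumptions; real SGP implementations typically estimate percentiles with quantile regression over students' full score histories rather than fixed bins.

```python
# A simplified sketch of SGP logic: bucket students into academic peer
# groups by prior score, then percentile-rank current scores within each
# group. All students and scores are hypothetical.
from collections import defaultdict

# (student_id, prior_score, current_score)
students = [
    ("a", 410, 435), ("b", 412, 420), ("c", 418, 452),
    ("d", 455, 470), ("e", 458, 461), ("f", 459, 490),
]

def peer_group(prior_score, band_width=10):
    """Put students with similar prior scores in the same peer group."""
    return prior_score // band_width

groups = defaultdict(list)
for sid, prior, curr in students:
    groups[peer_group(prior)].append((sid, curr))

sgp = {}
for members in groups.values():
    ranked = sorted(members, key=lambda m: m[1])  # rank by current score
    n = len(ranked)
    for rank, (sid, _) in enumerate(ranked):
        # 0-99 scale: share of the peer group scoring strictly below.
        sgp[sid] = int(100 * rank / n)

print(sgp)  # e.g. {'b': 0, 'a': 33, 'c': 66, 'e': 0, 'd': 33, 'f': 66}
```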

In this regard, even if the current tests differ from previous state tests in scale and in the standards they align with, SGPs can still measure student growth, because they never compare scores directly across tests. MDE also claims that SGPs unify the measurement for students in grades K-8 and grades 9-12, provided that scores on the new test are strongly correlated with scores on the old assessments.

Potential Caveats of the SGPs

The notion of academic peer groups with comparable test score patterns is not new. The much-debated value-added models (VAMs) (see, e.g., the statement from the American Statistical Association) use a similar idea: they predict a student's score from the performance of a peer group with similar backgrounds (including past performance, race, district, and many other contextual factors). SGPs, by comparison, consider only past test performance when grouping students. The difficulty is that grouping students into a so-called "comparable peer group" always raises issues. With VAMs, the grouping of students largely determines their predicted scores, and the difference between a student's predicted and actual test score is the indicator of how much that student has learned relative to peers with similar backgrounds.
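
For intuition, here is a minimal sketch of that predicted-versus-actual logic using a single prior-score predictor. Real VAMs use far richer models with many covariates; the data and the one-variable regression below are illustrative assumptions only.

```python
# Predict each student's current score from prior achievement, then treat
# actual minus predicted (the residual) as the growth indicator.
# All scores are hypothetical.
import numpy as np

prior = np.array([410.0, 412.0, 418.0, 455.0, 458.0, 459.0])
actual = np.array([435.0, 420.0, 452.0, 470.0, 461.0, 490.0])

# Least-squares fit: predicted = b0 + b1 * prior.
b1, b0 = np.polyfit(prior, actual, deg=1)
predicted = b0 + b1 * prior
residual = actual - predicted  # positive => grew more than predicted

for p, a, r in zip(prior, actual, residual):
    print(f"prior={p:.0f} actual={a:.0f} residual={r:+.1f}")
```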

With the SGPs, there is no such concern, because the measure does not rely on a statistical model to determine how well a student could have scored on an assessment within his or her academic peer group. Still, questions arise about the size of each academic peer group and the cut-off points in past test scores that define each group when SGPs are actually calculated. The implementation plan becomes fuzzier still if the state uses multiple past test results, which requires accounting for variation in individual performance over time.

MDE has not provided many details on how it will handle these implementation issues, nor has it mentioned whether this measure will be used in the state's teacher evaluation framework. For now, we only know that SGPs will be used in the state accountability system in one way or another.

 

Contact Adrienne Hu: husihua@msu.edu