I’m not sure how much credibility the Progress Reports at the heart of the NYC Department of Education’s accountability system have left. The elementary and middle school Reports issued earlier this fall were ridiculed for their inability to distinguish one school from another, since 97% of the schools received A’s or B’s (and 84% received A’s). Moreover, I showed that the student progress measures that make up 60% of a school’s overall score were highly unreliable from one year to the next. As long as these reports are tied to year-to-year changes in state test scores, they’re likely to be fatally flawed.
On Monday, the Department released the 2008-09 Progress Reports for high schools. Anna Phillips reported that Chancellor Joel Klein said that the high school Progress Reports were more stable and accurate than those for elementary and middle schools because they’re based on multiple measures. Huh? Welcome to the party, Chancellor Klein. I hate to tell you that measures such as credit accumulation are not necessarily accurate measures of a school’s contribution to student learning and development.
But the high school Progress Reports have a bigger problem. Three-quarters of a school’s score comes from a school’s location in relation to a group of 40 peer schools. The idea of comparing a school to peer schools is to create an “apples to apples” comparison. It’s actually a good feature of the Progress Reports that they seek to compare a given school to how schools across the city are doing as well as to how schools that serve similar students are performing.
But it only works if the right criteria are used to determine a school’s peer schools. On Wednesday, Jenny Medina and Robert Gebeloff broke a story in the New York Times showing that high schools with higher percentages of poor, black and Hispanic students received lower grades on the Progress Reports. In 2009, they wrote, the high schools that received A’s enrolled an average of 77% black and Hispanic students. In contrast, the high schools that received C’s, D’s and F’s enrolled an average of 91% black and Hispanic students. This pattern, found in 2007 and 2008 as well, suggested that the school grading system doesn’t adequately adjust for racial and ethnic differences among schools.
A high school’s peer index is based primarily on its students’ average eighth-grade scores on the state ELA and math exams (using the peculiar metric the DOE has developed for converting the exam’s scale scores into a 1.0 to 4.5 proficiency scale), minus two times the percentage of special education students and minus the percentage of overage students. A high school with an average proficiency of 3.10, 6% special education students, and 12% overage students would have a peer index of 2.86. One with an average proficiency of 3.70, 2% special education students, and 5% overage students would have a peer index of 3.61.
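The formula just described can be written out directly. Here is a minimal sketch of that calculation (the function name is mine, not the DOE’s), which reproduces the two worked examples above:

```python
def peer_index(avg_proficiency, pct_special_ed, pct_overage):
    """Approximate the DOE peer index as described in the text:
    average 8th-grade proficiency (on the DOE's 1.0-4.5 scale),
    minus two times the special education share, minus the
    overage share. Shares are fractions (0.06 for 6%)."""
    return avg_proficiency - 2 * pct_special_ed - pct_overage

# The two examples from the text:
print(round(peer_index(3.10, 0.06, 0.12), 2))  # 2.86
print(round(peer_index(3.70, 0.02, 0.05), 2))  # 3.61
```

Note how heavily the result tracks the proficiency term: the special education and overage adjustments shift the index by only a few hundredths of a point in typical cases, which foreshadows the dominance problem discussed below.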
Although the formula tries to take special education and overage status into account, I suspect that its designers were unaware that it is dominated by the average proficiency value, because there is far more variance from school to school in average proficiency than in special education and overage status. But a larger question is: why these factors and not others? Why not the percentage of English Language Learners (ELLs)? Why not the percentage of students eligible for a free or reduced-price lunch? Why not the racial/ethnic make-up of the school? (And when is the DOE going to wise up to the fact that it can’t treat black students as equivalent to Hispanic students, and Asian students as equivalent to white students? These groups have different learning trajectories.)
And why stop there? If the goal is to try to isolate the impact of the school on student performance and progress, then logic would dictate that we should seek to control for all factors that are prior to selection into one school versus another, and potentially related to students’ outcomes. That includes a range of demographic criteria, to be sure. But there are at least two other factors that I think ought to be taken into account. The first is school size. Schools in New York City generally have little control over their size, and if small schools provide certain advantages for students, then we should compare small schools to small schools and large schools to large schools. The second is per-pupil expenditures. Even in the Fair Student Funding era, there are disparities in per-pupil expenditures across schools that are not accounted for by demographic differences in the students attending different schools. I’ve spoken to principals who are indignant that their peer schools have higher expenditures, and yet they are being held to the same performance criteria.
Does all this matter? You bet. Let’s look at just one of the many measures in the high school Progress Reports: the percentage of second-year students accumulating ten or more credits. (The pattern I’m going to describe is found for many of the performance and progress measures in the Progress Reports.) Citywide, the 2009 average was 72%, with a standard deviation of 15%. Schools are compared to their “peer range,” a school’s location in relation to its lowest and highest peers. Citywide, schools were, on average, 59% of the distance between the lowest and highest peers on their percentages of second-year students accumulating ten or more credits.
But some schools were advantaged in these calculations, and others disadvantaged, even though the peer horizon scores are explicitly designed to compare “apples to apples.” The figure below compares schools in the lowest quarter of a given demographic feature to schools in the top quarter. Schools with high concentrations of black and Hispanic students; large schools; schools with a higher proportion of special education students; and schools with more English Language Learners all score lower relative to their “peer” schools than do other schools.
What these figures suggest is that New York City’s high school Progress Reports systematically penalize some schools and reward others. So when you see the DOE touting the superiority of the progress made by the small schools opened during the Bloomberg/Klein era, remember that it’s no accident: it’s built into the accountability system.