Few principals statewide gave teachers low marks in first round of evaluations

Few principals across New York state gave their teachers low scores in the 2012-13 school year as they implemented a new evaluation system that calls for in-depth classroom observations, according to data released by the state on Thursday.

Ninety-eight percent of teachers statewide received top ratings, “effective” or “highly effective,” on the 60 percent of their evaluations made up primarily of observations, the data shows. Less than 1 percent of teachers earned the lowest rating on their observations.

Nearly nine times as many teachers, or about 4 percent, received low ratings on the 40 percent of their evaluations that use a combination of state and local tests. The difference is likely to spark a debate over what parts of an evaluation should be used to measure teacher quality—and what parts are the most accurate.

Overall, more than 123,000 teachers and 3,000 principals received ratings in the 2012-13 school year. The results did not include New York City’s 75,000 teachers and 1,100 principals because of a labor dispute that caused the city to implement its system a year late.

In total, 94 percent of teachers and 92 percent of principals earned one of the top two ratings, and the high marks quickly drew criticism from supporters of stronger accountability measures.

The data, which includes a breakdown of how teachers and principals were rated based on their districts and schools, as well as their subjects and grades taught, is the fullest picture yet of how one of the state’s biggest efforts to improve teacher quality played out in its first year. The release does not include data from New York City, which was the only district that did not implement teacher evaluations until the 2013-14 school year.

The average statewide distribution masked a far more variable mix in urban districts, according to Capital New York, which reported that the poor urban districts of Rochester and Buffalo rated 40 percent of teachers in the lowest two categories of “developing” and “ineffective.”

The new teacher evaluation system was meant to better distinguish teacher quality, and its supporters said that the evaluations would help resolve the disconnect between teachers’ almost uniformly high ratings and the low number of students who graduate high school prepared for college-level coursework. The evaluations’ proponents also said they would help districts root out the lowest-performing teachers by allowing districts to use ratings to fire or deny job protections.

Whether schools are any closer to achieving either goal remains unclear.

Teachers unions have fought the use of student test scores in the evaluation system, arguing that they aren’t an accurate reflection of teaching skills. Critics have said scores earned in recent years are especially unreliable because they’ve come as the state has adopted tests aligned to new learning standards.

“The data does not support any valid conclusions about teachers, students, schools, or school districts because of the flawed implementation of the Common Core,” New York State United Teachers spokesperson Carl Korn said.

But supporters of tougher teacher evaluations said the data was proof that state test scores should play an even bigger role in evaluations. Many districts had no teachers who were rated ineffective or developing, a sign that the State Education Department should have greater oversight in determining how districts structure the evaluations, StudentsFirstNY Executive Director Jenny Sedlis said.

“Any part of the teacher evaluation system that finds zero percent of teachers to be ineffective, when less than a third of students are on grade level, raises serious questions,” Sedlis said.

State Education Department spokesperson Dennis Tompkins did not say in a statement whether the results reflected an accurate picture of teacher quality across the state. He noted that since 80 percent of evaluation plans were negotiated between school districts and their local unions, the results could vary.

“It is important to remember that each APPR plan is locally negotiated and unique,” he said.