Add one more point of critique to the city’s Teacher Data Reports: Experts and educators are worried about the bell curve along which the teacher ratings fell.
Like the distribution of teachers by rating across types of schools, the distribution of scores among teachers was essentially built into the “value-added” model that the city used to generate the ratings.
The long-term goal of many education reformers is to create a teaching force in which nearly all teachers are high-performing. However, in New York City’s rankings — which rated thousands of teachers who taught in the system from 2007 to 2010 — teachers were graded on a curve. That is, under the city’s formula, some teachers would always be rated as “below average,” even if student performance increased significantly in all classrooms across the city.
The ratings were based on a complex formula that predicts how students will do — after taking into account background characteristics — on standardized tests. Teachers received scores based on students’ actual test results measured against the predictions. They were then divided into five categories. Half of all teachers were rated as “average,” 20 percent were “above average,” and another 20 percent were “below average.” The remaining 10 percent were divided evenly between teachers rated as “far above average” and “far below average.”
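The forced-curve effect described above can be sketched in a few lines of code. This is a hypothetical illustration, not the city's actual formula: it simply ranks teachers by score and buckets them at the 5/25/75/95 percentile cutoffs, showing that the category shares stay fixed no matter how high the underlying scores are.

```python
# Hypothetical sketch of a forced-curve rating scheme: ratings are
# assigned by rank, so the shares (5/20/50/20/5 percent) hold even if
# every teacher's underlying score improves.
def assign_ratings(scores):
    """Rank teachers by score and bucket them by percentile."""
    n = len(scores)
    ranked = sorted(range(n), key=lambda i: scores[i])  # low to high
    ratings = [None] * n
    # Cumulative percentile cutoffs for the five categories.
    cutoffs = [
        (0.05, "far below average"),
        (0.25, "below average"),
        (0.75, "average"),
        (0.95, "above average"),
        (1.00, "far above average"),
    ]
    for rank, i in enumerate(ranked):
        pct = (rank + 1) / n
        for cutoff, label in cutoffs:
            if pct <= cutoff:
                ratings[i] = label
                break
    return ratings

# Even if every score rises uniformly, the shares are unchanged:
scores = [s + 10 for s in range(100)]  # 100 uniformly improved scores
ratings = assign_ratings(scores)
print(ratings.count("below average"))  # 20 of 100 teachers
```

Under this kind of scheme, 20 percent of teachers land in "below average" by construction, which is the critics' point.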
IMPACT, the District of Columbia’s teacher-evaluation system, also uses a set distribution for teacher ratings. As sociologist Aaron Pallas wrote in October 2010, “by definition, the value-added component of the D.C. IMPACT evaluation system defines 50 percent of all teachers in grades four through eight as ineffective or minimally effective in influencing their students’ learning.”
After years of criticism that its school report cards are too difficult for most parents to understand, the city is redesigning the report cards that give each school a letter grade.
Starting this fall, the Department of Education will produce one-page progress reports that contain only the most important pieces of performance data about each school. The new reports are meant to deliver complicated accountability information "in a more parent-friendly way," according to Phil Vaccaro, a representative of the department's accountability office. Vaccaro presented a draft of the new report to the city school board yesterday.
The "progress report family summary" has the same content as the data-packed two-pager currently produced for each school, but a different design. For example, instead of eight different numbers describing student progress, there is just one: the proportion of students who made a year's worth of progress in a single year.
A member of the school board, Dmytro Fedkowskyj, worked with the department to develop the new reports. "We need to present them in ways parents can understand," he said, adding that parents who misunderstood the reports could make misinformed school choices.
Critics of the progress reports said the family summary might actually be too simple.
A screenshot (including a caption) from today's online press conference about state test scores, featuring State Education Commissioner Richard Mills and Board of Regents Chancellor Merryl Tisch.
More students across New York State scored proficient on the state reading and writing test this year than ever before, and gains by black and Hispanic students drove the improvements. The difference between white and black students' average scores is now at 18 points, down from 28 in 2006.
More students in New York City scored proficient, too; proficiency rose 18 percentage points to 69 percent from 51 percent in 2006. According to the city Department of Education, the difference between the percentage of black and Hispanic children who scored proficient on the test and the percentage of white students who did now stands at 22 percentage points, down from more than 29 three years ago.
State school leaders described the gains across New York as "moderate" because much of the increase was driven by a greater proportion of children just squeaking past the proficiency cutoff, State Education Commissioner Richard Mills explained during a press conference this morning.
That assessment comes from looking at the actual scale scores students received, rather than at the percentage of students deemed proficient. Scale scores are considered the most statistically useful way to evaluate test score gains. (Aaron Pallas has written about this on GothamSchools.)
Mills explained the distinction by providing three ways to look at this year's sixth-grade scores. The first is by looking purely at what proportion of students in the grade tested at basic proficiency. According to that metric, 81 percent of this year's sixth-graders met proficiency, compared to 60.4 percent of sixth-graders in 2006, the first year of a new statewide curriculum and testing program.
Looking at proficiency over time, 69 percent of the children who were third-graders in 2006 met standards; those same children posted an 81 percent proficiency rate as sixth-graders this year. But the scale scores of that cohort actually dropped slightly over the same period, from 669 to 667.
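The divergence Mills described can be shown with a toy example. The scores below are made up for illustration (they are not real state data, and 650 is an assumed proficiency cutoff): when more students cluster just above the bar while top scores slip, the percent proficient rises even as the mean scale score falls.

```python
# Illustration with made-up scores: percent proficient can rise while
# the mean scale score falls. CUTOFF is an assumed cutoff, not the
# state's actual one.
CUTOFF = 650

def percent_proficient(scores):
    return 100 * sum(s >= CUTOFF for s in scores) / len(scores)

def mean(scores):
    return sum(scores) / len(scores)

year1 = [640, 645, 660, 700, 700]  # 3 of 5 proficient, mean 669
year2 = [651, 652, 660, 680, 692]  # 5 of 5 proficient, mean 667

print(percent_proficient(year1), mean(year1))  # 60.0 669.0
print(percent_proficient(year2), mean(year2))  # 100.0 667.0
```

Judged by proficiency alone, this toy cohort looks dramatically improved; judged by average scale score, it slipped, which mirrors the 669-to-667 pattern in the actual sixth-grade cohort.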