Jonah Rockoff

Teaching teachers

New York

State to use a "value-added" growth model without calling it that

State test scores won't count more toward the evaluations of elementary and middle school teachers next year, according to an amended proposal that a Board of Regents committee passed unanimously on Monday.

The proposed model, which was formally approved on Tuesday, included a methodology for calculating student growth that was nearly identical to the "value-added" model that State Education Commissioner John King brought to the board in April. Both models add new data points to the formula used to approximate how much each teacher has contributed to students' growth. But under state law, any model termed "value-added" would have required, controversially, that its weight increase from 20 to 25 percent on some teacher evaluations.

King's alternative this month was for the state to adopt an "enhanced growth model" that adds virtually all of the same data points but doesn't carry the "value-added" moniker. Spurning the name allows the state to avoid increasing the weight of test scores until all districts have at least one year of implementation under their belts, something the state teachers union has asked for.

"I would have thought that adding all these factors would qualify as 'value-added,' but this distinction was always opaque," said Jonah Rockoff, a Columbia University economist who advised the state on its methodology. "If the commissioner wants to keep the weight at 20 percent for another year, then staying within the 'student growth' framework seems like the simplest way to do it."
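The state's actual formula and its added data points aren't spelled out here. As a rough, hypothetical sketch of the general idea, the snippet below fits a bare growth model (prior score only) and a "value-added"-style model (prior score plus an extra student-level covariate), estimating each teacher's effect as the average residual of that teacher's students. Everything in it, from the covariates to the estimation method, is an assumption for illustration, not the state's methodology.

```python
# Hypothetical sketch, NOT New York State's formula: the same residual-based
# teacher estimate, computed with and without extra student-level "data points."
import numpy as np

rng = np.random.default_rng(0)
n_students, n_teachers = 1000, 25
prior = rng.normal(0, 1, n_students)        # last year's test score
poverty = rng.binomial(1, 0.4, n_students)  # assumed extra covariate
teacher = rng.integers(0, n_teachers, n_students)

# Simulated outcome: poverty depresses growth independently of the teacher.
score = 0.7 * prior - 0.3 * poverty + rng.normal(0, 0.5, n_students)

def teacher_effects(covariates, y, ids):
    """Average residual per teacher after regressing scores on covariates."""
    X = np.column_stack([np.ones(len(y))] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return np.array([resid[ids == t].mean() for t in range(n_teachers)])

growth_only = teacher_effects([prior], score, teacher)
value_added = teacher_effects([prior, poverty], score, teacher)
```

On this framing, the two labels differ only in which columns enter the regression, which is why a change of name rather than of arithmetic could determine the legal weight.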
New York

Why it's no surprise high- and low-rated teachers are all around

The New York Times' first big story on the Teacher Data Reports, which were released last week, contained what sounded like great news: After years of studies suggesting that the strongest teachers were clustered at the most affluent schools, top-rated teachers now seemed as likely to work on the Upper East Side as in the South Bronx. Teachers with high scores on the city's rating system could be found "in the poorest corners of the Bronx, like Tremont and Soundview, and in middle-class neighborhoods," "in wealthy swaths of Manhattan, but also in immigrant enclaves," and "in similar proportions in successful and struggling schools," the Times reported. Education analyst Michael Petrilli called the findings "jaw-dropping news" that "upends everything we thought we knew about teacher quality."

Except it's not really news at all. Value-added measurements like the ones used to generate the city's Teacher Data Reports are designed precisely to control for differences in neighborhood, student makeup, and students' past performance. The adjustments mean that teachers are effectively ranked relative to other teachers of similar students. Teachers who teach similar students, then, are guaranteed to show a full range of scores, from high to low. And, unsurprisingly, teachers in the same school or neighborhood often teach similar students.

"I chuckled when I saw the first [Times story], since the headline pretty much has to be true: Effective and ineffective teachers will be found in all types of schools, given the way these measures are constructed," said Sean Corcoran, a New York University economist who has studied the city's Teacher Data Reports.
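Corcoran's point can be made concrete with a toy simulation (invented numbers, not the city's actual model): because a value-added score is essentially a residual, it is centered near zero within any group of teachers serving similar students, so top and bottom ratings must show up in affluent and high-poverty schools alike.

```python
# Toy illustration, not the city's model: adjusting for student makeup
# centers each group of comparable teachers, so high and low ratings
# appear in rich and poor schools in similar proportions.
import numpy as np

rng = np.random.default_rng(1)
va = {}
for school_type, base in (("affluent", 1.0), ("high_poverty", -1.0)):
    # 40 teachers x 30 students; affluent schools post higher raw gains.
    gains = base + rng.normal(0, 0.5, size=(40, 30))
    raw = gains.mean(axis=1)
    # The adjustment: rank each teacher against teachers of similar
    # students (here, the same school type), which centers both groups.
    va[school_type] = raw - raw.mean()

pooled = np.concatenate(list(va.values()))
cutoff = np.quantile(pooled, 0.75)  # pooled "top quartile" threshold
for school_type, scores in va.items():
    print(f"{school_type}: {(scores > cutoff).sum()} of 40 teachers in top quartile")
```

Despite the large gap in raw gains between the two school types, each ends up with roughly a quarter of its teachers in the pooled top quartile, which is the Times' "jaw-dropping" finding by construction.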
New York

Why we won't publish individual teachers' value-added scores

Tomorrow's planned release of 12,000 New York City teacher ratings raises questions for the courts, parents, principals, bureaucrats, teachers — and one other party: news organizations. The journalists who requested the release of the data in the first place now must decide what to do with it all. At GothamSchools, we joined other reporters in requesting to see the Teacher Data Reports back in 2010.

But you will not see the database here, tomorrow or ever, as long as it is attached to individual teachers' names.

The fact is that we feel a strong responsibility to report on the quality of the work the 80,000 New York City public school teachers do every day. This is a core part of our job and our mission. But before we publish any piece of information, we always have to ask a question: Does the information we have do a fair job of describing the subject we want to write about? If it doesn't, is there any additional information — context, anecdotes, quantitative data — that we can provide to paint a fuller picture?

In the case of the Teacher Data Reports, "value-added" assessments of teachers' effectiveness that were produced in 2009 and 2010 for reading and math teachers in grades 3 to 8, the answer to both those questions was no. We determined that the data were flawed, that the public might easily be misled by the ratings, and that no amount of context could justify attaching teachers' names to the statistics. When the city released the reports, we decided, we would write about them, and maybe even release Excel files with names wiped out. But we would not enable our readers to generate lists of the city's "best" and "worst" teachers or to search for individual teachers at all.

It's true that the ratings the city is releasing might turn out to be powerful measures of a teacher's success at helping students learn. The problem lies in that word: might.
New York

Getting an F or a D led schools to assign fewer essays, projects

New York

For most students, no benefit to a school's F grade, study finds

A study examining whether getting poor grades on city progress reports prompted schools to improve their students' test scores found little evidence of such a boost.

The study, released today by the conservative-leaning Manhattan Institute, asked the question by comparing schools whose progress report raw scores were roughly the same but just different enough to earn different letter grades. In fact, the two groups showed about the same amount of progress, except in fifth-grade math, where students in failing schools made "significant and substantial improvement" compared to their peers in schools that had been assigned a grade of D, according to the study.

The progress reports assign letter grades to schools based primarily on improvements in students' test scores. Since the first reports were released a year ago, the program has been the subject of sustained criticism: Parents and teachers have complained about unfair stigmatization of good schools, and statisticians have charged that the reports are driven as much by error as by actual school improvement.

The study's architect, Manhattan Institute senior fellow Marcus Winters, called his findings "mixed-positive" in favor of the progress reports. Those findings were the subject this morning of a panel discussion sponsored by the Manhattan Institute featuring Winters, Columbia University economist Jonah Rockoff, and two officials from the Department of Education's accountability office, including its CEO, James Liebman.
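The comparison Winters describes is a regression-discontinuity-style design: schools whose raw scores land just below a letter-grade cutoff should be essentially interchangeable with schools just above it, so any later gap in test-score growth can be credited to the grade itself. The sketch below illustrates that logic with invented data and a hypothetical cutoff; it is not the study's data or its exact method.

```python
# Toy regression-discontinuity-style comparison (invented data and a
# hypothetical D/F cutoff; NOT the study's actual data or estimator).
import numpy as np

rng = np.random.default_rng(2)
n = 5000
raw = rng.uniform(0, 100, n)  # progress-report raw score
CUTOFF = 40.0                 # hypothetical D/F boundary
got_f = raw < CUTOFF

# Assumption for illustration: an F raises next year's growth by 0.1 sd,
# and growth is otherwise unrelated to the raw score.
growth = 0.10 * got_f + rng.normal(0, 0.5, n)

# Schools within a narrow band of the cutoff are roughly comparable, so a
# simple difference in means estimates the effect of the grade itself.
# (A full analysis would also model any trend in the running variable.)
band = np.abs(raw - CUTOFF) < 5
effect = growth[band & got_f].mean() - growth[band & ~got_f].mean()
print(f"estimated effect of an F near the cutoff: {effect:+.2f} sd")
```

The design's appeal is that it compares near-identical schools rather than F schools to A schools, which is also why its findings speak only to schools close to a grade boundary.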