Tennessee’s teacher evaluation system improving, state report says

Tennessee’s teacher evaluation system is more accurate than ever in measuring teacher quality, according to a report released Thursday by the state Department of Education.

“Now three years into our evaluation system, we see clear indications that the system itself is improving rapidly through the dedicated work of educators across the state,” the 36-page report concludes. “Most importantly, we see significant signs that students are learning more, and that Tennessee is making progress to move itself into the top half of national performance and provide the education that our students and their families expect and deserve.”

The evaluation system was a keystone of the state’s sweeping 2010 education reforms that helped secure a $500 million federal grant called First to the Top. The state substantially altered its teacher evaluations by requiring them annually instead of every five years and by relying more heavily on student test scores.

The state’s report, based primarily on the 2013-14 school year, is the third on the new evaluation system since its implementation in 2011-12. The evaluation process has been tweaked every year since its launch based on teacher feedback.

Among the findings, state education leaders are touting a higher correlation between teachers’ value-added scores (TVAAS), which estimate how much teachers contribute to students’ growth on statewide assessments, and scores from classroom observations conducted primarily by administrators. Value-added scores and observation scores still don’t always match up — meaning some teachers have low value-added scores and high observation scores, and vice versa — but Assistant Education Commissioner Paul Fleming said that’s natural.

“The goal is not perfect alignment,” Fleming said Wednesday as the state prepared to release its report.

Because good teachers sometimes have low value-added scores, those scores count for only a fraction of a teacher’s overall evaluation score, he said.

Still, misalignment between observation scores and value-added data has been a priority for the state Education Department. During the 2013-14 school year, schools with the most teachers who performed poorly according to value-added data but well or moderately well in observations had the option of using a state coach to help them refine their observation practices and narrow the gap between the two measures. (A recent study by Vanderbilt’s Peabody College of Education and Human Development shows that principals rely on observation data far more than on value-added data.)

Assistant Education Commissioner Paul Fleming (G.Tatter)

“What’s powerful, these team coaches, because they have been former teachers and principals themselves in these districts, have had significant results . . . in helping schools not just feel better, but give more accurate and helpful feedback to teachers,” Fleming said.

The department also saw an uptick in the number of teachers of non-tested subjects using portfolios, rather than school-wide test scores, as part of their evaluations. For most Tennessee teachers, student test scores count for at least 25 percent of their evaluations, even if they teach non-tested subjects. But in 2013-14, 11 districts — up from only three the year before — used one of three portfolio models in which teachers show student growth by submitting student work, instead of being graded on the test scores of their co-workers. The state has approved portfolio models for teachers of world languages, fine arts and physical education.

Although the report says teachers who use portfolios have “expressed great appreciation for a model that treats them as content experts and allows them to be judged based on the merits of work that happens in their own classrooms,” Fleming said few districts have used portfolios because they are not well understood and are more time-consuming, especially for smaller districts.

“We’re trying to make sure districts have as much information as possible to take that route if they choose to,” Fleming said.

Department officials aim to expand the portfolio model and lessen reliance on school-wide test scores. In 2013-14, 48 percent of Tennessee teachers were evaluated in part by individual growth measures, which include either value-added data or portfolios. That number could increase to 70 percent, according to department estimates. “Individual growth measures allow districts, schools and teachers to better identify how teachers are performing in every subject area, so that they can receive the support and recognition they need,” the report says.

The use of test scores for teachers of non-tested subjects has drawn the ire of the Tennessee Education Association, as well as some legislators.

Data for the report was collected through the Tennessee Teaching, Empowering, Leading and Learning (TELL) survey of 61,000 teachers and the Tennessee Consortium of Research, Evaluation, and Development survey of about 25,000 teachers, as well as in-person interviews with more than 3,000 educators.

Contact Grace Tatter at gtatter@chalkbeat.org.
