teacher prep

Three of Tennessee’s largest teacher training programs improve on state report card

PHOTO: Nic Garcia

Three of Tennessee’s 10 largest teacher training programs increased their scores on a state report card that seeks to capture how well new teachers are being prepared for the classroom based on state goals.

The University of Tennessee-Knoxville became the first public university to achieve a top score under the State Board of Education’s new grading system, now in its second year. Middle Tennessee State University and East Tennessee State University also improved their scores.

But most of Tennessee’s 39 programs scored the same in 2017 as in 2016. Those included the University of Memphis and Austin Peay State University.

And more than 40 percent landed in the bottom tiers, including the state’s largest, Tennessee Technological University in Cookeville, along with other sizable ones like the University of Tennessee’s programs in Chattanooga and Martin.

The report card, released on Thursday, is designed to give a snapshot of the effectiveness of the state’s teacher preparation programs, a front-burner issue in Tennessee since a 2016 report said that most of them aren’t adequately equipping teachers to be effective in the classroom. Teacher quality is important because years of research show that teachers matter more to student achievement than any other aspect of schooling.

State officials say the top-tier score by UT-Knoxville is significant — not only because it’s a public school but because it was the state’s sixth largest training program in 2017. “As one of the state’s flagship public institutions, UTK is setting the bar for how to effectively train teachers at scale,” said Sara Heyburn Morrison, executive director of the State Board. She cited the school’s “model internship program” and “close partnerships with local districts.”

In the previous year’s report card, the top scores only went to small nontraditional programs like Memphis Teacher Residency and Teach For America and private universities such as Lipscomb in Nashville and Union in Jackson.

That pattern recently prompted a call to action by Mike Krause, executive director of the Tennessee Higher Education Commission. He told state lawmakers last month that it’s time to put traditional programs at public institutions under a microscope, especially since those colleges and universities produce 90 percent of the state’s new teachers.

“Sometimes an undue amount of discussion happens around alternative new teacher programs like Teach For America or the New Teacher Project …,” he said. “If we’re going to move the needle (on teacher training), it’s going to happen at the campus of a college or university.”

Tennessee has graded programs that train teachers since 2009 but redesigned its report card in 2016 to provide a clearer picture of their effectiveness for stakeholders ranging from aspiring teachers to hiring principals. The criteria include a program’s ability to recruit a strong, racially diverse group of teachers-in-training; its production of teachers for high-need areas such as special education and secondary math and science; and its candidates’ placement and retention in Tennessee public schools. Another metric is how effective those teachers are in classrooms based on their evaluations, including state test scores that show student growth.

Not everybody is satisfied with the report card’s design, though.

“It’s a real challenge to capture in one report the complexity of preparing our candidates to be teachers, especially when you’re comparing very different programs across the state,” said Lisa Zagumny, dean of the College of Education at Tennessee Tech, which increased its points in 2017 but not enough to improve its overall score.

She said Tech got dinged over student growth scores, but that only a third of its graduates went on to teach in tested subjects. “And yet our observation scores are very high,” added Associate Dean Julie Baker. “We know we’re doing something right because our candidates who go on to teach are being scored very high by their principals.”

Racial diversity is another challenge for Tech, which is located in the Upper Cumberland region. “The diversity we serve is rural, first-generation college students who are typically lower socioeconomically,” said Zagumny.

Tennessee is seeking to recruit a more racially diverse teacher force because of research showing the impact of having teachers who represent the student population they are serving. Of candidates who completed Tennessee’s programs in 2016, only 14 percent were people of color, compared with 36 percent of the state’s student population.

Morrison said this year’s report card includes a new “highlights page” that lets programs share a narrative about the work they’re doing.

You can search for schools below, find the new 2017 scores, and compare them with the previous year. A 1 is the lowest performance category and a 4 is the highest. You can sort the list based on performance and size. This is the state’s first report card based on three years of data.

a high-stakes evaluation

The Gates Foundation bet big on teacher evaluation. The report it commissioned explains how those efforts fell short.

PHOTO: Brandon Dill/The Commercial Appeal
Sixth-grade teacher James Johnson leads his students in a gameshow-style lesson on energy at Chickasaw Middle School in 2014 in Shelby County. The district was one of three that received a grant from the Gates Foundation to overhaul teacher evaluation.

Barack Obama’s 2012 State of the Union address reflected the heady moment in education. “We know a good teacher can increase the lifetime income of a classroom by over $250,000,” he said. “A great teacher can offer an escape from poverty to the child who dreams beyond his circumstance.”

Bad teachers were the problem; good teachers were the solution. It was a simplified binary, but the idea and the research it drew on had spurred policy changes across the country, including a spate of laws establishing new evaluation systems designed to reward top teachers and help weed out low performers.

Behind that effort was the Bill and Melinda Gates Foundation, which backed research and advocacy that ultimately shaped these changes.

It also funded the efforts themselves, specifically in several large school districts and charter networks open to changing how teachers were hired, trained, evaluated, and paid. Now, new research commissioned by the Gates Foundation finds scant evidence that those changes accomplished what they were meant to: improve teacher quality or boost student learning.  

The 500-plus-page report by the RAND Corporation, released Thursday, details the political and technical challenges of putting complex new systems in place and the steep cost — $575 million — of doing so.

The post-mortem will likely serve as validation to the foundation’s critics, who have long complained about Gates’ heavy influence on education policy and what they call its top-down approach.

The report also comes as the foundation has shifted its priorities away from teacher evaluation and toward other issues, including improving curriculum.

“We have taken these lessons to heart, and they are reflected in the work that we’re doing moving forward,” the Gates Foundation’s Allan Golston said in a statement.

The initiative did not lead to clear gains in student learning.

At the three districts and four California-based charter school networks that took part in the Gates initiative — Pittsburgh; Shelby County (Memphis), Tennessee; Hillsborough County, Florida; and the Alliance College-Ready, Aspire, Green Dot, and Partnerships to Uplift Communities networks — results were spotty. The trends over time didn’t look much better than those at similar schools in the same state.

Several years into the initiative, there was evidence that it was helping high school reading in Pittsburgh and at the charter networks, but hurting elementary and middle school math in Memphis and among the charters. In most cases there were no clear effects, good or bad. There was also no consistent pattern of results over time.

A complicating factor here is that the comparison schools may also have been changing their teacher evaluations, as the study spanned from 2010 to 2015, when many states passed laws putting in place tougher evaluations and weakening tenure.

There were also lots of other changes going on in the districts and states — like the adoption of Common Core standards, changes in state tests, and the expansion of school choice — making it hard to isolate cause and effect. Studies in Chicago, Cincinnati, and Washington, D.C., have found that evaluation changes had more positive effects.

Matt Kraft, a professor at Brown who has extensively studied teacher evaluation efforts, said the disappointing results in the latest research couldn’t simply be chalked up to a messy rollout.

These “districts were very well poised to have high-quality implementation,” he said. “That speaks to the actual package of reforms being limited in its potential.”

Principals were generally positive about the changes, but teachers had more complicated views.

From Pittsburgh to Tampa, Florida, the vast majority of principals agreed at least somewhat that “in the long run, students will benefit from the teacher-evaluation system.”

Source: RAND Corporation

Teachers in district schools were far less confident.

When the initiative started, a majority of teachers in all three districts tended to agree with the sentiment. But several years later, support had dipped substantially. This may have reflected dissatisfaction with the previous system — the researchers note that “many veteran [Pittsburgh] teachers we interviewed reported that their principals had never observed them” — and growing disillusionment with the new one.

Majorities of teachers in all locations reported that they had received useful feedback from their classroom observations and changed their habits as a result.

At the same time, teachers in the three districts were highly skeptical that the evaluation system was fair — or that it made sense to attach high-stakes consequences to the results.

The initiative didn’t help ensure that poor students of color had more access to effective teachers.

Part of the impetus for evaluation reform was the idea, backed by some research, that black and Hispanic students from low-income families were more likely to have lower-quality teachers.  

But the initiative didn’t seem to make a difference. In Hillsborough County, inequity expanded. (Surprisingly, before the changes began, the study found that low-income kids of color actually had similar or slightly more effective teachers than other students in Pittsburgh, Hillsborough County, and Shelby County.)

Districts put in place modest bonuses to get top teachers to switch schools, but the evaluation system itself may have been a deterrent.

“Central-office staff in [Hillsborough County] reported that teachers were reluctant to transfer to high-need schools despite the cash incentive and extra support because they believed that obtaining a good VAM score would be difficult at a high-need school,” the report says.

Evaluation was costly — both in terms of time and money.

The total direct cost of all aspects of the program, across several years in the three districts and four charter networks, was $575 million.

That amounts to between 1.5 and 6.5 percent of district or network budgets, or a few hundred dollars per student per year. About half of that money came from the Gates Foundation.

The study also quantifies the strain of the new evaluations on school leaders’ and teachers’ time as costing upwards of $200 per student, nearly doubling the price tag in some districts.

Teachers tended to get high marks on the evaluation system.

Before the new evaluation systems were put in place, the vast majority of teachers got high ratings. That hasn’t changed much, according to this study, which is consistent with national research.

In Pittsburgh, in the initial two years, when evaluations had low stakes, a substantial number of teachers got low marks. That drew objections from the union.

“According to central-office staff, the district adjusted the proposed performance ranges (i.e., lowered the ranges so fewer teachers would be at risk of receiving a low rating) at least once during the negotiations to accommodate union concerns,” the report says.

Morgaen Donaldson, a professor at the University of Connecticut, said the initial buy-in followed by pushback isn’t surprising, pointing to her own research in New Haven.

To some, aspects of the initiative “might be worth endorsing at an abstract level,” she said. “But then when the rubber hit the road … people started to resist.”

More effective teachers weren’t more likely to stay teaching, but less effective teachers were more likely to leave.

The basic theory of action behind evaluation changes is to get more effective teachers into the classroom and keep them there, while moving less effective ones out or helping them improve.

The Gates research found that the new initiatives didn’t get top teachers to stick around any longer. But there was some evidence that the changes made lower-rated teachers more likely to leave. Less than 1 percent of teachers were formally dismissed in the places where data were available.

After the grants ran out, districts scrapped some of the changes but kept a few others.

One key test of success for any foundation initiative is whether it is politically and financially sustainable after the external funds run out. Here, the results are mixed.

Both Pittsburgh and Hillsborough have ended high-profile aspects of their programs: the merit-pay system and the peer evaluators, respectively.

But other aspects of the initiative have been maintained, according to the study, including the use of classroom observation rubrics, evaluations that use multiple metrics, and certain career-ladder opportunities.

Donaldson said she was surprised that the peer evaluators didn’t go over well in Hillsborough. Teachers unions have long promoted peer-based evaluation, but district officials said that a few evaluators who were rude or hostile soured many teachers on the concept.

“It just underscores that any reform relies on people — no matter how well it’s structured, no matter how well it’s designed,” she said.

evaluating evaluation

Teaching more black or Hispanic students can hurt observation scores, study finds

Thomas Barwick | Getty Images

A teacher is observed in her first-period class and gets a low rating; in her second-period class she gets higher marks. She’s teaching the same material in the same way — why are the results different?

A new study points to an answer: the types of students teachers instruct may influence how administrators evaluate their performance. Teaching more low-achieving, black, Hispanic, or male students leads to lower scores. And that phenomenon hurts some teachers more than others: Black teachers are more likely to teach low-performing students and students of color.

Separately, the study finds that male teachers tend to get lower ratings, though it’s not clear if that’s due to differences in actual performance or bias.

The results suggest that evaluations are one reason teachers may be deterred from working in classrooms where students lag farthest behind.

The study, conducted by Shanyce Campbell at the University of California, Irvine, analyzed teacher ratings compiled by the Measures of Effective Teaching Project, an effort funded by the Bill and Melinda Gates Foundation. (Gates is also a supporter of Chalkbeat.)

The paper finds that for every 25 percent increase in black or Hispanic students taught, there was a dip in a teacher’s rating similar to the difference in performance between a first-year and a second-year teacher. (Having more low-performing or male students had a slightly smaller effect.)

That’s troubling, Campbell said, because it means that teachers of color — who most often work with students of color — may not be getting a fair shot.

“If evaluations are inequitable, then this further pushes them out,” Campbell said.

The findings are consistent with previous research that shows how classroom evaluations can be biased by the students teachers serve.

Cory Cain, an assistant principal and teacher at the Urban Prep charter network in Chicago, said he and his school often grapple with questions of bias when trying to evaluate teachers fairly. His school serves only boys, and its students are predominantly black.

“We’re very clear that everyone is susceptible to bias. It doesn’t matter what’s your race or ethnicity,” he said.

Cain is black, but that doesn’t make him immune to how black boys are portrayed in the media, he said. He also knows that teachers are often nervous they will do poorly on their evaluations if students are misbehaving or struggling with the content on a given day, since it can be difficult for observers to fully assess their teaching in short sessions.

The study can’t show why evaluation scores are skewed, but one potential explanation is that classrooms appear higher-functioning when students are higher-achieving, even if that’s not because of the teacher. In that sense, the results might not be due to bias itself, but to conflating student success with teacher performance.

Campbell said she hopes her findings will add nuance to the debate over the best ways to judge teachers.

One idea that the study floats to address the issue is an adjustment of evaluation scores based on the composition of the classroom, similar to what is done for value-added scores, though the idea has received some pushback, Campbell said.
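To make the proposed adjustment concrete, here is an illustrative sketch (not the study’s actual method, and using made-up data): regress raw observation scores on classroom composition and keep the part of each score that composition can’t explain, re-centered on the original average. Value-added models apply a similar logic to test-score growth.

```python
# Illustrative sketch of a composition adjustment for observation scores.
# All data and variable choices below are hypothetical, not from the study.
import numpy as np

def composition_adjusted_scores(raw_scores, composition):
    """Remove the linear effect of classroom composition from observation
    scores, then re-center on the original mean.

    raw_scores:  (n,) raw observation ratings, one per teacher
    composition: (n, k) classroom traits, e.g. share of low-achieving students
    """
    raw = np.asarray(raw_scores, dtype=float)
    X = np.column_stack([np.ones(len(raw)), np.asarray(composition, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, raw, rcond=None)  # fit score ~ composition
    residuals = raw - X @ beta                      # what composition can't explain
    return residuals + raw.mean()                   # keep the original scale

# Hypothetical example: the second and fourth teachers have classes with a
# much higher share of low-achieving students, and lower raw scores.
scores = [3.5, 2.9, 3.4, 3.0]
shares = [[0.1], [0.8], [0.2], [0.7]]
adjusted = composition_adjusted_scores(scores, shares)
```

After adjustment, the gap between teachers with very different classrooms shrinks while the overall average stays the same — the spirit of the fix the study floats, though any real version would need to guard against over-correcting away true performance differences.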

“I’m not saying we throw them both out,” Campbell said of classroom observations and value-added scores. “I’m saying we need to be mindful.”