hiring help

School districts struggle when hiring new teachers. A new study suggests L.A. has found a better way

PHOTO: Tennessee Department of Education

Every spring and summer, America’s school districts face a critical challenge: hiring a batch of new teachers.

For some districts, the first problem is finding enough educators to fill their classrooms. But for many others, the central issue is choosing among the candidates — and administrators are left to develop their own systems for using résumés and test scores to predict who will do the best job.

New research suggests that Los Angeles, at least, has found a better way.

In 2014, the Los Angeles Unified School District redesigned its hiring process to carefully screen teaching applicants. Each prospective teacher receives scores on several measures, including college GPA, a sample teaching lesson, an interview, and professional references. Candidates who earn at least 80 of the 100 possible points are passed along to school principals for consideration. (Principals can still request that an applicant who scored below that benchmark be added to the hiring pool.)
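The study describes the screen only at that level of detail; the point values attached to each measure aren’t spelled out. As a minimal sketch of how a points-based screen with an 80-point benchmark could work, here is a hypothetical version in which the measure names and maximum point values are illustrative assumptions, not the district’s actual rubric.

```python
# Hypothetical sketch of a points-based applicant screen, loosely modeled on
# the process described above. The measures and maximum point values below
# are illustrative assumptions, not the district's actual rubric.
PASS_THRESHOLD = 80  # applicants at or above this total go to principals

# Assumed maximum points per measure (sums to 100).
MAX_POINTS = {
    "college_gpa": 20,
    "sample_lesson": 35,
    "interview": 30,
    "references": 15,
}

def total_score(scores: dict) -> int:
    """Sum an applicant's points, capping each measure at its maximum."""
    return sum(min(scores.get(measure, 0), cap) for measure, cap in MAX_POINTS.items())

def passes_screen(scores: dict) -> bool:
    """Return True if the applicant clears the pass benchmark."""
    return total_score(scores) >= PASS_THRESHOLD

# Example applicant: 16 + 30 + 25 + 12 = 83 points, so they pass the screen.
applicant = {"college_gpa": 16, "sample_lesson": 30, "interview": 25, "references": 12}
print(total_score(applicant), passes_screen(applicant))
```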

The resulting paper, published through the research group CALDER at the American Institutes for Research, found that teachers with higher screening scores had a bigger impact on student achievement, earned higher marks on the district’s evaluation system, and were absent for fewer days.

Los Angeles’ screening tests “appear to accurately discern several aspects of teacher quality,” write the researchers, Paul Bruno of the University of Southern California and Katharine Strunk of Michigan State University. “The district may therefore benefit from its policy of excluding most low-performing applicants from employment eligibility.”  

The study is limited to teachers who were actually hired by the district, so it’s impossible to know how teachers screened out by the system — likely the lowest scorers — would have done in the classroom. Instead, the researchers compared the performance of hired teachers who had earned relatively high screening scores with that of hired teachers who had earned relatively low ones.

The differences were statistically significant but usually small. For instance, a teacher who scored substantially above average was about half as likely to receive a low evaluation rating (though only about 4 percent of all teachers fell into that category).

The researchers also examined whether schools benefited from the new hiring system. Indeed, it seemed to lead to small test score bumps in schools with higher shares of newly hired teachers, relative to what would be expected under the old system.

One consideration the study didn’t address was the impact on teacher diversity. Other screening systems — like teacher certification rules — tend to disproportionately exclude candidates of color.

The research is the latest in a string of recent studies showing that the way schools make hiring decisions can make a small but meaningful impact on students — and that many districts could do a better job at it.

Students have been shown to do worse on end-of-year tests when their teachers are hired after the first day of school. Still, some large districts had hundreds of vacant teaching positions at the start of this academic year. (Los Angeles, notably, had very few.)

Other districts, like Washington, D.C., and Spokane, Washington, have also created screening processes that predict teacher effectiveness.

Yet recent research suggests that more districts are actually decentralizing hiring decisions so that principals have more control over which teachers they take on. This may help ensure a good fit between a teacher and a school, something research shows is important.

At the same time, the Los Angeles study highlights the potential benefits of a more standardized approach. Principals still make the ultimate hire, but have to sort through fewer applicants to get there.

Bruno said finding the right balance between autonomy and centralization is a key open question. “That’s something we don’t know a whole lot about: how best to make that tradeoff,” he said.

a high-stakes evaluation

The Gates Foundation bet big on teacher evaluation. The report it commissioned explains how those efforts fell short.

PHOTO: Brandon Dill/The Commercial Appeal
Sixth-grade teacher James Johnson leads his students in a gameshow-style lesson on energy at Chickasaw Middle School in 2014 in Shelby County. The district was one of three that received a grant from the Gates Foundation to overhaul teacher evaluation.

Barack Obama’s 2012 State of the Union address reflected the heady moment in education. “We know a good teacher can increase the lifetime income of a classroom by over $250,000,” he said. “A great teacher can offer an escape from poverty to the child who dreams beyond his circumstance.”

Bad teachers were the problem; good teachers were the solution. It was a simplified binary, but the idea and the research it drew on had spurred policy changes across the country, including a spate of laws establishing new evaluation systems designed to reward top teachers and help weed out low performers.

Behind that effort was the Bill and Melinda Gates Foundation, which backed research and advocacy that ultimately shaped these changes.

It also funded the efforts themselves, specifically in several large school districts and charter networks open to changing how teachers were hired, trained, evaluated, and paid. Now, new research commissioned by the Gates Foundation finds scant evidence that those changes accomplished what they were meant to: improve teacher quality or boost student learning.  

The 500-plus-page report by the RAND Corporation, released Thursday, details the political and technical challenges of putting complex new systems in place and the steep cost — $575 million — of doing so.

The post-mortem will likely serve as validation for the foundation’s critics, who have long complained about Gates’ heavy influence on education policy and what they call its top-down approach.

The report also comes as the foundation has shifted its priorities away from teacher evaluation and toward other issues, including improving curriculum.

“We have taken these lessons to heart, and they are reflected in the work that we’re doing moving forward,” the Gates Foundation’s Allan Golston said in a statement.

The initiative did not lead to clear gains in student learning.

At the three districts and four California-based charter school networks that took part in the Gates initiative — Pittsburgh; Shelby County (Memphis), Tennessee; Hillsborough County, Florida; and the Alliance College-Ready, Aspire, Green Dot, and Partnerships to Uplift Communities networks — results were spotty. Trends over time didn’t look much better than those at similar schools in the same states.

Several years into the initiative, there was evidence that it was helping high school reading in Pittsburgh and at the charter networks, but hurting elementary and middle school math in Memphis and among the charters. In most cases there were no clear effects, good or bad. There was also no consistent pattern of results over time.

A complicating factor here is that the comparison schools may also have been changing their teacher evaluations: the study spanned 2010 to 2015, a period when many states passed laws putting tougher evaluations in place and weakening tenure.

There were also lots of other changes going on in the districts and states — like the adoption of Common Core standards, changes in state tests, and the expansion of school choice — making it hard to isolate cause and effect. Studies in Chicago, Cincinnati, and Washington, D.C., have found that evaluation changes had more positive effects.

Matt Kraft, a professor at Brown who has extensively studied teacher evaluation efforts, said the disappointing results in the latest research couldn’t simply be chalked up to a messy rollout.

These “districts were very well poised to have high-quality implementation,” he said. “That speaks to the actual package of reforms being limited in its potential.”

Principals were generally positive about the changes, but teachers had more complicated views.

From Pittsburgh to Tampa, Florida, the vast majority of principals agreed at least somewhat that “in the long run, students will benefit from the teacher-evaluation system.”

Source: RAND Corporation

Teachers in district schools were far less confident.

When the initiative started, a majority of teachers in all three districts tended to agree with the sentiment. But several years later, support had dipped substantially. The initial support may have reflected dissatisfaction with the previous system (the researchers note that “many veteran [Pittsburgh] teachers we interviewed reported that their principals had never observed them”), while the later decline suggests growing disillusionment with the new one.

Majorities of teachers in all locations reported that they had received useful feedback from their classroom observations and changed their habits as a result.

At the same time, teachers in the three districts were highly skeptical that the evaluation system was fair — or that it made sense to attach high-stakes consequences to the results.

The initiative didn’t help ensure that poor students of color had more access to effective teachers.

Part of the impetus for evaluation reform was the idea, backed by some research, that black and Hispanic students from low-income families were more likely to have lower-quality teachers.  

But the initiative didn’t seem to make a difference. In Hillsborough County, inequity expanded. (Surprisingly, before the changes began, the study found that low-income kids of color actually had similar or slightly more effective teachers than other students in Pittsburgh, Hillsborough County, and Shelby County.)

Districts put in place modest bonuses to get top teachers to switch schools, but the evaluation system itself may have been a deterrent.

“Central-office staff in [Hillsborough County] reported that teachers were reluctant to transfer to high-need schools despite the cash incentive and extra support because they believed that obtaining a good VAM [value-added measure] score would be difficult at a high-need school,” the report says.

Evaluation was costly — both in terms of time and money.

The total direct cost of all aspects of the program, across several years in the three districts and four charter networks, was $575 million.

That amounts to between 1.5 and 6.5 percent of district or network budgets, or a few hundred dollars per student per year. About half of that money came from the Gates Foundation.

The study also quantifies the strain of the new evaluations on school leaders’ and teachers’ time as costing upwards of $200 per student, nearly doubling the price tag in some districts.

Teachers tended to get high marks on the evaluation system.

Before the new evaluation systems were put in place, the vast majority of teachers got high ratings. That hasn’t changed much, according to this study, which is consistent with national research.

In Pittsburgh, in the initial two years, when evaluations had low stakes, a substantial number of teachers got low marks. That drew objections from the union.

“According to central-office staff, the district adjusted the proposed performance ranges (i.e., lowered the ranges so fewer teachers would be at risk of receiving a low rating) at least once during the negotiations to accommodate union concerns,” the report says.

Morgaen Donaldson, a professor at the University of Connecticut, said the initial buy-in followed by pushback isn’t surprising, pointing to her own research in New Haven.

To some, aspects of the initiative “might be worth endorsing at an abstract level,” she said. “But then when the rubber hit the road … people started to resist.”

More effective teachers weren’t more likely to stay teaching, but less effective teachers were more likely to leave.

The basic theory of action behind evaluation changes is to get more effective teachers into the classroom and keep them there, while getting less effective ones out or helping them improve.

The Gates research found that the new initiatives didn’t get top teachers to stick around any longer. But there was some evidence that the changes made lower-rated teachers more likely to leave. Fewer than 1 percent of teachers were formally dismissed in the places where data was available.

After the grants ran out, districts scrapped some of the changes but kept a few others.

One key test of success for any foundation initiative is whether it is politically and financially sustainable after the external funds run out. Here, the results are mixed.

Both Pittsburgh and Hillsborough have ended high-profile aspects of their programs: the merit pay system and the use of peer evaluators, respectively.

But other aspects of the initiative have been maintained, according to the study, including the use of classroom observation rubrics, evaluations that use multiple metrics, and certain career-ladder opportunities.

Donaldson said she was surprised that the peer evaluators didn’t go over well in Hillsborough. Teachers unions have long promoted peer-based evaluation, but district officials said that a few evaluators who were rude or hostile soured many teachers on the concept.

“It just underscores that any reform relies on people — no matter how well it’s structured, no matter how well it’s designed,” she said.

evaluating evaluation

Teaching more black or Hispanic students can hurt observation scores, study finds

PHOTO: Thomas Barwick/Getty Images

A teacher is observed in her first period class and gets a low rating; in her second period class she gets higher marks. She’s teaching the same material in the same way — why are the results different?

A new study points to an answer: the types of students teachers instruct may influence how administrators evaluate their performance. Teaching more low-achieving, black, Hispanic, or male students is associated with lower scores. And that phenomenon hurts some teachers more than others: black teachers are more likely to teach low-performing students and students of color.

Separately, the study finds that male teachers tend to get lower ratings, though it’s not clear if that’s due to differences in actual performance or bias.

The results suggest that evaluations are one reason teachers may be deterred from working in classrooms where students lag farthest behind.

The study, conducted by Shanyce Campbell at the University of California, Irvine, analyzed teacher ratings compiled by the Measures of Effective Teaching Project, an effort funded by the Bill and Melinda Gates Foundation. (Gates is also a supporter of Chalkbeat.)

The paper finds that for every 25 percent increase in black or Hispanic students taught, there was a dip in a teacher’s rating similar to the difference in performance between a first-year and a second-year teacher. (Having more low-performing or male students had a slightly smaller effect.)

That’s troubling, Campbell said, because it means that teachers of color — who most often work with students of color — may not be getting a fair shot.

“If evaluations are inequitable, then this further pushes them out,” Campbell said.

The findings are consistent with previous research that shows how classroom evaluations can be biased by the students teachers serve.

Cory Cain, an assistant principal and teacher at the Urban Prep charter network in Chicago, said he and his school often grapple with questions of bias when trying to evaluate teachers fairly. His school serves only boys, and its students are predominantly black.

“We’re very clear that everyone is susceptible to bias. It doesn’t matter what’s your race or ethnicity,” he said.

Cain is black, but he said that doesn’t mean he doesn’t see how black boys are portrayed in the media. He also knows that teachers are often nervous they will do poorly on their evaluations if students are misbehaving or struggling with the content on a given day, since it can be difficult for observers to fully assess their teaching in short sessions.

The study can’t show why evaluation scores are skewed, but one potential explanation is that classrooms appear higher-functioning when students are higher-achieving, even if that’s not because of the teacher. In that sense, the results might not be due to bias itself, but to conflating student success with teacher performance.

Campbell said she hopes her findings will add nuance to the debate over the best ways to judge teachers.

One idea the study floats to address the issue is adjusting evaluation scores based on the composition of the classroom, similar to what is done for value-added scores, though the idea has received some pushback, Campbell said.

“I’m not saying we throw them both out,” Campbell said of classroom observations and value-added scores. “I’m saying we need to be mindful.”
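The study doesn’t prescribe a formula for that kind of adjustment. As a purely illustrative sketch of what it could look like, one standard approach is regression adjustment: fit observation scores on classroom characteristics and compare teachers on the residual, the part of the score those characteristics don’t explain. Everything in the example below (the data, variable names, and coefficients) is made up.

```python
# Purely illustrative sketch (not from the study): regression-adjusting
# classroom observation scores for classroom composition, loosely analogous
# to the adjustments used in value-added models. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 200

# Hypothetical classroom characteristics (shares of each class).
share_students_of_color = rng.uniform(0, 1, n_teachers)
share_low_achieving = rng.uniform(0, 1, n_teachers)

# Simulated "true" teaching skill and raw observation scores that are
# partly dragged down by classroom composition, as the study describes.
true_skill = rng.normal(0, 1, n_teachers)
raw_score = (3.0 + 0.5 * true_skill
             - 0.4 * share_students_of_color
             - 0.3 * share_low_achieving
             + rng.normal(0, 0.2, n_teachers))

# Fit raw scores on composition with ordinary least squares.
X = np.column_stack([np.ones(n_teachers), share_students_of_color, share_low_achieving])
coefs, *_ = np.linalg.lstsq(X, raw_score, rcond=None)

# The adjusted score is the residual: the part of the rating that classroom
# composition does not explain.
adjusted_score = raw_score - X @ coefs

# In this simulation, adjusted scores track the simulated skill more closely
# than raw scores do.
print("raw corr:", round(np.corrcoef(raw_score, true_skill)[0, 1], 3))
print("adjusted corr:", round(np.corrcoef(adjusted_score, true_skill)[0, 1], 3))
```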