While state leaders grappled to understand how a new evaluation could have rated just 219 ineffective educators out of nearly 50,000, Marion County teachers say they know some of the reasons why.
At a meeting of 17 local teachers hosted Monday night by TeachPlus, they said scores are driven by the realities that the evaluators, mostly principals, face: inconsistent direction, lack of training and the need to protect their own and their schools’ interests.
“The default position for principals is to rate teachers effective or highly effective,” said Jacob Pactor, an English teacher at Speedway High School. “If they say otherwise, it requires them to do more work. And they’re rating their own hires. It’s difficult for principals, or anyone, to say, ‘I made a mistake.’”
TeachPlus is the local chapter of a national non-profit group that aims to involve teachers in policymaking. Locally, the group’s executive director is Caitlin Hannon, an Indianapolis Public School Board member and former IPS teacher.
Some teachers think the evaluation process creates the wrong incentives for the principals who conduct evaluations.
Joe Gramelspacher, a math teacher at Crispus Attucks Medical Magnet High School in IPS, said principals in IPS have practical problems, too. They want to keep teachers with potential and worry about running them off if they don’t rate them effective.
“In IPS,” Gramelspacher said, “the perspectives of a lot of administrators is ‘my teachers are working hard but might not be doing everything they need to do in class. But if I’m telling them everything they need to improve on they might get discouraged and might quit.’”
IPS schools, he said, are set up to identify most teachers as effective rather than to provide a useful path to improved teaching for those on the low end of the scale.
“A huge problem in IPS is there is not a lot of thinking about how we’re going to make our teachers better in a strategic way,” Gramelspacher said.
Teachers’ experience with evaluation varies greatly from school to school.
Christina Lear, a 10th grade English teacher at Herron High School who previously worked at IPS, said her experience has been that there simply isn’t enough observation, interaction and feedback from principals. She has rarely seen or heard from an administrator on issues of her teaching performance.
By comparison, her classroom was regularly visited by observers from Teach for America when she was part of the corps that places top graduates as teachers in low-income schools. She got a similar level of support from her instructors at Marian University while pursuing a master’s degree.
“I got a lot of feedback from them,” she said. “They were able to help me more.”
Rachel Quinn, who teaches at Harshman Middle School in IPS, said she has had a great experience with evaluation, thanks to an excellent assistant principal at the school.
“She was well versed in RISE (the state model evaluation system) and our staff was very well trained,” Quinn said. “I’ve had a really positive experience with my RISE evaluator. She comes to my room and she types everything she hears.”
The only problem? The evaluator is leaving the district for another job. Quinn worries about what will happen next.
Others said their schools were doing a good job, too.
Gramelspacher said Crispus Attucks was one of the few IPS schools to label a teacher ineffective. In setting expectations, he said, administrators aimed for teachers to challenge themselves but not set the bar too high. He’s not sure every IPS school was as thoughtful in its approach.
“Looking at our (ratings) distribution I think it was more honest than most of IPS,” he said. “It speaks to how much discretion each principal has.”
By contrast, Kelly Hannon, who teaches at George Washington High School in IPS, said the school’s leadership and teachers have repeatedly turned over, disrupting the evaluation process.
“Having new people come in and try to establish a new norm, this is not prioritized,” she said. “It’s really hard when you have new teachers and administrators coming in each year. (Evaluation) was something we just have to do because we were told we have to do it.”
Some teachers, however, reported their schools were very serious about evaluation. For those that did it right, quality training made a big difference.
Joel Thomas and Megan Kinsey both were part of a team that evaluated other teachers at Indianapolis Lighthouse charter school. They praised the process.
Thomas said he spent a full year training to evaluate other teachers, watching loads of videotaped examples to hone his skill at identifying good teaching practice.
When he saw so many teachers rated effective statewide, he wondered if other evaluators were trained as well as he was.
“We want them to know what it looks like to be effective and we want to know what it looks like to be ineffective,” Thomas said. “My first question is how intensively and how long were they trained, and how accountable are principals for that training?”
Lighthouse evaluators also had smaller groups of teachers to track and more time to work with them.
“I only evaluate seven people,” Kinsey said. “I am in their room at least once a week and we have a discussion every single week. I know where they’re at.”
Kinsey said she rated one teacher ineffective and three others as needing improvement.
Elsewhere, teachers felt new laws weren’t helping as much as policymakers hoped.
Teachers are frustrated by legislative meddling, Pactor said. Educators may have had a hard time embracing a new process because they don’t believe the lawmakers have made a thoughtful effort to help teachers improve, he said.
“No one takes this seriously,” he said. “Not legislators, not principals. The only ones who take it seriously are teachers.”
When Ashley Hebda, a former IPS teacher also now at Indianapolis Lighthouse, hears legislators talking about requiring more testing to be factored into teacher ratings, she wonders if they realize how few teachers can be judged by ISTEP.
In many cases, teachers who don’t teach in tested subjects or grades, or specialists like art and music teachers, simply write their own tests that their schools then use to meet the state’s requirement for judging student growth.
Sometimes, teachers keep reinventing the tests they are judged on until they get the desired rating outcome, she said.
“You have a chance to resubmit it until you got the right set of data that you were pushing for,” she said. “In a lot of teachers’ cases, that worked out in their favor.”
With all the focus on ISTEP and observation, Gramelspacher thinks Indiana is missing an obvious measure of teacher performance: Why not ask the students?
There’s so much concern that Indiana’s system might be measuring the wrong things, he said, but strong results on student surveys have been shown to be a predictor of quality teaching.
Judging teachers in part based on what their students think of their teaching should be on the table in Indiana, Gramelspacher said.
“It’s actually harder to game,” he said. “A lot of research shows it’s a very reliable method.”