TNReady Testimony

As lawmakers grill McQueen about Tennessee’s testing problems, here are five things we learned

PHOTO: Marta W. Aldrich
Education Commissioner Candice McQueen (center) testifies before Tennessee lawmakers along with Questar CEO Stephen Lazer and Assistant Education Commissioner Nakia Towns.

Education Commissioner Candice McQueen has pledged to ensure the accuracy of Tennessee’s new standardized test as frustrated lawmakers are seeking explanations for a second straight year of testing problems.

McQueen and her staff offered new details about the latest breakdown on Tuesday in their first appearance before legislators since reporting that the state’s testing company incorrectly scored paper tests for some high school students this year. She called scanning mistakes the culprit and said the state is working closely with Questar to prevent such problems in the future.

A year earlier, the botched rollout of online testing led to the test’s cancellation for grade-schoolers, the firing of Tennessee’s previous test maker, and the decision to phase in online testing over three years.

PHOTO: Marta W. Aldrich
McQueen (far left) pauses with her team, including Questar CEO Stephen Lazer (far right), to hear a few final comments from lawmakers.

The state is ultimately responsible for this year’s “failure,” McQueen said, but she let Questar CEO Stephen Lazer take some heat too.

“We at Questar own that it happened,” said Lazer, who sat beside McQueen during the hearing. “It should have been caught (earlier), and it won’t happen again.”

Earlier in the day, Gov. Bill Haslam called the controversy overblown because this year’s errors were discovered as part of the state’s process for vetting scores.

“I think the one thing that’s gotten lost in all this discussion is the process worked,” Haslam told reporters. “It was during the embargo period before any of the results were sent out to students and their families that this was caught.”

The three-hour hearing at the State Capitol was dotted with occasional testy exchanges as lawmakers bemoaned the challenge of rebuilding trust in Tennessee’s problem-plagued assessment. They questioned why teachers, as part of their evaluations, appear to be the only ones being held accountable for this year’s results.

“Are we terminating this contract (with Questar)?” asked Rep. Craig Fitzhugh, a Democrat from Ripley who is running for governor. “… Have there been any modifications (to the contract) as a result of this error?”

McQueen responded that the contract hasn’t changed, but that the state’s work plan with Questar has.

“We have had intense conversations between the department and the vendor on quality improvements and expectations,” she said, “and we are moving forward with very specific deadlines.”

The hearing also featured testimony from teachers, several teachers unions, a superintendent, a school board member, and a researcher. Some called for a three-year moratorium on using TNReady scores for accountability purposes; others urged the state to “stay the course.”

Here are five things we learned:

1. The scoring problem came to light because of discrepancies flagged at one school.

As they looked at the data, educators at Blackman High School in Rutherford County noticed that some of their highest-performing students scored low on one standard in English language arts. That raised a red flag, since those same students had demonstrated proficiency on that standard in other assessments. The district contacted the state, which asked Questar to investigate; the company traced the discrepancies to an error in scanning paper tests. “The scanning program was incorrect,” Lazer said. “The scanners read the documents right, but the data was in the wrong columns.”

2. Tennessee plans to release scores next year before the new school year begins.

The state has gotten pushback for this year’s protracted scoring schedule that ended this month, more than two months after the school year began. While the scoring process takes longer with a new test, McQueen said the state is committed to getting all scores out by mid-August next year. She said districts will receive their preliminary high school scores by the end of May for inclusion in students’ final grades. Final high school scores will go out in July. For grades 3-8, scores should be delivered by mid-August at the latest, she said.

3. The state is banking on its transition to online testing to expedite high school results.

After the online fiasco that soured TNReady’s first year, McQueen’s decision to slow-walk the state back into online testing also slowed the subsequent scoring and delivery process. But 2018 marks the first school year that all high schoolers will take the test online again — a change that state officials feel confident about after 25 districts successfully made the leap this year. (Middle and elementary schools will make the switch in 2019, though districts will have the option of administering the test on paper to their youngest students in grades 3-4.)

4. There is talk of an outside investigation into Tennessee’s testing failures.

Rep. Mike Stewart of Nashville asked McQueen if she would object to a top-to-bottom review of Tennessee’s testing challenges from an independent third party such as the state comptroller’s office. “Not at all,” McQueen responded, adding that her department has sought proactively to improve the process.

5. McQueen plans to reconvene her testing task force — again.

One of her first acts as commissioner in 2015 was to form a task force to study concerns about over-testing and recommend improvements. Testing-related issues were grave enough that McQueen followed up with a second study panel in 2016, even as the state remained committed to TNReady as the linchpin of its accountability system. Now the commissioner wants to reconvene that task force this year to begin looking specifically at 11th-grade testing and diagnostic assessments used by districts, among other things. McQueen told lawmakers that she hopes to have the first meeting by December.

Editor’s note: This story has been updated to identify the Rutherford County school where scoring concerns were flagged.

First Person

Let’s be careful with using ‘grading floors.’ They may lead to lifelong ceilings for our students

PHOTO: Helen H. Richardson, The Denver Post

I am not a teacher. I am not a principal. I am not a school board member. I am not a district administrator (anymore).

What I am is a mother of two, a high-schooler and middle-schooler. I expect them both to do their “personal best” across the board: chores, projects, personal relationships, and yes, school.

That does not mean all As or Bs. We recognize the sometimes arbitrary nature of grades. (For example, what is “class participation” — is it how much you talk, even when your comments are off topic?) We have made it very clear that as long as they do their “personal best,” we are proud.

That doesn’t mean, though, that when someone’s personal best results in a poor grade, we should look away. We have to ask what that grade tells us. Often, it’s something important.

I believe grading floors — the practice (for now, banned in Memphis) of deciding the lowest possible grade to give a student — are a short-sighted solution to a larger issue. If we use grade floors without acknowledging why we feel compelled to do so, we perpetuate the very problem we seek to address.

“If we use grade floors without acknowledging why we feel compelled to do so, we perpetuate the very problem we seek to address.” — Natalie McKinney
In a recent piece, Marlena Little, an obviously dedicated teacher, cites Superintendent Hopson’s primary drive for grade floors as a desire to avoid “creat[ing] kids who don’t have hope.” I am not without empathy for the toll failing a course may take on a student. But this sentiment focuses on the social-emotional learning aspect of our students’ education only.

Learning a subject builds knowledge. Obtaining an unearned grade only provides a misleading indication of a child’s growth.

This matters because our students depend on us to ensure they will be prepared for opportunities after high school. To do this, our students must possess, at the very least, a foundation in reading, writing and arithmetic. If we mask real academic issues with grade floors year after year, we risk missing a chance to hold everyone — community, parents, the school board, district administration, school leaders, teachers, and students — accountable for rectifying the issue. It also may mean our students will be unable to find employment providing living wages, resulting in the perpetuation of generational poverty.

An accurate grade helps the teacher, parents, and district appropriately respond to the needs of the student. And true compassion lies in how we respond to a student’s F. It should act as an alarm, triggering access to additional work, other intervention from the teacher or school, or the use of a grade recovery program.

Ms. Little also illustrates how important it is to have a shared understanding about what grades should mean. If the fifth-grade boy she refers to demonstrates mastery of a subject orally but has a problem demonstrating it in a written format, why should he earn a zero (or near-zero) in the class? If we agree that grades should provide an indicator of how well a student knows the subject at hand, I would argue that that fifth-grade boy should earn a passing grade. He knows the work! We don’t need grade floors in that case — we need different ideas about grades themselves.

We should also reconsider the idea that an F is an F. It is not. A zero indicates that the student did not understand any of the work or the student did not do any of the work. A 50 percent could indicate that the student understood the information half the time. That is a distinction with a difference.

Where should we go from here? I have a few ideas, and welcome more:

  1. In the short term, utilize the grade recovery rules that allow a student to use the nine weeks after receiving a failing grade to demonstrate their mastery of a subject — or “personal best” — through monitored and documented additional work.
  2. In the intermediate term, create or allow teachers to create alternative assessments like those used with students with disabilities to accommodate different ways of demonstrating mastery of a subject.
  3. In the long term, in the absence of additional money for the district, redeploy resources in a coordinated and strategic way to help families and teachers support student learning. Invest in the development of a rich, substantive core curriculum and give teachers the training and collaboration time they need.

I, like Ms. Little, do not have all the answers. This is work that requires our collective brilliance and commitment for the sake of our children.

Natalie McKinney is the executive director of Whole Child Strategies, Inc., a Memphis-based nonprofit that provides funding and support for community-driven solutions for addressing attendance and discipline issues that hinder academic success. She previously served as the director of policy for both Shelby County Schools and legacy Memphis City Schools.

failing grade

Why one Harvard professor calls American schools’ focus on testing a ‘charade’

PHOTO: Alan Petersime

Harvard professor Daniel Koretz is on a mission: to convince policymakers that standardized tests have been widely misused.

In his new book, “The Testing Charade,” Koretz argues that federal education policy over the last couple of decades — starting with No Child Left Behind, and continuing with the Obama administration’s push to evaluate teachers in part by test scores — has been a barely mitigated disaster.

The focus on testing in particular has hurt schools and students, Koretz argues. Meanwhile, Koretz says the tests are of little help for accurately identifying which schools are struggling because excessive test prep inflates students’ scores.

“Neither good intentions nor the value of well-used tests justifies continuing to ignore the absurdities and failures of the current system and the real harms it is causing,” Koretz writes in the book’s first chapter.

Daniel Koretz, Harvard Graduate School of Education

His skepticism will be welcome to families of students who have opted out of state tests across the country and others who have led a testing backlash in recent years. That sentiment helped shape the new federal education law, ESSA.

Koretz has another set of allies in some conservative charter and voucher advocates, including — to an extent — Secretary of Education Betsy DeVos, who criticized No Child Left Behind in a recent speech. “As states and districts scrambled to avoid the law’s sanctions and maintain their federal funding, some resorted to focusing specifically on math and reading at the expense of other subjects,” she said. “Others simply inflated scores or lowered standards.”

But national civil rights groups and some Democratic politicians have made a different case: That it’s the government’s responsibility to continue to use test scores to hold schools accountable for serving their students, especially students of color, poor students, and students with disabilities. (ESSA continues to require testing in grades three through eight and for states to identify their lowest performing schools, largely by using test scores.)

We talked to Koretz about his book and asked him to explain how he reached his conclusions and what to make of research that paints a more positive picture of tests and No Child Left Behind.

The interview has been edited for clarity and length.

Do you want to walk me through the central thesis of your book?

The reason I wrote the book is really the subtitle: we’re “pretending to make schools better.”

Most of the bad news that’s in this book is old news. We’ve been collecting evidence of various kinds about the impact of the very heavy handed, high-stakes testing that we use in this country for a long time. I lost patience with people pretending that these facts aren’t present. So I decided it would be worth writing a book that summarizes the evidence both good and bad about the effects of test-based accountability. When you do that, you end up with an awful lot on the bad side and not very much on the good side.

Can you talk about some of the bad effects?

There are a few that are particularly important. One is absolutely rampant bad test prep. It’s just everywhere. One of the consequences of that is that test scores are often very badly inflated.

There aren’t all that many studies of this because it’s not really a welcome suggestion. When you go to the superintendent and say, “Gee, I’d like to see whether your scores are inflated,” they rarely say, “Boy, we’ve been waiting for you to show up.” There aren’t that many studies, but they’re very consistent. The inflation that does show up is sometimes absolutely massive. Worse, there is growing evidence that that problem is more severe for disadvantaged kids, creating the illusion of improved equity.

Another is increasingly widespread cheating. We, of course, will never know just how widespread because there aren’t resources to examine the data from 13,000 school districts. Everyone knows about Atlanta, a few people know about El Paso, but that’s just the tip of the iceberg.

There’s obviously also — and perhaps this should be on the same par — enormous amounts of stress for teachers, for kids, and for parents. That’s the bad side.

I want to ask a little more about test score inflation. What is the strongest evidence for inflation? And let me give you two pieces that to me seem like potentially countervailing evidence. One piece is when I’m looking at research on school turnaround — like the most recent School Improvement Grant program and also turnaround efforts in New York City — these schools have been under intensive pressure to raise test scores. And yet their test score gains on high-stakes tests have been pretty modest at best. The other example is the Smarter Balanced exam. The scores on the Smarter Balanced exam don’t seem to be going up. If anything, they’re going down.

The main issue is that score inflation doesn’t occur in the same amount everywhere. You’ve come up with two examples where there is apparently very little. There are other examples that are much worse than the aggregate data suggest.

In the case of Smarter Balanced, I would wait and see. Score inflation can only occur when people become sufficiently aware of predictable patterns in the test. You can’t game a test when you don’t know what irrelevant things are going to recur, and that just may take some time.

I’m wondering your take on why some of the strongest advocates for test-based accountability have been national civil rights groups.

One of the rationales for some of the most draconian test-based accountability programs we’ve had has been to improve equity. If you go back to the enactment of NCLB, you had [then-Massachusetts Sen.] Teddy Kennedy and [then-California Rep.] George Miller actively lobbying their colleagues in support of a Republican bill. George Miller summed that up in one sentence in a meeting I went to. He said, “It will shed some light in the corners.” He said that schools had been getting away with giving lousy services to disadvantaged kids by showing good performance among advantaged kids, and this would make it in theory impossible to do that.

Even going back before NCLB, I think that’s why there was so much support in the disability community for including disabled kids in test-based accountability in the 1990s — so they couldn’t be hidden away in the basement anymore. I think that’s absolutely laudable. It’s the thing I praise the most strongly about NCLB.

It just didn’t work. That’s really clear from the evidence.

I think the intention was laudable and I think the intention was why high-stakes testing has gotten so much support in the minority community, but it just has failed.

You mention in your book probably the most widely cited study on the achievement effects of No Child Left Behind, showing that there were big gains in fourth grade math and some gains in eighth grade math, but there wasn’t anything good or bad in reading.

Pretty much. There was a little bit of improvement in some years in reading but nothing to write home about.

So the math gains — and that was on the low-stakes federal NAEP test — they’re just not worth it in your view?

I think the gains are real. But there are some reasons not to be terribly excited about them. One is that they don’t persist. They decline a little bit by eighth grade, and they disappear by the time kids are out of high school. We don’t have good data about kids as they graduate from high school, but what we do have doesn’t show any improvement.

The biggest reason I’m not as excited as some people are about those gains is we’ve had evidence going back to the 1980s that one of the responses that teachers have had to test-based accountability is to take time out of untested subjects and to put it into math and reading. We don’t know how much of that gain in math is because people are teaching math better and how much is because kids aren’t learning about civics.

That’s, in my view, not enough to justify all of the stuff on the other side of the ledger.

When I’ve looked at some studies on the impact of NCLB on students’ social-emotional skills, the impact on teachers’ attitudes in the classrooms, and the impact on voluntary teacher turnover, they haven’t found any negative effects. They also haven’t found positive effects in most cases. But that would seem to at least in one sense undermine the argument that NCLB had big harmful effects on these other outcomes.

I haven’t seen those studies, but I don’t think what you describe does undermine it. What I would like to see is an analysis of long-term trends not just on teacher attrition but on teacher selection. A lot of what I have heard has really been, frankly, anecdotal. I was once a public school teacher, and teaching now is utterly unlike what it was when I taught. It seems unlikely that that had no effect on who opts into teaching and who opts out.

I don’t have evidence of this but I suspect that to some extent different types of people are selecting into teaching now than were teaching 30 years ago.

Can you talk about what you see as good versus bad test prep?

Something that Audrey Qualls at the University of Iowa said was, “A student has only mastered something if she can do it when confronted with unfamiliar particulars.”

Think about training pilots — you would never train pilots by putting them in a simulator and then always running exactly the same set of conditions because next time you were in the plane and the conditions were different you’d die. What you want to know is that the pilot has enough understanding and a good enough command of the physical motions and whatnot that he or she can respond to whatever happens to you while you’re up there. That’s not all that distant an analogy from testing.

Bad test prep is test prep that is designed to raise scores on the particular test rather than give kids the underlying knowledge and skills that the test is supposed to capture. It’s absolutely endemic. In fact, districts and states peddle this stuff themselves.

I take it it’s very hard to quantify this test prep phenomenon, though?

It is extremely hard, and there’s a big hole in the research in this area.

Let’s turn from a backward-looking to a forward-looking discussion. What is your take on ESSA? Do you think it’s a step in the right direction?

This may be a little bit simplistic, but I think of ESSA as giving states back a portion of the flexibility they had before No Child Left Behind. It doesn’t give them as much flexibility as they had in 2000.  

It has the potential to substantially reduce pressure, but it doesn’t seem to be changing the basic logic of the system, which is that the thing that will drive school improvement is pushing people to improve test scores. So I’m not optimistic.

One of the things that I argue very strongly at the end of the book is that we need to look at a far broader range of, not just outcomes, but aspects of schooling to create an accountability system that will generate more of what we want. ESSA takes one tiny step in that direction: it says you have to have one measure beyond testing and graduation rates. But if you read the statute, it almost doesn’t matter what that measure is. The one mandate is that it can’t count as much as test scores — that’s written in the statute. The notion that it means the same thing to monitor the quality of practice or to monitor attendance rates is just absurd.

As I’m sure you know, research — including from some of your colleagues at Harvard — has shown that so-called “no-excuses” charter schools in places like Boston, Chicago, and New York City, have led to substantial test score gains and in some cases improvements in four-year college enrollment. Are you skeptical that those gains are the result of genuine learning?

It depends on which test you’re talking about. Some of the no-excuses charter schools drill kids on the state test, so I don’t trust the state test scores for some of those schools. I think it’s entirely plausible that some of those schools are going to affect long-term outcomes because they’re in some cases replacing a very disorderly environment with a very orderly one. In fact, I would say too orderly by quite a margin.

But those reforms are much bigger than just test-based accountability or just the control structure we call charters. It’s a whole host of different things that are going on: different disciplinary policies, different kinds of teacher selection, different kinds of behavioral requirements, all sorts of things.

A lot of the discussion around accountability, including in your book, is about the measures we should be using to identify schools. I’m interested in your take on what happens when a school is identified by whatever system — perhaps by the holistic system you described in the book — as low performing.

The first step is to figure out why it’s bad. I would use scores as an opening to a better evaluation of schools. If scores on a good test are low, something is wrong, but we don’t know what. Before we intervene, we ought to find out what’s wrong.

This is the Dutch model: school inspections are concentrated on schools that show signs of having problems, because that’s where the payoff is. I would want to know what’s wrong, and then you can design an alternative. In some cases, it may be that the teaching staff is too weak. It may be in some cases that the teaching staff needs supports they don’t have. It may be, like in the case of Baltimore, that they need to turn the heat on. Who knows? But I don’t think we can design sensible interventions until we know what the problems are.