Much ado is made every year about how students do on state tests. But are individual students’ test scores useful for them and their teachers?

Ruben, a Bronx teacher who blogs at Is our Children Learning?, says they might not be, because the scores come months after the tests are given and don’t give specific information about students’ skills. In a post about finding out his students’ reading scores yesterday (some improved; others showed a decline), he writes:

What’s most frustrating is how little the numbers tell me. We’re talking about a test that was taken in January. So the data doesn’t really even speak to the students I’m currently teaching. The data doesn’t really speak to anything at all, because it isn’t dissected in any way to show strengths and needs in specific areas such as vocabulary, drawing conclusions, or writing. All I have are numbers, numbers that in many ways contradict what I know to be true about the reading and writing abilities of my students.

Of course all this is inconsequential, because even if the test was flawed, or too easy (the whole city went up 20%? Really?), those flaws apply to 4th grade students (and their teachers) universally. … Ultimately, I know my students should have done better, because pretty much everyone else did better. So, now it’s time to figure out what went wrong, so I can get it right next year.

Ruben’s objections to how the state delivers test scores represent two of the reasons the Department of Education has offered to justify the introduction of more frequent, rapidly scored interim assessments in city classrooms.

I’ve posted about Ruben’s insights before, here, here, and here.