First Person

Why NAEP Matters

NYC Chancellor Joel Klein’s response in Wednesday’s New York Times to Diane Ravitch’s op-ed last week provides a lot to chew on.  Today, I’m focusing on his comments about the National Assessment of Educational Progress (NAEP), also known as the Nation’s Report Card.  NAEP began collecting data in 1969, and remains the only federal assessment designed to report on trends in the academic performance of U.S. children and youth.  All 50 states and the District of Columbia participate in NAEP, as do New York City and an increasing number of other urban school districts.  NAEP has an annual operating budget of more than $130 million, which represents a significant share of federal investments in education research.  Though not an expert on testing and assessment, Diane Ravitch has a long-standing interest in NAEP—she was appointed to the bipartisan National Assessment Governing Board (NAGB), which oversees NAEP, during President Bill Clinton’s second term, and remained on the board until 2004.

One of the ways that NAEP differs from many other standardized tests is that NAEP is designed to yield a much wider picture of the subject-matter domain it is intended to measure.  Many standardized tests are designed to provide an accurate picture of a particular child’s performance.  It’s efficient to do so by having all test-takers respond to the same set of test items.  If a group of fourth-graders all answer the same 45 items in a 90-minute math exam, we can learn a lot about performance on those particular items, which are chosen to represent the content domain being assessed (such as fourth-grade math).  But such a test tells us little about student performance on other items that might have a different format, or address different fourth-grade math skills.  NAEP addresses this problem by using many more test items, but no child answers all of them, because that would take hours and hours of testing time.  Instead, each child responds to a sample of the items, and performance on these items is combined across children to yield a picture of the performance of children in general.  Testing experts such as Dan Koretz at Harvard believe that assessments such as NAEP are less vulnerable to score inflation than state assessments, because it’s more challenging to engage in inappropriate test preparation when there are so many potential test items a student might respond to.  But the tradeoff is that NAEP is not designed to provide a reliable and accurate measure of performance for a particular child.
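To make that design concrete, here is a minimal simulation sketch. It is not NAEP’s actual procedure: the pool size, form length, and difficulty distribution are made-up numbers, and real NAEP scoring is far more sophisticated. The point is only the aggregation logic, in which group-level results cover a pool far larger than any one student’s test.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STUDENTS = 10_000     # hypothetical number of test-takers
POOL_SIZE = 200         # hypothetical item pool, much larger than one form
ITEMS_PER_STUDENT = 45  # each student answers only a sample of the pool

# Hypothetical "true" difficulty of each item: the probability that a
# randomly chosen student answers it correctly.
true_p = rng.uniform(0.2, 0.9, POOL_SIZE)

n_correct = np.zeros(POOL_SIZE)       # correct responses, per item
n_administered = np.zeros(POOL_SIZE)  # times each item was given

for _ in range(N_STUDENTS):
    # Each student sees a random 45-item subset of the full pool.
    form = rng.choice(POOL_SIZE, size=ITEMS_PER_STUDENT, replace=False)
    responses = rng.random(ITEMS_PER_STUDENT) < true_p[form]
    n_administered[form] += 1
    n_correct[form] += responses

# Aggregating across students recovers group-level performance on the
# whole 200-item pool, even though no student answered more than 45.
estimated_p = n_correct / n_administered
print("mean absolute error of the item estimates:",
      round(np.abs(estimated_p - true_p).mean(), 3))
```

The same arithmetic shows the tradeoff noted above: any one student’s 45 responses are far too few to place that student precisely on a 200-item domain.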

Let’s look at what the Chancellor had to say about NAEP:

“The national tests [Ravitch] cites are not the measure of federal accountability, are given only to a small sample of schools, and are not aligned with New York State standards and therefore with what we teach in our classrooms. (That said, our fourth-grade scores on those tests are strong.)”

Not the measure of federal accountability.  The No Child Left Behind Act delegated to states the responsibility of developing systems of learning standards and assessments designed to measure progress towards universal student proficiency by 2014.  It’s true that the tests used to assess the performance of the New York City schools for NCLB purposes are state assessments, not NAEP.  But it is misleading to say that NAEP is not a measure of federal accountability.  The tests administered by the 50 states vary considerably in their difficulty, with some states reporting much higher rates of student proficiency than are indicated by student performance on the NAEP assessment.  In New York City, 56% of fourth-graders in 2007 were judged proficient on the New York State English Language Arts test, whereas only 25% reached proficiency on the NAEP reading assessment.  New York City and New York State are by no means distinctive in finding much higher rates of proficiency on state tests than on NAEP—many states have even larger disparities—but the unevenness of proficiency standards across states, and the fact that state tests change frequently over time, have led Congress and the U.S. Department of Education to rely on NAEP as the primary measure of trends in the performance of American schoolchildren.  Moreover, Education Secretary Arne Duncan has recently advised state superintendents that they should report state NAEP performance in their state and district report cards documenting performance under NCLB.  In these ways, NAEP is very much a measure of federal accountability.

Given only to a small sample of schools.  For the life of me, I can’t figure out why the Chancellor thinks this is relevant.  A well-designed sample will yield estimates of student performance that are unbiased and accurate, and the New York City sample is designed by leading statisticians to be representative of the population of New York City students and large enough to detect meaningful differences between New York City and other jurisdictions, as well as meaningful differences over time.  
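For readers who want the statistical intuition behind that claim, here is a minimal sketch with invented numbers. A simple random sample stands in for NAEP’s actual, more elaborate multistage school-and-student design, and the score scale is arbitrary; the logic is what matters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: a scale score for every student in a district.
population = rng.normal(loc=235, scale=35, size=80_000)
print("true population mean:", round(population.mean(), 1))

# Draw many simple random samples of each size and estimate the mean.
for n in (500, 2_000, 8_000):
    estimates = [rng.choice(population, size=n, replace=False).mean()
                 for _ in range(1_000)]
    print(f"n={n:5d}: average estimate={np.mean(estimates):6.1f}, "
          f"spread of estimates={np.std(estimates):5.2f}")

# The estimates center on the true mean (no bias), and their spread
# shrinks roughly as 1/sqrt(n): a modest, well-designed sample is
# enough to measure a large population accurately.
```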

Not aligned with New York State standards and therefore with what we teach in our classrooms.  It would seem unfair for New York City schoolchildren to spend the year studying Shakespeare, and then be assessed on their knowledge of contemporary American fiction.  In reality, the curricular content of NAEP and the New York State assessments doesn’t diverge that much.  For example, in eighth-grade mathematics, the state specifies 104 distinct standards in the areas of problem-solving, reasoning and proof, communication, connections, representation, number sense and operations, algebra, geometry, and measurement.  (Keep in mind that these 104 standards are assessed via only 45 test items.)  The NAEP framework allocates test items to number properties and operations (20%), measurement (15%), geometry (20%), data analysis and probability (15%), and algebra (30%).  I’m not going to do a detailed comparison, but I invite readers to look at the NAEP standards and see whether they represent content that is unimportant for eighth-graders to know.

Our fourth-grade scores on those tests are strong.  Surely the Chancellor must know that when a test is administered in both the fourth and eighth grades, and he claims that the fourth-grade results are “strong” while saying nothing about the eighth grade, a reasonable person might wonder about the eighth-grade results.  In fact, there were no statistically significant gains in eighth-grade performance in New York City in either reading or math between 2003 and 2007 on the NAEP assessment, and no gains in fourth-grade reading either.  Fourth-grade scores in New York City are “strong” only in the sense that there were significant gains in fourth-grade math performance from 2003 to 2007.

A final note:  New York City has been participating voluntarily in the NAEP Trial Urban District Assessment since 2002, so presumably the Chancellor believes that there is something to be learned from the performance of New York City’s children on the NAEP assessments.  And the Department of Education’s press office has had no qualms about crowing about NAEP results when the Department believes there is good news to share.  But a Department, and a Chancellor, truly committed to transparency would be willing to acknowledge the bad with the good, and present a balanced picture of successes and failures.  Writing off NAEP as if it doesn’t matter fails to meet that standard.

First Person

I’m a principal who thinks personalized learning shouldn’t be a debate.

Lisa Epstein, principal of Richard H. Lee Elementary, supports personalized learning.

This is the first in what we hope will be a tradition of thoughtful opinion pieces—of all viewpoints—published by Chalkbeat Chicago. Have an idea? Send it to cburke@chalkbeat.org

As personalized learning takes hold throughout the city, Chicago teachers are wondering why a term so appealing has drawn so much criticism.

Until a few years ago, the school that I lead, Richard H. Lee Elementary on the Southwest Side, was on a path toward failing far too many of our students. We crafted curriculum and identified interventions to address gaps in achievement and the shifting sands of accountability. Our teachers were hardworking and committed. But our work seemed woefully disconnected from the demands we knew our students would face once they made the leap to postsecondary education.

We worried that our students were ill-equipped for today’s world of work and tomorrow’s jobs. Yet, we taught using the same model through which we’d been taught: textbook-based direct instruction.

How could we expect our learners to apply new knowledge to evolving facts, without creating opportunities for exploration? Where would they learn to chart their own paths, if we didn’t allow for agency at school? Why should our students engage with content that was disconnected from their experiences, values, and community?

We’ve read articles about a debate over personalized learning centered on Silicon Valley’s “takeover” of our schools. We hear that Trojan Horse technologies are coming for our jobs. But in our school, personalized learning has meant developing lessons informed by the cultural heritage and interests of our students. It has meant providing opportunities to pursue independent projects, and differentiating curriculum, instruction, and assessment so that our students can progress at their own pace. It has reflected a paradigm shift that is bottom-up and teacher-led.

And in a move that might have once seemed incomprehensible, it has meant getting rid of textbooks altogether. We’re not alone.

We are among hundreds of Chicago educators who would welcome critics to visit one of the 120 city schools implementing new models for learning, with and without technology. Because, as it turns out, Chicago is fast becoming a hub for personalized learning. And it is no coincidence that our academic growth rates are also among the highest in the nation.

Before personalized learning, we designed our classrooms around the educator. Decisions were made based on how educators preferred to teach, where they wanted students to sit, and what subjects they wanted to cover.

Personalized learning looks different in every classroom, but the common thread is that we now make decisions looking at the student. We ask them how they learn best and what subjects strike their passions. We use small group instruction and individual coaching sessions to provide each student with lesson plans tailored to their needs and strengths. We’re reimagining how we use physical space, and the layout of our classrooms. We worry less about students talking with their friends; instead, we ask whether collaboration and socialization will help them learn.

Our emphasis on growth shows in the way students approach each school day. I have, for example, developed a mentorship relationship with one of our middle school students who, despite being diligent and bright, always ended the year with average grades. Last year, when she entered our personalized learning program for eighth grade, I saw her outlook change. She was determined to finish the year with all As.

More than that, she was determined to show that she could master anything her teachers put in front of her. She started coming to me with graded assignments. We’d talk about where she could improve and what skills she should focus on. She was pragmatic about challenges and so proud of her successes. At the end of the year she finished with straight As—and she still wanted more. She wanted to get A-pluses next year. Her outlook had changed from one of complacence to one oriented towards growth.

Rather than undermining the potential of great teachers, personalized learning is creating opportunities for collaboration as teachers band together to leverage team-teaching and capitalize on their strengths and passions. For some classrooms, this means offering units and lessons based on the interests and backgrounds of the class. For a couple of classrooms, it meant literally knocking down walls to combine classes from multiple grade-levels into a single room that offers each student maximum choice over how they learn. For every classroom, it means allowing students to work at their own pace, because teaching to the middle will always fail to push some while leaving others behind.

For many teachers, this change sounded daunting at first. For years, I watched one of my teachers, a woman who thrives on structure and runs a tight ship, become less and less engaged in her profession. By the time we made the switch to personalized learning, I thought she might be done. We were both worried about whether she would be able to adjust to the flexibility of the new model. But she devised a way to maintain order in her classroom while still providing autonomy. She’s found that trusting students with the responsibility to be engaged and efficient is both more effective and far more rewarding than trying to force them into their roles. She now says that she would never go back to the traditional classroom structure, and has rediscovered her love for teaching. The difference is night and day.

The biggest change, though, is in the relationships between students and teachers. Gone is the traditional, authority-to-subordinate dynamic; instead, students see their teachers as mentors with whom they have a unique and individual connection, separate from the rest of the class. Students are actively involved in designing their learning plans, and are constantly challenged to articulate the skills they want to build and the steps that they must take to get there. They look up to their teachers, they respect their teachers, and, perhaps most important, they know their teachers respect them.

Along the way, we’ve found that students respond favorably when adults treat them as individuals. When teachers make every important decision for them, students see learning as a passive exercise. But when you make it clear that their needs and opinions will shape each school day, they become invested in the outcome.

As our students take ownership over their learning, they earn autonomy, which means they know their teachers trust them. They see growth as the goal, so they no longer finish assignments just to be done; they finish assignments to get better. And it shows in their attendance rates – and test scores.

Lisa Epstein is the principal of Richard H. Lee Elementary School, a public school in Chicago’s West Lawn neighborhood serving 860 students from pre-kindergarten through eighth grade.

Editor’s note: This story has been updated to reflect that Richard H. Lee Elementary School serves 860 students, not 760 students.

First Person

I’ve spent years studying the link between SHSAT scores and student success. The test doesn’t tell you as much as you might think.


Proponents of New York City’s specialized high school exam, the test the mayor wants to scrap in favor of a new admissions system, defend it as meritocratic. Opponents contend that when used without consideration of school grades or other factors, it’s an inappropriate metric.

One thing that’s been clear for decades about the exam, now used to admit students to eight top high schools, is that it matters a great deal.

Students admitted may receive not only a superior education, but also access to elite colleges and, eventually, better employment. That system has also led to an under-representation of Hispanic students, black students, and girls.

Since 2015, first as a doctoral student at The Graduate Center of the City University of New York and then in the years after I received my Ph.D., I have tried to understand how meritocratic the process really is.

First, that requires defining merit. Only New York City defines it as the score on a single test — other cities’ selective high schools use multiple measures, as do top colleges. There are certainly other potential criteria, such as artistic achievement or citizenship.

However, when merit is defined as achievement in school, the question of whether the test is meritocratic is an empirical question that can be answered with data.

To do that, I used SHSAT scores for nearly 28,000 students and school grades for all public school students in the city. (To be clear, the city changed the SHSAT itself somewhat last year; my analysis used scores on the earlier version.)

My analysis makes clear that the SHSAT does measure an ability that contributes, to some extent, to success in high school. Specifically, an SHSAT score predicts 20 percent of the variability in freshman grade-point average among all public school students who took the exam. Students with extremely high SHSAT scores (greater than 650) generally also had high grades when they reached a specialized school.

However, for the vast majority of students who were admitted with lower SHSAT scores, from 486 to 600, freshman grade point averages ranged widely — from around 50 to 100. That indicates that the SHSAT was a very imprecise predictor of future success for students who scored near the cutoffs.

Course grades earned in the seventh grade, in contrast, predicted 44 percent of the variability in freshman-year grades, making them a far better admissions criterion than the SHSAT score, at least for students near the score cutoffs.
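To illustrate what “predicts 20 percent of the variability” means, here is a sketch with synthetic data. The real analysis used actual student records; the 0.45 and 0.66 loadings below are simply reverse-engineered from the reported R-squared values, not estimated quantities.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 28_000  # roughly the number of test-takers in the analysis

# Synthetic stand-ins, constructed to mimic the reported relationships:
# a single underlying "success" factor drives freshman GPA, and each
# predictor measures that factor with a different amount of noise.
ability = rng.normal(size=n)
shsat   = 0.45 * ability + rng.normal(scale=np.sqrt(1 - 0.45**2), size=n)
grades7 = 0.66 * ability + rng.normal(scale=np.sqrt(1 - 0.66**2), size=n)
gpa9    = ability

# "Predicts X percent of the variability" is the squared correlation:
# the R^2 of a simple linear regression of freshman GPA on the predictor.
for name, x in [("SHSAT score", shsat), ("7th-grade GPA", grades7)]:
    r2 = np.corrcoef(x, gpa9)[0, 1] ** 2
    print(f"{name}: R^2 = {r2:.2f}")
# By construction this prints roughly 0.20 and 0.44, which also shows
# how much of the outcome a score near the cutoff leaves undetermined.
```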

It’s not surprising that a standardized test does not predict as well as past school performance. The SHSAT represents a two-and-a-half-hour sample of a limited range of skills and knowledge. In contrast, middle-school grades reflect a full year of student performance across the full range of academic subjects.

Furthermore, an exam that relies almost exclusively on one method of assessment, multiple-choice questions, may fail to measure abilities that are revealed by the variety of assessment methods that go into course grades. Additionally, middle-school grades may capture something important that the SHSAT fails to capture: long-term motivation.

Based on his current plan, Mayor de Blasio seems to be pointed in the right direction. His focus on middle school grades and the Discovery Program, which admits students with scores below the cutoff, is well supported by the data.

In the cohort I looked at, five of the eight schools admitted some students with scores below the cutoff. The sample sizes were too small at four of them to make meaningful comparisons with regularly admitted students. But at Brooklyn Technical High School, the performance of the 35 Discovery Program students was equal to that of other students. Freshman year grade point averages for the two groups were essentially identical: 86.6 versus 86.7.

My research leads me to believe that it might be reasonable to admit a certain percentage of the students with extremely high SHSAT scores — over 600, where the exam is a good predictor — and admit the remainder using a combined index of seventh-grade GPA and SHSAT scores.
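As one hypothetical illustration of how such a two-step rule could work: the seat count, score distributions, and equal weighting below are my own placeholder assumptions, not the formula used in the analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 28_000  # roughly the size of the applicant pool in the data

# Hypothetical applicant pool; scales and counts are illustrative only.
shsat = rng.normal(500, 70, n).clip(200, 800)
gpa7  = rng.normal(85, 8, n).clip(50, 100)
SEATS = 5_000
HIGH_SCORE_CUTOFF = 600  # the region where the exam predicts well

# Step 1: admit applicants above the high-score threshold outright.
auto = np.flatnonzero(shsat >= HIGH_SCORE_CUTOFF)

# Step 2: fill the remaining seats by ranking everyone else on a
# combined index of standardized seventh-grade GPA and SHSAT score.
def z(v):
    return (v - v.mean()) / v.std()

index = z(gpa7) + z(shsat)
rest = np.setdiff1d(np.arange(n), auto)
ranked = rest[np.argsort(-index[rest])]
admitted = np.concatenate([auto, ranked[: SEATS - auto.size]])
print(f"admitted {admitted.size} of {n} applicants "
      f"({auto.size} on score alone)")
```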

When I used that formula to simulate admissions, diversity increased somewhat. An additional 40 black students, 209 Hispanic students, and 205 white students would have been admitted, as well as an additional 716 girls. It’s worth pointing out that in my simulation, Asian students would still constitute the largest segment of students (49 percent) and would be admitted in numbers far exceeding their proportion of applicants.

Because middle school grades are better than test scores at predicting high school achievement, their use in the admissions process should not in any way dilute the quality of the admitted class, and could not be seen as discriminating against Asian students.

The success of the Discovery students should allay some of the concerns about the ability of students with SHSAT scores below the cutoffs to succeed at the specialized schools. There is no guarantee that similar results would be achieved in an expanded Discovery Program. But this finding certainly warrants larger-scale trials.

With consideration of additional criteria, it may be possible to select a group of students who will be more representative of the community the school system serves — and the pool of students who apply — without sacrificing the quality for which New York City’s specialized high schools are so justifiably famous.

Jon Taylor is a research analyst at Hunter College analyzing student success and retention.