As Colorado students take the first round of the new state tests in math and English, debates surrounding testing, and these tests in particular, are heating up.

As a math educator for over 25 years, including more than 19 in Colorado, I hear comments and critiques of the tests that demonstrate fear and confusion around PARCC, the testing consortium at the core of Colorado’s new assessments.

I have had the opportunity to participate in several phases of the creation of the PARCC math tests, and each time, I learned more about the test, the expectations for students, and ways that teachers could support students in being prepared for the test. These experiences gave me confidence in these tests. I hope that by explaining my reasons for this confidence, I might help alleviate some of the stress teachers, parents, and students may be feeling.

Just days after Colorado became a PARCC state in August of 2012, I traveled with about 25 other Colorado educators to Chicago to the first convening of the PARCC Educator Leader Cadre. This group met approximately twice a year, and at each convening we had the chance to ask questions, give input, and provide feedback to shape what was important to each of our states.

Through these experiences, I learned that PARCC is built upon an evidence-centered design: starting with the standards, identifying the specific skills and knowledge the standards require, then designing tests and items that align to that knowledge and those skills. We also had the opportunity to collaborate with, learn from, and share resources with hard-working educators in other PARCC states. We learned, too, how much of the actual detail around giving the test was a state decision, and that we could make the best decisions for Colorado.

A few months after the first cadre meeting, I was invited to serve on the Performance Level Descriptors (PLD) committee. We created documents that describe the math that students know and are able to do at each performance level. We used, among other things, our collective expertise as math teachers to construct these descriptions.

During the PLD meetings, I also helped review test items. We gave feedback on whether an item was acceptable as is, required revisions, or should be rejected altogether, based on how well it aligned to the standards, whether it reflected an authentic mathematical context, and whether it provided insight into a student’s mathematical thinking.

Although my work focused on content, there were other groups that reviewed each item through other lenses, such as bias and sensitivity. All in all, each item is reviewed by about 30 educators before becoming eligible for inclusion on the test.

This process, along with the evidence-centered design of the test, supports the validity of the test items: many experts affirm that each item is indeed designed to assess the intended content.

Most recently, I participated on the test construction committee. We reviewed each item on all 10 forms of the test, checked that the computer-based items were scoring correctly, and confirmed that all 10 forms (six online and four paper and pencil) were parallel in terms of structure, content, and difficulty level. As a result, I feel confident that the finished product is what PARCC said it would be back at the first cadre meeting in August 2012.

It is important to remember that we have new tests because we have new standards. These new standards are not just a reshuffling of content; they are transformational in that they ask us to engage all students in learning experiences that are proven to be aligned to college and career readiness.

This transformational change requires a significantly different measurement tool. And if we are truly teaching to the standards, this test is a better measure of what students are learning than any other option out there.

These tests will not be perfect the first year, but they will get better every year. And although change is scary, we owe it to our students to have systems that prepare them for whatever opportunities they choose after graduation.

Editor’s note: This is the first in a series of commentary pieces on a variety of perspectives from the testing debate. Check back tomorrow for more.