Holding Schools Accountable

Assessment tests strive to improve student performance and prompt schools to make curriculum adjustments.

Alan Joch

Those inclined to write off the Lindsay Unified School District, which is located an hour south of Fresno in California’s Central Valley, have reasons for skepticism. More than half of the district’s 3,500 students are English learners, and about 80 percent come from low-income homes—the demographics of a school system typically identified with low test scores.

“We’re one of those districts with real challenges,” says Frances Holdbrooks, assistant superintendent in charge of curriculum and instruction.

Yet the Lindsay district’s students are beating the odds. While national averages during the last decade show mixed results in reading and math scores, scores for Lindsay’s pupils have risen steadily. The school district’s Academic Performance Index (API), the cornerstone of California’s school-accountability efforts, has soared almost 150 points above its 2000 level.

A number of factors have contributed to the improvement in reading and math scores, but Holdbrooks gives the greatest weight to assessment testing. Lindsay’s administrators and teachers no longer rely on “gut feelings” to figure out what’s working in the classrooms. Instead, they pore over regular and detailed performance data.

Using the testing results, Lindsay’s educators and administrators can quickly identify struggling students and can intervene to offer them extra instruction. If needed, the district can also provide additional training for the teachers.

“At the end of the year, we’re not guessing whether our kids will make it or not” on standardized state tests, like those mandated by the No Child Left Behind (NCLB) Act of 2001, Holdbrooks says. “Teachers are much more aware of standards and how they can target their teaching in more effective ways.”

Using a Valuable Tool

Lindsay is not unique in this respect. From a relatively affluent and growing district in Vail, Ariz., to sprawling Los Angeles with its diverse student population, assessment testing is becoming a valuable tool to keep track of both student and teacher performance and to ensure that what is taught in the classroom closely matches what is evaluated in statewide tests.

But even proponents admit that good assessments don’t guarantee success. Some of the biggest challenges involve cultural changes: Although numerous teachers embrace the new fountain of data that assessment tests provide, others find the numbers hard to digest.

“There’s no hiding from the data,” Holdbrooks says. “You can’t just blame the kids if your class isn’t doing well, but Ms. Jones next door is making progress.”

Ms. Jones, it seems, just might know something that other teachers don’t.

Taking the Learning Pulse

Schools may justifiably feel overwhelmed by today’s testing requirements. NCLB mandates annual state tests for math and reading for grades three through eight, and one test each for math and reading between grades 10 and 12, beginning with the 2005-2006 academic year. States have responded by establishing tests in core subjects, but data alone can’t improve curricula or arrest the slide of a fifth-grader falling behind in math class.

Critics deride high-stakes state exams—the cumulative year-end tests that gauge after-the-fact performance—often referring to them as “autopsies.” However, regular assessment tests and new technologies that let users create on-the-fly exams and real-time scoring promise a way to intervene earlier in a student’s slide—before it’s too late.

By regularly taking the learning pulse of students across classrooms and districts, teachers and administrators can implement real-time curriculum adjustments and close the learning gaps that lead to lower scores on standardized tests.

For a number of students, these innovations are coming just in time. During the past decade, performance levels in core subjects have remained stagnant or shown growth for only certain groups of students.

In its 2003 National Assessment of Educational Progress, the National Center for Education Statistics, part of the U.S. Department of Education in Washington, D.C., found that across the country average reading scores for fourth-graders remained essentially unchanged from 1992 to 2003. Reading scores for eighth-graders rose slightly over the decade, but then showed a frustrating dip between 2002 and 2003. (See the “Mixed Results” figures below.)

An examination of specific student populations shows nagging performance gaps. For example, reading scores for fourth-grade girls in 2003 averaged seven points higher than the scores of their male classmates, a difference that increased to 11 points by the eighth grade. Similar gaps existed in 1992.

Reading scores for Hispanic students showed little improvement over the last decade. White students continued to score 30 points higher than Hispanic and black children, again mirroring differences noted in 1992.

Although math scores showed steady improvement overall, gaps exist in this subject area, too. Black and Hispanic eighth-graders trail their white classmates by 37 points and 27 points, respectively. However, fourth-graders are narrowing the gap, with respective differences of 27 points and 18 points.

Although numbers like these identify problems, they don’t reveal the root causes, which can include faulty teaching methods, inadequate classroom materials, language barriers and biases in test questions. Frequent and well-honed assessment exams may provide insight into solutions for each classroom.

Educators at the Los Angeles Unified School District (LAUSD) believe that assessment testing will help to eliminate stratifications along racial, gender and economic lines by holding all students accountable to state standards. “Just because we have a lot of English learners and students coming from poverty, we don’t want anyone to lower expectations” about what each student can achieve, says Ronni Ephraim, chief instructional officer for LAUSD’s K-12 instruction.

The need to raise expectations about student performance is also felt by Ruth Ann McKenna, a project director for the Comprehensive School Assistance Program at WestEd, a San Francisco-based nonprofit development and service agency for schools. “The old expectation is a year’s growth in a year’s time,” she says. “This attitude accepts that a C student comes into a grade as a C and enters the next one as a C. But under NCLB, we want a C student to become a B student after a year.”

To achieve that goal, McKenna says, “schools need a whole new way to organize so that teachers constantly improve performance beyond what they’re used to seeing. That means accelerated instruction, intervention and enrichment.”

Providing Relief

In order to boost student performance, assessment tests must be tied directly to state standards and should occur regularly: weekly, quarterly or whatever period matches the flow of lesson plans. Designing and scoring the tests can be time-consuming, and the task of turning the results into curriculum action items can add to the burden.

However, assessment programs let teachers download questions that are tailored to individual state standards from a central database and use them to compile tests. Teachers then scan and digitize the completed tests and e-mail them to a testing service, which automatically scores the exams and stores the results in a performance database that is hosted by the service and available via its secure Web site.

Once authenticated, both teachers and administrators can access the database and use reporting tools on the site to create reports that graphically present data, such as ranking students against classmates or against state standards. All of this is done quickly: The time from test to report may be less than 24 hours.
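The test-to-report loop described above (compile standards-aligned items, score the completed exams, rank the results) can be sketched in a few lines. The following Python is a minimal illustration only; the class names, the item-bank shape and the 60 percent passing cutoff are invented for the example and do not reflect any real testing service’s API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the assessment workflow: build a test from a
# standards-aligned item bank, score student responses, and report each
# student against a passing cutoff. Names and thresholds are illustrative.

@dataclass
class Item:
    standard: str  # state standard the question targets, e.g. "MA.5.2"
    answer: str    # correct choice

@dataclass
class ItemBank:
    items: dict = field(default_factory=dict)  # question id -> Item

    def build_test(self, standard):
        """Compile a test from all items tagged with one standard."""
        return {i: item for i, item in self.items.items()
                if item.standard == standard}

def score(test, responses):
    """Return the fraction of test items answered correctly."""
    correct = sum(1 for i, item in test.items()
                  if responses.get(i) == item.answer)
    return correct / len(test)

def report(scores, passing=0.6):
    """Rank students by score and flag those below the cutoff."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, s, "pass" if s >= passing else "needs help")
            for name, s in ranked]
```

The point of the sketch is the division of labor the article describes: everything up through `report` is automated, and what happens next is up to teachers and administrators.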

Then automation ends, and it’s up to teachers and administrators to decide what action to take. That’s not always easy, WestEd’s McKenna acknowledges. Problems develop when test results are used “just to say ‘Gotcha!’” to teachers, she says.

“When the results come out of the machine, there needs to be a system in place for taking action,” she points out. “Otherwise, kids will not get better on state tests.”

Strategic Curriculum Change

The teachers and administrators at Arizona’s Vail Unified School District comb through assessment scores for subjects that weren’t grasped in a classroom or at a grade level. One recent test showed that only 40 percent of the seventh-grade class earned passing scores in subtracting integers, a skill taught just before the assessment.

“We concluded that we didn’t have enough time to teach that subject,” says Joe Sassone, assistant superintendent for curriculum and professional development. Teachers revisited the material, and, a month later, 80 percent of the class passed. “Two years ago, we wouldn’t have seen the problem and taken action,” Sassone admits. “It’s a big mind shift.”

The Vail district, near Tucson, is one of the fastest-growing districts in Arizona. Projections show its 6,600-student enrollment growing to as many as 10,000 by 2008.

With that kind of growth come all the usual challenges: crowded schools, large classes, redistricting, and new teacher hiring and training. “If you want to live in Vail, you have to get used to change,” Sassone says.

Two years ago, the district became dissatisfied with its students’ flat achievement data and decided to take action by using assessment tests. “We wanted a system in place to make sure all the learning standards were being taught, and we were testing whether students were learning key standards,” Sassone says.

Vail encouraged teachers to hone standard tests in order to raise students’ academic performance and, in the process, to become personally vested in the initiative’s success.

“After they edited the tests two or three times, teachers really felt that they had buy-in,” Sassone says.

For example, teachers worked with a vendor to create miniquizzes to augment the math and reading assessments that students in grades two through 10 take three times a year. The quiz results help Vail fine-tune its assessment questions.

“If we want to look at a second grade’s subtraction skills, we put together five questions on that skill and can quickly see who’s proficient, who’s approaching proficiency and who’s below that level,” Sassone says.
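Sassone’s three-way breakdown amounts to a tiny classifier over quiz results. This sketch assumes a five-question quiz and invented cutoffs (four correct for proficient, three for approaching); Vail’s actual scoring rules are not described in that detail.

```python
# Hypothetical sketch of the three-band proficiency report Sassone
# describes. The cutoffs below are illustrative assumptions.

def band(correct, proficient=4, approaching=3):
    """Place one student's mini-quiz result into a proficiency band."""
    if correct >= proficient:
        return "proficient"
    if correct >= approaching:
        return "approaching"
    return "below"

def summarize(results):
    """Map each student to a band for a five-question skill quiz."""
    return {name: band(correct) for name, correct in results.items()}
```

For example, `summarize({"Ava": 5, "Leo": 3, "Mia": 1})` returns `{"Ava": "proficient", "Leo": "approaching", "Mia": "below"}`, which is exactly the at-a-glance view Sassone describes.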

During the past two years, Vail’s test scores have improved across the board. The performance of all special-needs students increased after a year of assessments. At one elementary school, only 46 percent of fifth-graders had been meeting state standards; now 80 percent of the fifth-grade class is at that level. Vail’s students who already exceed standards also are benefiting.

“We used the assessments to identify mastery students, and then we gave them enrichment time,” Sassone says. Similarly, students who were close to mastery got more time to learn skills—a move that boosted them into the mastery category.

Coaching Is Key

LAUSD’s Ephraim is no stranger to curriculum reform. She developed the district’s Elementary Literacy Initiative, which uses literacy coaches who work with other teachers to improve language arts instruction. The initiative has boosted standardized test scores for the past three years in more than 425 LAUSD schools.

Because periodic assessments are such an important component of its reading initiative, the district is pushing to do them every six to 10 weeks in all core curriculum areas. So far, district schools have implemented periodic assessment in language arts, math and science—with assessments in social studies in its middle schools and high schools soon to follow.

LAUSD uses about 600 full-time coaches systemwide to help teachers interpret assessment data. “Part of their job is to help teachers understand why we’re asking them to teach in certain ways,” Ephraim says.

Performance improvements hinge on “being knowledgeable about how to read the reports and knowing what to do with them,” she adds. “It’s easy to say we have failing kids. The reports are tools that teachers should use to help them understand how their students are performing and also should help teachers reflect on their own teaching practices.”

The assessments measure what students should have been taught, Ephraim explains. The district uses instructional guides and pacing plans to outline which standards should be taught during a given period. If a teacher is expected to cover specific standards, the corresponding assessment items show whether or not students understood the skills and concepts.

“Once we receive the assessment data, we disaggregate the results for ethnicity, gender and other factors,” Ephraim says. Superintendents receive data reports from each of their schools and use this data to develop a plan of action for the next six to 10 weeks.
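The disaggregation step Ephraim describes amounts to grouping scores by a demographic field and summarizing each subgroup. A minimal sketch, with invented records and field names:

```python
from collections import defaultdict

# Illustrative sketch of disaggregating assessment results by a
# demographic factor. Records and field names are invented for the example.

def disaggregate(records, factor):
    """Average scores for each subgroup of the given factor."""
    groups = defaultdict(list)
    for r in records:
        groups[r[factor]].append(r["score"])
    return {group: sum(s) / len(s) for group, s in groups.items()}
```

Given records like `{"gender": "F", "score": 80}`, calling `disaggregate(records, "gender")` yields one average per subgroup, the kind of breakdown a superintendent could use to plan the next six to 10 weeks.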

Teachers also use the data as they plan together. If one class distinguishes itself in spelling, for example, the successful instructor can share his or her secrets with colleagues. Or teachers may bring in a coach to forge new strategies. Either way, the goal is to refuse to accept poor performance.

“As practitioners, we all need to struggle with what we can do to help students meet standards,” says Ephraim.

Time to Improve

Assessment testing began five years ago in California’s Lindsay Unified School District, initially with paper tests that provided only summary, district-level results. The watchword then, as now, was to verify the data.

“We don’t go by feelings when we judge if something is working,” says Holdbrooks. “We say, ‘What does the data show?’”

These days, the district gets detailed reports on how each student fares against state standards. “We can make better decisions instructionally about which kids are mastering lessons and which ones need us to go back and reinforce lessons,” Holdbrooks says.

The district requires assessment tests every trimester. Rather than compiling tests from questions its testing service provides, Lindsay uses test questions from a third-party test bank and—in order to take advantage of close ties to its curriculum—from the publisher of the materials used in Lindsay’s math program.

In order to better use the data to craft improved curricula, the school schedules weekly early-release or late-start days to give teachers time to meet by school, grade level or department to discuss test results, share ideas and organize “intervention groups” to give struggling students customized help. “Giving teachers time to do this is crucial,” Holdbrooks says.

The assessment process is making Lindsay’s teachers better attuned to grade-level standards and is helping them to identify shortfalls in their teaching materials.

Although Lindsay’s API scores look good, its Adequate Yearly Progress numbers remain below par. “We will make the target with math, but language arts is not where it should be,” Holdbrooks says. “But we think that if we continue down this path, we’ll see growth.”

High-tech tests don’t offer a quick fix for students’ lagging performance, Holdbrooks says. “It’s a lot of really hard work,” she acknowledges. “But, if you don’t have data to show how your students are doing, you won’t be able to change what’s happening in your classroom.”

Alan Joch is a New Hampshire-based freelance writer who specializes in business and technology.

Mixed Results

Overall reading scores across the United States showed minimal improvement over the last decade and declined slightly in 2003 from the previous year, while average math scores climbed to new highs.

Reading Scores

GRADE 4

1992: 216
1994: 212
1998: 213
2002: 217
2003: 216

GRADE 8

1992: 258
1994: 257
1998: 261
2002: 263
2003: 261

Math Scores

GRADE 4

1990: 212
1992: 219
1996: 222
2000: 224
2003: 234

GRADE 8

1990: 262
1992: 267
1996: 269
2000: 272
2003: 276

Source: The National Center for Education Statistics’ 2003 National Assessment of Educational Progress
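The NAEP averages tabulated above can be re-entered as data to check the trends the sidebar summarizes: fourth-grade reading flat from 1992 to 2003, an eighth-grade dip after 2002, and math at new highs in 2003.

```python
# NAEP average scale scores from the "Mixed Results" sidebar,
# keyed by grade and year.
reading = {
    4: {1992: 216, 1994: 212, 1998: 213, 2002: 217, 2003: 216},
    8: {1992: 258, 1994: 257, 1998: 261, 2002: 263, 2003: 261},
}
math = {
    4: {1990: 212, 1992: 219, 1996: 222, 2000: 224, 2003: 234},
    8: {1990: 262, 1992: 267, 1996: 269, 2000: 272, 2003: 276},
}

def change(series, start, end):
    """Point change in average score between two assessment years."""
    return series[end] - series[start]
```

Here `change(reading[4], 1992, 2003)` is 0 and `change(reading[8], 2002, 2003)` is -2, while both 2003 math averages are the highest in their series.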

Teachers + Testing = Success

Assessment testing provides new insights into how students are learning, but teaching remains the key to achieving gains in test scores.

Joe Sassone, assistant superintendent for instruction at Arizona’s Vail Unified School District, cautions those considering use of the new assessment technology not to think of it as the driving force behind better scores. “It’s the vehicle to help us get where we want to go,” he says.

Nevertheless, the reports enabled by the technology do encourage teachers to get together to examine state standards against a backdrop of faculty skills and resources. “We map lesson plans to a districtwide calendar, so everyone knows we’ll be teaching this subject in this week,” Sassone says. “There’s more collaboration. A couple of years ago, teachers were doing their own thing, and there was no collegiality at all.”

Today, all of the Los Angeles Unified School District’s 450 elementary schools are administering periodic assessments in literacy and mathematics. But only about half of these schools are regularly using the data to inform instruction, says Ronni Ephraim, chief instructional officer for K-12.

Periodic assessments work best when there’s a collaborative atmosphere among teachers and principals, she says. Schools that stumble typically haven’t created a culture in which teachers feel comfortable enough to open up their classes’ scores to peer scrutiny. “Opening up classroom practice is key to using periodic assessments effectively to meet the needs of students, teachers and administrators,” Ephraim says.

Since it began assessment testing, California’s Lindsay Unified School District has seen its state-sponsored Academic Performance Index (API) rise from 400 in the 1999-2000 school year to 609 in the 2003-2004 school year, according to Frances Holdbrooks, assistant superintendent in charge of curriculum and instruction.

But the process was also a learning experience for teachers. One assessment tested math reasoning skills, but mistakenly used questions on prime numbers, which students hadn’t yet studied. “That’s the type of ‘Aha!’ that is now happening,” Holdbrooks says. “In my opinion, that’s very productive.”

Oct 31 2006
