Accountability Isn't a Four-Letter Word

Learn how to get reasonable metrics on your technology investments.

WHEN A SALTY FOUR-LETTER word is uttered, it may draw unwanted attention to the speaker and create discomfort among the listeners. Although the word accountability actually has 14 letters, its mention often evokes similar reactions.

Yet demonstrating real return on investment (ROI) with technology is an issue schools can no longer avoid, deflect or soft-pedal. Funds are tight, so school boards and superintendents are demanding evidence of the effectiveness of every technology investment.

“The media and many parents, too, are increasingly unsatisfied with the ‘trust us’ answer,” says Curt Anderson, instructional technology coordinator for Millard Public Schools in Omaha, Neb.

With the right “producing” metrics in place, we can show how technology expenditures have a positive effect on the bottom line of education: raising student achievement. High-stakes tests aren’t the only option for measuring technology’s benefits. So where do we begin when we want to demonstrate results? And how can we frame our goals and evidence in a convincing fashion?

UPDATING METRICS

Many educators struggle to answer these questions, while others hope to avoid the debate altogether. Regrettably, school leaders often employ less meaningful, imprecise or misplaced measures to arrive at their answers. Metrics employed by educational leaders have historically fallen into three categories:

Tier 1: On the Bench = Completing, Counting and Confirming Metrics

Tier 2: On Deck = Comparing and Using Metrics

Tier 3: At Bat = Producing Metrics

Baseball provides helpful metaphors. Riding the bench gives us time to consider what’s happening on the field; these are Tier 1 metrics. Waiting on deck (Tier 2) puts us a step closer to the action. But metrics of this nature often don’t show the true value of technology projects.

“For many years, technology evaluation has been all about counting and comparing — our student-to-computer ratio, how many computers we have on our network and how many classrooms have Internet access,” explains Bill Morrison, director of technology for the Rapides Parish School District in Alexandria, La. “The problem with these measures is that they say nothing about student outcomes, yet we persist in using such measures to determine program success.”

Yet, in both cases, we are not in a position to contribute to the team’s efforts to win. (See the “Getting Ready to Bat” sidebar for details on Tier 1 and Tier 2.)

“These metrics are popular because they are very easy to administer and collect,” Morrison continues. The types of measures increasingly demanded by many decision-makers and administrators fall into the “at bat” category. Employing these metrics shows that technology is a productive player that can get on base, get in scoring position, help the school district score and maybe hit a home run.

TIER 3: AT BAT

Tier 3, or “at bat,” measures tread on the prized ground of results-oriented thinking. They represent more powerful tools — metrics that, if employed, will begin to move schools steadily toward a producer mentality and away from a consumer mentality in educational technology.

“If student achievement is truly the ultimate goal,” says Hal Anderson, director of information systems for the Cheyenne Mountain School District in Colorado Springs, Colo., “then the ultimate measure of effective use of technology must be based on what the students have accomplished with the tools that are available to them, rather than being based on how successful we as educators may have been in the task of procuring those tools.”

These metrics fall into four categories: light, soft, medium and hard gains.

In the light gains arena, educators can develop paper-based or online surveys that capture student feelings about using technology or simply indicate a preference for learning with technology. Schools can measure motivational impact by comparing how long students linger after class or stay focused on a given task with the interest level or time on task during a traditional lesson. A principal or lead teacher may also gather compelling classroom anecdotes by interviewing classroom teachers and soliciting examples of learning moments or student success stories.
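
Where a school wants to put a number on that motivational impact, the arithmetic is a simple comparison of averages. Below is a minimal sketch in Python; the minute counts and variable names are hypothetical illustrations, not data from any real classroom.

    # Minimal sketch: compare average observed time on task (in minutes)
    # under a traditional lesson and a technology-supported lesson.
    # All figures are hypothetical.
    traditional_minutes = [12, 9, 14, 10, 11]   # per-student observations
    technology_minutes = [22, 18, 25, 20, 19]

    def mean(values):
        return sum(values) / len(values)

    baseline = mean(traditional_minutes)
    with_tech = mean(technology_minutes)
    print(f"Traditional lesson: {baseline:.1f} minutes on task")
    print(f"Technology lesson:  {with_tech:.1f} minutes on task")
    print(f"Change: {with_tech / baseline:.1f}x time on task")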

In the soft gains arena, educators can track attendance patterns during an ongoing technology-based activity and compare those data with traditional patterns to see if attendance has increased or instances of tardiness have decreased. Another metric involves tracking classroom assignment completion or homework submittal during a technology-based intervention and comparing those data with historical data. Are students turning in more homework or completing more assignments as a result of this intervention?
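
Both comparisons reduce to a pair of rates, one historical and one from the intervention period. A minimal sketch in Python, assuming hypothetical counts of student-days and assignments:

    # Minimal sketch: attendance and homework-completion rates before and
    # during a technology-based intervention. All counts are hypothetical.
    def rate(occurred, possible):
        """Fraction of possible events that actually happened."""
        return occurred / possible

    # Attendance: student-days present out of student-days enrolled.
    attendance_before = rate(occurred=2610, possible=2800)
    attendance_during = rate(occurred=2726, possible=2800)

    # Homework: assignments submitted out of assignments given.
    homework_before = rate(occurred=410, possible=560)
    homework_during = rate(occurred=498, possible=560)

    print(f"Attendance: {attendance_before:.1%} -> {attendance_during:.1%}")
    print(f"Homework:   {homework_before:.1%} -> {homework_during:.1%}")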

Learning efficiency (learning or accomplishing more in less time than expected) is one of the easiest and most important gains to document. To measure learning efficiency, simply compare the amount of time it takes to accomplish a task or master a lesson standard using the technology with the amount of time it takes without the technology. Another method is to measure the amount of material covered with the technology intervention against the amount covered in a traditional lesson without it.
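
Either way, the metric reduces to a ratio. The sketch below illustrates both methods; the hours and lesson counts are hypothetical placeholders.

    # Minimal sketch: two ways to express learning efficiency.
    # All timings and counts are hypothetical.

    # Method 1: time to master the same standard, with and without technology.
    hours_without_tech = 6.0
    hours_with_tech = 4.0
    time_saved = 1 - hours_with_tech / hours_without_tech
    print(f"Standard mastered in {time_saved:.0%} less time")

    # Method 2: material covered in the same fixed block of class time.
    lessons_without_tech = 3
    lessons_with_tech = 5
    print(f"Coverage: {lessons_with_tech / lessons_without_tech:.2f}x the material")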

In the medium gains arena, consider administering a pre-test prior to a technology-supported lesson and then testing again to demonstrate that student gains were indeed realized. Measuring increased enrollment in upper-level courses requires the long view but remains a desirable goal. Educators must first present historical enrollment patterns and show that the intervention has in part led to increased enrollment of women, minorities or target populations in upper-level courses.
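
The pre-test/post-test comparison pairs each student’s two scores and summarizes the gains. A minimal sketch with hypothetical scores:

    # Minimal sketch: pre-test/post-test gains for a technology-supported
    # lesson. The paired scores below are hypothetical.
    pre = [62, 70, 55, 80, 66]
    post = [74, 78, 70, 85, 79]

    gains = [after - before for before, after in zip(pre, post)]
    mean_gain = sum(gains) / len(gains)
    improved = sum(1 for g in gains if g > 0)

    print(f"Mean gain: {mean_gain:.1f} points")
    print(f"{improved} of {len(gains)} students improved")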

Next, imagine the appeal of a grant proposal that offers to reduce the number of Ds and Fs in a historically poor-performing class. This can be done by documenting the historical failure trend and aiming to reduce it through the introduction of a technology-based intervention.
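
The supporting evidence is simply a count of Ds and Fs in the historical record versus the intervention period. A minimal sketch with hypothetical grade lists:

    # Minimal sketch: count Ds and Fs before and after the intervention.
    # Grade lists are hypothetical.
    grades_before = ["A", "B", "D", "F", "C", "D", "F", "D", "B", "F"]
    grades_after = ["A", "B", "C", "C", "B", "D", "B", "C", "A", "C"]

    def d_and_f_count(grades):
        return sum(1 for g in grades if g in ("D", "F"))

    print(f"Ds and Fs before: {d_and_f_count(grades_before)}")
    print(f"Ds and Fs after:  {d_and_f_count(grades_after)}")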

In the hard-gains arena, educators enjoy many convincing options for measuring ROI. We can document concrete improvement of skills over time by referencing a rubric or projects in a learning portfolio.

In order to warrant continued funding for a project, consider using value-added assessment tools to demonstrate how a technology-based literacy program affects gains. If school leaders insist on standardized test score improvements before funding a key innovation, create a pilot project that focuses on improving the subcomponents of a test score, known as content subscores.
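
Tracking a content subscore means following one component of the test for the pilot group rather than the composite score. In the minimal sketch below, the student labels and comprehension subscores are hypothetical:

    # Minimal sketch: per-student change in one content subscore (here,
    # reading comprehension) across a pilot. All values are hypothetical.
    subscore_before = {"student_a": 40, "student_b": 35, "student_c": 42}
    subscore_after = {"student_a": 46, "student_b": 39, "student_c": 45}

    for student, before in subscore_before.items():
        change = (subscore_after[student] - before) / before
        print(f"{student}: comprehension subscore {change:+.0%}")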

Also, imagine crafting a technology proposal that aims to move students up a proficiency level on a state assessment. For example, if 10 students are on the bubble between the partially proficient and proficient performance levels, focus your technology intervention on just those students.
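
Finding those bubble students is a filtering step: flag everyone whose scale score sits just below the proficient cut. A minimal sketch, in which the cut score, margin and student records are all hypothetical:

    # Minimal sketch: flag students "on the bubble" just below the
    # proficient cut score. Cut score, margin and records are hypothetical.
    PROFICIENT_CUT = 500
    BUBBLE_MARGIN = 15  # points below the cut that still count as "on the bubble"

    scale_scores = {"ana": 492, "ben": 451, "cal": 498, "dia": 503}

    bubble = [name for name, score in scale_scores.items()
              if PROFICIENT_CUT - BUBBLE_MARGIN <= score < PROFICIENT_CUT]
    print("Target for the intervention:", bubble)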

Educators no longer need to recoil from the challenges of accountability. However, schools do need to enlist more compelling metrics as they strive to demonstrate the learning benefits associated with technology investments.

“We shouldn’t throw up our hands just because impact assessment is difficult,” says Brent Wilson, professor of information and learning technologies at the University of Colorado at Denver. “We should build a reasoned case for impact that might draw on a whole array of measures — some better than others, but all useful in drawing a portrait of overall impact.”

Increasingly, the metrics that matter will be producing metrics.

Light Gains

Impact on attitudes: “Students enjoyed or preferred learning with the technology.”

Impact on motivation: “Students were more attentive when practicing word-attack skills, nearly doubling typical time on task and attention span.”

Anecdotal evidence: “Teacher observations support the notion that students who construct visual representations of geometry concepts understand those concepts more deeply.”

Soft Gains

Improved attendance and decreased tardiness on activity dates

Increased homework completion rates

Learning efficiency: “Students are able to complete 10 times as many transformations with the graphing calculators as they can with pencil and graph paper in the same amount of time.”

Medium Gains

Pre-test/post-test improvement: “Students showed consistently positive post-test gains on multiplication timed tests in the classroom after 10 practice sessions with the software.”

Increased enrollment in upper-level course work: “The number of students enrolling in advanced placement science classes increased 10 percent after the implementation of the new technology-based tutoring system.”

Reduced failure rate: “A historical average of 25 students receive grades of D or F in their seventh-grade math standards course each year. After the intervention, this number was reduced to four students for two consecutive years.”

Success in gateway courses: “After implementing the new online homework support system for algebra courses, students are more likely to score a B or higher.”

Hard Gains

Improvement in performance over time on an assessment rubric: “Using the word processor, students demonstrated quality improvements on our writing rubric in the areas of spelling, sentence length, use of descriptive language and length of writing.”

Improvement on a benchmark assessment: “After the intervention, students showed consistent increases in four of five categories on benchmark tests administered three times a year.”

Content subscore improvement: “Although no changes were evidenced in the overall proficiency level at this time, all partially proficient students increased their comprehension subscores on the state reading assessment by 10 percent.”

Standardized test score improvement: “Target students realized a 10 percent score gain on the state reading achievement test.”

Proficiency-level improvement: “Nearly half the students scoring unsatisfactory on state assessments improved to partially proficient.”

Rigorous study or evaluation demonstrating effectiveness: “A controlled study demonstrated that students improved their understanding of key concepts in chemistry when technology-based assessments were used at critical junctures in the lesson and that improvement was greater than that experienced by students taking a comparable class taught without the technology.”

GETTING READY TO BAT

Riding “the bench” and waiting “on deck” are vital parts of getting ready to bat. Luckily, many school districts have mastered these skills. However, schools must learn to move beyond these basic metrics when striving to justify their programs. The higher we move up the ladder of Tier 3 gains, the better we will become at attaining and demonstrating a valuable return on our technology investments.

Using a baseball analogy, employing Tier 1 metrics is like being a member of the team but sitting on the bench. Although we can cheer on the team’s accomplishments, we are not yet in a position to contribute to its efforts to win. These metrics are informative, but they are not useful in addressing crucial questions about effectiveness.

Tier 1: On the Bench

Showing evidence of program/project completion: “Business labs were updated in all high schools by the start of the 2005-2006 school year, and under the $300,000 budget.”

Counting computers: “We now have 100 computers in each elementary school.”

Tracking dollars spent: “Our district spends more than $150 per student on technology resources for learning.”

Demonstrating that the educational intent was met: “Our middle school writing and research labs are now in use by students and teachers.”

At the Tier 2 level of accountability, schools often attempt to demonstrate the benefits of investing in technology by comparing facilities or resources with those of neighboring districts, measuring students’ access to resources or showing increasingly sophisticated levels of use in classrooms.

Tier 2: On Deck

Comparison with others: “We have as many computer labs as other high schools in the region.”

Access or availability: “We previously enjoyed little access to technology, but now we have a 4:1 student-to-computer ratio.”

Levels of use: “Our special-needs students use individualized math practice software two times a week [frequency].” “Word processing is integrated across all content areas [breadth].”

TOP EIGHT METRICS

Question: Which of these best characterizes how your school district measures the effectiveness of educational technology investments?

1. Levels of usage (e.g., students go to the lab x times per week)

2. Anecdotal evidence from teachers (e.g., “The students seem engaged.”)

3. Counting computers or systems (e.g., x number of computers per student)

4. Year-end testing

5. Comparison data (e.g., “We are at x, compared to the x national average.”)

6. Decreased failure rates

7. Learning efficiency (e.g., it takes students x less time to complete assignments)

8. Increased enrollment in upper-level courses

Source: EdTech poll of 63 readers

Len Scrogan is director of instructional technology at Boulder Valley Schools in Boulder, Colo.