SUE B. WORKMAN, at Indiana University, looks beyond ACD systems and trouble tickets for deeper interpretation of help-desk numbers.

May 01 2008
Management

Metrics Aren't All Numbers

They’re the information that can help IT perform better.

The IT help desk is a crucial supporting player for the instructors, researchers, administrators and students spread across university and college campuses. With such a broad spectrum of activity, however, knowing exactly how and where to help is a challenge, and getting those specifics right makes support more effective for everyone.

Identifying the key metrics these institutions should be using to ensure a successful help desk isn’t clear-cut. Not all metrics are created equal. Knowing what’s important, what’s not, and how to collect and understand metrics is both an art and a science.

Help-desk experts at big institutions, such as Indiana University, the University of Notre Dame, Stanford University and Yale University, say the most useful help-desk metrics don’t simply track IT trouble; they help smooth out everyday operations. In other words, they grease the cogs on which regular IT operations turn. Looking only at simple figures, such as how many help-desk calls were handled or resolved, won’t make the help desk more effective. It’s how those numbers are used to change the way the desk handles calls that makes the difference.

“Those metrics that enable us to understand if the help desk is working and what changes need to be made to ensure smooth operation matter most,” says Susan Grajek, senior director of client support in information services at Yale University.

Help-desk experts list the following among the most commonly used metrics: customer satisfaction; availability; accuracy; speed, or how long it takes to resolve a problem; how many tickets close daily; call volume; call-abandonment rate; backlog; and agent utilization.

Experts agree that using these metrics helps them meet their goal of optimizing the help-desk experience so faculty, staff and students can keep working. The task is a challenging one, however, because metrics are dynamic and the IT world is constantly changing. Metrics that served a purpose two years ago might not be useful today. “You must look at the content of what you’re supporting,” says Grajek.

Metrics are a great tool for managing, “but they’re no substitute for it,” says Tom Goodrich, manager of support tools and metrics at Stanford University.

Look Beyond the Numbers

Metrics serve as the eyes and ears of help-desk performance. Depending on the size of the institution, help desks commonly serve thousands to tens of thousands of users who study or work on campus. The help desk is often the first line of response for IT hardware, software and networking issues.

An Educause Center for Applied Research (ECAR) study on university IT help desks, published last December, reported that most of them support a wide range of identity-related services, including password changes, user account generation and user name changes, as well as operating system software, central hardware and the data network. Help desks also assist with common instructional and administrative applications, such as e-mail, personal productivity applications and campus calendar applications, according to the study.

Help desks that monitor the calls coming into their operations can generate raw data that can be used to improve efficiency. For instance, metrics on call volume and average call length can help managers determine whether they have adequate staff. Measuring call-abandonment rates tells administrators how many calls actually got through and how many users hung up because their calls were put on hold.
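
As a rough illustration of how those figures come together, here is a minimal sketch that computes call volume, average call length and abandonment rate from an exported call log. The record layout and field names are hypothetical, not taken from any particular ACD product.

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    duration_seconds: int   # talk time once connected (0 if never answered)
    abandoned: bool         # True if the caller hung up while on hold

def call_metrics(calls: list[CallRecord]) -> dict:
    """Compute call volume, average call length and abandonment rate."""
    answered = [c for c in calls if not c.abandoned]
    return {
        "call_volume": len(calls),
        "avg_call_seconds": (
            sum(c.duration_seconds for c in answered) / len(answered)
            if answered else 0.0
        ),
        "abandonment_rate": (
            sum(1 for c in calls if c.abandoned) / len(calls) if calls else 0.0
        ),
    }

# Example: three answered calls and one abandoned call.
log = [CallRecord(300, False), CallRecord(420, False),
       CallRecord(180, False), CallRecord(0, True)]
print(call_metrics(log))
# {'call_volume': 4, 'avg_call_seconds': 300.0, 'abandonment_rate': 0.25}
```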

Interpreting numbers is essential for an efficient help desk. Many years ago, when Stanford began using metrics, the focus was on trouble-ticket count and the number of phone calls handled. “We learned that just looking at the numbers didn’t tell us about the quality of the service,” says Chris Lundin, director of help-desk services at Stanford.

The organization backed away from looking strictly at numbers and began using a new metric (a customer-satisfaction survey) to better understand the end-user experience as well as the help-desk specialists’ performance. The school has netted improvements since adding the user survey, says Lundin.

Customer-satisfaction surveys and other user surveys can also be a very useful tool to ensure that the help desk is fulfilling its mission.

When an incident report is closed at the University of Notre Dame help desk, for example, an e-mail goes out to the customer asking them to rate their recent experience. “How it’s defined is up to the customer. In other words, ‘Did we treat you well, how did we do?’” says Denise Moser, help-desk manager at Notre Dame.

At many university help desks, some surveys are followed up with a phone call to get more information from the user.

“Metrics are a starting point,” says James Abbott, president of Abbott Associates, an IT consulting company in Greenville, S.C., and an adjunct professor at the University of Arizona. Abbott notes that metrics should evolve continually.

Getting It Right

Most help-desk metrics are tracked by automatic call distribution (ACD) systems and trouble-ticket software. In addition to the automated systems, universities commonly add a customer-satisfaction survey.

Experts see those as basic measurements that must be built upon. At Indiana University, the support center, which includes the help desk, looks not only at the ACD and trouble-ticket systems for metrics but also at a customer-satisfaction survey, a knowledge base and other online-distribution systems it has developed internally, according to Sue B. Workman, associate vice president for support at Indiana University.

The support-center knowledge base is populated with about 13,000 highly honed answers. “We’ll look at the top 10 knowledge-base requests to see why people are calling us. We’re always going back to the knowledge base to see how we can improve it,” says Workman.
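
Producing a top-10 list like Workman’s is straightforward once searches are logged. The sketch below assumes a plain-text log with one query per line; the file name and format are invented for illustration.

```python
from collections import Counter

def top_requests(log_path: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n most frequent queries from a knowledge-base search log
    that stores one query per line (a hypothetical format)."""
    with open(log_path, encoding="utf-8") as f:
        queries = (line.strip().lower() for line in f if line.strip())
        return Counter(queries).most_common(n)

# Hypothetical output:
# top_requests("kb_searches.log")
# [('reset my password', 1842), ('connect to campus wi-fi', 977), ...]
```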

There’s good reason to do so — too many questions from users can cost money. Last year there were 29 million hits on the knowledge base, according to Workman, at a cost to the school of about five cents per hit, compared with about 133,000 telephone calls during the same period at an average cost of $10.70 per call.
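
A quick back-of-the-envelope check of the figures Workman cites shows why self-service pays: the knowledge base absorbed roughly 218 times the telephone contact volume for about the same total spend.

```python
kb_hits, cost_per_hit = 29_000_000, 0.05   # knowledge-base figures cited by Workman
calls, cost_per_call = 133_000, 10.70      # telephone figures for the same period

print(f"Knowledge base: ${kb_hits * cost_per_hit:,.0f} for {kb_hits:,} contacts")
print(f"Telephone:      ${calls * cost_per_call:,.0f} for {calls:,} contacts")
print(f"The knowledge base handled {kb_hits / calls:.0f}x the telephone volume")
```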

Timing Is Important

How often university help-desk organizations look at the metrics they collect varies depending on what question they want answered. Noting that you get from your metrics what you put into them, Abbott stresses using metrics while they are fresh. Some lose their value quickly if no one acts on them, while others become meaningful only over longer monitoring intervals. Some must be watched minute by minute; others can be tracked over months.

“The longest we look at metrics is daily,” says Abbott, whose goal is to run a near-real-time environment, shift by shift.

Yale’s Grajek tracks metrics on a minute-by-minute basis on a flat-panel screen, monitoring which agents are on the telephone and how many people are on hold. Other metrics, such as call abandonment, are checked weekly; call volume is checked monthly or quarterly. “I want to see how the service is growing,” she says.

Watching certain indicators can also lead to the rollout of additional services. For example, Indiana University’s Workman used metrics last year to track users’ preferred ways of contacting the help desk, recording double-digit increases in e-mail and chat inquiries and a decline in phone calls and face-to-face support. The university introduced the chat option last year.

In relying on metrics to improve help-desk efficiency and deliver excellent customer service, help-desk managers must understand the importance of validating the metrics data. Fortunately, validation doesn’t pose a problem for the automated systems, which are self-validating: the duration of a call, who the caller is, where the call went and how it was routed are all recorded automatically. It’s a bit more difficult to validate customer-satisfaction surveys, but most organizations follow up on a percentage of survey responses.

Successful use and interpretation of metrics must also be measured. Some organizations set boundaries by using service level agreements and measure success by meeting their SLAs.
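
As a minimal sketch of what measuring against an SLA can look like, the snippet below computes the share of tickets resolved within a target window. The four-hour target and the sample tickets are invented for illustration; real SLAs vary by service and severity.

```python
from datetime import datetime, timedelta

SLA_TARGET = timedelta(hours=4)  # hypothetical target, not from any real agreement

def sla_compliance(tickets: list[tuple[datetime, datetime]]) -> float:
    """Fraction of (opened, closed) tickets resolved within the SLA target."""
    if not tickets:
        return 1.0
    met = sum(1 for opened, closed in tickets if closed - opened <= SLA_TARGET)
    return met / len(tickets)

tickets = [
    (datetime(2008, 5, 1, 9, 0), datetime(2008, 5, 1, 10, 30)),  # 1.5 hours: met
    (datetime(2008, 5, 1, 9, 0), datetime(2008, 5, 1, 16, 0)),   # 7 hours: missed
]
print(f"SLA compliance: {sla_compliance(tickets):.0%}")  # SLA compliance: 50%
```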

Some measures of success are less tangible but no less effective, experts say. There’s room for some informality in gauging how successful you are. “There’s nothing like word of mouth,” says Grajek.

When all is said and done, metrics provide insight or tell a story about help-desk performance. Knowing how to interpret the numbers and understanding the bigger picture are as essential as an ACD system.

Metric ABCs

Metrics can be tricky. If you’re not asking the right questions, you may be generating the wrong metrics. Martin Klubeck, a planning and strategy consultant with the Office of Information Technologies at the University of Notre Dame, offers some tips on how to get the numbers right:

Tip 1: If you don’t know why you’re collecting data or reporting a metric, stop.

Tip 2: Don’t chase data. Determine the question to ask, not the answer you want.

Tip 3: It’s not enough to ensure you don’t misuse data. You must also create an environment of trust around data-reporting.

Tip 4: Don’t take metrics for absolute truth. Dig deeper.

Common Pitfalls and How to Avoid Them

  1. Focusing on negative customer-survey feedback will cause you to lose perspective.
  2. Changing procedures or the way you’re doing business based on a small amount of negative feedback. Investigate complaints and evaluate feedback with a level head.
  3. Placing too much emphasis on a speedy resolution can lead to inaccurate analysis, which could result in a call back.
  4. Deploying front-line technical people who lack people skills. The goal of the help desk should be a timely, accurate and pleasant experience.
  5. Focusing on the wrong metrics may get you the wrong response.
  6. Being too granular with how calls are categorized.
  7. Focusing only on specific numbers, such as ticket counts, speed, and number of calls handled. They don’t tell the whole story.
  8. Being a slave to metrics can taint the perception of the customer experience.
  9. Failing to test the help-desk operation. Tape-recording help desk/customer interactions is a way to assess performance.
  10. Choosing to measure everything, instead of deciding what needs measuring.
Stephen Hill
