Six ways to measure the impact of your service design efforts

Written by Jules Prick
21 Dec 2017 · 11 min read

As service design increasingly finds its way into service-oriented organisations, those organisations are realising that customer experience is becoming an important business objective. As a result, companies need to be clear about the impact of service design efforts on business results.

We at Koos Service Design often see that clients have embraced the importance of a well-designed service and the impact of a superior experience, but are not always sure how to measure their design results, let alone thoroughly implement the right measures.

In short: ‘I designed a great service experience, but does it yield anything?’

In this article, I propose six possible key performance indicators (KPIs for short) to measure the effect of your service design efforts.

1. Net Promoter Score (NPS)

First KPI: the widely used Net Promoter Score (NPS), a loyalty metric developed by Bain & Company. NPS is based on a single question (how likely are you to recommend us, on a scale of 0 to 10) and yields a score ranging from -100 to 100 that represents the sentiment surrounding your services. Customers fall into one of three categories: detractors, passives or promoters. NPS is calculated by subtracting the percentage of detractors from the percentage of promoters. The higher your NPS, the more loyal your customers are.

Detractors are not satisfied with your service offering, grading your services with a 0–6, and are unlikely to do business with you again. Even worse, they can be so dissatisfied that they impede your growth by spreading negative reviews.

Passives fall somewhere in the middle, grading a 7 or 8. They are quite satisfied but not happy enough to actively recommend your service to their peers.

Promoters are customers who will actively tell their peers how great your services are. They are more likely to adopt more of your services or use them more frequently. Customers grading you with a 9 or 10 are considered promoters.

Because NPS is about overall sentiment towards your services and not about the latest interaction, companies tend to send their NPS surveys at a regular interval, such as quarterly or twice a year. Make sure to send your survey to a random sample of your customer base, not only to customers with a recent interaction. Companies like the airline Transavia and the tourism company TUI have found a relationship between an increase in NPS and net revenue, making a good case to C-level management for investing in customer experience design.

The three categories of the NPS score; detractors, passives and promoters
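To make the arithmetic concrete, here is a minimal sketch of the NPS calculation described above, using made-up survey ratings (the function name and sample data are illustrative, not from any real tool):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # NPS = % promoters minus % detractors, on a -100..100 scale
    return 100 * (promoters - detractors) / len(scores)

ratings = [10, 9, 9, 8, 7, 6, 4, 10, 3, 8]
print(nps(ratings))  # → 10.0 (4 promoters, 3 detractors out of 10)
```

Note that the absolute score matters less than its trend over time across comparable random samples.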

2. Customer Satisfaction Score (CSAT)

Where many people think NPS measures satisfaction (nope… it's loyalty), the CSAT score is the one KPI to use for satisfaction. Another important difference is that NPS is measured periodically across the board, whereas CSAT works at the interaction level, meaning CSAT questions can be asked after each service interaction. To obtain your CSAT score, ask customers how satisfied they are with their latest interaction. The rating scale can vary from a 5- or 7-point Likert scale to a 1–100% scale.

An example of a CSAT survey question

Within the CSAT survey, additional questions probe why the customer is or isn't satisfied. This means that, unlike NPS, CSAT can give you great pointers to service improvement opportunities when asked after each service interaction (not every interaction for each customer… please). Leverage this metric and use it to point you towards where your design efforts are most needed.
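CSAT is commonly reported as the share of respondents who pick one of the top two answers ("satisfied" or "very satisfied"). A minimal sketch of that convention, assuming a 5-point scale and made-up responses:

```python
def csat(responses, scale_max=5):
    """CSAT as the percentage of 'top two box' responses
    (e.g. 4s and 5s on a 5-point satisfaction scale)."""
    satisfied = sum(1 for r in responses if r >= scale_max - 1)
    return 100 * satisfied / len(responses)

print(csat([5, 4, 4, 3, 2, 5, 4, 1]))  # → 62.5 (5 of 8 satisfied)
```

Some teams instead report the mean rating; whichever convention you pick, apply it consistently so interactions remain comparable.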

Make sure you’re not improving service interactions as if they are in a vacuum. Research by McKinsey shows that improving efforts focusing on a journey level are better for increasing customer recommendation, differentiation and cold hard cash!

3. Customer Effort Score (CES)

Where NPS and CSAT are focused on creating a more enjoyable experience for customers, CES is focused on reducing customer effort. Customer effort is defined as the amount of effort a customer has to make to get a job done (like paying a bill, for example). The CES score was introduced by CEB in 2010 and popularised in the 2013 book "The Effortless Experience". Although it is often used only to measure the effort of handling an issue with customer service, tracking the customer's effort can be an important business objective for every service interaction. To determine the CES score, companies ask the following question after an interaction:

“On a scale of 1 to 5, with 5 being the highest effort, please indicate the total effort that was required by you to complete your [insert interaction here]”


The responses will provide a Customer Effort Score as follows:

Customer Effort Score = (sum of all individual effort scores) ÷ (total number of responses)
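In code, the formula above is a plain average of the 1–5 effort ratings (sample scores are made up for illustration):

```python
def ces(scores):
    """Customer Effort Score: mean of 1-5 effort ratings
    (lower is better, since 5 means highest effort)."""
    return sum(scores) / len(scores)

print(ces([1, 2, 2, 3, 1, 5, 2, 4]))  # → 2.5
```

Tracking this average per interaction type (rather than one global number) is what makes the "map it out" advice below actionable.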


Research by Oracle showed that customer satisfaction increased from 61% to 93% when customers reported low effort. Additionally, a one-point decrease in CES (a 20% decrease in your score) corresponds to a 14% increase in intent to repurchase.

If you’re thinking of using CES, I have some advice from a service design perspective;

Map it out. CES will only help you improve your service if you use the customer journey as a framework and analyse CES at the interaction level; this will identify specific points of friction and opportunities for improvement.

Ask why. The CES survey traditionally doesn't ask customers why they find a certain interaction effortless or not. By asking the why question after the rating, you will gain insight into the barriers that prevent a service interaction from being easy and effortless.

Make it objective. Where CES surveys the effort customers perceive, you can also track objective measures, such as task success rate, time on task or user error rate. These will objectively show you where customers have difficulty completing their task or interaction and where they run into barriers.

4. Customer Lifetime Value (CLV)

More financially focused measures can also be linked to service design efforts. We recently executed a project for JustLease, using service design methodologies and design sprints to increase customers' financial value, aiming for an increase in service revenue.

The key metric for financial value is Customer Lifetime Value (CLV), a prediction of the net profit attributed to the entire future relationship with a customer. Many business KPIs contribute to a high Customer Lifetime Value, such as margin, cross-sell rate, upsell rate, acquisition cost and rate, retention rate and churn rate.

Research has shown that 55% of all customers are willing to pay more for a superior experience, which enables higher margins. Customer retention is cheaper than customer acquisition, and a great customer experience is proven to drive loyalty and increase retention. Better cross- and upselling can be achieved through a service-design-driven approach, as the JustLease case showed.


How CLV is calculated differs per organisation and can range from a simple, crude heuristic to a complex model. Either way, it shows the impact of service design efforts in hard bucks.
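As an example of the "crude heuristic" end of that range, here is one common textbook formula (not the article's own method, and the numbers are invented): yearly margin per customer multiplied by a retention-and-discount factor that approximates the expected remaining relationship.

```python
def clv(margin_per_year, retention_rate, discount_rate=0.10):
    """Simple CLV heuristic: discounted expected margin over the
    customer relationship, margin * r / (1 + d - r)."""
    return margin_per_year * retention_rate / (1 + discount_rate - retention_rate)

# A customer worth 200 in yearly margin, with 80% retention:
print(round(clv(margin_per_year=200.0, retention_rate=0.8), 2))  # → 533.33
```

The formula makes the business levers visible: raising retention from 0.8 towards 0.9 (something a better experience can drive) increases CLV sharply, which is exactly the link between design effort and financial impact this section argues for.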

5. Cost to Serve (CtS)

This metric focuses on the costs incurred to serve a specific customer, which might include customer-facing staff, sales reps or internal system costs. A typical client request we often get related to CtS is "We need to reduce the number of calls to Customer Service". Requests related to digitalisation or the design of self-service capabilities are other ways service design can reduce serving costs.

A simple service design principle to reduce costs is co-creation. When co-creating, we put people from different departments and areas of expertise together in a room and start thinking about the service as a whole. This often results in the departments realising that if they work together, they can offer the service more effectively and reduce costs by doing so.

Service Blueprinting for KLM Airlines

The #1 tool for finding cost reductions from a service perspective is the Service Blueprint. By mapping out the internal processes needed to serve the customer well, internal pain points and opportunities for more efficient serving can be identified. It is a great help in identifying where the biggest expenses sit in the process of serving customers and how they might be reduced.

Although some designers might tell you that service design is not meant to reduce costs but to improve customer experience, sometimes the improvement of customer experience in itself can cause a reduction in serving costs. The American telco Sprint has stated that by improving the customer experience, they’ve managed to reduce their customer care costs by as much as 33%, according to HBR.


6. Time to Market (TtM)

A KPI for design impact that is often overlooked is Time to Market. With the pace of new products and services increasing and competitors lurking to copy your services, the pace at which services can be brought to market is often an important differentiator. By using service design and Google's design sprint methodology, we helped Aegon (an insurance company) shorten their development time from 9–12 months to having a first MVS (Minimum Viable Service) in the market after just 13 weeks. Service design can contribute to a faster Time to Market in three ways:

  1. Gaining insight. A global management consultancy states that 48% of R&D budget is wasted in part because of weak insights, meaning what’s developed is not what customers want. Design research methodologies can give you deep customer insights and make sure you focus on identifying true customer needs in an early stage of the innovation process.
  2. Alignment. Using service design at a strategic level and looking at innovations or service propositions from a customer perspective creates better alignment between departments. This results in all departments knowing where to focus, bringing more focused innovations to market in a more timely manner.
  3. Early validation. By creating quick prototypes, you can validate your biggest assumptions with real customers early in the design process. This not only makes it easier to adjust your course, but also raises your success rate. As David Kelley likes to say: "Fail fast to succeed sooner."

To conclude

If you are still reading this article, I have a hunch you are interested in implementing design impact measures in your company. Some tips:

  • Compare. If you want to measure the impact of your design efforts, you'll need to compare the status quo with the new situation. So make sure you have some existing data/KPIs to compare with.
  • Ask why. Make sure to not only measure data, but also ask customers the why question. It is not the hard data but customers' explanations that will give you the best pointers for innovation. Data alone won't cut it.
  • Don’t just measure. Data alone is nothing — you need to act upon it. Research shows that 95% of companies say they regularly collect customer feedback, but only 29% of companies systematically incorporate customer insights into their decision-making processes.



Good luck.
