Chat benchmarks insights from the insurance industry

Today, 80% of insurance customers would use digital channels for different tasks and transactions. Meanwhile, the share of digitally active insurance customers has increased by more than 60% in the last four years.

The insurance industry is transforming, with more and more consumers accessing and engaging with their insurance providers online. It is against this backdrop that live chat software is soaring.


But simply tacking a live chat option onto your insurance website is not enough to impress your digital audience. Chat is a strategic cornerstone of modern insurance communications, and it should be implemented with due consideration.


With that in mind, we’ve conducted a chat benchmarks study comparing two of our insurance customers. It focuses on two implementation approaches: quick chat self-deployment versus a planned, consultative chat project.


As you will see, those different approaches lead to divergent results.


Methodology

All data was collated between Jan 2019 and Jan 2020, providing a recent sample from real insurance providers. Backing this data is over 15 years of live chat experience from the Parker Software team.

We’ve spotlighted two customers in a direct comparison exercise.

◘ Customer A: came to us as they wanted a more cost-effective channel and a reduced reliance on the telephone

◘ Customer B: came to us as they wanted to increase customer satisfaction through chat, as well as to improve their omni-channel presence and increase first-time resolutions

Although both insurance companies chose WhosOn as the best-fitting live chat solution for their needs, they opted for contrasting deployments.

Customer A

◘ Is now one year into their contract

◘ Bought out-of-the-box software licences rather than a tailored solution, with no professional services

◘ Has recently begun working with Parker Software consultants to improve standards and drive the effective use of data

Customer B

◘ Is now in the third year of their contract

◘ Opted for a solution implemented with the use of Parker Software consultants, for a bespoke best practice model

◘ Fine-tuned the solution year on year to improve standards, using data from reports

◘ Provides bi-annual training (one session internal, one from Parker Software) for any staff who have started within a 12-month period

Chat expectations

In our research, we have focused on typical insurance chat scenarios. It is our experience that, as standard in your industry, service levels are set around the following goals:

◘ Increasing customer satisfaction and loyalty

◘ Developing and engaging brand advocates

◘ Meeting and exceeding customer expectations

◘ Improving NPS scores

The data presented hereon is centred on these goals and expectations. We hope you can use it to progress with improved insight and confidence.


Findings

Service levels are usually set via average speed of answer and abandonment rate (i.e. chats missed or not answered). However, they will be set by each individual customer's requirements, so if yours differ, we can adjust accordingly.
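As an illustration, both metrics can be derived from a simple log of chat requests. This is a minimal sketch in Python – the data structure and function names are our own, for illustration only, not part of WhosOn:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChatRequest:
    """One incoming chat; answered_after is seconds to answer, None if missed."""
    answered_after: Optional[float]

def service_levels(requests: list[ChatRequest]) -> tuple[float, float]:
    """Return (average speed of answer in seconds, abandonment rate as a %)."""
    answered = [r.answered_after for r in requests if r.answered_after is not None]
    avg_speed = sum(answered) / len(answered) if answered else 0.0
    abandonment = 100 * (len(requests) - len(answered)) / len(requests)
    return avg_speed, abandonment

# Example: three answered chats and one missed chat
sample = [ChatRequest(4.0), ChatRequest(6.0), ChatRequest(5.0), ChatRequest(None)]
avg, abandoned = service_levels(sample)
# avg -> 5.0 seconds, abandoned -> 25.0 %
```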


What do average response rates look like?

For customers setting standards for the time taken to answer chats, internal factors must be considered. Variables include:

◘ Agent skillsets / knowledge

◘ Number of chats available to answer

◘ Agent availability

Naturally, each variable affects service levels. So, how do our customers compare in combatting these variables when it comes to average response rates?

Customer results

Using the insurance industry data as a benchmark we can see:

◘ Customer A averages 20.9 seconds to answer a chat  

◘ Customer B averages 5.2 seconds to answer a chat

Explaining the gap

Perhaps unsurprisingly, Customer B performs better than their out-of-the-box counterpart. Customer B has worked with us for three years, refining their chat through quarterly reviews and ongoing consultancy.

This partnership has resulted in the implementation of our recommended best practices, such as using auto-accept, skills-based routing and queuing. So, while there are variables, our live chat experience has shown Customer B effective ways to control them.

The methods used

To help Customer B control the variables they were facing, we:

◘ Helped set up agent skillsets and chat routing, accompanied by employee training

◘ Worked with the customer on dynamic invites, ensuring that chat is offered at the optimum time in the user journey

◘ Used quarterly review sessions to explore WhosOn analytics and help the customer understand their chat volumes and peaks, ensuring they have enough agents and licences to respond

The impact

Live chat offers a clear, compelling benefit: it’s live. Customers turn to this channel with the expectation of immediacy, and the speed of the support received is a highly influential factor to their satisfaction.

If a customer does have to wait, providing a clear idea of their wait length helps manage expectations and reduce frustration. So, even in peak periods, the use of queuing with relevant wait messages can help keep customer satisfaction high.


What is an average abandonment / missed chat rate?

A missed chat is a missed opportunity. Businesses deploying chat want to avoid its abandonment, but again, this requires consideration. Variables include:

◘ Number of chat agents available

◘ Customer willingness to wait

◘ Whether the chat button is available 24/7

◘ Offline messaging out of hours, or when agents are unavailable

Again, each variable affects service levels. So, how do Customer A and Customer B compare when it comes to missed chat rates?

Customer results

Using the insurance industry data as a benchmark we can see:

◘ Customer A has 29% missed chats

◘ Customer B has 6% missed chats

Explaining the gap

Once more, Customer B outstrips their insurance competitor. Although there are variables related to live chat software usage, Customer B has worked with us to better manage them.

Customer A, for example, has an ever-present chat button on their website. It is probable that their abandon rate is so high due to customers trying to chat when no agents are available. Analysis of their reporting would help address this problem, and Customer A has since approached us for consultation on this area.

Customer B, on the other hand, has been working with Parker Software since their chat project began. Taking advantage of our knowledge, they have achieved a missed chat rate more than 10 percentage points better than the average.

The methods used

Although Customer B has more concurrent operators available than Customer A – with 30 compared to 20 – they also manage operator schedules more effectively.

Customer B follows the best practice advised as part of their implementation package with Parker Software. As part of this, they have:

◘ Set rules for chat button visibility

◘ Incorporated offline messages outside of opening hours

◘ Added the option for customers to leave a message

The impact

By taking more chats, Customer B supports more customers and capitalises on more conversion opportunities. By tailoring their chat channel based on availability, they ensure that customers have options left open to them, rather than waiting in an unsatisfying – and endless – queue.


What is the average number of concurrent chats?

A key benefit of using live chat software is the ability to talk to more than one customer simultaneously. However, in the complex insurance environment, it is important to balance this multi-tasking capacity with quality care. With that in mind, variables for concurrent chats include:

◘ Resource planning for agents

◘ Number of chat agents

◘ Chat drivers

◘ Queues  

◘ Agent skillsets

Every variable has a different impact on chat concurrency, and should be considered accordingly. So how do Customer A and Customer B stack up?

Customer results

Using the insurance industry data as a benchmark we can see:

◘ Customer A has no limit to concurrent chats, and averages 4 chats per agent concurrently

◘ Customer B has a limit of 3 concurrent chats, and averages 2 chats per agent concurrently

Explaining the gap

In our experience, consideration must be made for the agent’s skillset and the type of chats taken. While simple tasks suit 2-3 concurrent chats, running through a complex policy may not.

For the insurance industry, it is important to bear this in mind when setting the maximum number of concurrent chats allowed. Customer A has taken no steps towards introducing limits, whereas Customer B has placed a firm focus on chat quality and limited concurrency.

The methods used

As part of our consultative approach with Customer B, we:

◘ Implemented auto-accept rules to ensure that chats are taken in a timely fashion and distributed evenly

◘ Ensured that requests are routed to the right agent with the relevant skillset and from the relevant team

◘ Helped keep responses speedy with accurate canned responses
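To illustrate the idea behind skills-based routing with auto-accept and concurrency limits, here is a simplified sketch. It is illustrative only – WhosOn configures routing within the product rather than in code, and the agent names, skills and limits below are hypothetical:

```python
from typing import Optional

def route_chat(topic: str, agents: list[dict], max_concurrent: int = 3) -> Optional[dict]:
    """Pick the least-loaded available agent whose skills cover the chat topic."""
    candidates = [
        a for a in agents
        if topic in a["skills"] and a["active_chats"] < max_concurrent
    ]
    if not candidates:
        return None  # queue the chat, or fall back to a backup routing rule
    best = min(candidates, key=lambda a: a["active_chats"])
    best["active_chats"] += 1  # auto-accept: assign immediately, no manual pick-up
    return best

# Hypothetical agent pool
agents = [
    {"name": "Asha", "skills": {"claims", "renewals"}, "active_chats": 2},
    {"name": "Ben",  "skills": {"claims"},             "active_chats": 1},
]
assigned = route_chat("claims", agents)
# assigned -> Ben (fewest active chats among skilled, available agents)
```

The concurrency cap mirrors the limit discussed later in this document: an agent at the maximum simply stops receiving new chats until one closes.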

The impact

Key studies have been performed in the concurrent chat area. While results differ based on chat complexity, they have routinely shown that two concurrent sessions per agent provides the best balance between productivity and customer satisfaction.

In setting limits on the number of concurrent chats enabled, whilst also making effective use of canned responses and routing, Customer B has empowered their agents to guide multiple customers through chats – without ever compromising service quality.


How many chats per hour, per agent?

A quality chat implementation means little if there is limited channel uptake. The number of chats per hour, per agent is a key metric for anybody deploying chat, and variables include:

◘ Chat volume

◘ Agent skillset

◘ Products

With their differing approach to chat, how do Customer A and Customer B compare?

Customer results

Using the insurance industry data as a benchmark we can see:

◘ Customer A has 11 chats per hour on average

◘ Customer B has 13 chats per hour on average

Explaining the gap

Customer B has followed our best practice guidelines, from initial implementation to the present day, to keep chats flowing. Due to this, they have been able to handle more chats than their competitor – even with their limit on chat concurrency.

Customer A, in comparison, is slightly behind both in terms of their industry and in general. Overall from the data across our hosted base, the average number of chats per hour was 12.1. Whilst by no means ineffective, Customer A’s instant deployment is proving less successful than the strategic, long-term plan followed by Customer B.
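For reference, the chats-per-hour figure itself is a simple calculation. A quick sketch, using hypothetical shift figures chosen to match Customer A's reported average:

```python
def chats_per_hour_per_agent(total_chats: int, agents: int, hours: float) -> float:
    """Average chats handled per hour, per agent."""
    return total_chats / (agents * hours)

# Hypothetical shift: 20 agents handling 1,760 chats over an 8-hour day
rate = chats_per_hour_per_agent(1760, 20, 8)
# rate -> 11.0 chats per hour, per agent
```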

The methods used

To help Customer B achieve their high average of chats per hour, per agent, we:

◘ Provided consultancy on creating a fluid chat launching process, from button to window to auto-accept rules

◘ Worked with Customer B to set routing rules and backup routing rules, to create a manageable flow of chats directed to the best-placed agents

◘ Assisted with the creation of a library of canned responses and their ongoing analysis, helping Customer B to optimise via reviews and feedback

The impact

Simply, being able to take more chats per hour, per agent means enormous cost-savings for any company. Providing chat at the right time, to address the right issues, helps drive chat volume and reduce the reliance on other more expensive channels.

Even though it was Customer A who purchased chat with the main goal of saving money, it is Customer B who has achieved that through taking a more considered approach to chat.


Chat quality

While these service levels are a good indicator of chat usage, they don’t necessarily indicate the quality of service provided.  For a quality benchmark, 3 factors are usually looked at:

◘ NPS

◘ CSAT

◘ First time resolution

For a more thorough examination, we have also collected data across these key quality areas.


How to increase NPS scores?

The Net Promoter Score (NPS) is a management tool used to measure the loyalty of customers, and take in quick, reliable feedback. Variables include:

◘ Product

◘ Price

◘ Customer service

◘ Staff turnover

◘ Value
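For clarity, the NPS calculation itself is standard: the percentage of promoters (scores of 9-10 on the 0-10 question) minus the percentage of detractors (scores of 0-6). A quick sketch with hypothetical survey responses:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical post-chat survey responses to the 0-10 NPS question
responses = [10, 9, 9, 8, 7, 10, 9, 3, 6, 9]
score = nps(responses)
# 6 promoters, 2 detractors out of 10 responses -> NPS of 40.0
```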

So, with their contrasting performance in terms of service levels, how do Customer A and Customer B compare in terms of NPS?

Customer results

Using the insurance industry data as a benchmark we can see:

◘ Customer A has a 45% NPS

◘ Customer B has a 63.1% NPS

Explaining the gap

Customer B’s live chat users are more likely to recommend their service and exit feeling satisfied than Customer A’s chat users. The reasons for that, in all probability, are those outlined in the service level findings.

Customer A chose a speedy chat deployment, rather than a strategic chat partnership. Customer B opted for a solution approach rather than a simple software download, and as a result has implemented chat with greater care, and with every need catered for.

The methods used

Customer B made full use of a partnership with Parker Software to drive this high NPS score. For example, we:

◘ Consulted Customer B on reaching out to visitors and asking them for feedback after they finish a chat

◘ Assisted with a custom post-chat survey followed by an NPS question, to gather feedback that can be collated as part of their overall company score

◘ Collaborated on the implementation of regular training for new starters, to help ensure that NPS scores do not drop due to inferior performance in event of any staff turnover

The impact

For Customer B, the NPS feedback from chat is a great way to analyse their promoters to gain reviews. Overall, it has helped the company improve their products. 

In terms of customer satisfaction, Customer B is out-performing its insurance competitor and creating loyal, well-supported customers. Their revenue can only increase as a result.


How to improve CSAT scores?

The CSAT score puts a numerical value on customer satisfaction. For companies using live chat software, it is an essential way to quantify customer happiness. Variables include:

◘ Product

◘ Price

◘ Customer service

◘ Staff turnover

◘ Value
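As with NPS, the underlying calculation is simple: CSAT is typically the percentage of ratings at or above a "satisfied" threshold. A quick sketch using hypothetical ratings on a 1-5 scale (the scale and threshold are common conventions, not prescribed by the source data):

```python
def csat(ratings: list[int], satisfied_threshold: int = 4) -> float:
    """CSAT: percentage of ratings at or above the threshold on a 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return 100 * satisfied / len(ratings)

# Hypothetical 1-5 post-chat satisfaction ratings
ratings = [5, 4, 4, 3, 5, 2, 4, 5]
score = csat(ratings)
# 6 of 8 ratings are 4 or 5 -> CSAT of 75.0
```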

Once more, we used this metric to compare Customer A and Customer B.

Customer results

Using the insurance industry data as a benchmark we can see:

◘ Customer A is running at 75.02% CSAT

◘ Customer B is running at 84.01% CSAT

Explaining the gap

In the first instance, Customer A has fewer agents than Customer B. This could well contribute to their lower score, as with fewer agents they also have longer wait and resolution times.

However, this is by no means the only factor. Year on year, Customer B has looked at the analytics in their reviews. We have worked with them on reporting to understand when they need to increase their licences to account for volume, as well as ensuring they are using chat to its best ability.

Although Customer A is now in the process of working with our consultants to see where they can improve the chat experience and help their agents make best use of features, they are behind in doing so. For Customer A, there is now work to be done in proving that chat service has improved, and that it’s not easier for the customer to just pick up the phone.

The methods used

With Customer B, our recommendations were implemented at the start. These include key CSAT quality drivers such as:

◘ Chat queuing rules and queue messages to manage expectations

◘ Automatic skill routing to ensure relevant support

◘ Auto-accept to minimise waiting

◘ Canned responses to ensure speed and efficiency

◘ Ongoing assessment of reviews, to continually optimise performance

◘ Agent training, to ensure helpful, friendly service through chat

The impact

In an increasingly demanding customer service environment, the onus is on you to keep raising the bar in terms of the service you provide, to consistently meet and exceed expectations, and to provide the best products year on year.

In having such a high CSAT score – higher than the 79.5% average across our hosted base – Customer B has helped future-proof its company against customer attrition.


How to achieve first time resolution?

First time resolution goes a long way towards appeasing frustrated customers, and creates a quick, frictionless experience for live chat users. Variables include:

◘ Staff knowledge

◘ Products

With first time resolution as a metric, we compared Customer A and Customer B.

Customer results

Using the insurance industry data as a benchmark we can see:

◘ Customer A does not report on first time resolution

◘ Customer B is running at 70.7% first time resolution

Explaining the gap

At present, Customer A is failing to record their first time resolution rates, and has no way of scoring their chat in terms of its effectiveness in this area. Again, this is largely down to a rapid deployment with minimal prior planning, and without consultation.

Customer B used our expertise concerning customer tracking to identify whether chat users were turning to other channels within 48 hours. As a result of this, they have been able to pin down their first time resolution rate, and it is once again higher than the average of 70.1%.
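The 48-hour tracking approach can be sketched as follows. This is illustrative only: the function and data shapes are hypothetical, and a real implementation would join chat records with call and email logs:

```python
from datetime import datetime, timedelta

def ftr_rate(chats: list[tuple[str, datetime]],
             follow_ups: list[tuple[str, datetime]],
             window: timedelta = timedelta(hours=48)) -> float:
    """% of chats where the same customer made no further contact, on any
    channel, within the window. Inputs are (customer_id, timestamp) pairs."""
    resolved = 0
    for cust, ts in chats:
        recontacted = any(
            c == cust and ts < t <= ts + window for c, t in follow_ups
        )
        if not recontacted:
            resolved += 1
    return 100 * resolved / len(chats)

# Hypothetical data: one of two chat customers phoned back the next day
chats = [("c1", datetime(2020, 1, 6, 10)), ("c2", datetime(2020, 1, 6, 11))]
calls = [("c1", datetime(2020, 1, 7, 9))]
rate = ftr_rate(chats, calls)
# c1 recontacted within 48 hours, c2 did not -> 50.0% first time resolution
```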

Methods used

To help Customer B measure and achieve this first time resolution rate, we:

◘ Helped Customer B analyse their chats and post chat surveys for feedback

◘ Identified areas where chat could help, working with the customer continually to improve resource planning and online optimisation around dynamic invites

◘ Through implementation and review, consulted Customer B on best practice use of available features, including canned responses, file transfer and guided web journeys

The impact

73% of customers say that valuing their time is the most important thing that a company can do to provide good customer service. This means, simply, that they don’t want to have to keep returning to a brand to get a resolution. Customers expect effective service interactions, with their question answered or their issue fully resolved without further follow-up.

Customer B can track and optimise their success in this area, where Customer A cannot. So, while Customer B is persistently identifying opportunities to improve, Customer A has not yet even begun their measurements.


Take your next steps with access to insights

We have compiled this benchmarking document to help you move forward with all possible knowledge. The data is real, the customers are real, and their respective setbacks and successes are real.

With access to these authentic experiences and results, we hope that you will be able to take a more informed approach to a live chat implementation. Naturally, our advice is to take the path of Customer B.

So, to achieve similar results for your own insurance chat project, get in touch with our experts today.


Find this post useful? You can also download it as a white paper.