Today’s contact centers need to revisit core assumptions about how they measure agent performance. Shifts in the business conditions that shape agent engagement raise new questions about whether traditional performance models can address the more complex customer needs that have taken center stage in recent years.
In many respects, the nature of the agent’s job has changed. Instead of being tied to a physical center, many are now working remotely, meaning they are more at risk of being disconnected from their peers and supervisors, who in turn have less visibility into agents’ performance. This happened suddenly, at the onset of the pandemic.
Another change has happened more slowly but is just as meaningful: the transition to digital-first interactions, often originally handled by self-service and then escalated to agents. These interactions can be more complex and less scriptable than traditional voice calls. Customers come to these interactions with more preparation and knowledge. People on both sides of the interaction are now more likely to be using tools that were unavailable even five years ago: video calls and collaboration systems. The result has been an overall increase in the variety and complexity of contact center work.
By 2024, two-thirds of contact centers will increase budgets for training and coaching due to the rigors of managing work-from-home agents and the increasing complexity of agented interactions.
Organizations are also revisiting strategies for recruiting and engaging employees. Post-pandemic, it has been harder to hire and retain agents, although this is likely a temporary situation that will return to normal in 2023. What may be permanent, though, is a change in the mix of qualities agents need to perform well. That affects both the hiring pipeline and the training/incubation period, and it shapes how supervisors motivate and manage staff. Agents have always been expected to display empathy and social skills, but the growing variety of customer queries has raised the bar for those qualities. More complex interactions, coupled with greater reliance on self-service, mean that customers are in a different frame of mind when they finally reach an agent. Agents need to be sensitive to a higher level of customer frustration and trained in the skills needed to de-escalate situations. They also need to be well-versed in the internal resources available for solving problems. Agent training has to build problem-solving skills, because the interactions that make it past self-service and automation are likely to take longer to resolve and to matter more to the customer.
Organizations are also beginning to look to contact centers to contribute to revenue efforts. Agents are starting to be screened for (and trained in) sales awareness skills like recognizing potential upsell opportunities. As this sales-centric thinking percolates across the industry, centers will have to cope with the tension between the mission to control costs and the very real need for resources to invest in skills and the hiring pipeline. For many firms, the pandemic was a black swan event that made it possible to invest large sums in contact centers in order to keep operating. The open commitment to resources that was available in 2020 has been waning into 2022 and will likely be gone by 2023.
These shifting business conditions make plain the gap in how centers measure performance. Contact centers usually measure and reward agents for speed and quantity: more calls, shorter calls, faster resolutions, fewer transfers and handoffs. Unfortunately, while those metrics are great at keeping a service operation within expected norms, they don’t tell a clear story about the added interaction complexity. Calls that make it through the self-service gate are harder, so they last longer, which pulls down the agents’ stats on speed. Bringing in other team members for help can also work against traditional performance measures.
Standard metrics also sidestep the question of how much agent performance affects outcomes and revenue. Agents are rarely tracked by how they influence revenue, customer value, customer longevity, or other measures of a customer’s impact on the organizational bottom line. This calls for new performance standards that are better aligned with company goals.
Performance measurement is also changing due to advancements in the underlying technologies used to manage agents. For example, systems with embedded artificial intelligence and machine learning are now starting to evaluate 100% of agent interactions, rather than the 3%-5% that is standard practice in manual supervisor quality reviews. There is evidence that agents perceive this as a fairer system because they know everyone is being tracked using the same large pool of interactions. It allows managers to zero in on more precise coaching and training based on a bigger picture view of real conditions. Over time, it is likely that this will improve agent morale and lower attrition.
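The sampling gap described above can be made concrete with a short, purely illustrative sketch. The figures below (a 4% manual sample against a pool of 2,000 interactions, and the quality scores themselves) are invented for illustration; no specific vendor system is being modeled.

```python
import random

random.seed(7)

# Hypothetical pool of one week's interactions, each with a quality score.
interactions = [{"id": i, "score": random.gauss(80, 10)} for i in range(2000)]

def manual_review(pool, rate=0.04):
    """Supervisor QA: score only a small random sample (3%-5% is typical)."""
    sample = random.sample(pool, int(len(pool) * rate))
    return sum(x["score"] for x in sample) / len(sample)

def automated_review(pool):
    """AI-assisted QA: every interaction is scored, so the estimate
    reflects the full pool rather than a thin sample."""
    return sum(x["score"] for x in pool) / len(pool)

print(f"manual sample covers {int(len(interactions) * 0.04)} interactions")
print(f"automated review covers {len(interactions)} interactions")
print(f"manual estimate:        {manual_review(interactions):.1f}")
print(f"full-coverage estimate: {automated_review(interactions):.1f}")
```

The point of the sketch is simply that a 3%-5% sample judges an agent on a few dozen interactions, while full-coverage scoring evaluates everyone against the same complete record, which is what underpins the perceived fairness noted above.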
Another technology advance that affects performance (and hence, how one defines success) is the ability to use automation (AI again) to deliver assistance and suggested guidance in real time, during the interaction. Current tools are more sophisticated and capable than old-style decision-tree suggestions that don’t incorporate an interaction’s or customer’s context into the advice. This mitigates the problem of increasing complexity by allowing agents with lower skill levels to continue to serve higher-value customers even in more high-stakes situations.
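The difference between old-style decision-tree suggestions and context-aware guidance can be sketched as follows. This is a minimal illustration, not any vendor's API: the `InteractionContext` fields, thresholds, and suggestion strings are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class InteractionContext:
    # Hypothetical signals a real agent-assist platform might supply.
    customer_tier: str      # e.g. "standard" or "premium"
    sentiment: float        # -1.0 (frustrated) .. 1.0 (satisfied)
    prior_contacts_7d: int  # repeat-contact count in the last week
    topic: str

def static_suggestion(topic: str) -> str:
    """Old-style decision tree: the same tip for every customer on a topic."""
    return {
        "billing": "Offer to review the latest invoice.",
        "shipping": "Provide the standard tracking link.",
    }.get(topic, "Follow the standard script.")

def contextual_suggestion(ctx: InteractionContext) -> str:
    """Context-aware assist: the same topic yields different guidance
    depending on who the customer is and how the interaction is going."""
    if ctx.sentiment < -0.3 and ctx.prior_contacts_7d > 1:
        return "De-escalate first: acknowledge the repeat contact before troubleshooting."
    if ctx.customer_tier == "premium" and ctx.topic == "billing":
        return "Apply the premium-retention guideline before discussing charges."
    return static_suggestion(ctx.topic)

ctx = InteractionContext("premium", -0.6, 3, "billing")
print(static_suggestion(ctx.topic))   # generic tip, same for everyone
print(contextual_suggestion(ctx))     # guidance shaped by the live context
```

The design point is that the contextual function receives the interaction state, so a frustrated repeat caller and a routine first contact get different advice on the same topic, which is how such tools let less experienced agents handle higher-stakes situations.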
Agent assistance tools are still found predominantly in enterprise-level organizations rather than in mid-market or small- to medium-sized firms. To make inroads into those centers, vendors should make a stronger case for the overall value of automated guidance in very practical terms, such as agent performance improvements. Smaller organizations need assurance that these tools can be used by line-of-business teams without IT involvement, and that they are simple to configure and do not require complex training.
Contact centers are also taking another look at gamification, which had a popular moment a decade ago when it first became available in automated systems. At the time, adoption was thin because gamification was provided by niche software that had to be integrated into existing performance management systems. Today, it is more likely to be found as a built-in component of performance management and agent management platforms.
Evidence suggests that gamification does have an effect on performance in certain circumstances, for certain types of metrics. But there are lingering questions about its value and purpose. Managers need to have a clear view of what they expect it to accomplish. Is it better at making people work harder and do more? Or is the goal to make the process of what an agent does more pleasant and less monotonous? Most managers probably land somewhere in the middle, and use gamification to focus on a specific skill or goal they want an agent or team to lean into for a specific period of time.
It is likely that as centers turn to more outcome- and revenue-based metrics, they will find gamification an interesting way to encourage sales-related thinking in agents. We expect to see it used in conjunction with coaching, specifically aimed at helping people identify upsell or cross-sell opportunities, or at making them more comfortable acting on those moments. And in today’s decentralized, “work from anywhere” world, it may be an effective way of building team identity, even if it is not used to boost particular metrics.
Measuring agent performance is now more precise than ever, but also has to take into account a wider range of personal and professional success factors. Identifying the specific qualities needed in a particular center at a particular time is now a harder job than measuring speed and accuracy. Fortunately, there are technological and managerial tools and techniques that can help decision-makers staff centers with a healthy mix of skills while creating an environment that limits turnover and agent dissatisfaction.