Before predictive analytics became a buzzword for businesses around the globe, marketing teams had been improving their targeting with predictive models for more than a quarter of a century. Predictive models and methodologies have been used effectively for a long time, especially in risk management. Many businesses have seen the benefits of predictive analytics for decades and have continued to test the quality of these tools and methodologies.
However, many organisations are understandably cautious about the quality of their predictive models and the tools used to develop them. Banks, as one example, are constantly audited on their results, but with predictive analytics continually expanding to support more important decisions, scrutiny is warranted in all industries.
We all know this area comes with challenges. Data about customers has not always been so abundant (so big), and it is only recently – despite the early rise of predictive analytics in this area – that high-quality quantitative analysis has been properly supported by trends in data warehousing and other data management functions. Another barrier to adoption has been skepticism about the notion that many decisions about customers can largely be automated, often paired with an unwarranted optimism in the alternative: the human mind.
The failings of the human mind
The human mind has difficulty grasping and acting on complex, non-obvious patterns. When hundreds of customer attributes are intertwined in highly complex relationships – particularly if those relationships are non-linear – the human brain will either draw a blank or make things up. We like simple patterns, but the world is not very simple anymore. We love to infer, for instance, that a high income correlates with a particular kind of behaviour. Or that drinking green tea prevents disease. Simple things. But making $150,000 a year in country NSW means something different from earning that same income in Sydney’s Point Piper. Now add a few hundred other dimensions. Naïve segmentation can be dangerous. Unfortunately, that same human brain is not always aware of its limitations, and our species is typically far too optimistic about its analytical powers.
Our brain often gets tricked into seeing a pattern that is simplistic or not really there. A human risk adjudicator may correctly perceive that a particular broker comes with a higher overall bad rate (the percentage of loans that default), but is unlikely to pick up on the fact that the broker’s performance is above par for a very specific customer segment. A good model would spot and exploit those subtleties.
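The broker example above can be made concrete with a few lines of code. The brokers, segments, and outcomes below are entirely invented for illustration; the point is only that an overall bad rate can hide a segment where the "worse" broker actually outperforms:

```python
from collections import defaultdict

# Invented loan records for illustration only: (broker, customer_segment, defaulted)
loans = [
    ("A", "salaried", True),  ("A", "salaried", True),
    ("A", "salaried", False), ("A", "salaried", False),
    ("A", "self-employed", False), ("A", "self-employed", False),
    ("B", "salaried", False), ("B", "salaried", False),
    ("B", "salaried", False), ("B", "salaried", False),
    ("B", "self-employed", True), ("B", "self-employed", False),
]

def bad_rate(records):
    """Fraction of the given loans that defaulted."""
    return sum(defaulted for _, _, defaulted in records) / len(records)

# Overall, broker A looks worse than broker B...
overall = {b: bad_rate([r for r in loans if r[0] == b]) for b in ("A", "B")}
print(overall)  # A: ~0.33, B: ~0.17

# ...but within the self-employed segment, A actually outperforms B.
segments = defaultdict(list)
for record in loans:
    segments[(record[0], record[1])].append(record)
print(bad_rate(segments[("A", "self-employed")]))  # 0.0
print(bad_rate(segments[("B", "self-employed")]))  # 0.5
```

A human adjudicator glancing at the overall numbers would simply avoid broker A; a model fitted on segment-level data would keep routing self-employed applicants to A.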
Another challenge is that human operators, across all industries, are often set in their ways, especially if the process they use has been even somewhat successful. As a consequence, patterns that seemed to work reasonably well in the past continue to be used even when performance is dropping. Worldly knowledge not captured in the data – a factor that potentially supports human judgment – can therefore easily work against it. Intuition, in other words, is not only overrated; it has a fairly limited shelf life.
Challenges with predictive models
Now that predictive models are widely used, the challenge shifts to making sure they are used where and when they matter most. It is not enough to apply predictive analytics offline and calculate probabilities for a certain type of customer behaviour; the models need to be executed in real time to leverage contextual information.
The problem with scripted conversations in the call centre, or any other bi-directional channel, is that they largely ignore the customer on the other end of the line. Even if the most sophisticated analytics have been used to find a relevant offer for Mrs. Smith, that script goes right out of the window when she contacts a customer service representative with a complaint or a failed ATM withdrawal.
Conversations are not just about telling or showing the customer or even about advising them; they are also about listening and responding in ways directly relevant to what is being discussed. If a customer rejects an offer, is in a bad mood, calls in with a complaint, or has just seen a check unexpectedly bounce, it will affect the conversation in ways that can’t be completely scripted in advance, much like a chess game can’t be played by looking up the next move in a table.
What is required is next-best-action, real-time decision making that inserts itself into the conversation every single time something changes: when the customer is acting, reacting, or getting upset. Any such change should trigger a re-evaluation of the state of the conversation, and possibly a change of tack. Imagine the following conversation:
“How is your wife doing?”
“We just got divorced.”
“Is she still with that bank?”
This response just doesn’t sound right, does it?
To respond in relevant ways during a conversation, a decision engine cannot just re-apply its rules; it also needs to re-execute any predictive models referenced by those rules. To carry on a decent conversation and take customer input seriously, many rules and many predictive models have to be continuously applied in real time. Anything short of that is just being rude.
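The re-evaluation loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration – the score, threshold, and action names are invented stand-ins, not any vendor’s API – showing why the model must be re-executed on every change rather than scored once up front:

```python
# Minimal sketch of next-best-action decisioning (all names are hypothetical).

def churn_score(context):
    # Stand-in for a real predictive model, re-executed on every call
    # so the score reflects the current state of the conversation.
    return 0.9 if context.get("sentiment") == "upset" else 0.2

def next_best_action(context):
    # Rules reference freshly computed model scores, never a cached one.
    if churn_score(context) > 0.5:
        return "resolve the complaint before selling anything"
    if context.get("offer_rejected"):
        return "present an alternative offer"
    return "present the primary offer"

conversation = {}
print(next_best_action(conversation))   # present the primary offer
conversation["sentiment"] = "upset"     # the customer raises a complaint mid-call
print(next_best_action(conversation))   # resolve the complaint before selling anything
```

Had the engine scored Mrs. Smith only before the call, it would still be pushing the primary offer while she is complaining – the scripted-conversation failure described above.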
It is hopefully evident by now that in many situations predictive analytics can help business processes identify the right thing to do. Sure, gut instinct can be right sometimes, but don’t go it alone – let the data be your guide. It’s important to note that accurate and actionable predictions are only half the story. If the predictive models cannot easily be embedded in business processes, the returns will be a fraction of what could be achieved. Even if predictive analytics and rules can decide on the right thing to do, it is still a matter of doing things right.
Luke McCormack is VP of Pegasystems APAC.