Here are two important but, on the face of it, pretty simple questions. How do you find out what customers want, and how can you make sense of their choices? The obvious answer is ‘ask them’, and this is what traditional research techniques such as surveys, questionnaires and focus groups do. There is a fundamental challenge with this approach, though: people tend to be quite bad at predicting their own behaviour.
A range of known cognitive biases underlies these poor predictions. For example, people are known to modify their responses simply because they know they are being studied – a phenomenon known as ‘reactivity’. We also have a tendency to lie to ourselves: studies have found that we believe we are more likely to engage in behaviours that are good for us than we actually are.
Moreover, in research settings people are susceptible to so-called ‘social desirability bias’, or the propensity to over-report behaviours that make them look good to others. Netflix saw this when it asked its customers to say what shows and movies they watched most.
People reported viewing enlightening foreign films and documentaries. But download figures showed otherwise. Instead it was actually Hollywood blockbusters, entertainment of a more pedestrian nature, that tended to top the bill.
For policymakers – especially policymakers looking to modify customer behaviours – this provides a valuable lesson. You need to look at what people do, not just what they say they’ll do. It is a point well illustrated by research at the FCA into the behaviour of consumers putting their money into investment funds.
As part of its interim Competition Market Study into the UK’s asset management sector last year, the FCA asked non-advised retail investors an important question: do you take charges into account when making investment decisions?
Most said ‘yes’. In fact, 77% of respondents claimed to look at charges when they made their investment decisions. 45% said charges were an influential factor in their choice. So far so positive for competition and price discovery.
Yet the habits of real-world customers tell a different story. According to 6 days’ worth of browsing data given to the FCA by an online investment platform, its customers engaged with charges during fewer than 9% of visits (excluding visits to fund landing pages). And under 3% engaged with documents on the side, including Key Information Documents.
This ‘clickstream data’, as it is called, is vital because it reveals the truth about what customers actually do. It shows what pages people visit within a website, the order they click through them and how long they spend on them.
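To make this concrete, a single clickstream record can be modelled as a timestamped page view, with time-on-page inferred from the gap between consecutive views. This is a minimal sketch; the field names and structure below are illustrative assumptions, not the platform’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical schema for one page view in a clickstream;
# real platform data will differ in field names and detail.
@dataclass
class PageView:
    visitor_id: str
    url: str
    timestamp: datetime

def dwell_times(views):
    """Seconds spent on each page, inferred from the gap between
    consecutive views in a visit (the final page's time is unknown)."""
    views = sorted(views, key=lambda v: v.timestamp)
    return [
        (a.url, (b.timestamp - a.timestamp).total_seconds())
        for a, b in zip(views, views[1:])
    ]
```

From records like these, an analyst can reconstruct the order of pages visited and how long a customer lingered on each one.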
Separating fact from fiction
In this instance, the company also shared data on the investments customers made and their holdings at the end of that month.
Over the last 6 months, the FCA has worked with academics Neale Mahoney and Hanbin Yang from the University of Chicago Booth School of Business to make sense of that data. The research examined customers’ attention to charges and how they appeared to choose which investments to make.
Overall, we found that customers rarely engage with charges on this website. Key findings from this dataset include:
- Of all the visits to the website to look at funds, fewer than 9% engage with charges.
- Under 3% of visits engage with documents on the side, which includes the Key Information Document (KID). The KID is designed to help the investor make a more informed decision and it includes charges.
- A strong indicator of engagement with charges is when customers sort lists of funds by charge, which would show that they wish to minimise charges. Customers only do this during 0.1% of visits.
- Although very few customers sort by charges, those who do tend to buy cheaper funds.
- As a percentage of time spent across all customers, the largest shares of time went on the account and portfolio summaries (about 25%) and factsheet landing pages (over 10%).
- Those who buy passive funds tend to have higher engagement with fees. Out of all visits where clients buy funds, 17% engage with fees but when clients buy passive trackers that figure increases to 21%.
- When customers engage with charges, on average they do so for longer than when they look at other pages. They spend just over 60 seconds on charge-related pages and 43 seconds on non-charge related pages per visit.
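The kind of aggregation behind figures like these can be sketched as follows. The visit records and the substring rule for flagging charge pages are assumptions for the example, not the FCA’s actual methodology:

```python
# Illustrative sketch: a visit is a list of (url, seconds_on_page) pairs.
# The substring rule for spotting charge pages is an assumption for the
# example, not the platform's real page taxonomy.
def is_charge_page(url):
    return "charge" in url

def engagement_stats(visits):
    """Share of visits that touch a charge page, and the average time
    those engaged visits spend on charge-related pages."""
    engaged = [v for v in visits if any(is_charge_page(u) for u, _ in v)]
    share = len(engaged) / len(visits)
    avg_charge_secs = (
        sum(s for v in engaged for u, s in v if is_charge_page(u)) / len(engaged)
        if engaged else 0.0
    )
    return share, avg_charge_secs
```

Run over every visit in the dataset, the same logic yields headline statistics such as the share of visits engaging with charges and the average dwell time on charge-related pages.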
We also categorised three ‘typical paths’ that customers took as they passed through the website. They are based on the route that clients take to get to fund factsheet landing pages and indicate the way that they look for and buy funds. These categories help us understand how broad types of behaviour can influence investment decisions.
‘Reviewers’ are customers who mostly go to the website to check their existing investments. They start in their portfolio summary and then go and look at funds. ‘Choosers’ are customers who use recommendation lists to navigate to the factsheet landing page. ‘Searchers’ are customers who use keyword searches, either internally on the website or externally from search engines, to find funds. Overall, we find that ‘Choosers’ and ‘Searchers’ engage more with charges than ‘Reviewers’ and tend to purchase cheaper funds.
In this specific case, the clickstream data offers an important insight into how customers may be disciplining firms on price within the asset management market, which is a key driver of competition.
The future of clickstream data
Of course, focus groups, surveys and questionnaires are still critical for measuring people’s understanding and uncovering why they make certain choices. These are things you can’t get from clickstream data.
But to get a full picture we can expect to see policymakers relying more on data like this as they recognise the importance of having an objective view of how people behave. This is facilitated by the growing availability of information on customer browsing habits.
As with any research method, there are limitations. For example, in this case pages containing charge information also contained other information, so we could not be sure whether customers had read and understood the charges. We were also unable to assess the causes of the behaviours we observed.
Moreover, in this case we only had 6 days’ worth of data, which is only a snapshot. In the future, we are likely to see regulators looking at longer-run datasets.
This makes practical sense. Regulators have access to vast amounts of data, but the trick is enriching it to tease out genuine insights. And it is not difficult to imagine regulators using browsing data to understand consumer purchasing behaviours in all sorts of markets. This has huge potential to help us better understand the effectiveness of competition and the potential remedies.
There are some clear implications for policy too. It is important that customers’ attention is drawn to charges and that this information is presented in a way that is easy to understand. As far as we are aware, this is the first time a regulator has used clickstream data in its research. Given its potential to support the policy-making process, we doubt it will be the last.
Our thanks go to Hanbin Yang at the University of Chicago Booth School of Business, who co-authored this article.