Speech by Robin Finer, Acting Chief Economist at the FCA, delivered at the Respublica conference ‘The Value of Data, Competition, and Financial Services’, London.
Delivered on: 7 May 2019
- Data can have economic value for the consumer – it is a personalised input that reduces search costs and helps individuals to obtain products and services that better match their needs.
- While consumers have difficulty understanding the value of their own information, the value of data to firms is augmented by their ability to combine data sources on individuals and then aggregate across consumers. The better they are at this, the more they’ll attract new business and more information.
- By increasing our understanding of these markets and the role and value of data, we can help empower consumers to choose what to share, where, and for what, as well as informing debates about how to deal with technology firms’ market power.
Note: this is the speech as drafted and may differ from the delivered version.
We can’t move for technology and data these days – it’s in our homes and on the news, on our wrists and in our pockets.
There has also been huge focus, from high political office to serious antitrust allegations, on the growth of the large technology companies, how they've reached that position and what impact that will have going forward. Discussions about the consequences of their growth and the market power they hold have made headlines around the world.
A lot of this discussion has emphasised the role of data – personal data, transactional data, the combination of different kinds of data. One of the most prominent recent explorations of these issues is the Furman review, which considered the opportunities and challenges of the digital economy, and suggested how our regulatory approach might be adjusted to deal with them.
It’s clear that this debate merits urgent attention. In revealing information about ourselves to tech companies, financial firms and others, we may benefit from a personalisation of products and services, with propositions more tailored to our needs. But this comes at a cost, even with a zero monetary price. Our data forms the bedrock upon which these new tech giants have built their businesses. But do we really understand its true value – to ourselves and to others? And does it matter?
You won’t be surprised to hear that I think it does. Over the next few minutes, I’ll explain why, focusing on three different elements:
- The different ways in which data about us may create economic value for us as individuals.
- The emotional value we place on personal data and how this impacts our willingness to share it with others.
- The implications for those who use our personal data and gain commercial value from it.
I believe that if we properly understand these issues, we as policy makers and regulators will be better able to establish the importance of data in different contexts, and to identify those firms who genuinely have market power through their access to data and/or their ability to exploit it. This is the only way we can inform robust policy – policy which maximises the benefits for individuals, while minimising the dangers that can arise when data is concentrated in a small number of very powerful firms.
Economic value of data for consumers
Let’s start by thinking about why our information might be economically useful for us as individuals. We might think about 2 different types of data here – ‘attribute’ data that describes our characteristics (often given away consciously by an individual) and data that records/demonstrates our behaviour in particular situations (observed by the firm). Attribute data includes things such as our address, date of birth, genetic information, etc. The second category includes our search/purchase history in an online setting.
These different types of data can be used, alone or in combination, to serve 2 basic purposes – prediction and identity verification.
In other words, knowing more about us (both our inherent characteristics and our past behaviour), in combination with data about the context (such as location or time of day) enables others to better anticipate our future demands (prediction) and to offer us goods and services that are likely to meet them. This predictive power means that, should we wish to, we can take advantage of the convenience and speed offered by real time ‘suggestions’ of products or services delivered via our phone or device. We may even benefit from ‘anticipatory shipping’ – the delivery of goods to customers before they’ve even ordered them, or the Minority Report for shopping.
Moreover, when we interact online these days, we increasingly have to prove we are who we say we are. Our unique characteristics (such as fingerprints or the details of our face) can now be measured/observed and used to enable us to take advantage of personalised offers, or to transact in a secure setting.
All these data play an essential role in opening up huge, often global, markets – which we can access simply by handing over a few details.
But there is, of course, a trade-off. The true amount of data that we are giving firms, over and above what is openly requested, is not always clear. As well as taking information such as our name, address and telephone number, platforms surreptitiously collect valuable data on our behaviour (we accept cookies at our own peril). This can include stated preferences (likes for certain Facebook pages, retweets, reviews of restaurants) and browsing behaviour on online marketplaces (time taken to choose a product, or which products were looked at but not bought – although the varying accuracy of these inferences will be familiar to anyone who has received recommendations better suited to a family member, or to a friend for whom they recently bought a birthday gift). This issue of transparency around data is addressed by the Information Commissioner’s Office, whose remit is to ensure that everyone’s data is used properly and legally – General Data Protection Regulation (GDPR) requirements, such as the right to be informed, are designed to ensure that your data is used only in ways you would reasonably expect.
Even if we know how our data is being used, we do not know how much it is really ‘worth’.
Indeed, academics such as Glen Weyl at Microsoft describe the way in which our behaviour enables online firms to improve their offer (and make more money) as unpaid labour.
Emotional value of data for consumers
It’s not just that we don’t always understand the value of the information that we are providing. We are also inconsistent in how valuable we think it is. While it is common to hear people say how reluctant they are to reveal their personal information to others, they will often reveal the same information quite readily when it enables them to access a particular product or service – the so-called ‘privacy paradox’.
Evidently, we would expect that something like place of work would be felt to be a less ‘sensitive’ piece of information to share than, say, historical medical records. But it’s not always clear cut. The deciding factor may be context – how much do we care about the product or service on offer, or our relationship with the firm? In short, the value we assign to data is determined by its type and context, but not necessarily in a predictable way. It has an emotional element – data about us is personal.
We currently talk about price-sensitivity when examining how consumers make their choices – we may well be talking about privacy-sensitivity in the near future. There is already some evidence of markets trading off service quality and data access, like the alternative search engine DuckDuckGo – the search quality is worse but they don’t track you.
But perceiving our data as more or less valuable in different settings could also leave us open to exploitation in markets. ‘Nudging’, or the manipulation of systematic behavioural biases, can be used for bad as well as for good. Loss aversion, or the fear of missing out on a really attractive new product, may induce consumers to hand over more details than are strictly necessary for the firm’s stated purposes – a contradiction of GDPR’s principle of ‘freely given consent’. If the choice is presented in a time-pressured environment, consumers may suffer from present bias, underweighting the true value of their personal data in favour of the endorphin fix that comes from access to the latest social networking site.
If the emotional value of data is subjective, and so entirely personal, it is crucial that consumers are able to choose what they get for it – under transparent conditions about how it is later used. Legislative steps, such as those that apply under GDPR, allow individuals to request information on what data is held on them and ask that it be deleted. This gives individuals control over personal data and affords them the right to be forgotten, which may be important emotionally and/or economically. Elsewhere, German regulators termed the emotional cost of the constant monitoring by Facebook of its users in return for access to the site ‘unfair’ – and an illegal abuse of Facebook’s dominance. We can see that there are clearly ethical arguments here – is there a right way for firms to be using our data? But that’s a much bigger question for another day!
How much should we be asking in return for access to our information? In many everyday purchasing decisions, the mental calculations that we do are hard enough. Weighing up the benefits of a product against the very abstract costs of giving away data is even harder. Knowing that the economic and emotional value of personal information might change at a later date makes things more complicated still. Analysts interested in resolving this problem might find broad, generalisable measurements of the value of data, which could act as a benchmark, a useful first step. Indeed, academics like Dr Rebecca McDonald (Department of Economics, University of Birmingham) are exploring new experimental methods for quantifying the monetary value of personal data.
The upshot, then, is that the lack of transparency about the scope and depth of data collected, combined with the lack of a fixed value of the data, means that we consumers may be limited in our ability to make an informed decision about what to share and where. And, as regulators, we are limited in our ability to make informed judgements about the questions that arise around the use of data.
Extraction of value
These limitations can make our relationship with online markets very different to those for ‘traditional’ consumer goods, where we have a shelf price for products – products which have a ‘fixed’ value. Anyone who walks into a sports shop and wants a particular pair of running shoes will pay the same price in that shop as other customers (let’s ignore loyalty cards for the moment). Moreover, when they go into the shop, they will see the same range of shoes, displayed the same way, as anyone else entering at the same time.
Let’s compare this with an online setting. If I have been searching for trainers for a while and regularly share my running times on Strava, I may well get charged a different price from a shopper whose data self suggests that they are new to running. Moreover, I may be shown a different set of options, which best meet my (and the retailer’s) needs.
In the physical setting, the person who decides that the shoes are too expensive in that shop and goes elsewhere to buy them has some influence on the price that is charged to all customers. The shop doesn’t wish to lose too much business to the competition and will set its one price for the trainers accordingly. But online, the ability of firms to offer personalised prices means that those customers whose behaviour/data suggests that they are unlikely to shop around may well end up paying more. They may also be presented with products that are more profitable for the retailer. They are no longer protected by the activity of their more price-conscious fellow consumers.
While consumers have difficulty understanding the value of their own information, some firms are very good at understanding the value of that information to them. And these 2 values may be very different. Firms augment the value of data by combining data sources on individuals and then aggregating across consumers, using a range of technologies to extract valuable insights. The better they are at this, the more they’ll attract new business and more information. It is a virtuous circle for those who are already in the game. It is much harder for those who are not.
What conclusion can we draw from all this?
First and foremost, it is clear that as a society, and specifically for us as regulators, we need to improve our understanding of the value of these datasets – to firms and to individuals – and of how that value could change over time. This will not only enhance our analysis of markets and inform better policy, but also ensure consumers are better informed.
Promoting effective competition in all markets requires engaged consumers. In increasingly digitalised markets, where market power is determined by a firm’s ability to extract value from data, there is a challenge posed by consumers’ lack of understanding of what they are revealing and what that information is worth.
Moreover, we’ve seen that, in these evolving markets, it is not clear that the competition driven by engaged customers benefits their less engaged fellow consumers in the way it used to.
By increasing our understanding of these markets and the role and value of data, I believe we can help empower consumers to choose what to share, where, and for what. Understanding the variation in the use and value of data, in aggregate, can help inform debates about how to deal with technology firms’ market power. And in the years ahead, firms’ use of data may be a factor when weighing proposed break-ups of dominant technology players against adapting our existing antitrust competition analysis.
As regulators, we should also be aware (and take advantage) of the benefits that these and other datasets provide for us. Given the right data, skills and capacity, alongside a legal regime that is fit for purpose, we should be able to evolve and fulfil our role more effectively, keeping up with the progress in the markets we regulate.