Explaining why the computer says ‘no’

31 May 2019

The financial services industry is on the brink of a revolution in artificial intelligence. But can the rise of AI decision-making be compatible with the need to explain decisions to consumers?

Opening the black box of machine learning

The financial services industry is facing increasing pressure to explain its decisions to consumers. When faced with possibly life-changing outcomes, a customer may quite reasonably have questions: ‘Why have I been denied a mortgage or a life-insurance policy?’ Or: ‘What can I do to increase my chances of being approved in the future?’

Set against this requirement for explainability, the industry is on the brink of a revolution: that of big data and artificial intelligence (AI). In the future, we can expect more and more decisions to be informed by systems using machine learning. Some of the most powerful machine learning models, by their very nature, lack a defined structure. They are described as ‘black boxes’ because we can observe the inputs (the data) and see the output (the prediction or decision), but may not be able to completely explain the mechanism that connects one to the other.

Is it possible to harness the power of machine learning to make more effective decisions, while satisfying a potential need to explain those decisions?

The answer is: yes, but at a price. Explanations are not a natural by-product of complex machine learning algorithms. Effort can be made to retrofit an explanation. Alternatively, a simpler, more interpretable algorithm could be used in the first place, but this may come at the cost of a predictive edge.

There might, in other words, be a trade-off between the ability to meet demands for an explanation and the ability to supply more accurate predictions at reasonable time and cost.

For this reason, the authors of this article would argue the focus should be on ‘sufficient interpretability’. This is the point where supply and demand meet and where society finds the right balance between the benefits of machine learning, and AI more generally, and the need to make sense of its predictions and decisions.

Exactly where this balance lies will likely be the subject of intense debate as the demands for transparency and the advance of AI accelerate. It may also depend heavily on the question at hand (the ‘use case’) as well as who is asking.

At a recent workshop on artificial intelligence explainability at the Massachusetts Institute of Technology, we, representing both the FCA and the Bank of England, discussed the demand for explainability and practical options for supplying it.

The demand for explainability

The desired nature and degree of explanation is likely to depend on two broad factors: (a) the type of decision informed by the algorithm; and (b) the stakeholder concerned (e.g. the individual affected by the decision, the accountable manager, a regulator).

(a) Type of decision: Interest in an explanation may be particularly high where a decision is important to an individual and she has scope to influence the outcome, perhaps by changing behaviour. Think about a recruitment algorithm. An applicant with some understanding of a model’s main drivers might use this to improve her job prospects, perhaps tweaking her CV or seeking a particular qualification.

In financial services, an algorithm used to predict loan default may determine whether individuals are granted mortgages, and if so on what terms (including the interest rate). These are consequential decisions, and someone rejected for a loan may well be keen to understand why. An explanation could reassure them that the decision was fair or help them improve a subsequent application. In some cases, the law itself may dictate a degree of explainability. In the US, credit scores and credit actions are subject to the Equal Credit Opportunity Act, which compels credit rating companies and lenders to provide borrowers with specific reasons when adverse decisions affect them. The EU’s General Data Protection Regulation (GDPR) includes a more general right to explanation for people subject to automated decision-making, but opinions differ over how it can be applied in practice and whether further data protection rights are needed.

The FCA’s and the Bank of England’s regulations are also relevant in several areas, including the FCA Principle for Businesses that requires firms to pay due regard to the information needs of their clients, and the PRA’s Fundamental Rules, which collectively set out the PRA’s expectations of firms (to name just two examples). The FCA, the Bank of England and fellow regulators are looking more closely at the implications of all aspects of data use, including machine learning.

So, some model-driven decisions will come with heavy demands for justification. But explainability may matter much less in other important applications. If you’re critically ill and need an accurate diagnosis, then trading off even a touch of predictive power for an explanation of the prediction may be injudicious. And in less critical settings too, the primary concern will often be performance: if AI regulates the temperature of your office building, what you’ll care about first and foremost is whether it heats and cools the workspace effectively. Nailing down the inner workings of the black box is unlikely to be a priority.

(b) Stakeholders concerned: Even for a given decision, explainability may mean quite different things to different types of stakeholder. Think about prospective borrowers on the receiving end of an AI-driven lending decision. Even the most sophisticated borrower is likely to prefer a short summary of what drove the decision in her specific case over a full-blown account of a complex, highly non-linear system. What are the main factors that raised my estimated default risk? What would I need to change to get approved next time? ‘Interpretability’ will be the focus, generally taken to mean that an interested stakeholder can comprehend the main drivers of a model-driven decision.

But for someone accountable for a model’s decisions (perhaps a senior manager in a financial institution), additional needs are likely to kick in. They may demand a much deeper understanding of the model and must ensure that appropriate testing and controls have been implemented. Regulators, meanwhile, will want to see evidence that effective accountability is in place across the system. This may mean evidence that a degree of interpretability is being provided, appropriate to the use case and the stakeholders concerned.

Some interpretability may also help in ensuring that a machine learning model makes economic sense and that its predictions are not driven by statistical quirks or odd data inputs. It may help, too, in addressing ethical concerns: making machine learning models more interpretable could help in identifying socially undesirable ‘algorithmic bias’, for example where some groups end up being treated differently on the basis of characteristics such as race, gender, or sexuality.

The supply of explainability

So how could an interpretable model be developed? We discuss two main approaches: (a) ‘interpretability by design’ and (b) ‘reverse-engineering explanatory factors’. The first achieves interpretability at the cost of lower predictive accuracy; the second at the cost of additional work.

(a) ‘Interpretability by design’ boils down to creating a simpler model from the outset. This might be a linear or logit model, or perhaps a small decision tree, such as the illustrative example below on predicting car fuel consumption. This approach guarantees explainable outputs, but will often lead to a reduction in predictive accuracy. For example, it rules out deep learning, a source of enormous predictive advantage in some settings.
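A minimal sketch of what such a model might look like is shown below, assuming scikit-learn and a tiny, made-up fuel-consumption dataset (the variable names and figures are invented purely for illustration). A deliberately shallow decision tree is trained and then printed as a handful of human-readable rules:

```python
# Illustrative sketch only: a tiny, hand-made dataset of car attributes
# (engine size in litres, kerb weight in kg) and fuel consumption in mpg.
from sklearn.tree import DecisionTreeRegressor, export_text

feature_names = ["engine_litres", "weight_kg"]
X = [
    [1.0,  950], [1.2, 1050], [1.6, 1200], [2.0, 1400],
    [2.5, 1600], [3.0, 1800], [3.5, 2000], [4.0, 2200],
]
y = [58, 54, 47, 40, 34, 29, 25, 22]  # miles per gallon

# A deliberately shallow tree: its entire logic fits in a few rules.
tree = DecisionTreeRegressor(max_depth=2, random_state=0)
tree.fit(X, y)

# Print the fitted tree as human-readable if/else rules.
print(export_text(tree, feature_names=feature_names))
```

Because the entire model is a handful of if/else splits on engine size and weight, its logic can be read directly by a non-specialist. That legibility is exactly what this approach buys, at the price of predictive power in more complex settings.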

(b) ‘Reverse-engineering explanatory factors’ does not rely on limiting the complexity of machine learning models, but instead on subsequently applying a further algorithm to help interpret how they work. Forthcoming joint work by teams from the FCA and Bank of England will illustrate this approach. We apply a machine learning model (a ‘gradient boosted tree’) to a rich mortgage data set in order to predict borrower arrears. This procedure constructs a large number of decision trees, each one built to correct the errors of those before it, and creates predictions by aggregating these trees. It produces more accurate results than a single, simple decision tree, but those results are less interpretable. Our model performs well in terms of predictive accuracy, but its results cannot be explained easily. So we use an explainability algorithm to tease out which factors drove the machine learning model’s individual predictions.

This allows us to build a second, simpler model which helps visualise why some mortgages are classified as high-risk and others as low-risk, but which, crucially, does not replace the original machine learning model (which ultimately makes the predictions).
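A minimal end-to-end sketch of this pattern is shown below, on synthetic data rather than the mortgage data set described above. The feature names, the use of scikit-learn’s GradientBoostingClassifier, and the choice of the shap library’s TreeExplainer as the explainability algorithm are all assumptions made for illustration; the joint FCA/Bank of England work is not reproduced here.

```python
# Illustrative sketch only: a black-box gradient boosted tree on synthetic
# 'mortgage' data, an explainability algorithm applied after the fact, and
# a simple surrogate tree that mimics (but does not replace) the black box.
import numpy as np
import shap                                    # assumed choice of explainer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
feature_names = ["loan_to_value", "loan_to_income", "interest_rate"]
X = np.column_stack([
    rng.uniform(0.5, 1.0, n),                  # loan-to-value ratio
    rng.uniform(1.5, 6.0, n),                  # loan-to-income ratio
    rng.uniform(1.0, 8.0, n),                  # interest rate (%)
])
# Synthetic arrears flag: riskier loans are more likely to fall into arrears.
p = 1 / (1 + np.exp(-(4 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2] - 6)))
y = rng.binomial(1, p)

# Step 1: the 'black box' -- an ensemble of boosted decision trees.
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Step 2: retrofit an explanation, attributing one individual prediction
# to its inputs (positive values push towards 'arrears').
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]
print(dict(zip(feature_names, contributions.round(3))))

# Step 3: a second, simpler model -- a shallow tree trained to mimic the
# black box's own classifications, used only to visualise its behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, model.predict(X))
print(export_text(surrogate, feature_names=feature_names))
print("fidelity to black box:", (surrogate.predict(X) == model.predict(X)).mean())
```

In a sketch like this, the surrogate is judged by its fidelity to the black box rather than by its own predictive accuracy: it is there to visualise behaviour, while the original model continues to make the predictions.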

It should be said that explainability algorithms have their limitations and may provide only indicative or illustrative insight into the black-box process, rather than definitive explanations. Such insights may not be sufficient for the needs of consumers, firms or regulators, particularly in contexts where near-complete explainability is desired.

Conclusion: sufficient interpretability

As AI permeates all aspects of our lives, calls are mounting for models to be explainable. We suggest a focus on ‘sufficient interpretability’, where demand and supply for explainability meet. ‘Sufficient’ reflects the fact that explainability comes with a cost, can also never be absolute, and will be valued in some settings (and by some stakeholders) more than others. ‘Interpretability’ captures the idea that one of the most useful and practical ways to explain a decision is often simply to articulate its main drivers.

Establishing the right level of interpretability will require a collective endeavour in the years ahead. We should expect further discussions to clarify demand, as well as further experimentation and innovation on the supply side. Unpacking the concerns of individuals will be particularly important: we need a more nuanced understanding of how preferences for explainability turn on features of the decision environment (along with the appetite for compromise). Citizens and their advocates will gradually reveal the level of interpretability that they expect in different settings, and what trade-offs they are willing to make to achieve it.

Sufficient interpretability is unlikely to be one size fits all; it will vary by stakeholder and context. Having an understanding of how a decision was reached will doubtless feel essential to data subjects in some cases, perhaps where the personal stakes are high, they feel able to influence the outcome, or there are fairness concerns. In other contexts, safety and efficacy may trump explainability altogether. We have no definitive scientific understanding of how some highly successful drugs work (and nor do those who developed them); we just know that they are effective thanks to rigorous testing of outcomes. Most of us are happy to take the benefits today and leave scientists to continue the quest to pin down a causal mechanism in the future. And let’s not forget that we live in a world where many decisions are still taken for us by humans, who, for all their other qualities, cannot explain their decisions perfectly.

Finally, sufficient interpretability is only one element of a safe and effective AI environment. Efforts are needed to ensure training data are representative and sufficiently accurate, to formulate a range of credible model validation techniques, and to agree on rigorous ethical standards for practitioners in the AI field.

Karen Croxson is Head of Research at the FCA. Philippe Bracke is a Technical Specialist in the Household Finance and Economic Data Science Unit at the FCA. Carsten Jung is the lead on artificial intelligence at the Bank of England’s Fintech Hub.
