Artificial Intelligence in the boardroom

01 August 2019

The advent of AI is not just a matter for technicians; those at the very top of firms must take responsibility for the big issues.

After decades of talk, the era of artificial intelligence (AI) is finally arriving, and debate about how it will affect business processes is well underway. Much of that debate has remained focussed on the IT department or at an operational level, and of course it is in these areas that many of the challenges and opportunities will be tackled.

But to regard AI as rooted in departments and among technicians would be a serious mistake – its advent will create business challenges that will, indeed must, reach all the way to the very top of organisations.

Boardrooms are going to have to learn to tackle some major issues emerging from AI – notably questions of ethics, accountability, transparency and liability.

These are not matters that can be ring-fenced in a department, whether that be IT, legal, or customer service – that would to be abrogate boardroom responsibility and to leave an organisation exposed at the very top.

Boards already oversee machines, but AI will be a step change

It’s regularly said that financial services firms are technology companies with financial licences. From early automation of business processes to full-throttle adoption of internet-enabled digital services, the leaders of financial services firms have grown to realise that strategic use of technology is an essential factor in success. As a result, boards and senior managers have developed a range of new governance skills: overseeing those running technology units directly, procuring technology services and outsourcing technology and services to third parties.

These skills have developed over the decades in which technology has been important to financial services. Latterly, in overseeing machine operating models and the algorithms that calculate pricing and automate execution of deterministic activity, executives have created new governance activities such as model validation.

This is a good starting point, but AI will require yet more new skills and new areas of focus.

Ethics

Ethical decisions are not new in business, but AI brings a new dimension of complexity. Just because the machine can do it, should it? Will the machine learn inappropriate behaviour from past human decisions? What’s wrong, what’s right and what is just creepy?

Overseeing machines in the era of models and algorithms created new responsibilities and new roles. But with AI, it may not be as simple as clear accountability. The difference between right and wrong may be more nuanced, not least because wider society has yet to reach a consensus on what is an ethical use of AI.

Adopting AI may require more user research and certainly should involve challenging debate. 

In other industries, ethics committees have helped represent the powerless in these sorts of debates while being accountable to the powerful. Boards may not want to go as far as setting up new committees, but they will need to dedicate time and serious effort to identifying and making these ethical decisions.

Explainability

Some of the coding approaches used in AI can render outcomes difficult to explain. Many in the AI industry may argue that this will become the new normal – we’ll just have to trust the machine. But this is unlikely ever to be acceptable to regulators.

A recent Insight article – Explaining why the computer says ‘no’ – addressed this issue and suggested that ‘sufficient’ explainability should be the ultimate target. But of course, this will still mean a human will have to decide what amounts to ‘sufficient’.

That task should ultimately fall to the board, and board members should take a hard line on what 'sufficient' means for them, and also what it should mean for the consumer.
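To give a flavour of how ‘sufficient’ could be made concrete rather than left abstract, the sketch below uses one common technique – fitting a simple, human-readable surrogate model to a black-box model’s decisions and measuring how faithfully it reproduces them. This is an illustration of the general idea, not the article’s prescribed method; the data, model choices and the 0.90 threshold are all assumptions for the example.

```python
# Illustrative sketch: measuring how 'explainable' a black-box model is
# via a global surrogate. All thresholds and names are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Stand-in data and a black-box model (here, a large ensemble).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit a shallow decision tree to mimic the black box's *predictions*,
# not the original labels - a readable, global surrogate.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))  # the human-readable decision rules

# A board could then set a policy line for 'sufficient' (illustrative):
SUFFICIENT_FIDELITY = 0.90
if fidelity < SUFFICIENT_FIDELITY:
    print("Explanation falls short of the agreed standard; escalate.")
```

The point of the exercise is that the board’s judgement becomes a number it can own and review, rather than a quality it must take on trust.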

It is vital that board members do not let themselves be seduced by a 'black box knows best' argument. They will need the confidence and the integrity to admit when they themselves do not fully understand any aspect of their firm's use of AI, and to demand an explanation.

Transparency

Transparency is very different from explainability. For a firm to be transparent about its use of AI, its customers should know when and how machines are involved in making decisions, whether about them or on their behalf. Customers should also be told clearly and explicitly what aspects of their personal data are being fed into AI systems, and their consent to that use should be an absolute requirement.

This raises further questions about whether it is even workable to use an AI system alone to decide, for example, product applications. What if a consumer refuses permission for their personal data to be used by an AI system? Should they have the right to have their application considered in another way? If there is no other method, does this amount to unreasonably excluding that customer from that product?
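One way to make these questions concrete is a simple routing rule: applications without consent to AI processing fall back to human review rather than being declined outright. The sketch below is purely illustrative – the Application type, its fields and the routes are hypothetical names invented for the example, not a prescribed design.

```python
# Illustrative sketch: consent-aware routing of a product application.
# The types and names here are hypothetical, for illustration only.
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AI_DECISION = "automated AI decision, with explanation recorded"
    HUMAN_REVIEW = "manual review by a person"

@dataclass
class Application:
    applicant_id: str
    consents_to_ai: bool  # explicit, informed consent captured up front

def route_application(app: Application) -> Route:
    """Route to AI only where explicit consent exists; withholding
    consent triggers another method rather than exclusion."""
    if app.consents_to_ai:
        return Route.AI_DECISION
    return Route.HUMAN_REVIEW

print(route_application(Application("A-123", consents_to_ai=False)).value)
```

The design choice embedded here – that refusing consent routes to an alternative process rather than to rejection – is exactly the kind of policy decision that belongs with the board, not the IT department.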

Transparent provision of consent is a developing topic, still being informed by experience. But customers will make judgements about companies based on how they handle these matters, and so they move from being control issues to critical business decisions requiring the input of the board.

Where boards may have to become involved is in setting the approach to transparency and the level of detail it involves, and that in turn will reflect the values of their organisations.

Liability

Boards should bear in mind that the use of AI can change how liability plays out in traditional business models. Where and how frequently this will crop up in financial services is uncertain. But consider motor insurance, where the emerging consensus is that for autonomous vehicles liability shifts away from the driver and onto the manufacturer. This is a good reminder that major changes are possible.

Boards will need to keep an eye on the potentially changing nature of liability for services which include AI. For example, might a company, by using AI to make a service more useful, unwittingly be assuming greater liability?

The introduction of AI is not just a matter for IT departments. In fact, perhaps we should not regard it as principally a technology issue at all, but rather as a business issue.

The arrival of AI systems will require real understanding and genuine engagement at the very top of business. The best boards will consciously and overtly debate and decide their position and approach to these issues of Ethics, Explainability, Transparency and Liability.

Those that do not may find their AI projects come back to haunt them in entirely unexpected ways. 
