Asset management portfolio tools

Multi-firm reviews | Published: 13/01/2020 | Last updated: 27/10/2020

This report sets out the findings of our review of how firms in the asset management sector selected and used risk modelling and other portfolio management tools. While we saw some good practice at most firms, our review identified problems in firms’ processes and controls, particularly in risk model oversight and contingency planning.

Who this review applies to

This report is based on our findings from a sample of firms, but is relevant to all firms in the asset management sector. These firms should consider our findings and how they apply to their own organisations.

Why we conducted this review

Our review followed our earlier work, including the Technology and Cyber Resilience Questionnaire. We wanted to assess how asset management firms select, use and oversee the tools and models they use to manage portfolios, how they identify and manage the relevant risks, and how capable they are of responding to system failures or service interruptions.

Portfolio management tools and risk models are central to asset management activities, and any significant technological failure could cause serious consumer harm. A service interruption at a provider supporting a large enough group of asset managers could also damage market integrity.

What we did

We visited 10 firms in the asset management sector to see how they selected and used risk modelling and other portfolio management tools. The firms varied in terms of size, scale, operating models and asset classes.

We met senior executives from the first and second lines of defence. Our review was not an in-depth end-to-end review or audit of these tools and did not, for example, attempt to test the operational effectiveness of these tools or of firms’ contingency arrangements.

What we found

The firms we sampled had different approaches in their use of portfolio tools. Some relied largely on a single provider offering an integrated package. Others used a suite of tools from different providers. The remainder built their technology in-house.

Many firms found it challenging to decide whether to use a single provider for most of their needs or to bring together components from different sources. The ‘one-provider’ integrated solution offered some potential benefits, such as:

  • reducing manual input, lowering the risk of errors from data being re-keyed or presented differently on various tools
  • simplified vendor management
  • improved oversight in both the first and second lines, due to consistent data handling and the creation of a ‘single source of truth’
  • less complexity in training and implementation

Firms agreed that potential drawbacks to the single provider ‘package’ approach included:

  • concentration risk due to widespread reliance on a small number of large specialist providers
  • the resilience implications for their own firms from relying so significantly on a single provider
  • the potential for individual elements of the package not being the best of their type, and
  • the perceived irreversibility of such a decision – having configured operations (including compliance functions) around an integrated solution, firms were concerned that replacing this with a more modular system would be very difficult in practice

The advantages and disadvantages of different approaches

These varied between firms. Firms emphasised that understanding the compromises involved in each technology strategy was critical to managing the resulting trade-offs.

Those which had developed in-house tools highlighted the flexibility this gave them over functionality and maintenance. They pointed to the difficulties that even relatively large firms may face in trying to influence the providers they rely on. Others were concerned that trying to fit their operations into a ‘standardised’ set of tools might reduce their distinctiveness and damage their competitive position.

Firms which had built in-house tools were aware of the costs of their choice and regularly looked at how they could reduce them. Options included monetising their software and broadening its cost base by licensing it to competitors, and regularly reviewing the market to see whether new commercial products might be as effective.

Vendor management

The main vendor management (VM) approaches we saw were:

  • a centralised function operating a largely standardised approach to the various categories of provider, with little input from business areas or end-users
  • a decentralised approach where the first-line users largely ‘owned’ the oversight of the relationship with little input from the central function beyond core procurement and administrative support
  • a hybrid where the VM function actively included service users in their oversight of providers

Firms with a centralised approach were able to explain their rationale, stating that it gave them a consistent approach and economies of scale in overseeing providers. Others pointed out the limitations of relying on a centralised approach, including a reduced or partial understanding of how the business uses a tool (including how far the business relies on the service) and an inability to assess the quality of service being received.

Firms that involved end-users in the vendor management process seemed better able to assess the impact of service quality issues. Several firms said that how critical a specific vendor is can change as the business or operating environment evolves. All the oversight programmes we saw included periodic reviews of providers, but these were often carried out independently of the business. They also often failed to consider whether a provider’s risk categorisation had changed since the service began, simply keeping the original categorisation even when the way the business used the provider had changed significantly. Firms were therefore unlikely to give the highest-risk providers the oversight resources they needed.
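
To illustrate the re-categorisation gap described above, the sketch below re-derives a provider’s risk tier from current business usage rather than keeping the tier assigned at onboarding. It is a minimal illustration only: the fields, thresholds and provider name are hypothetical, not drawn from any firm we reviewed.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProviderUsage:
    name: str
    supports_critical_service: bool   # does a critical business service rely on it?
    dependent_desks: int              # how many business areas now use the tool
    last_reviewed: date

def risk_tier(usage: ProviderUsage) -> str:
    """Re-derive the tier from how the business uses the provider today."""
    if usage.supports_critical_service or usage.dependent_desks >= 5:
        return "high"
    if usage.dependent_desks >= 2:
        return "medium"
    return "low"

def periodic_review(register: dict[str, str], usage: list[ProviderUsage]) -> None:
    """Flag providers whose recorded tier no longer matches current usage."""
    for u in usage:
        current = risk_tier(u)
        recorded = register.get(u.name, "unrated")
        if current != recorded:
            print(f"{u.name}: recorded '{recorded}' but usage implies "
                  f"'{current}' - reassess oversight resources")

# Example: a provider onboarded as low risk that now underpins a critical service
register = {"RiskToolCo": "low"}
periodic_review(register, [ProviderUsage("RiskToolCo", True, 6, date(2018, 3, 1))])
```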

Firms explained the challenges that arise when functionality, delivery or resilience requirements change over time due to external factors such as regulatory change or evolving cyber risks. The most effective provider review programmes we saw gave targeted attention to these topics to ensure that providers’ arrangements remained compliant and within risk appetite. Many providers run user groups through which clients can share experience and good practice and help influence or prioritise service delivery changes at the provider. Not all firms chose to participate in these groups, typically because they felt they already had enough influence with their providers, though it was not always clear what this confidence was based on.

The sophistication of vendor management programmes varied significantly between firms. The most carefully designed programmes ranged from simple surveys for the lower-risk, less complex suppliers to detailed reviews by internal or external audit for the higher-risk or more complex service providers. The effectiveness of oversight programmes was not always matched to the size and resources of the firm. The most effective approaches were clearly risk-based and outcome-oriented, with firms able to demonstrate how their engagement informed their resilience planning.

Model governance

We saw different approaches to how firms oversaw the use of risk and investment models in their business.

Several firms adopted a 'framework' approach to reviewing model use. This involved checking that models were being developed or used in line with agreed procedures, rather than repeatedly reviewing each individual model in detail. The effectiveness of this approach appeared to rely heavily on the quality of the framework and on the sample being sufficiently representative of all the models in use. In some firms, the number of models reviewed in sampling seemed too small to provide assurance that the model development and implementation processes were sufficiently robust. More commonly, we saw review processes which independently validated both the underlying models and the design framework. This appeared likely to provide a higher level of assurance.
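
The sampling concern can be made concrete with a short worked example. The sketch below uses purely illustrative numbers (200 models, of which 10 are non-compliant) to compute the hypergeometric probability that a random review sample misses every defective model; small samples leave a substantial chance that the review sees nothing wrong.

```python
from math import comb

def prob_miss_all(total: int, defective: int, sample: int) -> float:
    """Probability a random sample contains none of the defective models."""
    return comb(total - defective, sample) / comb(total, sample)

# Illustrative assumption: 200 models in use, 10 of them non-compliant
for n in (5, 20, 60):
    p = prob_miss_all(total=200, defective=10, sample=n)
    print(f"sample of {n:3d}: {p:.0%} chance of missing every defect")
# Prints roughly 77%, 34% and 3% respectively: only the larger sample
# gives meaningful assurance about the development process as a whole.
```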

Many firms said that model governance was challenging, partly because of the difficulty of building and retaining technical expertise in first and second-line oversight functions. As models are used more widely and become more complex, firms said that recruitment was difficult, with strong competition for suitable candidates. To manage this, firms were encouraging contractors to move into permanent roles and creating apprenticeships.

Managing change

Firms told us that changing front-office technology suppliers is a complex process requiring significant alteration to existing processes. While contracts often had break clauses of between 2 and 5 years, the length of time it takes to implement a new provider and the resulting disruption meant firms were often very reluctant to use these clauses. Several firms confirmed that the length of some relationships was less a positive endorsement of the provider they were using than a reflection of the difficulty of going elsewhere.

Firms told us of change programmes suffering delays and cost overruns, often because of data migration issues (cleanliness and format). Firms stressed the benefits of carrying out a comprehensive business analysis to fully understand current processes and needs before provider selection and implementation. To reduce risk, several firms carried out a detailed gap analysis to understand how closely a new tool’s capabilities correspond to the identified business needs. Firms said keeping technical capability within the business by bringing change programme contractors in-house made the post-implementation period smoother.
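
As a minimal illustration of the gap analysis firms described, the sketch below compares a set of documented business needs against a candidate tool’s capabilities; all requirement names are hypothetical placeholders, not items from any firm’s analysis.

```python
# Documented business needs gathered during the business analysis phase
business_needs = {
    "fixed-income analytics",
    "intraday position updates",
    "regulatory reporting feed",
    "multi-currency support",
}

# Capabilities of a candidate tool, as documented by the provider
candidate_tool = {
    "fixed-income analytics",
    "multi-currency support",
    "equity attribution",
}

gaps = business_needs - candidate_tool    # needs the tool cannot meet
unused = candidate_tool - business_needs  # capabilities with no matching need

print("Gaps to close or work around:", sorted(gaps))
print("Capabilities paid for but not needed:", sorted(unused))
```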

Firms explained the difficulty of testing how new tools would handle the full range of transactions and assets, in appropriate volumes, both to be confident that a tool would work correctly in practice and to identify issues quickly. Firms which had run existing and new tools in parallel said that, while operationally challenging, this was a very effective way of understanding the new software’s strengths and weaknesses.
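
A sketch of the parallel-running comparison follows, under the assumption that both tools’ outputs can be extracted for the same instruments; the tolerance, instrument codes and values are illustrative.

```python
TOLERANCE = 0.0001  # 1 basis point, an illustrative threshold

def compare_runs(old: dict[str, float], new: dict[str, float]) -> list[str]:
    """Return the instruments where the two tools disagree materially."""
    breaks = []
    for instrument, old_value in old.items():
        new_value = new.get(instrument)
        if new_value is None:
            breaks.append(f"{instrument}: missing from new tool")
        elif abs(new_value - old_value) > TOLERANCE * abs(old_value):
            breaks.append(f"{instrument}: {old_value} vs {new_value}")
    return breaks

# Hypothetical valuations from a day's parallel run of both tools
old_run = {"GILT_2030": 101.2345, "CORP_XYZ": 98.7650}
new_run = {"GILT_2030": 101.2345, "CORP_XYZ": 98.9000}
for line in compare_runs(old_run, new_run):
    print(line)  # only CORP_XYZ breaches the tolerance
```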

Resilience and recovery

Firms had generally not given enough consideration to how they would manage outages of different lengths. This is despite portfolio management tools and associated services, such as data packages, being critical to how firms function. Firms often assumed that service interruptions would be rare and short, with an implicit view that some providers may be ‘too big to fail’. We saw little evidence to support that confidence.

Firms generally understood the impact on customers or markets if they were unable to continue their critical services for material periods. But it was sometimes not clear that their contingency plans measured up to these risks. Firms told us that building and maintaining the fallback arrangements needed to operate normally during an extensive outage was prohibitively expensive. Some appeared to have limited capacity to withstand even relatively short interruptions. These firms had weaknesses in the frequency, timing, synchronisation and storage of data back-ups, meaning they would quickly lose visibility of their portfolios.
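
The back-up weaknesses described above can be expressed as a simple recovery point objective (RPO) check. The sketch below is illustrative only: the data sets, timestamps and four-hour tolerance are assumptions, not figures observed during the review.

```python
from datetime import datetime, timedelta

RPO = timedelta(hours=4)  # assumed tolerance for data loss

# Most recent synchronised back-up of each data set (illustrative)
last_backup = {
    "positions": datetime(2020, 10, 26, 23, 0),
    "market_data": datetime(2020, 10, 26, 6, 0),   # stale relative to the RPO
    "orders": datetime(2020, 10, 26, 22, 30),
}

def check_rpo(now: datetime) -> None:
    """Flag any data set whose last back-up is older than the RPO allows."""
    for dataset, taken in last_backup.items():
        age = now - taken
        status = "OK" if age <= RPO else f"BREACH (last back-up {age} old)"
        print(f"{dataset}: {status}")

check_rpo(now=datetime(2020, 10, 27, 1, 0))
# positions and orders are within tolerance; market_data would leave a
# gap in portfolio visibility after an outage
```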

Firms were clear that the value of contingency plans for technology failures and service interruptions depended on how well they had been tested. Greater first-line involvement in developing, reviewing and testing these arrangements may increase the assurance that contingency plans can provide.

Testing of software

Software upgrades and patches, which can be frequent or time-critical, often cause operational problems because the upgraded system behaves in a way that users did not expect. Firms described a tension between the need to implement necessary change quickly and the desire to test fully.

Firms were particularly concerned about supplier errors where the firms themselves might be liable for the cost of any resulting losses. Providers’ contracts typically limited their liability, for example by excluding consequential loss or restricting warranty periods. Firms were not always confident about the circumstances in which they could pass financial liability on to their provider.

Firms sought to reduce the risk associated with code changes and patches by engaging closely with key providers around the extent and robustness of their testing arrangements. Where their categorisation of systems as high or low risk was incomplete or outdated, they could not be sure that the right level of engagement or oversight would be achieved.

Some firms placed great weight on the tests carried out by their software providers before new code was rolled out. These firms did not always demonstrate a clear understanding of either the limits of such testing or how these tests matched up with the way that they themselves used the software. The firms which engaged with supplier user group forums said that this allowed them to improve their understanding of system developments and put extra controls in place to address the risks identified.

Other techniques firms used to manage the risks of technology change included phased programmes for implementing non-critical changes, rolling them out gradually ahead of a full go-live, and quarantining or otherwise segregating new code until they had sufficient assurance about its robustness.
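
As an illustration of the phased approach described above, the sketch below routes a small, configurable fraction of non-critical workflows through new code while critical workflows remain quarantined on the proven path; the rollout fraction and function names are assumptions, not any firm’s implementation.

```python
import random

ROLLOUT_FRACTION = 0.10  # start with 10% of eligible requests (illustrative)

def process_legacy(request: str) -> str:
    return f"legacy path handled {request}"

def process_new(request: str) -> str:
    return f"new code handled {request}"

def handle(request: str, critical: bool) -> str:
    # Critical workflows stay quarantined from the new code entirely
    # until sufficient assurance has been gathered.
    if critical or random.random() >= ROLLOUT_FRACTION:
        return process_legacy(request)
    return process_new(request)

print(handle("rebalance-fund-42", critical=True))   # always legacy
print(handle("report-run-7", critical=False))       # 10% chance of new path
```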

Customer expectations

Modelling tools were often central to firms’ investment and portfolio management processes, but firms were not always clear in describing the role these tools played. In particular, the triggers or circumstances which might allow portfolio managers to amend or overrule model outputs were not always well-defined or clearly documented.
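
One way to make override triggers well-defined and clearly documented is to record each override against a pre-agreed list of triggers. The sketch below illustrates this; the permitted triggers and field names are hypothetical, not taken from any firm’s framework.

```python
from dataclasses import dataclass
from datetime import datetime

# Pre-agreed circumstances in which a manager may overrule a model (illustrative)
PERMITTED_TRIGGERS = {"stale market data", "known model limitation", "client instruction"}

@dataclass
class ModelOverride:
    model: str
    manager: str
    trigger: str        # must be one of the documented, pre-agreed triggers
    rationale: str
    timestamp: datetime

    def __post_init__(self):
        if self.trigger not in PERMITTED_TRIGGERS:
            raise ValueError(f"'{self.trigger}' is not a documented override trigger")

# A valid, auditable override record
record = ModelOverride(
    model="credit-risk-v3",
    manager="PM-114",
    trigger="stale market data",
    rationale="Vendor feed for EM bonds delayed; used prior-day curve.",
    timestamp=datetime(2020, 10, 26, 14, 5),
)
```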

Many firms told us that a factor in their selection of specific risk models was their belief that investment consultants and other intermediaries expected or preferred firms to be using these tools. This suggested that firms were sometimes using tools they were not fully committed to. This created uncertainty as to how much weight portfolio managers actually placed on the outputs from these tools and whether assets were consistently being managed in line with clients’ expectations. Firms did, however, sometimes tell us that their clients were not always willing or able to engage at the necessary level of detail on how these tools would be used in practice.

Our actions and next steps

We provided feedback to the firms that we met and set out our expectations as to where improvements could be made.

Our review found instances where firms could further reduce the potential for harm both to their operations and to their customers.

We expect firms to ensure that their implementation, oversight and contingency arrangements for these tools enable them to meet our expectations as set out in the systems and controls handbook and elsewhere. These include the expectation that a firm’s arrangements will ‘ensure that it can continue to function and meet its regulatory obligations in the event of unforeseen interruption’.

We will continue to look at the operational resilience arrangements in place at firms, including those which were not included in our review.