Asset management portfolio tools

Multi-firm reviews | Published: 13/01/2020 | Last updated: 14/01/2020

This report sets out our findings from a review of how firms in the asset management sector selected and used risk modelling and other portfolio management tools. While we saw some good practice at most firms, our review identified problems in firms’ processes and controls, particularly in risk model oversight and contingency planning.

Who this review applies to

This report is based on our findings from a sample of firms, but is relevant to all firms in the asset management sector. These firms should consider our findings and how they apply to their own organisations.

Why we conducted this review

Our review followed our earlier work, including the Technology and Cyber Resilience Questionnaire. It assessed how asset management firms select, use and oversee the tools and models they use to manage portfolios. We wanted to see how firms identify and manage the relevant risks, and how capable they are of responding to system failures or service interruptions.

Portfolio management tools and risk models are central to asset management activities, so any significant technological failure could cause serious consumer harm. A service interruption at a provider supporting a large enough group of asset managers could also damage market integrity.

What we did

We visited 10 firms in the asset management sector to see how they selected and used risk modelling and other portfolio management tools. The firms varied in terms of size, scale, operating models and asset classes.

We met senior executives from the first and second lines of defence. Our review was not an in-depth, end-to-end review or audit of these tools and did not, for example, attempt to test the operational effectiveness of these tools or firms’ contingency arrangements.

What we found

Model governance

We saw different approaches to how firms oversaw the use of risk and investment models in their business.

Several firms adopted a 'framework' approach to reviewing model use. This involved checking that models were being developed or used in line with agreed procedures, rather than repeatedly reviewing each individual model in detail. The effectiveness of this approach appeared to rely heavily on the quality of the framework and on the sampling being sufficiently representative of all the models in use. In some firms, the number of models sampled for review seemed too small to provide assurance that the model development and implementation processes were sufficiently robust. More commonly, we saw review processes which independently validated both the underlying models and the design framework. This appeared likely to provide a higher level of assurance.
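
As a purely illustrative sketch of the sampling point above: stratifying the review sample by model risk tier helps ensure the higher-risk models are always independently validated. The model register, tier labels and sampling rates below are our assumptions, not drawn from any firm we visited.

```python
import random
from collections import defaultdict

# Hypothetical model register; names and risk tiers are illustrative only.
model_inventory = [
    {"name": "equity_var", "tier": "high"},
    {"name": "credit_spread", "tier": "high"},
    {"name": "fx_overlay", "tier": "medium"},
    {"name": "liquidity_ladder", "tier": "medium"},
    {"name": "cash_sweep", "tier": "low"},
]

def sample_for_review(inventory, rates=None):
    """Pick models for independent validation, sampling more heavily
    from higher-risk tiers so the sample stays representative."""
    rates = rates or {"high": 1.0, "medium": 0.5, "low": 0.25}  # assumed rates
    by_tier = defaultdict(list)
    for model in inventory:
        by_tier[model["tier"]].append(model)
    selected = []
    for tier, models in by_tier.items():
        # Review at least one model per tier, and every high-risk model.
        k = max(1, round(len(models) * rates.get(tier, 0.25)))
        selected.extend(random.sample(models, k))
    return selected

print([m["name"] for m in sample_for_review(model_inventory)])
```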

Many firms said that model governance was challenging, partly because of the difficulty of building and retaining technical expertise in first and second-line oversight functions. As models become more widely used and more complex, firms said that recruitment was difficult, with strong competition for skilled staff. To manage this, firms were encouraging contractors to move into permanent roles and creating apprenticeships.

Managing change

Firms told us that changing front-office technology suppliers is a complex process requiring significant alteration to existing processes. While contracts often had break clauses of between 2 and 5 years, the length of time it takes to implement a new provider and the resulting disruption meant firms were often very reluctant to use these clauses. Several firms confirmed that the length of some relationships was less a positive endorsement of the provider they were using than a reflection of the difficulty of going elsewhere.

Firms told us of change programmes suffering delays and cost overruns, often because of data migration issues (cleanliness and format). Firms stressed the benefits of carrying out a comprehensive business analysis to fully understand current processes and needs before selecting and implementing a provider. To reduce risk, several firms carried out a detailed gap analysis to understand how closely a new tool’s capabilities correspond to the identified business needs. Firms said that keeping technical capability within the business, by bringing change programme contractors in-house, made the post-implementation period smoother.
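
A gap analysis of this kind can be as simple as checking each identified business need against a candidate tool’s capability list. The sketch below is a minimal, hypothetical illustration; the requirement names and priorities are our assumptions.

```python
# Hypothetical business requirements mapped to priorities, compared
# against the capabilities a candidate tool claims to support.
required = {
    "fixed_income_analytics": "must",
    "intraday_risk": "must",
    "esg_screening": "should",
    "multi_currency_reporting": "should",
}
candidate_capabilities = {"fixed_income_analytics", "multi_currency_reporting"}

# Anything required but not supported is a gap to assess before selection.
gaps = {need: priority for need, priority in required.items()
        if need not in candidate_capabilities}

for need, priority in sorted(gaps.items(), key=lambda item: item[1]):
    print(f"GAP ({priority}): {need}")
```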

Firms explained how difficult it is to test whether new tools would handle the full range of transactions and assets, in realistic volumes, and so be confident that the tool would work correctly in practice and that issues would be identified quickly. Firms which had carried out parallel running of existing and new tools said that, while operationally challenging, this was a very effective way of understanding the new software’s strengths and weaknesses.
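
Parallel running ultimately comes down to reconciling the two tools’ outputs over the same period and investigating the breaks. Below is a minimal sketch, assuming daily portfolio-level risk figures from each system; the metric names, values and tolerance are hypothetical.

```python
import math

def reconcile(legacy, replacement, rel_tol=1e-4):
    """Compare outputs from the legacy and replacement tools for the
    same run, flagging missing values and material divergences."""
    breaks = []
    for key in legacy.keys() | replacement.keys():
        old_v, new_v = legacy.get(key), replacement.get(key)
        if old_v is None or new_v is None:
            breaks.append((key, old_v, new_v, "missing in one system"))
        elif not math.isclose(old_v, new_v, rel_tol=rel_tol):
            breaks.append((key, old_v, new_v, "values diverge"))
    return breaks

# Hypothetical figures for one portfolio's daily run.
legacy_run = {"var_99": 1_250_000.0, "duration": 6.42, "tracking_error": 0.0183}
new_run = {"var_99": 1_251_900.0, "duration": 6.42}

for b in reconcile(legacy_run, new_run):
    print(b)
```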

Resilience and recovery

Firms had generally not given enough consideration to how they would manage outages of different lengths. This is despite portfolio management tools and associated services, such as data packages, being critical to how firms function. Firms often assumed that service interruptions would be rare and short, with an implicit view that some providers may be ‘too big to fail’. We saw little evidence to support that confidence.

Firms generally understood the impact on customers or markets if they were unable to continue their critical services for material periods. But it was not always clear that their contingency plans measured up to these risks. Firms told us that building and maintaining the fallback arrangements needed to operate normally during an extensive outage was prohibitively expensive. Some appeared to have limited capacity to cope with even relatively short interruptions. These firms had weaknesses in the frequency, timing, synchronisation and storage of data back-ups, meaning they would quickly lose visibility of their portfolios.
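
The back-up weaknesses described above are straightforward to monitor for. As a minimal, hypothetical sketch: check each critical dataset’s last back-up against a recovery point objective, and check that the back-ups are close enough in time to restore a consistent view. The dataset names, timestamps and one-hour objective are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

RPO = timedelta(hours=1)  # assumed recovery point objective

# Hypothetical last-successful-back-up timestamps per critical dataset.
backups = {
    "positions": datetime(2020, 1, 13, 17, 0, tzinfo=timezone.utc),
    "transactions": datetime(2020, 1, 13, 16, 45, tzinfo=timezone.utc),
    "market_data": datetime(2020, 1, 13, 9, 30, tzinfo=timezone.utc),
}
now = datetime(2020, 1, 13, 17, 30, tzinfo=timezone.utc)

# Frequency/timing: flag any dataset whose back-up is older than the RPO.
for dataset, last in backups.items():
    age = now - last
    print(f"{dataset}: backed up {age} ago [{'OK' if age <= RPO else 'STALE'}]")

# Synchronisation: back-ups taken far apart restore an inconsistent view.
spread = max(backups.values()) - min(backups.values())
if spread > RPO:
    print(f"WARNING: back-ups span {spread}; restored data would be out of sync")
```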

Firms were clear that the value of contingency plans for technology failures and service interruptions was linked to how well they had been tested. Greater involvement by the first line in developing, and subsequently reviewing and testing, these arrangements may increase the assurance the contingency plans can provide.

Testing of software

Software upgrades and patches, which can be frequent or time-critical, often cause operational problems because the upgraded system behaves in ways that users had not expected. Firms described a tension between the need to implement necessary change quickly and the desire to test fully.

Firms were particularly concerned about supplier errors where the firms themselves may be liable for the cost of any resulting losses. Providers’ contracts typically limited their liability, for example by excluding consequential loss and limiting warranty periods. Firms were not always confident about the circumstances in which they could pass financial liability on to their provider.

Firms sought to reduce the risk associated with code changes and patches by engaging closely with key providers around the extent and robustness of their testing arrangements. Where their categorisation of systems as high or low risk was incomplete or outdated, they could not be sure that the right level of engagement or oversight would be achieved.
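
Keeping that categorisation complete and current is itself checkable. Below is a minimal sketch, assuming a system register holding a risk rating and a last-review date; all names and the annual review interval are hypothetical.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # assumed annual re-categorisation cycle
today = date(2020, 1, 13)

# Hypothetical system register.
systems = [
    {"name": "order_management", "risk": "high", "last_reviewed": date(2019, 11, 2)},
    {"name": "perf_attribution", "risk": None, "last_reviewed": None},
    {"name": "fx_pricing", "risk": "low", "last_reviewed": date(2018, 6, 14)},
]

# Flag systems with no categorisation, or one that has gone stale.
for s in systems:
    if s["risk"] is None:
        print(f"{s['name']}: no risk categorisation recorded")
    elif today - s["last_reviewed"] > REVIEW_INTERVAL:
        print(f"{s['name']}: categorisation outdated (last reviewed {s['last_reviewed']})")
```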

Some firms placed great weight on the tests carried out by their software providers before new code was rolled out. These firms did not always demonstrate a clear understanding of either the limits of such testing or how these tests matched up with the way that they themselves used the software. The firms which engaged with supplier user group forums said that this allowed them to improve their understanding of system developments and put extra controls in place to address the risks identified.

Other techniques firms used to manage the risks of technology change included phasing the implementation of non-critical changes by rolling them out gradually ahead of a full go-live, and quarantining or otherwise segregating new code until they had sufficient assurance about its robustness.
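
One common way to phase such a rollout, sketched below purely for illustration, is to assign portfolios to the new code path deterministically by hashing their identifiers, so the pilot group stays stable as the percentage is widened. The identifiers and percentages are our assumptions, not a description of any firm’s approach.

```python
import hashlib

def in_rollout(portfolio_id: str, rollout_pct: int) -> bool:
    """Deterministically bucket a portfolio into 0-99 by hashing its ID;
    the same portfolios stay in the pilot as rollout_pct increases."""
    bucket = int(hashlib.sha256(portfolio_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout_pct

# Week 1: route 10% of portfolios through the new tool; widen once stable.
for pid in ["PF-001", "PF-002", "PF-003", "PF-004"]:
    print(pid, "->", "new" if in_rollout(pid, 10) else "legacy")
```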

Customer expectations

Modelling tools were often central to firms’ investment and portfolio management processes, but firms were not always clear in describing the role these tools played. In particular, the triggers or circumstances which might allow portfolio managers to amend or overrule model outputs were not always well-defined or clearly documented.

Many firms told us that a factor in their selection of specific risk models was their belief that investment consultants and other intermediaries expected or preferred firms to be using these tools. This suggested that firms were sometimes using tools they were not fully committed to. It created uncertainty about how much weight portfolio managers actually placed on the outputs from these tools, and whether assets were consistently being managed in line with clients’ expectations. Firms did sometimes tell us that their clients were not always willing or able to engage at the necessary level of detail on how these tools would be used in practice.
