Derivatives boom ramps up operational pressure
Firms are required to raise the bar through the trade lifecycle, from order management through to post-trade confirmation, netting, settlement, and reconciliation.
The failures in recent weeks of Silicon Valley Bank (SVB) and Signature Bank, along with the wipeout of Credit Suisse's subordinated bondholders and the hasty ushering of the bank into the arms of UBS, have catapulted the banking sector onto the world's front pages. The ostensible cause of the meltdowns is soaring interest rate risk, arriving on the heels of an extended period of ultra-loose monetary policy. Amid increased uncertainty around loan books and deposits, a number of banks have been shown to have had less than a firm grip on their risk management activities.
These ongoing challenges throw a spotlight on the banking sector's wider approach to risk: 15 years after the financial crisis, many banks are still upgrading their risk systems and compliance. One manifestation is the continuing weakness of risk data. Across the industry, there remains a high level of variability in how effectively data is used in decision-making. One cause is a lack of capability in compliance teams, which many banks believe is stymieing efforts to marshal data, derive insights, and apply controls.
In a highly digitised financial market environment, few who work in banking underestimate the role that data needs to play in risk management. Risk and compliance teams increasingly understand that strong data governance, along with standardised data tasks and processes, is a precondition of effective risk management. Where many institutions fall short is in implementation. In managing data, too many banks are still hampered by blurred lines of responsibility across reporting hierarchies, a lack of clarity over ownership, and redundant storage of data assets, which in many cases creates overlapping records and prevents the creation of a single source of truth. A perennial challenge is data quality, with data often too inaccurate to be useful or presented in a variety of fields and formats.
Poor-quality data is inherently tied to increased regulatory risk. Reporting requirements under almost every major piece of financial regulation, from Basel III and IFRS 9 to MiFID II, EMIR, and FATCA, are increasingly data-heavy, requiring banks to manage, clean, and analyse large volumes of data before sharing them with regulators.
In the post-trade space, data-related activities such as derivatives reconciliations have become critical elements of the securities lifecycle, with regulation requiring the timely and accurate filing of transaction and financial data on a daily basis. Still, few firms meet the required standard: research from ACA Group, a governance, risk, and compliance (GRC) advisor, shows that 97% of reports under MiFIR/EMIR contain inaccuracies, and that each report contains, on average, 30 separate error types.
High error rates are not surprising, given the daily volumes in global markets and the complex contractual terms embedded in many securities, particularly in the derivatives markets. In the reconciliation process, for example, a typical bank or broker must compare multiple internal files against multiple external files across more than 20 venues. The process involves collecting and normalising data from fragmented sources and platforms, often with varying symbology and counterparty codes, and then processing multiple fields before matching across various functions and operations. At the same time, to ensure that reports sent to clients and regulators are accurate, trade breaks must be identified and addressed as promptly as possible, all in an environment in which many banks have yet to embrace automation. Indeed, 81% of Tier 1 banks still use spreadsheets as part of the core reconciliation process, according to a recent Acuiti whitepaper commissioned by Kynetix.
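To make the matching step concrete, the sketch below shows what a minimal reconciliation pass might look like in Python. The field names, the symbology mapping, the break types, and the price tolerance are illustrative assumptions rather than a description of any particular vendor or in-house system.

```python
from dataclasses import dataclass

# Illustrative mapping from venue symbology to internal instrument codes
# (hypothetical values).
SYMBOLOGY_MAP = {"EDZ4": "GE_DEC24", "FFF5": "ZQ_JAN25"}

@dataclass
class Trade:
    trade_id: str
    instrument: str
    counterparty: str
    quantity: int
    price: float

def normalise(trade: Trade) -> Trade:
    """Map venue symbology onto internal codes and standardise counterparty format."""
    return Trade(
        trade_id=trade.trade_id.strip(),
        instrument=SYMBOLOGY_MAP.get(trade.instrument, trade.instrument),
        counterparty=trade.counterparty.strip().upper(),
        quantity=trade.quantity,
        price=trade.price,
    )

def reconcile(internal, external, price_tolerance=0.0001):
    """Compare internal and external trade records and return a list of breaks."""
    breaks = []

    # Index the external (venue/counterparty) side by normalised trade ID.
    ext_index = {}
    for t in external:
        n = normalise(t)
        ext_index[n.trade_id] = n

    matched = set()
    for raw in internal:
        t = normalise(raw)
        other = ext_index.get(t.trade_id)
        if other is None:
            breaks.append({"trade_id": t.trade_id, "type": "missing_at_counterparty"})
            continue
        matched.add(t.trade_id)
        if t.instrument != other.instrument or t.counterparty != other.counterparty:
            breaks.append({"trade_id": t.trade_id, "type": "static_data_mismatch"})
        if t.quantity != other.quantity:
            breaks.append({"trade_id": t.trade_id, "type": "quantity_break"})
        if abs(t.price - other.price) > price_tolerance:
            breaks.append({"trade_id": t.trade_id, "type": "price_break"})

    # Anything received from the venue but absent internally is also a break.
    for trade_id in ext_index:
        if trade_id not in matched:
            breaks.append({"trade_id": trade_id, "type": "missing_internally"})

    return breaks
```

In practice the matching key is rarely a single clean trade ID; firms typically match on a combination of economic fields, which is where inconsistent symbology and counterparty codes drive much of the manual effort.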
Against this background, some market participants draw a somewhat artificial distinction in their understanding of breaks, dividing them into so-called genuine breaks and operational breaks. Genuine breaks are defined as those that create real risks for the bank, while operational breaks are drivers of inefficiency and excess costs. A common problem, bankers say, is that the volume of operational breaks is so high that there is a risk of entirely missing the more harmful genuine breaks.
Of course, both genuine and operational breaks need to be managed. When genuine breaks occur, the priority must be to identify the root cause and eliminate it to avoid the chance of repetition. Operational breaks, meanwhile, require more attention to data quality and data practices.
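Building on the break types in the previous sketch, a triage along these lines can be expressed as a simple rule: breaks that change the firm's economic position or its regulatory reports are routed as genuine, while the rest are queued as operational. The categorisation below is a hypothetical illustration, not an industry standard.

```python
# Break types treated as "genuine" in this illustration: they affect the firm's
# economic position or regulatory reporting rather than just its data hygiene.
GENUINE_TYPES = {
    "missing_at_counterparty",
    "missing_internally",
    "quantity_break",
    "price_break",
}

def triage(breaks):
    """Split breaks into genuine (real risk) and operational (inefficiency and cost)."""
    genuine = [b for b in breaks if b["type"] in GENUINE_TYPES]
    operational = [b for b in breaks if b["type"] not in GENUINE_TYPES]
    return genuine, operational
```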
Many of the pain points in data management can be seen as issues of governance. Responsibility for data assets is often dispersed, while data producers and consumers operate under different rules and standards. Banks have commonly failed to put in place overarching frameworks for data quality, and data analysis remains highly manual. Indeed, in the reconciliations space, no Tier 1 bank in our recent survey had fully automated the process.
The way to address these challenges is to continue refining data risk management frameworks. Through dedicated rules and processes, banks can ensure that data is properly described, that the organisation is designed to manage data effectively, and that the appropriate tools are in place to support data hygiene and data management. In a period of reduced confidence in the sector, there will be renewed urgency to put these mechanisms in place and so foster a greater sense of security and reliability.
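One practical element of such a framework is a declarative set of data quality rules that every reporting record must pass before submission. The fields and checks below, including the simplified LEI pattern, are hypothetical examples intended only to show the shape such checks might take.

```python
import re
from datetime import date

# Hypothetical field-level quality rules for a transaction report record.
RULES = {
    "lei": lambda v: isinstance(v, str) and re.fullmatch(r"[A-Z0-9]{20}", v) is not None,
    "trade_date": lambda v: isinstance(v, date) and v <= date.today(),
    "notional": lambda v: isinstance(v, (int, float)) and v > 0,
    "currency": lambda v: v in {"USD", "EUR", "GBP", "JPY"},
}

def validate(record):
    """Return the names of fields that are missing or fail their quality rule."""
    return [field for field, check in RULES.items()
            if field not in record or not check(record[field])]
```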