Is standardisation the key to navigating ETD reconciliations?

94% of sell-side firms say a lack of data standardisation is a major challenge and causes huge complexity across the entire post-trade process

A lack of data standardisation has long been a feature of the listed derivatives and post-trade industry, and, as our recent study found, it causes the industry a real headache. Respondents identified it as one of the biggest challenges facing listed derivatives reconciliations.

The root cause of data inconsistency is path dependency. The two biggest factors are disparate symbologies and counterparty codes, and these differences have been exacerbated by multiple vendors building systems whose data terminology is not based on ISO or trade-body standards.

The frustrating element is that a universal symbology has never been adopted by all. There have been many attempts to agree a universal code, but to date none has come to fruition. Instead, multiple standards coexist, such as Bloomberg tickers, Reuters RICs and each exchange's own codes. There have been moments when it looked as though everyone would migrate to a universal standard, but then the next shiny new system comes along, adding further layers to the jungle. A quick look back through history illustrates this clearly.

When GMI launched, it quickly became the dominant post-trade processor, yet it used different codes to the systems predating it. ISO 10383 had been introduced in 2003 to standardise Market Identifier Codes, and there was in fact a noticeable improvement.

Roughly 15 years later, though, ISINs were introduced under MiFID II and much of that improvement was undone. The problem was that, despite calls from the industry to follow existing methodologies, the ISIN regime followed an entirely new methodology that did not include expiries. This further complicated existing post-trade systems and how they reconcile with each other.

Overcoming disparate symbologies and counterparty codes is the key to breaking this path dependency. Vendors have realised this and are learning to adapt, providing features that mitigate the risk of the numerous symbologies in use. Software now translates different codes and symbols internally, reducing some of the burden placed on firms. The problem is that, with more fields and reporting requirements constantly being introduced, providers are simply reacting, and are unable to be proactive in formulating a standardisation plan that could be adopted by all.
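The translation step described above can be sketched in a few lines. This is a minimal, illustrative example only: the source systems, symbols and canonical keys below are invented for the sketch, not real reference data or any vendor's actual mapping.

```python
# Illustrative symbology translation: map each (source system, native symbol)
# pair to one canonical internal key, so positions reported by different
# systems can be matched against each other. All codes are hypothetical.
SYMBOLOGY_MAP = {
    ("bloomberg", "ESZ4 Index"): "CME:ES:2024-12",
    ("ric", "ESc1"): "CME:ES:2024-12",
    ("exchange", "ESZ24"): "CME:ES:2024-12",
}

def canonical_key(source: str, symbol: str) -> str:
    """Translate a source-specific symbol to the canonical key, failing loudly
    on anything unmapped rather than letting a break slip through silently."""
    try:
        return SYMBOLOGY_MAP[(source.lower(), symbol)]
    except KeyError:
        raise ValueError(f"Unmapped symbol {symbol!r} from {source!r}")

def reconcile(records):
    """Net position records from multiple systems by canonical instrument."""
    grouped = {}
    for source, symbol, quantity in records:
        key = canonical_key(source, symbol)
        grouped[key] = grouped.get(key, 0) + quantity
    return grouped

# Two systems report the same future under different symbologies; after
# translation they net to a single line.
records = [
    ("bloomberg", "ESZ4 Index", 100),
    ("exchange", "ESZ24", -100),
]
print(reconcile(records))  # {'CME:ES:2024-12': 0}
```

The design choice worth noting is that unmapped symbols raise an error rather than passing through unchanged: in a reconciliation context, a silent mistranslation is worse than a visible exception.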

[Image: "All square by 9am" report cover, featuring a Rubik's cube]

Our recent research report highlights some of the issues discussed above. The report, commissioned by Kynetix and based on a survey and a series of interviews with executives at more than 60 sell-side firms, argues that through investment in technology, data normalisation or standardisation, and automation, firms can navigate the 'data jungle' whilst achieving a new paradigm of risk reduction and efficiency.

In the report, we set out to benchmark the approaches being taken by different parts of the industry, to understand the drivers for investment, and to gain insight into how much automation is currently deployed to mitigate risk, versus the sheer number of manual operators and processes involved in derivatives reconciliations.

The report is free to download and can be accessed here.
