Publications

Pete Eggleston

April 30th – RTS 28 Deadline Day

“I love deadlines. I like the whooshing sound they make as they fly by”
Douglas Adams

MiFID II is here. By all accounts, January 3rd 2018 was similar to Y2K[1] day, in that everyone woke up, went to work and, generally speaking, everything carried on as usual. It is clear that the ‘soft’ launch of MiFID II has resulted in no discernible disruption from a liquidity or execution perspective, but there are a number of looming elephants in the room whose arrival was merely postponed, e.g. the additional 6-month grace period for assignment of LEI codes. So, those who were waiting for January 3rd to come and go in the hope that they could leave MiFID II behind them, and get on with their day jobs, are going to be disappointed. 2018, and probably beyond, will continue to have a significant MiFID II focus as much remains to be done.

One of the next key dates in the implementation timetable is April 30th, by which time institutions will need to have submitted their RTS 28 reports. RTS 28 encompasses many aspects of an institution’s best execution obligations, and represents a large data gathering, cleansing and reporting exercise. That is burdensome enough, but it is further complicated by ambiguity in what exactly needs to be reported, especially for an OTC market such as FX.

If we look at the RTS 28 Top 5 report alone, which is the only RTS 28 report where the legislation provides a specific template, then ambiguity exists even here, and can be summarised in the following areas:

a)       Venue vs Channel vs Counterparty

For the FX market, with a hybrid market structure of both quote- and order-driven activity, there is confusion over the definition of these terms. If you are executing an RFQ order over a multi-dealer platform (e.g. FXall or FX Connect) with a panel of 5 liquidity providers, then you could define the multi-dealer platform as the Channel, and the winning liquidity provider as the Counterparty. So, in this example, there is no Venue? But what if the multi-dealer platform is an MTF? Clearly, even in the simplistic case of an RFQ trade there is scope for confusion.

In the case of an algo trade that has been initiated via a multi-dealer platform with a bank, additional complications arise. The bank’s smart order router will be directing the algo child fills across multiple venues, so in this case the Channel, Counterparty and Venue, at least for each child slice of the algo, would appear clear. However, if the algo was spot and not linked to an underlying securities transaction, i.e. does not fall into the ‘Associated Spot’ category for MiFID II reporting purposes, then technically speaking this trade should not be included in RTS 28 reporting. But what if, once the algo had completed, forward points were then applied to the algo spot rate to roll the trade forward? The parent trade is no longer spot, and now does fall within MiFID II reporting requirements.

b)      Passive vs Aggressive

Again, for the hybrid world of FX, where there is still a very large proportion of quote-driven business, how should the definitions of passive and aggressive be applied? Reading the regulatory text would indicate that any trade which has paid bid-offer spread is technically an aggressive trade, whereas ‘earning’ spread would constitute a passive fill. There are conflicting views on this across the industry. For many of our clients, these fields are generally ignored for FX if they do not execute any of their business via orders or algos, or have direct market access. For orders and algos, however, the majority of liquidity providers do provide data on whether the order was filled passively or not. This data is not yet consistently available across the industry, or provided in a consistent format, but is becoming increasingly prevalent. A simple sketch of the classification this reading implies is shown below.
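
Under that reading (our interpretation of the text, not an official or industry-agreed definition), the classification can be as simple as comparing the fill to the prevailing mid:

```python
def classify_fill(side, fill_price, mid):
    """Classify a fill as aggressive (paid spread) or passive (earned spread).

    side: +1 for a buy, -1 for a sell. This mirrors one reading of the
    regulatory text; it is not an official definition.
    """
    paid = side * (fill_price - mid)
    return "aggressive" if paid > 0 else "passive" if paid < 0 else "at mid"

print(classify_fill(+1, 1.17012, 1.17010))  # buy above mid -> aggressive
print(classify_fill(-1, 1.17012, 1.17010))  # sell above mid -> passive
```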

c)       Directed

For many mandates, FX transactions are ‘directed’ to a specific counterparty under the terms of the IMA. Such transactions should be split out and identified in the Top 5 report. However, many asset managers net transactions across portfolios, the net execution result of which is then allocated back across the individual accounts within the block. This can result in complications whereby trades for non-directed accounts are included in a directed block, because there was a benefit from a netting perspective, so the parent block can no longer simply be reported in the Top 5 directed field. This would need to be done at the level below, i.e. individual allocations or child trades, so the concept of a multi-tier trade hierarchy is required, as sketched below.
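
A sketch of the structure this implies (field names are hypothetical, not a BestX schema): the directed flag lives on the allocation, not on the parent block, so reporting can roll up correctly:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Allocation:
    account: str
    notional: float
    directed: bool          # set per IMA at the account level

@dataclass
class BlockTrade:
    block_id: str
    allocations: List[Allocation] = field(default_factory=list)

    def directed_notional(self):
        # Only the directed child allocations count towards the directed field.
        return sum(a.notional for a in self.allocations if a.directed)

block = BlockTrade("BLK1", [Allocation("FundA", 30e6, True),
                            Allocation("FundB", 20e6, False)])
# Only FundA's 30m is reported as directed, even though it was netted in one block.
print(block.directed_notional())
```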

Other reporting requirements

RTS 28 is not just about supplying a Top 5 report. Analysis of the execution obtained across these Channels, Counterparties or Venues is also required, with a view to understanding whether there is consistency across allocated volume and performance. But the definition of performance is no longer simply ‘best price’. Indeed, the MiFID II definition of best execution refers to a range of factors, including price, some of which may be relevant to some institutions in the way they execute in a hybrid FX world, and some of which won’t be. Clearly, these factors need to be defined, prioritised and set in accordance with each institution’s best execution policy. Only when this has been done can any view of overall ‘performance’ be measured, aggregated and reported.

Over time it is fair to assume that these ambiguities will decrease as market consensus develops and further guidance from bodies such as ESMA is provided, especially once a review following the first reporting cycle is concluded. In the meantime, however, institutions are figuring it out for themselves. At BestX, our approach has been to take outside counsel advice from Linklaters[2], which has helped provide clarity on reporting requirements beyond the Top 5 report (e.g. the approach taken to the associated performance reports), and also to ensure that the reporting software is as flexible as possible to accommodate different interpretations and requirements.

BestX allows an institution to define exactly what execution factors are relevant for their specific business and best execution policy. This allows a customised measure of performance to be constructed across any entity, including Channel, Counterparty and Venue. This framework forms the foundation for our Regulatory Reporting module, which allows a client to fully customise and configure exactly what they would like to include in their RTS 28 Top 5 report and also generates the associated performance reports. For example, some clients may wish to generate Top 5 reports for Channel, Counterparty and Venue. Some clients have made the decision to include all spot transactions, regardless of whether the trades are associated or not. Given the delay in LEI code assignment, we also allow reports to be constructed without this official designation to at least ensure that the first round of reports in April can be generated.
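
To make the mechanics concrete, the sketch below shows one minimal way a Top 5 table could be assembled from a normalised trade record. Field names and the ranking-by-volume choice are illustrative assumptions, not the BestX schema; the RTS 28 template also requires percentages of passive, aggressive and directed orders per entity, of which two are shown here.

```python
from collections import defaultdict

def top_five_report(trades, entity_field="counterparty"):
    """Aggregate trades into an RTS 28-style Top 5 table.

    `trades` is a list of dicts with (hypothetical) keys: entity_field,
    'notional_eur', 'passive' (bool or None) and 'directed' (bool).
    """
    totals = defaultdict(lambda: {"volume": 0.0, "count": 0,
                                  "passive": 0, "directed": 0})
    for t in trades:
        row = totals[t[entity_field]]
        row["volume"] += t["notional_eur"]
        row["count"] += 1
        row["passive"] += 1 if t.get("passive") else 0
        row["directed"] += 1 if t.get("directed") else 0

    grand_volume = sum(r["volume"] for r in totals.values()) or 1.0
    grand_count = sum(r["count"] for r in totals.values()) or 1

    # Rank entities by executed volume and keep the top five.
    ranked = sorted(totals.items(), key=lambda kv: kv[1]["volume"], reverse=True)[:5]
    return [{entity_field: name,
             "pct_volume": 100.0 * r["volume"] / grand_volume,
             "pct_orders": 100.0 * r["count"] / grand_count,
             "pct_passive": 100.0 * r["passive"] / r["count"],
             "pct_directed": 100.0 * r["directed"] / r["count"]}
            for name, r in ranked]
```

The same function could be run three times, once each with the Channel, Counterparty and Venue field as the grouping entity, reflecting the configurability described above.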

It is clear that regulators are looking for evidence of a best efforts approach to satisfying the reporting requirements, so a pragmatic and flexible approach is probably a decent strategy in these early months of a post January 3rd 2018 world.

[1] For younger readers, this relates to January 1st 2000, when the world waited with bated breath to see if computers would continue to function

[2] Please contact us if you would like further information on this legal opinion (contact@bestx.co.uk)

Pete Eggleston

What are the factors that drive the cost of forward FX?

“Big Data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.”
Dan Ariely

As part of our ongoing quest to enhance our analytics, and to continue to meet our clients’ requests, we have been spending considerable time over the last few months researching ideas to model the expected cost arising from the forward point component of FX transactions. Such a model would complement our existing framework for estimating expected costs for the spot component.

This research is far from straightforward. The FX forward market is still largely voice-driven, often with significant biases in pricing arising from supply and demand or basis factors. This results in a general lack of high quality data with which to perform rigorous modelling. At BestX, however, we now have a unique set of traded data that allows for such research, and we hope this will provide the foundation for the production of such a model.

We have decided upon a two-phased approach. Phase 1 will be a relatively simple, yet pragmatic, extension of our existing parametric model for expected spot costs. We plan to launch this in Q1 to meet the initial need for a fair value reference point for the costs arising from forward points. Phase 2 is a longer-term project, which will take us down the road of a data-driven approach, as there are indications that a parametric model will have limitations when attempting to model the foibles of the forward FX market. We are already planning for this and have started research into using machine learning methods, including nearest neighbour algorithms, to cope with the complexity of this market. As part of this research, one of the initial pieces of work was to try to understand what the key drivers of FX forward costs actually are, as we are aware of the risks of utilising machine learning on big data sets without an understanding of the fundamentals. We have summarised the initial findings of this work here.
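
As a flavour of the Phase 2 direction, the sketch below shows how a nearest-neighbour regressor might estimate expected forward-point cost from trade features. The feature set (tenor, notional, time of day, liquidity bucket) and the toy data are assumptions for illustration, not a description of any production model.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training rows: (tenor_days, log10_notional_usd, hour_of_day,
# liquidity_bucket); y is the observed forward-point cost in bps.
X = np.array([
    [30, 7.0, 9, 1],
    [90, 7.5, 14, 1],
    [180, 8.0, 16, 2],
    [365, 6.5, 2, 3],
])  # in practice: many thousands of observed trades
y = np.array([0.4, 0.7, 1.2, 2.5])

# Scale features so distance is not dominated by tenor_days,
# then average the k nearest historical trades.
model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=2))
model.fit(X, y)

# Expected cost for a new 6-month trade of ~USD 30m at 10am, bucket 2.
print(model.predict([[180, 7.5, 10, 2]]))
```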

Pete Eggleston

Total Transparency

“Not everything that can be counted counts, and not everything that counts can be counted.”
Albert Einstein

The demand for transparency within the execution process has increased significantly over recent years within the FX market. Indeed, BestX was founded to try to help meet this demand and we have adopted this theme within everything we do. We set out to build a market-leading set of analytics and ensure that all of our clients have total transparency around the workings of these models. Such analytics can only add real value if they are powered with the highest quality market data. Transparency around the market data inputs used is therefore also critical and we have invested significantly in order to build a comprehensive view of the FX market. This article explores some of the thinking behind our approach and why we believe it is important to generate the most broad, independent and representative view of the market.

In an OTC market such as FX one of the biggest challenges when trying to compute accurate execution metrics is gathering a data set which fulfils the following criteria:

-          representative
-          clean
-          independent
-          normalised
-          timely

Below are some of the common themes and challenges in building such a data set.

·       Breadth and independence of data

One of the most common topics when discussing market data and benchmarking is the breadth of sources used and the independence of those sources. Independence, and the complete absence of any bias, is critical in delivering a market standard for FX best execution metrics. Computing a mid based on a broad array of liquidity providers globally is far more valuable than generating a potentially skewed mid based on a specific sector of the market. For example, if a mid were computed based on liquidity sources biased towards non-bank high-frequency traders, this would clearly be inappropriate for use in estimating costs for large institutional asset managers. BestX takes market data from over 100 liquidity providers, supplied through a number of pipes, including Thomson Reuters, ICE and EBS. Thomson Reuters is not the only source and, even if it were, it would not be a single price, as data from all of the individual liquidity providers is accessed.
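
As a simplified illustration of the aggregation step (the actual BestX methodology is more involved and is not shown here), a composite mid can be formed from many contributors’ quotes:

```python
from statistics import median

def composite_mid(quotes):
    """Compute a composite mid from many contributors' top-of-book quotes.

    `quotes` is a list of (bid, ask) tuples, one per liquidity provider.
    Taking the median of per-provider mids limits the influence of any
    single skewed contributor; this is an illustrative choice only.
    """
    mids = [(bid + ask) / 2.0 for bid, ask in quotes if ask >= bid]
    if not mids:
        raise ValueError("no valid quotes")
    return median(mids)

# Example: three providers quoting EURUSD.
print(composite_mid([(1.17010, 1.17018), (1.17008, 1.17016), (1.17012, 1.17020)]))
```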

·       Generating benchmarks based on client specific liquidity providers

This is an interesting point and one which we debate frequently. Aside from the fact that regulations such as PRIIPs stipulate gathering data from as representative a set of sources as possible, we believe that for the institutional market it is important to portray a view of the total market. To compute costs based only on a client’s specific liquidity sources is self-reinforcing and, it could be argued, does not satisfy best execution, as perhaps there are other sources out there that a client could access but currently doesn’t. In addition, there is a growing demand for a single level playing field across which to compute costs, which could be used, for example, to meet demands for peer analysis. If the market data set were tailored for each client in this universe then we would always be comparing apples and oranges.

At BestX, we do also provide the ability for clients to submit their quote data, which we will use as additional benchmarks if so desired, as some best execution policies require this. However, we provide these metrics in addition to the spread to mid costs based on the full market-wide data set.

·       Internal pools

Even if it were available, would data from liquidity providers’ internal pools add any value when trying to assess price discovery and generate a market mid? We would argue not. The price-forming data and flow is available via the lit electronic marketplace, where liquidity providers risk manage the ‘exhaust’ of their inventory. The activity of internal pools is interesting, but it would not add value in determining the market mid at any one point in time; e.g. having offsetting trades match and internalise wouldn’t necessarily change where the external market is trading.

There is clearly significant value to the overall best execution outcome from internalisation, and we measure this value via other factors (e.g. post-trade market revaluation and impact metrics).

·       Timeliness of data

There is a lot of focus on market data sources and independence, and rightly so. In addition, however, there is also a requirement to ensure that data is timely, especially in the FX market. Using stale data, for example data snapped at 1- or 5-minute intervals or worse, can generate erroneous cost and slippage metrics. It is imperative to be gathering data at millisecond frequency and in real-time, to allow for immediate transaction analysis if required.

·       The FX Tape and other potential sources

The recent announcement of the launch of a tape for the FX market is an interesting development. Clearly, this is an initial step and there are many questions still around exactly what will be available, at what cost and with what lag. It could provide BestX, and all other providers, with an additional ‘official’ source of traded price data, although for it to be truly representative it will require all of the large liquidity providers to participate fully. This would, obviously, be extremely valuable and could be used in addition to the broad market data set we already consume and aggregate.

Equally we will be following the evolution of what trade data becomes available via the APAs once MiFID II goes live. It is unclear at this stage exactly what will be available and how timely the data will be, but it could provide an additional source. The trade data that became available following Dodd-Frank disappointed to some extent as it wasn’t rich enough to use for rigorous analytical purposes, so we are reserving judgement on the potential data riches that may flourish from MiFID II until we can actually see it.

·       Credit

We don’t generate pools of liquidity adjusted for different credit quality or capacity. The philosophy is to generate a representative picture of the institutional market that can be broadly applied to compare and contrast performance and cost metrics.  Additional benchmarks can be customised on a bespoke basis to service specific liquidity pools if required.

OTC markets make the provision of representative, accurate TCA metrics difficult. FX doesn’t have a National Best Bid and Offer (NBBO), there isn’t a source of public prints, and there is little consistency across the industry in terms of what data is made available. The current situation may change over the next few years, for example via the FX tape or a shift to an exchange-based market structure, but that seems unlikely to happen in the medium term. We have taken the pragmatic, and rigorous, approach of gathering as much high-quality data as we can and using it in a thoughtful way across a suite of analytics. One of the core tenets of BestX is the delivery of an analytics product that is totally free from any conflict or bias. Independence and total transparency are therefore critical, both in terms of the analytics and the input market data.


A New Beginning for Fair & Transparent FX Markets

The release of the final Global Code of Conduct (“Code”) on 25 May 2017 is a watershed moment for the foreign exchange (FX) market.  The FX market, which is a global decentralized market for the trading of currencies, is the largest market in the world in terms of trading volume, with turnover of more than $5 trillion a day.  The Code was developed by the Foreign Exchange Working Group (“FXWG”) working under the auspices of the Markets Committee of the Bank for International Settlements (“BIS”).  The Code was also created in partnership with a diverse Market Participants Group (MPG) from the private sector.  A Global Foreign Exchange Committee, formed of public and private sector representatives, including central banks, will promote and maintain the principles.  

The Code establishes a common set of 55 principles for good practice in the FX market, including ethics, transparency, governance, information sharing, electronic trading, algorithmic trading and prime brokerage.  The Code took almost two years to complete, with the first half issued in May 2016.  Our article on the first phase of the code is available here.  Market participants will need time to conform their practices with the principles of the Code and it is anticipated that most will need approximately 6-12 months to do so, a time frame set to align with new requirements for transparency and best execution under MiFID II, which comes into force on 3 January 2018. 

The Code is organized around six primary principles:

  1. Ethics

  2. Governance

  3. Information sharing

  4. Execution

  5. Risk Management and compliance

  6. Confirmation and settlement processes

The Code comes on the heels of difficult times in the FX markets, with the stated goal of restoring public faith in the market in the aftermath of the FX scandal which resulted in $11 billion in fines being levied worldwide on some of the largest financial institutions, as well as another $2 billion in settlements in related class action litigation.  Moreover, custodian “excessive” FX profit cases have also resulted in around $1.2 billion in settlement costs and penalties, and multi-year supervised remediation.  

“All of us recognise the need to restore the public’s faith in the foreign exchange market.  We share the view that the global code plays an important role in assisting that process and also in helping improve market functioning,” said the Reserve Bank of Australia deputy governor Guy Debelle, who headed the FXWG.

The FX scandal was the subject of investigation by government authorities across the globe, including the Federal Reserve, the Department of Justice, and the Commodity Futures Trading Commission (“CFTC”) in the US and the Financial Conduct Authority (“FCA”) in the UK.  The investigations alleged that for almost a decade, traders coordinated trading strategies in order to manipulate benchmark rates and price fix bid/ask spreads, as well as trigger client stop-loss orders and limit orders.  The Code deals with these manipulative practices head-on, making clear they have no place in a fair marketplace.

Several banks are also facing or have faced regulatory and civil action over inappropriate use of “last-look,” with fines in the hundreds of millions so far for failures to be sufficiently transparent around the process.  The new Code does not ban the practice, but requires that market participants “should be transparent regarding its use.”

Although the Code is voluntary in nature and non-enforceable, market participants are paying close attention, especially as regulators have used its predecessor, the Non-Investment Products (NIPs) Code, as well as other relevant guidance such as the ACI Model Code, the 2001 FX Good Practice Guidelines (which was developed by 16 leading FX market intermediaries), and the 2008 Federal Reserve Bank of New York’s Guidelines for Foreign Exchange Trading Activities as a basis for litigation.  

For example, the UK’s FCA, which levied almost GBP 1.4 billion in fines with regard to the FX scandal, dedicated an entire annex of its Final Notices to “relevant codes of conduct”, citing the NIPs Code, the ACI Code, as well as the Good Practice Guidelines, and specifically highlighted that the relevant codes set out the importance of firms requiring standards that “strive for best execution for the customer” when managing client orders.

And just this past week a Consent Order issued by the NY Department of Financial Services (“DFS”) pointed to the Fed’s 2008 guidance that identified the need for dealers to protect client confidentiality and avoid situations involving or appearing to involve trading on nonpublic information.  The NY DFS foreshadowed the theme of the new Code, emphasizing that it is precisely because “there is no single regulator for the FX market, it is all the more essential that financial institutions take an active hand in supervising this business line.”

With this spirit of adherence in mind, the FXWG produced alongside the Code an essential Report on Adherence to the FX Global Code (“Report”), setting out a framework to promote awareness of and incentivise adherence to the Code’s standards.  This Report emphasises that it is the responsibility of market participants to take appropriate steps to adopt the Code in their day-to-day practices and culture, including establishing appropriate mechanisms to monitor this process.  This will include adopting new technologies that allow for effective compliance and cost analysis monitoring.

A highlight of the Code is the new Statement of Commitment (Annex 3 to the Code) that market participants can use publicly, or bilaterally, to support key objectives of the Code such as enhancing transparency, efficiency and functioning in the FX Market.  The Statement of Commitment (“Statement”) provides a single, common basis by which each market participant can represent that it: (i) supports the Code and recognises it as a set of principles of good practice for the FX Market; (ii) is committed to conducting its FX Market activities in a manner that is consistent with the principles of the Code; and (iii) considers that it has taken appropriate steps, based on the size and complexity of its activities, and the nature of its engagement in the FX Market, to align its activities with the principles of the Code.  We anticipate that market participants who embrace the Statement will invest in new technologies such as FX best execution tools to support their Statement, and will also attract market share through this commitment to transparency and good practice.
The Code also seeks to accelerate the provision of accurate time stamping of orders and transactions by market participants, both at the time of acceptance and execution (see Principle 36 in Risk Management and Compliance).  Market participants “should apply sufficiently granular and consistent time-stamping so that they record both when an order is accepted and when it is triggered/executed.”  Although they don’t have to provide this detail by default, they do have to make it available promptly on request.  If time stamping is not performed both at arrival and at trigger/execution, market participants must be clear on this fact to their clients, allowing clients the freedom to move their business to firms that provide more transparency around execution.

The Code specifically recognises the importance of time stamps in enabling an “effective audit trail for review and to provide transparency to Clients,” which in turn should accelerate the direction of travel already prescribed within MiFID II, delivering the essential ingredient for any post-trade analysis.  With accurate time stamps, the transparency available to Market Participants through sophisticated technology, such as BestX’s Pre- & Post Trade analytics, can move to another paradigm, enabling comparison of actual and expected costs against a representative market data set sourced from multiple independent providers with millisecond granularity.  Where supplied, multiple time stamps per transaction can also be consumed to measure the slippage from order inception through to market arrival and execution.  Such “implementation shortfall” measures the cost associated with any latency around order processing.  Our unique “Trade Inspector” screen also provides a “Fill Speed” measure directed at algorithmic trades, representing the average time it takes to fill 1 million USD of notional.
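
As a toy illustration of how multiple time stamps decompose slippage (the function, field names and the simple mid-based convention are assumptions for illustration, not the BestX methodology):

```python
def shortfall_decomposition(side, mid_at_inception, mid_at_arrival, exec_price):
    """Decompose implementation shortfall, in bps of the inception mid.

    side: +1 for a buy, -1 for a sell. Positive numbers are costs.
    - latency cost: market drift between order inception and market arrival
    - execution cost: slippage between the arrival mid and the achieved price
    """
    to_bps = 1e4 / mid_at_inception
    latency_cost = side * (mid_at_arrival - mid_at_inception) * to_bps
    execution_cost = side * (exec_price - mid_at_arrival) * to_bps
    return {"latency_bps": latency_cost,
            "execution_bps": execution_cost,
            "total_bps": latency_cost + execution_cost}

# A buy order: mid 1.17000 at inception, 1.17004 at arrival, filled at 1.17010.
print(shortfall_decomposition(+1, 1.17000, 1.17004, 1.17010))
```
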
Market participants have also focused on the explicit right to pre-hedge client orders when acting as a principal, with the caveat that this must be done fairly and with transparency.  Pre-hedging is the management of the risk associated with one or more anticipated client orders, and though it carries risks of front running, it is designed to benefit the client in connection with such orders by allowing the principal to manage its risk.  To make this principle fair and manageable, both buy-side and sell-side are likely to invest in technological tools to monitor adherence to best execution while still allowing for the practical necessities of pre-hedging.

Most significant among the principles is Principle 14, that the “Mark Up applied to Client transactions by Market Participants acting as Principal should be fair and reasonable.”  Mark up is the spread or charge that may be included in the final price of a transaction in order to compensate the market participant for a number of considerations, including risks taken, costs incurred, and services rendered to a particular client.  This Principle comes at the same time that MiFID II requires enhanced costs and charges disclosure, noting in Recital 79 to the Delegated Regulation that such disclosure “is underpinned by the principle that every difference between the price of a position for the firm and the respective price for the client should be disclosed, including mark-ups and mark-downs.”  Whether the FX market will go as far as other markets in transparency remains to be seen, but one thing is certain, and that is that technology will be necessary to meet this principle.  The Code is clear that “Market Participants should have processes to monitor whether their Mark Up practices are consistent with their policies and procedures, and with their disclosures to Clients. Mark Up should be subject to oversight and escalation within the Market Participant.”  Independent FX software with tools such as automated reports and exception reporting that analyse actual against expected costs will help both sides of a transaction make certain that best execution has been achieved and will become essential to those seeking adherence to the Code.   
 
Overall the Code is a significant step forward for the FX markets.  Yes, there have been global codes before.  Firms have signed up to them with a signature but not with behaviour, never expecting to be called to task or to have that code set the standard against which regulators measured their conduct.  This Code will be different, not just because it comes on the heels of the FX scandal, but because that scandal and the evolution of Fintech have created a world of technology that makes it possible to monitor adherence to the Code, including advanced communication monitoring, and independent best execution monitoring and transaction cost analysis software.  The best of these technologies are cloud based, and easy to deploy and use, making monitoring simpler, easier and more affordable.  There is every hope, therefore, that this Code will be supported not just by statements of commitment but by real, measurable change.

Pete Eggleston

Pre-Trade Analysis – Why Bother?

“It is not what we choose that is important; it is the reason we choose it.”
Carolyn Myss

Best execution is not simply about measuring transaction costs, and other relevant metrics, after a trade has been executed. Best execution is a process, whereby informed decisions are made throughout a trade’s lifecycle in order to achieve the best possible result for the client. Clearly, a key stage in the trade lifecycle is ‘pre-trade’, which we will explore in more detail in this article.

As we have touched upon in previous articles, the modern foreign exchange market is a complex beast, providing participants with many different methods of execution. For example:

1.       Risk transfer over the phone
2.       Request for Quote (RFQ) on a multi-dealer platform
3.       Request for Stream (RFS) on either multi-dealer or single dealer platforms
4.       Algorithmic execution

Within each of these methods, there are a multitude of factors, and therefore additional decisions, to consider. For example, if you are employing RFQ, how many liquidity providers should you request quotes from and which ones? Or, if you are considering algorithmic execution, how do you select from the extensive range of products now available, and when a specific product is chosen, how should you select the parameters to use? In addition, do you want to access the market directly and have your liquidity provider place orders on your behalf, or do you want to simply execute with a counterparty as principal? If the former, are there specific venues you would like to access? The decision making process can become quite complex, analogous to deciding which chain of coffee shops to pop into on the way to work, deciding upon Starbucks and then having to select from the fatuous list of types of coffee, milk, sizes, temperature and strengths.

In our view, best practice is not to exclude any specific execution method, although also not to create a Starbucks situation of too much choice, which can result in paralysis in decision making! It’s ok, I’ll just have a Tetley’s instead. Each method can add value, and be the appropriate choice, for a given trade, with specific trading objectives, within a particular set of market conditions. There may be occasions where a large block of risk needs to be executed quickly, and quietly, and in such cases voice risk transfer may be appropriate with the optimal liquidity provider, who can warehouse and manage such inventory. There may be other occasions where the objective is to minimise spread paid, and selecting an appropriate algo may be the optimal solution. The added complexity of selecting a specific product from a specific provider should not be a deterrent to having algos on your ‘menu’ of execution methods. Such products can add significant benefits to the best execution process in terms of cost savings.

Analytics, data and technology can help simplify this process, and in particular pre-trade analytics.

MiFID II, and other initiatives such as the Global Code of Conduct, do not provide a detailed specification of what is expected or required when it comes to pre-trade analysis, at least from a best execution perspective (N.B. we’re not covering here the pre-trade reporting and transparency aspects of MiFID II; we are simply focusing on how pre-trade analysis can help deliver against the definition of best execution). In the absence of anything official, we thought it might be useful to put some thoughts together on what best practice may look like, at least for FX in the first instance.

1.       Coverage

It doesn’t seem to make sense to perform value-added pre-trade analysis on every single trade. Execution desks trade hundreds of FX transactions every day and it is not practically feasible to conduct what-if analysis on every single order. This is where the positive feedback loop from the post-trade process should cover the majority of the smaller, or more liquid, tickets, as discussed in previous articles[1][2]. A periodic assessment of execution performance allows checks to be carried out on whether any further changes need to be made to manage and optimise the decisions for the bulk of the flow. Having said that, if it is possible from a technology perspective, it would be valuable to have a pre-trade benchmark, such as the fair value expected cost, calculated for every trade to allow an ex-post comparison.

So, let us focus for now on value-added pre-trade analysis, defined as the case where the user performs scenario, or what-if, analysis on a specific trade, and define the universe as larger trades and trades in less liquid currency pairs. Guidelines for defining what constitutes a larger or less liquid trade could be included in an institution’s best execution policy; a minimal version of such a rule is sketched below.
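
As an example of how such a policy rule might be encoded (the thresholds and the liquidity tier are hypothetical placeholders, not recommendations):

```python
# Hypothetical policy parameters; a real best execution policy would set these.
LESS_LIQUID = {"USDTRY", "USDZAR", "EURHUF"}   # illustrative tier only
LARGE_NOTIONAL_USD = 50_000_000

def needs_pretrade_analysis(pair, notional_usd):
    """Flag trades that warrant value-added (what-if) pre-trade analysis."""
    return notional_usd >= LARGE_NOTIONAL_USD or pair in LESS_LIQUID

print(needs_pretrade_analysis("EURUSD", 75_000_000))  # True: large ticket
print(needs_pretrade_analysis("USDTRY", 5_000_000))   # True: less liquid pair
```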

2.       Analysis to be performed

Timing of trade

This is obviously only of interest for trades with discretion around timing. Many FX trades are executed without this discretion, e.g. a 4pm WM Fix order or where a Portfolio Manager requires immediate execution to attain a specific current market level. However, if there is discretion, then the impact on cost can be significant. Pre-trade analytics should allow a user to compare costs for different execution times over a given day. For example, on days with relatively low volatility and little price direction it may be beneficial to wait and execute during times of higher liquidity. This issue of market risk is covered later as taking into account potential ‘opportunity cost’ is clearly critical in such decision making.

Sizing of trade

Another common theme that requires analysis is determining the ‘optimal’ size to trade. Again, there may be little discretion here, but if there is flexibility, then scenario analysis can add value given how costs fluctuate by size. The issue can be fundamentally thought of as ‘how quickly can the market digest my risk’. There is often a misconception that the FX market is so deep and liquid that such questions really shouldn’t be a consideration, often citing the BIS survey’s $5 trn of volume traded per day. However, in reality, we often see examples where relatively small tickets can sometimes create significant market impact and footprint. The FX market is generally liquid compared to other asset classes, but it is also fragmented with a lot of liquidity recycled across venues and liquidity providers. One could argue that the issue of declining risk appetite, and hence inventories, at market makers due to the regulatory environment may start to reverse given the changed administration in the US, which may help improve the conditions for executing larger sizes. However, it is clear, that care should be taken when determining the notional sizes to execute, even for liquid pairs. Pre-trade analysis on costs by size, and also information on prior executions of similar sizes to see what has worked well and what hasn’t at different times of the day, can be extremely valuable.

Execution method

As alluded to in the introduction, there are now many methods of execution available. We have seen a significant increase in the use of algos across both institutional and corporate clients, which in itself creates the problem of product selection. Such products can provide benefits in the form of cost savings, when viewed on an overall performance basis net of fees. However, there are risks, such as the obvious one that the market simply moves against you whilst the algo is working. This market risk is part and parcel of working any order, so some form of quantification of the possible cost of this is useful in a pre-trade environment to allow an informed decision to be made. Risk transfer may be preferable if the market conditions are unfavourable for working your order via an algo. Having the market move away from you may be simply down to bad luck and the random walk of the FX market, but not always. If your order is being worked in a way that is generating signalling risk[3] then there may be market participants trading ahead of your order, resulting in less favourable execution.  This may happen for many reasons, including through poor product design, simplistic smart order routing, inappropriate sizing, incorrect product selection for the time of day and currency pair. Having metrics available in a pre-trade environment that, for example, quantify market footprint and signalling risk for similar trades in the past can help in the selection of execution method and product to mitigate such risks.

Defining duration

A common question when deciding to trade over a period of time is “how long?”, especially if the trade does not have a specific objective of tracking a particular benchmark. For example, when trading an algo over the WMR fixing window, with the specific objective of minimising tracking error to the Fix, the duration should match the window. Or, if a passive equity portfolio is rebalancing and the objective is to achieve as close as possible to an average rate over the window of time that the equity exchanges are open, then the duration of the FX trade should match. However, if there is discretion over setting the duration, then pre-trade analysis can add value, as there are conflicting forces at play. If you trade too quickly, you may create unsatisfactory market impact, whilst minimising the time that the market has to move against you (defined as opportunity cost). Equally, if you trade too slowly, you may minimise market impact but run significant market risk, especially in a high volatility environment, potentially resulting in adverse opportunity cost. Figure 1 below illustrates the conflict.
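
A stylised way to express the same conflict algebraically (illustrative functional forms only, not the BestX cost model): impact falls with duration while risk rises, so total expected cost has an interior minimum.

```latex
% Stylised duration trade-off (assumed functional forms):
\mathrm{Cost}(T) \;=\; \underbrace{\alpha\,\sqrt{Q/T}}_{\text{market impact}}
\;+\; \underbrace{\lambda\,\sigma\,\sqrt{T}}_{\text{opportunity cost (risk)}},
\qquad T^{*} \;=\; \arg\min_{T}\,\mathrm{Cost}(T)
```

Here Q is the order size, σ the prevailing volatility and λ a risk-aversion weight; higher volatility pulls the optimal duration T* down, consistent with the netting discussion below.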

Netting

To net or not to net, that is the question. Unfortunately, it is not an easy question to answer. There is no simple yes or no; it really does depend on a number of factors, including available liquidity, and therefore spread cost, together with prevailing market volatility. As above, there are once again competing forces at play. If liquidity is good, and volatility is relatively high, then it may make sense not to wait too long for offsetting orders, as the opportunity cost from waiting could more than outweigh the potential cost savings from crossing spreads less frequently. If, however, volatility is relatively low, and liquidity is poor, then it may make sense to wait to net orders, as in this scenario the opportunity cost may be less than the spread savings. This gross simplification is portrayed graphically in Figure 2 below.

So, in essence, the answer is ‘it depends’. It would therefore be valuable to have some form of netting analysis incorporated within the pre-trade stage of the process to help evaluate this on a case by case basis; a crude version of such a calculation is sketched below.
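
A minimal sketch of such a netting evaluation (the square-root-of-time opportunity-cost proxy and all parameter names are illustrative assumptions, not a BestX calculation):

```python
import math

def expected_netting_benefit_bps(half_spread_bps, vol_bps_per_day,
                                 wait_days, p_offset):
    """Crude expected benefit (bps) of waiting for offsetting orders to net.

    half_spread_bps : spread cost avoided on the netted portion
    vol_bps_per_day : prevailing volatility, in bps per day
    wait_days       : expected wait for the offsetting order
    p_offset        : probability an offset arrives within that window
    Opportunity cost is proxied by a one-sigma square-root-of-time move.
    """
    spread_saving = p_offset * half_spread_bps
    opportunity_cost = vol_bps_per_day * math.sqrt(wait_days)
    return spread_saving - opportunity_cost

# Low vol and a decent chance of an offset: waiting looks attractive (> 0).
print(expected_netting_benefit_bps(half_spread_bps=1.0, vol_bps_per_day=0.5,
                                   wait_days=0.25, p_offset=0.6))
```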

3.       Results storage

So, you’ve done all the analysis and executed the trade. Now what? In our view, best practice should be that such analysis is saved and stored for the specific trade. When you go back into your post-trade analysis, how valuable would it be to have the trades tagged with the associated pre-trade analysis you performed? This then allows a comparison of performance on a post-trade basis with the pre-trade analysis, e.g. did choosing that particular algo perform as expected? This feedback loop is valuable as it allows the decisions to be assessed and then adjusted in the future to improve the result even further. Spending the time to perform pre-trade analysis is not about ‘ticking a box’, it should be time well spent to help add additional value to the execution process.

Conclusion

Pre-trade is a core component of the best execution process. The increasing focus on best execution from a regulatory perspective has propelled pre-trade into a more mandatory status, rather than a ‘nice to have if we have the time’, although one could argue it was never merely a ‘nice to have’ given the value it can bring to the execution result for the client. However, everyone is busy, very busy, all of the time, so incorporating pre-trade in a more systematic fashion requires technology to automate as much as possible. Trades should be prioritised such that only those where significant value can be added are focused on. And you should learn from past performance. Not necessarily from a machine-learning perspective, but simply by having at your fingertips previous experience, summarised in a form that allows quick, informed decisions to be made. Improving execution systematically requires the use of ‘smart data’, not just ‘big data’.

[1] “Feedback loops and marginal gains – using TCA to save costs and improve returns”, Pete Eggleston, BestX, Oct 16

[2] “Applying the Pareto Principle to Best Execution”, Pete Eggleston, BestX, Feb 17

[3] “Signalling Risk – is it a concern in FX markets?”, Pete Eggleston, BestX, July 2016

Pete Eggleston

Applying the Pareto Principle to Best Execution

If you're Noah, and your ark is about to sink, look for the elephants first, because you can throw over a bunch of cats, dogs, squirrels, and everything else that is just a small animal and your ark will keep sinking. But if you can find one elephant to get overboard, you're in much better shape.
Vilfredo Pareto

BestX launched its first product last September, delivering a comprehensive set of analytics and reporting for post-trade best execution in FX. The software was designed to satisfy internal and external best ex requirements, including regulatory reporting requirements, often referred to colloquially as ‘box ticking’. Perhaps unfairly, given this is a vital component of the fiduciary responsibility of asset managers to asset owners and, more broadly, of increasing importance to all FX market participants given the Global Code of Conduct and other initiatives. However, this article seeks to explore the value that the software can bring over and above the core ‘box ticking’ requirements, which we have explored in previous articles, expanding upon the article published last October on improving the execution process. We return to this subject and expand upon it using the experience we have gathered over the last few months witnessing how our clients are using the BestX product. Fortunately, we designed the software to be very flexible, which has proven essential given the creativity of our clients in employing the product to make tangible cost savings. From first principles, the philosophy behind the software design was to empower clients to ‘make informed decisions’, and to give clients the ability to use the software to make tangible cost savings. We explore some of these practical use cases further in this article.

As previously discussed in the October 2016 article, there are many different factors to the execution process, each of which can be refined to provide further improvements in efficiency and cost. We’ll work through some examples below.

1.    Execution Method Selection

One of the most significant structural changes to the FX industry is the increased range of execution methods available to clients, from risk transfer over the phone, to RFQ or RFS on multi-dealer EMS platforms, single dealer platforms, algo execution and direct market access. Our philosophy is that best execution warrants having a menu of options available, to allow different choices to be made depending on the prevailing liquidity conditions and trading objectives. However, at the core of the process should be the ‘go to’ method for the majority of flow, at least in normal market conditions. How do you decide upon this method? Clearly, measuring costs and execution performance rigorously and accurately, whilst performing tests to see if different approaches add value, is one way. We have found clients using BestX to compare execution methods, e.g. multi-dealer EMS vs single liquidity providers, and finding considerable cost savings (of the order of several million USD per year). There can be a general perception that FX spreads are so tight that, if a client is already executing in a very sophisticated fashion, surely there can only be fractions of basis points to be saved? Perhaps, although annual turnover does not need to be particularly large for such savings to make a significant impact on the bottom line.

2.    Counterparty Selection

Selection of counterparties is an obvious area for performance improvement and many clients have used either in-house analytics or external providers to help with this for some time. Different liquidity providers have different strengths, for example in different geographies, or through different client franchises or technological advances. Identifying these strengths and allocating business accordingly is where we see the majority of the BestX use cases in this field. Traditional cost analysis, however, only gets you so far and may result in potentially erroneous conclusions. The BestX Expected Cost analytics have proven particularly valuable here, by allowing clients to compare costs across a consistent and level playing field, taking into account the relative difficulty of the business that each counterparty has executed.

A simple example is illustrated below, where in Figure 1 we display the average spread costs for 4 different Liquidity Providers. A cursory inspection of the results may lead to the conclusion that Liquidity Provider 3 has performed the worst over this period, as their Actual Spread Cost incurred is the highest.

Figure 1: Example Actual Spread Costs by Liquidity Provider

However, let’s now compute the Expected Costs, or ‘fair value costs’, of the trades that each counterparty executed, to allow a fairer comparison. We add these results to the chart plotted in Figure 2, and we now see a very different result. It would appear that Liquidity Provider 3 executed the most difficult, or expensive, business as measured by the BestX Expected Cost measure. Taking this into account, and measuring which counterparty outperformed the Expected Cost measure, Liquidity Provider 3 actually delivered the best execution performance and Liquidity Provider 1 underperformed. A minimal sketch of this comparison follows Figure 2.

Figure 2: Example Actual vs Expected Spread Costs by Liquidity Provider
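
A minimal sketch of the normalisation just described (field names are assumed for illustration; the BestX Expected Cost model itself is not shown):

```python
def performance_vs_expected(trades):
    """Average actual minus expected spread cost (bps) per liquidity provider.

    `trades`: iterable of dicts with assumed keys 'lp', 'actual_cost_bps'
    and 'expected_cost_bps'. Negative = outperformed the fair-value cost.
    """
    sums, counts = {}, {}
    for t in trades:
        lp = t["lp"]
        sums[lp] = sums.get(lp, 0.0) + (t["actual_cost_bps"] - t["expected_cost_bps"])
        counts[lp] = counts.get(lp, 0) + 1
    return {lp: sums[lp] / counts[lp] for lp in sums}

trades = [
    {"lp": "LP1", "actual_cost_bps": 0.8, "expected_cost_bps": 0.5},
    {"lp": "LP3", "actual_cost_bps": 1.2, "expected_cost_bps": 1.6},
]
# LP3 shows the higher actual cost but beats its expected cost:
# {'LP1': 0.3, 'LP3': -0.4}
print(performance_vs_expected(trades))
```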

Businesses are dynamic: franchises change, technology improves, staff turn over. It is therefore essential to monitor such performance over time. This is obviously a very simple example, but we have seen use cases where clients have drilled into performance by currency pair, to help determine which liquidity providers consistently excel in specific pairs such as Scandis or EM. We’ve also seen examples where performance by time zone has been assessed, to see if any changes should be made to further enhance performance. To allow such bespoke and timely analysis, it was essential that we delivered a flexible user interface, providing clients with self-sufficiency in terms of analysis and reporting.

3.    Channel Selection

Channel refers to the system by which trades are executed. For example, clients may execute via a single dealer platform, a multi-dealer execution management system (EMS), direct APIs with either single liquidity providers or via aggregators, and so on. We have already found that different channels can result in different execution performance, not only in terms of actual costs, but also with regards to implicit cost measures such as impact cost resulting from, for example, information leakage and signalling risk. It is obviously important to compare apples and apples in such analysis, as we have also seen examples where clients may use two EMS, but one of these tends to be used for the less liquid business, so comparing actual cost alone can be misleading and result in erroneous conclusions. The BestX Expected Cost metrics add value here as discussed above in the section on Counterparty Selection.

4.    Venue Selection

An increasingly popular area for analysis is the selection of execution venues for those clients that are using orders and algos. The Last Look debate is a likely catalyst for this increased attention, and we are seeing more clients wanting a quantitative understanding of the impact of executing on different venues with different protocols. It does not appear likely that Last Look will be banned in the foreseeable future, although the Global Code of Conduct may help increase both the transparency and standardisation of practices employed in the market. With sufficient transparency, and measurement, it then boils down to choice once again. However, to make an informed choice, it is clearly important to have analytics to measure the costs, including those experienced at the point of trade but also the impact pre- and post-execution. In our experience, not many clients are provided with sufficient order data to compute the true cost arising from the opportunity cost of Last Look, e.g. clients generally don’t receive reject data post execution. We have seen some liquidity providers and ECNs supply this, although the market is still very inconsistent, which makes complete analysis across entire portfolios of trades difficult. We have seen clients tagging venues as Last Look or Non Last Look, and then using the BestX analytics to compare spread and impact cost, and also pre- and post-trade revaluation data. We expect this type of use case to increase over time as complete order data availability increases and becomes more standardised.

5.    Algo Selection

Not all algos are created equal. We have seen considerable differences in algo performance over the last few months, depending on the currency pair, time of day, notional size and trading objective. Such performance cannot be measured on any one trade: large samples of trades are required, measured on a totally consistent basis, to allow statistically significant conclusions to be drawn. The different charging structures prevalent in the market, coupled with disparate performance, mean that improved algo selection can also result in considerable cost savings in cash terms over the course of a year. To illustrate this, we have conducted some empirical research based on the data set of algo trades we have analysed over the last year.

Using the BestX analytics, we computed the average values for spread cost, impact cost and benchmark performance across the entire universe of algo trades analysed to date, which represents a statistically significant sample size. Looking at EURUSD specifically, where a significant volume has obviously been executed, we found some interesting results in terms of the range of cost and performance experienced. In an attempt to ensure we were using a homogeneous sample, we filtered the trades to only include those executed within the most liquid window (8am-4pm GMT), and also stratified the sample by notional size. Summary results are provided in Table 1 below:

Table 1: Standard Deviation of Costs and TWAP Performance (bps) by Notional Size (USD) for EURUSD Algos executed between 8am-4pm GMT

So, what do these numbers actually mean? By showing the standard deviation of the average costs and performance, we are illustrating how algo selection can have a significant impact on the bottom line. For example, for trades with notional sizes of 50-100m USD, the standard deviation of the average spread cost is 1.5bps. If you are trading USD 10bn notional of algos per year, and were using algos randomly from the sample we analysed, there is a 68.3% probability that your total costs would have fluctuated within a range of plus or minus USD 1.5m, as the arithmetic below shows.
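
The arithmetic behind that figure (a back-of-the-envelope conversion, assuming the 1.5bp standard deviation applies uniformly to the full notional and that costs are roughly normally distributed):

```python
annual_notional_usd = 10_000_000_000   # USD 10bn of algo flow per year
cost_sd_bps = 1.5                      # std dev of average spread cost (Table 1)

# One basis point is 1e-4 of notional, so a one-standard-deviation swing in
# average cost is 10bn * 1.5 * 1e-4 = USD 1.5m; ~68.3% of outcomes fall
# within +/- one standard deviation under a normality assumption.
cost_sd_usd = annual_notional_usd * cost_sd_bps * 1e-4
print(f"+/- USD {cost_sd_usd:,.0f}")   # +/- USD 1,500,000
```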

6.    3rd Party vs In-House Execution

Another trend we are seeing is the evaluation of outsourcing FX execution to a 3rd party vs bringing execution in-house. There are many factors involved in such a decision, and we won’t provide an exhaustive discussion of them all in this article, but suffice to say, a key factor is the comparison of cost. Even when a cost comparison has been performed, however, a number of other variables need to be taken into account, including the provision of other ancillary services such as post-trade reporting, provision of research etc. We have seen a number of clients using the software to compare costs to help with such decision making. Clearly, moving in-house has other associated costs as well as replacing ancillary services, including technology and human resources. It is a complex decision, but a number of asset managers are finding the BestX execution cost analysis valuable, including use cases where we have seen the analysis justifying the cost of 3rd party execution.

7.    Streamlining the Order Management Process

As with most large institutions, many asset managers have complex technology architectures and operational processes that have evolved over time. Such constraints can result in inefficiencies in the lifecycle of an order, whereby considerable risk can be run from the time an order is first originated to when it is finally executed. Over time, and over large samples of trades, you would expect this risk to net out, as clearly the market could move for or against you whilst the order is finding its way through the process, but it is a risk, and an uncompensated one. Generally speaking, minimising the time it takes to get the order from the portfolio manager to the execution desk is best practice for any best execution process. However, deciding whether to spend scarce investment budget and technology resources on projects to streamline this process is not straightforward, and requires a proper cost-benefit analysis to make an informed decision. We have seen clients using the BestX software to help quantify the risk they are running through inefficient order routing processes, and thereby determine whether the cost is worth incurring.

Conclusion

There is sometimes a misconception that the FX market is so deep and liquid that trying to improve the execution process may only result in relatively small cost savings, and is therefore not worth the focus of time and resources. In our experience, we have found the opposite. More informed decisions around different aspects of the execution process, some of which have been illustrated in this article, can result in very significant cost savings, especially when viewed in actual cash terms. Yes, a handful of basis points may sound like a relatively small number, especially in relation to the commissions and bid-offer spreads prevalent in equities and fixed income. However, if you are turning over USD 100bn of FX per year, then through improved selection of counterparty, channel, venue, algo etc, a combined saving of, for example, only 1.5bps would result in a cost saving of USD 15m per annum. Such a number is very realistic based on our experience to date.

There is a lot of focus in the FX market at the moment on issues such as Last Look, and the quantification of this. We fully support these important initiatives in the spirit of rigorous, consistent quantification and transparency. However, we also feel that many participants in the FX markets can make more significant improvements to their execution performance and costs by taking a step back and applying a more systematic approach to their process at a macro level. Obviously, some institutions already have an informed selection process in place, and can afford to start looking for more marginal gains. The 80:20 rule applies: get the 80% right before sweating the small stuff.

Pete Eggleston

Red Flags and Outlier Detection

“The uneducated person perceives only the individual phenomenon, the partly educated person the rule, and the educated person the exception.”
Franz Grillparzer

Although there is still considerable debate on exactly what ‘best execution’ means in the FX markets, one component that has become clear is that any best execution policy should include a process to identify, monitor and record outliers. The recently published Q&A from ESMA reiterated this:

Firms’ processes might involve some combination of front office and compliance monitoring and could use systems that rely on random sampling or exception reporting. ESMA Level 3 Q&A on MiFID 2/R Investor Protection issues.

The question now arises – how should I define what is an outlier? As with most things, as soon as you start getting into the details it becomes clear that this is not necessarily straightforward and involves a number of factors. In this article, we explore these factors and suggest some approaches for what we are seeing at BestX evolve as best practice.

MiFID II Article 27(1) defines best execution as the obligation on firms to “take all sufficient steps to obtain . . . the best possible result for their clients taking into account price, costs, speed, likelihood of execution and settlement, size, nature or any other consideration relevant to execution” (emphasis added). 
The core components of defining an exception, or outlier, reporting process for each of the best execution factors, namely price, costs, speed, likelihood of execution and settlement, size, nature or any other consideration can be summarised as follows:

  1. What metrics should be used to define an outlier?

  2. What time stamp should be used as a reference?

  3. What values should be set as thresholds?

  4. What frequency should the exception report be run?

Challengingly, each of these core components must be analysed for each of the best execution factors, which is why technology is such a core part of a satisfactory regulatory compliance programme under MiFID II:

[a]dvances in technology for monitoring best execution should be considered when applying the best execution framework. MiFID II Recital 92
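
To make these four choices concrete, they can be captured together as a single, reviewable policy object. The sketch below, in Python, is a minimal illustration with entirely hypothetical names and values, not a BestX specification:

```python
# A minimal sketch of how the four components of an exception-reporting
# process might be recorded in one place. All names and values are
# illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class OutlierPolicy:
    # 1. Metrics used to define an outlier (e.g. slippage to mid, cost vs fair value)
    metrics: list = field(default_factory=lambda: ["cost_vs_mid_bps", "cost_vs_fair_value_bps"])
    # 2. Reference time stamp: "arrival" (desk receipt) or "completion" (fill time)
    reference_timestamp: str = "completion"
    # 3. Threshold values, per currency-pair group, in basis points
    thresholds_bps: dict = field(default_factory=lambda: {"G10": 10.0, "EM": 25.0})
    # 4. How often the exception report is run
    frequency: str = "daily"

policy = OutlierPolicy()
print(policy)  # a documented, auditable statement of the outlier process
```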

Metrics

Choice of metrics to be used for defining outliers should be driven by the overriding objectives of the best execution policy. As a minimum, including some measure of price and cost is key, although there are different options here, as we will explore shortly. Within FX, price and cost tend to be used interchangeably, as costs are generally captured within the bid-offer spread and, therefore, within the price. As the market moves to a more order-driven structure, with explicit commission, it will become more relevant to measure separately the explicit costs arising from broker and venue fees.

However, there are also other metrics that may be relevant depending on the best execution policy. For example, for a passive, equity index tracking fund where the resulting FX is all benchmarked to the WMR Fix, it may be appropriate to also include slippage to the WM benchmark as a metric. For a quant fund, which is focused on minimising slippage to Arrival Price, it may be relevant to include slippage to this benchmark in addition to cost. Or, for an active fund that trades a significant proportion of its volumes via algos, often in large sizes, then the best execution policy may require a focus on identifying those algo trades that potentially create more signalling risk than others.

With regards to price and cost, it is important to understand the precise measure of cost to be used for outlier detection. Clearly, a key value that needs to be computed as part of any best execution process is the actual cost incurred, as a simple measure of the difference between the price at which a trade was filled and the prevailing market mid. We will discuss the issue of which time stamp to choose for this exercise later, but for now we will assume this is the mid at the time stamp at which the trade was filled (i.e. the completion time stamp). Such values are a key requirement as stipulated by various regulators and legislation, for example the recent FCA paper on disclosure of costs for pension funds and requirements under PRIIPs.

Such a measure can be used as a metric for outlier detection. For example, an institution may want to be notified of any trade which generates a cost of greater than 10 bps (defined as a spread to mid at the completion time stamp). Many institutions using such a measure may need to set up multiple reports to cope with the fact that different thresholds need to be applied to different currency pairs or groups of currencies. For example, different thresholds would typically be set for EURUSD and USDJPY compared to USDZAR and USDTRY.

However, increasingly the feedback from clients is that a simple comparison to mid is too one-dimensional a measure, and does not allow for differing liquidity and costs across currency pairs and times of day. Clients have asked for a specific fair value cost measure, against which the actual cost (as defined above) can be compared. So, for example, if a 100m EURUSD trade has generated an actual cost of 4 bps, and the fair value, or expected, cost for such a trade at that time of day and in that size was 3.2 bps, then a useful metric to monitor may be the difference between the two (i.e. 0.8 bps). Exceptions can then be defined on this difference, such that any trade generating an actual cost of more than, for example, 3 bps above the expected cost would be deemed an outlier. This provides an elegant way of applying a consistent benchmark while allowing for different liquidity conditions. In addition, it does not require absolute precision in the time stamp, as the actual and expected costs are computed at the same time, which can be particularly useful for voice trades.
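
A minimal sketch of how these two tests, the absolute threshold and the fair value comparison, might be combined is shown below. All thresholds and function names are illustrative assumptions, not a prescribed implementation:

```python
# Illustrative outlier checks combining the two measures discussed above:
# an absolute cost threshold per currency pair, and a comparison of actual
# cost against a fair value (expected) cost. Thresholds are hypothetical.

THRESHOLDS_BPS = {"EURUSD": 10.0, "USDJPY": 10.0, "USDZAR": 30.0, "USDTRY": 30.0}
RESIDUAL_THRESHOLD_BPS = 3.0  # actual cost more than 3 bps above expected

def is_outlier(pair: str, actual_cost_bps: float, expected_cost_bps: float) -> bool:
    """Flag a trade if it breaches either the absolute or the fair-value test."""
    breach_absolute = actual_cost_bps > THRESHOLDS_BPS.get(pair, 15.0)
    breach_residual = (actual_cost_bps - expected_cost_bps) > RESIDUAL_THRESHOLD_BPS
    return breach_absolute or breach_residual

# The 100m EURUSD example from the text: 4.0 bps actual vs 3.2 bps expected
print(is_outlier("EURUSD", 4.0, 3.2))  # False: 0.8 bps residual, below both thresholds
```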

Time Stamps

When computing the cost used to determine outliers, which time stamp for the trade should be used? There are options here: for example, the time stamp when the order is first originated, the time it arrives at the execution desk, or the completion time stamp when the trade was actually executed, as referred to earlier.

Again, there does not yet appear to be a standard approach and, to some extent, it depends on what you are trying to measure and manage. If the time stamp is taken when the order arrives at the execution desk, then two key elements are included: i) the slippage arising from market movements during the time taken for the desk to deliver the order to the market and achieve execution, and ii) the actual cost (i.e. spread) applied by the executing counterparty. An advantage of using this time stamp is that many institutions can record the desk arrival time stamp with some accuracy via their OMS, whereas for voice trades the completion time stamp can be subject to error.

If, however, you want to purely focus on the performance of the counterparty, and you are confident in the accuracy of the time stamps, then the completion time stamp is more applicable. This would allow outliers to be identified that are based purely on the actual execution cost, unpolluted by slippage from adverse market movements whilst the order was delivered to the market.
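
The following sketch, with purely illustrative prices for a buy order, shows how the choice of time stamp decomposes total slippage into market drift during order handling plus the spread paid to the counterparty:

```python
# A sketch of how the reference time stamp splits total slippage into
# market drift (order handling) and execution cost (counterparty spread).
# All prices are illustrative; a buy order is assumed.

mid_at_desk_arrival = 1.10000   # mid when the order reached the desk
mid_at_completion   = 1.10020   # mid when the trade was filled
fill_price          = 1.10031   # actual execution price

def bps(diff: float, ref: float) -> float:
    return diff / ref * 1e4

market_drift_bps = bps(mid_at_completion - mid_at_desk_arrival, mid_at_desk_arrival)
spread_cost_bps  = bps(fill_price - mid_at_completion, mid_at_completion)
total_bps        = bps(fill_price - mid_at_desk_arrival, mid_at_desk_arrival)

# Measuring from desk arrival captures both terms; measuring from
# completion isolates the counterparty's spread.
print(f"drift {market_drift_bps:.1f} bps + spread {spread_cost_bps:.1f} bps "
      f"~= total {total_bps:.1f} bps")
```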

Values

So, you’ve decided the metrics and the time stamps to be used in the exception reporting process, but what values should you set for the thresholds above which outliers are reported? Given the credit-driven nature of the FX market, and the heterogeneity of participants, there are no simple rules here. Threshold values are going to be institution specific, and should be agreed and included in the relevant best execution policy. Clearly, the goal is to identify trades that warrant further investigation and explanation, i.e. trades executed outside of ‘normal’ expectations. Thresholds should therefore be set at levels that do not create thousands of outliers per day: such noise masks the real red-flag trades that should be investigated, never mind the time taken to process such a volume of exceptions. At BestX we have seen many institutions set threshold values based on empirical results, i.e. reviewing results over historical periods and estimating appropriate values per currency-pair group, product and tenor.
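
One simple way to derive such empirical levels is to take a high percentile of historically observed costs per group, and then review the output against the policy. A sketch, assuming a pandas DataFrame of historical trades with hypothetical column names:

```python
# Hypothetical threshold calibration: take the 99th percentile of observed
# costs per currency-pair group, so only genuinely unusual trades are
# flagged. The data below is a toy stand-in for a real trade history.

import pandas as pd

history = pd.DataFrame({
    "pair_group": ["G10", "G10", "G10", "EM", "EM", "EM"],
    "cost_bps":   [1.2,   2.5,   9.8,   8.0,  15.0, 42.0],
})

thresholds = history.groupby("pair_group")["cost_bps"].quantile(0.99)
print(thresholds)  # candidate per-group levels, to be reviewed regularly
```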

It is important to regularly review whether such levels remain appropriate given changing market conditions and structural execution changes. For example, an institution that has historically outsourced its execution to a custodian may need to review its threshold values if the policy changes such that execution is brought in house and traded with multiple counterparties in competition. Equally, if the FX market moves into a new volatility regime, this may warrant levels being adjusted accordingly. There is an argument that levels should be set dynamically to take into account forthcoming key event days: it would, for example, have been justified to widen the threshold levels for GBP pairs around the Brexit referendum.

Frequency

We have seen a variety of use cases with regards to the frequency of outlier reporting and monitoring. Typically, many institutions have implemented a daily process, usually run at the end of the day or overnight. The advantage of such frequency is that outliers are explained, recorded and managed whilst the experience is fresh in everyone’s minds. Where OMS/EMS allow, real-time outlier identification can also be valuable as it allows discussion with executing counterparties at the point of trade, rather than waiting on a t+1 basis.

Generally, such a process is supplemented with aggregated summary reporting on a lower frequency. For example, Heads of Execution often receive a weekly summary of all outliers generated that week, with explanations and approval status clearly recorded. In addition, many institutions have monthly Best Execution committees, where summary outlier reports are presented and reviewed.
It should also be noted that on-demand reporting, for any time period, is increasingly important in order to respond to ad hoc requests from, for example, regulators.

Conclusion

As with most aspects of best execution within OTC markets, there is currently a lack of standardisation. In some respects this feels appropriate, as best execution is a concept that is very specific to a particular institution, albeit there are core components that would benefit from some market standards. Outlier detection and reporting is no exception, and as discussed in this article, processes for identifying exception trades are also, to some extent, institution specific. There are a number of factors to consider in any outlier process, and a key conclusion is that it is important to have a flexible technology solution that allows the exception reporting process to be tailored and adjusted dynamically.

A well designed and implemented exception reporting process can add more value than simply satisfying fiduciary and regulatory best execution responsibilities. With appropriate reporting and analysis, it is possible to use the output of such a process to identify potential abnormalities in an execution process; for example, one particular counterparty may be responsible for the vast majority of outliers in a specific EM currency pair. This then allows the positive feedback loop discussed in a previous article to be engaged, whereby adjustments are made to the execution policy (e.g. that counterparty is no longer used for the pair in question), resulting in improved performance.

Pete Eggleston

Feedback loops and marginal gains – using TCA to save costs and improve returns

“Without data you are just another person with an opinion”
W. Edwards Deming, Statistician

Continuous performance improvement, whereby all aspects of a process are examined with precision, is the hallmark of many leading teams and businesses. Seeking out such marginal gains, as exemplified by Sir Dave Brailsford with the GB cycling team or by leading Formula 1 teams such as Mercedes and McLaren, has now become commonplace, and in this article we explore how such approaches can be applied to the continuous refinement of best execution.

TCA and execution analytics add considerably more value than simply providing a framework to satisfy regulators, compliance teams, best execution committees and asset owners. Investing in such analytics purely to ‘tick a box’ represents a missed opportunity to save significant amounts of money. To achieve this, however, it is imperative that the execution process is approached with an open mind. There needs to be an environment in which all aspects of the process are monitored, measured, questioned and tested on an ongoing basis. An ‘open feedback’ loop is required to allow lessons to be learnt and the necessary changes to be made. The opposite, a ‘closed loop’, does not test assumptions, measure the impact of changes or, indeed, learn from mistakes. In such an environment, the mantra ‘but we’ve always done it like this’ can often be heard.

Does this mean a world of decisions and actions taken by machines? Of course not. The financial markets are too complex and dynamic to allow a purely automated execution process. Equally, however, that complexity means it would be impossible for a human to process all of the necessary inputs to arrive at an optimal decision alone, without any ‘help’ from analytics. Clearly, the answer is to combine the best of both: the experience and intuition of humans with the processing power and objectivity of machines. Machine learning can add value, and is a topic of research at BestX, but is best deployed in the hands of an experienced trader.

There are many decisions taken every time a trade is conducted, some of which won’t matter in the overall scheme of things given, for example, the size of the trade. However, having the ability to at least monitor the impact of all dimensions, and checking whether decisions are having a material impact or not, seems a wise approach. The list of questions below, although not exhaustive, provides an idea of the range of decisions that now need to be taken:

  1. What time of day should I trade?

  2. What size of trade should I execute?

  3. Who should I execute with?

  4. Should I trade principal or request the counterparty to act as agent for my order?

  5. If I trade principal, should I trade via the phone or electronically?

  6. If I trade electronically, which platform should I trade on?

  7. Should I hit a streaming firm electronic price, or should I trade RFQ?

  8. If I RFQ how many quotes should I request?

  9. If I trade electronically, should I use an algo, and if so, which one?

  10. If I use an algo, which venues do I want it to execute on?

  11. If I use an algo, or order, should I employ passive order types or not?

  12. How quickly should I trade?

Clearly, such questions are not explicitly answered for every single trade. A desk may be executing thousands of tickets per day, and the process may be defined and automated for the majority of these. The larger trades may warrant more careful attention, following a decision-making process that requires further insight and analysis. Either way, a comprehensive best execution framework should allow both the broad automated processes and individual trade decisions to be measured and monitored over time, to check whether the original assumptions are still valid.

For example, a best execution process may have defined that all funding trades of less than USD 50m notional are submitted to an RFQ process on a specific EMS, whereby 5 counterparties are requested to quote. This ‘rule’ may have been put in place several years ago. Does it still make sense in today’s market? Regular and ongoing evaluation of the execution performance and total costs are required to answer this question. Analysis of the costs for the entire book of business over a period of time may indicate that tickets of greater than USD 50m notional, that are not subjected to the RFQ rule, have started to systematically incur less cost, indicating that it may make sense to revise the rule.

The complexity of the financial system, however, means that it would be imprudent to simply make the change and hope for the best. There may be other factors at play that are driving the observed change in costs. Splitting the business into two, and testing half of the portfolio with a revised RFQ rule for the following quarter, would allow a more scientific approach to be taken. Such ‘controlled tests’ are widely used across scientific disciplines and form the basis of any open feedback loop. Change an assumption, perform a controlled test and re-evaluate the results to check whether the original hypothesis was correct. If yes, then incorporate the change into the best execution policy going forward, with the benefit of having a quantitative and rigorous process behind the decision. If no, then revert to the original process and test a different assumption. For example, perhaps the majority of trades with notional greater than USD 50m were executed by two counterparties different from those used for the RFQ business. In that case, one might test the original RFQ rule but replace two of the counterparties with the two that perform well for trades greater than USD 50m. And so on.
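
The sketch below illustrates the shape of such a controlled test with simulated data: compare the mean cost of the control book (existing rule) with the test book (revised rule) and apply a simple two-sample test. The data, and the choice of a t-test, are illustrative rather than prescriptive:

```python
# Illustrative controlled test of a revised RFQ rule. Costs are simulated;
# in practice these would be the measured costs of the two halves of the
# book over the test quarter.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control_costs_bps = rng.normal(2.0, 0.8, 200)  # existing 5-way RFQ rule
test_costs_bps    = rng.normal(1.7, 0.8, 200)  # revised rule

t_stat, p_value = stats.ttest_ind(control_costs_bps, test_costs_bps)
print(f"mean control {control_costs_bps.mean():.2f} bps, "
      f"mean test {test_costs_bps.mean():.2f} bps, p-value {p_value:.3f}")
# A small p-value supports adopting the revised rule into the policy;
# otherwise, revert and test a different assumption.
```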

Some of these changes may not result in cost savings. Some may result in marginal savings, and some may contribute significantly to the bottom line. However marginal, in a world of either low or negative yields, every single basis point really does count. As an example, returning to the earlier list of questions, let’s focus on just the first one, regarding timing. Using the BestX Fair Value Cost estimates, we analysed the costs of trading 50m AUDUSD every day over the period January to May this year. If this trade had been executed at 9am London rather than 2am London, the total cost saving would have been approximately 33 basis points (AUD 167,000) over the period. For a US investor, if the trade had been executed at 9am London instead of 9pm London, the cost saving would have been a whopping 250 basis points (AUD 1.26m). Clearly, such simple analysis does not take into account factors such as opportunity cost, but the point is to illustrate that simple changes can result in considerable savings.
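
The cash figures above follow from simple arithmetic, which the snippet below reproduces: the daily notional multiplied by the cumulative basis-point saving, divided by 10,000.

```python
# Back-of-the-envelope conversion of a cumulative basis-point saving into
# cash, for the 50m daily AUDUSD example above.

notional = 50_000_000  # AUD 50m per day

for saving_bps in (33, 250):
    cash = notional * saving_bps / 10_000
    print(f"{saving_bps} bps on {notional:,} ~= AUD {cash:,.0f}")
# 33 bps  -> ~AUD 165,000 (quoted as ~167,000 above, after rounding)
# 250 bps -> ~AUD 1.25m   (quoted as ~1.26m above)
```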

Over the course of 2016, we have analysed thousands of FX transactions at BestX from a wide and diverse array of FX market participants. It is clear from this analysis that there are many cost savings to be made for the majority of institutions, across many dimensions of the execution process. 

Best practice should never be to simply settle and assume that what you are doing is the best you possibly can. After all, James Dyson famously tested over 5,000 versions of his vacuum prototype before launching.

“I made 5,127 prototypes of my vacuum before I got it right. There were 5,126 failures. But I learned from each one. That’s how I came up with a solution.” - James Dyson

The dynamic nature of financial markets, especially OTC markets as they continue to transform under the regulatory fallout from the financial crisis, requires an ongoing evaluation of ‘best’ practices and ways of doing things. Learning by doing, from both positive and negative results, in a measured, systematic and controlled way is one way to navigate this complexity. Indeed, employing such a feedback mechanism was explicitly mentioned in ESMA’s latest Q&A publication, where it is stressed that the results of ongoing execution monitoring should be fed back into execution policies and arrangements to drive improvements in the firm’s processes.

FX is a simple product with a complex market, and one that is becoming increasingly complicated. The impact of the market structure changes, especially the drive towards a more order-driven market at a time when the traditional banking market makers are providing less inventory management in the system due to regulatory changes, is still to be fully determined. However, the recent flash crash in Sterling is a sign of things to come, and such liquidity ‘air pockets’ are becoming increasingly common. Fragmented markets, supported by less risk capital, with the majority of pricing and risk management processes managed electronically, all contribute to more volatile liquidity conditions for the foreseeable future. In such a market, the decision making around how, when and with whom to execute becomes increasingly difficult, especially when coupled with the need to justify that decision making. An interactive, rigorous and systematic approach to measuring and monitoring execution performance, and then using this information to continually enhance the process, is rapidly becoming an essential component of any best execution policy. Static, vanilla TCA reports of the past, produced and filed in a drawer for a rainy day in case anyone asks, are no longer adequate.

Pete Eggleston

Factors to consider when implementing a TCA framework

Transaction cost analysis. Execution quality analysis. Performance benchmarking. Best execution. There are many different terms and methods used to describe and analyse the costs and associated performance of execution. Clearly, there is an element of choosing the right tools for the job, and some market participants may require a less extensive range of metrics to measure costs and performance. However, there are some fundamental elements that form the foundations for any meaningful analysis and in this article we explore these core components.

It is essential to take into account the size of the trade

It sounds very obvious, but trade size is a key component of making an informed decision about the quality of an execution. For example, you’ve traded 500m EURUSD at 1pm today. Simply referencing an estimate of the market mid at 1pm may result in a cost number of 12 basis points. But how do you know whether this was ‘fair’? And how do you compare the performance of this execution to that received on 100m AUDUSD, at a similar time, with a different counterparty? The 12 basis point cost may have been extremely competitive for that size, but you cannot know that unless you can reference a framework that provides a consistent, fair value cost measure for that amount of risk, in that currency pair, at that specific time of day. Clearly, a consistent and level playing field is required.
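
By way of illustration only (this is a generic, textbook-style model, not the BestX fair value methodology), expected cost can be made size-aware by adding a market-impact term that grows with participation, for example a square-root law. All parameters below are hypothetical:

```python
# A sketch of a size-aware expected cost: half the quoted spread plus a
# square-root market-impact term. The coefficient k, spreads, volatilities
# and volumes are all illustrative assumptions to be fitted empirically.

import math

def expected_cost_bps(half_spread_bps: float, daily_vol_bps: float,
                      size: float, daily_volume: float) -> float:
    """Expected cost = half spread + k * volatility * sqrt(participation)."""
    k = 0.5  # impact coefficient, hypothetical
    return half_spread_bps + k * daily_vol_bps * math.sqrt(size / daily_volume)

# 500m EURUSD vs 100m AUDUSD at hypothetical liquidity/volatility levels:
print(expected_cost_bps(0.25, 60.0, 500e6, 300e9))  # EURUSD, large ticket
print(expected_cost_bps(0.50, 70.0, 100e6, 100e9))  # AUDUSD, smaller ticket
```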

What happens if the time stamp isn’t 100% accurate?

This is a very common problem, especially for trades executed by voice, where only an approximate time stamp may be recorded. Based on beta testing so far, BestX has seen cases where such time stamps can be out by anything from a few minutes to several hours, depending on the rigour of the systems and processes involved. If you base your TCA on only one metric, i.e. a slippage to mid estimated using a possibly erroneous time stamp, then the results can be very misleading. In such cases, other values can at least provide some useful information. For example, even if the time stamp is, say, 15 minutes out from the actual time of execution, if you measure what the fair value cost should have been at the erroneous time, then at least you have something to compare the execution to.
This can be illustrated with an example. Using a simple slippage-to-mid measure for this trade would have given a cost of 42 basis points, based on the erroneously reported time stamp. Was this cost acceptable or not? In this example, we have also estimated the fair value cost for the trade at the reported time stamp, so you can at least get some feel for the quality of the execution. In this case, the estimated fair value cost at 11.15am is actually slightly more than the actual slippage estimated at this time, indicating that the cost seems reasonable.

One could argue that if an accurate time stamp is not available, then a simple analysis of where the trade was filled in relation to the observed range of the day is adequate. This may indeed provide some comfort around the execution although, in 2016, we feel that it is possible to do better. Clearly, the preference is always to have accurate time stamps, and therefore accurate cost estimates, but we live in an imperfect world where such data is not always available. Hopefully, over time, time stamp data will become more accurate and standardised, but until then it is important to have a framework that at least allows some measure of execution quality.

Using representative market data sources

Another factor to take into account is the range of sources used to construct the reference market data. It is important that the data provides a broad, representative view of the market and does not have any inherent bias. For example, if the estimated market mid used to calculate slippage is sourced from high-frequency trading firms, it may not be representative of the prices actually obtainable by a pension fund.

Coping with more complex execution products such as algos

One of the key trends witnessed recently, and seemingly accelerating, is the move of the FX industry towards a more order-driven market. Many more FX market participants now have access to execution via the rapidly growing array of order and algo products. For such execution methods, a simple cost estimate vs a market mid really only illuminates one component of the best execution story, and a single metric does not necessarily provide an indication of the quality of the execution received. For example, we analysed an algo that generated significant signalling risk, which effectively pushed the market up, resulting in the client receiving a higher average price than would have been the case with an algo that better hides its footprint. Supplementing the headline cost numbers with a range of metrics provides the client with a more complete picture, thereby allowing much more informed decisions to be made over time in terms of execution product/venue/counterparty selection, ultimately resulting in significant cost savings.

The visualisation of results, allowing informed decisions to be made intuitively and quickly, offers significant benefits, especially when dealing with large, complex data sets. Ploughing through thousands of rows of data in Excel or pdf reports is not only a time-sink but prone to error. Ultimately, if the output does not provide actionable intelligence, then there is a risk that the system does not actually get used, and the full benefits of monitoring execution quality are never realised.

Security of results/reporting

With increased regulatory scrutiny on best execution and transaction costs, it is imperative that the analytics used are not only independent and free of any bias or conflict of interest, but also delivered in a totally secure environment. All data, both input trade data and output results, needs to be stored encrypted to the highest industry standards (e.g. AES-256). The FCA has recently announced that it supports the use of cloud services for financial services.

Compliance with the regulatory environment

The MiFID II definition of best execution states that an executing counterparty must achieve the ‘best possible result’ for the client, taking into account a range of execution factors. It does not state ‘simply measure the cost as a slippage to the prevailing mid-rate’. The ability to implement a documented best execution policy, monitor it and manage the workflow around exceptions to this policy is essential. Linking such a process, and the reporting of exceptions, to your TCA system so that it satisfies multiple requirements (i.e. business requirements to monitor and save costs, regulatory requirements regarding best execution, and the fiduciary responsibility to asset owners) clearly provides benefits from an efficiency and consistency perspective. It would, therefore, seem sensible to future-proof your selection of TCA vendor by ensuring that it is compliant with the regulatory environment following the implementation of MiFID II in January 2018.

Conclusions

The need for measuring, recording and justifying best execution is clear. Regardless of the complexity of the FX execution process, there is a core set of components that should be considered when selecting and implementing a set of analytics. For those market participants who only trade FX by voice, or via a custodian, the issues of time stamp availability and accuracy create a need for measures beyond simple slippage-to-mid estimates or range-of-the-day analysis. For those who use products such as algos, it is even more important to measure a range of metrics to gain insight into true execution performance. Many clichés spring to mind, but ‘lies, damned lies and statistics’ seems apt, as there is clearly a risk of making incorrect decisions if the output from a TCA framework is limited or incomplete. Measuring costs and execution quality is not as straightforward as one might imagine, and to mitigate the other obvious cliché, ‘garbage in, garbage out’, it is important to be mindful of the potential pitfalls.

Pete Eggleston

Signalling Risk – is it a concern in FX markets?

"Should you find yourself in a chronically leaking boat, energy devoted to changing vessels is likely to be more productive than energy devoted to patching leaks."
Warren Buffett

What actually is signalling risk, and is it something I should be worried about? This article seeks to answer both questions. Signalling risk has become more widely talked about in the FX markets following the move to a more order-driven market and the increased adoption of algos. As discussed in a previous article, there are benefits to using execution methods such as algos, although there are also potential drawbacks. Signalling risk is one of them.

In essence, signalling risk is effectively telling the market what you are about to do, perhaps inadvertently, and is also referred to as information leakage. For example, in a penalty shoot-out, there are occasions when the penalty taker effectively informs the goalkeeper of the direction of his intended penalty, as witnessed in the recent Italy v Germany quarter-final at the European Championships. Providing this signal in advance clearly gave the goalkeeper an advantage, and greatly increased the potential for a negative result for the penalty taker.

Information leakage is already a major concern in equity markets, with some studies indicating that an institutional equity order in the US now needs to run at a participation rate of less than 3% in order to prevent detection, whereas this rate was as high as 33% in 2007. This is supported by a separate study by Credit Suisse, which showed that VWAP performance starts to deteriorate when the participation rate starts to exceed 5%. As with a number of market structure issues, the equity markets can be a leading indicator of future developments within the FX markets, and signalling risk is no exception.

When executing an order over a period of time, such as via an algo, there is always the chance that signals to other market participants are provided before the algo is fully completed. Within FX, a good example of this risk is through the use of some algo types during the expanded 5-minute window for the WMR Fix. A paper published in April this year by Pragma highlighted this behaviour, where the authors found that ‘the rate change during the first minute of the window predicts a continuing rate change in the same direction over the subsequent minutes of the window’. Furthermore, they found that this behaviour was exacerbated at month and quarter ends. Such a pattern is potentially easy to detect and therefore allow other market participants to benefit from this knowledge.
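
The kind of predictability the Pragma authors describe can be tested with a simple regression: does the move in the first minute of the window explain the move over the remainder of it? A sketch using simulated data for illustration:

```python
# Illustrative predictability test in the spirit of the finding quoted
# above: regress the rate change over the rest of the fixing window on the
# change during its first minute. Data is simulated, not market data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
first_minute_move = rng.normal(0, 1, 500)                      # bps
later_move = 0.4 * first_minute_move + rng.normal(0, 1, 500)   # bps

slope, intercept, r, p, se = stats.linregress(first_minute_move, later_move)
print(f"slope {slope:.2f}, p-value {p:.1e}")
# A significantly positive slope means the first minute 'telegraphs' the
# rest of the window, i.e. a pattern other participants could exploit.
```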

Information leakage is not just a concern for those using algos over the WMR window. As liquidity has become more fragmented, and the depth of order books has generally thinned, it has become easier for patterns in orders to be identified throughout the trading day. The increasing use of direct market access (DMA) via smart order routers provided by liquidity providers and agency brokers adds to these concerns, as placing any order, especially if done naïvely or simplistically, can result in information leakage. How does this leakage contribute to a negative result for the original client order? It is manifested in higher implicit costs through increased slippage. In order to demonstrate best execution, it is becoming increasingly important to take such factors into account when deciding how to execute.

Regulators are becoming increasingly concerned about the compliance risks posed by signalling, since signalling in effect invites front running and may lead to poor client outcomes and even market disruptions, including liquidity air pockets and flash crashes. Such events impact market integrity and harm the reputation of our financial markets. Accordingly, where firms ignore the signalling risk posed by their algorithmic offerings or other execution methodologies, they may face claims for failure to pay due regard to the interests of customers and/or failure to meet best execution requirements, especially where such signalling impacts client outcomes (see, e.g., FCA Principles for Businesses, Principle 6: ‘a firm must pay due regard to the interests of its customers and treat them fairly’). In cases where signalling leads to market disruption, more fundamental claims such as failure to maintain orderly markets may be levelled.

So what can be done about it? Simply deciding not to use order-based execution is probably not the answer, as such execution methods can contribute to significant cost savings, as previously discussed, and therefore should form part of the menu within a best execution policy. However, it is a risk that should be considered in such a policy and process, and the first stage in any form of risk management is measurement. Measuring actual signalling risk is obviously extremely difficult, as it is not possible to isolate exactly what the market’s response to any specific execution was. In physics this is referred to as the ‘observer effect’, in which measurements of certain systems cannot be made without affecting the systems (exemplified by Heisenberg’s Uncertainty Principle). In the same way, it is difficult to know with precision exactly what the market would have done without your trade participating in it.

However, it is possible to produce metrics which indicate the potential signalling risk that an order may have created, and hence how easy or otherwise it would have been to read by potentially predatory market participants. The key here is to compare apples with apples and use the same metrics, computed in exactly the same way, to allow fair comparisons across order types and providers. BestX has developed unique measures for this purpose, which allow users to compare the relative signalling risk across different order types. These metrics form part of the best execution suite within the BestX Post-Trade application, and use of such measures over time will allow users to mitigate the risk of information leakage. If a particular venue or algo consistently produces relatively high signalling risk in a given currency pair, when analysed over a statistically large sample, then informed decisions can be made, and justified, to alter execution choices.
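
The BestX measures themselves are proprietary, but a simple public proxy conveys the idea: the post-fill markout, i.e. how far the mid moves in the direction of the parent order after each child fill. Persistently large markouts over a statistically meaningful sample suggest information leakage. A sketch with illustrative numbers:

```python
# A simple, generic signalling proxy (not the BestX metric): post-fill
# markout. For a buy order, the mid continuing to rise after each child
# fill suggests the order is being read, to the detriment of remaining
# slices. All prices are illustrative.

def markout_bps(fill_price: float, mid_later: float, side: str) -> float:
    """Positive value: market moved in the order's direction after the fill."""
    sign = 1.0 if side == "buy" else -1.0
    return sign * (mid_later - fill_price) / fill_price * 1e4

# (fill price, mid 30 seconds later) for three child fills of a buy order
fills = [(1.10010, 1.10018), (1.10021, 1.10030), (1.10033, 1.10041)]
marks = [markout_bps(fill, mid, "buy") for fill, mid in fills]
print(sum(marks) / len(marks))  # average markout per fill, in bps
```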

As Warren Buffett notes at the beginning of this article, it is probably best to change vessel once you discover you are in a chronically leaking boat.

For further information on Signalling Risk and the available metrics, please contact BestX at contact@bestx.co.uk 

Pete Eggleston

Best Execution - Do Algos have a role to play?

The use of execution algorithms in the currency markets has increased significantly over recent years (1)(2)(3) and it would appear that this trend is set to continue for the foreseeable future. Why is this? In this article we explore the benefits of using algorithms, but we also review the potential pitfalls that users should be aware of if they are to incorporate algos into their execution process.

Algo Types

Before we investigate the benefits and pitfalls of using algos, a brief overview of the range of algos now available for FX will be helpful. Although there are now well in excess of 100 different FX algos available on the main multidealer platforms, this universe can be simplified into a number of broad categories. There are, however, different dimensions along which an algo can be characterised, for example:

  • Algo style (e.g. is the algo trying to achieve a specific benchmark or is it purely accessing liquidity opportunistically?)

  • Liquidity source (e.g. does the algo only source liquidity as principal, does it behave in an agency format via direct market access, or is it a hybrid of both?)

  • Liquidity interaction (e.g. does the algo only aggress liquidity or does it also place bids/offers and interact in a passive way?)

So, any given TWAP algo could be very different to another TWAP algo depending on the liquidity sources it has available to it and how it interacts with this liquidity.
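
As a concrete illustration of the simplest category, the toy TWAP slicer below splits a parent order into equal child orders at fixed intervals; real products then differ in the liquidity sources they use and how passively they interact with them. All values are hypothetical:

```python
# A toy TWAP schedule: equal child orders, evenly spaced over the window.
# Real TWAP products layer liquidity sourcing and passive/aggressive
# placement logic on top of a schedule like this.

from datetime import datetime, timedelta

def twap_schedule(notional: float, start: datetime, end: datetime, n_slices: int):
    """Return (time, child_notional) pairs evenly spaced over the window."""
    step = (end - start) / n_slices
    child = notional / n_slices
    return [(start + i * step, child) for i in range(n_slices)]

# 100m executed over one hour in six slices (illustrative date and size)
for t, amount in twap_schedule(100e6, datetime(2017, 1, 16, 9, 0),
                               datetime(2017, 1, 16, 10, 0), 6):
    print(t.strftime("%H:%M"), f"{amount:,.0f}")
```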

Benefits

First we’ll review the potential benefits of including algorithms in your menu of execution options, which are summarised below.

  1. Potential to reduce costs

  2. Potential to reduce market impact, especially for larger tickets

  3. Ability to access wider range of liquidity sources

  4. Operational efficiency

  5. Transparency and audit trail

Cost reduction

Using an algorithm can result in significant transaction cost savings, although there are a number of caveats here that we’ll cover in the following section on pitfalls. If an appropriate algo is selected for the specific execution objectives, benchmark and prevailing market conditions, then splitting larger orders, or less liquid orders, via an algo can result in cost savings. Such savings can result from a couple of sources: i) reduced spread costs as the individual child orders are smaller and, depending on the algo used, may in some cases actually result in not crossing the bid-offer spread at all, and ii) reduced market impact costs.

Market impact

Reducing market impact is becoming a higher priority for market participants as the increasing fragmentation of liquidity within the FX market, and associated volatility, is making execution of larger ticket sizes more difficult. Carving an order into smaller child orders, and distributing carefully via a smart order router, can help reduce impact, or market footprint, thereby resulting in improved overall execution. However, once again, caveats apply as poorly performing algos, and/or not very smart order routers, can result in significant signalling risk to the market, which may result in increased impact.

Liquidity sources

The increasing fragmentation of liquidity, as new venues are established and new participants such as non-bank liquidity providers enter the market, is creating a significant overhead for participants who would like to access as much liquidity as possible directly. The use of certain algos effectively provides this access in a cost effective fashion, as the algo provider delivers and maintains all the necessary venue connectivity.

Operational efficiency

Algos can be a useful tool to increase efficiencies by effectively outsourcing the management of certain orders, which may be beneficial to free up time for an execution desk to allow focus on particularly challenging trades in terms of size and/or illiquidity.

Transparency and audit trail

Another trend in the FX market, in part driven by the scandals associated with fixings and last look, is the desire for increased transparency. Using an algo should result in the delivery of an associated post-trade report which provides full details of exactly how each child order was filled, providing a full audit trail of execution prices and time stamps.

Pitfalls

So far so good? Algos clearly have benefits and can help contribute to achieving best execution in some circumstances, but they are no panacea. There are potential pitfalls to be aware of when using algos, summarised below.

  1. Cost vs performance

  2. Market footprint and signalling risk

  3. Marketing spin and black boxes

  4. Liquidity sources

  5. Selection

Cost vs Performance

Algos are typically charged for in terms of a specified amount per million of notional, for example $50/M. Different products from different providers will have different costs, and the temptation is clearly to simply use products that appear to cost the least. However, a product that might appear relatively expensive in terms of headline cost might, on average, deliver far superior execution performance once market impact and other factors are taken into account. Thus, to make informed decisions when comparing products, it is important to look at net performance over a large enough sample of trades to make the results statistically meaningful. This is a challenge when each provider offers only their own performance data. More crucially, each provider generally self-selects their own metrics and presents them in a unique, non-standard format, such that it is exceedingly difficult to meaningfully compare any two providers’ benchmarks and performance side by side without dedicating a significant amount of time and energy.
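
The arithmetic is straightforward: a fee quoted in dollars per million converts to basis points by dividing by 100, so $50/M is 0.5 bps. The hypothetical comparison below shows how a lower headline fee can still produce a higher all-in cost:

```python
# Illustrative all-in cost comparison: fee (in $/M, converted to bps) plus
# average execution performance (impact, spread capture). All numbers are
# hypothetical.

providers = {
    "A": {"fee_usd_per_m": 50.0, "avg_performance_bps": 1.2},  # fee high, impact low
    "B": {"fee_usd_per_m": 20.0, "avg_performance_bps": 2.5},  # fee low, impact high
}

for name, p in providers.items():
    fee_bps = p["fee_usd_per_m"] / 100.0  # $50 per $1m = 0.5 bps
    all_in = fee_bps + p["avg_performance_bps"]
    print(f"Provider {name}: fee {fee_bps:.2f} bps + performance "
          f"{p['avg_performance_bps']:.2f} bps = {all_in:.2f} bps all-in")
```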

Market footprint and signalling risk

As just alluded to, net performance is critical and a key component of this is how much market impact any one algo may or may not create. There are a myriad of ways that algos can interact with sources of liquidity, both in terms of the nature of the sources and the way that child orders are actually placed into the market. Poorly designed algos or sub-optimal selection and management of liquidity sources may create significant market impact, which may be compounded by allowing other market participants to further identify the algo behaviour through the signalling risk it has created. So, if a key execution objective is to minimise impact, it is not a given that an algo will automatically achieve this and indeed, depending on the market conditions, using old school risk transfer via the phone may at times be a superior execution method.

Marketing spin and black boxes

There is a bewildering array of algo products now available from many different providers, and navigating the maze of marketing material is not straightforward. For example, on a platform such as Bloomberg, clients can access over 100 different FX algo products from multiple providers. Understanding exactly what any given product does, and how it does it, can be challenging, as providers are understandably protective of their intellectual property. Conversely, there is growing pressure for users to have full disclosure of how an algo actually works. The vast majority of market participants are uncomfortable using any product that might be perceived as a ‘black box’, and this is probably one of the reasons that the adoption of algos in FX is still largely concentrated in the more ‘simple’ product types.

Liquidity Sources

For algos that access multiple liquidity sources, there is an overhead in monitoring which liquidity providers are delivering high-quality liquidity and superior execution through, for example, low reject rates and rigorous enforcement of participant behaviour. Algo providers manage these relationships, although a client may have specific requirements in terms of where they do, and do not, want their algo orders to be worked.

Selection

Given the array of products now available, the process of selecting a specific algo for the trade in question is complex. A number of factors need to be incorporated within the selection process, including whether there is an execution benchmark or other execution objectives to consider. Clearly, it makes little sense to select a TWAP algo running over several hours if a trade has a specific benchmark of mid-market Arrival Price. And even within a certain genre of algo, such as TWAP, how do you choose between providers? With the increased focus on FX execution from asset owners, trustees, compliance and regulators, it is more important than ever to be able to record and justify such selection decisions. A rigorous process, informed by independent analytics, provides the foundations for such decisions by focusing on objective performance measurement.

Conclusions

Algos clearly have a potential role to play in any modern FX execution process, helping provide efficient, transparent execution in a cost-effective manner. However, they are no ‘magic bullet’ and won’t necessarily always deliver ‘best’ execution. It is imperative, when embarking on the use of algos, that the potential pitfalls are understood and that algos are used judiciously, when appropriate to the trading objectives, benchmarks and liquidity conditions. It makes sense to include algos on any menu of execution options and products, but it doesn’t make sense to choose them for every meal. There may be times when the fish option is preferable to the steak or the vegetarian.

(1) Greenwich Associates, October 2014, reported that FX algo usage grew from 7% of clients in 2012 to 11% in 2013, with an expectation of this rising to 18% by end 2014

(2) Greenwich Associates, May 2016, reported that sophisticated investors executed 61% of their currency trades via automated computer programs in 2015, up from 33% in 2014

(3) GreySpark Partners, September 2015, estimated that algo trading will account for approximately one third of all currency trading in 2016

Pete Eggleston

Global Code of Conduct for FX Explained

The Bank for International Settlements (BIS) Foreign Exchange Working Group (FXWG) published the first phase of the Global Code of Conduct for the Foreign Exchange Market (Global Code) today.  It also published principles for adherence to the new standards, entitled FX Global Code: Public Update on Adherence. Final publication of the complete FX Global Code is targeted for May 2017.

The Global Code is not a legally binding document. It identifies global best practices and processes that are meant to inform corporate practice, and assist in developing and reviewing internal procedures. The Global Code is also meant to help inform the development of regulation, and could be used by regulators and courts in analysing wholesale FX behaviour going forward.

Guy Debelle, Assistant Governor of the Reserve Bank of Australia and FXWG Chairman, issued a statement today underscoring that the guiding theme of the new Global Code is the promotion of a “robust, fair, liquid, open, and transparent market”, with the ultimate goal “to restore confidence and promote the effective functioning of the wholesale FX market.”

Written by representatives from 16 jurisdictions, the Global Code is meant to have wide applicability, not just to financial institutions such as banks and brokers, but also asset managers, including sovereign wealth funds, hedge funds, pension funds, and insurance companies, as well as corporate treasury departments, family offices, and electronic trading platforms. 

The first principle of the Global Code is that market participants should strive for the highest ethical standards. This includes:

  • acting honestly in dealings with clients and other market participants

  • acting fairly, dealing with clients and other market participants in a consistent and appropriately transparent manner

  • acting with integrity, particularly in avoiding and confronting questionable practices and behaviours

Offering best execution to clients will be fundamental to this principle, and establishing technologies to evidence best execution will be key to delivering fairness in a transparent manner.

A further principle is that market participants should identify and address conflicts of interest. Many firms have already undertaken great steps to alleviate conflicts of interest, including mandating independent FX transaction cost analysis providers that are independent of execution venues.

The Global Code further clearly states that market participants should handle Client orders fairly and with transparency. Noting that the FX market has traditionally operated as a principal-based market, the Global Code makes clear that “[w]here the acceptance of an order grants the Principal executing the order some discretion, it should exercise this discretion reasonably, fairly and in such a way that is not designed or intended to disadvantage the Client.”  This aligns with the new requirements under MiFID II, including those requiring best execution, greater fairness in pricing, and more transparency.

A key aspect of the final Global Code will be the expectation that market participants promote and maintain a robust control and compliance environment to effectively identify, measure, monitor, manage, and report on the risks associated with their engagement in the FX market. This section is in development, with content to be published in May 2017 as part of phase 2 of this work. Principles related to electronic trading, including algorithmic operators and users, and to the unique features of FX swap, forward, and options transactions are also to come in May 2017. We anticipate that these principles will underscore the need for compliance systems that are able to test for forms of market manipulation like front running, abuse of barrier options or inappropriate use of last look. Work on a robust control environment is also likely to point towards implementing automated systems that can verify best execution across FX products, including both voice and electronic trading, as well as algorithmic trading.

Although the new Global Code is light on specific guidance, the annexes provide useful examples that are certain to make their way into compliance handbooks worldwide. These include examples of inappropriate sharing of information in the context of offering market colour, inappropriate handling of clients’ stop loss orders, and inappropriate hedging.

There are also very helpful examples on mark up practices, including the following inappropriate example:

A Client asks a Market Participant to fill an order to sell 50M USD/JPY and to confirm the details at a later time period. The Market Participant adds a higher Mark Up than normal, by filling the order further away from the actual executed rate, but within the day’s trading range.

The Global Code emphatically states that market participants must be clear and transparent about the application of mark up, and that mark up may not be determined simply by reference to the day’s trading range.

And yet, critics will argue that the Global Code does not go far enough in banning mark up vagueness. As written, Principle 5 of the new Global Code merely requires participants to publish disclosures that help “Clients understand the determination of Mark Up, such as by indicating the factors that may contribute to the Mark Up, including those related to the nature of the specific transaction and those associated with the broader Client relationship, as well any relevant operating costs.” In other words, mere broad statements will continue to suffice and executing parties will have no obligation to be transparent about the degree or range of mark-up applied unless asked. The Code merely states that “If the Mark Up details are requested by the Client, the Market Participant should make the best effort to explain the factors that help determine the Mark Up.”

The new Global Code in a sense blesses the right of those acting in a principal capacity to keep their secret formula for mark-up under lock and key. While abuse or manipulation will not be tolerated, and while individual jurisdictions may have higher legal obligations, at least per the Global Code, market participants will have no obligation to fully disclose their precise mark-up. In this regard, the new Global Code makes the strongest case ever for the value of independent and sophisticated transaction cost analysis. In a wholesale market of sophisticated market participants, clients are expected to have their own tools to evaluate fair execution.

While the release of this first part of the new Global Code is a welcome development, there is still much to be done by regulators and companies to change the structure and culture of the FX market and restore confidence in the $5 trillion a day market. One step in the right direction is for businesses to equip themselves with technologies sophisticated enough to analyse transaction costs both pre- and post-trade, as well as in a live trading environment.

BestX is working hard to fill this space and to provide a more transparent and fair FX marketplace, agnostic of counterparty, medium or venue.

Please contact us with any questions or comments about this article or to learn more about how BestX can be part of your solution for better execution.
