Article #1: The LGO Platform

The issue with Performance

A key feature of the LGO platform...

The LGO platform has been designed to meet the expectations of all participants and to ensure superior performance levels.

Below are the pillars that ensure this performance:

  • Low Latency: The LGO platform is built to guarantee low and stable latency, especially during periods when asset prices are highly volatile and participants considerably increase their order volume.

  • High Performance: Whatever the market's volatility, the platform can process millions of trades per second, avoiding execution risk for our clients.

  • High Availability: Finally, our technology is fully operational and available at all times, so participants can place orders on the LGO platform at any time of day, every day of the week (24/7).

...which creates major issues

Addressing the issue of latency is essential to ensure optimal platform performance and a positive experience for all participants.

Latency is obviously a major focus point for our execution engine, which matches buy and sell orders placed by our clients. It is also a key component of our market data stream.

Market data consists of product data, such as order book status and the last trades made, as well as client-specific data, such as open orders, balances for each asset, and the client's own trades.
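As an illustration, these two categories of data could be modeled as follows. This is a minimal sketch; the type and field names are our own, not LGO's actual schema:

    // Hypothetical sketch of the two market data categories described above.
    import java.math.BigDecimal;
    import java.util.List;
    import java.util.Map;

    // Public product data: shared by all participants.
    record OrderBookLevel(BigDecimal price, BigDecimal quantity) {}
    record ProductData(String product,
                       List<OrderBookLevel> bids,
                       List<OrderBookLevel> asks,
                       List<String> lastTrades) {}

    // Private client data: visible only to the owning client.
    record ClientData(String clientId,
                      List<String> openOrders,
                      Map<String, BigDecimal> balances,  // per-asset balances
                      List<String> trades) {}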

More precisely, the market data that LGO streams to clients serves as the reference information on which participants base their decisions about the new and existing orders they wish to execute. Any latency in supplying this information represents a risk for participants. Latency must remain constant in all situations, and must not increase during periods of high market volatility, so that participants can properly monitor and execute their trading strategies. It must also be the same for all participants, to avoid distorting the level of information available to each of them: efficient-market theory rests on homogeneous access to information for all participants, without which some find themselves at an advantage over others.

Although latency is the most important problem to address, the capacity of the underlying networks is also a considerable challenge. As market data is transmitted both inside our platform and outward to our clients' systems, the network inherently carries a risk of message loss. It is extremely important to be able to handle these losses without compromising the integrity of the data.

Our technical architecture as the solution to performance challenges

To begin, here is a first diagram representing the technological architecture of our solution:

Log Concentration System and Market Data Snapshots

First, we felt it was essential not to send all the market data every time it was updated, but only the data that had actually changed, in order to limit the amount of data to be processed and transferred over the network.

To do so, we created an event log concentration system capable of managing subscriptions to execution events and of providing initial snapshots. This made the execution system independent from the consumption system and prevented the latter from impacting the performance of the former. One benefit of this architecture is that it lets us initialize the market data calculation system and send an initial snapshot to clients when they connect.

What we call a snapshot is a first batch containing all market data at a given time. A market data consumer must receive a snapshot before it can receive any other data flows. Because serving an initial snapshot is very costly for the execution engine, we placed an intermediate component in front of it that provides this initial snapshot whenever a client connects to the system.
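Here is a minimal sketch of this snapshot-then-updates contract. The class, sequencing scheme, and method names are our own illustrative assumptions, not LGO's actual code:

    // Sketch: a consumer must apply a snapshot before any incremental update.
    // Sequence numbers guarantee updates are applied in order, exactly once.
    import java.util.HashMap;
    import java.util.Map;

    class MarketDataView {
        private final Map<String, String> state = new HashMap<>();
        private long lastSequence = -1;  // -1: no snapshot received yet

        // Full state at a point in time, served by the intermediate
        // snapshot component rather than by the execution engine.
        void applySnapshot(long sequence, Map<String, String> fullState) {
            state.clear();
            state.putAll(fullState);
            lastSequence = sequence;
        }

        // Incremental update: only the entries that actually changed.
        void applyUpdate(long sequence, Map<String, String> changedEntries) {
            if (lastSequence < 0) {
                throw new IllegalStateException("snapshot required before updates");
            }
            if (sequence <= lastSequence) {
                return;  // duplicate or stale update: ignore
            }
            state.putAll(changedEntries);
            lastSequence = sequence;
        }
    }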

Likewise, computing client-specific data has a cost both in calculation and in bandwidth, so we felt it was important to compute only the data needed by the clients currently connected. We therefore had to implement a dynamic subscription mechanism for client-specific market data:

Dynamic Data Subscription System

A dynamic subscription system allows us to start only the data producers needed by the clients currently connected, while guaranteeing performance and reliable message delivery. This system lets us constantly adjust the computing power needed to produce market data, so that each type of data is calculated only once and then sent to all subscribed broadcasting systems.

The principle is as follows: when logging in, the client opens a session through which they subscribe to a set of data streams. The market data system responds with an initial snapshot containing the current state of the data for each stream, after which the market data is updated for as long as the session remains active.
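In interface terms, the session lifecycle looks roughly like this. The interface and method names below are our own assumptions, not LGO's actual API:

    // Sketch of the dynamic subscription handshake described above.
    interface MarketDataSession {
        // Subscribing triggers an initial snapshot for the stream, after
        // which the listener receives updates while the session is open.
        void subscribe(String stream, StreamListener listener);
        void close();
    }

    interface StreamListener {
        void onSnapshot(long sequence, byte[] currentState);
        void onUpdate(long sequence, byte[] changedData);
    }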

This innovation helps to limit computing and bandwidth costs to ensure efficient market data access.

After making sure that market data is streamed efficiently, our second focus was to separate data producers from data consumers, while ensuring the consistency of the data processed.

The approach chosen by the Thomson Reuters Enterprise Platform (TREP) was particularly appealing to us. This system distinguishes between two sub-systems: advanced data hubs, which retrieve and aggregate market data, and advanced distribution servers, which build real-time projections of this data. Since the data is already concentrated in the advanced data hub, the distribution servers do not impact the performance of the initial data producer. Thanks to this architecture, it becomes easier to scale data flows, with the ability to independently increase the processing capacity of data producers or data consumers. We designed our own systems around the "network backbone" that connects these two sub-systems and decouples data consumption from data production.

As shown in the diagram above, we were inspired by the TREP architecture to build the core of our market data infrastructure:

  • Data producers are started on demand and compute market data shared by a set of customers.
  • Each customer has a connection to a data consumer that aggregates, on their behalf, the data to which they subscribe.

To avoid wasting power computing data that is no longer needed, a mechanism constantly checks that the computed data is still in use. For instance, when a client disconnects, we can stop any data producers that they were the only one using.
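One simple way to implement such a check is reference counting per producer. The sketch below uses our own naming assumptions, and a real implementation would need to serialize the start/stop transitions more carefully:

    // Sketch: stop a producer when its last subscriber disconnects.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    class ProducerRegistry {
        private final Map<String, AtomicInteger> refCounts = new ConcurrentHashMap<>();

        void onSubscribe(String stream) {
            // Start the producer lazily on the first subscription.
            if (refCounts.computeIfAbsent(stream, s -> new AtomicInteger())
                         .incrementAndGet() == 1) {
                startProducer(stream);
            }
        }

        void onUnsubscribe(String stream) {
            AtomicInteger count = refCounts.get(stream);
            if (count != null && count.decrementAndGet() == 0) {
                refCounts.remove(stream);
                stopProducer(stream);  // nobody needs this data anymore
            }
        }

        private void startProducer(String stream) { /* launch computation */ }
        private void stopProducer(String stream)  { /* release resources */ }
    }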

High Performance Data Communication

The next step is to provide a reliable and very high performance communication model between our different modules.

To do this, we drew on research carried out on similar systems, notably the paper "Topics in High-Performance Messaging" (R. A. Van Valzah, T. L. Montgomery & E. Bowden, 2011). This reading showed us that UDP multicast (a technique for one-to-many communication over an IP network) was going to be a necessity, but we still had to add reliability to a protocol that was not designed with it in mind.
We therefore relied on the IETF's work on UDP reliability, including the draft "RELIABLE UDP PROTOCOL" (T. Bova & T. Krivoruchka, 1999) and the RFC "Multicast Negative-Acknowledgment (NACK) Building Blocks" (B. Adamson, C. Bormann, M. Handley & J. Macker, 2008).

Based on this research, we built a high-performance data transmission system between producers and consumers. Since one producer can serve data to multiple consumers, we needed a mechanism that allows multiple clients subscribing to the same source to be served.

The mechanism works as follows:

  1. The system receives execution events through the log concentrator.
  2. These events are applied to the local state of a data producer.
  3. Once the new state has been built, we compute the difference between the current and the previous state.
  4. The difference is sent to the data consumers subscribed to the data stream, which then transmit the message to end customers.
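Put together, one cycle of these four steps can be sketched as follows. The class and method names are illustrative assumptions, and deletions are omitted for brevity:

    // Sketch of one producer cycle: execution event in, diff out.
    import java.util.HashMap;
    import java.util.Map;

    class DataProducer {
        private Map<String, String> currentState = new HashMap<>();

        // Steps 2-4 from the list above, for a single execution event
        // already delivered by the log concentrator (step 1).
        Map<String, String> onExecutionEvent(Map<String, String> event) {
            Map<String, String> previousState = currentState;

            // 2. Apply the event to a copy of the local state.
            Map<String, String> nextState = new HashMap<>(previousState);
            nextState.putAll(event);

            // 3. Compute the difference between the new and previous state.
            Map<String, String> diff = new HashMap<>();
            for (Map.Entry<String, String> e : nextState.entrySet()) {
                if (!e.getValue().equals(previousState.get(e.getKey()))) {
                    diff.put(e.getKey(), e.getValue());
                }
            }

            currentState = nextState;
            // 4. The diff is what gets sent to subscribed data consumers.
            return diff;
        }
    }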

Internal Communication Protocol

Most available communication protocols are very expensive in terms of message encoding and decoding. We looked at the standards used by the financial industry, such as FIX, which led us to select a newer standard called SBE (Simple Binary Encoding) and evaluate its integration as our internal communication protocol.
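To illustrate why SBE-style encoding is cheap, here is a hand-rolled fixed-offset layout in the same spirit. This is not SBE's generated code, just a sketch of the idea: every field lives at a known offset, so decoding requires no parsing step:

    // Sketch of fixed-offset binary encoding in the spirit of SBE.
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    final class TradeCodec {
        // Layout: price (8 bytes) | quantity (8 bytes) | instrumentId (4 bytes)
        static final int PRICE_OFFSET = 0;
        static final int QTY_OFFSET = 8;
        static final int INSTRUMENT_OFFSET = 16;
        static final int MESSAGE_SIZE = 20;

        static void encode(ByteBuffer buf, long price, long qty, int instrumentId) {
            buf.order(ByteOrder.LITTLE_ENDIAN);
            buf.putLong(PRICE_OFFSET, price);        // fixed-point price
            buf.putLong(QTY_OFFSET, qty);            // fixed-point quantity
            buf.putInt(INSTRUMENT_OFFSET, instrumentId);
        }

        static long decodePrice(ByteBuffer buf) {
            // Direct read at a known offset: no allocation, no parsing.
            return buf.order(ByteOrder.LITTLE_ENDIAN).getLong(PRICE_OFFSET);
        }
    }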

At this stage we were able to begin development work on a prototype to implement the different concepts we had identified. However, a major problem emerged: how do we correctly synchronize the work of several logical units?

Fortunately, the work on the disruptor pattern presented in the paper "Disruptor: High performance alternative to bounded queues for exchanging data between concurrent threads" (M. Thompson, D. Farley, M. Barker, P. Gee & A. Stewart, 2011) provided a first glimpse of possible solutions. By combining a ring buffer in unmanaged memory with pipelining, we were able to orchestrate work across a set of threads while eliminating contention risks. Moreover, this system allows us to manage output contention on data flows: by placing a disruptor at the output of each client's data flow, we can effectively isolate that flow so that it cannot slow down the rest of the market data operations.
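As a concrete illustration of the pattern, here is a minimal use of the open-source LMAX Disruptor library (3.x API); the event type and handler are our own simplified examples, not LGO's actual pipeline:

    // Minimal LMAX Disruptor example: one publisher, one consumer,
    // exchanging events through a pre-allocated ring buffer.
    import com.lmax.disruptor.RingBuffer;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    public class DisruptorExample {
        // Mutable event, pre-allocated once per ring buffer slot.
        static class MarketDataEvent {
            long value;
        }

        public static void main(String[] args) {
            int bufferSize = 1024;  // must be a power of two

            Disruptor<MarketDataEvent> disruptor = new Disruptor<>(
                    MarketDataEvent::new, bufferSize, DaemonThreadFactory.INSTANCE);

            // Consumer: runs on its own thread, isolated from the publisher.
            disruptor.handleEventsWith(
                    (event, sequence, endOfBatch) ->
                            System.out.println("consumed " + event.value));
            disruptor.start();

            RingBuffer<MarketDataEvent> ringBuffer = disruptor.getRingBuffer();
            for (long i = 0; i < 10; i++) {
                // Claim a slot, fill it in place, publish: no locks involved.
                ringBuffer.publishEvent((event, sequence, v) -> event.value = v, i);
            }
        }
    }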

Final Fixes

Network issues were our final challenge. The TCP protocol does not meet our performance constraints, which led us to build a solution on top of a reliable UDP-based communication system.

To do this, we used the method described in the IETF's RFC 5740, which uses negative acknowledgments to guarantee the reliability of a communication. Instead of acknowledging each message individually, as TCP does, this method consists of sending a sequence number with each message. If a data consumer detects a jump in the sequence, it flags the gap, and message delivery resumes from the sequence number it did not receive.
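A simplified view of the receiver side of this scheme is sketched below. It only shows gap detection, not RFC 5740's full NORM protocol, and the retransmission request itself is left as a comment:

    // Sketch: NACK-based gap detection on the consumer side.
    class SequenceTracker {
        private long expected = 0;  // next sequence number we expect

        // Returns the first missing sequence if a gap is detected,
        // otherwise -1. A real implementation would NACK the whole
        // missing range and buffer out-of-order messages meanwhile.
        long onMessage(long sequence) {
            if (sequence == expected) {
                expected++;              // in-order delivery: nothing to do
                return -1;
            }
            if (sequence > expected) {
                long missing = expected; // gap: request retransmission
                expected = sequence + 1; // (simplified; see comment above)
                return missing;
            }
            return -1;  // duplicate or retransmitted message: ignore
        }
    }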
For co-located processes, we use a shared memory area for exchanging messages (IPC). Finally, external communications go through a WebSocket API or an HTTPS REST interface.

Results and impact on the LGO Platform

The solutions and implementation decisions we made while building the core of our exchange platform led us to a data dissemination model that makes it possible to scale data consumers horizontally without affecting the performance of data producers. The higher-level results of this work are visible on the LGO spot exchange platform, which:

  • Guarantees a minimum level of performance and latency even under high load

  • Isolates the calculation of each customer's dedicated market data so that it cannot impact other customers' perceived performance

  • Prevents a performance issue on data consumption from impacting the execution engine

  • Offers low average latency and memory consumption gains

  • Optimizes the network to manage potential message loss

  • Provides market data access capabilities regardless of the underlying protocol used by the customer

As we have emphasized before, we consider our technology to be our greatest asset. We have designed it to serve the needs of institutional clients looking for industry-best performance and efficiency. Our standards of excellence permeate all of our processes and the technology behind the LGO exchange.

In our final article we will talk about a crucial component of any exchange: security.

LGO Technology Team