Exchanges expose market data through native, binary protocols that prioritize compactness and low latency. Unlike text-based protocols (e.g., FIX), native feeds are often fixed-layout, binary messages transmitted over multicast or TCP. This article explains how native market-data protocols work, how to build robust feed handlers, and the operational trade-offs you must consider when designing low-latency market-data ingestion.
Native feeds (e.g., NASDAQ ITCH/OUCH, CME MDP, LSE Millennium) are purpose-built by exchanges. They trade human readability for efficiency: fixed-layout binary messages, compact numeric encodings, and minimal per-message framing.
Standardized feeds (e.g., FIX) are easier to integrate but add encoding overhead and often higher latency. When microseconds matter, firms prefer native binary feeds.
Characteristics to watch:

- Endianness and alignment of the wire format
- Per-channel sequence numbers and heartbeats for gap detection
- Versioned message templates that change between exchange releases
- Separate channels for snapshots versus incremental updates
Many venues provide multicast for live data and TCP replay endpoints for gaps and snapshots.
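As a concrete sketch, joining a venue's multicast group in Python looks roughly like this. The group address and port in the usage comment are placeholders, not any real venue's values; take them from your venue's connectivity guide.

```python
import socket
import struct

def open_multicast_socket(group: str, port: int, iface: str = "0.0.0.0") -> socket.socket:
    """Join a UDP multicast group and return a socket bound to the feed port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    # Allow multiple handler processes (e.g., A/B feed arbitration) on the same port.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # IP_ADD_MEMBERSHIP expects the group address and local interface address packed together.
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Usage (placeholder group/port):
# sock = open_multicast_socket("233.54.12.111", 26477)
# data, addr = sock.recvfrom(65535)
```

In production you would also size the socket receive buffer (`SO_RCVBUF`) generously, since a burst during recovery can otherwise drop packets silently.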
A robust feed handler transforms raw messages into a canonical internal representation and publishes updates to downstream consumers (OMS, SOR, risk engines, analytics).
High-level components:

- Receiver: reads packets from the NIC (multicast UDP or TCP)
- Decoder: parses binary messages into typed internal structs
- Sequencer: checks sequence numbers, detects gaps, and drives recovery
- Book builder: applies updates to the canonical order book
- Publisher: fans updates out to downstream consumers
Parsing binary messages is conceptually straightforward, but you must be careful about alignment, endianness, and bounds checking. Two common approaches:

- Manual offset-based decoding (`struct.unpack_from` in Python, `read_*` helpers in C/C++/Rust)
- Parsers generated from the exchange's published schema

Example: a simplified Python-style parser for an ITCH-like fixed-layout message.
```python
# Simplified conceptual example — not production-ready
import struct

# Example: message header has a 1-byte type and a 2-byte length
HEADER_FMT = '!BH'  # network byte order: type (1 byte), length (2 bytes)
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def parse_packet(buf: bytes):
    offset = 0
    while offset + HEADER_SIZE <= len(buf):
        msg_type, msg_len = struct.unpack_from(HEADER_FMT, buf, offset)
        offset += HEADER_SIZE
        if offset + msg_len > len(buf):
            raise ValueError("truncated message")  # never trust the wire
        body = buf[offset:offset + msg_len]
        offset += msg_len
        if msg_type == 1:    # Add order
            handle_add_order(body)
        elif msg_type == 2:  # Reduce order
            handle_reduce_order(body)
        # ... other message types
```

For C/C++/Rust, prefer reading from a byte slice with safe parsing helpers (or a parser-combinator library such as nom in Rust) to avoid undefined behavior.
Most exchange feeds provide a snapshot (full state) mechanism plus a stream of incremental updates. The typical flow:

1. Join the incremental feed and buffer arriving messages.
2. Request a snapshot (over TCP or a dedicated snapshot channel).
3. Apply the snapshot and note its last-applied sequence number.
4. Replay buffered increments with higher sequence numbers, then go live.
Implementation detail: sequence numbers and timestamps are crucial; keep them in 64-bit types to avoid wraparound.
Always treat the incremental stream as unreliable and have a fast path to re-acquire a consistent book.
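The buffer-while-recovering logic can be sketched as a small state machine. This is a simplified illustration: real handlers track this per channel, and the class and method names here are illustrative, not from any exchange SDK.

```python
from collections import deque

class GapDetector:
    """Track sequence numbers on one channel; buffer during snapshot recovery."""

    def __init__(self):
        self.expected = None    # next expected sequence number (64-bit in practice)
        self.recovering = False
        self.buffer = deque()   # increments held while a snapshot loads

    def on_increment(self, seq, msg):
        """Return messages safe to apply now (empty while recovering)."""
        if self.recovering:
            self.buffer.append((seq, msg))
            return []
        if self.expected is None or seq == self.expected:
            self.expected = seq + 1
            return [msg]
        # Gap detected: enter recovery; caller should request a snapshot/replay.
        self.recovering = True
        self.buffer.append((seq, msg))
        return []

    def on_snapshot_done(self, snapshot_seq):
        """Snapshot applied through snapshot_seq; replay newer buffered increments."""
        self.recovering = False
        kept = [(s, m) for s, m in self.buffer if s > snapshot_seq]
        self.buffer.clear()
        self.expected = (kept[-1][0] + 1) if kept else snapshot_seq + 1
        return [m for _, m in kept]
```

A realistic version would also handle out-of-order arrival within the buffer and time out a recovery that stalls.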
```rust
// Pseudocode — conceptual only
struct Book {
    bid: Option<(u64, f64)>, // (size, price)
    ask: Option<(u64, f64)>,
}

fn apply_message(book: &mut Book, msg: &Message) {
    match msg {
        Message::Add { side, price, size } => {
            if *side == Side::Bid {
                book.bid = Some((*size, *price));
            } else {
                book.ask = Some((*size, *price));
            }
        }
        Message::Trade { price, size } => {
            // adjust resting quantities at the traded price
        }
        _ => {}
    }
}
```
Low-latency systems avoid allocations and copies on the hot path. Common patterns:

- Preallocated, reusable receive buffers (no per-packet allocation)
- Zero-copy parsing directly over the receive buffer
- Fixed-size message structs and object pools
- Lock-free single-producer/single-consumer queues between threads
- Busy-polling (or kernel-bypass NICs) instead of blocking I/O
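One such pattern, zero-copy parsing out of a preallocated receive buffer, can be sketched in Python with `memoryview` slices. The 1-byte-type / 2-byte-length header below mirrors the earlier parser example and is an assumption, not any particular exchange's layout:

```python
import struct

HEADER = struct.Struct("!BH")  # precompiled: avoids re-parsing the format string
recv_buf = bytearray(65536)    # preallocated receive buffer, reused for every packet

def decode_inplace(view: memoryview, nbytes: int):
    """Yield (msg_type, body_view) pairs without copying message bodies."""
    offset = 0
    while offset + HEADER.size <= nbytes:
        msg_type, msg_len = HEADER.unpack_from(view, offset)
        offset += HEADER.size
        if offset + msg_len > nbytes:
            break  # truncated message at end of datagram
        yield msg_type, view[offset:offset + msg_len]  # memoryview slice: no copy
        offset += msg_len

# Usage: nbytes = sock.recv_into(recv_buf)
#        for msg_type, body in decode_inplace(memoryview(recv_buf), nbytes): ...
```

`recv_into` plus `memoryview` keeps the whole receive-and-decode path free of per-packet allocations; consumers must finish with a body view before the buffer is reused.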
If guaranteed delivery and durability are required, integrate a persistent pipeline (e.g., Kafka) but be aware of added latency.
Track these key metrics:

- Gap and resequence events per channel
- Snapshot recovery time
- Wire-to-publish decode latency
- Packet drops at the NIC and socket-buffer overruns
- Downstream queue depth
Graph these with histograms (p99/p999) and alert on regressions.
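As a minimal sketch of the percentile computation using only the standard library (production systems typically feed an HDR-style histogram rather than storing raw samples):

```python
import statistics

def latency_percentiles(samples_ns):
    """Compute p50/p99/p999 from a list of latency samples (nanoseconds)."""
    # quantiles(..., n=1000) returns 999 cut points: index k is the (k+1)/1000 quantile.
    qs = statistics.quantiles(samples_ns, n=1000)
    return {"p50": qs[499], "p99": qs[989], "p999": qs[998]}
```

Alerting on the p99/p999 values, not the mean, is what catches the bursty degradations that matter in market-data handling.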
Start with a correct, well-tested parser + snapshot/resync flow. Optimize the hot path once you have reliability.
Native market-data feeds are the backbone of low-latency trading systems. Building robust feed handlers requires careful attention to parsing correctness, snapshot/recovery, NIC tuning, and efficient publication to downstream consumers. With a correct baseline and a clear testing and monitoring plan, you can safely optimize for latency while maintaining correctness.
NordVarg Team is a software engineer at NordVarg specializing in high-performance financial systems and type-safe programming.