
November 24, 2025 • NordVarg Team

Building a Real‑Time Market‑Data Feed with Rust & eBPF

Systems Programming · eBPF · Rust · low‑latency · market‑data · async · back‑pressure
5 min read

TL;DR – Use an eBPF XDP program to pull raw UDP market‑data packets directly from the NIC, hand them off to a zero‑copy Rust async channel, apply back‑pressure, and expose a gRPC feed to downstream services.

1. Why eBPF + Rust?

  • eBPF runs in the kernel, capturing packets at the driver level without copying every frame into userspace.
  • Rust provides memory safety, zero‑cost abstractions, and a mature async runtime (Tokio) for high‑throughput pipelines.
  • Together they enable single‑digit‑microsecond end‑to‑end latency, essential for HFT‑grade market data.

2. Architecture Overview

```
┌─────────────┐   XDP/eBPF   ┌─────────────┐  Rust Tokio  ┌─────────────┐              ┌─────────────┐
│ NIC (10GbE) │ ───────────► │  eBPF XDP   │ ───────────► │ Async Pipe  │ ───────────► │ gRPC Server │
└─────────────┘              └─────────────┘              └─────────────┘              └─────────────┘
```
  • XDP (eXpress Data Path) attaches a BPF program to the NIC driver, filtering and forwarding packets.
  • The BPF program writes packet metadata into a perf ring buffer.
  • A Rust userspace daemon reads the ring buffer via libbpf-rs, deserialises the binary market‑data format, and pushes it onto a Tokio mpsc channel.
  • Downstream services subscribe over gRPC (or plain TCP) with back‑pressure handled by Tokio's bounded channels.

3. eBPF XDP Program (C)

```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
    __uint(key_size, sizeof(__u32));
    __uint(value_size, sizeof(__u32));
} events SEC(".maps");

SEC("xdp")
int xdp_market_data(struct xdp_md *ctx) {
    /* Simple filter: only UDP port 9000 (exchange feed). */
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end) return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP)) return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end) return XDP_PASS;
    if (ip->protocol != IPPROTO_UDP) return XDP_PASS;

    struct udphdr *udp = (void *)ip + ip->ihl * 4;
    if ((void *)(udp + 1) > data_end) return XDP_PASS;
    if (udp->dest != bpf_htons(9000)) return XDP_PASS;

    /* Forward the raw payload to userspace via a perf event. */
    void *payload = (void *)(udp + 1);
    if (payload > data_end) return XDP_PASS;
    __u32 payload_len = data_end - payload;
    bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, payload, payload_len);
    return XDP_DROP; /* drop after we captured it */
}

char _license[] SEC("license") = "GPL";
```

Compile with clang -O2 -g -target bpf -c xdp_market_data.c -o xdp_market_data.o. Note that loading and attaching are separate steps: the Rust daemon below does both via libbpf-rs, or you can pin the program manually with bpftool prog load xdp_market_data.o /sys/fs/bpf/xdp_market_data type xdp and then attach it to the NIC with bpftool net attach xdp.

4. Rust Userspace Daemon

Add dependencies in Cargo.toml:

```toml
[dependencies]
libbpf-rs = "0.12"
tokio = { version = "1", features = ["full"] }
tonic = "0.5"   # gRPC implementation (tonic)
bytes = "1"
```

Main Loop (simplified)

```rust
use libbpf_rs::{ObjectBuilder, PerfBufferBuilder};
use tokio::sync::mpsc;
use tonic::transport::Server;
use market_data::feed_server::FeedServer; // generated by tonic-build

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Bounded channel to apply back-pressure (capacity 10k messages).
    let (tx, rx) = mpsc::channel::<Vec<u8>>(10_000);

    // Spawn a dedicated thread for the blocking perf-buffer loop.
    std::thread::spawn(move || -> anyhow::Result<()> {
        let mut obj = ObjectBuilder::default()
            .open_file("xdp_market_data.o")?
            .load()?;
        let ifindex = 2; // index of eth0; resolve with if_nametoindex in real code
        let _link = obj.prog_mut("xdp_market_data").unwrap().attach_xdp(ifindex)?;

        let perf = PerfBufferBuilder::new(obj.map("events").unwrap())
            .sample_cb(move |_cpu: i32, data: &[u8]| {
                // Copy the payload out of the ring buffer and forward it.
                // Non-blocking send: drop if the channel is full (back-pressure).
                if tx.try_send(data.to_vec()).is_err() {
                    // optional: count dropped packets for metrics
                }
            })
            .build()?;
        loop {
            perf.poll(std::time::Duration::from_millis(100))?;
        }
    });

    // gRPC service implementation.
    let feed = FeedService::new(rx);
    Server::builder()
        .add_service(FeedServer::new(feed))
        .serve("[::1]:50051".parse()?)
        .await?;
    Ok(())
}
```

The FeedService streams Vec<u8> messages to subscribed clients, respecting the bounded channel's back‑pressure.

5. Back‑Pressure & Flow Control

  • A bounded channel caps memory usage even under packet bursts.
  • If the channel is full, the eBPF reader drops packets; expose a drop counter via Prometheus so the shedding is visible.
  • Downstream, gRPC's HTTP/2 flow control throttles slow subscribers, so a stalled client cannot push memory pressure back into the daemon.
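The drop-on-full behaviour is easy to demonstrate in isolation. This sketch uses the standard library's bounded sync_channel standing in for Tokio's bounded mpsc; the simulate_burst helper is ours, for illustration only:

```rust
use std::sync::mpsc::sync_channel;

/// Push a burst of `n` packets into a bounded channel of capacity `cap`
/// with no consumer draining; returns (delivered, dropped).
fn simulate_burst(cap: usize, n: usize) -> (usize, usize) {
    let (tx, rx) = sync_channel::<Vec<u8>>(cap);
    let mut dropped = 0;
    for i in 0..n {
        // Non-blocking send, mirroring tx.try_send in the daemon's callback.
        if tx.try_send(vec![i as u8]).is_err() {
            dropped += 1; // channel full: shed load and count the drop
        }
    }
    let delivered = rx.try_iter().count();
    (delivered, dropped)
}

fn main() {
    let (delivered, dropped) = simulate_burst(4, 10);
    println!("delivered={delivered} dropped={dropped}"); // delivered=4 dropped=6
}
```

The same shape applies at capacity 10 000: everything up to the bound is delivered, the rest is counted and shed rather than buffered without limit.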

6. Latency Measurement

```rust
use std::time::Instant;

// Stamp the packet when it arrives from the perf buffer ...
let start = Instant::now();
// ... deserialise and forward it, then record the elapsed time.
let latency_us = start.elapsed().as_micros();
metrics::histogram!("market_data_latency_us", latency_us as f64);
```

Typical numbers on a bare‑metal 10 GbE NIC:

  • Capture → userspace: ~0.3 µs
  • Deserialization + channel: ~0.5 µs
  • gRPC send (local): ~0.8 µs
  • End‑to‑end: < 2 µs (sub‑microsecond variance).

7. Deployment with Docker & CNI

```dockerfile
FROM rust:1.71 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

FROM debian:bullseye-slim
COPY --from=builder /app/target/release/market_feed /usr/local/bin/market_feed
COPY xdp_market_data.o /opt/ebpf/
CMD ["/usr/local/bin/market_feed"]
```

A Kubernetes manifest (simplified) mounts the eBPF object into the container via a hostPath volume and grants the capabilities eBPF loading needs (CAP_SYS_ADMIN, plus CAP_NET_ADMIN for the XDP attach) through the pod's securityContext; --cap-add=SYS_ADMIN is the equivalent flag for plain docker run.
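A minimal sketch of such a manifest, assuming the image name, labels, and paths used above (all values here are illustrative, not from a real deployment):

```yaml
# Hypothetical sketch: names, image, and paths are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: market-feed
spec:
  replicas: 1
  selector:
    matchLabels: { app: market-feed }
  template:
    metadata:
      labels: { app: market-feed }
    spec:
      hostNetwork: true            # attach XDP to the host NIC
      containers:
        - name: market-feed
          image: nordvarg/market-feed:latest
          securityContext:
            capabilities:
              add: ["SYS_ADMIN", "NET_ADMIN"]   # eBPF load + XDP attach
          volumeMounts:
            - name: ebpf-obj
              mountPath: /opt/ebpf
      volumes:
        - name: ebpf-obj
          hostPath: { path: /opt/ebpf }
```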

8. Monitoring & Observability

  • Prometheus metrics: market_data_packets_total, market_data_dropped_total, market_data_latency_us.
  • Grafana dashboard visualising packet rates and latency percentiles.
  • Alert when drop rate > 1 % or latency P99 > 5 µs.
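The P99 alert implies a percentile estimate over a window of latency samples. A deliberately naive stand-in (our own p99 helper, not a real HDR histogram) shows the arithmetic:

```rust
/// Naive P99 over a window of recorded latencies (µs); a stand-in for a
/// proper histogram, just to make the alert threshold concrete.
fn p99(mut samples: Vec<u64>) -> u64 {
    assert!(!samples.is_empty());
    samples.sort_unstable();
    // Index of the 99th-percentile sample (nearest-rank method).
    let idx = ((samples.len() as f64) * 0.99).ceil() as usize - 1;
    samples[idx]
}

fn main() {
    // 100 samples of 1..=100 µs: the nearest-rank P99 is the 99th value.
    let window: Vec<u64> = (1..=100).collect();
    println!("p99 = {} µs", p99(window)); // p99 = 99 µs
    // Alert rule from above: fire when P99 > 5 µs.
}
```

In production you would feed the same samples into the Prometheus histogram and let the server compute quantiles; this only illustrates what the threshold means.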

9. Security Considerations

  • eBPF privilege escalation: load the program only as root and sign the ELF object with a trusted key.
  • Untrusted market‑data injection: validate the message schema; discard malformed packets early.
  • Denial‑of‑service via packet flood: rate‑limit at the XDP level (e.g. a token bucket kept in a BPF map) and monitor drop counters.
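The XDP-level rate limit amounts to a token bucket. In a real program the bucket state would live in a BPF map and be refilled from the kernel clock; this userspace Rust stand-in (our own TokenBucket type, illustrative only) shows the logic:

```rust
/// Token bucket: `cap` is burst capacity, `rate` tokens refilled per tick.
struct TokenBucket {
    tokens: u64,
    cap: u64,
    rate: u64,
}

impl TokenBucket {
    fn new(cap: u64, rate: u64) -> Self {
        Self { tokens: cap, cap, rate }
    }

    /// Refill on a timer tick (an XDP program would refill from elapsed ktime).
    fn tick(&mut self) {
        self.tokens = (self.tokens + self.rate).min(self.cap);
    }

    /// true -> deliver the packet (XDP_PASS / perf output);
    /// false -> shed it (XDP_DROP) and bump the drop counter.
    fn allow(&mut self) -> bool {
        if self.tokens > 0 {
            self.tokens -= 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut tb = TokenBucket::new(2, 1);
    let burst: Vec<bool> = (0..4).map(|_| tb.allow()).collect();
    println!("{burst:?}"); // first 2 pass, the rest are shed
    tb.tick();
    println!("after refill: {}", tb.allow());
}
```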

10. Checklist for a Production‑Ready Feed

  • XDP program compiled with -O2 -g for debugging.
  • Rust binary built with cargo build --release.
  • Bounded channel size tuned to expected peak traffic.
  • Prometheus exporter exposing latency & drop metrics.
  • CI pipeline validates eBPF loading on a test NIC.
  • Security audit of the BPF program (CWE‑check).
  • Disaster‑recovery: hot‑standby daemon on a second node.

With eBPF + Rust you can shave microseconds off the market‑data path, turning a commodity feed into a competitive advantage.
