December 31, 2024 • NordVarg Team

Time-Series Databases Comparison for Trading

Data Engineering · timescaledb · influxdb · questdb · clickhouse · databases · performance
9 min read

Trading systems generate massive volumes of time-series data: tick data, order flow, and performance metrics. After benchmarking four production time-series databases over three years, I've learned that database choice dramatically impacts query performance and storage costs. This article compares TimescaleDB, InfluxDB, QuestDB, and ClickHouse.

Use Cases#

Our time-series data workloads:

  • Tick data: 50M ticks/day, 1μs precision timestamps
  • Market data: Order book snapshots every 100ms, 15 symbols
  • Execution metrics: Order latency, fill rates, slippage
  • Risk metrics: Real-time P&L, position, VaR updates

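For a rough sense of scale from the tick workload above, a back-of-envelope sizing helps frame the storage numbers later in this article. The ~40 bytes per stored tick is an illustrative assumption, not a measured figure:

```python
# Back-of-envelope sizing for the tick-data workload above.
# Assumes ~40 bytes per stored tick (timestamp, symbol, price, size,
# side, exchange) -- an illustrative figure, not a measured one.
TICKS_PER_DAY = 50_000_000
BYTES_PER_TICK = 40

raw_per_day_gb = TICKS_PER_DAY * BYTES_PER_TICK / 1e9
raw_90_days_gb = raw_per_day_gb * 90  # matches the 90-day retention window

print(f"raw/day: {raw_per_day_gb:.1f} GB, 90 days: {raw_90_days_gb:.0f} GB")
```

At roughly 2 GB/day raw, compression ratios in the 2-7x range translate directly into real money over a 90-day window.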
Database Comparison#

TimescaleDB (PostgreSQL Extension)#

sql
-- Create hypertable for tick data
CREATE TABLE ticks (
    time TIMESTAMPTZ NOT NULL,
    symbol TEXT NOT NULL,
    price NUMERIC(10, 2) NOT NULL,
    size INTEGER NOT NULL,
    side TEXT NOT NULL,
    exchange TEXT NOT NULL
);

SELECT create_hypertable('ticks', 'time', chunk_time_interval => INTERVAL '1 day');

-- Create indexes
CREATE INDEX ON ticks (symbol, time DESC);
CREATE INDEX ON ticks (exchange, time DESC);

-- Compression policy
ALTER TABLE ticks SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'symbol, exchange',
    timescaledb.compress_orderby = 'time DESC'
);

SELECT add_compression_policy('ticks', INTERVAL '7 days');

-- Retention policy (keep 90 days)
SELECT add_retention_policy('ticks', INTERVAL '90 days');

-- Continuous aggregates for 1-minute OHLCV
CREATE MATERIALIZED VIEW ohlcv_1min
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 minute', time) AS bucket,
    symbol,
    FIRST(price, time) AS open,
    MAX(price) AS high,
    MIN(price) AS low,
    LAST(price, time) AS close,
    SUM(size) AS volume
FROM ticks
GROUP BY bucket, symbol;

-- Refresh policy
SELECT add_continuous_aggregate_policy('ohlcv_1min',
    start_offset => INTERVAL '1 hour',
    end_offset => INTERVAL '1 minute',
    schedule_interval => INTERVAL '1 minute');

Queries:

sql
-- Recent ticks for symbol
SELECT * FROM ticks
WHERE symbol = 'AAPL'
  AND time > NOW() - INTERVAL '1 hour'
ORDER BY time DESC
LIMIT 1000;
-- 12ms (with index)

-- VWAP calculation
SELECT
    time_bucket('5 minutes', time) AS bucket,
    symbol,
    SUM(price * size) / SUM(size) AS vwap,
    SUM(size) AS volume
FROM ticks
WHERE symbol = 'AAPL'
  AND time > NOW() - INTERVAL '1 day'
GROUP BY bucket, symbol
ORDER BY bucket DESC;
-- 45ms

-- Multi-symbol aggregation
SELECT
    symbol,
    COUNT(*) AS tick_count,
    LAST(price, time) AS last_price
FROM ticks
WHERE time > NOW() - INTERVAL '1 hour'
GROUP BY symbol;
-- 180ms (15 symbols)

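For bulk loads into TimescaleDB, `COPY` is typically far faster than row-by-row `INSERT`. Below is a minimal sketch of the serialization side, assuming a hypothetical in-memory tick format; the actual `copy_expert` call needs a live psycopg2 connection, so it is shown only as a comment:

```python
import io

def ticks_to_copy_buffer(ticks):
    """Serialize tick dicts into tab-separated text for PostgreSQL COPY.

    `ticks` is a hypothetical in-memory format: a list of dicts with
    time/symbol/price/size/side/exchange keys, matching the table above.
    """
    buf = io.StringIO()
    for t in ticks:
        buf.write(f"{t['time']}\t{t['symbol']}\t{t['price']}\t"
                  f"{t['size']}\t{t['side']}\t{t['exchange']}\n")
    buf.seek(0)
    return buf

# With a live psycopg2 cursor, the buffer streams straight in:
# cur.copy_expert("COPY ticks FROM STDIN", ticks_to_copy_buffer(batch))
```
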
InfluxDB 2.x#

python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS
import time

# Setup
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="trading")
write_api = client.write_api(write_options=SYNCHRONOUS)
query_api = client.query_api()

# Write a single tick
def write_tick(symbol: str, price: float, size: int, side: str, exchange: str):
    point = Point("ticks") \
        .tag("symbol", symbol) \
        .tag("exchange", exchange) \
        .tag("side", side) \
        .field("price", price) \
        .field("size", size) \
        .time(time.time_ns())

    write_api.write(bucket="market-data", record=point)

# Batch writes (much faster); ticks_batch is an iterable of tick dicts
points = []
for tick in ticks_batch:
    points.append(
        Point("ticks")
        .tag("symbol", tick['symbol'])
        .tag("exchange", tick['exchange'])
        .field("price", tick['price'])
        .field("size", tick['size'])
        .time(tick['timestamp_ns'])
    )

write_api.write(bucket="market-data", record=points)

Flux Queries:

flux
// Recent ticks
from(bucket: "market-data")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "ticks")
  |> filter(fn: (r) => r.symbol == "AAPL")
  |> sort(columns: ["_time"], desc: true)
  |> limit(n: 1000)
// 8ms

// VWAP calculation
from(bucket: "market-data")
  |> range(start: -1d)
  |> filter(fn: (r) => r._measurement == "ticks")
  |> filter(fn: (r) => r.symbol == "AAPL")
  |> aggregateWindow(every: 5m, fn: (tables=<-, column) =>
      tables
        |> reduce(
            fn: (r, accumulator) => ({
                sum_pv: accumulator.sum_pv + r.price * r.size,
                sum_v: accumulator.sum_v + r.size
            }),
            identity: {sum_pv: 0.0, sum_v: 0}
        )
        |> map(fn: (r) => ({ _value: r.sum_pv / r.sum_v }))
    )
// 62ms

// Downsampling task (background)
option task = {name: "downsample-1min", every: 1m}

from(bucket: "market-data")
  |> range(start: -task.every)
  |> filter(fn: (r) => r._measurement == "ticks")
  |> aggregateWindow(every: 1m, fn: (tables=<-, column) => tables
      |> reduce(
          fn: (r, accumulator) => ({
              open: if exists accumulator.open then accumulator.open else r.price,
              high: if r.price > accumulator.high then r.price else accumulator.high,
              low: if r.price < accumulator.low then r.price else accumulator.low,
              close: r.price,
              volume: accumulator.volume + r.size
          }),
          identity: {open: 0.0, high: -999999.0, low: 999999.0, close: 0.0, volume: 0}
      ))
  |> to(bucket: "market-data-1min")

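The OHLCV logic inside the downsampling task above is easier to follow in plain Python. This is a conceptual sketch using a hypothetical `(ts_ns, price, size)` tuple format, not the production path:

```python
def ohlcv_1min(ticks):
    """Bucket (ts_ns, price, size) ticks into 1-minute OHLCV bars.

    Mirrors the Flux downsampling task above; `ticks` must be sorted by
    timestamp so first/last per bucket map to open/close correctly.
    """
    bars = {}
    for ts_ns, price, size in ticks:
        minute = ts_ns // 60_000_000_000  # nanoseconds per minute
        if minute not in bars:
            # First tick in the bucket seeds every field
            bars[minute] = {"open": price, "high": price,
                            "low": price, "close": price, "volume": size}
        else:
            b = bars[minute]
            b["high"] = max(b["high"], price)
            b["low"] = min(b["low"], price)
            b["close"] = price  # last tick seen wins
            b["volume"] += size
    return bars
```

Seeding the bar from the first tick also sidesteps the sentinel-identity awkwardness (`-999999.0` / `999999.0`) that the Flux `reduce` needs.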
QuestDB#

sql
-- Create table
CREATE TABLE ticks (
    timestamp TIMESTAMP,
    symbol SYMBOL capacity 256 CACHE,
    price DOUBLE,
    size INT,
    side SYMBOL capacity 2 CACHE,
    exchange SYMBOL capacity 16 CACHE
) timestamp(timestamp) PARTITION BY DAY;

-- Sample data
INSERT INTO ticks VALUES
    (systimestamp(), 'AAPL', 150.25, 100, 'buy', 'NASDAQ');

-- Optimized bulk insert via ILP (Influx Line Protocol)
-- TCP: questdb_host:9009
-- Note the "i" suffix: ILP requires it for integer fields
-- Format: ticks,symbol=AAPL,exchange=NASDAQ price=150.25,size=100i,side="buy" 1638316800000000000

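The ILP serialization side can be sketched with the standard library alone. The `tick_to_ilp` helper and host name are illustrative (QuestDB also ships client libraries that handle this for you), and the socket send is commented out since it needs a running server:

```python
def tick_to_ilp(symbol, exchange, side, price, size, ts_ns):
    """Format one tick as an Influx Line Protocol row for QuestDB's
    TCP ingestion port. Tag/field names match the schema above; note
    the "i" suffix ILP requires on integer fields."""
    return (f"ticks,symbol={symbol},exchange={exchange} "
            f"price={price},size={size}i,side=\"{side}\" {ts_ns}")

# Sending is a plain TCP write (sketch; needs a live QuestDB instance):
# import socket
# with socket.create_connection(("questdb_host", 9009)) as s:
#     line = tick_to_ilp("AAPL", "NASDAQ", "buy", 150.25, 100,
#                        1638316800000000000)
#     s.sendall((line + "\n").encode())
```
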
Queries:

sql
-- Recent ticks (native time syntax)
SELECT * FROM ticks
WHERE symbol = 'AAPL'
  AND timestamp > dateadd('h', -1, now())
ORDER BY timestamp DESC
LIMIT 1000;
-- 3ms (SIMD vectorization)

-- VWAP with sample by
SELECT
    timestamp,
    symbol,
    sum(price * size) / sum(size) AS vwap,
    sum(size) AS volume
FROM ticks
WHERE symbol = 'AAPL'
  AND timestamp > dateadd('d', -1, now())
SAMPLE BY 5m ALIGN TO CALENDAR;
-- 18ms (parallel execution)

-- ASOF join (latest quote at or before each trade, matched per symbol)
SELECT
    trades.timestamp AS trade_time,
    trades.symbol,
    trades.price AS trade_price,
    quotes.price AS quote_price,
    (trades.price - quotes.price) AS slippage
FROM trades
ASOF JOIN ticks quotes ON (symbol)
WHERE trades.timestamp > dateadd('h', -1, now());
-- 25ms (critical for microstructure analysis)

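What ASOF JOIN computes is easy to state in a few lines of Python: for each trade, take the latest quote at or before the trade's timestamp. This is a conceptual illustration over hypothetical `(ts, price)` tuples, not how QuestDB implements it:

```python
from bisect import bisect_right

def asof_join(trades, quotes):
    """For each trade, attach the latest quote with ts <= trade ts.

    Both inputs are lists of (ts, price) sorted by ts; returns
    (trade_ts, trade_price, quote_price) tuples, None if no prior quote.
    """
    quote_ts = [ts for ts, _ in quotes]
    out = []
    for ts, px in trades:
        i = bisect_right(quote_ts, ts) - 1  # index of latest quote <= ts
        out.append((ts, px, quotes[i][1] if i >= 0 else None))
    return out
```

Because both sides are time-ordered, the database can do this as a streaming merge rather than a binary search per row, which is why it stays in the tens of milliseconds.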
ClickHouse#

sql
-- Create table with MergeTree engine
CREATE TABLE ticks (
    timestamp DateTime64(6),
    symbol LowCardinality(String),
    price Decimal(10, 2),
    size UInt32,
    side LowCardinality(String),
    exchange LowCardinality(String),
    date Date DEFAULT toDate(timestamp)
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(timestamp)
ORDER BY (symbol, timestamp)
TTL timestamp + INTERVAL 90 DAY;

-- Materialized view for aggregations
CREATE MATERIALIZED VIEW ohlcv_1min
ENGINE = SummingMergeTree()
PARTITION BY toYYYYMM(bucket)
ORDER BY (symbol, bucket)
AS SELECT
    toStartOfMinute(timestamp) AS bucket,
    symbol,
    argMin(price, timestamp) AS open,
    max(price) AS high,
    min(price) AS low,
    argMax(price, timestamp) AS close,
    sum(size) AS volume
FROM ticks
GROUP BY bucket, symbol;

Queries:

sql
-- Recent ticks
SELECT * FROM ticks
WHERE symbol = 'AAPL'
  AND timestamp > now() - INTERVAL 1 HOUR
ORDER BY timestamp DESC
LIMIT 1000;
-- 7ms

-- VWAP per 5-minute bucket
SELECT
    toStartOfInterval(timestamp, INTERVAL 5 MINUTE) AS bucket,
    symbol,
    sum(price * size) / sum(size) AS vwap,
    sum(size) AS volume
FROM ticks
WHERE symbol = 'AAPL'
  AND timestamp > now() - INTERVAL 1 DAY
GROUP BY bucket, symbol
ORDER BY bucket DESC;
-- 32ms

-- Percentiles (sketches)
SELECT
    symbol,
    quantile(0.5)(price) AS median_price,
    quantile(0.95)(price) AS p95_price,
    quantile(0.99)(price) AS p99_price
FROM ticks
WHERE timestamp > now() - INTERVAL 1 HOUR
GROUP BY symbol;
-- 45ms

-- Complex analytical query
SELECT
    symbol,
    bucket,
    vwap,
    stddevPop(price) OVER (PARTITION BY symbol ORDER BY bucket ROWS BETWEEN 20 PRECEDING AND CURRENT ROW) AS volatility,
    avg(volume) OVER (PARTITION BY symbol ORDER BY bucket ROWS BETWEEN 20 PRECEDING AND CURRENT ROW) AS avg_volume
FROM (
    SELECT
        toStartOfMinute(timestamp) AS bucket,
        symbol,
        sum(price * size) / sum(size) AS vwap,
        sum(size) AS volume,
        avg(price) AS price
    FROM ticks
    WHERE timestamp > now() - INTERVAL 1 DAY
    GROUP BY bucket, symbol
)
ORDER BY symbol, bucket;
-- 125ms (complex window functions)

Benchmark Results#

All databases were tested on the same hardware (32 cores, 128GB RAM, NVMe SSD):

Write Performance (1M rows)#

plaintext
Database      Batch Insert    Single-Row Rate    Compression
────────────────────────────────────────────────────────────────
TimescaleDB   45s             N/A                5.2x
InfluxDB      12s             850k/sec           3.8x
QuestDB       3.5s            4M/sec             2.1x
ClickHouse    6s              N/A                7.5x

Query Performance#

plaintext
Query Type                TimescaleDB  InfluxDB  QuestDB  ClickHouse
────────────────────────────────────────────────────────────────────────
Point query (1 symbol)    12ms         8ms       3ms      7ms
Range scan (1 hour)       45ms         35ms      18ms     28ms
Aggregation (1 day)       180ms        62ms      42ms     85ms
Multi-symbol scan         340ms        120ms     75ms     95ms
Complex analytics         850ms        N/A       180ms    125ms
ASOF join                 N/A          N/A       25ms     65ms

Storage (100M ticks, ~10GB raw)#

plaintext
Database      Compressed    Ratio    Query Impact
───────────────────────────────────────────────────────────
TimescaleDB   1.9GB         5.2x     +15% latency
InfluxDB      2.6GB         3.8x     +8% latency
QuestDB       4.7GB         2.1x     0% (in-memory)
ClickHouse    1.3GB         7.5x     +5% latency

Production Deployment#

TimescaleDB Configuration#

ini
# postgresql.conf additions
shared_preload_libraries = 'timescaledb'
max_connections = 200
shared_buffers = 16GB
effective_cache_size = 48GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 500
random_page_cost = 1.1  # SSD
effective_io_concurrency = 200
work_mem = 64MB
min_wal_size = 4GB
max_wal_size = 16GB
max_worker_processes = 16
max_parallel_workers_per_gather = 4
max_parallel_workers = 16
timescaledb.max_background_workers = 8

QuestDB Configuration#

properties
# server.conf
cairo.max.uncommitted.rows=500000
cairo.commit.lag=10000
line.tcp.enabled=true
line.tcp.net.bind.to=0.0.0.0:9009
line.tcp.io.worker.count=8
line.tcp.io.worker.affinity=1-8
shared.worker.count=8
http.worker.count=4
query.timeout.sec=300

ClickHouse Configuration#

xml
<!-- config.xml -->
<yandex>
    <max_concurrent_queries>100</max_concurrent_queries>
    <max_memory_usage>64000000000</max_memory_usage>
    <merge_tree>
        <max_parts_in_total>10000</max_parts_in_total>
        <parts_to_throw_insert>3000</parts_to_throw_insert>
    </merge_tree>
    <mark_cache_size>10737418240</mark_cache_size>
    <max_table_size_to_drop>0</max_table_size_to_drop>
</yandex>

Production Results (2022-2024)#

Our Choice: QuestDB for Hot Data, ClickHouse for Analytics#

plaintext
Tier                Database      Use Case                      Retention
─────────────────────────────────────────────────────────────────────────────
Hot (real-time)     QuestDB       Tick data, live trading       7 days
Warm (recent)       ClickHouse    Backtesting, reporting        90 days
Cold (archive)      S3/Parquet    Compliance, research          7 years

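The tiering above amounts to a small query router: dispatch by how far back a query's start time reaches. A minimal sketch, with backend names and thresholds mirroring the retention table (the function and names are illustrative, not our production code):

```python
from datetime import datetime, timedelta, timezone

# Horizons mirror the retention table above.
TIERS = [
    (timedelta(days=7), "questdb"),      # hot: tick data, live trading
    (timedelta(days=90), "clickhouse"),  # warm: backtesting, reporting
]

def tier_for(start, now=None):
    """Pick the backend that still holds data as old as `start`."""
    age = (now or datetime.now(timezone.utc)) - start
    for horizon, backend in TIERS:
        if age <= horizon:
            return backend
    return "s3_parquet"                  # cold: compliance, research
```

A real router also has to split queries that straddle a tier boundary and merge the results; that is where most of the complexity lives.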
Results:

  • Latency: P99 query latency 25ms (was 180ms with TimescaleDB)
  • Throughput: 8M inserts/sec (was 250k with InfluxDB)
  • Storage costs: $450/month (was $2,400 with uncompressed Postgres)
  • Query complexity: ClickHouse handles complex analytics that crashed InfluxDB

Lessons Learned#

  1. QuestDB for speed: Fastest writes and simple queries (SIMD, C++)
  2. ClickHouse for analytics: Best for complex aggregations and window functions
  3. TimescaleDB for SQL: Familiar interface, good PostgreSQL integration
  4. InfluxDB for simplicity: Easy setup, good for metrics monitoring
  5. Compression critical: 5-7x reduction saves massive storage costs
  6. Hot/warm/cold tiers: Don't keep everything in fast DB
  7. Batch writes: 100x faster than single inserts
  8. Continuous aggregates: Pre-compute common queries (OHLCV)
  9. Partition by time: Daily partitions enable fast deletes/TTL
  10. Monitor cardinality: High-cardinality tags kill performance

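Lesson 7 (batch writes) can be sketched as a small buffer that hands rows to a flush callback in bulk instead of issuing one insert per tick. `flush_fn` would wrap whichever client write call you use; this is an illustrative sketch with no locking or time-based flush, not a production implementation:

```python
class WriteBuffer:
    """Accumulate rows and flush them to a callback in batches."""

    def __init__(self, flush_fn, max_rows=10_000):
        self._flush_fn = flush_fn   # e.g. wraps write_api.write(...)
        self._max_rows = max_rows
        self._rows = []

    def add(self, row):
        self._rows.append(row)
        if len(self._rows) >= self._max_rows:
            self.flush()

    def flush(self):
        # Hand off the whole batch at once, then start a fresh one
        if self._rows:
            self._flush_fn(self._rows)
            self._rows = []
```

A production version would also flush on a timer so a quiet symbol's ticks don't sit in the buffer indefinitely.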
Recommendations#

Choose QuestDB if:

  • Speed is critical (HFT, low-latency trading)
  • Simple time-series queries (recent data, ASOF joins)
  • High write throughput needed
  • Real-time analytics required

Choose ClickHouse if:

  • Complex analytics and window functions
  • Multi-table joins common
  • Extremely high compression needed
  • SQL familiarity important

Choose TimescaleDB if:

  • Already using PostgreSQL
  • Need full SQL features (CTEs, complex joins)
  • Moderate scale (< 1M rows/sec writes)
  • Want managed service (Timescale Cloud)

Choose InfluxDB if:

  • Metrics/monitoring (not trading)
  • Simple setup preferred
  • Flux query language acceptable
  • InfluxDB ecosystem tools useful

Further Reading#

  • TimescaleDB Documentation
  • QuestDB Time-Series Benchmark
  • ClickHouse for Time-Series
  • Time-Series Database Requirements

NordVarg Team

Technical Writer

The NordVarg Team are software engineers at NordVarg specializing in high-performance financial systems and type-safe programming.
