Rust Unsafe: When and How to Use It Safely in Financial Systems

November 24, 2025 • NordVarg Team • 9 min read

Systems Programming: Rust, unsafe, FFI, performance, lock-free, SIMD, memory-safety, financial-systems

TL;DR – Unsafe Rust is necessary for FFI, performance optimization, and low-level control. This guide shows when to use it, how to minimize risk, and how to test for undefined behavior.

1. When Unsafe is Necessary

Unsafe Rust is required for:

  • FFI – Calling C/C++ libraries
  • Performance – Zero-copy operations, SIMD intrinsics
  • Low-level control – Custom allocators, memory layout
  • Lock-free structures – Atomic operations beyond safe API
  • Hardware access – Direct memory-mapped I/O
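
None of these capabilities disables the borrow checker elsewhere; an `unsafe` block only unlocks a handful of extra operations, raw-pointer dereference being the most common. A minimal sketch (the function and its name are illustrative, not from a real codebase):

```rust
// Splitting off the first byte of a slice via raw pointers.
// Everything outside the unsafe block is still fully checked.
fn split_first_byte(slice: &[u8]) -> Option<(u8, &[u8])> {
    if slice.is_empty() {
        return None;
    }
    let ptr = slice.as_ptr();
    // SAFETY: the slice is non-empty, so `ptr` points to one
    // initialized byte and the remaining len - 1 bytes are in bounds.
    unsafe {
        Some((*ptr, std::slice::from_raw_parts(ptr.add(1), slice.len() - 1)))
    }
}
```

The safe API (`<[T]>::split_first`) does the same job; the point is that the unsafe surface stays one small, auditable block.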

2. FFI: Calling C++ Pricing Libraries

Interop with existing C++ code:

```rust
// C++ header (pricing.hpp)
// extern "C" {
//     double black_scholes_call(double S, double K, double r, double sigma, double T);
//     void* create_pricing_engine();
//     void destroy_pricing_engine(void* engine);
//     double price_option(void* engine, const char* type, double* params, int n);
// }

#[repr(C)]
struct PricingParams {
    spot: f64,
    strike: f64,
    rate: f64,
    volatility: f64,
    time: f64,
}

#[link(name = "pricing")] // assumes the C++ code is built as libpricing
extern "C" {
    fn black_scholes_call(s: f64, k: f64, r: f64, sigma: f64, t: f64) -> f64;
    fn create_pricing_engine() -> *mut std::ffi::c_void;
    fn destroy_pricing_engine(engine: *mut std::ffi::c_void);
    fn price_option(
        engine: *mut std::ffi::c_void,
        option_type: *const std::os::raw::c_char,
        params: *const f64,
        n: std::os::raw::c_int,
    ) -> f64;
}

// Safe wrapper
pub struct PricingEngine {
    ptr: *mut std::ffi::c_void,
}

impl PricingEngine {
    pub fn new() -> Self {
        let ptr = unsafe { create_pricing_engine() };
        assert!(!ptr.is_null(), "Failed to create pricing engine");
        PricingEngine { ptr }
    }

    pub fn price_call(&self, params: &PricingParams) -> f64 {
        unsafe {
            black_scholes_call(
                params.spot,
                params.strike,
                params.rate,
                params.volatility,
                params.time,
            )
        }
    }

    pub fn price_option(&self, option_type: &str, params: &[f64]) -> f64 {
        // CString::new fails on interior NUL bytes; don't silently ignore that
        let c_str = std::ffi::CString::new(option_type)
            .expect("option type must not contain NUL bytes");
        unsafe {
            price_option(
                self.ptr,
                c_str.as_ptr(),
                params.as_ptr(),
                params.len() as std::os::raw::c_int,
            )
        }
    }
}

impl Drop for PricingEngine {
    fn drop(&mut self) {
        unsafe {
            destroy_pricing_engine(self.ptr);
        }
    }
}

// Usage: safe API
let engine = PricingEngine::new();
let params = PricingParams {
    spot: 100.0,
    strike: 100.0,
    rate: 0.05,
    volatility: 0.2,
    time: 1.0,
};
let price = engine.price_call(&params);
```

Safety: encapsulate the unsafe calls behind a safe wrapper, validate every pointer returned by the foreign code, and tie the engine's lifetime to `Drop` so the C++ resource is always released.

3. Zero-Copy Ring Buffer

Lock-free single-producer single-consumer queue:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::alloc::{alloc, dealloc, Layout};
use std::ptr;

pub struct RingBuffer<T> {
    buffer: *mut T,
    capacity: usize,
    head: AtomicUsize,  // Consumer reads from head
    tail: AtomicUsize,  // Producer writes to tail
}

// SAFETY: sound only under single-producer / single-consumer usage:
// exactly one thread calls push() and exactly one thread calls pop().
unsafe impl<T: Send> Send for RingBuffer<T> {}
unsafe impl<T: Send> Sync for RingBuffer<T> {}

impl<T> RingBuffer<T> {
    pub fn new(capacity: usize) -> Self {
        assert!(capacity.is_power_of_two(), "Capacity must be power of 2");
        // alloc() with a zero-size layout is undefined behavior
        assert!(std::mem::size_of::<T>() > 0, "Zero-sized types not supported");

        let layout = Layout::array::<T>(capacity).unwrap();
        let buffer = unsafe { alloc(layout) as *mut T };
        assert!(!buffer.is_null(), "Allocation failed");

        RingBuffer {
            buffer,
            capacity,
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    pub fn push(&self, value: T) -> Result<(), T> {
        let tail = self.tail.load(Ordering::Relaxed);
        let next_tail = (tail + 1) & (self.capacity - 1);

        // Check if full
        if next_tail == self.head.load(Ordering::Acquire) {
            return Err(value);
        }

        unsafe {
            ptr::write(self.buffer.add(tail), value);
        }

        self.tail.store(next_tail, Ordering::Release);
        Ok(())
    }

    pub fn pop(&self) -> Option<T> {
        let head = self.head.load(Ordering::Relaxed);

        // Check if empty
        if head == self.tail.load(Ordering::Acquire) {
            return None;
        }

        let value = unsafe { ptr::read(self.buffer.add(head)) };

        let next_head = (head + 1) & (self.capacity - 1);
        self.head.store(next_head, Ordering::Release);

        Some(value)
    }
}

impl<T> Drop for RingBuffer<T> {
    fn drop(&mut self) {
        // Drop remaining elements
        while self.pop().is_some() {}

        let layout = Layout::array::<T>(self.capacity).unwrap();
        unsafe {
            dealloc(self.buffer as *mut u8, layout);
        }
    }
}

// Usage
let queue = RingBuffer::<Order>::new(1024);
queue.push(order).unwrap();
let order = queue.pop();
```

Performance: < 20 ns per operation, zero allocations.
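
The wrap-around arithmetic is why the constructor insists on a power-of-two capacity: masking with `capacity - 1` replaces an integer modulo, keeping the hot path branch- and division-free. Isolated for illustration:

```rust
// Index advance for a power-of-two ring: bitwise AND instead of `%`.
fn next_index(i: usize, capacity: usize) -> usize {
    debug_assert!(capacity.is_power_of_two());
    (i + 1) & (capacity - 1)
}
```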

4. Custom Bump Allocator

Arena allocator for hot path:

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::cell::Cell;
use std::ptr::{self, NonNull};

pub struct BumpAllocator {
    buffer: NonNull<u8>,
    capacity: usize,
    offset: Cell<usize>,
}

impl BumpAllocator {
    pub fn new(capacity: usize) -> Self {
        let layout = Layout::from_size_align(capacity, 64).unwrap();
        let buffer = unsafe {
            let ptr = alloc(layout);
            NonNull::new(ptr).expect("Allocation failed")
        };

        BumpAllocator {
            buffer,
            capacity,
            offset: Cell::new(0),
        }
    }

    pub fn allocate<T>(&self, value: T) -> Option<&mut T> {
        let size = std::mem::size_of::<T>();
        let align = std::mem::align_of::<T>();

        // Round offset up to the next multiple of `align`
        let offset = self.offset.get();
        let aligned_offset = (offset + align - 1) & !(align - 1);

        if aligned_offset + size > self.capacity {
            return None;
        }

        unsafe {
            let ptr = self.buffer.as_ptr().add(aligned_offset) as *mut T;
            ptr::write(ptr, value);
            self.offset.set(aligned_offset + size);
            Some(&mut *ptr)
        }
    }

    /// Invalidates everything handed out by `allocate`; callers must not
    /// hold references across a reset. Note that `Drop` for allocated
    /// values is never run.
    pub fn reset(&self) {
        self.offset.set(0);
    }
}

impl Drop for BumpAllocator {
    fn drop(&mut self) {
        let layout = Layout::from_size_align(self.capacity, 64).unwrap();
        unsafe {
            dealloc(self.buffer.as_ptr(), layout);
        }
    }
}

// Usage: per-request arena
let arena = BumpAllocator::new(4096);
let order = arena.allocate(Order::new()).unwrap();
// ... process order ...
arena.reset();  // Reuse for next request
```

Latency: < 10 ns allocation, deterministic.
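
The one non-obvious line in `allocate` is the align-up step: it rounds the current offset to the next multiple of the type's alignment, which works because Rust alignments are always powers of two:

```rust
// Round `offset` up to the next multiple of `align` (a power of two).
fn align_up(offset: usize, align: usize) -> usize {
    debug_assert!(align.is_power_of_two());
    (offset + align - 1) & !(align - 1)
}
```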

5. SIMD Intrinsics for Pricing

Vectorized Black-Scholes:

```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn black_scholes_simd_avx2(
    spot: __m256d,
    strike: __m256d,
    rate: __m256d,
    vol: __m256d,
    time: __m256d,
) -> __m256d {
    // Constants
    let half = _mm256_set1_pd(0.5);

    // Calculate d1
    let vol_sqrt_t = _mm256_mul_pd(vol, _mm256_sqrt_pd(time));
    let vol_sq = _mm256_mul_pd(vol, vol);
    let vol_sq_half = _mm256_mul_pd(vol_sq, half);

    let _s_over_k = _mm256_div_pd(spot, strike);
    // Note: there is no _mm256_log_pd in std; use SLEEF or a polynomial approximation

    let drift = _mm256_mul_pd(
        _mm256_add_pd(rate, vol_sq_half),
        time
    );

    // Simplified - full implementation needs the log term and a normal CDF
    let _d1 = _mm256_div_pd(drift, vol_sqrt_t);

    // Call price = S * N(d1) - K * exp(-rT) * N(d2)
    // (Simplified for demonstration)
    spot
}

pub fn price_options_batch(
    spots: &[f64],
    strikes: &[f64],
    rates: &[f64],
    vols: &[f64],
    times: &[f64],
) -> Vec<f64> {
    assert_eq!(spots.len(), strikes.len());
    assert_eq!(spots.len() % 4, 0, "Length must be multiple of 4");

    let mut prices = vec![0.0; spots.len()];

    #[cfg(target_arch = "x86_64")]
    {
        // #[target_feature] requires a runtime check before calling
        assert!(is_x86_feature_detected!("avx2"), "AVX2 not available on this CPU");
        unsafe {
            for i in (0..spots.len()).step_by(4) {
                let s = _mm256_loadu_pd(spots.as_ptr().add(i));
                let k = _mm256_loadu_pd(strikes.as_ptr().add(i));
                let r = _mm256_loadu_pd(rates.as_ptr().add(i));
                let v = _mm256_loadu_pd(vols.as_ptr().add(i));
                let t = _mm256_loadu_pd(times.as_ptr().add(i));

                let price = black_scholes_simd_avx2(s, k, r, v, t);
                _mm256_storeu_pd(prices.as_mut_ptr().add(i), price);
            }
        }
    }
    // Non-x86_64 targets need a scalar fallback here

    prices
}
```

Speedup: up to 4x throughput with AVX2 (four f64 lanes per instruction).
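
The AVX2 kernel above is deliberately incomplete: std provides no vector `log` or normal CDF, so a real implementation pulls those from SLEEF or hand-rolled polynomials. A scalar reference version is worth keeping regardless, both as the fallback for non-AVX2 hardware and as a test oracle for the SIMD path. A sketch using the Abramowitz-Stegun polynomial approximation of the normal CDF (accurate to roughly 1e-7):

```rust
// Normal CDF via Abramowitz & Stegun 26.2.17.
fn norm_cdf(x: f64) -> f64 {
    if x < 0.0 {
        return 1.0 - norm_cdf(-x);
    }
    let k = 1.0 / (1.0 + 0.2316419 * x);
    let poly = k * (0.319381530
        + k * (-0.356563782
        + k * (1.781477937
        + k * (-1.821255978 + k * 1.330274429))));
    let pdf = (-0.5 * x * x).exp() / (2.0 * std::f64::consts::PI).sqrt();
    1.0 - pdf * poly
}

// Scalar Black-Scholes call price: the oracle for the SIMD path.
fn black_scholes_call_scalar(s: f64, k: f64, r: f64, sigma: f64, t: f64) -> f64 {
    let sqrt_t = t.sqrt();
    let d1 = ((s / k).ln() + (r + 0.5 * sigma * sigma) * t) / (sigma * sqrt_t);
    let d2 = d1 - sigma * sqrt_t;
    s * norm_cdf(d1) - k * (-r * t).exp() * norm_cdf(d2)
}
```

With S = K = 100, r = 5%, σ = 20%, T = 1y this yields the textbook value of about 10.45, a convenient sanity check for any vectorized rewrite.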

6. Memory Layout Control

Ensure cache-friendly data structures:

```rust
#[repr(C)]
#[repr(align(64))]  // Cache line alignment
pub struct Order {
    pub order_id: u64,      // offset 0
    pub symbol: [u8; 8],    // offset 8
    pub price: u64,         // offset 16
    pub quantity: u32,      // offset 24
    pub side: u8,           // offset 28
    _padding: [u8; 35],     // 29 payload bytes + 35 = 64
}

// Verify size at compile time
const _: () = assert!(std::mem::size_of::<Order>() == 64);

#[repr(C, packed)]
pub struct MarketDataMsg {
    pub msg_type: u8,
    pub symbol: [u8; 8],
    pub price: u64,
    pub quantity: u32,
    pub timestamp: u64,
}

// Accessing packed struct fields
fn read_price(msg: &MarketDataMsg) -> u64 {
    // Taking a plain `&msg.price` reference to a packed field is
    // rejected by the compiler (E0793) because it may be unaligned;
    // go through a raw pointer and read_unaligned instead. (Copying
    // the field out by value, `let p = msg.price;`, is also fine and
    // fully safe.)
    unsafe { std::ptr::read_unaligned(std::ptr::addr_of!(msg.price)) }
}
```

Benefit: Predictable memory layout, cache optimization.
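
The same compile-time assertion that guards `Order` should also pin down the packed wire format: with `#[repr(C, packed)]` no padding is inserted, so the size is the plain sum of the field sizes. Restated here so the snippet stands alone:

```rust
#[repr(C, packed)]
pub struct MarketDataMsg {
    pub msg_type: u8,       // 1 byte
    pub symbol: [u8; 8],    // 8 bytes
    pub price: u64,         // 8 bytes
    pub quantity: u32,      // 4 bytes
    pub timestamp: u64,     // 8 bytes
}

// 1 + 8 + 8 + 4 + 8 = 29 bytes; a mismatch breaks the build, not production.
const _: () = assert!(std::mem::size_of::<MarketDataMsg>() == 29);
```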

7. Testing Unsafe Code

Miri: Undefined Behavior Detection

```bash
# Install Miri
rustup +nightly component add miri

# Run tests with Miri
cargo +nightly miri test

# Run specific test
cargo +nightly miri test test_ring_buffer
```

Property-Based Testing

```rust
use proptest::prelude::*;

proptest! {
    #[test]
    fn ring_buffer_push_pop(values in prop::collection::vec(0u64..1000, 0..100)) {
        let queue = RingBuffer::new(128);

        for &v in &values {
            let _ = queue.push(v);
        }

        let mut popped = Vec::new();
        while let Some(v) = queue.pop() {
            popped.push(v);
        }

        // Verify FIFO order
        assert_eq!(popped, values.iter().take(popped.len()).copied().collect::<Vec<_>>());
    }
}
```

AddressSanitizer

```bash
# Build with AddressSanitizer
RUSTFLAGS="-Z sanitizer=address" cargo +nightly build --target x86_64-unknown-linux-gnu

# Run tests
RUSTFLAGS="-Z sanitizer=address" cargo +nightly test --target x86_64-unknown-linux-gnu
```

8. Best Practices

Minimize Unsafe Surface

```rust
// Bad: Large unsafe block
unsafe {
    // 100 lines of code
}

// Good: Small, focused unsafe blocks
let value = unsafe { ptr::read(ptr) };
process_value(value);  // Safe code
```

Document Safety Invariants

```rust
/// # Safety
///
/// - `ptr` must be valid for reads
/// - `ptr` must be properly aligned
/// - `ptr` must point to an initialized `T`
pub unsafe fn read_value<T>(ptr: *const T) -> T {
    std::ptr::read(ptr)
}
```

Use Safe Abstractions

```rust
// Encapsulate unsafe in safe API
pub struct SafeWrapper {
    inner: UnsafeType,
}

impl SafeWrapper {
    pub fn new() -> Self {
        // Unsafe initialization
        unsafe { /* ... */ }
    }

    pub fn safe_method(&self) -> Result<T, Error> {
        // Safe interface
    }
}
```

9. Production Checklist

  • Minimize unsafe code surface area
  • Document all safety invariants
  • Test with Miri for undefined behavior
  • Run AddressSanitizer in CI
  • Use property-based testing for unsafe code
  • Encapsulate unsafe in safe wrappers
  • Verify memory layout with assert! at compile time
  • Profile to ensure unsafe code provides benefit
  • Review unsafe code in code reviews
  • Monitor for memory leaks in production

10. Real-World Example: Lock-Free Order Book

```rust
use std::sync::atomic::{AtomicPtr, Ordering};
use std::ptr;

struct Node {
    price: u64,
    quantity: u32,
    next: AtomicPtr<Node>,
}

pub struct LockFreeOrderBook {
    bids: AtomicPtr<Node>,
    asks: AtomicPtr<Node>,
}

impl LockFreeOrderBook {
    pub fn new() -> Self {
        LockFreeOrderBook {
            bids: AtomicPtr::new(ptr::null_mut()),
            asks: AtomicPtr::new(ptr::null_mut()),
        }
    }

    // Note: this sketch pushes onto the head of the list, so best_bid()
    // returns the most recently inserted level; a production book keeps
    // the list ordered by price.
    pub fn insert_bid(&self, price: u64, quantity: u32) {
        let new_node = Box::into_raw(Box::new(Node {
            price,
            quantity,
            next: AtomicPtr::new(ptr::null_mut()),
        }));

        loop {
            let head = self.bids.load(Ordering::Acquire);
            unsafe {
                (*new_node).next.store(head, Ordering::Relaxed);
            }

            if self.bids.compare_exchange(
                head,
                new_node,
                Ordering::Release,
                Ordering::Acquire,
            ).is_ok() {
                break;
            }
        }
    }

    pub fn best_bid(&self) -> Option<(u64, u32)> {
        let head = self.bids.load(Ordering::Acquire);
        if head.is_null() {
            None
        } else {
            unsafe {
                Some(((*head).price, (*head).quantity))
            }
        }
    }
}

impl Drop for LockFreeOrderBook {
    fn drop(&mut self) {
        // Walk a list and free every node. Safe here because Drop has
        // exclusive access (&mut self), so no other thread is reading.
        unsafe fn free_list(mut current: *mut Node) {
            while !current.is_null() {
                let next = (*current).next.load(Ordering::Relaxed);
                drop(Box::from_raw(current));
                current = next;
            }
        }

        unsafe {
            free_list(self.bids.load(Ordering::Relaxed));
            free_list(self.asks.load(Ordering::Relaxed));
        }
    }
}
```

Performance: 10M inserts/sec, lock-free concurrency.


Unsafe Rust is a powerful tool for performance-critical financial systems. Use it judiciously, test thoroughly with Miri and sanitizers, and always encapsulate unsafe code in safe abstractions.
