
January 27, 2025 • NordVarg Team

Chaos Engineering for Financial Systems

DevOps · chaos-engineering · reliability · sre · resilience · testing · trading-systems · fault-tolerance
9 min read

After three years of running chaos engineering on a trading platform that handles $2.1B/day at 99.97% uptime, I've learned that deliberately breaking systems is the only way to build real resilience: you find failures before your customers do. This article covers how we practice chaos engineering in production.

Why Chaos Engineering

Traditional testing:

  • Tests known failure modes
  • Synthetic environments
  • Planned maintenance windows
  • Reactive incident response

Chaos engineering:

  • Discovers unknown failures
  • Tests production systems
  • Continuous validation
  • Proactive resilience building
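The distinction is easiest to see in code. A traditional test asserts a known outcome; a chaos experiment states a falsifiable steady-state hypothesis and checks whether it survives while a fault is active. A minimal sketch of that idea (the metric name and thresholds here are illustrative, not values from our platform):

```python
from dataclasses import dataclass


@dataclass
class SteadyStateHypothesis:
    """A falsifiable claim about the system that must hold during a fault."""
    description: str
    metric: str       # metric to observe, e.g. "order_latency_p99"
    threshold: float  # upper bound the metric must stay under

    def holds(self, observed: float) -> bool:
        return observed <= self.threshold


# "With 100ms of injected latency, p99 order latency stays under 200ms."
hypothesis = SteadyStateHypothesis(
    description="p99 order latency under 200ms during injected latency",
    metric="order_latency_p99",
    threshold=200.0,
)

print(hypothesis.holds(145.0))  # True: hypothesis survives the fault
print(hypothesis.holds(312.0))  # False: hypothesis falsified — a finding, not a test bug
```

A falsified hypothesis is the valuable outcome: it is an unknown failure mode discovered on your schedule rather than a customer's.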

Our chaos metrics (2024):

  • Trading volume: $2.1B/day
  • Uptime: 99.97% (2.6 hours downtime/year)
  • Chaos experiments: 847 total
  • Issues found: 127 before production impact
  • MTTR: 4.2 minutes average
  • Game days: Monthly

Failure Injection Framework

A systematic approach to injecting failures safely.

python
# chaos_framework.py
import random
import time
import logging
from typing import List, Dict, Callable, Optional
from dataclasses import dataclass
from enum import Enum
import psutil
import requests

class FailureType(Enum):
    """Types of failures to inject"""
    NETWORK_LATENCY = "network_latency"
    NETWORK_PARTITION = "network_partition"
    CPU_STRESS = "cpu_stress"
    MEMORY_STRESS = "memory_stress"
    DISK_STRESS = "disk_stress"
    SERVICE_CRASH = "service_crash"
    DATABASE_SLOWDOWN = "database_slowdown"
    DEPENDENCY_FAILURE = "dependency_failure"

@dataclass
class ChaosExperiment:
    """Definition of chaos experiment"""
    name: str
    description: str
    failure_type: FailureType
    target_services: List[str]
    blast_radius: str  # "single", "subset", "all"
    duration_seconds: int
    rollback_strategy: str
    success_criteria: List[str]

@dataclass
class ExperimentResult:
    """Result of chaos experiment"""
    experiment: ChaosExperiment
    start_time: float
    end_time: float
    success: bool
    observations: List[str]
    metrics: Dict[str, float]
    incidents_triggered: List[str]

class ChaosEngine:
    """
    Chaos engineering execution engine

    Safely injects failures and monitors system behavior
    """

    def __init__(
        self,
        services: Dict[str, str],  # service_name -> endpoint
        monitoring_endpoint: str
    ):
        self.services = services
        self.monitoring_endpoint = monitoring_endpoint
        self.logger = logging.getLogger(__name__)
        self.active_experiments = []

    def run_experiment(
        self,
        experiment: ChaosExperiment,
        dry_run: bool = False
    ) -> ExperimentResult:
        """
        Execute chaos experiment

        Safety checks:
        1. Verify steady state before experiment
        2. Check blast radius limits
        3. Monitor during experiment
        4. Rollback on critical failure
        5. Validate system recovery
        """
        self.logger.info(f"Starting experiment: {experiment.name}")

        # 1. Establish steady state baseline
        baseline_metrics = self._capture_metrics()
        if not self._verify_steady_state(baseline_metrics):
            self.logger.error("System not in steady state, aborting")
            return self._abort_experiment(experiment, "not_steady_state")

        # 2. Safety checks
        if not self._safety_checks(experiment):
            self.logger.error("Safety checks failed, aborting")
            return self._abort_experiment(experiment, "safety_check_failed")

        if dry_run:
            self.logger.info("Dry run mode - not injecting failures")
            return ExperimentResult(
                experiment=experiment,
                start_time=time.time(),
                end_time=time.time(),
                success=True,
                observations=["Dry run completed"],
                metrics={},
                incidents_triggered=[]
            )

        # 3. Inject failure
        start_time = time.time()
        observations = []
        incidents = []

        try:
            self.logger.info(f"Injecting failure: {experiment.failure_type.value}")
            failure_handle = self._inject_failure(experiment)
            self.active_experiments.append((experiment, failure_handle))

            # 4. Monitor system behavior
            monitoring_interval = 1.0  # seconds
            elapsed = 0

            while elapsed < experiment.duration_seconds:
                time.sleep(monitoring_interval)
                elapsed += monitoring_interval

                # Capture metrics
                current_metrics = self._capture_metrics()

                # Check for violations
                violations = self._check_violations(
                    baseline_metrics,
                    current_metrics,
                    experiment
                )

                if violations:
                    observations.extend(violations)

                    # Check if critical (immediate rollback)
                    if self._is_critical_violation(violations):
                        self.logger.error("Critical violation detected, rolling back")
                        break

            # 5. Rollback
            self.logger.info("Rolling back failure injection")
            self._rollback_failure(experiment, failure_handle)
            self.active_experiments.remove((experiment, failure_handle))

            # 6. Verify recovery
            time.sleep(5)  # Allow system to stabilize
            recovery_metrics = self._capture_metrics()

            if self._verify_recovery(baseline_metrics, recovery_metrics):
                self.logger.info("System recovered successfully")
                success = True
            else:
                self.logger.warning("System did not fully recover")
                success = False
                observations.append("Incomplete recovery detected")

            end_time = time.time()

            return ExperimentResult(
                experiment=experiment,
                start_time=start_time,
                end_time=end_time,
                success=success,
                observations=observations,
                metrics=recovery_metrics,
                incidents_triggered=incidents
            )

        except Exception as e:
            self.logger.error(f"Experiment failed with exception: {e}")
            # Emergency rollback
            self._emergency_rollback()

            return ExperimentResult(
                experiment=experiment,
                start_time=start_time,
                end_time=time.time(),
                success=False,
                observations=[f"Exception: {str(e)}"],
                metrics={},
                incidents_triggered=incidents
            )

    def _capture_metrics(self) -> Dict[str, float]:
        """Capture current system metrics"""
        metrics = {}

        try:
            # Query monitoring system
            response = requests.get(
                f"{self.monitoring_endpoint}/api/v1/query",
                params={
                    'query': 'up{job="trading-platform"}'
                },
                timeout=5
            )

            if response.status_code == 200:
                data = response.json()
                # Parse Prometheus-style metrics
                metrics['services_up'] = len(data.get('data', {}).get('result', []))

            # System metrics
            metrics['cpu_percent'] = psutil.cpu_percent(interval=1)
            metrics['memory_percent'] = psutil.virtual_memory().percent
            metrics['disk_io_read'] = psutil.disk_io_counters().read_bytes
            metrics['disk_io_write'] = psutil.disk_io_counters().write_bytes

            # Trading-specific metrics (would come from real monitoring)
            metrics['order_latency_p99'] = 45.0  # milliseconds
            metrics['orders_per_second'] = 1200.0
            metrics['fill_rate'] = 0.98

        except Exception as e:
            self.logger.error(f"Failed to capture metrics: {e}")

        return metrics

    def _verify_steady_state(self, metrics: Dict[str, float]) -> bool:
        """Verify system is in steady state"""
        # Check critical metrics are healthy
        if metrics.get('cpu_percent', 100) > 80:
            return False
        if metrics.get('memory_percent', 100) > 85:
            return False
        if metrics.get('order_latency_p99', 1000) > 100:  # ms
            return False

        return True

    def _safety_checks(self, experiment: ChaosExperiment) -> bool:
        """Perform safety checks before experiment"""
        # Check blast radius
        if experiment.blast_radius == "all":
            self.logger.warning("Experiment targets all services")
            # In production: require explicit approval
            return False

        # Check if during trading hours
        current_hour = time.localtime().tm_hour
        if 9 <= current_hour <= 16:  # Market hours
            self.logger.warning("Experiment during market hours")
            # Only allow low-risk experiments during market hours
            if experiment.failure_type in [
                FailureType.SERVICE_CRASH,
                FailureType.NETWORK_PARTITION
            ]:
                return False

        return True

    def _inject_failure(self, experiment: ChaosExperiment) -> Dict:
        """Inject specified failure type"""
        if experiment.failure_type == FailureType.NETWORK_LATENCY:
            return self._inject_network_latency(
                experiment.target_services,
                latency_ms=100
            )
        elif experiment.failure_type == FailureType.CPU_STRESS:
            return self._inject_cpu_stress(
                experiment.target_services,
                cpu_percent=50
            )
        elif experiment.failure_type == FailureType.SERVICE_CRASH:
            return self._inject_service_crash(
                experiment.target_services[0]
            )
        # Add more failure types...

        return {}

    def _inject_network_latency(
        self,
        services: List[str],
        latency_ms: int
    ) -> Dict:
        """Inject network latency using tc (traffic control)"""
        import subprocess

        handles = []

        for service in services:
            # Use Linux tc to add latency
            cmd = f"tc qdisc add dev eth0 root netem delay {latency_ms}ms"

            try:
                # In production: execute via automation tool
                self.logger.info(f"Adding {latency_ms}ms latency to {service}")
                # subprocess.run(cmd, shell=True, check=True)
                handles.append({'service': service, 'type': 'network_latency'})
            except Exception as e:
                self.logger.error(f"Failed to inject latency: {e}")

        return {'handles': handles}

    def _inject_cpu_stress(
        self,
        services: List[str],
        cpu_percent: int
    ) -> Dict:
        """Inject CPU stress"""
        import multiprocessing

        # Start CPU-intensive processes
        processes = []
        num_cores = multiprocessing.cpu_count()
        target_cores = int(num_cores * cpu_percent / 100)

        def cpu_intensive_task():
            while True:
                # Busy loop
                _ = sum(range(1000000))

        for _ in range(target_cores):
            p = multiprocessing.Process(target=cpu_intensive_task)
            p.start()
            processes.append(p)

        return {'processes': processes}

    def _inject_service_crash(self, service: str) -> Dict:
        """Crash a service"""
        # In production: use orchestration platform (k8s, etc.)
        self.logger.info(f"Crashing service: {service}")

        # Example: kill pod in Kubernetes
        # kubectl delete pod <service-pod>

        return {'service': service, 'action': 'crashed'}

    def _rollback_failure(self, experiment: ChaosExperiment, handle: Dict):
        """Rollback injected failure"""
        if experiment.failure_type == FailureType.NETWORK_LATENCY:
            # Remove tc latency
            for h in handle.get('handles', []):
                self.logger.info(f"Removing latency from {h['service']}")
                # tc qdisc del dev eth0 root

        elif experiment.failure_type == FailureType.CPU_STRESS:
            # Kill CPU stress processes
            for p in handle.get('processes', []):
                p.terminate()
                p.join(timeout=5)
                if p.is_alive():
                    p.kill()

        elif experiment.failure_type == FailureType.SERVICE_CRASH:
            # Restart service
            service = handle.get('service')
            self.logger.info(f"Restarting service: {service}")
            # Orchestration platform will auto-restart

    def _check_violations(
        self,
        baseline: Dict[str, float],
        current: Dict[str, float],
        experiment: ChaosExperiment
    ) -> List[str]:
        """Check for SLO violations"""
        violations = []

        # Check latency increase
        baseline_latency = baseline.get('order_latency_p99', 0)
        current_latency = current.get('order_latency_p99', 0)

        if current_latency > baseline_latency * 2:  # 2x degradation
            violations.append(
                f"Latency increased {current_latency/baseline_latency:.1f}x"
            )

        # Check throughput decrease
        baseline_throughput = baseline.get('orders_per_second', 0)
        current_throughput = current.get('orders_per_second', 0)

        if current_throughput < baseline_throughput * 0.5:  # 50% drop
            violations.append(
                f"Throughput dropped to {current_throughput/baseline_throughput:.0%}"
            )

        return violations

    def _is_critical_violation(self, violations: List[str]) -> bool:
        """Determine if violations are critical"""
        # Check for keywords indicating critical issues
        critical_keywords = ['dropped to 0', 'unavailable', 'crashed']

        for violation in violations:
            for keyword in critical_keywords:
                if keyword in violation.lower():
                    return True

        return False

    def _verify_recovery(
        self,
        baseline: Dict[str, float],
        current: Dict[str, float]
    ) -> bool:
        """Verify system recovered to baseline"""
        # Allow 10% deviation from baseline
        tolerance = 0.1

        for key in baseline:
            if key not in current:
                continue

            baseline_val = baseline[key]
            current_val = current[key]

            if baseline_val == 0:
                continue

            deviation = abs(current_val - baseline_val) / baseline_val

            if deviation > tolerance:
                self.logger.warning(
                    f"{key} not recovered: {current_val} vs {baseline_val}"
                )
                return False

        return True

    def _emergency_rollback(self):
        """Emergency rollback all active experiments"""
        self.logger.error("EMERGENCY ROLLBACK")

        for experiment, handle in self.active_experiments:
            try:
                self._rollback_failure(experiment, handle)
            except Exception as e:
                self.logger.error(f"Emergency rollback failed: {e}")

        self.active_experiments.clear()

    def _abort_experiment(
        self,
        experiment: ChaosExperiment,
        reason: str
    ) -> ExperimentResult:
        """Abort experiment before execution"""
        return ExperimentResult(
            experiment=experiment,
            start_time=time.time(),
            end_time=time.time(),
            success=False,
            observations=[f"Aborted: {reason}"],
            metrics={},
            incidents_triggered=[]
        )

# Example experiments
TRADING_CHAOS_EXPERIMENTS = [
    ChaosExperiment(
        name="Order Service Network Latency",
        description="Add 100ms latency to order service",
        failure_type=FailureType.NETWORK_LATENCY,
        target_services=["order-service"],
        blast_radius="single",
        duration_seconds=60,
        rollback_strategy="automatic",
        success_criteria=[
            "Order latency < 200ms p99",
            "No order drops",
            "Circuit breakers activate"
        ]
    ),
    ChaosExperiment(
        name="Risk Engine Crash",
        description="Crash risk engine to test fallback",
        failure_type=FailureType.SERVICE_CRASH,
        target_services=["risk-engine"],
        blast_radius="single",
        duration_seconds=30,
        rollback_strategy="automatic",
        success_criteria=[
            "Orders rejected gracefully",
            "No position updates",
            "Service auto-recovers < 30s"
        ]
    ),
    ChaosExperiment(
        name="Database Slowdown",
        description="Inject latency to database queries",
        failure_type=FailureType.DATABASE_SLOWDOWN,
        target_services=["postgres-primary"],
        blast_radius="single",
        duration_seconds=120,
        rollback_strategy="automatic",
        success_criteria=[
            "Read replica failover",
            "Cache hit rate increases",
            "Stale data acceptable"
        ]
    )
]

# Run experiments
def run_chaos_experiments():
    """Execute chaos experiments"""

    services = {
        'order-service': 'http://order-service:8080',
        'risk-engine': 'http://risk-engine:8080',
        'position-service': 'http://position-service:8080'
    }

    engine = ChaosEngine(
        services=services,
        monitoring_endpoint='http://prometheus:9090'
    )

    results = []

    for experiment in TRADING_CHAOS_EXPERIMENTS:
        print(f"\n{'='*60}")
        print(f"Experiment: {experiment.name}")
        print(f"{'='*60}")

        result = engine.run_experiment(experiment, dry_run=False)
        results.append(result)

        print(f"Success: {result.success}")
        print(f"Duration: {result.end_time - result.start_time:.1f}s")
        print(f"Observations: {len(result.observations)}")

        for obs in result.observations:
            print(f"  - {obs}")

        # Wait between experiments
        time.sleep(30)

    # Summary
    print(f"\n{'='*60}")
    print("SUMMARY")
    print(f"{'='*60}")
    print(f"Total experiments: {len(results)}")
    print(f"Successful: {sum(1 for r in results if r.success)}")
    print(f"Failed: {sum(1 for r in results if not r.success)}")

if __name__ == '__main__':
    run_chaos_experiments()
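One gap in the framework above: `ChaosExperiment.success_criteria` are free-text strings that the engine never actually evaluates. One way to make them machine-checkable (my own sketch, not part of the framework above; the metric names mirror those used by `_capture_metrics`) is to express each criterion as a metric threshold:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Criterion:
    """Machine-checkable version of a free-text success criterion."""
    metric: str       # key in the metrics dict, e.g. "order_latency_p99"
    operator: str     # "lt" (must stay below) or "gt" (must stay above)
    threshold: float

    def passed(self, metrics: Dict[str, float]) -> bool:
        value = metrics.get(self.metric)
        if value is None:
            return False  # a missing metric counts as a failure
        return value < self.threshold if self.operator == "lt" else value > self.threshold


def evaluate(criteria: List[Criterion], metrics: Dict[str, float]) -> Tuple[bool, List[Criterion]]:
    """Return (all_passed, criteria that failed)."""
    failures = [c for c in criteria if not c.passed(metrics)]
    return len(failures) == 0, failures


# "Order latency < 200ms p99" and a fill-rate floor, as thresholds:
criteria = [
    Criterion(metric="order_latency_p99", operator="lt", threshold=200.0),
    Criterion(metric="fill_rate", operator="gt", threshold=0.95),
]

ok, failed = evaluate(criteria, {"order_latency_p99": 145.0, "fill_rate": 0.98})
print(ok)  # True: both criteria hold
```

With criteria in this form, `run_experiment` could score the result automatically instead of relying on a human reading the observations list.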

Production Metrics

Our chaos engineering results (2024):

System Resilience

plaintext
Chaos Experiments Run:
- Total: 847 experiments
- Successful: 721 (85%)
- Failed safely: 94 (11%)
- Critical failures: 32 (4%)

Issues Discovered:
- Before production: 127
- In production (controlled): 18
- Critical bugs: 9
- Performance issues: 42
- Configuration problems: 56

MTTR Improvement:
- Before chaos: 18.4 minutes
- After chaos: 4.2 minutes
- Improvement: 77% faster recovery

Uptime Impact

plaintext
Platform Uptime:
- 2022 (pre-chaos): 99.89% (9.6 hrs downtime)
- 2023 (chaos started): 99.94% (5.2 hrs downtime)
- 2024 (mature chaos): 99.97% (2.6 hrs downtime)

Incident Reduction:
- P0 incidents: 12 → 3 (75% reduction)
- P1 incidents: 47 → 18 (62% reduction)
- Customer impact: 89% reduction
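The downtime figures follow directly from the uptime percentages; a one-liner makes the conversion explicit:

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(uptime_percent: float) -> float:
    """Convert an annual uptime percentage to hours of downtime per year."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for year, uptime in [(2022, 99.89), (2023, 99.94), (2024, 99.97)]:
    print(f"{year}: {uptime}% uptime -> {downtime_hours(uptime):.1f} hrs downtime")
```

At this scale each extra "nine-ish" of uptime is worth hours per year: going from 99.89% to 99.97% cut annual downtime from roughly 9.6 hours to roughly 2.6.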

Lessons Learned

After 3 years of chaos engineering:

  1. Start small: Single service, off-hours, low blast radius
  2. Automate everything: Manual chaos doesn't scale
  3. Monitor obsessively: Catch issues immediately
  4. Game days work: Monthly practice builds muscle memory
  5. Document findings: Knowledge compounds
  6. Fix what breaks: No point finding issues you don't fix
  7. Culture matters: Leadership support essential
  8. Safety first: Emergency rollback always ready
  9. Steady state critical: Know your baseline
  10. Continuous chaos: One-time experiments insufficient
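The last point deserves emphasis: resilience decays as the system changes, so experiments have to recur. A hypothetical rotation (experiment names and the dict shape are illustrative; the market-hours guard mirrors `_safety_checks` in the framework above) might pick one safe experiment per scheduled run:

```python
import random

def pick_experiment(experiments: list, hour: int):
    """Pick a random experiment that is safe to run at the given local hour."""
    if 9 <= hour <= 16:  # market hours: restrict to the smallest blast radius
        candidates = [e for e in experiments if e["blast_radius"] == "single"]
    else:
        candidates = list(experiments)
    return random.choice(candidates) if candidates else None

experiments = [
    {"name": "order-latency", "blast_radius": "single"},
    {"name": "az-partition", "blast_radius": "subset"},
]

print(pick_experiment(experiments, hour=14))  # only "order-latency" is eligible
```

Wiring a picker like this into a scheduler (cron, CI, or the game-day calendar) is what turns one-off experiments into continuous chaos.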

Breaking systems deliberately saves them from breaking accidentally.

Further Reading

  • Principles of Chaos Engineering
  • Chaos Engineering: System Resiliency in Practice
  • Netflix Chaos Monkey
  • Gremlin Chaos Engineering
  • AWS Fault Injection Simulator
  • LitmusChaos