Benchmark Methodology

A clear, repeatable methodology for evaluating latency and throughput across VectorWave and IronWave.

Latency Benchmarks

Tick-by-tick latency is measured with microbenchmarks (JMH for Java, Criterion for Rust) using pinned CPU affinity, warm caches, and steady-state sampling. We report P50 and note tail behavior separately.
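
As a rough sketch of that setup, a JMH sample-time benchmark with warmup and steady-state phases might look like the following. The class name and the forward-transform call are illustrative placeholders, not VectorWave's confirmed API.

```java
import org.openjdk.jmh.annotations.*;
import java.util.concurrent.TimeUnit;

// Minimal JMH sketch of a tick-by-tick latency benchmark.
// The transform call is a placeholder, not the library's actual interface.
@State(Scope.Thread)
@BenchmarkMode(Mode.SampleTime)          // sample-time mode yields a latency distribution (P50, tails)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5, time = 1)        // warmup iterations to reach steady state
@Measurement(iterations = 10, time = 1)  // steady-state sampling window
@Fork(1)
public class TickLatencyBenchmark {

    private double[] tick;               // one fixed-size input block per invocation

    @Setup(Level.Trial)
    public void setUp() {
        tick = new double[256];
        for (int i = 0; i < tick.length; i++) {
            tick[i] = Math.sin(i * 0.1); // deterministic synthetic signal
        }
    }

    @Benchmark
    public double[] transformOneTick() {
        // Placeholder for the per-tick wavelet transform under test.
        return hypotheticalForwardTransform(tick);
    }

    // Stand-in for the real transform so the sketch compiles on its own.
    private double[] hypotheticalForwardTransform(double[] input) {
        double[] out = new double[input.length];
        System.arraycopy(input, 0, out, 0, input.length);
        return out;
    }
}
```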

Throughput Benchmarks

Throughput tests use fixed block sizes and controlled wavelet parameters to compare samples per second, highlighting batch efficiency for AI feature engineering workflows.
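
A throughput counterpart could look like the sketch below, where each operation transforms one fixed-size block, so samples per second follow from ops/sec multiplied by block size. The block sizes and transform call are assumptions for illustration.

```java
import org.openjdk.jmh.annotations.*;
import java.util.concurrent.TimeUnit;

// Minimal JMH sketch of a block throughput benchmark. Block sizes and the
// transform call are illustrative assumptions, not VectorWave's actual API.
@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)           // operations (blocks) per second
@OutputTimeUnit(TimeUnit.SECONDS)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 10, time = 1)
@Fork(1)
public class BlockThroughputBenchmark {

    @Param({"1024", "4096", "16384"})     // fixed block sizes under comparison
    int blockSize;

    private double[] block;

    @Setup(Level.Trial)
    public void setUp() {
        block = new double[blockSize];
        for (int i = 0; i < block.length; i++) {
            block[i] = Math.cos(i * 0.05); // deterministic synthetic signal
        }
    }

    @Benchmark
    public double[] transformBlock() {
        // One "op" = one transformed block; samples/sec = ops/sec * blockSize.
        return hypotheticalForwardTransform(block);
    }

    // Stand-in so the sketch is self-contained.
    private double[] hypotheticalForwardTransform(double[] input) {
        return input.clone();
    }
}
```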

Hardware & Runtime

Benchmarks run on dedicated Linux hosts with isolated cores, CPU frequency locked, and SIMD flags explicitly enabled to avoid frequency scaling or thermal drift.
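
At the JVM level, SIMD-related flags can be pinned per benchmark fork; the sketch below shows one way to do that with JMH. The specific flag values are illustrative assumptions, and core isolation and CPU frequency locking are applied in the host configuration rather than in code.

```java
import org.openjdk.jmh.annotations.*;

// Sketch of forking the benchmark JVM with SIMD-related flags set explicitly.
// The flag values are illustrative; they are not the project's confirmed settings.
@State(Scope.Thread)
@Fork(value = 1, jvmArgsAppend = {
        "--add-modules=jdk.incubator.vector", // enable the incubating Vector API
        "-XX:UseAVX=2"                        // fix the AVX level HotSpot may use
})
public class PinnedRuntimeBenchmark {

    private double[] data = new double[1024];

    @Benchmark
    public double sumUnderPinnedFlags() {
        // Placeholder workload; real runs reuse the latency and throughput
        // harnesses above with the same fork settings.
        double acc = 0.0;
        for (double v : data) {
            acc += v;
        }
        return acc;
    }
}
```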

Reproducibility

We capture transform parameters, compiler flags, and runtime versions alongside results to ensure consistent comparisons across releases.
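
For instance, a small helper along these lines could record runtime versions and transform parameters next to each result so reports are self-describing; the field names and parameter values shown are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of capturing run metadata alongside benchmark output so
// results can be compared across releases. Parameter values are hypothetical.
public class RunMetadata {

    public static Map<String, String> capture() {
        Map<String, String> meta = new LinkedHashMap<>();
        // Runtime versions, as reported by the JVM itself.
        meta.put("java.version", System.getProperty("java.version"));
        meta.put("java.vm.name", System.getProperty("java.vm.name"));
        meta.put("os.name", System.getProperty("os.name"));
        meta.put("os.arch", System.getProperty("os.arch"));
        // Transform parameters for this run (illustrative values).
        meta.put("wavelet", "db4");
        meta.put("blockSize", "4096");
        meta.put("decompositionLevels", "3");
        return meta;
    }

    public static void main(String[] args) {
        // Print alongside the benchmark results so each report is self-describing.
        capture().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```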

Interpreting Results

Latency benchmarks represent steady-state processing under controlled conditions. Production latency can vary based on data distribution, infrastructure, and integration overhead.

Review the latest benchmark results →

FAQ

Do benchmarks include warmup and steady-state sampling?

Yes. We run warmup iterations and report steady-state metrics to avoid cold-start distortion.

What hardware settings are used?

Benchmarks run on dedicated Linux hosts with pinned cores, fixed CPU frequency, and explicit SIMD flags.

Can I reproduce the results?

Yes. We publish parameter details alongside results and can share a benchmark pack on request to support reproducibility.

Validate in Your Environment

Need a benchmark pack or evaluation license? Contact us for reproducibility guidance.

Request Access