
Benchmarks

ergo is benchmarked against five Node.js frameworks across nine scenarios that exercise different stages of the Fast Fail pipeline. Each scenario isolates a specific concern — routing, auth, body parsing, compression, conditional requests, rate limiting — to measure where frameworks spend time and how ergo’s design decisions affect throughput and latency.

Run date: 2026-03-22  |  ergo version: 0.1.0  |  Node.js: 22 (Alpine)

Overall Rankings (by average requests per second)

| Scenario | 1st | 2nd | 3rd | 4th | 5th | 6th |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline GET | Hono (37,183) | Koa (36,841) | node:http (36,722) | ergo (36,472) | Fastify (35,049) | Express (18,581) |
| Param GET | ergo (38,798) | Hono (37,487) | node:http (36,493) | Koa (36,309) | Fastify (36,190) | Express (17,539) |
| Auth GET | ergo (36,437) | node:http (35,041) | Hono (34,500) | Koa (33,424) | Fastify (32,051) | Express (18,197) |
| JSON POST | ergo (34,389) | Koa (33,786) | node:http (33,008) | Fastify (31,993) | Hono (30,811) | Express (14,885) |
| Full Pipeline | ergo (34,688) | Koa (34,014) | node:http (33,107) | Fastify (31,752) | Hono (29,795) | Express (14,704) |
| Concurrency Ramp | Koa (30,249) | ergo (30,190) | Hono (28,969) | node:http (27,930) | Fastify (25,133) | Express (14,312) |
| Production Stack | node:http (18,008) | ergo (7,386) | Koa (6,564) | Express (4,834) | Hono (4,725) | Fastify (4,456) |
| Conditional GET | Koa (40,626) | Fastify (38,824) | node:http (37,859) | Hono (37,198) | ergo (36,612) | Express (17,953) |
| Rate-Limit Flood | Fastify (35,315) | node:http (34,626) | Hono (34,115) | Koa (33,046) | ergo (15,430) | Express (14,035) |
Average requests per second by framework across all 9 scenarios. Higher is better.
  • Param GET: #1 — 38,798 RPS (106% of node:http, 107% of Fastify)
  • Auth GET: #1 — 36,437 RPS (104% of node:http, 114% of Fastify)
  • JSON POST: #1 — 34,389 RPS (104% of node:http, 107% of Fastify)
  • Full Pipeline: #1 — 34,688 RPS (105% of node:http, 109% of Fastify)
  • Concurrency Ramp: #2 — 30,190 RPS (108% of node:http, 120% of Fastify)
  • Production Stack: #2 — 7,386 RPS (166% of Fastify, 156% of Hono)
  • Baseline GET: #4 — 36,472 RPS (99% of node:http, 104% of Fastify)
  • Conditional GET: #5 — 36,612 RPS (97% of node:http, 94% of Fastify)
  • Rate-Limit Flood: #5 — 15,430 RPS (45% of node:http, 44% of Fastify)

ergo ranks #1 in the four core pipeline scenarios (Param GET, Auth GET, JSON POST, Full Pipeline) — the scenarios that exercise the Fast Fail design most directly. In the Production Stack scenario with compression, CORS, content negotiation, and timeout middleware, ergo leads all full-featured frameworks at 166% of Fastify’s throughput.
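The Fast Fail idea these rankings reflect can be sketched in a few lines. The following is a hypothetical illustration, not ergo's actual API: the stage functions, request shape, and `handle` helper are invented for the example. The point is that each stage can short-circuit with a cheap rejection, so a request that fails early never pays for later, more expensive stages.

```javascript
// Hypothetical fast-fail staged pipeline (NOT ergo's real API).
// Each stage returns a rejection response to short-circuit, or null to continue.
const stages = [
  function negotiation(req) {                 // Stage 1: routing/negotiation checks
    return req.path.startsWith('/users') ? null : { status: 404 };
  },
  function authorization(req) {               // Stage 2: credential check
    return req.headers.authorization ? null : { status: 401 };
  },
  function validation(req) {                  // Stage 3: body shape check
    return typeof req.body?.name === 'string' ? null : { status: 400 };
  },
];

function handle(req) {
  for (const stage of stages) {
    const rejection = stage(req);
    if (rejection) return rejection;          // fail fast: skip remaining stages
  }
  return { status: 201 };                     // handler runs only if every stage passed
}

// An unauthenticated request is rejected before the body is ever inspected:
console.log(handle({ path: '/users', headers: {}, body: null }).status); // 401
```

Ordering the cheap checks first is what makes a rejected request inexpensive relative to a full middleware chain that runs every layer regardless.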

Throughput degradation as middleware complexity increases from Baseline to Production Stack. Flatter lines indicate lower middleware overhead.

All values are the mean across 3 trial runs.

GET /ping — routing overhead only. Measures the framework’s minimum per-request cost with no middleware.

| Framework | Avg RPS | p50 (ms) | p95 (ms) | p99 (ms) | Avg Latency (ms) | Mem Peak |
| --- | --- | --- | --- | --- | --- | --- |
| Hono | 37,183 | 0.88 | 2.19 | 4.64 | 1.06 | 29.1 MB |
| Koa | 36,841 | 0.88 | 2.25 | 4.89 | 1.07 | 32.7 MB |
| node:http | 36,722 | 0.89 | 2.21 | 4.80 | 1.08 | 22.4 MB |
| ergo | 36,472 | 0.88 | 2.27 | 5.10 | 1.09 | 31.3 MB |
| Fastify | 35,049 | 0.91 | 2.48 | 5.42 | 1.13 | 22.8 MB |
| Express | 18,581 | 2.10 | 5.18 | 7.03 | 2.14 | 31.8 MB |

GET /users/:id?fields=name — parameterized route with query string parsing. Exercises Stage 1 (Negotiation) — URL and query parameter handling.

| Framework | Avg RPS | p50 (ms) | p95 (ms) | p99 (ms) | Avg Latency (ms) | Mem Peak |
| --- | --- | --- | --- | --- | --- | --- |
| ergo | 38,798 | 0.85 | 2.08 | 4.46 | 1.02 | 30.3 MB |
| Hono | 37,487 | 0.87 | 2.17 | 4.71 | 1.05 | 28.6 MB |
| node:http | 36,493 | 0.90 | 2.24 | 4.75 | 1.08 | 21.6 MB |
| Koa | 36,309 | 0.89 | 2.29 | 4.96 | 1.09 | 33.5 MB |
| Fastify | 36,190 | 0.87 | 2.34 | 5.21 | 1.09 | 30.8 MB |
| Express | 17,539 | 2.25 | 5.38 | 7.26 | 2.27 | 32.8 MB |
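The per-request work this scenario isolates — extracting the `:id` segment and the `fields` query parameter — can be approximated with Node's built-in WHATWG URL API. The route pattern and the `parseUserRequest` helper below are illustrative, not taken from any of the benchmarked frameworks.

```javascript
// Sketch of Stage 1 (Negotiation) work: route-param extraction plus query
// string parsing for a request like GET /users/42?fields=name.
function parseUserRequest(rawUrl) {
  // A base origin is required to parse a relative request URL.
  const url = new URL(rawUrl, 'http://localhost:3000');
  const match = url.pathname.match(/^\/users\/([^/]+)$/); // the ':id' segment
  if (!match) return null;                                // not this route
  return { id: match[1], fields: url.searchParams.get('fields') };
}

console.log(parseUserRequest('/users/42?fields=name')); // { id: '42', fields: 'name' }
```

Real routers avoid per-request regex construction by compiling route patterns once at startup; this sketch only shows the shape of the work being measured.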

GET /auth/users/:id with Bearer token — authenticated request. Exercises Stage 2 (Authorization) — credential extraction and token verification.

| Framework | Avg RPS | p50 (ms) | p95 (ms) | p99 (ms) | Avg Latency (ms) | Mem Peak |
| --- | --- | --- | --- | --- | --- | --- |
| ergo | 36,437 | 0.90 | 2.23 | 4.68 | 1.08 | 29.4 MB |
| node:http | 35,041 | 0.94 | 2.33 | 4.85 | 1.13 | 20.4 MB |
| Hono | 34,500 | 0.95 | 2.37 | 5.00 | 1.15 | 29.3 MB |
| Koa | 33,424 | 0.97 | 2.53 | 5.26 | 1.18 | 33.1 MB |
| Fastify | 32,051 | 0.95 | 3.02 | 6.17 | 1.24 | 25.1 MB |
| Express | 18,197 | 2.15 | 5.26 | 7.17 | 2.18 | 31.1 MB |

POST /users with JSON body — body parsing. Exercises Stage 3 (Validation) — request body stream reading and JSON deserialization.

| Framework | Avg RPS | p50 (ms) | p95 (ms) | p99 (ms) | Avg Latency (ms) | Mem Peak |
| --- | --- | --- | --- | --- | --- | --- |
| ergo | 34,389 | 0.94 | 2.43 | 5.32 | 1.15 | 31.0 MB |
| Koa | 33,786 | 0.96 | 2.48 | 5.22 | 1.17 | 33.4 MB |
| node:http | 33,008 | 0.98 | 2.47 | 5.41 | 1.20 | 27.5 MB |
| Fastify | 31,993 | 0.96 | 3.39 | 5.89 | 1.24 | 33.7 MB |
| Hono | 30,811 | 1.11 | 2.92 | 7.57 | 1.28 | 37.2 MB |
| Express | 14,885 | 2.70 | 6.03 | 7.83 | 2.67 | 33.5 MB |

POST /auth/users with Bearer token, JSON body, and AJV validation — exercises all four Fast Fail stages in sequence.

| Framework | Avg RPS | p50 (ms) | p95 (ms) | p99 (ms) | Avg Latency (ms) | Mem Peak |
| --- | --- | --- | --- | --- | --- | --- |
| ergo | 34,688 | 0.94 | 2.35 | 5.04 | 1.14 | 29.7 MB |
| Koa | 34,014 | 0.95 | 2.44 | 5.27 | 1.16 | 33.2 MB |
| node:http | 33,107 | 0.99 | 2.49 | 5.14 | 1.19 | 28.4 MB |
| Fastify | 31,752 | 0.98 | 3.38 | 5.69 | 1.25 | 35.8 MB |
| Hono | 29,795 | 1.18 | 2.95 | 7.72 | 1.33 | 38.2 MB |
| Express | 14,704 | 2.72 | 6.33 | 8.08 | 2.71 | 35.2 MB |

POST /auth/users with a ramp from 10 to 500 virtual users — measures throughput and latency under increasing concurrency to find the saturation point.

| Framework | Avg RPS | p50 (ms) | p95 (ms) | p99 (ms) | Avg Latency (ms) | Mem Peak |
| --- | --- | --- | --- | --- | --- | --- |
| Koa | 30,249 | 2.27 | 19.05 | 32.18 | 5.40 | 33.7 MB |
| ergo | 30,190 | 2.29 | 19.04 | 32.25 | 5.40 | 31.3 MB |
| Hono | 28,969 | 2.57 | 19.08 | 37.17 | 5.63 | 39.0 MB |
| node:http | 27,930 | 2.58 | 20.39 | 37.11 | 5.85 | 28.4 MB |
| Fastify | 25,133 | 3.04 | 21.73 | 40.68 | 6.49 | 37.8 MB |
| Express | 14,312 | 5.87 | 39.28 | 49.04 | 11.42 | 49.5 MB |

POST /stack/auth/users with CORS, content negotiation, timeout, auth, body parsing, AJV validation, and gzip compression — a realistic production middleware stack. The delta from Scenario 05 isolates the cost of additional middleware layers.

| Framework | Avg RPS | p50 (ms) | p95 (ms) | p99 (ms) | Avg Latency (ms) | Mem Peak |
| --- | --- | --- | --- | --- | --- | --- |
| node:http | 18,008 | 1.96 | 5.14 | 9.14 | 2.20 | 23.1 MB |
| ergo | 7,386 | 4.01 | 12.38 | 14.62 | 5.40 | 30.5 MB |
| Koa | 6,564 | 4.50 | 14.42 | 17.67 | 6.08 | 32.6 MB |
| Express | 4,834 | 8.54 | 14.17 | 17.22 | 8.26 | 28.0 MB |
| Hono | 4,725 | 7.72 | 17.72 | 21.31 | 8.45 | 34.6 MB |
| Fastify | 4,456 | 8.45 | 17.35 | 22.86 | 8.97 | 33.7 MB |

GET /cached/users/:id with If-None-Match — measures ETag generation and 304 Not Modified short-circuiting. Tests how efficiently frameworks skip serialization and compression when the resource hasn’t changed.

| Framework | Avg RPS | p50 (ms) | p95 (ms) | p99 (ms) | Avg Latency (ms) | Mem Peak |
| --- | --- | --- | --- | --- | --- | --- |
| Koa | 40,626 | 0.81 | 1.99 | 4.26 | 0.97 | 32.7 MB |
| Fastify | 38,824 | 0.84 | 2.09 | 4.61 | 1.02 | 23.7 MB |
| node:http | 37,859 | 0.86 | 2.15 | 4.60 | 1.04 | 21.2 MB |
| Hono | 37,198 | 0.87 | 2.23 | 4.89 | 1.06 | 29.1 MB |
| ergo | 36,612 | 0.87 | 2.31 | 5.07 | 1.08 | 31.2 MB |
| Express | 17,953 | 2.16 | 5.35 | 7.37 | 2.22 | 31.6 MB |

POST /rate-limited/users under sustained flood — rate limit set to 50 requests per 10-second window with 50 concurrent VUs. The vast majority of requests are rejected with 429. Tests the Fast Fail principle: rejected requests should be cheap because the rate limiter runs in Stage 1 before body parsing, auth, and validation.

| Framework | Avg RPS | p50 (ms) | p95 (ms) | p99 (ms) | Avg Latency (ms) | Mem Peak |
| --- | --- | --- | --- | --- | --- | --- |
| Fastify | 35,315 | 0.93 | 2.28 | 5.00 | 1.12 | 32.1 MB |
| node:http | 34,626 | 0.94 | 2.35 | 5.06 | 1.14 | 24.7 MB |
| Hono | 34,115 | 0.95 | 2.46 | 5.20 | 1.16 | 28.4 MB |
| Koa | 33,046 | 0.95 | 2.65 | 6.14 | 1.21 | 32.4 MB |
| ergo | 15,430 | 0.81 | 2.73 | 5.98 | 2.58 | 31.1 MB |
| Express | 14,035 | 2.85 | 6.59 | 8.51 | 2.84 | 32.4 MB |
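The policy under test — 50 requests per 10-second fixed window — can be sketched as a small in-memory counter that runs before body parsing and auth, so a 429 costs almost nothing. The `makeLimiter` helper is invented for illustration; the suite's actual implementations (express-rate-limit, @fastify/rate-limit, etc.) differ in detail.

```javascript
// Fixed-window rate limiter: at most `limit` requests per `windowMs` per key.
function makeLimiter(limit = 50, windowMs = 10_000) {
  const windows = new Map(); // key -> { start, count }
  return function allow(key, now = Date.now()) {
    const w = windows.get(key);
    if (!w || now - w.start >= windowMs) {
      windows.set(key, { start: now, count: 1 }); // open a fresh window
      return true;
    }
    return ++w.count <= limit; // false => respond 429, skip all later stages
  };
}

// Flood one client with 60 requests inside a single window:
const allow = makeLimiter(50, 10_000);
let allowed = 0;
for (let i = 0; i < 60; i++) if (allow('client-1', 0)) allowed++;
console.log(allowed); // 50 — requests 51..60 would get a 429
```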
Latency distribution across four key scenarios. Tighter dot clusters indicate more predictable response times. Connecting lines show the p50-to-p99 spread.
Peak memory usage (MB) by framework across all scenarios. Lower is better.

Each server runs in its own Docker container, one at a time (sequential, never concurrent), to eliminate inter-container CPU contention.

| Resource | Server Container | k6 Container |
| --- | --- | --- |
| CPU | `--cpuset-cpus="0"` | `--cpuset-cpus="1"` |
| Memory | `--memory=512m` | `--memory=1g` |
| Network | Docker bridge (bench) | Docker bridge (bench) |
| Node.js | node:22-alpine | — |
| NODE_ENV | production | — |

All servers listen on port 3000 and implement identical request/response logic — the only variable is the framework.

k6 staged virtual users per scenario:

| Phase | Duration | Virtual Users | Purpose |
| --- | --- | --- | --- |
| Warmup | 30s | 0 → 50 | JIT warm-up, connection pools established |
| Sustain | 60s | 50 | Measurement window |
| Ramp-down | 10s | 50 → 0 | Graceful drain |
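The staged profile above corresponds to a k6 `options.stages` configuration along these lines (a sketch of the shape, not necessarily the suite's exact script):

```javascript
// k6 test script options: ramp 0 → 50 VUs, hold, then drain.
export const options = {
  stages: [
    { duration: '30s', target: 50 }, // warmup: JIT warm-up, connection pools
    { duration: '60s', target: 50 }, // sustain: the measurement window
    { duration: '10s', target: 0 },  // ramp-down: graceful drain
  ],
};
```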

Three trial runs per scenario × framework combination (162 total runs). The report computes the mean, standard deviation, and coefficient of variation (CoV) across trials to quantify run-to-run consistency.

All frameworks use AJV JSON Schema validation for request body validation (Scenarios 05 and 07). Fastify uses its built-in AJV integration; all others compile schemas at startup. This ensures validation overhead is consistent across the suite.


| Framework | Version | Key Dependencies |
| --- | --- | --- |
| node:http | Node.js 22 (Alpine) | ajv 8.18.0, etag 1.8.1 |
| Express | 5.2.1 | compression 1.8.1, cors 2.8.6, express-rate-limit 8.3.1 |
| Fastify | 5.8.2 | @fastify/compress 8.3.1, @fastify/cors 11.2.0, @fastify/etag 6.1.0, @fastify/rate-limit 10.3.0 |
| Hono | 4.12.8 | @hono/node-server 1.19.11, ajv 8.18.0, etag 1.8.1 |
| Koa | 3.1.2 | @koa/cors 5.0.0, @koa/router 15.4.0, koa-bodyparser 4.4.1, koa-compress 5.2.1 |
| ergo | 0.1.0 | ergo-router 0.1.0 |

Each framework × scenario combination is run 3 times. The report computes the coefficient of variation (CoV = stddev / mean × 100%) for both RPS and p99 latency to quantify measurement noise.
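A worked example of that computation, applied to three hypothetical RPS trials. This sketch uses the population standard deviation; whether the report uses population or sample stddev is not specified here.

```javascript
// CoV = stddev / mean × 100%, over one framework × scenario's trial values.
function cov(values) {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((a, v) => a + (v - mean) ** 2, 0) / values.length;
  return (Math.sqrt(variance) / mean) * 100;
}

console.log(cov([36000, 36500, 37000]).toFixed(2)); // '1.12' — well under the 3% bar
```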

| CoV Range | Verdict |
| --- | --- |
| < 3% | Excellent |
| 3–5% | Acceptable |
| 5–10% | Noisy |
| > 10% | Unreliable |
  • Average CoV across all 54 combinations: 1.8%
  • Noisy (CoV ≥ 5%): 5 / 54 combinations
  • Unreliable (CoV ≥ 10%): 0 / 54 combinations
  • Overall verdict: CLEAN RUN — results are highly reliable

The noisy combinations were:

| Framework | Scenario | CoV (RPS) | CoV (p99) |
| --- | --- | --- | --- |
| Fastify | Baseline GET | 6.3% | 12.9% |
| Fastify | Auth GET | 8.6% | 18.0% |
| ergo | JSON POST | 5.2% | 13.2% |
| Express | Conditional GET | 5.2% | 4.2% |
| Koa | Rate-Limit Flood | 9.2% | 18.6% |

The Fastify variability in Scenarios 01 and 03 is consistent with Fastify’s JIT-sensitive startup behavior under short warmup windows.


OS: Darwin 25.3.0 arm64
CPU: Apple M4 Pro
Cores: 14 (12 performance + 2 efficiency)
RAM: 48 GB
Docker: Docker Desktop 4.42.0 (Engine 28.1.1)
Node.js: 22 (pinned via node:22-alpine)
k6: grafana/k6:latest

The complete benchmark suite — scenarios, server implementations, Docker orchestration, and report generator — is committed to the ergo repository for full auditability.

```sh
cd benchmarks
chmod +x run.sh
./run.sh                 # ~2.5 hours for 162 runs
node generate-report.js  # produces results/report.md
```

Prerequisites: Docker Desktop (or Docker Engine) with multi-core CPU pinning support, 2+ CPU cores available, ~4 GB free RAM.