
Commit 96602a8

committed
Draft refactoring. Changed benchmarks.
1 parent 4106a7d commit 96602a8


74 files changed: +2997 -2281 lines changed

benchmarks/README.md

Lines changed: 217 additions & 75 deletions
@@ -1,75 +1,217 @@
*Removed content:*

## 📊 Quick & Dirty Benchmarks

This benchmark compares the performance of two API implementations:

- **Original**: Uses standard routing with Pydantic validation.
- **FastOpenAPI**: Uses proxy routing with Pydantic validation.

Each implementation runs in a separate instance, and the benchmark measures response times across multiple endpoints.

### 📈 Rough results
You can check rough results here:
- [AioHttp](aiohttp/AIOHTTP.md)
- [Falcon](falcon/FALCON.md)
- [Flask](flask/FLASK.md)
- [Quart](quart/QUART.md)
- [Sanic](sanic/SANIC.md)
- [Starlette](starlette/STARLETTE.md)
- [Tornado](tornado/TORNADO.md)
- [Django](django/DJANGO.md)

### 📖 How It Works
- The script runs **10,000 requests per endpoint**; the count is configurable.
- It tests **GET, POST, PUT, PATCH, and DELETE** operations.
- For DELETE, it first creates a temporary record to ensure a valid deletion.
- Results are printed and compared in a summary table.

### 📂 Benchmark Structure
- The main benchmark script is in `benchmarks/benchmark.py`.
- Test applications are organized in separate folders (`without_fastopenapi/` and `with_fastopenapi/`).
- Each implementation runs on its own port (`8000` and `8001` by default).

### ▶️ Running the Benchmark
1. Start both API implementations:
   ```sh
   python benchmarks/<framework>/without_fastopenapi/run.py
   python benchmarks/<framework>/with_fastopenapi/run.py
   ```
2. Run the benchmark:
   ```sh
   python benchmarks/benchmark.py
   ```
3. Wait for the results (example):
   ```sh
   Testing Original Implementation

   Original - Running 10000 iterations per endpoint
   --------------------------------------------------
   GET all records: 16.3760 sec total, 1.64 ms per request
   GET one record: 17.7782 sec total, 1.78 ms per request
   POST new record: 19.8376 sec total, 1.98 ms per request
   PUT record: 20.4346 sec total, 2.04 ms per request
   PATCH record: 19.7331 sec total, 1.97 ms per request
   DELETE record: 37.4556 sec total, 3.75 ms per request

   Testing FastOpenAPI Implementation

   FastOpenAPI - Running 10000 iterations per endpoint
   --------------------------------------------------
   GET all records: 17.4752 sec total, 1.75 ms per request
   GET one record: 18.3059 sec total, 1.83 ms per request
   POST new record: 19.9647 sec total, 2.00 ms per request
   PUT record: 19.3761 sec total, 1.94 ms per request
   PATCH record: 19.5880 sec total, 1.96 ms per request
   DELETE record: 40.6837 sec total, 4.07 ms per request

   Performance Comparison (10000 iterations)
   ======================================================================
   Endpoint             Original     FastOpenAPI    Difference
   ----------------------------------------------------------------------
   GET all records      1.64 ms      1.75 ms        0.11 ms (+6.7%)
   GET one record       1.78 ms      1.83 ms        0.05 ms (+3.0%)
   POST new record      1.98 ms      2.00 ms        0.01 ms (+0.6%)
   PUT record           2.04 ms      1.94 ms        -0.11 ms (-5.2%)
   PATCH record         1.97 ms      1.96 ms        -0.01 ms (-0.7%)
   DELETE record        3.75 ms      4.07 ms        0.32 ms (+8.6%)
   ```
*Added content:*

# Benchmarks

## Overview

This benchmark suite compares the performance of different Python web frameworks across three implementation variants, plus FastAPI as a baseline:

- **Pure**: Minimal implementation without validation; raw framework performance
- **with_validators**: Adds Pydantic validation for request/response models
- **with_fastopenapi**: Uses the FastOpenAPI router with automatic OpenAPI documentation and validation
- **FastAPI** (baseline): Industry-standard async framework, included for comparison

Each implementation runs as a separate application, and the benchmark measures throughput (RPS) and latency across all CRUD endpoints.
## Supported Frameworks

- [AioHttp](aiohttp/AIOHTTP.md) - Async
- [Django](django/DJANGO.md) - Sync (WSGI)
- [Falcon](falcon/FALCON.md) - Sync (WSGI)
- [Flask](flask/FLASK.md) - Sync (WSGI)
- [Quart](quart/QUART.md) - Async
- [Sanic](sanic/SANIC.md) - Async
- [Starlette](starlette/STARLETTE.md) - Async
- [Tornado](tornado/TORNADO.md) - Async
## How It Works

### Test Parameters
- **10,000 requests** per endpoint (configurable)
- **100 warmup requests** per endpoint
- **20 concurrent requests** (configurable)
- **5 rounds** with randomized order to reduce bias
- **Median values** used for the final comparison
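As a rough illustration, the parameters above (warmup requests, total requests, fixed concurrency) could be driven by a loop like the following; `run_endpoint` and `send_request` are hypothetical names for this sketch, not the suite's actual API:

```python
import asyncio
import time

async def run_endpoint(send_request, total=10_000, warmup=100, concurrency=20):
    """Fire `total` timed requests at one endpoint, at most `concurrency`
    in flight at once. `send_request` is any awaitable factory (a stand-in
    for the real HTTP call). Warmup requests are executed but not timed."""
    sem = asyncio.Semaphore(concurrency)
    latencies = []

    async def one(timed):
        async with sem:
            t0 = time.perf_counter()
            await send_request()
            if timed:
                latencies.append((time.perf_counter() - t0) * 1000)  # ms

    await asyncio.gather(*(one(False) for _ in range(warmup)))  # warmup, untimed
    await asyncio.gather(*(one(True) for _ in range(total)))    # measured
    return latencies

# Demo with a no-op "request" in place of a real HTTP client:
async def fake_request():
    await asyncio.sleep(0)

latencies = asyncio.run(run_endpoint(fake_request, total=200, warmup=10))
print(len(latencies))  # 200
```

The semaphore is what caps in-flight requests at the configured concurrency while still timing each request individually.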
### Endpoints Tested
- `GET /records` - List all records
- `GET /records/{id}` - Get a single record
- `POST /records` - Create a new record
- `PUT /records/{id}` - Replace an entire record
- `PATCH /records/{id}` - Partial update
- `DELETE /records/{id}` - Delete a record (records are created first)
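All six endpoints operate on a shared in-memory store. A stdlib-only sketch of what such a store might look like (the real one lives in `benchmarks/common/storage.py`; the class and method names here are illustrative, not the actual interface):

```python
import itertools

class RecordStorage:
    """Illustrative in-memory record store backing the CRUD endpoints."""

    def __init__(self):
        self._records = {}
        self._ids = itertools.count(1)  # monotonically increasing ids

    def list(self):                      # GET /records
        return list(self._records.values())

    def get(self, record_id):            # GET /records/{id}
        return self._records.get(record_id)

    def create(self, data):              # POST /records
        record = {"id": next(self._ids), **data}
        self._records[record["id"]] = record
        return record

    def replace(self, record_id, data):  # PUT /records/{id}
        self._records[record_id] = {"id": record_id, **data}
        return self._records[record_id]

    def patch(self, record_id, partial): # PATCH /records/{id}
        self._records[record_id].update(partial)
        return self._records[record_id]

    def delete(self, record_id):         # DELETE /records/{id}
        return self._records.pop(record_id, None)

store = RecordStorage()
rec = store.create({"name": "a"})
store.patch(rec["id"], {"name": "b"})
print(store.get(rec["id"])["name"])  # b
```

Because DELETE consumes a record, the benchmark creates records first so every DELETE request hits an existing id.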
### Metrics Reported
- **RPS** (requests per second) - Higher is better
- **Mean latency** - Average response time
- **p50** (median) - 50th-percentile latency
- **p95** - 95th-percentile latency
- **p99** - 99th-percentile latency
- **Min/Max** - Best and worst latencies
## Running the Benchmark

### 1. Start all implementations

For each framework, start all three variants:

```bash
# Pure implementation (port 8000)
python benchmarks/<framework>/pure.py

# With validators (port 8001)
python benchmarks/<framework>/with_validators.py

# With FastOpenAPI (port 8002)
python benchmarks/<framework>/with_fastopenapi.py

# FastAPI for comparison (port 8003)
python benchmarks/fastapi/run.py
```

Example for AioHttp:
```bash
python benchmarks/aiohttp/pure.py
python benchmarks/aiohttp/with_validators.py
python benchmarks/aiohttp/with_fastopenapi.py
python benchmarks/fastapi/run.py
```
### 2. Run the benchmark

```bash
python benchmarks/common/benchmark.py
```

The benchmark will:
1. Run 5 rounds in randomized order
2. Display results for each round
3. Calculate median values across rounds
4. Compare all implementations against the Pure baseline
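The round-and-median logic above can be sketched as follows; `run_rounds` and `measure` are illustrative stand-ins, not the runner's real interface:

```python
import random
import statistics

def run_rounds(targets, measure, rounds=5):
    """Each round visits the targets in a fresh random order (so no variant
    systematically benefits from cache warmup or ordering effects), then the
    median across rounds is taken per target. `measure` stands in for the
    real per-target benchmark call."""
    results = {name: [] for name in targets}
    for _ in range(rounds):
        order = list(targets)
        random.shuffle(order)  # randomized order every round
        for name in order:
            results[name].append(measure(name))
    # The final summary reports the median across rounds:
    return {name: statistics.median(vals) for name, vals in results.items()}

# Demo with a deterministic fake "measurement":
medians = run_rounds(["pure", "with_validators"], measure=lambda name: len(name))
print(medians["pure"])  # 4
```

Medians rather than means are used so a single noisy round (GC pause, background load) does not skew the summary.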
## Example Output

### Per-Round Results

```markdown
================================================================================

## Framework Pure (round 1)
Concurrency: **20**

| Endpoint | RPS | Mean (ms) | p50 (ms) | p95 (ms) | p99 (ms) | Min (ms) | Max (ms) |
|:---------|----:|----------:|---------:|---------:|---------:|---------:|---------:|
| `GET all records` | 45230 | 2.21 | 2.10 | 3.50 | 4.80 | 1.20 | 8.50 |
| `GET one record` | 46890 | 2.13 | 2.05 | 3.30 | 4.60 | 1.15 | 7.80 |
| `POST new record` | 38450 | 2.60 | 2.45 | 4.10 | 5.90 | 1.40 | 10.20 |
| `PUT record` | 42100 | 2.37 | 2.25 | 3.80 | 5.30 | 1.30 | 9.10 |
| `PATCH record` | 43200 | 2.31 | 2.20 | 3.70 | 5.10 | 1.25 | 8.80 |
| `DELETE record` | 41500 | 2.41 | 2.30 | 3.85 | 5.40 | 1.35 | 9.50 |
```
### Median Summary

```markdown
================================================================================

# SUMMARY: Median Results Across All Rounds

================================================================================

## Framework Pure

| Endpoint | RPS (median) | p95 (ms) |
|:---------|-------------:|---------:|
| `GET all records` | 45180 | 3.48 |
| `GET one record` | 46820 | 3.28 |
| `POST new record` | 38390 | 4.08 |
| `PUT record` | 42050 | 3.78 |
| `PATCH record` | 43150 | 3.68 |
| `DELETE record` | 41450 | 3.82 |

## Framework + Validators

| Endpoint | RPS (median) | p95 (ms) |
|:---------|-------------:|---------:|
| `GET all records` | 42300 | 3.75 |
| `GET one record` | 43950 | 3.58 |
...
```
### Performance Comparison

```markdown
================================================================================

## Framework — Performance Comparison (Pure = 100% baseline)

### Requests Per Second (higher is better)

| Endpoint | Pure | +Validators | Δ% | +FastOpenAPI | Δ% | FastAPI | Δ% |
|---------|-----:|------------:|---:|-------------:|---:|--------:|---:|
| `GET all records` | 45180 | 42300 | -6.4% | 41200 | -8.8% | 40500 | -10.4% |
| `GET one record` | 46820 | 43950 | -6.1% | 42800 | -8.6% | 41900 | -10.5% |
| `POST new record` | 38390 | 35200 | -8.3% | 34100 | -11.2% | 33500 | -12.7% |
...

### p95 Latency (lower is better)

| Endpoint | Pure (ms) | +Validators | Δ% | +FastOpenAPI | Δ% | FastAPI | Δ% |
|---------|----------:|------------:|---:|-------------:|---:|--------:|---:|
| `GET all records` | 3.48 | 3.75 | +7.8% | 3.92 | +12.6% | 4.05 | +16.4% |
| `GET one record` | 3.28 | 3.58 | +9.1% | 3.78 | +15.2% | 3.95 | +20.4% |
...
```
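The Δ% columns are the relative difference of each variant against the Pure baseline. Assuming that convention, the example tables' figures can be reproduced like so:

```python
def delta_pct(pure, variant):
    """Percentage difference of a variant relative to the Pure baseline.
    Negative means fewer RPS than Pure; positive means higher latency."""
    return (variant - pure) / pure * 100

# RPS: +Validators on GET all records (45180 -> 42300)
print(round(delta_pct(45180, 42300), 1))  # -6.4
# p95 latency: +Validators on GET all records (3.48 ms -> 3.75 ms)
print(round(delta_pct(3.48, 3.75), 1))    # 7.8
```

So a variant reads "good" when its RPS delta is small in magnitude and its latency delta stays close to zero.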
## Benchmark Structure

```
benchmarks/
├── common/
│   ├── benchmark_base.py    # Core benchmark logic
│   ├── schemas.py           # Pydantic models
│   └── storage.py           # In-memory store
├── aiohttp/
│   ├── pure.py
│   ├── with_validators.py
│   └── with_fastopenapi.py
├── django/
│   ├── pure.py
│   ├── with_validators.py
│   └── with_fastopenapi.py
├── falcon/
│   ├── pure.py
│   ├── with_validators.py
│   └── with_fastopenapi.py
... (other frameworks)
└── benchmark.py             # Main runner script
```
## Interpreting Results

### Pure Performance
Shows the raw framework overhead (routing, parsing, serialization). This is the baseline (100%).

### Validators Overhead
Shows the cost of adding Pydantic validation:
- Input validation (request models)
- Output validation (response models)
- Typically 5-15% overhead
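To make that overhead concrete, here is a stdlib-only stand-in for the per-request validation work; the real suite uses Pydantic models from `benchmarks/common/schemas.py`, and `Record`/`handle_create` are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Record:
    id: int
    name: str

    def __post_init__(self):
        # The kind of per-request type checking the with_validators
        # variants pay for on every request and response.
        if not isinstance(self.id, int):
            raise TypeError("id must be an int")
        if not isinstance(self.name, str) or not self.name:
            raise ValueError("name must be a non-empty string")

def handle_create(payload: dict) -> dict:
    record = Record(**payload)                      # input validation
    return {"id": record.id, "name": record.name}   # validated output shape

print(handle_create({"id": 1, "name": "a"}))  # {'id': 1, 'name': 'a'}
```

The Pure variants skip this step entirely and pass dicts straight through, which is exactly the 5-15% gap the tables show.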
### FastOpenAPI Overhead
Shows the cost of adding automatic OpenAPI documentation generation:
- Router proxy layer
- Schema extraction
- Documentation generation
- Typically 8-20% overhead on top of Pure

### FastAPI Comparison
An industry-standard async framework with built-in validation and docs. Useful for:
- Answering "Is FastOpenAPI competitive with FastAPI?"
- Understanding the cost of convenience features
## Notes

- All tests use in-memory storage (no database)
- Tests measure framework overhead, not I/O performance
- Results vary with hardware and system load
- Async frameworks may show different characteristics at different concurrency levels
- Django runs in single-threaded mode (`--nothreading`) for a fair comparison
