noiv benchmark

Comprehensive performance testing and benchmarking for APIs with detailed metrics and analysis.

Syntax

bash
noiv benchmark <config> [OPTIONS]

Description

The benchmark command provides comprehensive performance testing capabilities, including:

  • Load testing - Simulate multiple concurrent users
  • Stress testing - Find breaking points and limits
  • Performance profiling - Detailed response time analysis
  • Resource monitoring - Track API resource usage
  • Comparative analysis - Compare performance across versions

Arguments

<config> (required)

Test configuration file or URL to benchmark.

bash
noiv benchmark tests.yaml
noiv benchmark https://api.example.com/health
noiv benchmark performance_config.json

Options

--users, -u

Number of concurrent users to simulate (default: 10).

bash
noiv benchmark tests.yaml --users 50
noiv benchmark tests.yaml -u 100

--duration, -d

Test duration in seconds (default: 60).

bash
noiv benchmark tests.yaml --duration 120
noiv benchmark tests.yaml -d 300

--ramp-up, -r

Ramp-up time, in seconds, to reach the target number of concurrent users (default: 30).

bash
noiv benchmark tests.yaml --ramp-up 60
noiv benchmark tests.yaml -r 120

--requests, -n

Total number of requests to send, instead of running for a fixed duration.

bash
noiv benchmark tests.yaml --requests 1000
noiv benchmark tests.yaml -n 5000

--rate-limit

Maximum requests per second (default: no limit).

bash
noiv benchmark tests.yaml --rate-limit 100
noiv benchmark tests.yaml --rate-limit 50

--output, -o

Output file for detailed results (default: benchmark_results.json).

bash
noiv benchmark tests.yaml --output perf_results.json
noiv benchmark tests.yaml -o load_test_results.json

--format, -f

Output format: table (default), json, yaml, or html.

bash
noiv benchmark tests.yaml --format json
noiv benchmark tests.yaml -f html

--profile, -p

Performance profile: light, standard (default), heavy, or stress.

bash
noiv benchmark tests.yaml --profile stress
noiv benchmark tests.yaml -p light

--include-errors

Include error scenarios in performance testing.

bash
noiv benchmark tests.yaml --include-errors

--baseline

Compare results against a baseline results file.

bash
noiv benchmark tests.yaml --baseline previous_results.json

--monitoring

Enable system resource monitoring during tests.

bash
noiv benchmark tests.yaml --monitoring

Performance Profiles

Light Profile

bash
noiv benchmark tests.yaml --profile light

Configuration:

  • Users: 1-10
  • Duration: 30-60 seconds
  • Focus: Basic response time validation
  • Resource usage: Minimal

Standard Profile (Default)

bash
noiv benchmark tests.yaml --profile standard

Configuration:

  • Users: 10-50
  • Duration: 60-300 seconds
  • Focus: Realistic load simulation
  • Resource usage: Moderate

Heavy Profile

bash
noiv benchmark tests.yaml --profile heavy

Configuration:

  • Users: 50-200
  • Duration: 300-600 seconds
  • Focus: High-load performance
  • Resource usage: Intensive

Stress Profile

bash
noiv benchmark tests.yaml --profile stress

Configuration:

  • Users: 200+ (incrementally increased)
  • Duration: Until breaking point
  • Focus: Finding system limits
  • Resource usage: Maximum

Examples

Basic Performance Test

bash
noiv benchmark https://api.example.com/health

Output:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                            Benchmark Results                                                  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

🎯 Endpoint: https://api.example.com/health
👥 Concurrent Users: 10
⏱️  Duration: 60 seconds
📊 Total Requests: 2,456

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                          Performance Metrics                                                  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

📈 Requests/Second: 40.9 (avg)
⚡ Response Time:
   • Average: 234ms
   • Median: 198ms  
   • 95th percentile: 456ms
   • 99th percentile: 789ms
   • Min: 89ms
   • Max: 1,234ms

✅ Success Rate: 99.8% (2,451/2,456)
❌ Error Rate: 0.2% (5/2,456)
🔄 Throughput: 2.1 MB/sec

Load Testing with Multiple Users

bash
noiv benchmark api_tests.yaml \
  --users 100 \
  --duration 300 \
  --ramp-up 60

Stress Testing

bash
noiv benchmark api_tests.yaml \
  --profile stress \
  --monitoring

Output with Resource Monitoring:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                         Stress Test Results                                                   ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

🎯 Breaking Point Analysis:
   • Max stable users: 250
   • Breaking point: 350 users
   • Error threshold: 5% at 300 users

📊 Resource Usage (Peak):
   • CPU: 78%
   • Memory: 1.2GB
   • Network I/O: 45 MB/s
   • Database connections: 89/100

⚠️  Performance Degradation:
   • Response time degradation starts: 200 users
   • Significant errors begin: 300 users
   • System instability: 350+ users

Comparative Benchmarking

bash
# Run baseline test
noiv benchmark api_tests.yaml --output baseline.json

# Make API changes...

# Compare new performance
noiv benchmark api_tests.yaml \
  --baseline baseline.json \
  --output current.json

Comparison Output:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                       Performance Comparison                                                  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

📊 Metric Comparison (Current vs Baseline):

Response Time:
   • Average: 234ms → 198ms (✅ 15% faster)
   • 95th percentile: 456ms → 412ms (✅ 10% faster)
   • 99th percentile: 789ms → 845ms (❌ 7% slower)

Throughput:
   • Requests/second: 40.9 → 45.2 (✅ 11% increase)
   • Success rate: 99.8% → 99.9% (✅ 0.1% improvement)

Resource Usage:
   • CPU: 78% → 72% (✅ 8% reduction)
   • Memory: 1.2GB → 1.1GB (✅ 8% reduction)

🎯 Overall Performance: ✅ IMPROVED (+12%)

Test Configuration

YAML Configuration

yaml
name: API Performance Tests
base_url: https://api.example.com
benchmark:
  users: 50
  duration: 300
  ramp_up: 60
  rate_limit: 100
scenarios:
  - name: User Registration Flow
    weight: 30
    tests:
      - name: Register User
        request:
          method: POST
          path: /auth/register
          body:
            email: "{{random_email}}"
            password: "TestPass123"
        expect:
          status: 201
          response_time_ms: 500
      
  - name: User Login Flow  
    weight: 50
    tests:
      - name: Login User
        request:
          method: POST
          path: /auth/login
          body:
            email: "test@example.com"
            password: "TestPass123"
        expect:
          status: 200
          response_time_ms: 300
          
  - name: API Operations
    weight: 20
    tests:
      - name: List Users
        request:
          method: GET
          path: /users
          headers:
            Authorization: "Bearer {{auth_token}}"
        expect:
          status: 200
          response_time_ms: 200

JSON Configuration

json
{
  "name": "E-commerce API Benchmark",
  "base_url": "https://shop-api.example.com",
  "benchmark": {
    "users": 100,
    "duration": 600,
    "ramp_up": 120,
    "profile": "heavy"
  },
  "scenarios": [
    {
      "name": "Product Browsing",
      "weight": 60,
      "tests": [
        {
          "name": "Search Products",
          "request": {
            "method": "GET",
            "path": "/products/search",
            "query": {
              "q": "laptop",
              "category": "electronics"
            }
          },
          "expect": {
            "status": 200,
            "response_time_ms": 250
          }
        }
      ]
    }
  ]
}

Performance Metrics

Response Time Metrics

⚡ Response Time Analysis:
   • Average: 234ms
   • Median (P50): 198ms
   • 90th Percentile (P90): 345ms
   • 95th Percentile (P95): 456ms
   • 99th Percentile (P99): 789ms
   • 99.9th Percentile: 1,234ms
   • Minimum: 89ms
   • Maximum: 2,456ms
   • Standard Deviation: 123ms

Throughput Metrics

📈 Throughput Analysis:
   • Requests per second: 40.9 (average)
   • Peak RPS: 52.3
   • Total requests: 2,456
   • Data transferred: 127 MB
   • Transfer rate: 2.1 MB/sec
   • Request rate distribution:
     - 0-10s: 35.2 RPS
     - 10-20s: 41.8 RPS
     - 20-30s: 43.1 RPS
     - 30-60s: 42.7 RPS

Error Analysis

❌ Error Analysis:
   • Total errors: 5 (0.2%)
   • Connection errors: 2
   • Timeout errors: 1
   • HTTP 5xx errors: 2
   • HTTP 4xx errors: 0
   
   Error Distribution:
   • 502 Bad Gateway: 2 requests
   • Connection timeout: 1 request
   • Connection refused: 2 requests

Resource Monitoring

🖥️  System Resources (Target Server):
   • CPU Usage: 72% (average), 89% (peak)
   • Memory Usage: 1.1GB (average), 1.3GB (peak)
   • Network I/O: 45 MB/s in, 23 MB/s out
   • Disk I/O: 234 ops/sec (read), 89 ops/sec (write)
   • Database connections: 67/100 (active/max)
   • Connection pool utilization: 67%

Output Formats

Table Format (Default)

Displays results in formatted tables with colors and visual indicators.

JSON Format

bash
noiv benchmark tests.yaml --format json

json
{
  "summary": {
    "total_requests": 2456,
    "total_duration": 60.0,
    "average_rps": 40.9,
    "success_rate": 99.8,
    "error_rate": 0.2
  },
  "response_times": {
    "average": 234,
    "median": 198,
    "p90": 345,
    "p95": 456,
    "p99": 789,
    "min": 89,
    "max": 2456
  },
  "errors": [
    {
      "type": "connection_timeout",
      "count": 1,
      "percentage": 0.04
    }
  ],
  "resource_usage": {
    "cpu_percent": 72,
    "memory_mb": 1100,
    "network_io_mbps": 45
  }
}
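
Because the JSON results are plain structured data, they can be post-processed with standard tooling. A minimal sketch using jq, assuming results were written to the default benchmark_results.json (adjust the path to whatever --output you pass):

bash
# Pull a few headline metrics out of the JSON results with jq
jq '{rps: .summary.average_rps, p95_ms: .response_times.p95, error_rate: .summary.error_rate}' benchmark_results.json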

HTML Report

bash
noiv benchmark tests.yaml --format html --output report.html

Generates a comprehensive HTML report with:

  • Interactive charts and graphs
  • Detailed performance metrics
  • Resource usage visualizations
  • Error analysis
  • Recommendations
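
The report is a standalone file, so it can be opened directly in a browser once generated:

bash
# Open the generated report locally (use `open` on macOS, `xdg-open` on Linux)
xdg-open report.html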

YAML Format

bash
noiv benchmark tests.yaml --format yaml

yaml
summary:
  total_requests: 2456
  duration_seconds: 60.0
  requests_per_second: 40.9
  success_rate: 99.8
response_times:
  average_ms: 234
  median_ms: 198
  p95_ms: 456
  p99_ms: 789
errors:
  total: 5
  rate_percent: 0.2
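
The YAML output can be queried in the same way; a sketch using the yq command-line tool (assuming mikefarah's yq v4 is installed, and writing results to an explicitly named file):

bash
noiv benchmark tests.yaml --format yaml --output results.yaml

# Extract the success rate from the YAML results (yq v4 syntax)
yq '.summary.success_rate' results.yaml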

Advanced Features

Scenario-Based Testing

yaml
scenarios:
  - name: Heavy Users (Admin Operations)
    weight: 20
    users: 10
    tests:
      - name: Generate Reports
        request:
          method: POST
          path: /admin/reports/generate
        expect:
          response_time_ms: 2000
          
  - name: Regular Users (Browse & Search)
    weight: 80  
    users: 40
    tests:
      - name: Search Products
        request:
          method: GET
          path: /products/search
        expect:
          response_time_ms: 300

Dynamic Load Patterns

yaml
load_pattern:
  type: ramp_up_down
  phases:
    - name: warm_up
      duration: 60
      users: 10
    - name: load_test
      duration: 300
      users: 100
    - name: stress_test
      duration: 120
      users: 200
    - name: cool_down
      duration: 60
      users: 10

Performance Thresholds

yaml
performance_thresholds:
  response_time:
    p95_ms: 500      # 95% of requests under 500ms
    p99_ms: 1000     # 99% of requests under 1000ms
  error_rate:
    max_percent: 1   # Maximum 1% error rate
  throughput:
    min_rps: 30      # Minimum 30 requests per second

Real-time Monitoring

bash
noiv benchmark tests.yaml --monitoring --output live

Displays real-time performance metrics during testing:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                          Live Performance Monitor                                             ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

⏱️  Elapsed: 32s / 300s    👥 Active Users: 45/100    📊 RPS: 42.3    ❌ Errors: 0.1%

Current Metrics:
   Response Time: 245ms (avg)    Memory: 892MB    CPU: 68%
   
Recent Requests:
   ✅ GET /users         - 198ms
   ✅ POST /auth/login   - 234ms  
   ✅ GET /products      - 187ms
   ❌ GET /slow-endpoint - TIMEOUT
   ✅ POST /orders       - 456ms

Use Cases

Development Performance Testing

bash
# Quick performance check during development
noiv benchmark http://localhost:3000/api/health --users 10 --duration 30

CI/CD Performance Gates

bash
#!/bin/bash
# performance_gate.sh

# Run benchmark
noiv benchmark api_tests.yaml --format json --output results.json

# Check if performance meets criteria
RESPONSE_TIME=$(jq '.response_times.p95' results.json)
ERROR_RATE=$(jq '.summary.error_rate' results.json)

if (( $(echo "$RESPONSE_TIME > 500" | bc -l) )); then
    echo "❌ Performance gate failed: P95 response time $RESPONSE_TIME ms > 500ms"
    exit 1
fi

if (( $(echo "$ERROR_RATE > 1.0" | bc -l) )); then
    echo "❌ Performance gate failed: Error rate $ERROR_RATE% > 1%"
    exit 1
fi

echo "✅ Performance gate passed"

Production Performance Monitoring

bash
# Regular production performance checks
noiv benchmark production_health_checks.yaml \
  --profile light \
  --baseline baseline_prod.json \
  --output $(date +%Y%m%d)_perf_results.json
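
To run these checks on a schedule, the same command can be wrapped in a cron entry. An illustrative sketch (the schedule, working directory, and file names are assumptions, not noiv defaults); note that % must be escaped in crontab lines:

bash
# Illustrative crontab entry: daily light-profile check at 06:00,
# keeping date-stamped results next to the production baseline
0 6 * * * cd /opt/perf && noiv benchmark production_health_checks.yaml --profile light --baseline baseline_prod.json --output $(date +\%Y\%m\%d)_perf_results.json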

Capacity Planning

bash
# Stress test to find capacity limits
noiv benchmark capacity_tests.yaml \
  --profile stress \
  --monitoring \
  --output capacity_analysis.json

Best Practices

1. Start Small and Scale

bash
# Begin with light testing
noiv benchmark tests.yaml --profile light

# Gradually increase load
noiv benchmark tests.yaml --users 50 --duration 120

# Full stress testing
noiv benchmark tests.yaml --profile stress

2. Use Realistic Test Data

yaml
# Use realistic data patterns
tests:
  - name: User Search
    request:
      method: GET
      path: /users/search
      query:
        q: "{{realistic_search_term}}"  # Real search patterns
        limit: 20                        # Realistic page sizes

3. Monitor Both Client and Server

bash
# Monitor target system resources
noiv benchmark tests.yaml --monitoring

# Also monitor your own testing infrastructure
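
One lightweight way to watch the client side is to sample its resource usage while the benchmark runs. A minimal sketch using vmstat (the tool and log file name are assumptions; any sampler works):

bash
# Sample client CPU and memory every 5 seconds in the background
vmstat 5 > client_stats.log &
MONITOR_PID=$!

noiv benchmark tests.yaml --monitoring

# Stop the sampler once the benchmark completes
kill $MONITOR_PID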

4. Set Appropriate Thresholds

yaml
performance_thresholds:
  response_time:
    p95_ms: 500      # Based on user experience requirements
  error_rate:
    max_percent: 0.1 # Stricter for critical systems

5. Regular Baseline Updates

bash
# Update baselines after significant changes
noiv benchmark tests.yaml --output new_baseline.json

# Archive old baselines for historical tracking
mv baseline.json baselines/baseline_$(date +%Y%m%d).json
mv new_baseline.json baseline.json

Integration Examples

Docker Performance Testing

dockerfile
# Dockerfile.perf-test
FROM python:3.9-slim
RUN pip install noiv
COPY tests/ /tests/
CMD ["noiv", "benchmark", "/tests/api_tests.yaml"]

Kubernetes Performance Jobs

yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: api-performance-test
spec:
  template:
    spec:
      containers:
      - name: noiv-benchmark
        image: noiv-perf:latest
        command: ["noiv", "benchmark"]
        args: ["api_tests.yaml", "--users", "100", "--duration", "300"]
      restartPolicy: Never
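
Assuming the manifest above is saved as perf-job.yaml (an illustrative file name), the job can be launched and inspected with standard kubectl commands:

bash
# Launch the one-off benchmark job and follow its output
kubectl apply -f perf-job.yaml
kubectl logs -f job/api-performance-test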
