
Performance Testing

NOIV's built-in performance testing capabilities help you understand how your APIs perform under load with detailed metrics and analysis.

Basic Benchmarking

Test any endpoint with the benchmark command:

bash
noiv benchmark https://api.example.com/endpoint

This runs 100 requests with 10 concurrent connections by default.
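
This is equivalent to passing the defaults explicitly:

bash
# Same as the default run: 100 requests, 10 concurrent connections
noiv benchmark https://api.example.com/endpoint -n 100 -c 10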

Customizing Load Parameters

Request Count

bash
# Test with 1000 requests
noiv benchmark https://api.example.com/endpoint -n 1000

Concurrency Level

bash
# 50 concurrent connections
noiv benchmark https://api.example.com/endpoint -c 50

Time-Based Testing

bash
# Run for 60 seconds regardless of request count
noiv benchmark https://api.example.com/endpoint -d 60

Combined Parameters

bash
# 500 requests, 25 concurrent, for a maximum of 30 seconds
noiv benchmark https://api.example.com/endpoint -n 500 -c 25 -d 30

Understanding Results

Sample Output

╭───────────────────────────────────────────────────╮
│ Benchmarking https://api.example.com/users        │
╰───────────────────────────────────────────────────╯

       Benchmark Results       
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Metric          ┃ Value     ┃
┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ Total Requests  │ 100       │
│ Successful      │ 98        │
│ Failed          │ 2         │
│ Total Time      │ 8.45s     │
│ Requests/sec    │ 11.83     │
│ Avg Response    │ 421.33ms  │
│ Min Response    │ 89.12ms   │
│ Max Response    │ 1247.89ms │
│ 50th Percentile │ 387.45ms  │
│ 95th Percentile │ 892.11ms  │
│ 99th Percentile │ 1198.76ms │
└─────────────────┴───────────┘

Key Metrics Explained

Throughput Metrics:

  • Requests/sec: How many requests completed per second
  • Total Requests: Total number of requests attempted
  • Successful/Failed: Request success/failure counts
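
In the sample output above, 100 requests were attempted in 8.45s total, so throughput is 100 / 8.45 ≈ 11.83 requests/sec.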

Response Time Metrics:

  • Avg Response: Mean response time across all requests
  • Min/Max Response: Fastest and slowest individual responses
  • Percentiles: Response time distribution analysis

Percentile Analysis:

  • 50th Percentile (Median): Half of all requests completed faster than this value
  • 95th Percentile: 95% of requests completed faster than this value
  • 99th Percentile: 99% of requests completed faster than this value
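
In the sample above, the 95th percentile of 892.11ms means that 95 of the 100 requests finished in under roughly 892ms; only the slowest 5 took longer.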

Performance Testing Strategies

Load Testing Patterns

Baseline Performance

bash
# Light load to establish baseline
noiv benchmark https://api.example.com/users -n 50 -c 5

Normal Load

bash
# Typical production traffic simulation
noiv benchmark https://api.example.com/users -n 500 -c 25

Stress Testing

bash
# High load to find breaking points
noiv benchmark https://api.example.com/users -n 2000 -c 100

Endurance Testing

bash
# Sustained load over time
noiv benchmark https://api.example.com/users -d 300 -c 20

Different Endpoint Types

Read Operations (GET)

bash
# Test data retrieval performance
noiv benchmark https://api.example.com/products -n 1000 -c 50
noiv benchmark https://api.example.com/users/123 -n 500 -c 25

Write Operations (POST)

Note: Be careful with POST requests to avoid creating unwanted data.

bash
# Test with read-only POST endpoints (like search)
noiv benchmark https://api.example.com/search -n 200 -c 10

API Gateway/Load Balancer

bash
# Test infrastructure components
noiv benchmark https://api.example.com/health -n 2000 -c 100

Advanced Performance Analysis

Saving Detailed Results

bash
noiv benchmark https://api.example.com/endpoint -n 1000 -c 50
# When prompted: Save detailed results? [y/n]: y

This creates a JSON file with individual request timings:

json
{
  "url": "https://api.example.com/endpoint",
  "timestamp": 1753362789,
  "config": {
    "requests": 1000,
    "concurrency": 50,
    "duration": null
  },
  "summary": {
    "total_requests": 1000,
    "successful": 987,
    "failed": 13,
    "total_time": 42.5,
    "rps": 23.53,
    "avg_response_time": 412.33
  },
  "detailed_results": [
    {
      "status": 200,
      "time": 0.423,
      "success": true
    }
  ]
}
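
Because the file contains per-request timings, you can post-process it with standard tools. A small sketch using jq (the filename below is illustrative; use whatever name the save prompt reports):

bash
# Failure rate from the summary block
jq '.summary.failed / .summary.total_requests' benchmark_results.json

# Approximate 95th percentile recomputed from the raw timings (values are in seconds)
jq '[.detailed_results[].time] | sort | .[((length * 0.95) | floor)]' benchmark_results.json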

Performance Regression Testing

Compare performance across deployments:

bash
# Before deployment
noiv benchmark https://staging-api.example.com/users -n 500 -c 25

# After deployment  
noiv benchmark https://api.example.com/users -n 500 -c 25

# Compare the results manually or with scripts
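
If you save detailed results from both runs, the summaries can be diffed with a few lines of shell. A rough sketch using jq, assuming the two saved files were renamed to baseline.json and current.json (actual filenames depend on the save prompt):

bash
# Print throughput for both runs and the change in average latency (ms)
jq -n --slurpfile base baseline.json --slurpfile cur current.json \
  '{baseline_rps: $base[0].summary.rps,
    current_rps: $cur[0].summary.rps,
    avg_ms_delta: ($cur[0].summary.avg_response_time - $base[0].summary.avg_response_time)}'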

Different Network Conditions

Test from different locations or network conditions:

bash
# Local/fast network
noiv benchmark http://localhost:8080/api -n 1000 -c 50

# Remote/slower network
noiv benchmark https://api.example.com/endpoint -n 500 -c 20

Performance Optimization Insights

Interpreting Results

Good Performance Indicators:

  • Low response time variance (95th percentile close to average)
  • High success rate (>99%)
  • Stable throughput across test duration
  • Reasonable response times for your use case

Performance Issues:

  • High response time variance (large gap between average and 95th percentile)
  • Request failures under normal load
  • Degrading performance as concurrency increases
  • Timeouts or errors at moderate load levels
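
Applied to the sample output earlier: a 98% success rate falls short of the >99% target, and the 95th percentile (892ms) is more than double the average (421ms), so both the failures and the latency spread would be worth investigating before increasing load.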

Common Bottlenecks

Database Issues

Symptoms: High response times, failures under load
Solutions: Query optimization, connection pooling, caching

Memory Problems

Symptoms: Increasing response times, eventual failures
Solutions: Memory profiling, garbage collection tuning

Network Limitations

Symptoms: High min response times, consistent delays
Solutions: CDN, load balancing, geographic distribution

Rate Limiting

Symptoms: 429 status codes, consistent failure patterns
Solutions: Implement backoff, optimize request patterns

Best Practices

1. Start Small

bash
# Begin with light load
noiv benchmark https://api.example.com/endpoint -n 10 -c 2

# Gradually increase
noiv benchmark https://api.example.com/endpoint -n 100 -c 10
noiv benchmark https://api.example.com/endpoint -n 500 -c 25

2. Test Realistic Scenarios

  • Use production-like data volumes
  • Test during typical usage hours
  • Include authentication headers if required
  • Test complete user workflows, not just individual endpoints

3. Monitor System Resources

While running NOIV benchmarks, monitor:

  • CPU usage on API servers
  • Memory consumption
  • Database connections
  • Network bandwidth
  • Disk I/O
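
One simple way to watch these numbers during a run, assuming shell access to a Linux API host with vmstat available:

bash
# Terminal 1 (on the API host): sample CPU, memory, and I/O every 5 seconds
vmstat 5

# Terminal 2 (from your workstation): drive the load
noiv benchmark https://api.example.com/users -n 500 -c 25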

4. Establish Baselines

bash
# Create performance baseline
noiv benchmark https://api.example.com/users -n 500 -c 25

# Document results for future comparison
# Example baseline: 245ms avg response, 12.5 req/sec
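
One lightweight way to document the baseline is to keep the saved JSON from the run under a dated name (the source filename is illustrative; use whatever the save prompt produced):

bash
mkdir -p baselines
mv benchmark_results.json "baselines/users_$(date +%Y-%m-%d).json"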

5. Test Regularly

  • Include performance tests in CI/CD
  • Run before major releases
  • Monitor performance trends over time
  • Set up alerts for performance regressions
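
Picking up the first point above, a minimal CI step could chain the documented commands and fail the job if either exits non-zero (the URL and test file are placeholders):

bash
# Verify functionality first, then record a benchmark run for trend tracking
set -e
noiv test run api_tests.yaml
noiv benchmark https://staging-api.example.com/users -n 200 -c 10
# Note: the interactive "Save detailed results?" prompt may need handling in non-interactive jobs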

Integration with Functional Testing

After Functional Tests

bash
# First verify functionality
noiv test run api_tests.yaml

# Then test performance
noiv benchmark https://api.example.com/endpoint -n 200 -c 20

Performance-Aware Test Suites

yaml
# functional_test.yaml with performance expectations
tests:
  - name: Fast Endpoint Test
    url: https://api.example.com/fast-endpoint
    method: GET
    expected_status: 200
    timeout: 1  # Expect response within 1 second
    
  - name: Slower Endpoint Test  
    url: https://api.example.com/complex-calculation
    method: POST
    expected_status: 200
    timeout: 10  # Allow up to 10 seconds

Troubleshooting Performance Tests

Connection Issues

bash
# If you see many failed requests:
# 1. Reduce concurrency
noiv benchmark https://api.example.com/endpoint -n 100 -c 5

# 2. Check if endpoint is accessible
noiv quick https://api.example.com/endpoint

Rate Limiting

bash
# If you see 429 errors, reduce load:
noiv benchmark https://api.example.com/endpoint -n 50 -c 2 -d 60

Timeouts

bash
# For slow endpoints, use longer duration tests:
noiv benchmark https://api.example.com/slow-endpoint -d 120 -c 5

Next Steps

After mastering performance testing:

  1. HTML Reports - Visualize performance data
  2. Automate performance testing in CI/CD
  3. Optimize your testing strategy
  4. Explore advanced testing options

Important

Always coordinate with your operations team before running performance tests against production systems. High load tests can impact real users.

TIP

Performance testing is most valuable when done consistently. Establish baselines and monitor trends rather than focusing on single test results.
