
noiv report

Generate comprehensive HTML reports with rich visualizations and detailed analysis.

Syntax

bash
noiv report <results> [OPTIONS]

Description

The report command creates professional HTML reports from test results, providing:

  • Rich visualizations - Interactive charts and graphs
  • Detailed analysis - Performance metrics and trends
  • Professional formatting - Ready for stakeholders and documentation
  • Multi-format input - Works with various result formats
  • Customizable themes - Multiple report styles available

Arguments

<results> (required)

Path to a test results file, or a directory containing result files.

bash
noiv report test_results.json
noiv report benchmark_results.json
noiv report results/

Options

--output, -o

Output file for the HTML report (default: report.html).

bash
noiv report results.json --output api_report.html
noiv report results.json -o performance_report.html

--template, -t

Report template: standard (default), detailed, executive, or minimal.

bash
noiv report results.json --template detailed
noiv report results.json -t executive

--theme

Visual theme: light (default), dark, blue, or green.

bash
noiv report results.json --theme dark
noiv report results.json --theme blue

--title

Custom report title.

bash
noiv report results.json --title "User API Test Report"

--description

Report description or summary.

bash
noiv report results.json --description "Weekly API regression testing results"

--include-raw

Include raw test data in the report.

bash
noiv report results.json --include-raw

--filter, -f

Include only results matching the pattern.

bash
noiv report results.json --filter "user.*"
noiv report results.json -f "auth|login"

--compare, -c

Compare with previous results file.

bash
noiv report current_results.json --compare baseline_results.json

--merge, -m

Merge multiple result files into one report.

bash
noiv report results1.json results2.json --merge

--open

Automatically open the report in the default browser.

bash
noiv report results.json --open

Report Templates

Standard Template (Default)

bash
noiv report test_results.json --template standard

Features:

  • Executive summary
  • Test results overview
  • Performance metrics
  • Error analysis
  • Environment information

Detailed Template

bash
noiv report test_results.json --template detailed

Features:

  • Comprehensive test breakdown
  • Request/response details
  • Timing analysis
  • Detailed error information
  • Variable extraction logs
  • Complete test history

Executive Template

bash
noiv report test_results.json --template executive

Features:

  • High-level summary
  • Key performance indicators
  • Success/failure rates
  • Trend analysis
  • Minimal technical details

Minimal Template

bash
noiv report test_results.json --template minimal

Features:

  • Basic test results
  • Simple pass/fail indicators
  • Essential metrics only
  • Compact layout

Examples

Basic Report Generation

bash
noiv report test_results.json

Generated Report Sections:

  1. Executive Summary

    📊 Test Execution Summary
    ═══════════════════════════
    Total Tests: 15
    Passed: 13 (86.7%)
    Failed: 2 (13.3%)
    Execution Time: 2.45 seconds
    Environment: staging
  2. Performance Overview

    ⚡ Performance Metrics
    ═══════════════════════
    Average Response Time: 234ms
    95th Percentile: 456ms
    Fastest Request: 89ms (GET /health)
    Slowest Request: 1,234ms (POST /heavy-operation)
  3. Test Results Breakdown

    • Interactive table with pass/fail status
    • Response times and status codes
    • Error details for failed tests
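The summary and performance numbers above can be derived directly from the result set. As a sketch, assuming each result is an object with `passed` and `responseTime` fields (this shape is an illustration, not a documented noiv schema):

```javascript
// Compute the summary metrics shown above from an array of result objects.
// Field names (passed, responseTime) are assumed, not a documented schema.
function summarize(results) {
  const times = results.map(r => r.responseTime).sort((a, b) => a - b);
  const passed = results.filter(r => r.passed).length;
  // Nearest-rank 95th percentile over the sorted response times.
  const p95 = times[Math.max(0, Math.ceil(times.length * 0.95) - 1)];
  return {
    total: results.length,
    passed,
    passRate: (passed / results.length) * 100,
    avgMs: times.reduce((sum, t) => sum + t, 0) / times.length,
    p95Ms: p95,
  };
}
```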

Performance Report

bash
noiv report benchmark_results.json \
  --template detailed \
  --title "API Performance Analysis" \
  --description "Load testing results under 100 concurrent users"

Generated Visualizations:

  1. Response Time Distribution

    • Histogram of response times
    • Percentile breakdown
    • Performance trends over time
  2. Throughput Analysis

    • Requests per second
    • Peak and average throughput
    • Load pattern visualization
  3. Error Rate Analysis

    • Error distribution by type
    • Error rate over time
    • Impact on performance

Comparative Report

bash
noiv report current_results.json \
  --compare baseline_results.json \
  --title "API Performance Comparison" \
  --template detailed

Comparison Features:

  1. Side-by-Side Metrics

    Response Time Comparison
    ═══════════════════════════════════════
                     Current    Baseline    Change
    Average:         234ms      267ms       ✅ -12%
    95th Percentile: 456ms      523ms       ✅ -13%
    99th Percentile: 789ms      845ms       ✅ -7%
    Max:             1.2s       1.8s        ✅ -33%
  2. Trend Visualization

    • Performance improvement/degradation charts
    • Success rate comparison
    • Error rate changes
  3. Detailed Analysis

    • Test-by-test comparison
    • Performance regression detection
    • Improvement recommendations
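The "Change" column in the comparison table is straightforward percent change relative to the baseline; a minimal sketch (the marker convention, where a decrease in latency is good, is an assumption about how the report renders it):

```javascript
// Percent change of the current metric relative to baseline.
function percentChange(current, baseline) {
  return ((current - baseline) / baseline) * 100;
}

// Render one "Change" cell; for latency metrics, lower is better.
function changeCell(current, baseline) {
  const pct = Math.round(percentChange(current, baseline));
  const marker = pct <= 0 ? "✅" : "⚠️";
  return `${marker} ${pct > 0 ? "+" : ""}${pct}%`;
}
```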

Multi-Source Report

bash
noiv report functional_results.json performance_results.json security_results.json \
  --merge \
  --title "Comprehensive API Testing Report" \
  --template detailed

Report Components

Executive Dashboard

html
<!-- Example report dashboard -->
<div class="dashboard">
  <div class="metric-card success">
    <h3>✅ Test Success Rate</h3>
    <div class="metric-value">86.7%</div>
    <div class="metric-trend">↗️ +2.3% from last run</div>
  </div>
  
  <div class="metric-card performance">
    <h3>⚡ Avg Response Time</h3>
    <div class="metric-value">234ms</div>
    <div class="metric-trend">↘️ -15ms from baseline</div>
  </div>
  
  <div class="metric-card reliability">
    <h3>🔄 Reliability Score</h3>
    <div class="metric-value">9.2/10</div>
    <div class="metric-trend">→ No change</div>
  </div>
</div>

Interactive Test Results Table

html
<!-- Sortable and filterable table -->
<table class="test-results-table">
  <thead>
    <tr>
      <th>Test Name</th>
      <th>Status</th>
      <th>Response Time</th>
      <th>Status Code</th>
      <th>Details</th>
    </tr>
  </thead>
  <tbody>
    <tr class="test-passed">
      <td>Create User - Valid Data</td>
      <td><span class="status-badge success">✅ PASS</span></td>
      <td>234ms</td>
      <td>201</td>
      <td><button onclick="showDetails('test1')">View</button></td>
    </tr>
    <tr class="test-failed">
      <td>Delete User - Invalid ID</td>
      <td><span class="status-badge failure">❌ FAIL</span></td>
      <td>156ms</td>
      <td>500</td>
      <td><button onclick="showDetails('test2')">View</button></td>
    </tr>
  </tbody>
</table>

Performance Visualizations

Response Time Chart:

javascript
// Chart.js visualization
const responseTimeChart = new Chart(ctx, {
  type: 'line',
  data: {
    labels: testNames,
    datasets: [{
      label: 'Response Time (ms)',
      data: responseTimes,
      borderColor: 'rgb(75, 192, 192)',
      backgroundColor: 'rgba(75, 192, 192, 0.2)'
    }]
  },
  options: {
    responsive: true,
    plugins: {
      title: {
        display: true,
        text: 'Response Time Distribution'
      }
    }
  }
});

Success Rate Pie Chart:

javascript
const successChart = new Chart(ctx, {
  type: 'doughnut',
  data: {
    labels: ['Passed', 'Failed', 'Skipped'],
    datasets: [{
      data: [passedCount, failedCount, skippedCount],
      backgroundColor: ['#4CAF50', '#F44336', '#FF9800']
    }]
  }
});

Error Analysis Section

html
<div class="error-analysis">
  <h2>🚨 Error Analysis</h2>
  
  <div class="error-summary">
    <h3>Error Distribution</h3>
    <ul>
      <li>500 Internal Server Error: 2 occurrences</li>
      <li>Connection Timeout: 1 occurrence</li>
      <li>Validation Error: 1 occurrence</li>
    </ul>
  </div>
  
  <div class="error-details">
    <h3>Detailed Error Information</h3>
    <div class="error-item">
      <h4>Test: Delete User - Valid ID</h4>
      <p><strong>Error:</strong> 500 Internal Server Error</p>
      <p><strong>Response:</strong> {"error": "Database connection failed"}</p>
      <p><strong>Recommendation:</strong> Check database connectivity and server logs</p>
    </div>
  </div>
</div>
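The error distribution above is a tally of failures grouped by error type. A sketch of that grouping, assuming failed results carry an `error` string (an illustrative shape, not a documented noiv schema):

```javascript
// Tally occurrences of each error type across the result set.
// Results without an `error` field (passing tests) are skipped.
function errorDistribution(results) {
  const counts = {};
  for (const r of results) {
    if (r.error) counts[r.error] = (counts[r.error] || 0) + 1;
  }
  return counts;
}
```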

Advanced Features

Custom Branding

bash
noiv report results.json \
  --title "Acme Corp API Tests" \
  --description "Production readiness validation" \
  --theme blue \
  --template executive

Custom CSS:

css
/* Custom report styling */
.report-header {
  background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
  color: white;
  padding: 2rem;
}

.company-logo {
  max-height: 50px;
  margin-right: 1rem;
}

.metric-card {
  border-radius: 8px;
  box-shadow: 0 2px 4px rgba(0,0,0,0.1);
  transition: transform 0.2s;
}

.metric-card:hover {
  transform: translateY(-2px);
}

Interactive Elements

Test Details Modal:

html
<div id="testModal" class="modal">
  <div class="modal-content">
    <h2>Test Details</h2>
    <div class="test-info">
      <h3>Request</h3>
      <pre><code class="json">{
  "method": "POST",
  "url": "https://api.example.com/users",
  "headers": {
    "Content-Type": "application/json"
  },
  "body": {
    "name": "John Doe",
    "email": "john@example.com"
  }
}</code></pre>
      
      <h3>Response</h3>
      <pre><code class="json">{
  "id": "12345",
  "name": "John Doe",
  "email": "john@example.com",
  "created_at": "2025-07-24T10:30:00Z"
}</code></pre>
      
      <h3>Assertions</h3>
      <ul class="assertion-list">
        <li class="assertion-pass">✅ Status code equals 201</li>
        <li class="assertion-pass">✅ Response has field 'id'</li>
        <li class="assertion-pass">✅ Response time under 500ms</li>
      </ul>
    </div>
  </div>
</div>

Filtering and Sorting:

javascript
// Interactive table filtering
function filterTests(category) {
  const rows = document.querySelectorAll('.test-row');
  rows.forEach(row => {
    if (category === 'all' || row.classList.contains(category)) {
      row.style.display = '';
    } else {
      row.style.display = 'none';
    }
  });
}

// Real-time search
function searchTests(query) {
  const rows = document.querySelectorAll('.test-row');
  rows.forEach(row => {
    const testName = row.querySelector('.test-name').textContent;
    if (testName.toLowerCase().includes(query.toLowerCase())) {
      row.style.display = '';
    } else {
      row.style.display = 'none';
    }
  });
}

Export Capabilities

PDF Export:

html
<button onclick="exportToPDF()" class="export-btn">
  📄 Export to PDF
</button>

<script>
function exportToPDF() {
  window.print(); // Uses browser's print-to-PDF
}
</script>

CSV Export:

javascript
function exportToCSV() {
  const data = testResults.map(test => ({
    'Test Name': test.name,
    'Status': test.status,
    'Response Time': test.responseTime,
    'Status Code': test.statusCode
  }));
  
  const csv = Papa.unparse(data);
  downloadFile(csv, 'test-results.csv', 'text/csv');
}
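If PapaParse is not available, a minimal RFC 4180-style serializer covers the same rows (fields containing commas, quotes, or newlines are quoted, with inner quotes doubled):

```javascript
// Dependency-free CSV serialization for an array of flat objects.
// Column order is taken from the keys of the first row.
function toCSV(rows) {
  const esc = v => {
    const s = String(v);
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  const header = Object.keys(rows[0]);
  return [header, ...rows.map(r => header.map(k => esc(r[k])))]
    .map(cols => cols.join(","))
    .join("\n");
}
```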

Report Customization

Custom Templates

bash
# Create custom template directory
mkdir ~/.noiv/templates

# Copy and modify an existing template (site-packages path varies by install)
cp /usr/local/lib/python3.9/site-packages/noiv/templates/standard.html ~/.noiv/templates/custom.html

# Use custom template
noiv report results.json --template custom

Custom CSS Themes

css
/* ~/.noiv/themes/corporate.css */
:root {
  --primary-color: #2c3e50;
  --secondary-color: #3498db;
  --success-color: #27ae60;
  --error-color: #e74c3c;
  --warning-color: #f39c12;
  --background-color: #f8f9fa;
  --text-color: #2c3e50;
}

.report-container {
  font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
  max-width: 1200px;
  margin: 0 auto;
  padding: 2rem;
  background-color: var(--background-color);
  color: var(--text-color);
}

.header {
  background: linear-gradient(135deg, var(--primary-color), var(--secondary-color));
  color: white;
  padding: 2rem;
  border-radius: 8px;
  margin-bottom: 2rem;
}

Dynamic Content

javascript
// Add real-time timestamps
function addTimestamp() {
  const timestamp = new Date().toLocaleString();
  document.getElementById('generated-time').textContent = `Generated: ${timestamp}`;
}

// Add interactive tooltips
function addTooltips() {
  const elements = document.querySelectorAll('[data-tooltip]');
  elements.forEach(el => {
    el.addEventListener('mouseenter', showTooltip);
    el.addEventListener('mouseleave', hideTooltip);
  });
}

// Progress indicators
function updateProgress() {
  const progressBar = document.querySelector('.progress-bar');
  const percentage = (passedTests / totalTests) * 100;
  progressBar.style.width = `${percentage}%`;
  progressBar.textContent = `${percentage.toFixed(1)}%`;
}

Integration Examples

CI/CD Report Generation

yaml
# .github/workflows/api-tests.yml
name: API Tests with Reports

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      
      - name: Install NOIV
        run: pip install noiv
        
      - name: Run Tests
        run: |
          noiv test api_tests.yaml \
            --format json \
            --output test_results.json
            
      - name: Generate Report
        run: |
          noiv report test_results.json \
            --template detailed \
            --title "API Test Report - Build ${{ github.run_number }}" \
            --output api_report.html
            
      - name: Upload Report
        uses: actions/upload-artifact@v4
        with:
          name: api-test-report
          path: api_report.html
          
      - name: Deploy Report to GitHub Pages
        if: github.ref == 'refs/heads/main'
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: .
          publish_branch: gh-pages
          destination_dir: reports

Automated Report Distribution

bash
#!/bin/bash
# generate_and_distribute_report.sh

# Run tests and generate report
noiv test api_tests.yaml --format json --output results.json
noiv report results.json \
  --template executive \
  --title "Weekly API Health Report" \
  --description "Automated testing results for week $(date +%V)" \
  --output weekly_report.html

# Email report to stakeholders
echo "Weekly API test report attached" | \
  mail -s "API Health Report - Week $(date +%V)" \
       -a weekly_report.html \
       stakeholders@company.com

# Upload to shared drive
aws s3 cp weekly_report.html s3://company-reports/api/weekly/

Dashboard Integration

bash
# Generate JSON for dashboard consumption
noiv report results.json \
  --format json \
  --output dashboard_data.json

# Upload to monitoring system
curl -X POST https://monitoring.company.com/api/reports \
  -H "Content-Type: application/json" \
  -d @dashboard_data.json

Best Practices

1. Consistent Reporting

bash
# Use standardized report titles and descriptions
noiv report results.json \
  --title "$(basename $(pwd)) API Tests - $(date +%Y-%m-%d)" \
  --description "Automated test execution results"

2. Archive Reports

bash
# Create timestamped reports for historical tracking
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
noiv report results.json --output "reports/api_report_$TIMESTAMP.html"

# Keep only last 30 days of reports
find reports/ -name "*.html" -mtime +30 -delete

3. Environment-Specific Reports

bash
# Include environment information in reports
noiv report results.json \
  --title "API Tests - $ENVIRONMENT Environment" \
  --description "Test results for $ENVIRONMENT deployment"

4. Stakeholder-Appropriate Templates

bash
# Technical team - detailed report
noiv report results.json --template detailed --output tech_report.html

# Management - executive summary
noiv report results.json --template executive --output mgmt_report.html

# QA team - standard report with raw data
noiv report results.json --template standard --include-raw --output qa_report.html

5. Performance Tracking

bash
# Compare against baseline for performance regression detection
noiv report current_results.json \
  --compare baseline_results.json \
  --title "Performance Regression Analysis" \
  --template detailed

Troubleshooting

Missing Data

bash
# Verify results file format
noiv report invalid_results.json
# Output: Error: Invalid results format. Expected JSON with 'tests' array.

# Check file contents
jq . results.json | head -20

Large Report Files

bash
# Generate minimal report for large datasets
noiv report large_results.json \
  --template minimal \
  --filter "critical.*" \
  --output summary_report.html

Browser Compatibility

html
<!-- Reports include compatibility checks -->
<script>
if (!window.fetch) {
  document.body.innerHTML = '<div class="browser-warning">This report requires a modern browser.</div>';
}
</script>

Released under the MIT License.