noiv report
Generate comprehensive HTML reports with rich visualizations and detailed analysis.
Syntax
noiv report <results> [OPTIONS]
Description
The report command creates professional HTML reports from test results, providing:
- Rich visualizations - Interactive charts and graphs
- Detailed analysis - Performance metrics and trends
- Professional formatting - Ready for stakeholders and documentation
- Multi-format input - Works with various result formats
- Customizable themes - Multiple report styles available
Arguments
<results> (required)
Test results file or directory containing results.
noiv report test_results.json
noiv report benchmark_results.json
noiv report results/
Options
--output, -o
Output file for the HTML report (default: report.html).
noiv report results.json --output api_report.html
noiv report results.json -o performance_report.html
--template, -t
Report template: standard (default), detailed, executive, or minimal.
noiv report results.json --template detailed
noiv report results.json -t executive
--theme
Visual theme: light (default), dark, blue, or green.
noiv report results.json --theme dark
noiv report results.json --theme blue
--title
Custom report title.
noiv report results.json --title "User API Test Report"
--description
Report description or summary.
noiv report results.json --description "Weekly API regression testing results"
--include-raw
Include raw test data in the report.
noiv report results.json --include-raw
--filter, -f
Include only results matching the pattern.
noiv report results.json --filter "user.*"
noiv report results.json -f "auth|login"
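The pattern is matched against test names as a regular expression. NOIV's internal matching code isn't shown here; the sketch below illustrates the idea, assuming each result carries a `name` field (a hypothetical shape, not the tool's documented schema):

```python
import re

def filter_results(tests, pattern):
    """Keep only tests whose name matches the given regex pattern."""
    regex = re.compile(pattern)
    return [t for t in tests if regex.search(t["name"])]

tests = [
    {"name": "user.create", "status": "pass"},
    {"name": "auth.login", "status": "fail"},
    {"name": "health.check", "status": "pass"},
]

print([t["name"] for t in filter_results(tests, "user.*")])    # ['user.create']
print([t["name"] for t in filter_results(tests, "auth|login")])  # ['auth.login']
```

Note that `re.search` matches anywhere in the name, so `"auth|login"` catches a test named either way.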
--compare, -c
Compare with previous results file.
noiv report current_results.json --compare baseline_results.json
--merge, -m
Merge multiple result files into one report.
noiv report results1.json results2.json --merge
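Conceptually, merging concatenates the test arrays from each input file into a single result set before rendering. A minimal sketch, assuming each file is JSON with a top-level `tests` array (an assumption based on the error message in Troubleshooting, not a documented schema):

```python
import json
import tempfile
from pathlib import Path

def merge_results(paths):
    """Combine the 'tests' arrays from several result files into one document."""
    merged = {"tests": []}
    for path in paths:
        data = json.loads(Path(path).read_text())
        merged["tests"].extend(data.get("tests", []))
    return merged

# Demo with two throwaway result files.
with tempfile.TemporaryDirectory() as tmp:
    a, b = Path(tmp, "a.json"), Path(tmp, "b.json")
    a.write_text(json.dumps({"tests": [{"name": "t1"}, {"name": "t2"}]}))
    b.write_text(json.dumps({"tests": [{"name": "t3"}]}))
    combined = merge_results([a, b])

print(len(combined["tests"]))  # 3
```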
--open
Automatically open the report in the default browser.
noiv report results.json --open
Report Templates
Standard Template (Default)
noiv report test_results.json --template standard
Features:
- Executive summary
- Test results overview
- Performance metrics
- Error analysis
- Environment information
Detailed Template
noiv report test_results.json --template detailed
Features:
- Comprehensive test breakdown
- Request/response details
- Timing analysis
- Detailed error information
- Variable extraction logs
- Complete test history
Executive Template
noiv report test_results.json --template executive
Features:
- High-level summary
- Key performance indicators
- Success/failure rates
- Trend analysis
- Minimal technical details
Minimal Template
noiv report test_results.json --template minimal
Features:
- Basic test results
- Simple pass/fail indicators
- Essential metrics only
- Compact layout
Examples
Basic Report Generation
noiv report test_results.json
Generated Report Sections:
Executive Summary
📊 Test Execution Summary
═══════════════════════════
Total Tests:     15
Passed:          13 (86.7%)
Failed:          2 (13.3%)
Execution Time:  2.45 seconds
Environment:     staging
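The headline numbers reduce to simple counts over the result set. A sketch of that arithmetic, assuming a hypothetical `status` field with `"pass"`/`"fail"` values:

```python
def summarize(tests):
    """Compute the totals and pass rate shown in the executive summary."""
    total = len(tests)
    passed = sum(1 for t in tests if t["status"] == "pass")
    return {
        "total": total,
        "passed": passed,
        "failed": total - passed,
        "pass_rate": round(100 * passed / total, 1) if total else 0.0,
    }

tests = [{"status": "pass"}] * 13 + [{"status": "fail"}] * 2
print(summarize(tests))  # {'total': 15, 'passed': 13, 'failed': 2, 'pass_rate': 86.7}
```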
Performance Overview
⚡ Performance Metrics
═══════════════════════
Average Response Time:  234ms
95th Percentile:        456ms
Fastest Request:        89ms (GET /health)
Slowest Request:        1,234ms (POST /heavy-operation)
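The 95th percentile is the response time that 95% of requests came in at or under. How NOIV computes it internally isn't documented here; a nearest-rank sketch for illustration:

```python
def percentile(samples, pct):
    """Nearest-rank percentile: smallest value with at least pct% of samples at or below it."""
    ranked = sorted(samples)
    index = max(0, -(-len(ranked) * pct // 100) - 1)  # ceil(n * pct / 100) - 1
    return ranked[index]

times_ms = [89, 120, 150, 180, 210, 234, 260, 300, 340, 380, 410, 430, 456, 800, 1234]
print(percentile(times_ms, 50))  # 300
print(percentile(times_ms, 95))
```

Other tools interpolate between ranks instead, so reported percentiles can differ slightly across tools for small sample sets.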
Test Results Breakdown
- Interactive table with pass/fail status
- Response times and status codes
- Error details for failed tests
Performance Report
noiv report benchmark_results.json \
--template detailed \
--title "API Performance Analysis" \
--description "Load testing results under 100 concurrent users"
Generated Visualizations:
Response Time Distribution
- Histogram of response times
- Percentile breakdown
- Performance trends over time
Throughput Analysis
- Requests per second
- Peak and average throughput
- Load pattern visualization
Error Rate Analysis
- Error distribution by type
- Error rate over time
- Impact on performance
Comparative Report
noiv report current_results.json \
--compare baseline_results.json \
--title "API Performance Comparison" \
--template detailed
Comparison Features:
Side-by-Side Metrics
Response Time Comparison
═══════════════════════════════════════
                  Current   Baseline   Change
Average:          234ms     267ms      ✅ -12%
95th Percentile:  456ms     523ms      ✅ -13%
99th Percentile:  789ms     845ms      ✅ -7%
Max:              1.2s      1.8s       ✅ -33%
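Each change figure is the signed percent difference against the baseline, where a negative value means the current run is faster. The arithmetic behind the table:

```python
def pct_change(current, baseline):
    """Signed percent change versus baseline; negative means an improvement for latency."""
    return round(100 * (current - baseline) / baseline)

rows = {
    "Average": (234, 267),
    "95th Percentile": (456, 523),
    "99th Percentile": (789, 845),
}
for name, (cur, base) in rows.items():
    change = pct_change(cur, base)
    marker = "✅" if change <= 0 else "⚠️"
    print(f"{name}: {cur}ms vs {base}ms  {marker} {change:+d}%")
```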
Trend Visualization
- Performance improvement/degradation charts
- Success rate comparison
- Error rate changes
Detailed Analysis
- Test-by-test comparison
- Performance regression detection
- Improvement recommendations
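Regression detection in this kind of comparison typically pairs each test with its baseline counterpart by name and flags the ones that slowed down beyond a tolerance. A sketch of that logic, with the `time_ms` field and the 10% threshold being illustrative assumptions rather than NOIV's documented behavior:

```python
def find_regressions(current, baseline, threshold_pct=10):
    """Flag tests whose response time grew more than threshold_pct vs the baseline run."""
    base = {t["name"]: t["time_ms"] for t in baseline}
    regressions = []
    for t in current:
        old = base.get(t["name"])
        if old and 100 * (t["time_ms"] - old) / old > threshold_pct:
            regressions.append(t["name"])
    return regressions

baseline = [{"name": "login", "time_ms": 200}, {"name": "search", "time_ms": 300}]
current  = [{"name": "login", "time_ms": 210}, {"name": "search", "time_ms": 450}]
print(find_regressions(current, baseline))  # ['search']
```

Here `login` grew only 5%, within tolerance, while `search` grew 50% and is flagged.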
Multi-Source Report
noiv report functional_results.json performance_results.json security_results.json \
--merge \
--title "Comprehensive API Testing Report" \
--template detailed
Report Components
Executive Dashboard
<!-- Example report dashboard -->
<div class="dashboard">
<div class="metric-card success">
<h3>✅ Test Success Rate</h3>
<div class="metric-value">86.7%</div>
<div class="metric-trend">↗️ +2.3% from last run</div>
</div>
<div class="metric-card performance">
<h3>⚡ Avg Response Time</h3>
<div class="metric-value">234ms</div>
<div class="metric-trend">↘️ -15ms from baseline</div>
</div>
<div class="metric-card reliability">
<h3>🔄 Reliability Score</h3>
<div class="metric-value">9.2/10</div>
<div class="metric-trend">→ No change</div>
</div>
</div>
Interactive Test Results Table
<!-- Sortable and filterable table -->
<table class="test-results-table">
<thead>
<tr>
<th>Test Name</th>
<th>Status</th>
<th>Response Time</th>
<th>Status Code</th>
<th>Details</th>
</tr>
</thead>
<tbody>
<tr class="test-passed">
<td>Create User - Valid Data</td>
<td><span class="status-badge success">✅ PASS</span></td>
<td>234ms</td>
<td>201</td>
<td><button onclick="showDetails('test1')">View</button></td>
</tr>
<tr class="test-failed">
<td>Delete User - Invalid ID</td>
<td><span class="status-badge failure">❌ FAIL</span></td>
<td>156ms</td>
<td>500</td>
<td><button onclick="showDetails('test2')">View</button></td>
</tr>
</tbody>
</table>
Performance Visualizations
Response Time Chart:
// Chart.js visualization
const responseTimeChart = new Chart(ctx, {
type: 'line',
data: {
labels: testNames,
datasets: [{
label: 'Response Time (ms)',
data: responseTimes,
borderColor: 'rgb(75, 192, 192)',
backgroundColor: 'rgba(75, 192, 192, 0.2)'
}]
},
options: {
responsive: true,
plugins: {
title: {
display: true,
text: 'Response Time Distribution'
}
}
}
});
Success Rate Pie Chart:
const successChart = new Chart(ctx, {
type: 'doughnut',
data: {
labels: ['Passed', 'Failed', 'Skipped'],
datasets: [{
data: [passedCount, failedCount, skippedCount],
backgroundColor: ['#4CAF50', '#F44336', '#FF9800']
}]
}
});
Error Analysis Section
<div class="error-analysis">
<h2>🚨 Error Analysis</h2>
<div class="error-summary">
<h3>Error Distribution</h3>
<ul>
<li>500 Internal Server Error: 2 occurrences</li>
<li>Connection Timeout: 1 occurrence</li>
<li>Validation Error: 1 occurrence</li>
</ul>
</div>
<div class="error-details">
<h3>Detailed Error Information</h3>
<div class="error-item">
<h4>Test: Delete User - Valid ID</h4>
<p><strong>Error:</strong> 500 Internal Server Error</p>
<p><strong>Response:</strong> {"error": "Database connection failed"}</p>
<p><strong>Recommendation:</strong> Check database connectivity and server logs</p>
</div>
</div>
</div>
Advanced Features
Custom Branding
noiv report results.json \
--title "Acme Corp API Tests" \
--description "Production readiness validation" \
--theme blue \
--template executive
Custom CSS:
/* Custom report styling */
.report-header {
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: white;
padding: 2rem;
}
.company-logo {
max-height: 50px;
margin-right: 1rem;
}
.metric-card {
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
transition: transform 0.2s;
}
.metric-card:hover {
transform: translateY(-2px);
}
Interactive Elements
Test Details Modal:
<div id="testModal" class="modal">
<div class="modal-content">
<h2>Test Details</h2>
<div class="test-info">
<h3>Request</h3>
<pre><code class="json">{
"method": "POST",
"url": "https://api.example.com/users",
"headers": {
"Content-Type": "application/json"
},
"body": {
"name": "John Doe",
"email": "john@example.com"
}
}</code></pre>
<h3>Response</h3>
<pre><code class="json">{
"id": "12345",
"name": "John Doe",
"email": "john@example.com",
"created_at": "2025-07-24T10:30:00Z"
}</code></pre>
<h3>Assertions</h3>
<ul class="assertion-list">
<li class="assertion-pass">✅ Status code equals 201</li>
<li class="assertion-pass">✅ Response has field 'id'</li>
<li class="assertion-pass">✅ Response time under 500ms</li>
</ul>
</div>
</div>
</div>
Filtering and Sorting:
// Interactive table filtering
function filterTests(category) {
const rows = document.querySelectorAll('.test-row');
rows.forEach(row => {
if (category === 'all' || row.classList.contains(category)) {
row.style.display = '';
} else {
row.style.display = 'none';
}
});
}
// Real-time search
function searchTests(query) {
const rows = document.querySelectorAll('.test-row');
rows.forEach(row => {
const testName = row.querySelector('.test-name').textContent;
if (testName.toLowerCase().includes(query.toLowerCase())) {
row.style.display = '';
} else {
row.style.display = 'none';
}
});
}
Export Capabilities
PDF Export:
<button onclick="exportToPDF()" class="export-btn">
📄 Export to PDF
</button>
<script>
function exportToPDF() {
window.print(); // Uses browser's print-to-PDF
}
</script>
CSV Export:
function exportToCSV() {
const data = testResults.map(test => ({
'Test Name': test.name,
'Status': test.status,
'Response Time': test.responseTime,
'Status Code': test.statusCode
}));
const csv = Papa.unparse(data);
downloadFile(csv, 'test-results.csv', 'text/csv');
}
Report Customization
Custom Templates
# Create custom template directory
mkdir ~/.noiv/templates
# Copy and modify existing template
cp /usr/local/lib/python3.9/site-packages/noiv/templates/standard.html ~/.noiv/templates/custom.html
# Use custom template
noiv report results.json --template custom
Custom CSS Themes
/* ~/.noiv/themes/corporate.css */
:root {
--primary-color: #2c3e50;
--secondary-color: #3498db;
--success-color: #27ae60;
--error-color: #e74c3c;
--warning-color: #f39c12;
--background-color: #f8f9fa;
--text-color: #2c3e50;
}
.report-container {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
max-width: 1200px;
margin: 0 auto;
padding: 2rem;
background-color: var(--background-color);
color: var(--text-color);
}
.header {
background: linear-gradient(135deg, var(--primary-color), var(--secondary-color));
color: white;
padding: 2rem;
border-radius: 8px;
margin-bottom: 2rem;
}
Dynamic Content
// Add real-time timestamps
function addTimestamp() {
const timestamp = new Date().toLocaleString();
document.getElementById('generated-time').textContent = `Generated: ${timestamp}`;
}
// Add interactive tooltips
function addTooltips() {
const elements = document.querySelectorAll('[data-tooltip]');
elements.forEach(el => {
el.addEventListener('mouseenter', showTooltip);
el.addEventListener('mouseleave', hideTooltip);
});
}
// Progress indicators
function updateProgress() {
const progressBar = document.querySelector('.progress-bar');
const percentage = (passedTests / totalTests) * 100;
progressBar.style.width = `${percentage}%`;
progressBar.textContent = `${percentage.toFixed(1)}%`;
}
Integration Examples
CI/CD Report Generation
# .github/workflows/api-tests.yml
name: API Tests with Reports
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install NOIV
run: pip install noiv
- name: Run Tests
run: |
noiv test api_tests.yaml \
--format json \
--output test_results.json
- name: Generate Report
run: |
noiv report test_results.json \
--template detailed \
--title "API Test Report - Build ${{ github.run_number }}" \
--output api_report.html
- name: Upload Report
uses: actions/upload-artifact@v4
with:
name: api-test-report
path: api_report.html
- name: Deploy Report to GitHub Pages
if: github.ref == 'refs/heads/main'
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: .
publish_branch: gh-pages
destination_dir: reports
Automated Report Distribution
#!/bin/bash
# generate_and_distribute_report.sh
# Run tests and generate report
noiv test api_tests.yaml --format json --output results.json
noiv report results.json \
--template executive \
--title "Weekly API Health Report" \
--description "Automated testing results for week $(date +%V)" \
--output weekly_report.html
# Email report to stakeholders
echo "Weekly API test report attached" | \
mail -s "API Health Report - Week $(date +%V)" \
-a weekly_report.html \
stakeholders@company.com
# Upload to shared drive
aws s3 cp weekly_report.html s3://company-reports/api/weekly/
Dashboard Integration
# Generate JSON for dashboard consumption
noiv report results.json \
--format json \
--output dashboard_data.json
# Upload to monitoring system
curl -X POST https://monitoring.company.com/api/reports \
-H "Content-Type: application/json" \
-d @dashboard_data.json
Best Practices
1. Consistent Reporting
# Use standardized report titles and descriptions
noiv report results.json \
--title "$(basename $(pwd)) API Tests - $(date +%Y-%m-%d)" \
--description "Automated test execution results"
2. Archive Reports
# Create timestamped reports for historical tracking
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
noiv report results.json --output "reports/api_report_$TIMESTAMP.html"
# Keep only last 30 days of reports
find reports/ -name "*.html" -mtime +30 -delete
3. Environment-Specific Reports
# Include environment information in reports
noiv report results.json \
--title "API Tests - $ENVIRONMENT Environment" \
--description "Test results for $ENVIRONMENT deployment"
4. Stakeholder-Appropriate Templates
# Technical team - detailed report
noiv report results.json --template detailed --output tech_report.html
# Management - executive summary
noiv report results.json --template executive --output mgmt_report.html
# QA team - standard report with raw data
noiv report results.json --template standard --include-raw --output qa_report.html
5. Performance Tracking
# Compare against baseline for performance regression detection
noiv report current_results.json \
--compare baseline_results.json \
--title "Performance Regression Analysis" \
--template detailed
Troubleshooting
Missing Data
# Verify results file format
noiv report invalid_results.json
# Output: Error: Invalid results format. Expected JSON with 'tests' array.
# Check file contents
jq . results.json | head -20
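If you want to pre-check a file before running the command, the validation implied by that error message can be reproduced in a few lines. This assumes the expected shape is a JSON object with a `tests` array, as the error text above suggests:

```python
import json

def validate_results(text):
    """Return (ok, message) for the shape the report command appears to expect."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        return False, f"Not valid JSON: {exc}"
    if not isinstance(data, dict) or not isinstance(data.get("tests"), list):
        return False, "Invalid results format. Expected JSON with 'tests' array."
    return True, f"OK: {len(data['tests'])} tests"

print(validate_results('{"tests": [{"name": "t1"}]}'))  # (True, 'OK: 1 tests')
print(validate_results('{"results": []}'))
```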
Large Report Files
# Generate minimal report for large datasets
noiv report large_results.json \
--template minimal \
--filter "critical.*" \
--output summary_report.html
Browser Compatibility
<!-- Reports include compatibility checks -->
<script>
if (!window.fetch) {
document.body.innerHTML = '<div class="browser-warning">This report requires a modern browser.</div>';
}
</script>
See Also
- HTML Reports Guide - Comprehensive reporting guide
- noiv test - Generate test results for reporting
- noiv benchmark - Performance testing for reports
- Performance Testing - Learn performance analysis