Quick Start: Testing Guide
TL;DR
# Test all configs with human-readable output (default)
./run-config-tests.sh
# Generate HTML report
./run-config-tests.sh --format html
# Test single config with HTML output
./run-config-tests.sh --format html cfg0.json
# Results are in: output/
Common Commands
Test All Configurations
# Stub mode (default - fast, proto.c parsing only)
./run-config-tests.sh # Console output with colors (default)
./run-config-tests.sh --format html # Interactive HTML report
./run-config-tests.sh --format json # Machine-readable JSON
# Platform mode (integration testing with platform code)
./run-config-tests.sh --mode platform # Console output
./run-config-tests.sh --mode platform --format html # HTML report
Test Single Configuration
# Stub mode (default)
./run-config-tests.sh cfg0.json # Human output (default)
./run-config-tests.sh --format html cfg0.json # HTML report
./run-config-tests.sh --format json cfg0.json # JSON output
# Platform mode
./run-config-tests.sh --mode platform cfg0.json # Human output
./run-config-tests.sh --mode platform --format html cfg0.json # HTML report
View Results
# Open HTML report in browser
open output/test-report.html # macOS
xdg-open output/test-report.html # Linux
# View text results
cat output/test-results.txt
# Parse JSON results
cat output/test-report.json | jq '.summary'
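For quick checks you can pull individual numbers straight out of the JSON report with jq. The key names under .summary used below (total, passed, failed) are assumptions for illustration; run jq '.summary' first to see the actual keys.

# Print a compact pass/fail line from the JSON report.
# NOTE: .total/.passed/.failed are assumed key names -- confirm with `jq '.summary'`.
jq -r '.summary | "total=\(.total) passed=\(.passed) failed=\(.failed)"' output/test-report.json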
What the Script Does
- ✅ Checks Docker is running
- ✅ Builds Docker environment (only if needed)
- ✅ Starts/reuses container
- ✅ Runs tests inside container
- ✅ Copies results to output/ directory
- ✅ Shows summary
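For orientation, the steps above roughly correspond to the sequence below. This is an illustrative sketch only: the image name, container name, and in-container paths are placeholders, and the authoritative logic lives in run-config-tests.sh.

#!/usr/bin/env bash
# Illustrative sketch of the runner's flow (names and paths are placeholders).
set -euo pipefail

docker info >/dev/null 2>&1 || { echo "Docker is not running" >&2; exit 2; }

# Build the test image only if it is missing (slow on the first run, cached afterwards).
docker image inspect ols-test-env >/dev/null 2>&1 || docker build -t ols-test-env .

# Start the container if it is not already running, then execute the tests inside it.
[ -n "$(docker ps -q -f name=ols-test)" ] || docker run -d --name ols-test ols-test-env sleep infinity
docker exec ols-test /workspace/run-tests.sh    # placeholder in-container entry point

# Copy results back to the host.
mkdir -p output
docker cp ols-test:/workspace/output/. output/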
Output Formats
| Format | Use Case | Output File |
|---|---|---|
| human | Interactive development, debugging | output/test-results.txt |
| html | Reports, sharing, presentations | output/test-report.html |
| json | CI/CD, automation, metrics | output/test-report.json |
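If you want more than one report from the same configs, the simplest approach is to run the script once per format; each format writes a distinct file in output/, so the results do not overwrite each other. (Passing --format human is assumed to be accepted since human is the documented default; omitting --format gives the same result.)

# Produce all three report formats (cheap on a warm container).
for fmt in human html json; do
    ./run-config-tests.sh --format "$fmt"
done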
First Run vs Subsequent Runs
First Run (cold start):
- Builds Docker environment: ~10 minutes (one-time)
- Runs tests: ~30 seconds
- Total: ~10 minutes
Subsequent Runs (warm start):
- Reuses environment: ~2 seconds
- Runs tests: ~30 seconds
- Total: ~30 seconds
Troubleshooting
Docker not running
# Start Docker Desktop (macOS/Windows)
# OR
sudo systemctl start docker # Linux
Permission denied
chmod +x run-config-tests.sh
Config not found
# List available configs
ls config-samples/*.json
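A small guard before invoking the runner avoids this error entirely; it assumes configs are passed by bare file name and resolved against config-samples/, as the examples in this guide do.

# Fail fast if the requested config is missing.
CFG=cfg0.json
if [ ! -f "config-samples/$CFG" ]; then
    echo "Config $CFG not found. Available configs:" >&2
    ls -1 config-samples/*.json | xargs -n1 basename >&2
    exit 2
fi
./run-config-tests.sh "$CFG"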
CI/CD Integration
Exit Codes
- 0 = All tests passed ✅
- 1 = Tests failed ❌
- 2 = System error ⚠️
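The distinct codes let a wrapper script tell a genuine test failure apart from an environment problem, for example:

#!/usr/bin/env bash
# React differently to test failures (1) and system/environment errors (2).
./run-config-tests.sh --format json
case $? in
    0) echo "✅ All tests passed" ;;
    1) echo "❌ Tests failed -- see output/test-report.json"; exit 1 ;;
    2) echo "⚠️ System error (is Docker running?)"; exit 2 ;;
    *) echo "Unexpected exit code"; exit 3 ;;
esac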
Example Pipeline
- name: Run tests
  run: |
    if ./run-config-tests.sh --format json; then
      echo "✅ All tests passed"
    else
      echo "❌ Tests failed"
      exit 1
    fi
Available Test Configs
# List all configs
ls -1 config-samples/*.json | xargs -n1 basename
# Common test configs:
cfg0.json # Basic config
ECS4150-TM.json # Traffic management
ECS4150-ACL.json # Access control lists
ECS4150STP_RSTP.json # Spanning tree
ECS4150_IGMP_Snooping.json # IGMP snooping
ECS4150_POE.json # Power over Ethernet
ECS4150_VLAN.json # VLAN configuration
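To exercise a feature-specific subset, loop over the matching files by base name; this assumes the runner accepts a bare file name, as in the single-config examples above.

# Run every ECS4150 feature config, stopping at the first failure.
for cfg in config-samples/ECS4150*.json; do
    echo "=== $(basename "$cfg") ==="
    ./run-config-tests.sh "$(basename "$cfg")" || exit 1
done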
What Gets Tested
- ✅ JSON schema validation (structure, types, constraints)
- ✅ Parser validation (actual C parser implementation)
- ✅ Property tracking (configured vs unknown properties)
- ✅ Feature coverage (implemented vs documented features)
- ✅ Error handling (invalid configs, missing fields)
Test Modes
Stub Mode (Default - Fast)
- Tests proto.c parsing only
- Uses simple platform stubs
- Shows base properties only
- Execution time: ~30 seconds
- Use for: Quick validation, CI/CD
Platform Mode (Integration)
- Tests proto.c + platform code (plat-gnma.c)
- Uses platform implementation with mocks
- Shows base AND platform properties
- Tracks hardware application functions
- Execution time: ~45 seconds
- Use for: Platform-specific validation
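To see exactly what platform mode adds for a given config, run both modes back to back and keep each result file under its own name before the next run overwrites it (this assumes both modes write output/test-results.txt, per the Output Formats table):

# Compare stub vs platform results for the same config.
./run-config-tests.sh cfg0.json
cp output/test-results.txt output/test-results-stub.txt

./run-config-tests.sh --mode platform cfg0.json
cp output/test-results.txt output/test-results-platform.txt

# Platform mode should additionally report platform properties; the diff shows the extras.
diff -u output/test-results-stub.txt output/test-results-platform.txt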
Quick Reference
| Task | Command |
|---|---|
| Test everything (stub) | ./run-config-tests.sh |
| Test everything (platform) | ./run-config-tests.sh --mode platform |
| HTML report | ./run-config-tests.sh --format html |
| JSON output | ./run-config-tests.sh --format json |
| Single config | ./run-config-tests.sh cfg0.json |
| Single config HTML | ./run-config-tests.sh -f html cfg0.json |
| Platform mode single config | ./run-config-tests.sh -m platform cfg0.json |
| View HTML | open output/test-report.html |
| View results | cat output/test-results.txt |
| Parse JSON | cat output/test-report.json \| jq |
Full Documentation
- TEST_RUNNER_README.md - Complete script documentation
- TESTING_FRAMEWORK.md - Testing framework overview
- tests/config-parser/TEST_CONFIG_README.md - Detailed testing guide
- TEST_CONFIG_PARSER_DESIGN.md - Test framework architecture
- tests/MAINTENANCE.md - Maintenance procedures
- README.md - Project overview and build instructions
Directory Structure
ols-ucentral-client/
├── run-config-tests.sh ← Test runner script
├── output/ ← Test results go here
├── config-samples/ ← Test configurations
└── tests/
├── config-parser/
│ ├── test-config-parser.c ← Test implementation
│ ├── test-stubs.c ← Platform stubs
│ ├── config-parser.h ← Test header
│ ├── Makefile ← Test build system
│ └── TEST_CONFIG_README.md ← Detailed guide
├── schema/
│ ├── validate-schema.py ← Schema validator
│ └── SCHEMA_VALIDATOR_README.md
├── tools/ ← Property database tools
└── MAINTENANCE.md ← Maintenance procedures
Need help? Check TEST_RUNNER_README.md for troubleshooting and advanced usage.