Add comprehensive Test Orchestrator pattern for monorepo testing

This commit implements a unified test orchestration system that coordinates
all test suites across the BlackRoad Operating System monorepo, providing
consistent testing behavior between local development and CI/CD environments.

## Core Components

### 1. Test Orchestrator Script (test_all.sh)
- Unified interface to run all test suites
- Smart suite detection with existence checks
- Two operational modes:
  * Best-effort: Run all suites, report summary (default)
  * Strict mode: Fail-fast on first error (--strict)
- Color-coded, structured output with summary table
- Modular suite functions for easy extension
- Result tracking with pass/fail/skip status and duration
- Verbose mode for detailed test output
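The two operational modes and result tracking reduce to a small core loop. This is an illustrative sketch only, not the actual script; helper names like `record_result` follow the pattern described above but are simplified here:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the two-mode design and result tracking; the
# real test_all.sh is more elaborate, but its core loop reduces to this.
STRICT=0
if [[ "${1:-}" == "--strict" ]]; then STRICT=1; fi

declare -A RESULTS   # suite name -> PASS/FAIL/SKIP

record_result() {
  RESULTS["$1"]="$2"
}

run_suite() {
  local name="$1"; shift
  if "$@"; then
    record_result "$name" "PASS"
  else
    record_result "$name" "FAIL"
    if [[ $STRICT -eq 1 ]]; then
      exit 1   # strict mode: fail-fast on first error
    fi
  fi
}

run_suite demo-pass true    # `true`/`false` stand in for real suite commands
run_suite demo-fail false

for suite in "${!RESULTS[@]}"; do
  printf '%-12s %s\n' "$suite" "${RESULTS[$suite]}"
done
```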

Supported test suites:
- Backend (FastAPI + pytest)
- Agents (200+ AI agent ecosystem)
- Operator Engine (GitHub automation)
- Python SDK (pytest)
- TypeScript SDK (Jest)
- Frontend (structure validation)

### 2. GitHub Actions Workflow (.github/workflows/test-orchestrator.yml)
- Runs orchestrator in CI using same script as local dev
- Service containers (Postgres, Redis) for integration tests
- Multi-language runtime setup (Python 3.11, Node 20)
- Dependency caching for faster builds
- Test artifact uploads (coverage, reports)
- Manual workflow dispatch with suite selection
- Coverage reporting for PRs (Codecov integration)
- Automatic PR status comments

### 3. Comprehensive Documentation (TESTING.md)
- Complete testing guide for developers and AI assistants
- Quick start examples
- Suite-by-suite documentation
- Local development setup instructions
- CI/CD integration guide
- Test writing best practices
- Troubleshooting FAQ with common issues and solutions
- Framework-specific examples

## Reusable Templates (.templates/test-orchestrator/)

Created generic templates for use in other repositories:

### Template Files
- test_all.sh.template - Generic orchestrator script
- test-orchestrator.yml.template - Generic CI workflow
- TESTING.md.template - Generic testing documentation
- PROMPTS.md - AI assistant prompts for implementation
- README.md - Template usage guide and customization instructions

### Key Features
- Clear placeholders ({{REPO_NAME}}, {{PROJECT_DESCRIPTION}}, etc.)
- Comprehensive inline comments
- Framework-agnostic design (Python/Node/Go/Rust examples)
- Adaptation guides for different project structures
- AI assistant prompts for Claude, Copilot, ChatGPT

### Use Cases
- Multi-language monorepos
- Microservices architectures
- Data science projects
- Infrastructure projects
- Any project needing unified test orchestration

## Benefits

1. **Consistency**: Same test experience locally and in CI
2. **Discoverability**: New contributors know exactly how to run tests
3. **Maintainability**: Single pattern to learn and maintain
4. **Extensibility**: Easy to add new test suites
5. **CI-Friendly**: Optimized for GitHub Actions
6. **Reusability**: Templates can be copied to any repo

## Usage

Local development:
  ./test_all.sh                    # Run all suites
  ./test_all.sh --strict           # Fail-fast mode
  ./test_all.sh --suite backend    # Run specific suite
  ./test_all.sh --verbose          # Detailed output
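These flags can be handled with an ordinary while/case loop. A minimal sketch (illustrative only; the real script's option handling may differ):

```shell
# Illustrative sketch of the flag parsing behind the commands above; the
# actual test_all.sh's option handling may differ.
STRICT=0
VERBOSE=0
SPECIFIC_SUITE=""

parse_args() {
  while [[ $# -gt 0 ]]; do
    case "$1" in
      --strict)  STRICT=1 ;;
      --verbose) VERBOSE=1 ;;
      --suite)   SPECIFIC_SUITE="$2"; shift ;;
      *) echo "Unknown option: $1" >&2; return 1 ;;
    esac
    shift
  done
}

parse_args --strict --suite backend
```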

CI triggers automatically on:
  - Push to main, claude/**, copilot/**, codex/** branches
  - Pull requests to main
  - Manual workflow dispatch
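Expressed as a workflow `on:` block, these triggers look roughly like the following (an illustrative fragment; the actual workflow file may differ):

```yaml
on:
  push:
    branches: ["main", "claude/**", "copilot/**", "codex/**"]
  pull_request:
    branches: ["main"]
  workflow_dispatch:
    inputs:
      suite:
        description: "Optional single suite to run"
        required: false
```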

## Migration Notes

This implementation:
- Preserves existing test scripts (scripts/run_backend_tests.sh)
- Works alongside existing CI workflows
- Can be adopted gradually or all at once
- Requires no changes to existing test code

## Future Enhancements

Potential additions:
- Matrix testing across Python/Node versions
- Performance benchmarking suite
- Flaky test detection
- Test result caching
- Slack/Discord notifications

---

Pattern adapted for: BlackRoad Operating System monorepo
Designed for: Maximum reusability across projects
Target audience: Developers, DevOps engineers, AI assistants
Claude
2025-11-18 07:42:01 +00:00
parent b079ba1dfe
commit c50abba250
9 changed files with 4262 additions and 0 deletions


@@ -0,0 +1,634 @@
# AI Assistant Prompts for Test Orchestrator Pattern
> **Ready-to-use prompts for Claude, Copilot, ChatGPT, and other AI assistants**
These prompts help AI assistants implement the Test Orchestrator pattern in any repository.
---
## Quick Prompt: Implement Test Orchestrator
Copy and paste this into your AI assistant:
```
You are an AI engineer helping me implement the BlackRoad Test Orchestrator pattern in my repository.
## Context
I want to create a unified test orchestrator script (test_all.sh) and matching CI workflow for my monorepo.
Repository structure:
[PASTE YOUR DIRECTORY STRUCTURE HERE - use `tree -L 2` or describe it]
Current test commands:
- Component 1: [your test command]
- Component 2: [your test command]
- Component 3: [your test command]
## Task
Using the BlackRoad Test Orchestrator templates located at:
- .templates/test-orchestrator/test_all.sh.template
- .templates/test-orchestrator/test-orchestrator.yml.template
- .templates/test-orchestrator/TESTING.md.template
Please:
1. **Create test_all.sh** adapted to my project structure
- Replace placeholder suite functions with my actual components
- Use the correct test commands for each suite
- Update the suite list in print_summary()
- Customize the help text
2. **Create .github/workflows/test-orchestrator.yml**
- Add service containers I need (e.g., Postgres, Redis, etc.)
- Set up the correct language runtimes (Python, Node, Go, etc.)
- Configure environment variables
- Add appropriate caching
3. **Create TESTING.md**
- Document my specific test suites
- Include my setup instructions
- Add troubleshooting for my stack
4. **Make it executable**
- Ensure test_all.sh has executable permissions
## Constraints
- Keep the same structure and helper functions from the template
- Maintain the two-mode design (best-effort and strict)
- Keep the summary table format
- Don't remove the color-coded output
- Follow the same naming conventions
## Output Format
Provide:
1. Complete test_all.sh file
2. Complete .github/workflows/test-orchestrator.yml file
3. Complete TESTING.md file
4. Brief explanation of what you changed
Ready? Let's implement this!
```
---
## Detailed Prompt: Custom Test Orchestrator
For more complex projects, use this expanded prompt:
````
I want to implement a comprehensive test orchestrator for my {{LANGUAGE/STACK}} project.
## Project Details
**Name**: {{REPO_NAME}}
**Description**: {{PROJECT_DESCRIPTION}}
**Primary Languages**: {{e.g., Python, TypeScript, Go, Rust}}
**Test Frameworks**: {{e.g., pytest, Jest, Go testing, Cargo}}
## Current Structure
```
[PASTE TREE OUTPUT OR DESCRIBE STRUCTURE]
```
## Test Suites
I have the following test suites:
1. **{{Suite Name}}**
   - Location: `{{path}}`
   - Framework: {{framework}}
   - Command: `{{test command}}`
   - Dependencies: {{databases, services, etc.}}
2. **{{Suite Name}}**
   - Location: `{{path}}`
   - Framework: {{framework}}
   - Command: `{{test command}}`
   - Dependencies: {{databases, services, etc.}}
[Add more suites...]
## Service Dependencies
For CI, I need:
- [ ] PostgreSQL (version: {{version}})
- [ ] Redis (version: {{version}})
- [ ] MySQL (version: {{version}})
- [ ] MongoDB (version: {{version}})
- [ ] Other: {{describe}}
## Environment Variables
My tests need these environment variables:
```bash
{{VAR_NAME}}={{description}}
{{VAR_NAME}}={{description}}
```
## Special Requirements
- [ ] Need to run database migrations before tests
- [ ] Need to seed test data
- [ ] Need to build artifacts before testing
- [ ] Need to run linters/formatters
- [ ] Need to generate coverage reports
- [ ] Other: {{describe}}
## Task
Create a complete test orchestrator implementation based on the BlackRoad pattern, including:
1. A **test_all.sh** that:
   - Runs all my test suites
   - Handles my specific frameworks
   - Sets up my service dependencies (locally or via Docker)
   - Has clear, color-coded output
   - Supports best-effort and strict modes
   - Generates a summary table
2. A **GitHub Actions workflow** that:
   - Sets up all required service containers
   - Installs all language runtimes
   - Caches dependencies appropriately
   - Runs the test orchestrator
   - Uploads artifacts (coverage, reports)
   - Posts results to PRs
3. A **TESTING.md** that:
   - Documents my setup process
   - Explains each test suite
   - Provides troubleshooting for my stack
   - Includes examples for my frameworks
4. **Optional enhancements**:
   - Matrix testing across versions
   - Coverage reporting (Codecov integration)
   - Performance benchmarking
   - Slack/Discord notifications
## Output Format
Provide complete, ready-to-use files with:
- No placeholders (use my actual values)
- Comments explaining key sections
- Examples relevant to my stack
- Clear migration instructions from my current setup
Let's build this!
````
---
## Minimal Prompt: Quick Setup
For simple projects:
```
Implement the BlackRoad Test Orchestrator pattern for my project.
My test commands:
- {{component}}: {{command}}
- {{component}}: {{command}}
Create:
1. test_all.sh (based on .templates/test-orchestrator/test_all.sh.template)
2. .github/workflows/test-orchestrator.yml
3. Make it work with my stack: {{language/framework}}
Keep it simple and working. Go!
```
---
## Migration Prompt: From Existing Setup
If you already have test scripts:
```
I want to migrate my existing test scripts to the BlackRoad Test Orchestrator pattern.
## Current Setup
I currently run tests using:
[DESCRIBE CURRENT APPROACH - Makefile, scripts, npm scripts, etc.]
Current test scripts:
- {{path/to/script1}}: {{what it does}}
- {{path/to/script2}}: {{what it does}}
Current CI workflow:
[PASTE .github/workflows/*.yml or describe]
## Goal
Consolidate everything into:
1. Single test_all.sh orchestrator
2. Unified GitHub Actions workflow
3. Clear documentation
## Migration Strategy
I want to:
- [ ] Keep my existing scripts temporarily (call them from orchestrator)
- [ ] Refactor everything into the orchestrator directly
- [ ] Migrate gradually (one suite at a time)
Please:
1. Analyze my current setup
2. Propose a migration plan
3. Implement the orchestrator calling my existing scripts
4. Suggest refactoring opportunities
Let's migrate!
```
---
## Enhancement Prompt: Add Features
To add features to existing orchestrator:
```
I already have the BlackRoad Test Orchestrator implemented.
I want to add:
- [ ] Matrix testing (test across {{Python 3.9, 3.10, 3.11}} or {{Node 18, 20, 22}})
- [ ] Coverage reporting with Codecov
- [ ] Performance benchmarking
- [ ] Slack/Discord notifications on failure
- [ ] Parallel test execution
- [ ] Test result caching
- [ ] Flaky test detection
- [ ] Other: {{describe}}
Current test_all.sh location: ./test_all.sh
Current workflow location: .github/workflows/test-orchestrator.yml
Please:
1. Show me how to implement {{feature}}
2. Update both test_all.sh and the workflow
3. Explain the tradeoffs
4. Provide configuration examples
Let's enhance!
```
---
## Troubleshooting Prompt
When something isn't working:
````
My Test Orchestrator isn't working correctly.
## Problem
{{DESCRIBE THE ISSUE}}
## What I've Tried
{{WHAT YOU'VE TRIED}}
## Error Output
```
{{PASTE ERROR OUTPUT}}
```
## Current Configuration
test_all.sh:
```bash
{{PASTE RELEVANT SECTION}}
```
Workflow:
```yaml
{{PASTE RELEVANT SECTION}}
```
## Environment
- OS: {{Linux/macOS/Windows}}
- CI: {{GitHub Actions/GitLab CI/CircleCI}}
- Languages: {{Python 3.11, Node 20, etc.}}
Please:
1. Diagnose the issue
2. Provide a fix
3. Explain why it happened
4. Suggest prevention strategies
Help!
````
---
## Framework-Specific Prompts
### Python (pytest) Project
```
Implement BlackRoad Test Orchestrator for my Python project.
Structure:
- src/myapp/
- tests/
- unit/
- integration/
- e2e/
- pytest.ini
- requirements.txt
- requirements-dev.txt
I want:
- Separate suites for unit, integration, e2e
- Coverage reporting (pytest-cov)
- PostgreSQL for integration tests
- Environment isolation (venv)
Create test orchestrator with these suites.
```
### Node.js (Jest) Project
```
Implement BlackRoad Test Orchestrator for my Node.js monorepo.
Structure:
- packages/
- api/ (Express, Jest)
- web/ (React, Vitest)
- sdk/ (TypeScript, Jest)
- package.json (workspace)
I want:
- Suite per package
- Coverage aggregation
- MongoDB for API tests
- Redis for caching tests
Create test orchestrator for this setup.
```
### Go Project
```
Implement BlackRoad Test Orchestrator for my Go project.
Structure:
- cmd/myapp/
- internal/
- pkg/
- tests/
- unit/
- integration/
Test commands:
- Unit: go test ./...
- Integration: go test -tags=integration ./tests/integration/...
- Benchmarks: go test -bench=. ./...
Create orchestrator that handles these.
```
### Rust Project
```
Implement BlackRoad Test Orchestrator for my Rust workspace.
Structure:
- crates/
- api/
- core/
- cli/
- Cargo.toml (workspace)
Test commands:
- All: cargo test --all
- Per crate: cargo test -p <crate>
- Doc tests: cargo test --doc
Create orchestrator for Rust testing.
```
---
## Copilot-Specific Tips
When using **GitHub Copilot Chat**:
1. **Open the template files first** so Copilot has context
2. **Use `/tests` command** to generate suite-specific tests
3. **Reference files explicitly**: "Using .templates/test-orchestrator/test_all.sh.template as a base..."
4. **Iterate in chat**: Ask for changes, then refine
Example Copilot Chat flow:
```
You: Using .templates/test-orchestrator/test_all.sh.template, create test_all.sh for my Python project
Copilot: [generates script]
You: Add a suite for my FastAPI backend in backend/
Copilot: [adds suite]
You: Make the PostgreSQL connection optional
Copilot: [updates script]
```
---
## Claude-Specific Tips
When using **Claude**:
1. **Provide full context** in one message
2. **Use code blocks** for templates
3. **Ask for complete files** rather than diffs
4. **Request explanations** for changes
Example Claude prompt:
```
I'm going to paste three template files. Then I want you to adapt them for my project.
[Template 1: test_all.sh.template]
{{paste file}}
[Template 2: test-orchestrator.yml.template]
{{paste file}}
[Template 3: TESTING.md.template]
{{paste file}}
My project structure:
{{describe}}
Please create adapted versions for my project, replacing all placeholders with real values.
```
---
## ChatGPT-Specific Tips
When using **ChatGPT**:
1. **Break into smaller steps** if context window is limited
2. **Use code interpreter** for testing syntax
3. **Ask for alternatives** to see different approaches
4. **Iterate on sections** rather than whole files
Example ChatGPT flow:
```
Me: I want to implement a test orchestrator. Here's my project structure: [paste]
ChatGPT: [suggests approach]
Me: Let's start with the test_all.sh script. Here's the template: [paste]
ChatGPT: [creates script]
Me: Now the GitHub Actions workflow...
[Continue]
```
---
## Tips for All AI Assistants
### 1. Provide Context
Good:
```
My Python monorepo has:
- Backend (FastAPI, pytest)
- Workers (Celery, pytest)
- SDK (httpx, pytest)
Currently I run: cd backend && pytest, cd workers && pytest, etc.
I want one test_all.sh to run everything.
```
Bad:
```
Make me a test script.
```
### 2. Show, Don't Tell
Good:
```
My test suite structure:
tests/
├── unit/
│   ├── test_auth.py
│   └── test_users.py
├── integration/
│   ├── test_api.py
│   └── test_db.py
└── conftest.py
I run: pytest tests/unit/ then pytest tests/integration/
```
Bad:
```
I have some tests in a tests folder.
```
### 3. Specify Exactly What You Need
Good:
```
Create:
1. test_all.sh with suites: backend, frontend, e2e
2. GitHub Actions workflow with Postgres and Redis
3. TESTING.md with setup instructions
Don't include coverage reporting yet.
```
Bad:
```
Make it work.
```
### 4. Iterate
It's okay to refine! After getting initial output:
```
This is close! Can you:
- Change the timeout from 30 to 60 minutes
- Add a suite for my docs (using mdbook test)
- Remove the Redis service (I don't need it)
```
---
## Success Checklist
After implementing, verify:
- [ ] `./test_all.sh` runs and shows summary
- [ ] `./test_all.sh --help` shows your suites
- [ ] `./test_all.sh --suite <name>` runs specific suite
- [ ] `./test_all.sh --strict` fails fast on errors
- [ ] GitHub Actions workflow passes
- [ ] CI output is clear and helpful
- [ ] TESTING.md documents your setup
- [ ] New developers can run tests easily
---
## Common Pitfalls to Avoid
❌ Don't:
- Hardcode paths without checking if they exist
- Skip error handling in suite functions
- Forget to update the suite list in `print_summary()`
- Remove the color output (it's helpful!)
- Make the script depend on complex setup
✅ Do:
- Check if directories exist before running tests
- Record results for every suite (PASS/FAIL/SKIP)
- Update all three places when adding a suite (function, main execution, summary)
- Keep the clean, color-coded output
- Make it work out of the box or fail clearly
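The "check directories exist before running tests" rule can be demonstrated in a few lines. A toy example (the temp-dir layout and `run_guarded_suite` helper exist purely for illustration):

```shell
# Toy demonstration of the "check directories exist" rule; ROOT and the
# suite layout are created in a temp dir purely for illustration.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/backend"

run_guarded_suite() {
  local name="$1" dir="$2"
  if [[ ! -d "$ROOT/$dir" ]]; then
    echo "SKIP $name ($dir/ not found)"
    return 0   # a missing suite is a skip, not a failure
  fi
  echo "RUN $name"
}

run_guarded_suite backend backend
run_guarded_suite ghost no-such-dir
```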
---
## Next Steps After Implementation
1. **Test it locally**: `./test_all.sh`
2. **Commit the changes**: `git add test_all.sh .github/workflows/ TESTING.md`
3. **Push and verify CI**: Check GitHub Actions
4. **Update team docs**: Link to TESTING.md in README
5. **Share the pattern**: Help other teams adopt it
---
## Support
- **Original Pattern**: BlackRoad Operating System
- **Templates**: `.templates/test-orchestrator/`
- **Issues**: Open an issue in your repository
- **Questions**: Refer to TESTING.md
---
**Happy Orchestrating! 🎼✨**
---
*These prompts are designed to work with Claude, GitHub Copilot, ChatGPT, and other AI assistants.*
*Adapt them to your specific needs and workflow.*


@@ -0,0 +1,478 @@
# Test Orchestrator Templates
> **Reusable test orchestrator pattern for any monorepo**
These templates provide a complete testing infrastructure that you can copy to any repository. The pattern has been battle-tested in BlackRoad Operating System and is designed to be easily adaptable.
---
## What's Included
1. **`test_all.sh.template`** - Universal test orchestrator script
2. **`test-orchestrator.yml.template`** - GitHub Actions workflow
3. **`TESTING.md.template`** - Comprehensive testing documentation
---
## Quick Start: Copy to New Repo
### 1. Copy Templates
```bash
# In your target repository
mkdir -p .github/workflows
# Copy orchestrator script
cp /path/to/BlackRoad/.templates/test-orchestrator/test_all.sh.template ./test_all.sh
chmod +x test_all.sh
# Copy GitHub Actions workflow
cp /path/to/BlackRoad/.templates/test-orchestrator/test-orchestrator.yml.template .github/workflows/test-orchestrator.yml
# Copy documentation
cp /path/to/BlackRoad/.templates/test-orchestrator/TESTING.md.template ./TESTING.md
```
### 2. Customize for Your Repo
Edit `test_all.sh` and replace placeholders:
- `{{REPO_NAME}}` → Your repository name
- `{{PROJECT_DESCRIPTION}}` → Brief project description
- Suite functions → Adapt to your project structure
Edit `.github/workflows/test-orchestrator.yml`:
- Update service containers if needed
- Adjust cache paths
- Modify environment variables
Edit `TESTING.md`:
- Update project-specific details
- Add your test suites
- Customize examples
### 3. Run Tests
```bash
./test_all.sh
```
---
## Adapting to Different Project Structures
### Example 1: Pure Python Project
```bash
# test_all.sh - Keep only Python suites
run_backend_tests() {
  # Your main Python package
}
run_cli_tests() {
  # CLI tool tests
}
run_integration_tests() {
  # Integration tests
}
```
### Example 2: Pure Node.js Project
```bash
# test_all.sh - Keep only Node suites
run_api_tests() {
  # Express/Fastify API
}
run_frontend_tests() {
  # React/Vue/Svelte
}
run_sdk_tests() {
  # Client SDK
}
```
### Example 3: Microservices
```bash
# test_all.sh - One suite per service
run_auth_service_tests() {
  cd services/auth && npm test
}
run_api_gateway_tests() {
  cd services/api-gateway && go test ./...
}
run_data_service_tests() {
  cd services/data && pytest
}
```
---
## Pattern Philosophy
### Core Principles
1. **One script to rule them all**: `test_all.sh` is the single source of truth
2. **CI = Local**: Same script runs everywhere
3. **Best-effort by default**: Run all suites, report at end
4. **Strict mode available**: Fail-fast when needed
5. **Clear output**: Color-coded, structured, summary table
### Benefits
- **Consistency**: Same test experience across all repos
- **Discoverability**: New contributors know exactly how to run tests
- **Maintainability**: One pattern to learn and maintain
- **Extensibility**: Easy to add new test suites
- **CI-friendly**: Works well with GitHub Actions
---
## Customization Guide
### Adding a New Test Suite
1. **Create suite function** in `test_all.sh`:
```bash
run_myapp_tests() {
  log_suite "MyApp (description)"
  local start_time=$(date +%s)

  # Check if suite exists
  if [[ ! -d "$ROOT/myapp" ]]; then
    log_skip "myapp/ directory not found"
    record_result "myapp" "SKIP" "0s"
    return 0
  fi

  cd "$ROOT/myapp"

  # Run your tests
  log_info "Running tests..."
  npm test  # or pytest, or whatever
  local exit_code=$?

  local end_time=$(date +%s)
  local duration=$((end_time - start_time))

  if [[ $exit_code -eq 0 ]]; then
    log_success "MyApp tests passed"
    record_result "myapp" "PASS" "${duration}s"
  else
    log_error "MyApp tests failed"
    record_result "myapp" "FAIL" "${duration}s"
    return 1
  fi

  cd "$ROOT"
}
```
2. **Add to main execution block**:
```bash
# In main execution section
if [[ -z "$SPECIFIC_SUITE" ]]; then
  run_backend_tests || true
  run_myapp_tests || true  # Add here
  # ...
fi
```
3. **Add to suite list** in `print_summary()`:
```bash
for suite in backend myapp frontend; do # Add "myapp"
# ...
done
```
4. **Add to --help**:
```bash
AVAILABLE SUITES:
  backend     Backend API tests
  myapp       MyApp tests        # Add here
  frontend    Frontend tests
```
### Removing Unnecessary Suites
If your project doesn't have certain suites (e.g., no TypeScript), simply remove:
1. The suite function (`run_sdk_typescript_tests`)
2. The call in main execution
3. The entry in `print_summary()`
4. The entry in `--help`
### Changing Test Frameworks
Replace test commands in suite functions:
```bash
# From pytest to unittest
pytest -v
# to
python -m unittest discover
# From Jest to Vitest
npm test
# to
npx vitest run
# From npm to pnpm
npm install
# to
pnpm install
```
---
## GitHub Actions Customization
### Service Containers
Add/remove as needed in `.github/workflows/test-orchestrator.yml`:
```yaml
services:
  postgres:          # Remove if not needed
    image: postgres:15-alpine
    # ...
  mysql:             # Add if needed
    image: mysql:8
    env:
      MYSQL_ROOT_PASSWORD: test
    # ...
  mongodb:           # Add if needed
    image: mongo:7
    # ...
```
### Matrix Testing
Test across multiple versions:
```yaml
jobs:
  orchestrator:
    strategy:
      matrix:
        python-version: ['3.9', '3.10', '3.11', '3.12']
        node-version: ['18', '20']
    steps:
      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
```
### Scheduled Runs
Add cron schedule:
```yaml
on:
  push:
    branches: ["main"]
  schedule:
    - cron: '0 2 * * *'  # Run at 2 AM daily
```
---
## Examples from Other Projects
### Example: Data Science Project
```bash
# test_all.sh
run_notebooks_tests() {
  log_suite "Jupyter Notebooks (validation)"
  jupyter nbconvert --to notebook --execute notebooks/*.ipynb
}
run_models_tests() {
  log_suite "ML Models (validation)"
  pytest tests/models/ -v
}
run_data_pipeline_tests() {
  log_suite "Data Pipeline (integration)"
  python -m data_pipeline.test_runner
}
```
### Example: Infrastructure Project
```bash
# test_all.sh
run_terraform_tests() {
  log_suite "Terraform (validation)"
  cd infra/terraform
  terraform fmt -check
  terraform validate
}
run_ansible_tests() {
  log_suite "Ansible (syntax check)"
  cd infra/ansible
  ansible-playbook --syntax-check playbook.yml
}
run_docker_tests() {
  log_suite "Docker (build test)"
  docker build -t myapp:test .
}
```
### Example: Mobile App Project
```bash
# test_all.sh
run_ios_tests() {
  log_suite "iOS App (XCTest)"
  cd ios
  xcodebuild test -scheme MyApp -destination 'platform=iOS Simulator,name=iPhone 15'
}
run_android_tests() {
  log_suite "Android App (JUnit)"
  cd android
  ./gradlew test
}
run_shared_tests() {
  log_suite "Shared Code (Kotlin Multiplatform)"
  cd shared
  ./gradlew allTests
}
---
## AI Assistant Prompt
Use this prompt to have AI assistants (Claude/Copilot/ChatGPT) adapt these templates:
```
I want to use the BlackRoad Test Orchestrator pattern in my repository.
Here's my project structure:
[paste tree output or describe structure]
Here are my test commands:
- Component A: [command]
- Component B: [command]
- Component C: [command]
Please:
1. Adapt test_all.sh.template to my project
2. Customize test-orchestrator.yml.template for my CI needs
3. Update TESTING.md.template with my project details
Keep the same structure and philosophy, just adapt the suite functions and paths.
```
---
## Migration from Existing Test Scripts
If you already have test scripts:
### 1. Inventory Existing Scripts
```bash
find . -name "*test*" -type f -executable
# List all test-related scripts
```
### 2. Map to Orchestrator Suites
| Old Script | New Suite Function |
|------------|-------------------|
| `scripts/test-backend.sh` | `run_backend_tests()` |
| `scripts/test-frontend.sh` | `run_frontend_tests()` |
| `Makefile` target `test` | Suite functions |
### 3. Migrate Gradually
Keep old scripts during transition:
```bash
run_backend_tests() {
  log_suite "Backend"
  # Call the old script temporarily, recording the result from its exit code
  if bash scripts/test-backend.sh; then
    record_result "backend" "PASS" "Xs"
  else
    record_result "backend" "FAIL" "Xs"
  fi
}
```
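The `Xs` placeholder durations can later be replaced with measured values by timing the wrapped command. A small sketch (`time_suite` is a hypothetical helper, and `sleep 1` stands in for the legacy test script being wrapped):

```shell
# Sketch of replacing a placeholder duration with a measured one;
# time_suite is a hypothetical helper, and `sleep 1` stands in for the
# legacy test script being wrapped.
time_suite() {
  local start end
  start=$(date +%s)
  "$@"
  local status=$?
  end=$(date +%s)
  DURATION=$((end - start))   # seconds elapsed, usable in record_result
  return $status
}

time_suite sleep 1
echo "took ${DURATION}s"
```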
### 4. Refactor Over Time
Once orchestrator is working, gradually refactor suite functions to be self-contained.
---
## Troubleshooting
### Common Adaptation Issues
**Issue**: Suite detection doesn't work
**Fix**: Check directory paths in suite functions (`if [[ ! -d "$ROOT/myapp" ]]`)
**Issue**: Tests fail in CI but pass locally
**Fix**: Check environment variables, service containers, and paths
**Issue**: Colors don't show in CI
**Fix**: This is normal; GitHub Actions strips color codes from logs
**Issue**: --suite flag doesn't work for new suite
**Fix**: Add new suite to the case statement in main execution
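The case statement in question typically looks like this sketch (suite names and the `run_*` stub functions are illustrative, not the actual script):

```shell
# Illustrative --suite dispatch; the run_* functions here are stubs
# standing in for real suite functions.
run_backend_tests()  { echo "backend suite"; }
run_frontend_tests() { echo "frontend suite"; }

dispatch_suite() {
  case "$1" in
    backend)  run_backend_tests ;;
    frontend) run_frontend_tests ;;
    # myapp) run_myapp_tests ;;   # forgetting to add the new suite here is the usual bug
    *) echo "Unknown suite: $1" >&2; return 2 ;;
  esac
}

dispatch_suite backend
```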
---
## Support
- **Original Implementation**: BlackRoad Operating System
- **Issues**: Report in your repository's issue tracker
- **Questions**: Refer to `TESTING.md` in BlackRoad repo for detailed examples
---
## License
These templates are provided as-is from the BlackRoad Operating System project.
Adapt freely for your own projects.
---
**Happy Testing! 🧪✨**


@@ -0,0 +1,376 @@
# Testing Guide - {{REPO_NAME}}
> **Comprehensive testing documentation**
---
## Table of Contents
1. [Overview](#overview)
2. [Quick Start](#quick-start)
3. [Test Orchestrator](#test-orchestrator)
4. [Test Suites](#test-suites)
5. [Local Development](#local-development)
6. [CI/CD Integration](#cicd-integration)
7. [Writing Tests](#writing-tests)
8. [Troubleshooting](#troubleshooting)
9. [Best Practices](#best-practices)
---
## Overview
{{PROJECT_DESCRIPTION}}
This project uses a **unified test orchestrator** that coordinates all test suites. This ensures consistent testing behavior between local development and CI/CD environments.
### Testing Philosophy
- **Comprehensive**: All components are tested
- **Fast Feedback**: Tests run quickly with clear output
- **Isolated**: Each suite runs independently
- **Consistent**: Same test runner works locally and in CI
- **Extensible**: Easy to add new test suites
### Test Coverage
| Component | Framework | Location | Status |
|-----------|-----------|----------|--------|
| TODO: Component 1 | TODO: Framework | `path/to/tests/` | ✅ Active |
| TODO: Component 2 | TODO: Framework | `path/to/tests/` | ✅ Active |
<!-- Add your test suites here -->
---
## Quick Start
### Run All Tests
```bash
# Best-effort mode - runs all suites, reports summary
./test_all.sh
# Strict mode - fails on first error
./test_all.sh --strict
# Verbose output
./test_all.sh --verbose
```
### Run Specific Suite
```bash
# TODO: Replace with your actual suite names
./test_all.sh --suite example-suite-1
./test_all.sh --suite example-suite-2
```
---
## Test Orchestrator
The **Test Orchestrator** (`test_all.sh`) is the central test runner.
### Features
✅ **Unified Interface**: Single command to run all tests
✅ **Smart Detection**: Automatically finds and runs test suites
✅ **Two Modes**: Best-effort (run all) and strict (fail-fast)
✅ **Clear Output**: Color-coded, structured output with summary
✅ **CI-Friendly**: Same script works locally and in GitHub Actions
### Usage
```bash
# Run all suites with detailed output
./test_all.sh --verbose
# Run specific suite in strict mode
./test_all.sh --suite my-suite --strict
# Get help
./test_all.sh --help
```
### Output Example
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
{{REPO_NAME}} - Test Orchestrator
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
▶ Example Suite 1
✓ Tests passed
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TEST SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Suite               Result    Duration
─────────────────────────────────────────────────────────
example-suite-1     ✓ PASS    5s
Total: 1 suites | 1 passed | 0 failed | 0 skipped
✅ ALL TESTS PASSED
```
---
## Test Suites
<!-- TODO: Document each of your test suites -->
### 1. Example Suite
**Framework**: pytest / jest / etc.
**Location**: `path/to/tests/`
**Coverage**: Description of what this suite tests
#### Running This Suite
```bash
# Via orchestrator
./test_all.sh --suite example-suite-1
# Direct command
# TODO: Add direct test command
```
<!-- Add more test suite sections as needed -->
---
## Local Development
### Prerequisites
<!-- TODO: List your required tools -->
| Tool | Version | Purpose |
|------|---------|---------|
| **Python** | 3.11+ | Backend tests |
| **Node.js** | 20+ | Frontend tests |
| **Git** | 2.30+ | Version control |
### Setup
```bash
# 1. Clone repository
git clone https://github.com/{{ORG_NAME}}/{{REPO_NAME}}.git
cd {{REPO_NAME}}
# 2. Install dependencies
# TODO: Add your installation commands
# 3. Run tests
./test_all.sh
```
### Environment Variables
```bash
# TODO: Document required environment variables
# Example:
DATABASE_URL=postgresql://localhost/testdb
API_KEY=your-test-api-key
```
---
## CI/CD Integration
### GitHub Actions Workflow
**File**: `.github/workflows/test-orchestrator.yml`
The CI workflow uses the same `test_all.sh` script as local development.
#### Workflow Features
✅ Runs on push to main branch
✅ Runs on pull requests
✅ Supports manual dispatch
✅ Uploads test artifacts
✅ Generates test summary
#### Triggering CI
```bash
# Push to main
git push origin main
# Create pull request
gh pr create --title "My Feature" --body "Description"
```
#### Manual Dispatch
1. Go to **Actions** → **Test Orchestrator**
2. Click **Run workflow**
3. Select branch and options
---
## Writing Tests
### Test Structure
```
tests/
├── unit/ # Fast, isolated tests
├── integration/ # Tests with dependencies
├── fixtures/ # Shared test data
└── conftest.py # Shared configuration
```
### Example Test
<!-- TODO: Add framework-specific examples -->
```python
# Example pytest test
def test_example():
assert 1 + 1 == 2
```
```javascript
// Example Jest test
test('example test', () => {
expect(1 + 1).toBe(2);
});
```
### Test Naming
- **Files**: `test_*.py` or `*.test.ts`
- **Functions**: `test_feature_scenario()`
- **Classes**: `TestFeature`
---
## Troubleshooting
### Common Issues
#### Issue: Permission Denied
```bash
# Solution
chmod +x test_all.sh
```
#### Issue: Command Not Found
```bash
# Solution: Install missing tool
# TODO: Add installation commands for your tools
```
#### Issue: Tests Pass Locally but Fail in CI
**Possible causes**:
- Missing environment variables
- Different tool versions
- Service dependencies not ready
**Solution**: Check CI logs for specific errors
### Debugging
```bash
# Verbose output
./test_all.sh --verbose
# Run single test
# TODO: Add framework-specific command
```
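Framework-specific single-test invocations typically look like this (paths and test names below are placeholders):

```bash
# pytest: run one test function by node ID
pytest tests/unit/test_example.py::test_example -v

# Jest: run only tests whose name matches a pattern
npx jest -t "example test"
```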
---
## Best Practices
### General
1. **Write tests first** (TDD when possible)
2. **Keep tests isolated** and independent
3. **Use descriptive names**
4. **Mock external dependencies**
5. **Test both success and error paths**
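Point 4 in practice: a sketch using `unittest.mock`, where `PriceClient` and `fetch_price` are hypothetical stand-ins for a real external dependency:

```python
from unittest.mock import Mock

class PriceClient:
    """Hypothetical external dependency (e.g. wraps an HTTP API)."""
    def get(self, symbol):
        raise RuntimeError("would hit the network")

def fetch_price(client, symbol):
    # Hypothetical business logic under test
    return client.get(symbol) * 2

def test_fetch_price_doubles_quote():
    # Mock(spec=...) rejects calls to attributes the real class lacks
    client = Mock(spec=PriceClient)
    client.get.return_value = 21
    assert fetch_price(client, "ACME") == 42
    client.get.assert_called_once_with("ACME")

test_fetch_price_doubles_quote()
```

The mock keeps the test fast, isolated, and able to exercise error paths the real service rarely produces.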
### Specific to Your Stack
<!-- TODO: Add framework-specific best practices -->
---
## FAQ
### Q: How do I add a new test suite?
**A**: Edit `test_all.sh`:
1. Add a new `run_mysuite_tests()` function
2. Call it in the main execution block
3. Update the suite list in `print_summary()`
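A sketch of such a function, following the same conventions as the existing suites (`mysuite`, its path, and `my_test_command` are placeholders):

```shell
run_mysuite_tests() {
  log_suite "My Suite (description)"
  local start_time=$(date +%s)

  # Skip gracefully when the suite directory is absent
  if [[ ! -d "$ROOT/path/to/mysuite" ]]; then
    log_skip "path/to/mysuite directory not found"
    record_result "mysuite" "SKIP" "0s"
    return 0
  fi

  cd "$ROOT/path/to/mysuite"
  my_test_command            # placeholder: pytest, npm test, go test ./..., etc.
  local exit_code=$?
  local duration=$(( $(date +%s) - start_time ))

  if [[ $exit_code -eq 0 ]]; then
    record_result "mysuite" "PASS" "${duration}s"
  else
    record_result "mysuite" "FAIL" "${duration}s"
    return 1
  fi
  cd "$ROOT"
}
```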
### Q: How do I skip a test temporarily?
**A**: Use your framework's skip decorator:
```python
# Pytest
@pytest.mark.skip(reason="TODO")
def test_something():
    pass
```
```javascript
// Jest
test.skip('something', () => {
  // Skipped
});
```
### Q: How do I run tests with coverage?
**A**: Use framework-specific coverage tools:
```bash
# TODO: Add coverage commands for your frameworks
```
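Typical invocations look like this (the Python command assumes the `pytest-cov` plugin is installed):

```bash
# Python: pytest with the pytest-cov plugin
pytest --cov=src --cov-report=term --cov-report=xml

# JavaScript/TypeScript: Jest ships with built-in coverage
npx jest --coverage
```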
---
## Resources
- [Project README](./README.md)
- [Contributing Guide](./CONTRIBUTING.md)
<!-- TODO: Add links to your documentation -->
### Testing Frameworks
<!-- TODO: Add links to framework docs you use -->
- [pytest](https://docs.pytest.org/)
- [Jest](https://jestjs.io/)
---
## Contributing
When adding tests:
1. **Run all tests** before submitting PR: `./test_all.sh`
2. **Ensure tests pass** in CI
3. **Add tests for new features**
4. **Update this documentation** if changing test infrastructure
---
**Questions?** Open an issue or check the project documentation.
**Happy Testing! 🧪✨**
---
*This testing guide uses the BlackRoad Operating System Test Orchestrator pattern.*
*Adapted from: https://github.com/blackboxprogramming/BlackRoad-Operating-System*

---

**File**: `.templates/test-orchestrator/test-orchestrator.yml.template`
# GitHub Actions Workflow - Test Orchestrator
#
# Adapted from: BlackRoad Operating System Test Orchestrator Pattern
#
# TODO: Customize this workflow for your project
# 1. Update trigger branches
# 2. Add/remove service containers as needed
# 3. Adjust cache paths
# 4. Update environment variables
# 5. Modify suite options in workflow_dispatch
name: Test Orchestrator - All Suites

on:
  push:
    branches:
      - "main"
      # TODO: Add your branch patterns here
      # - "develop"
      # - "claude/**"
      # - "feature/**"
  pull_request:
    branches: ["main"]
  workflow_dispatch:
    inputs:
      suite:
        description: 'Specific test suite to run (leave empty for all)'
        required: false
        type: choice
        options:
          - ''
          # TODO: Add your suite names here
          - 'example-suite-1'
          - 'example-suite-2'
      strict_mode:
        description: 'Enable strict mode (fail-fast)'
        required: false
        type: boolean
        default: false
jobs:
  orchestrator:
    name: Run Test Orchestrator
    runs-on: ubuntu-latest
    timeout-minutes: 30

    # TODO: Add/remove service containers as needed
    # services:
    #   postgres:
    #     image: postgres:15-alpine
    #     env:
    #       POSTGRES_USER: testuser
    #       POSTGRES_PASSWORD: testpass
    #       POSTGRES_DB: testdb
    #     ports:
    #       - 5432:5432
    #     options: >-
    #       --health-cmd pg_isready
    #       --health-interval 10s
    #       --health-timeout 5s
    #       --health-retries 5
    #
    #   redis:
    #     image: redis:7-alpine
    #     ports:
    #       - 6379:6379
    #     options: >-
    #       --health-cmd "redis-cli ping"
    #       --health-interval 10s
    #       --health-timeout 5s
    #       --health-retries 5
    #
    #   mysql:
    #     image: mysql:8
    #     env:
    #       MYSQL_ROOT_PASSWORD: testpass
    #       MYSQL_DATABASE: testdb
    #     ports:
    #       - 3306:3306
    #
    #   mongodb:
    #     image: mongo:7
    #     ports:
    #       - 27017:27017
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      # TODO: Setup language runtimes as needed
      # Uncomment and customize based on your stack

      # Python setup
      # - name: Setup Python
      #   uses: actions/setup-python@v5
      #   with:
      #     python-version: '3.11'  # TODO: Set your Python version
      #     cache: 'pip'
      #     cache-dependency-path: |
      #       requirements.txt
      #       # Add more dependency files here

      # Node.js setup
      # - name: Setup Node.js
      #   uses: actions/setup-node@v4
      #   with:
      #     node-version: '20'  # TODO: Set your Node version
      #     cache: 'npm'  # or 'yarn' or 'pnpm'
      #     cache-dependency-path: 'package-lock.json'

      # Go setup
      # - name: Setup Go
      #   uses: actions/setup-go@v5
      #   with:
      #     go-version: '1.21'  # TODO: Set your Go version
      #     cache: true

      # Rust setup
      # - name: Setup Rust
      #   uses: actions-rs/toolchain@v1
      #   with:
      #     toolchain: stable
      #     profile: minimal

      # Java setup
      # - name: Setup Java
      #   uses: actions/setup-java@v4
      #   with:
      #     java-version: '17'  # TODO: Set your Java version
      #     distribution: 'temurin'

      # TODO: Install system dependencies if needed
      # - name: Install system dependencies
      #   run: |
      #     sudo apt-get update
      #     sudo apt-get install -y --no-install-recommends \
      #       build-essential \
      #       libpq-dev
      #     # Add more packages as needed (keep the backslash continuation intact)

      # TODO: Create environment files if needed
      # - name: Create test environment file
      #   run: |
      #     cat > .env.test << EOF
      #     DATABASE_URL=postgresql://testuser:testpass@localhost:5432/testdb
      #     REDIS_URL=redis://localhost:6379/0
      #     SECRET_KEY=test-secret-key-$(openssl rand -hex 16)
      #     ENVIRONMENT=testing
      #     # Add more environment variables
      #     EOF
      - name: Make test orchestrator executable
        run: chmod +x test_all.sh

      - name: Run Test Orchestrator (All Suites)
        if: ${{ github.event.inputs.suite == '' }}
        run: |
          if [[ "${{ github.event.inputs.strict_mode }}" == "true" ]]; then
            ./test_all.sh --strict --verbose
          else
            ./test_all.sh --verbose
          fi

      - name: Run Test Orchestrator (Specific Suite)
        if: ${{ github.event.inputs.suite != '' }}
        run: |
          if [[ "${{ github.event.inputs.strict_mode }}" == "true" ]]; then
            ./test_all.sh --suite "${{ github.event.inputs.suite }}" --strict --verbose
          else
            ./test_all.sh --suite "${{ github.event.inputs.suite }}" --verbose
          fi
      # TODO: Upload test artifacts
      # Customize paths based on your test output locations
      # - name: Upload test artifacts
      #   if: always()
      #   uses: actions/upload-artifact@v4
      #   with:
      #     name: test-results
      #     path: |
      #       test-results/
      #       coverage/
      #       *.log
      #     retention-days: 7
      #     if-no-files-found: ignore

      - name: Generate test summary
        if: always()
        run: |
          echo "## 🧪 Test Orchestrator Summary" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Repository:** ${{ github.repository }}" >> $GITHUB_STEP_SUMMARY
          echo "**Branch:** ${{ github.ref_name }}" >> $GITHUB_STEP_SUMMARY
          echo "**Commit:** ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Test Results" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "Check the job output above for detailed test results." >> $GITHUB_STEP_SUMMARY
  # TODO: Add optional coverage job
  # coverage:
  #   name: Generate Coverage Report
  #   runs-on: ubuntu-latest
  #   needs: orchestrator
  #   if: github.event_name == 'pull_request'
  #
  #   steps:
  #     - name: Checkout repository
  #       uses: actions/checkout@v4
  #
  #     # Setup language runtime
  #     # Run tests with coverage
  #     # Upload to Codecov or similar
  #
  #     - name: Upload coverage to Codecov
  #       uses: codecov/codecov-action@v4
  #       with:
  #         file: ./coverage.xml
  #         flags: unittests
  #         name: coverage
  #         fail_ci_if_error: false
  #         token: ${{ secrets.CODECOV_TOKEN }}

  status-check:
    name: Final Status Check
    runs-on: ubuntu-latest
    needs: [orchestrator]
    if: always()
    steps:
      - name: Check orchestrator status
        run: |
          if [[ "${{ needs.orchestrator.result }}" != "success" ]]; then
            echo "❌ Test orchestrator failed or was cancelled"
            exit 1
          fi
          echo "✅ All test suites passed!"
      # TODO: Optional PR comment
      # - name: Post status to PR
      #   if: github.event_name == 'pull_request' && always()
      #   uses: actions/github-script@v7
      #   with:
      #     script: |
      #       const status = '${{ needs.orchestrator.result }}';
      #       const icon = status === 'success' ? '✅' : '❌';
      #       const message = status === 'success'
      #         ? 'All test suites passed!'
      #         : 'One or more test suites failed. Check the orchestrator job for details.';
      #
      #       github.rest.issues.createComment({
      #         issue_number: context.issue.number,
      #         owner: context.repo.owner,
      #         repo: context.repo.repo,
      #         body: `## ${icon} Test Orchestrator\n\n${message}\n\n[View Details](${context.payload.pull_request.html_url}/checks)`
      #       });

---

**File**: `.templates/test-orchestrator/test_all.sh.template`
#!/usr/bin/env bash
#
# test_all.sh - {{REPO_NAME}} Test Orchestrator
#
# {{PROJECT_DESCRIPTION}}
#
# This script runs all test suites across the repository in a coordinated fashion.
# Adapted from: BlackRoad Operating System Test Orchestrator Pattern
#
# Usage:
# ./test_all.sh # Best-effort mode (run all suites, report summary)
# ./test_all.sh --strict # Strict mode (fail on first error)
# ./test_all.sh --suite <name> # Run specific suite only
# ./test_all.sh --help # Show usage
#
# Available suites: {{LIST_YOUR_SUITES_HERE}}
# Example: backend, frontend, api, sdk, docs
#
set -uo pipefail
###############################################################################
# CONFIGURATION
###############################################################################
ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$ROOT"
STRICT_MODE=false
SPECIFIC_SUITE=""
VERBOSE=false
# Color codes for pretty output
if [[ -t 1 ]]; then
  RED='\033[0;31m'
  GREEN='\033[0;32m'
  YELLOW='\033[1;33m'
  BLUE='\033[0;34m'
  CYAN='\033[0;36m'
  BOLD='\033[1m'
  RESET='\033[0m'
else
  RED='' GREEN='' YELLOW='' BLUE='' CYAN='' BOLD='' RESET=''
fi
# Results tracking
declare -A SUITE_RESULTS
declare -A SUITE_TIMES
SUITES_RAN=0
SUITES_PASSED=0
SUITES_FAILED=0
SUITES_SKIPPED=0
###############################################################################
# HELPERS
###############################################################################
have() {
  command -v "$1" >/dev/null 2>&1
}
log_header() {
  echo ""
  echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}"
  echo -e "${BOLD}$1${RESET}"
  echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}"
}

log_suite() {
  echo ""
  echo -e "${BLUE}▶ $1${RESET}"
}

log_info() {
  echo -e "  ${CYAN}ℹ${RESET} $1"
}

log_success() {
  echo -e "  ${GREEN}✓${RESET} $1"
}

log_warning() {
  echo -e "  ${YELLOW}⚠${RESET} $1"
}

log_error() {
  echo -e "  ${RED}✗${RESET} $1"
}

log_skip() {
  echo -e "  ${YELLOW}⊘${RESET} $1"
}
record_result() {
  local suite=$1
  local result=$2   # PASS, FAIL, SKIP
  local duration=$3

  SUITE_RESULTS[$suite]=$result
  SUITE_TIMES[$suite]=$duration
  ((SUITES_RAN++))

  case $result in
    PASS) ((SUITES_PASSED++)) ;;
    FAIL) ((SUITES_FAILED++)) ;;
    SKIP) ((SUITES_SKIPPED++)) ;;
  esac

  if [[ "$result" == "FAIL" && "$STRICT_MODE" == "true" ]]; then
    log_error "Strict mode enabled - aborting on first failure"
    print_summary
    exit 1
  fi
}
print_summary() {
  echo ""
  log_header "TEST SUMMARY"
  echo ""

  # Summary table
  printf "${BOLD}%-25s %-10s %-15s${RESET}\n" "Suite" "Result" "Duration"
  echo "─────────────────────────────────────────────────────────"

  # TODO: Update this list with your actual suites
  for suite in example-suite-1 example-suite-2 example-suite-3; do
    if [[ -n "${SUITE_RESULTS[$suite]:-}" ]]; then
      result="${SUITE_RESULTS[$suite]}"
      duration="${SUITE_TIMES[$suite]}"
      case $result in
        PASS)
          printf "${GREEN}%-25s %-10s %-15s${RESET}\n" "$suite" "✓ PASS" "$duration"
          ;;
        FAIL)
          printf "${RED}%-25s %-10s %-15s${RESET}\n" "$suite" "✗ FAIL" "$duration"
          ;;
        SKIP)
          printf "${YELLOW}%-25s %-10s %-15s${RESET}\n" "$suite" "⊘ SKIP" "$duration"
          ;;
      esac
    fi
  done

  echo "─────────────────────────────────────────────────────────"
  echo ""
  echo -e "${BOLD}Total:${RESET} $SUITES_RAN suites | ${GREEN}$SUITES_PASSED passed${RESET} | ${RED}$SUITES_FAILED failed${RESET} | ${YELLOW}$SUITES_SKIPPED skipped${RESET}"
  echo ""

  if [[ $SUITES_FAILED -gt 0 ]]; then
    echo -e "${RED}${BOLD}❌ TESTS FAILED${RESET}"
    return 1
  else
    echo -e "${GREEN}${BOLD}✅ ALL TESTS PASSED${RESET}"
    return 0
  fi
}
###############################################################################
# TEST SUITE FUNCTIONS
# TODO: Customize these for your project!
###############################################################################
# EXAMPLE SUITE 1: Replace with your actual test suite
run_example_suite_1() {
  log_suite "Example Suite 1 (Description)"
  local start_time=$(date +%s)

  # Check if suite exists
  if [[ ! -d "$ROOT/path/to/suite1" ]]; then
    log_skip "path/to/suite1 directory not found"
    record_result "example-suite-1" "SKIP" "0s"
    return 0
  fi

  cd "$ROOT/path/to/suite1"

  # TODO: Add your test commands here. Examples:
  #   pytest -v
  #   npm test
  #   go test ./...
  #   cargo test
  log_info "Running tests..."
  # NOTE: $? captures the exit status of the most recent command, so place
  # your test command immediately before the line below.
  # YOUR_TEST_COMMAND_HERE
  local exit_code=$?

  local end_time=$(date +%s)
  local duration=$((end_time - start_time))

  if [[ $exit_code -eq 0 ]]; then
    log_success "Example Suite 1 tests passed"
    record_result "example-suite-1" "PASS" "${duration}s"
  else
    log_error "Example Suite 1 tests failed"
    record_result "example-suite-1" "FAIL" "${duration}s"
    return 1
  fi

  cd "$ROOT"
}
# EXAMPLE SUITE 2: Add more suites as needed
run_example_suite_2() {
  log_suite "Example Suite 2 (Description)"
  local start_time=$(date +%s)

  if [[ ! -d "$ROOT/path/to/suite2" ]]; then
    log_skip "path/to/suite2 directory not found"
    record_result "example-suite-2" "SKIP" "0s"
    return 0
  fi

  # TODO: Implement your test logic
  cd "$ROOT"
}
# Add more suite functions as needed...
###############################################################################
# COMMAND-LINE PARSING
###############################################################################
show_help() {
  cat << EOF
${BOLD}{{REPO_NAME}} - Test Orchestrator${RESET}

${BOLD}USAGE:${RESET}
  ./test_all.sh [OPTIONS]

${BOLD}OPTIONS:${RESET}
  --strict          Fail on first test suite failure (default: best-effort)
  --suite <name>    Run specific test suite only
  --verbose, -v     Show verbose test output
  --help, -h        Show this help message

${BOLD}AVAILABLE SUITES:${RESET}
  example-suite-1   Description of suite 1
  example-suite-2   Description of suite 2
  # TODO: Update this list with your actual suites

${BOLD}EXAMPLES:${RESET}
  ./test_all.sh                                    # Run all suites, best-effort mode
  ./test_all.sh --strict                           # Run all suites, fail-fast mode
  ./test_all.sh --suite example-suite-1            # Run only suite 1
  ./test_all.sh --suite example-suite-1 --verbose  # Verbose output

${BOLD}EXIT CODES:${RESET}
  0  All tests passed
  1  One or more test suites failed
EOF
}
while [[ $# -gt 0 ]]; do
  case $1 in
    --strict)
      STRICT_MODE=true
      shift
      ;;
    --suite)
      SPECIFIC_SUITE="$2"
      shift 2
      ;;
    --verbose|-v)
      VERBOSE=true
      shift
      ;;
    --help|-h)
      show_help
      exit 0
      ;;
    *)
      echo -e "${RED}Unknown option: $1${RESET}"
      echo "Use --help for usage information"
      exit 1
      ;;
  esac
done
###############################################################################
# MAIN EXECUTION
###############################################################################
log_header "{{REPO_NAME}} - Test Orchestrator"
if [[ "$STRICT_MODE" == "true" ]]; then
  log_info "Mode: ${RED}${BOLD}STRICT${RESET} (fail-fast)"
else
  log_info "Mode: ${GREEN}${BOLD}BEST-EFFORT${RESET} (run all suites)"
fi

if [[ -n "$SPECIFIC_SUITE" ]]; then
  log_info "Running suite: ${BOLD}$SPECIFIC_SUITE${RESET}"
fi
echo ""

# TODO: Update this section with your actual suite functions
if [[ -z "$SPECIFIC_SUITE" ]]; then
  # Run all suites
  run_example_suite_1 || true
  run_example_suite_2 || true
  # Add more suite calls here...
else
  # Run specific suite
  case $SPECIFIC_SUITE in
    example-suite-1)
      run_example_suite_1
      ;;
    example-suite-2)
      run_example_suite_2
      ;;
    # Add more cases here...
    *)
      log_error "Unknown suite: $SPECIFIC_SUITE"
      echo "Use --help to see available suites"
      exit 1
      ;;
  esac
fi
# Print summary and exit with appropriate code
print_summary
exit $?