Mirror of https://github.com/blackboxprogramming/BlackRoad-Operating-System.git (synced 2026-03-18 01:34:00 -05:00)
Create reusable test orchestrator and CI workflow (#104)
This commit implements a unified test orchestration system that
coordinates all test suites across the BlackRoad Operating System
monorepo, providing consistent testing behavior between local
development and CI/CD environments.
## Core Components
### 1. Test Orchestrator Script (test_all.sh)
- Unified interface to run all test suites
- Smart suite detection with existence checks
- Two operational modes:
* Best-effort: Run all suites, report summary (default)
* Strict mode: Fail-fast on first error (--strict)
- Color-coded, structured output with summary table
- Modular suite functions for easy extension
- Result tracking with pass/fail/skip status and duration
- Verbose mode for detailed test output
Supported test suites:
- Backend (FastAPI + pytest)
- Agents (200+ AI agent ecosystem)
- Operator Engine (GitHub automation)
- Python SDK (pytest)
- TypeScript SDK (Jest)
- Frontend (structure validation)
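The modular suite pattern described above can be sketched as follows. This is an illustrative outline, not the repository's actual `test_all.sh`; the function and variable names are invented for the demo.

```shell
#!/usr/bin/env bash
# Sketch of the orchestrator pattern: each suite is a function call with
# an existence check, result tracking, and optional fail-fast behavior.
set -u

declare -A RESULTS   # suite name -> pass | fail | skip
STRICT=false

run_suite() {
  local name="$1" dir="$2"
  shift 2
  if [[ ! -d "$dir" ]]; then        # smart suite detection: skip missing suites
    RESULTS["$name"]="skip"
    return 0
  fi
  local start=$SECONDS
  if (cd "$dir" && "$@"); then      # run the suite's test command
    RESULTS["$name"]="pass"
  else
    RESULTS["$name"]="fail"
    if $STRICT; then exit 1; fi     # strict mode: fail fast on first error
  fi
  echo "$name finished in $((SECONDS - start))s"
}

# Demo run with a throwaway directory and a trivially passing command.
tmp=$(mktemp -d)
run_suite demo "$tmp" true
run_suite missing "$tmp/absent" true

for s in "${!RESULTS[@]}"; do       # summary table
  printf '%-8s %s\n' "$s" "${RESULTS[$s]}"
done
```

The real script adds color-coded output and per-suite durations on top of this skeleton, but the detect/run/track/summarize loop is the core of the pattern.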
### 2. GitHub Actions Workflow (.github/workflows/test-orchestrator.yml)
- Runs orchestrator in CI using same script as local dev
- Service containers (Postgres, Redis) for integration tests
- Multi-language runtime setup (Python 3.11, Node 20)
- Dependency caching for faster builds
- Test artifact uploads (coverage, reports)
- Manual workflow dispatch with suite selection
- Coverage reporting for PRs (Codecov integration)
- Automatic PR status comments
### 3. Comprehensive Documentation (TESTING.md)
- Complete testing guide for developers and AI assistants
- Quick start examples
- Suite-by-suite documentation
- Local development setup instructions
- CI/CD integration guide
- Test writing best practices
- Troubleshooting FAQ with common issues and solutions
- Framework-specific examples
## Reusable Templates (.templates/test-orchestrator/)
Created generic templates for use in other repositories:
### Template Files
- test_all.sh.template - Generic orchestrator script
- test-orchestrator.yml.template - Generic CI workflow
- TESTING.md.template - Generic testing documentation
- PROMPTS.md - AI assistant prompts for implementation
- README.md - Template usage guide and customization instructions
### Key Features
- Clear placeholders ({{REPO_NAME}}, {{PROJECT_DESCRIPTION}}, etc.)
- Comprehensive inline comments
- Framework-agnostic design (Python/Node/Go/Rust examples)
- Adaptation guides for different project structures
- AI assistant prompts for Claude, Copilot, ChatGPT
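Filling in those placeholders can be done mechanically. A minimal sketch, assuming a hypothetical `render_template` helper — the placeholder names match the list above, but the substituted values and file paths are invented for the demo:

```shell
# Hypothetical helper: render a template by substituting {{...}} placeholders.
render_template() {
  local src="$1" dst="$2"
  sed -e 's/{{REPO_NAME}}/my-monorepo/g' \
      -e 's/{{PROJECT_DESCRIPTION}}/An example project/g' \
      "$src" > "$dst"
}

# Demo: create a tiny template file and render it.
tmp=$(mktemp -d)
printf '# {{REPO_NAME}}\n{{PROJECT_DESCRIPTION}}\n' > "$tmp/TESTING.md.template"
render_template "$tmp/TESTING.md.template" "$tmp/TESTING.md"
cat "$tmp/TESTING.md"
```

In practice you would run one substitution pass per placeholder when copying the templates into a new repository, then review the output by hand.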
### Use Cases
- Multi-language monorepos
- Microservices architectures
- Data science projects
- Infrastructure projects
- Any project needing unified test orchestration
## Benefits
1. **Consistency**: Same test experience locally and in CI
2. **Discoverability**: New contributors know exactly how to run tests
3. **Maintainability**: Single pattern to learn and maintain
4. **Extensibility**: Easy to add new test suites
5. **CI-Friendly**: Optimized for GitHub Actions
6. **Reusability**: Templates can be copied to any repo
## Usage
Local development:

```bash
./test_all.sh                  # Run all suites
./test_all.sh --strict         # Fail-fast mode
./test_all.sh --suite backend  # Run a specific suite
./test_all.sh --verbose        # Detailed output
```
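A minimal sketch of how such flags might be parsed — illustrative only, the actual script's implementation may differ:

```shell
# Illustrative flag parsing for an orchestrator like test_all.sh.
STRICT=false
VERBOSE=false
SUITE=""

parse_args() {
  while [[ $# -gt 0 ]]; do
    case "$1" in
      --strict)  STRICT=true ;;
      --verbose) VERBOSE=true ;;
      --suite)   SUITE="$2"; shift ;;   # consume the suite name argument
      *) echo "unknown option: $1" >&2; return 1 ;;
    esac
    shift
  done
}

parse_args --strict --suite backend
echo "strict=$STRICT verbose=$VERBOSE suite=$SUITE"
```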
CI triggers automatically on:
- Push to main, claude/**, copilot/**, codex/** branches
- Pull requests to main
- Manual workflow dispatch
## Migration Notes
This implementation:
- Preserves existing test scripts (scripts/run_backend_tests.sh)
- Works alongside existing CI workflows
- Can be adopted gradually or all at once
- Requires no changes to existing test code
## Future Enhancements
Potential additions:
- Matrix testing across Python/Node versions
- Performance benchmarking suite
- Flaky test detection
- Test result caching
- Slack/Discord notifications
---
Pattern adapted for: BlackRoad Operating System monorepo
Designed for: Maximum reusability across projects
Target audience: Developers, DevOps engineers, AI assistants
# Pull Request
## Description
<!-- Provide a brief description of the changes in this PR -->
## Type of Change
<!-- Mark the relevant option with an 'x' -->
- [ ] 📝 Documentation update
- [ ] 🧪 Tests only
- [ ] 🏗️ Scaffolding/stubs
- [ ] ✨ New feature
- [ ] 🐛 Bug fix
- [ ] ♻️ Refactoring
- [ ] ⚙️ Infrastructure/CI
- [ ] 📦 Dependencies update
- [ ] 🔒 Security fix
- [ ] 💥 Breaking change
## Checklist
<!-- Mark completed items with an 'x' -->
- [ ] Code follows the project's style guidelines
- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
## Auto-Merge Eligibility
<!-- This section helps determine if this PR qualifies for auto-merge -->
**Eligible for auto-merge?**
- [ ] Yes - This is a docs-only, tests-only, or small AI-generated PR
- [ ] No - Requires human review
**Reason for auto-merge eligibility:**
- [ ] Docs-only (Tier 1)
- [ ] Tests-only (Tier 2)
- [ ] Scaffolding < 200 lines (Tier 3)
- [ ] AI-generated < 500 lines (Tier 4)
- [ ] Dependency patch/minor (Tier 5)
**If not auto-merge eligible, why?**
- [ ] Breaking change
- [ ] Security-related
- [ ] Infrastructure changes
- [ ] Requires discussion
- [ ] Large PR (> 500 lines)
## Related Issues
<!-- Link to related issues -->
Closes #
Related to #
## Test Plan
<!-- Describe how you tested these changes -->
## Screenshots (if applicable)
<!-- Add screenshots for UI changes -->
---
**Note**: This PR will be automatically labeled based on files changed.
See `GITHUB_AUTOMATION_RULES.md` for details.
If this PR meets auto-merge criteria (see `AUTO_MERGE_POLICY.md`), it
will be automatically approved and merged after checks pass.
For questions about the merge queue system, see `MERGE_QUEUE_PLAN.md`.
## Files Changed

### `.github/workflows/test-orchestrator.yml` (new file, 291 lines)
```yaml
name: Test Orchestrator - All Suites

on:
  push:
    branches: ["main", "claude/**", "copilot/**", "codex/**"]
  pull_request:
    branches: ["main"]
  workflow_dispatch:
    inputs:
      suite:
        description: 'Specific test suite to run (leave empty for all)'
        required: false
        type: choice
        options:
          - ''
          - backend
          - agents
          - operator
          - sdk-python
          - sdk-typescript
          - frontend
      strict_mode:
        description: 'Enable strict mode (fail-fast)'
        required: false
        type: boolean
        default: false

jobs:
  orchestrator:
    name: Run Test Orchestrator
    runs-on: ubuntu-latest
    timeout-minutes: 30

    # Service containers for backend tests
    services:
      postgres:
        image: postgres:15-alpine
        env:
          POSTGRES_USER: blackroad
          POSTGRES_PASSWORD: test_password
          POSTGRES_DB: blackroad_test
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

      redis:
        image: redis:7-alpine
        ports:
          - 6379:6379
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'
          cache-dependency-path: |
            backend/requirements.txt
            sdk/python/pyproject.toml
            agents/requirements.txt

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
          cache-dependency-path: 'sdk/typescript/package-lock.json'

      - name: Install system dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y --no-install-recommends \
            build-essential \
            libpq-dev

      - name: Create test environment file
        run: |
          mkdir -p backend
          cat > backend/.env << EOF
          DATABASE_URL=postgresql://blackroad:test_password@localhost:5432/blackroad_test
          DATABASE_ASYNC_URL=postgresql+asyncpg://blackroad:test_password@localhost:5432/blackroad_test
          REDIS_URL=redis://localhost:6379/0
          SECRET_KEY=test-secret-key-for-ci-$(openssl rand -hex 16)
          ALGORITHM=HS256
          ACCESS_TOKEN_EXPIRE_MINUTES=30
          ENVIRONMENT=testing
          DEBUG=True
          ALLOWED_ORIGINS=http://localhost:3000,http://localhost:8000
          WALLET_MASTER_KEY=test-master-key-32-characters-long
          EOF

      - name: Run Test Orchestrator (All Suites)
        if: ${{ github.event.inputs.suite == '' }}
        run: |
          if [[ "${{ github.event.inputs.strict_mode }}" == "true" ]]; then
            ./test_all.sh --strict --verbose
          else
            ./test_all.sh --verbose
          fi

      - name: Run Test Orchestrator (Specific Suite)
        if: ${{ github.event.inputs.suite != '' }}
        run: |
          if [[ "${{ github.event.inputs.strict_mode }}" == "true" ]]; then
            ./test_all.sh --suite "${{ github.event.inputs.suite }}" --strict --verbose
          else
            ./test_all.sh --suite "${{ github.event.inputs.suite }}" --verbose
          fi

      - name: Upload test artifacts (Backend)
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: backend-test-results
          path: |
            backend/test.db
            backend/.coverage
            backend/htmlcov/
            backend/pytest-report.xml
          retention-days: 7
          if-no-files-found: ignore

      - name: Upload test artifacts (Python SDK)
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: python-sdk-test-results
          path: |
            sdk/python/.coverage
            sdk/python/htmlcov/
            sdk/python/pytest-report.xml
          retention-days: 7
          if-no-files-found: ignore

      - name: Upload test artifacts (TypeScript SDK)
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: typescript-sdk-test-results
          path: |
            sdk/typescript/coverage/
            sdk/typescript/test-results/
          retention-days: 7
          if-no-files-found: ignore

      - name: Generate test summary
        if: always()
        run: |
          echo "## 🧪 Test Orchestrator Summary" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Repository:** ${{ github.repository }}" >> $GITHUB_STEP_SUMMARY
          echo "**Branch:** ${{ github.ref_name }}" >> $GITHUB_STEP_SUMMARY
          echo "**Commit:** ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Test Results" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "Check the job output above for detailed test results." >> $GITHUB_STEP_SUMMARY

  # Parallel suite jobs (optional - can run in parallel with orchestrator)
  backend-coverage:
    name: Backend Coverage Report
    runs-on: ubuntu-latest
    needs: orchestrator
    if: github.event_name == 'pull_request'

    services:
      postgres:
        image: postgres:15-alpine
        env:
          POSTGRES_USER: blackroad
          POSTGRES_PASSWORD: test_password
          POSTGRES_DB: blackroad_test
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

      redis:
        image: redis:7-alpine
        ports:
          - 6379:6379
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'
          cache-dependency-path: 'backend/requirements.txt'

      - name: Install dependencies
        run: |
          cd backend
          pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest-cov

      - name: Create test .env file
        run: |
          cd backend
          cat > .env << EOF
          DATABASE_URL=postgresql://blackroad:test_password@localhost:5432/blackroad_test
          DATABASE_ASYNC_URL=postgresql+asyncpg://blackroad:test_password@localhost:5432/blackroad_test
          REDIS_URL=redis://localhost:6379/0
          SECRET_KEY=test-secret-key-for-ci
          ALGORITHM=HS256
          ACCESS_TOKEN_EXPIRE_MINUTES=30
          ENVIRONMENT=testing
          DEBUG=True
          ALLOWED_ORIGINS=http://localhost:3000,http://localhost:8000
          WALLET_MASTER_KEY=test-master-key-32-characters-long
          EOF

      - name: Run pytest with coverage
        run: |
          cd backend
          pytest -v --cov=app --cov-report=xml --cov-report=term --cov-report=html

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        if: always()
        with:
          file: ./backend/coverage.xml
          flags: backend
          name: backend-coverage
          fail_ci_if_error: false
          token: ${{ secrets.CODECOV_TOKEN }}

      - name: Comment coverage on PR
        if: github.event_name == 'pull_request'
        uses: py-cov-action/python-coverage-comment-action@v3
        with:
          GITHUB_TOKEN: ${{ github.token }}
          MINIMUM_GREEN: 80
          MINIMUM_ORANGE: 60

  status-check:
    name: Final Status Check
    runs-on: ubuntu-latest
    needs: [orchestrator]
    if: always()

    steps:
      - name: Check orchestrator status
        run: |
          if [[ "${{ needs.orchestrator.result }}" != "success" ]]; then
            echo "❌ Test orchestrator failed or was cancelled"
            exit 1
          fi
          echo "✅ All test suites passed!"

      - name: Post status to PR
        if: github.event_name == 'pull_request' && always()
        uses: actions/github-script@v7
        with:
          script: |
            const status = '${{ needs.orchestrator.result }}';
            const icon = status === 'success' ? '✅' : '❌';
            const message = status === 'success'
              ? 'All test suites passed!'
              : 'One or more test suites failed. Check the orchestrator job for details.';

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## ${icon} Test Orchestrator\n\n${message}\n\n[View Details](${context.payload.pull_request.html_url}/checks)`
            });
```
### `.templates/README.md` (new file, 137 lines)
````markdown
# BlackRoad Templates

> **Reusable patterns and templates from BlackRoad Operating System**

This directory contains battle-tested templates and patterns that can be copied to other projects.

---

## Available Templates

### 🧪 [Test Orchestrator](./test-orchestrator/)

**Purpose**: Unified test orchestration for monorepos

**What's Included**:
- `test_all.sh.template` - Universal test orchestrator script
- `test-orchestrator.yml.template` - GitHub Actions workflow
- `TESTING.md.template` - Comprehensive testing documentation
- `PROMPTS.md` - AI assistant prompts for implementation
- `README.md` - Usage guide and customization instructions

**When to Use**:
- Multi-language monorepos
- Projects with multiple test suites
- Teams wanting consistent test execution
- CI/CD pipelines needing orchestration

**Quick Start**:
```bash
# Copy to your repo
cp -r .templates/test-orchestrator/* /path/to/your/repo/

# Customize for your project
# See test-orchestrator/README.md for details
```

**Used In**:
- BlackRoad Operating System (this repo)
- Can be adapted for any project

---

## Future Templates

Plans to add:

- **CI/CD Pipeline Templates** - Reusable GitHub Actions workflows
- **Docker Compose Stacks** - Development environment templates
- **API Documentation** - OpenAPI/Swagger templates
- **Project Structure** - New project scaffolding
- **Release Automation** - Semantic versioning and changelog generation

---

## Contributing Templates

When adding new templates to this directory:

1. **Create a subdirectory**: `.templates/my-template/`
2. **Include these files**:
   - `README.md` - Usage guide
   - Template files with `.template` extension
   - `PROMPTS.md` - AI assistant prompts (optional)
   - Examples (optional)
3. **Document in this README**: Add a section above
4. **Test thoroughly**: Ensure it works in multiple scenarios
5. **Keep generic**: Use placeholders like `{{REPO_NAME}}`

### Template Quality Standards

✅ **Good Template**:
- Clear placeholders (e.g., `{{TODO}}`, `{{REPLACE_ME}}`)
- Comprehensive comments
- Works out of the box after customization
- Includes usage documentation
- Provides examples

❌ **Poor Template**:
- Hardcoded values
- Missing documentation
- Assumes specific tools without checks
- No examples or guidance

---

## Usage Philosophy

These templates embody BlackRoad's core principles:

1. **Consistency**: Same patterns across all projects
2. **Discoverability**: Clear documentation and examples
3. **Extensibility**: Easy to customize and extend
4. **Simplicity**: No unnecessary complexity
5. **Practicality**: Battle-tested in production

When using these templates:

- **Don't blindly copy**: Understand what they do
- **Customize thoughtfully**: Adapt to your needs
- **Maintain quality**: Keep the same standards
- **Share improvements**: Contribute back when you find better approaches

---

## Template Versioning

Templates follow semantic versioning:

- **Major**: Breaking changes to structure or API
- **Minor**: New features, backwards compatible
- **Patch**: Bug fixes, documentation updates

Current versions:

- `test-orchestrator`: v1.0.0 (2025-11-18)

---

## Support

- **Documentation**: Each template has its own README
- **Issues**: Report in the BlackRoad Operating System repo
- **Questions**: Open a discussion in the main repo
- **AI Assistance**: Use PROMPTS.md files with Claude/Copilot

---

## License

Templates are provided as-is from BlackRoad Operating System.
Use freely in your own projects, commercial or personal.

Attribution appreciated but not required.

---

**Build amazing things! 🚀✨**
````
### `.templates/test-orchestrator/PROMPTS.md` (new file, 634 lines, truncated below)
|
|||||||
|
# AI Assistant Prompts for Test Orchestrator Pattern
|
||||||
|
|
||||||
|
> **Ready-to-use prompts for Claude, Copilot, ChatGPT, and other AI assistants**
|
||||||
|
|
||||||
|
These prompts help AI assistants implement the Test Orchestrator pattern in any repository.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Quick Prompt: Implement Test Orchestrator
|
||||||
|
|
||||||
|
Copy and paste this into your AI assistant:
|
||||||
|
|
||||||
|
```
|
||||||
|
You are an AI engineer helping me implement the BlackRoad Test Orchestrator pattern in my repository.
|
||||||
|
|
||||||
|
## Context
|
||||||
|
I want to create a unified test orchestrator script (test_all.sh) and matching CI workflow for my monorepo.
|
||||||
|
|
||||||
|
Repository structure:
|
||||||
|
[PASTE YOUR DIRECTORY STRUCTURE HERE - use `tree -L 2` or describe it]
|
||||||
|
|
||||||
|
Current test commands:
|
||||||
|
- Component 1: [your test command]
|
||||||
|
- Component 2: [your test command]
|
||||||
|
- Component 3: [your test command]
|
||||||
|
|
||||||
|
## Task
|
||||||
|
Using the BlackRoad Test Orchestrator templates located at:
|
||||||
|
- .templates/test-orchestrator/test_all.sh.template
|
||||||
|
- .templates/test-orchestrator/test-orchestrator.yml.template
|
||||||
|
- .templates/test-orchestrator/TESTING.md.template
|
||||||
|
|
||||||
|
Please:
|
||||||
|
|
||||||
|
1. **Create test_all.sh** adapted to my project structure
|
||||||
|
- Replace placeholder suite functions with my actual components
|
||||||
|
- Use the correct test commands for each suite
|
||||||
|
- Update the suite list in print_summary()
|
||||||
|
- Customize the help text
|
||||||
|
|
||||||
|
2. **Create .github/workflows/test-orchestrator.yml**
|
||||||
|
- Add service containers I need (e.g., Postgres, Redis, etc.)
|
||||||
|
- Set up the correct language runtimes (Python, Node, Go, etc.)
|
||||||
|
- Configure environment variables
|
||||||
|
- Add appropriate caching
|
||||||
|
|
||||||
|
3. **Create TESTING.md**
|
||||||
|
- Document my specific test suites
|
||||||
|
- Include my setup instructions
|
||||||
|
- Add troubleshooting for my stack
|
||||||
|
|
||||||
|
4. **Make it executable**
|
||||||
|
- Ensure test_all.sh has executable permissions
|
||||||
|
|
||||||
|
## Constraints
|
||||||
|
- Keep the same structure and helper functions from the template
|
||||||
|
- Maintain the two-mode design (best-effort and strict)
|
||||||
|
- Keep the summary table format
|
||||||
|
- Don't remove the color-coded output
|
||||||
|
- Follow the same naming conventions
|
||||||
|
|
||||||
|
## Output Format
|
||||||
|
Provide:
|
||||||
|
1. Complete test_all.sh file
|
||||||
|
2. Complete .github/workflows/test-orchestrator.yml file
|
||||||
|
3. Complete TESTING.md file
|
||||||
|
4. Brief explanation of what you changed
|
||||||
|
|
||||||
|
Ready? Let's implement this!
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Detailed Prompt: Custom Test Orchestrator
|
||||||
|
|
||||||
|
For more complex projects, use this expanded prompt:
|
||||||
|
|
||||||
|
```
|
||||||
|
I want to implement a comprehensive test orchestrator for my {{LANGUAGE/STACK}} project.
|
||||||
|
|
||||||
|
## Project Details
|
||||||
|
|
||||||
|
**Name**: {{REPO_NAME}}
|
||||||
|
**Description**: {{PROJECT_DESCRIPTION}}
|
||||||
|
**Primary Languages**: {{e.g., Python, TypeScript, Go, Rust}}
|
||||||
|
**Test Frameworks**: {{e.g., pytest, Jest, Go testing, Cargo}}
|
||||||
|
|
||||||
|
## Current Structure
|
||||||
|
|
||||||
|
```
|
||||||
|
[PASTE TREE OUTPUT OR DESCRIBE STRUCTURE]
|
||||||
|
```
|
||||||
|
|
||||||
|
## Test Suites
|
||||||
|
|
||||||
|
I have the following test suites:
|
||||||
|
|
||||||
|
1. **{{Suite Name}}**
|
||||||
|
- Location: `{{path}}`
|
||||||
|
- Framework: {{framework}}
|
||||||
|
- Command: `{{test command}}`
|
||||||
|
- Dependencies: {{databases, services, etc.}}
|
||||||
|
|
||||||
|
2. **{{Suite Name}}**
|
||||||
|
- Location: `{{path}}`
|
||||||
|
- Framework: {{framework}}
|
||||||
|
- Command: `{{test command}}`
|
||||||
|
- Dependencies: {{databases, services, etc.}}
|
||||||
|
|
||||||
|
[Add more suites...]
|
||||||
|
|
||||||
|
## Service Dependencies
|
||||||
|
|
||||||
|
For CI, I need:
|
||||||
|
- [ ] PostgreSQL (version: {{version}})
|
||||||
|
- [ ] Redis (version: {{version}})
|
||||||
|
- [ ] MySQL (version: {{version}})
|
||||||
|
- [ ] MongoDB (version: {{version}})
|
||||||
|
- [ ] Other: {{describe}}
|
||||||
|
|
||||||
|
## Environment Variables
|
||||||
|
|
||||||
|
My tests need these environment variables:
|
||||||
|
```bash
|
||||||
|
{{VAR_NAME}}={{description}}
|
||||||
|
{{VAR_NAME}}={{description}}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Special Requirements
|
||||||
|
|
||||||
|
- [ ] Need to run database migrations before tests
|
||||||
|
- [ ] Need to seed test data
|
||||||
|
- [ ] Need to build artifacts before testing
|
||||||
|
- [ ] Need to run linters/formatters
|
||||||
|
- [ ] Need to generate coverage reports
|
||||||
|
- [ ] Other: {{describe}}
|
||||||
|
|
||||||
|
## Task
|
||||||
|
|
||||||
|
Create a complete test orchestrator implementation based on the BlackRoad pattern that:
|
||||||
|
|
||||||
|
1. **test_all.sh** that:
|
||||||
|
- Runs all my test suites
|
||||||
|
- Handles my specific frameworks
|
||||||
|
- Sets up my service dependencies (locally or via Docker)
|
||||||
|
- Has clear, color-coded output
|
||||||
|
- Supports best-effort and strict modes
|
||||||
|
- Generates a summary table
|
||||||
|
|
||||||
|
2. **GitHub Actions workflow** that:
|
||||||
|
- Sets up all required service containers
|
||||||
|
- Installs all language runtimes
|
||||||
|
- Caches dependencies appropriately
|
||||||
|
- Runs the test orchestrator
|
||||||
|
- Uploads artifacts (coverage, reports)
|
||||||
|
- Posts results to PRs
|
||||||
|
|
||||||
|
3. **TESTING.md** that:
|
||||||
|
- Documents my setup process
|
||||||
|
- Explains each test suite
|
||||||
|
- Provides troubleshooting for my stack
|
||||||
|
- Includes examples for my frameworks
|
||||||
|
|
||||||
|
4. **Optional enhancements**:
|
||||||
|
- Matrix testing across versions
|
||||||
|
- Coverage reporting (Codecov integration)
|
||||||
|
- Performance benchmarking
|
||||||
|
- Slack/Discord notifications
|
||||||
|
|
||||||
|
## Output Format
|
||||||
|
|
||||||
|
Provide complete, ready-to-use files with:
|
||||||
|
- No placeholders (use my actual values)
|
||||||
|
- Comments explaining key sections
|
||||||
|
- Examples relevant to my stack
|
||||||
|
- Clear migration instructions from my current setup
|
||||||
|
|
||||||
|
Let's build this!
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Minimal Prompt: Quick Setup
|
||||||
|
|
||||||
|
For simple projects:
|
||||||
|
|
||||||
|
```
|
||||||
|
Implement the BlackRoad Test Orchestrator pattern for my project.
|
||||||
|
|
||||||
|
My test commands:
|
||||||
|
- {{component}}: {{command}}
|
||||||
|
- {{component}}: {{command}}
|
||||||
|
|
||||||
|
Create:
|
||||||
|
1. test_all.sh (based on .templates/test-orchestrator/test_all.sh.template)
|
||||||
|
2. .github/workflows/test-orchestrator.yml
|
||||||
|
3. Make it work with my stack: {{language/framework}}
|
||||||
|
|
||||||
|
Keep it simple and working. Go!
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Migration Prompt: From Existing Setup
|
||||||
|
|
||||||
|
If you already have test scripts:
|
||||||
|
|
||||||
|
```
|
||||||
|
I want to migrate my existing test scripts to the BlackRoad Test Orchestrator pattern.
|
||||||
|
|
||||||
|
## Current Setup
|
||||||
|
|
||||||
|
I currently run tests using:
|
||||||
|
[DESCRIBE CURRENT APPROACH - Makefile, scripts, npm scripts, etc.]
|
||||||
|
|
||||||
|
Current test scripts:
|
||||||
|
- {{path/to/script1}}: {{what it does}}
|
||||||
|
- {{path/to/script2}}: {{what it does}}
|
||||||
|
|
||||||
|
Current CI workflow:
|
||||||
|
[PASTE .github/workflows/*.yml or describe]
|
||||||
|
|
||||||
|
## Goal
|
||||||
|
|
||||||
|
Consolidate everything into:
|
||||||
|
1. Single test_all.sh orchestrator
|
||||||
|
2. Unified GitHub Actions workflow
|
||||||
|
3. Clear documentation
|
||||||
|
|
||||||
|
## Migration Strategy
|
||||||
|
|
||||||
|
I want to:
|
||||||
|
- [ ] Keep my existing scripts temporarily (call them from orchestrator)
|
||||||
|
- [ ] Refactor everything into the orchestrator directly
|
||||||
|
- [ ] Migrate gradually (one suite at a time)

Please:

1. Analyze my current setup
2. Propose a migration plan
3. Implement the orchestrator calling my existing scripts
4. Suggest refactoring opportunities

Let's migrate!
```

---

## Enhancement Prompt: Add Features

To add features to an existing orchestrator:

```
I already have the BlackRoad Test Orchestrator implemented.

I want to add:
- [ ] Matrix testing (test across {{Python 3.9, 3.10, 3.11}} or {{Node 18, 20, 22}})
- [ ] Coverage reporting with Codecov
- [ ] Performance benchmarking
- [ ] Slack/Discord notifications on failure
- [ ] Parallel test execution
- [ ] Test result caching
- [ ] Flaky test detection
- [ ] Other: {{describe}}

Current test_all.sh location: ./test_all.sh
Current workflow location: .github/workflows/test-orchestrator.yml

Please:
1. Show me how to implement {{feature}}
2. Update both test_all.sh and the workflow
3. Explain the tradeoffs
4. Provide configuration examples

Let's enhance!
```

---

## Troubleshooting Prompt

When something isn't working:

```
My Test Orchestrator isn't working correctly.

## Problem

{{DESCRIBE THE ISSUE}}

## What I've Tried

{{WHAT YOU'VE TRIED}}

## Error Output

```
{{PASTE ERROR OUTPUT}}
```

## Current Configuration

test_all.sh:
```bash
{{PASTE RELEVANT SECTION}}
```

Workflow:
```yaml
{{PASTE RELEVANT SECTION}}
```

## Environment

- OS: {{Linux/macOS/Windows}}
- CI: {{GitHub Actions/GitLab CI/CircleCI}}
- Languages: {{Python 3.11, Node 20, etc.}}

Please:
1. Diagnose the issue
2. Provide a fix
3. Explain why it happened
4. Suggest prevention strategies

Help!
```

---

## Framework-Specific Prompts

### Python (pytest) Project

```
Implement BlackRoad Test Orchestrator for my Python project.

Structure:
- src/myapp/
- tests/
  - unit/
  - integration/
  - e2e/
- pytest.ini
- requirements.txt
- requirements-dev.txt

I want:
- Separate suites for unit, integration, e2e
- Coverage reporting (pytest-cov)
- PostgreSQL for integration tests
- Environment isolation (venv)

Create test orchestrator with these suites.
```

### Node.js (Jest) Project

```
Implement BlackRoad Test Orchestrator for my Node.js monorepo.

Structure:
- packages/
  - api/ (Express, Jest)
  - web/ (React, Vitest)
  - sdk/ (TypeScript, Jest)
- package.json (workspace)

I want:
- Suite per package
- Coverage aggregation
- MongoDB for API tests
- Redis for caching tests

Create test orchestrator for this setup.
```

### Go Project

```
Implement BlackRoad Test Orchestrator for my Go project.

Structure:
- cmd/myapp/
- internal/
- pkg/
- tests/
  - unit/
  - integration/

Test commands:
- Unit: go test ./...
- Integration: go test -tags=integration ./tests/integration/...
- Benchmarks: go test -bench=. ./...

Create orchestrator that handles these.
```

### Rust Project

```
Implement BlackRoad Test Orchestrator for my Rust workspace.

Structure:
- crates/
  - api/
  - core/
  - cli/
- Cargo.toml (workspace)

Test commands:
- All: cargo test --all
- Per crate: cargo test -p <crate>
- Doc tests: cargo test --doc

Create orchestrator for Rust testing.
```

---

## Copilot-Specific Tips

When using **GitHub Copilot Chat**:

1. **Open the template files first** so Copilot has context
2. **Use the `/tests` command** to generate suite-specific tests
3. **Reference files explicitly**: "Using .templates/test-orchestrator/test_all.sh.template as a base..."
4. **Iterate in chat**: Ask for changes, then refine

Example Copilot Chat flow:

```
You: Using .templates/test-orchestrator/test_all.sh.template, create test_all.sh for my Python project

Copilot: [generates script]

You: Add a suite for my FastAPI backend in backend/

Copilot: [adds suite]

You: Make the PostgreSQL connection optional

Copilot: [updates script]
```

---

## Claude-Specific Tips

When using **Claude**:

1. **Provide full context** in one message
2. **Use code blocks** for templates
3. **Ask for complete files** rather than diffs
4. **Request explanations** for changes

Example Claude prompt:

```
I'm going to paste three template files. Then I want you to adapt them for my project.

[Template 1: test_all.sh.template]
{{paste file}}

[Template 2: test-orchestrator.yml.template]
{{paste file}}

[Template 3: TESTING.md.template]
{{paste file}}

My project structure:
{{describe}}

Please create adapted versions for my project, replacing all placeholders with real values.
```

---

## ChatGPT-Specific Tips

When using **ChatGPT**:

1. **Break the task into smaller steps** if the context window is limited
2. **Use the code interpreter** for testing syntax
3. **Ask for alternatives** to see different approaches
4. **Iterate on sections** rather than whole files

Example ChatGPT flow:

```
Me: I want to implement a test orchestrator. Here's my project structure: [paste]

ChatGPT: [suggests approach]

Me: Let's start with the test_all.sh script. Here's the template: [paste]

ChatGPT: [creates script]

Me: Now the GitHub Actions workflow...

[Continue]
```

---

## Tips for All AI Assistants

### 1. Provide Context

Good:
```
My Python monorepo has:
- Backend (FastAPI, pytest)
- Workers (Celery, pytest)
- SDK (httpx, pytest)

Currently I run: cd backend && pytest, cd workers && pytest, etc.

I want one test_all.sh to run everything.
```

Bad:
```
Make me a test script.
```

### 2. Show, Don't Tell

Good:
```
My test suite structure:
tests/
├── unit/
│   ├── test_auth.py
│   └── test_users.py
├── integration/
│   ├── test_api.py
│   └── test_db.py
└── conftest.py

I run: pytest tests/unit/ then pytest tests/integration/
```

Bad:
```
I have some tests in a tests folder.
```

### 3. Specify Exactly What You Need

Good:
```
Create:
1. test_all.sh with suites: backend, frontend, e2e
2. GitHub Actions workflow with Postgres and Redis
3. TESTING.md with setup instructions

Don't include coverage reporting yet.
```

Bad:
```
Make it work.
```

### 4. Iterate

It's okay to refine! After getting initial output:

```
This is close! Can you:
- Change the timeout from 30 to 60 minutes
- Add a suite for my docs (using mdbook test)
- Remove the Redis service (I don't need it)
```

---

## Success Checklist

After implementing, verify:

- [ ] `./test_all.sh` runs and shows a summary
- [ ] `./test_all.sh --help` shows your suites
- [ ] `./test_all.sh --suite <name>` runs a specific suite
- [ ] `./test_all.sh --strict` fails fast on errors
- [ ] GitHub Actions workflow passes
- [ ] CI output is clear and helpful
- [ ] TESTING.md documents your setup
- [ ] New developers can run tests easily

---

## Common Pitfalls to Avoid

❌ Don't:
- Hardcode paths without checking if they exist
- Skip error handling in suite functions
- Forget to update the suite list in `print_summary()`
- Remove the color output (it's helpful!)
- Make the script depend on complex setup

✅ Do:
- Check if directories exist before running tests
- Record results for every suite (PASS/FAIL/SKIP)
- Update all three places when adding a suite (function, main execution, summary)
- Keep the clean, color-coded output
- Make it work out of the box or fail clearly

---

## Next Steps After Implementation

1. **Test it locally**: `./test_all.sh`
2. **Commit the changes**: `git add test_all.sh .github/workflows/ TESTING.md`
3. **Push and verify CI**: Check GitHub Actions
4. **Update team docs**: Link to TESTING.md in README
5. **Share the pattern**: Help other teams adopt it

---

## Support

- **Original Pattern**: BlackRoad Operating System
- **Templates**: `.templates/test-orchestrator/`
- **Issues**: Open an issue in your repository
- **Questions**: Refer to TESTING.md

---

**Happy Orchestrating! 🎼✨**

---

*These prompts are designed to work with Claude, GitHub Copilot, ChatGPT, and other AI assistants.*
*Adapt them to your specific needs and workflow.*

---

`.templates/test-orchestrator/README.md` (new file, 478 lines):

# Test Orchestrator Templates

> **Reusable test orchestrator pattern for any monorepo**

These templates provide a complete testing infrastructure that you can copy to any repository. The pattern has been battle-tested in BlackRoad Operating System and is designed to be easily adaptable.

---

## What's Included

1. **`test_all.sh.template`** - Universal test orchestrator script
2. **`test-orchestrator.yml.template`** - GitHub Actions workflow
3. **`TESTING.md.template`** - Comprehensive testing documentation

---

## Quick Start: Copy to New Repo

### 1. Copy Templates

```bash
# In your target repository
mkdir -p .github/workflows

# Copy orchestrator script
cp /path/to/BlackRoad/.templates/test-orchestrator/test_all.sh.template ./test_all.sh
chmod +x test_all.sh

# Copy GitHub Actions workflow
cp /path/to/BlackRoad/.templates/test-orchestrator/test-orchestrator.yml.template .github/workflows/test-orchestrator.yml

# Copy documentation
cp /path/to/BlackRoad/.templates/test-orchestrator/TESTING.md.template ./TESTING.md
```

### 2. Customize for Your Repo

Edit `test_all.sh` and replace placeholders:

- `{{REPO_NAME}}` → Your repository name
- `{{PROJECT_DESCRIPTION}}` → Brief project description
- Suite functions → Adapt to your project structure

Edit `.github/workflows/test-orchestrator.yml`:

- Update service containers if needed
- Adjust cache paths
- Modify environment variables

Edit `TESTING.md`:

- Update project-specific details
- Add your test suites
- Customize examples

### 3. Run Tests

```bash
./test_all.sh
```

---

## Adapting to Different Project Structures

### Example 1: Pure Python Project

```bash
# test_all.sh - Keep only Python suites
run_backend_tests() {
  # Your main Python package
}

run_cli_tests() {
  # CLI tool tests
}

run_integration_tests() {
  # Integration tests
}
```

### Example 2: Pure Node.js Project

```bash
# test_all.sh - Keep only Node suites
run_api_tests() {
  # Express/Fastify API
}

run_frontend_tests() {
  # React/Vue/Svelte
}

run_sdk_tests() {
  # Client SDK
}
```

### Example 3: Microservices

```bash
# test_all.sh - One suite per service
run_auth_service_tests() {
  cd services/auth && npm test
}

run_api_gateway_tests() {
  cd services/api-gateway && go test ./...
}

run_data_service_tests() {
  cd services/data && pytest
}
```

---

## Pattern Philosophy

### Core Principles

1. **One script to rule them all**: `test_all.sh` is the single source of truth
2. **CI = Local**: Same script runs everywhere
3. **Best-effort by default**: Run all suites, report at the end
4. **Strict mode available**: Fail-fast when needed
5. **Clear output**: Color-coded, structured, summary table
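
The best-effort/strict split can be sketched in a few lines. This is an illustration only: `STRICT`, `FAILED_COUNT`, and `run_suite` are hypothetical names, not the exact identifiers used in `test_all.sh`.

```shell
#!/usr/bin/env bash
# Sketch of the two modes; names are illustrative, adapt to your script.
STRICT=false
FAILED_COUNT=0

run_suite() {
  # $1 = suite name, remaining args = command to run for this suite
  local name="$1"; shift
  if "$@"; then
    echo "PASS $name"
  else
    echo "FAIL $name"
    FAILED_COUNT=$((FAILED_COUNT + 1))
    # Strict mode: surface the failure immediately so the caller can stop.
    [[ "$STRICT" == true ]] && return 1
  fi
  return 0
}
```

In best-effort mode every suite runs and `FAILED_COUNT` feeds the final summary; in strict mode the first non-zero return aborts the run.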

### Benefits

✅ **Consistency**: Same test experience across all repos
✅ **Discoverability**: New contributors know exactly how to run tests
✅ **Maintainability**: One pattern to learn and maintain
✅ **Extensibility**: Easy to add new test suites
✅ **CI-friendly**: Works perfectly with GitHub Actions

---

## Customization Guide

### Adding a New Test Suite

1. **Create suite function** in `test_all.sh`:

```bash
run_myapp_tests() {
  log_suite "MyApp (description)"

  local start_time=$(date +%s)

  # Check if suite exists
  if [[ ! -d "$ROOT/myapp" ]]; then
    log_skip "myapp/ directory not found"
    record_result "myapp" "SKIP" "0s"
    return 0
  fi

  cd "$ROOT/myapp"

  # Run your tests
  log_info "Running tests..."
  npm test  # or pytest, or whatever
  local exit_code=$?

  local end_time=$(date +%s)
  local duration=$((end_time - start_time))

  # Return to the repo root before reporting, so a failing suite
  # doesn't leave later suites running from the wrong directory.
  cd "$ROOT"

  if [[ $exit_code -eq 0 ]]; then
    log_success "MyApp tests passed"
    record_result "myapp" "PASS" "${duration}s"
  else
    log_error "MyApp tests failed"
    record_result "myapp" "FAIL" "${duration}s"
    return 1
  fi
}
```

2. **Add to main execution block**:

```bash
# In main execution section
if [[ -z "$SPECIFIC_SUITE" ]]; then
  run_backend_tests || true
  run_myapp_tests || true  # Add here
  # ...
fi
```

3. **Add to suite list** in `print_summary()`:

```bash
for suite in backend myapp frontend; do  # Add "myapp"
  # ...
done
```
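
If your script doesn't already define them, `record_result` and `print_summary` can be backed by associative arrays (bash 4+). A rough sketch with illustrative names, not the exact implementation in `test_all.sh`:

```shell
#!/usr/bin/env bash
# Sketch only (requires bash 4+ for declare -A); adapt names to your script.
declare -A RESULTS DURATIONS

record_result() {
  # $1 = suite name, $2 = PASS/FAIL/SKIP, $3 = duration string
  RESULTS["$1"]="$2"
  DURATIONS["$1"]="$3"
}

print_summary() {
  # Suites that never recorded a result default to SKIP.
  local suite
  for suite in "$@"; do
    printf '%-20s %-6s %s\n' "$suite" "${RESULTS[$suite]:-SKIP}" "${DURATIONS[$suite]:-0s}"
  done
}
```

Storing results centrally is what lets the summary table stay accurate even when a suite exits early.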

4. **Add to --help**:

```bash
AVAILABLE SUITES:
  backend    Backend API tests
  myapp      MyApp tests   # Add here
  frontend   Frontend tests
```

### Removing Unnecessary Suites

If your project doesn't have certain suites (e.g., no TypeScript), simply remove:

1. The suite function (`run_sdk_typescript_tests`)
2. The call in main execution
3. The entry in `print_summary()`
4. The entry in `--help`

### Changing Test Frameworks

Replace test commands in suite functions:

```bash
# From pytest to unittest
pytest -v
# to
python -m unittest discover

# From Jest to Vitest
npm test
# to
npx vitest run

# From npm to pnpm
npm install
# to
pnpm install
```

---

## GitHub Actions Customization

### Service Containers

Add/remove as needed in `.github/workflows/test-orchestrator.yml`:

```yaml
services:
  postgres:  # Remove if not needed
    image: postgres:15-alpine
    # ...

  mysql:  # Add if needed
    image: mysql:8
    env:
      MYSQL_ROOT_PASSWORD: test
    # ...

  mongodb:  # Add if needed
    image: mongo:7
    # ...
```

### Matrix Testing

Test across multiple versions:

```yaml
jobs:
  orchestrator:
    strategy:
      matrix:
        python-version: ['3.9', '3.10', '3.11', '3.12']
        node-version: ['18', '20']

    steps:
      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
```

### Scheduled Runs

Add a cron schedule:

```yaml
on:
  push:
    branches: ["main"]
  schedule:
    - cron: '0 2 * * *'  # Run at 2 AM daily
```

---

## Examples from Other Projects

### Example: Data Science Project

```bash
# test_all.sh

run_notebooks_tests() {
  log_suite "Jupyter Notebooks (validation)"
  jupyter nbconvert --to notebook --execute notebooks/*.ipynb
}

run_models_tests() {
  log_suite "ML Models (validation)"
  pytest tests/models/ -v
}

run_data_pipeline_tests() {
  log_suite "Data Pipeline (integration)"
  python -m data_pipeline.test_runner
}
```

### Example: Infrastructure Project

```bash
# test_all.sh

run_terraform_tests() {
  log_suite "Terraform (validation)"
  cd infra/terraform
  terraform fmt -check
  terraform validate
}

run_ansible_tests() {
  log_suite "Ansible (syntax check)"
  cd infra/ansible
  ansible-playbook --syntax-check playbook.yml
}

run_docker_tests() {
  log_suite "Docker (build test)"
  docker build -t myapp:test .
}
```

### Example: Mobile App Project

```bash
# test_all.sh

run_ios_tests() {
  log_suite "iOS App (XCTest)"
  cd ios
  xcodebuild test -scheme MyApp -destination 'platform=iOS Simulator,name=iPhone 15'
}

run_android_tests() {
  log_suite "Android App (JUnit)"
  cd android
  ./gradlew test
}

run_shared_tests() {
  log_suite "Shared Code (Kotlin Multiplatform)"
  cd shared
  ./gradlew allTests
}
```

---

## AI Assistant Prompt

Use this prompt to have AI assistants (Claude/Copilot/ChatGPT) adapt these templates:

```
I want to use the BlackRoad Test Orchestrator pattern in my repository.

Here's my project structure:
[paste tree output or describe structure]

Here are my test commands:
- Component A: [command]
- Component B: [command]
- Component C: [command]

Please:
1. Adapt test_all.sh.template to my project
2. Customize test-orchestrator.yml.template for my CI needs
3. Update TESTING.md.template with my project details

Keep the same structure and philosophy, just adapt the suite functions and paths.
```

---

## Migration from Existing Test Scripts

If you already have test scripts:

### 1. Inventory Existing Scripts

```bash
# List all test-related scripts
# (GNU find; on macOS, use -perm +111 instead of -executable)
find . -name "*test*" -type f -executable
```

### 2. Map to Orchestrator Suites

| Old Script | New Suite Function |
|------------|-------------------|
| `scripts/test-backend.sh` | `run_backend_tests()` |
| `scripts/test-frontend.sh` | `run_frontend_tests()` |
| `Makefile` target `test` | Suite functions |

### 3. Migrate Gradually

Keep old scripts during the transition:

```bash
run_backend_tests() {
  log_suite "Backend"

  # Call old script temporarily
  bash scripts/test-backend.sh

  # Record result based on exit code
  if [[ $? -eq 0 ]]; then
    record_result "backend" "PASS" "Xs"
  else
    record_result "backend" "FAIL" "Xs"
  fi
}
```

### 4. Refactor Over Time

Once the orchestrator is working, gradually refactor the suite functions to be self-contained.

---

## Troubleshooting

### Common Adaptation Issues

**Issue**: Suite detection doesn't work
**Fix**: Check directory paths in suite functions (`if [[ ! -d "$ROOT/myapp" ]]`)

**Issue**: Tests fail in CI but pass locally
**Fix**: Check environment variables, service containers, and paths

**Issue**: Colors don't show in CI
**Fix**: Usually harmless; many scripts disable color when stdout is not a TTY, so force color on in CI if you want it

**Issue**: `--suite` flag doesn't work for a new suite
**Fix**: Add new suite to the case statement in main execution
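
That dispatch typically looks something like the following sketch (suite and function names are illustrative; `echo` stands in for actually calling the suite function):

```shell
#!/usr/bin/env bash
# Sketch of a --suite dispatch; adapt names to your script's case statement.
dispatch_suite() {
  case "$1" in
    backend)  echo "run_backend_tests" ;;
    frontend) echo "run_frontend_tests" ;;
    myapp)    echo "run_myapp_tests" ;;   # <-- new suites must be added here
    *)        echo "Unknown suite: $1" >&2; return 1 ;;
  esac
}
```

Forgetting the new branch makes `--suite myapp` fall through to the error case even though `run_myapp_tests` exists.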

---

## Support

- **Original Implementation**: BlackRoad Operating System
- **Issues**: Report in your repository's issue tracker
- **Questions**: Refer to `TESTING.md` in the BlackRoad repo for detailed examples

---

## License

These templates are provided as-is from the BlackRoad Operating System project.
Adapt freely for your own projects.

---

**Happy Testing! 🧪✨**

---

`.templates/test-orchestrator/TESTING.md.template` (new file, 376 lines):

# Testing Guide - {{REPO_NAME}}

> **Comprehensive testing documentation**

---

## Table of Contents

1. [Overview](#overview)
2. [Quick Start](#quick-start)
3. [Test Orchestrator](#test-orchestrator)
4. [Test Suites](#test-suites)
5. [Local Development](#local-development)
6. [CI/CD Integration](#cicd-integration)
7. [Writing Tests](#writing-tests)
8. [Troubleshooting](#troubleshooting)
9. [Best Practices](#best-practices)

---

## Overview

{{PROJECT_DESCRIPTION}}

This project uses a **unified test orchestrator** that coordinates all test suites. This ensures consistent testing behavior between local development and CI/CD environments.

### Testing Philosophy

- **Comprehensive**: All components are tested
- **Fast Feedback**: Tests run quickly with clear output
- **Isolated**: Each suite runs independently
- **Consistent**: Same test runner works locally and in CI
- **Extensible**: Easy to add new test suites

### Test Coverage

| Component | Framework | Location | Status |
|-----------|-----------|----------|--------|
| TODO: Component 1 | TODO: Framework | `path/to/tests/` | ✅ Active |
| TODO: Component 2 | TODO: Framework | `path/to/tests/` | ✅ Active |

<!-- Add your test suites here -->

---

## Quick Start

### Run All Tests

```bash
# Best-effort mode - runs all suites, reports summary
./test_all.sh

# Strict mode - fails on first error
./test_all.sh --strict

# Verbose output
./test_all.sh --verbose
```

### Run Specific Suite

```bash
# TODO: Replace with your actual suite names
./test_all.sh --suite example-suite-1
./test_all.sh --suite example-suite-2
```

---

## Test Orchestrator

The **Test Orchestrator** (`test_all.sh`) is the central test runner.

### Features

✅ **Unified Interface**: Single command to run all tests
✅ **Smart Detection**: Automatically finds and runs test suites
✅ **Two Modes**: Best-effort (run all) and strict (fail-fast)
✅ **Clear Output**: Color-coded, structured output with summary
✅ **CI-Friendly**: Same script works locally and in GitHub Actions

### Usage

```bash
# Run all suites with detailed output
./test_all.sh --verbose

# Run specific suite in strict mode
./test_all.sh --suite my-suite --strict

# Get help
./test_all.sh --help
```

### Output Example

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  {{REPO_NAME}} - Test Orchestrator
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

▶ Example Suite 1
  ✓ Tests passed

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  TEST SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  Suite                Result     Duration
  ─────────────────────────────────────────────────────────
  example-suite-1      ✓ PASS     5s

  Total: 1 suites | 1 passed | 0 failed | 0 skipped

  ✅ ALL TESTS PASSED
```

---

## Test Suites

<!-- TODO: Document each of your test suites -->

### 1. Example Suite

**Framework**: pytest / jest / etc.
**Location**: `path/to/tests/`
**Coverage**: Description of what this suite tests

#### Running This Suite

```bash
# Via orchestrator
./test_all.sh --suite example-suite-1

# Direct command
# TODO: Add direct test command
```

<!-- Add more test suite sections as needed -->

---

## Local Development

### Prerequisites

<!-- TODO: List your required tools -->

| Tool | Version | Purpose |
|------|---------|---------|
| **Python** | 3.11+ | Backend tests |
| **Node.js** | 20+ | Frontend tests |
| **Git** | 2.30+ | Version control |

### Setup

```bash
# 1. Clone repository
git clone https://github.com/{{ORG_NAME}}/{{REPO_NAME}}.git
cd {{REPO_NAME}}

# 2. Install dependencies
# TODO: Add your installation commands

# 3. Run tests
./test_all.sh
```

### Environment Variables

```bash
# TODO: Document required environment variables
# Example:
DATABASE_URL=postgresql://localhost/testdb
API_KEY=your-test-api-key
```

---

## CI/CD Integration

### GitHub Actions Workflow

**File**: `.github/workflows/test-orchestrator.yml`

The CI workflow uses the same `test_all.sh` script as local development.
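
In practice this boils down to a step like the following sketch; the surrounding checkout, runtime-setup, and service-container steps in your actual workflow will differ:

```yaml
# Sketch: CI invokes the same entry point developers run locally.
- name: Run test orchestrator
  run: ./test_all.sh
```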
|
||||||
|
|
||||||
|
#### Workflow Features
|
||||||
|
|
||||||
|
✅ Runs on push to main branch
|
||||||
|
✅ Runs on pull requests
|
||||||
|
✅ Supports manual dispatch
|
||||||
|
✅ Uploads test artifacts
|
||||||
|
✅ Generates test summary
|
||||||
|
|
||||||
|
#### Triggering CI
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Push to main
|
||||||
|
git push origin main
|
||||||
|
|
||||||
|
# Create pull request
|
||||||
|
gh pr create --title "My Feature" --body "Description"
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Manual Dispatch
|
||||||
|
|
||||||
|
1. Go to **Actions** → **Test Orchestrator**
|
||||||
|
2. Click **Run workflow**
|
||||||
|
3. Select branch and options
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Writing Tests
|
||||||
|
|
||||||
|
### Test Structure
|
||||||
|
|
||||||
|
```
|
||||||
|
tests/
|
||||||
|
├── unit/ # Fast, isolated tests
|
||||||
|
├── integration/ # Tests with dependencies
|
||||||
|
├── fixtures/ # Shared test data
|
||||||
|
└── conftest.py # Shared configuration
|
||||||
|
```

### Example Test

<!-- TODO: Add framework-specific examples -->

```python
# Example pytest test
def test_example():
    assert 1 + 1 == 2
```

```javascript
// Example Jest test
test('example test', () => {
  expect(1 + 1).toBe(2);
});
```

### Test Naming

- **Files**: `test_*.py` or `*.test.ts`
- **Functions**: `test_feature_scenario()`
- **Classes**: `TestFeature`

---

## Troubleshooting

### Common Issues

#### Issue: Permission Denied

```bash
# Solution
chmod +x test_all.sh
```

#### Issue: Command Not Found

```bash
# Solution: Install missing tool
# TODO: Add installation commands for your tools
```
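For this repo's suites the missing tool is usually one of the runtimes below; a quick check loop (tool names assumed from the suite list) narrows it down before installing with `pip` or `npm`:

```shell
# Report which expected tools are on PATH (adjust the list to your stack).
# Then install the missing ones, e.g.:
#   python3 -m pip install pytest
#   npm install --save-dev jest
for tool in python3 pip pytest node npm; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found:   $tool"
    else
        echo "missing: $tool"
    fi
done
```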

#### Issue: Tests Pass Locally but Fail in CI

**Possible causes**:

- Missing environment variables
- Different tool versions
- Service dependencies not ready

**Solution**: Check the CI logs for the first failing command, then compare tool versions and environment variables between CI and your machine.

### Debugging

```bash
# Verbose output
./test_all.sh --verbose

# Run single test
# TODO: Add framework-specific command
```
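For the Python suites, pytest's `path::function` node-id syntax selects one test; the throwaway file below exists purely to demonstrate the shape (for Jest, `npx jest -t 'name pattern'` is the equivalent):

```shell
# Select a single pytest test with path::function syntax.
# The demo test file is created on the fly; substitute a real test path.
cat > /tmp/test_single_demo.py << 'EOF'
def test_addition():
    assert 1 + 1 == 2
EOF
python3 -m pytest /tmp/test_single_demo.py::test_addition -q
```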

---

## Best Practices

### General

1. **Write tests first** (TDD when possible)
2. **Keep tests isolated** and independent
3. **Use descriptive names**
4. **Mock external dependencies**
5. **Test both success and error paths**

### Specific to Your Stack

<!-- TODO: Add framework-specific best practices -->

---

## FAQ

### Q: How do I add a new test suite?

**A**: Edit `test_all.sh`:

1. Add a new `run_mysuite_tests()` function
2. Call it in the main execution block
3. Update the suite list in `print_summary()`
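A minimal sketch of step 1 (the suite name `docs` and its command are placeholders; inside `test_all.sh` the `log_*` and `record_result` helpers already exist, so the stand-ins below are only there to make the sketch self-contained):

```shell
# Stand-ins for helpers that test_all.sh already defines:
log_suite()     { echo "> $1"; }
log_skip()      { echo "skip: $1"; }
record_result() { echo "result: $1=$2 ($3)"; }

run_docs_tests() {
    log_suite "Docs (placeholder suite)"
    local start_time=$(date +%s)

    if [ ! -d "docs" ]; then
        log_skip "docs/ directory not found"
        record_result "docs" "SKIP" "0s"
        return 0
    fi

    # TODO: real test command goes here (its exit code decides PASS/FAIL)
    local duration=$(( $(date +%s) - start_time ))
    record_result "docs" "PASS" "${duration}s"
}

run_docs_tests
```

Steps 2 and 3 then amount to calling `run_docs_tests || true` in the main block and adding `docs` to the suite loop in `print_summary()`.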

### Q: How do I skip a test temporarily?

**A**: Use your framework's skip decorator:

```python
# Pytest
@pytest.mark.skip(reason="TODO")
def test_something():
    pass
```

```javascript
// Jest
test.skip('something', () => {
  // Skipped
});
```

### Q: How do I run tests with coverage?

**A**: Use framework-specific coverage tools:

```bash
# TODO: Add coverage commands for your frameworks
```
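For the Python suites the usual tool is the `pytest-cov` plugin, and Jest has coverage built in; the snippet below only prints the command that applies after checking tool availability (package names are the commonly used ones, confirm against your lockfiles):

```shell
# Print the applicable coverage command without running anything heavy.
if python3 -c "import pytest_cov" >/dev/null 2>&1; then
    echo "python: pytest --cov=. --cov-report=term-missing"
else
    echo "python: pip install pytest-cov first"
fi
if command -v npx >/dev/null 2>&1; then
    echo "js/ts:  npx jest --coverage"
else
    echo "js/ts:  install Node.js first"
fi
```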

---

## Resources

- [Project README](./README.md)
- [Contributing Guide](./CONTRIBUTING.md)
<!-- TODO: Add links to your documentation -->

### Testing Frameworks

<!-- TODO: Add links to framework docs you use -->
- [pytest](https://docs.pytest.org/)
- [Jest](https://jestjs.io/)

---

## Contributing

When adding tests:

1. **Run all tests** before submitting a PR: `./test_all.sh`
2. **Ensure tests pass** in CI
3. **Add tests for new features**
4. **Update this documentation** if changing test infrastructure

---

**Questions?** Open an issue or check the project documentation.

**Happy Testing! 🧪✨**

---

*This testing guide uses the BlackRoad Operating System Test Orchestrator pattern.*

*Adapted from: https://github.com/blackboxprogramming/BlackRoad-Operating-System*
.templates/test-orchestrator/test-orchestrator.yml.template (new file, 256 lines):
# GitHub Actions Workflow - Test Orchestrator
#
# Adapted from: BlackRoad Operating System Test Orchestrator Pattern
#
# TODO: Customize this workflow for your project
#   1. Update trigger branches
#   2. Add/remove service containers as needed
#   3. Adjust cache paths
#   4. Update environment variables
#   5. Modify suite options in workflow_dispatch

name: Test Orchestrator - All Suites

on:
  push:
    branches:
      - "main"
      # TODO: Add your branch patterns here
      # - "develop"
      # - "claude/**"
      # - "feature/**"
  pull_request:
    branches: ["main"]
  workflow_dispatch:
    inputs:
      suite:
        description: 'Specific test suite to run (leave empty for all)'
        required: false
        type: choice
        options:
          - ''
          # TODO: Add your suite names here
          - 'example-suite-1'
          - 'example-suite-2'
      strict_mode:
        description: 'Enable strict mode (fail-fast)'
        required: false
        type: boolean
        default: false

jobs:
  orchestrator:
    name: Run Test Orchestrator
    runs-on: ubuntu-latest
    timeout-minutes: 30

    # TODO: Add/remove service containers as needed
    # services:
    #   postgres:
    #     image: postgres:15-alpine
    #     env:
    #       POSTGRES_USER: testuser
    #       POSTGRES_PASSWORD: testpass
    #       POSTGRES_DB: testdb
    #     ports:
    #       - 5432:5432
    #     options: >-
    #       --health-cmd pg_isready
    #       --health-interval 10s
    #       --health-timeout 5s
    #       --health-retries 5
    #
    #   redis:
    #     image: redis:7-alpine
    #     ports:
    #       - 6379:6379
    #     options: >-
    #       --health-cmd "redis-cli ping"
    #       --health-interval 10s
    #       --health-timeout 5s
    #       --health-retries 5
    #
    #   mysql:
    #     image: mysql:8
    #     env:
    #       MYSQL_ROOT_PASSWORD: testpass
    #       MYSQL_DATABASE: testdb
    #     ports:
    #       - 3306:3306
    #
    #   mongodb:
    #     image: mongo:7
    #     ports:
    #       - 27017:27017

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      # TODO: Setup language runtimes as needed
      # Uncomment and customize based on your stack

      # Python setup
      # - name: Setup Python
      #   uses: actions/setup-python@v5
      #   with:
      #     python-version: '3.11'  # TODO: Set your Python version
      #     cache: 'pip'
      #     cache-dependency-path: |
      #       requirements.txt
      #       # Add more dependency files here

      # Node.js setup
      # - name: Setup Node.js
      #   uses: actions/setup-node@v4
      #   with:
      #     node-version: '20'  # TODO: Set your Node version
      #     cache: 'npm'  # or 'yarn' or 'pnpm'
      #     cache-dependency-path: 'package-lock.json'

      # Go setup
      # - name: Setup Go
      #   uses: actions/setup-go@v5
      #   with:
      #     go-version: '1.21'  # TODO: Set your Go version
      #     cache: true

      # Rust setup
      # - name: Setup Rust
      #   uses: actions-rs/toolchain@v1
      #   with:
      #     toolchain: stable
      #     profile: minimal

      # Java setup
      # - name: Setup Java
      #   uses: actions/setup-java@v4
      #   with:
      #     java-version: '17'  # TODO: Set your Java version
      #     distribution: 'temurin'

      # TODO: Install system dependencies if needed
      # - name: Install system dependencies
      #   run: |
      #     sudo apt-get update
      #     sudo apt-get install -y --no-install-recommends \
      #       build-essential \
      #       libpq-dev
      #     # Add more packages as needed

      # TODO: Create environment files if needed
      # - name: Create test environment file
      #   run: |
      #     cat > .env.test << EOF
      #     DATABASE_URL=postgresql://testuser:testpass@localhost:5432/testdb
      #     REDIS_URL=redis://localhost:6379/0
      #     SECRET_KEY=test-secret-key-$(openssl rand -hex 16)
      #     ENVIRONMENT=testing
      #     # Add more environment variables
      #     EOF

      - name: Make test orchestrator executable
        run: chmod +x test_all.sh

      - name: Run Test Orchestrator (All Suites)
        if: ${{ github.event.inputs.suite == '' }}
        run: |
          if [[ "${{ github.event.inputs.strict_mode }}" == "true" ]]; then
            ./test_all.sh --strict --verbose
          else
            ./test_all.sh --verbose
          fi

      - name: Run Test Orchestrator (Specific Suite)
        if: ${{ github.event.inputs.suite != '' }}
        run: |
          if [[ "${{ github.event.inputs.strict_mode }}" == "true" ]]; then
            ./test_all.sh --suite "${{ github.event.inputs.suite }}" --strict --verbose
          else
            ./test_all.sh --suite "${{ github.event.inputs.suite }}" --verbose
          fi

      # TODO: Upload test artifacts
      # Customize paths based on your test output locations
      # - name: Upload test artifacts
      #   if: always()
      #   uses: actions/upload-artifact@v4
      #   with:
      #     name: test-results
      #     path: |
      #       test-results/
      #       coverage/
      #       *.log
      #     retention-days: 7
      #     if-no-files-found: ignore

      - name: Generate test summary
        if: always()
        run: |
          echo "## 🧪 Test Orchestrator Summary" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Repository:** ${{ github.repository }}" >> $GITHUB_STEP_SUMMARY
          echo "**Branch:** ${{ github.ref_name }}" >> $GITHUB_STEP_SUMMARY
          echo "**Commit:** ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Test Results" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "Check the job output above for detailed test results." >> $GITHUB_STEP_SUMMARY

  # TODO: Add optional coverage job
  # coverage:
  #   name: Generate Coverage Report
  #   runs-on: ubuntu-latest
  #   needs: orchestrator
  #   if: github.event_name == 'pull_request'
  #
  #   steps:
  #     - name: Checkout repository
  #       uses: actions/checkout@v4
  #
  #     # Setup language runtime
  #     # Run tests with coverage
  #     # Upload to Codecov or similar
  #
  #     - name: Upload coverage to Codecov
  #       uses: codecov/codecov-action@v4
  #       with:
  #         file: ./coverage.xml
  #         flags: unittests
  #         name: coverage
  #         fail_ci_if_error: false
  #         token: ${{ secrets.CODECOV_TOKEN }}

  status-check:
    name: Final Status Check
    runs-on: ubuntu-latest
    needs: [orchestrator]
    if: always()

    steps:
      - name: Check orchestrator status
        run: |
          if [[ "${{ needs.orchestrator.result }}" != "success" ]]; then
            echo "❌ Test orchestrator failed or was cancelled"
            exit 1
          fi
          echo "✅ All test suites passed!"

      # TODO: Optional PR comment
      # - name: Post status to PR
      #   if: github.event_name == 'pull_request' && always()
      #   uses: actions/github-script@v7
      #   with:
      #     script: |
      #       const status = '${{ needs.orchestrator.result }}';
      #       const icon = status === 'success' ? '✅' : '❌';
      #       const message = status === 'success'
      #         ? 'All test suites passed!'
      #         : 'One or more test suites failed. Check the orchestrator job for details.';
      #
      #       github.rest.issues.createComment({
      #         issue_number: context.issue.number,
      #         owner: context.repo.owner,
      #         repo: context.repo.repo,
      #         body: `## ${icon} Test Orchestrator\n\n${message}\n\n[View Details](${context.payload.pull_request.html_url}/checks)`
      #       });

.templates/test-orchestrator/test_all.sh.template (new file, 329 lines):
#!/usr/bin/env bash
#
# test_all.sh - {{REPO_NAME}} Test Orchestrator
#
# {{PROJECT_DESCRIPTION}}
#
# This script runs all test suites across the repository in a coordinated fashion.
# Adapted from: BlackRoad Operating System Test Orchestrator Pattern
#
# Usage:
#   ./test_all.sh                 # Best-effort mode (run all suites, report summary)
#   ./test_all.sh --strict        # Strict mode (fail on first error)
#   ./test_all.sh --suite <name>  # Run specific suite only
#   ./test_all.sh --help          # Show usage
#
# Available suites: {{LIST_YOUR_SUITES_HERE}}
# Example: backend, frontend, api, sdk, docs
#

set -uo pipefail

###############################################################################
# CONFIGURATION
###############################################################################

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$ROOT"

STRICT_MODE=false
SPECIFIC_SUITE=""
VERBOSE=false

# Color codes for pretty output
if [[ -t 1 ]]; then
    RED='\033[0;31m'
    GREEN='\033[0;32m'
    YELLOW='\033[1;33m'
    BLUE='\033[0;34m'
    CYAN='\033[0;36m'
    BOLD='\033[1m'
    RESET='\033[0m'
else
    RED='' GREEN='' YELLOW='' BLUE='' CYAN='' BOLD='' RESET=''
fi

# Results tracking
declare -A SUITE_RESULTS
declare -A SUITE_TIMES
SUITES_RAN=0
SUITES_PASSED=0
SUITES_FAILED=0
SUITES_SKIPPED=0

###############################################################################
# HELPERS
###############################################################################

have() {
    command -v "$1" >/dev/null 2>&1
}

log_header() {
    echo ""
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}"
    echo -e "${BOLD}$1${RESET}"
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}"
}

log_suite() {
    echo ""
    echo -e "${BLUE}▶ $1${RESET}"
}

log_info() {
    echo -e "  ${CYAN}ℹ${RESET} $1"
}

log_success() {
    echo -e "  ${GREEN}✓${RESET} $1"
}

log_warning() {
    echo -e "  ${YELLOW}⚠${RESET} $1"
}

log_error() {
    echo -e "  ${RED}✗${RESET} $1"
}

log_skip() {
    echo -e "  ${YELLOW}⊘${RESET} $1"
}

record_result() {
    local suite=$1
    local result=$2   # PASS, FAIL, SKIP
    local duration=$3

    SUITE_RESULTS[$suite]=$result
    SUITE_TIMES[$suite]=$duration
    ((SUITES_RAN++))

    case $result in
        PASS) ((SUITES_PASSED++)) ;;
        FAIL) ((SUITES_FAILED++)) ;;
        SKIP) ((SUITES_SKIPPED++)) ;;
    esac

    if [[ "$result" == "FAIL" && "$STRICT_MODE" == "true" ]]; then
        log_error "Strict mode enabled - aborting on first failure"
        print_summary
        exit 1
    fi
}

print_summary() {
    echo ""
    log_header "TEST SUMMARY"
    echo ""

    # Summary table
    printf "${BOLD}%-25s %-10s %-15s${RESET}\n" "Suite" "Result" "Duration"
    echo "─────────────────────────────────────────────────────────"

    # TODO: Update this list with your actual suites
    for suite in example-suite-1 example-suite-2 example-suite-3; do
        if [[ -n "${SUITE_RESULTS[$suite]:-}" ]]; then
            result="${SUITE_RESULTS[$suite]}"
            duration="${SUITE_TIMES[$suite]}"

            case $result in
                PASS)
                    printf "${GREEN}%-25s %-10s %-15s${RESET}\n" "$suite" "✓ PASS" "$duration"
                    ;;
                FAIL)
                    printf "${RED}%-25s %-10s %-15s${RESET}\n" "$suite" "✗ FAIL" "$duration"
                    ;;
                SKIP)
                    printf "${YELLOW}%-25s %-10s %-15s${RESET}\n" "$suite" "⊘ SKIP" "$duration"
                    ;;
            esac
        fi
    done

    echo "─────────────────────────────────────────────────────────"
    echo ""
    echo -e "${BOLD}Total:${RESET} $SUITES_RAN suites | ${GREEN}$SUITES_PASSED passed${RESET} | ${RED}$SUITES_FAILED failed${RESET} | ${YELLOW}$SUITES_SKIPPED skipped${RESET}"
    echo ""

    if [[ $SUITES_FAILED -gt 0 ]]; then
        echo -e "${RED}${BOLD}❌ TESTS FAILED${RESET}"
        return 1
    else
        echo -e "${GREEN}${BOLD}✅ ALL TESTS PASSED${RESET}"
        return 0
    fi
}

###############################################################################
# TEST SUITE FUNCTIONS
# TODO: Customize these for your project!
###############################################################################

# EXAMPLE SUITE 1: Replace with your actual test suite
run_example_suite_1() {
    log_suite "Example Suite 1 (Description)"

    local start_time=$(date +%s)

    # Check if suite exists
    if [[ ! -d "$ROOT/path/to/suite1" ]]; then
        log_skip "path/to/suite1 directory not found"
        record_result "example-suite-1" "SKIP" "0s"
        return 0
    fi

    cd "$ROOT/path/to/suite1"

    # TODO: Add your test commands here
    # Examples:
    #   - pytest -v
    #   - npm test
    #   - go test ./...
    #   - cargo test

    log_info "Running tests..."
    # YOUR_TEST_COMMAND_HERE

    local exit_code=$?
    local end_time=$(date +%s)
    local duration=$((end_time - start_time))

    if [[ $exit_code -eq 0 ]]; then
        log_success "Example Suite 1 tests passed"
        record_result "example-suite-1" "PASS" "${duration}s"
    else
        log_error "Example Suite 1 tests failed"
        record_result "example-suite-1" "FAIL" "${duration}s"
        return 1
    fi

    cd "$ROOT"
}

# EXAMPLE SUITE 2: Add more suites as needed
run_example_suite_2() {
    log_suite "Example Suite 2 (Description)"

    local start_time=$(date +%s)

    if [[ ! -d "$ROOT/path/to/suite2" ]]; then
        log_skip "path/to/suite2 directory not found"
        record_result "example-suite-2" "SKIP" "0s"
        return 0
    fi

    # TODO: Implement your test logic

    cd "$ROOT"
}

# Add more suite functions as needed...

###############################################################################
# COMMAND-LINE PARSING
###############################################################################

show_help() {
    cat << EOF
${BOLD}{{REPO_NAME}} - Test Orchestrator${RESET}

${BOLD}USAGE:${RESET}
    ./test_all.sh [OPTIONS]

${BOLD}OPTIONS:${RESET}
    --strict          Fail on first test suite failure (default: best-effort)
    --suite <name>    Run specific test suite only
    --verbose, -v     Show verbose test output
    --help, -h        Show this help message

${BOLD}AVAILABLE SUITES:${RESET}
    example-suite-1    Description of suite 1
    example-suite-2    Description of suite 2
    # TODO: Update this list with your actual suites

${BOLD}EXAMPLES:${RESET}
    ./test_all.sh                                     # Run all suites, best-effort mode
    ./test_all.sh --strict                            # Run all suites, fail-fast mode
    ./test_all.sh --suite example-suite-1             # Run only suite 1
    ./test_all.sh --suite example-suite-1 --verbose   # Verbose output

${BOLD}EXIT CODES:${RESET}
    0    All tests passed
    1    One or more test suites failed

EOF
}

while [[ $# -gt 0 ]]; do
    case $1 in
        --strict)
            STRICT_MODE=true
            shift
            ;;
        --suite)
            SPECIFIC_SUITE="$2"
            shift 2
            ;;
        --verbose|-v)
            VERBOSE=true
            shift
            ;;
        --help|-h)
            show_help
            exit 0
            ;;
        *)
            echo -e "${RED}Unknown option: $1${RESET}"
            echo "Use --help for usage information"
            exit 1
            ;;
    esac
done

###############################################################################
# MAIN EXECUTION
###############################################################################

log_header "{{REPO_NAME}} - Test Orchestrator"

if [[ "$STRICT_MODE" == "true" ]]; then
    log_info "Mode: ${RED}${BOLD}STRICT${RESET} (fail-fast)"
else
    log_info "Mode: ${GREEN}${BOLD}BEST-EFFORT${RESET} (run all suites)"
fi

if [[ -n "$SPECIFIC_SUITE" ]]; then
    log_info "Running suite: ${BOLD}$SPECIFIC_SUITE${RESET}"
fi

echo ""

# TODO: Update this section with your actual suite functions
if [[ -z "$SPECIFIC_SUITE" ]]; then
    # Run all suites
    run_example_suite_1 || true
    run_example_suite_2 || true
    # Add more suite calls here...
else
    # Run specific suite
    case $SPECIFIC_SUITE in
        example-suite-1)
            run_example_suite_1
            ;;
        example-suite-2)
            run_example_suite_2
            ;;
        # Add more cases here...
        *)
            log_error "Unknown suite: $SPECIFIC_SUITE"
            echo "Use --help to see available suites"
            exit 1
            ;;
    esac
fi

# Print summary and exit with appropriate code
print_summary
exit $?

TESTING.md (new file, 1115 lines; diff suppressed because it is too large)

test_all.sh (new executable file, 646 lines):
#!/usr/bin/env bash
|
||||||
|
#
|
||||||
|
# test_all.sh - BlackRoad Operating System Test Orchestrator
|
||||||
|
#
|
||||||
|
# This script runs all test suites across the monorepo in a coordinated fashion.
|
||||||
|
# It supports both strict mode (fail-fast) and best-effort mode (run all, report summary).
|
||||||
|
#
|
||||||
|
# Usage:
|
||||||
|
# ./test_all.sh # Best-effort mode (run all suites, report at end)
|
||||||
|
# ./test_all.sh --strict # Strict mode (fail on first error)
|
||||||
|
# ./test_all.sh --suite backend # Run specific suite only
|
||||||
|
# ./test_all.sh --help # Show usage
|
||||||
|
#
|
||||||
|
# Available suites: backend, agents, operator, sdk-python, sdk-typescript, frontend
|
||||||
|
#
|
||||||
|
|
||||||
|
set -uo pipefail
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# CONFIGURATION
|
||||||
|
###############################################################################
|
||||||
|
|
||||||
|
ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||||
|
cd "$ROOT"
|
||||||
|
|
||||||
|
STRICT_MODE=false
|
||||||
|
SPECIFIC_SUITE=""
|
||||||
|
VERBOSE=false
|
||||||
|
|
||||||
|
# Color codes for pretty output
|
||||||
|
if [[ -t 1 ]]; then
|
||||||
|
RED='\033[0;31m'
|
||||||
|
GREEN='\033[0;32m'
|
||||||
|
YELLOW='\033[1;33m'
|
||||||
|
BLUE='\033[0;34m'
|
||||||
|
CYAN='\033[0;36m'
|
||||||
|
BOLD='\033[1m'
|
||||||
|
RESET='\033[0m'
|
||||||
|
else
|
||||||
|
RED='' GREEN='' YELLOW='' BLUE='' CYAN='' BOLD='' RESET=''
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Results tracking
|
||||||
|
declare -A SUITE_RESULTS
|
||||||
|
declare -A SUITE_TIMES
|
||||||
|
SUITES_RAN=0
|
||||||
|
SUITES_PASSED=0
|
||||||
|
SUITES_FAILED=0
|
||||||
|
SUITES_SKIPPED=0
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# HELPERS
|
||||||
|
###############################################################################
|
||||||
|
|
||||||
|
have() {
|
||||||
|
command -v "$1" >/dev/null 2>&1
|
||||||
|
}
|
||||||
|
|
||||||
|
log_header() {
|
||||||
|
echo ""
|
||||||
|
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}"
|
||||||
|
echo -e "${BOLD}$1${RESET}"
|
||||||
|
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_suite() {
|
||||||
|
echo ""
|
||||||
|
echo -e "${BLUE}▶ $1${RESET}"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_info() {
|
||||||
|
echo -e " ${CYAN}ℹ${RESET} $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_success() {
|
||||||
|
echo -e " ${GREEN}✓${RESET} $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_warning() {
|
||||||
|
echo -e " ${YELLOW}⚠${RESET} $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_error() {
|
||||||
|
echo -e " ${RED}✗${RESET} $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_skip() {
|
||||||
|
echo -e " ${YELLOW}⊘${RESET} $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
record_result() {
|
||||||
|
local suite=$1
|
||||||
|
local result=$2 # PASS, FAIL, SKIP
|
||||||
|
local duration=$3
|
||||||
|
|
||||||
|
SUITE_RESULTS[$suite]=$result
|
||||||
|
SUITE_TIMES[$suite]=$duration
|
||||||
|
((SUITES_RAN++))
|
||||||
|
|
||||||
|
case $result in
|
||||||
|
PASS) ((SUITES_PASSED++)) ;;
|
||||||
|
FAIL) ((SUITES_FAILED++)) ;;
|
||||||
|
SKIP) ((SUITES_SKIPPED++)) ;;
|
||||||
|
esac
|
||||||
|
|
||||||
|
if [[ "$result" == "FAIL" && "$STRICT_MODE" == "true" ]]; then
|
||||||
|
log_error "Strict mode enabled - aborting on first failure"
|
||||||
|
print_summary
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
print_summary() {
|
||||||
|
echo ""
|
||||||
|
log_header "TEST SUMMARY"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# Summary table
|
||||||
|
printf "${BOLD}%-25s %-10s %-15s${RESET}\n" "Suite" "Result" "Duration"
|
||||||
|
echo "─────────────────────────────────────────────────────────"
|
||||||
|
|
||||||
|
for suite in backend agents operator sdk-python sdk-typescript frontend; do
|
||||||
|
if [[ -n "${SUITE_RESULTS[$suite]:-}" ]]; then
|
||||||
|
result="${SUITE_RESULTS[$suite]}"
|
||||||
|
duration="${SUITE_TIMES[$suite]}"
|
||||||
|
|
||||||
|
case $result in
|
||||||
|
PASS)
|
||||||
|
printf "${GREEN}%-25s %-10s %-15s${RESET}\n" "$suite" "✓ PASS" "$duration"
|
||||||
|
;;
|
||||||
|
FAIL)
|
||||||
|
printf "${RED}%-25s %-10s %-15s${RESET}\n" "$suite" "✗ FAIL" "$duration"
|
||||||
|
;;
|
||||||
|
SKIP)
|
||||||
|
printf "${YELLOW}%-25s %-10s %-15s${RESET}\n" "$suite" "⊘ SKIP" "$duration"
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
echo "─────────────────────────────────────────────────────────"
|
||||||
|
echo ""
|
||||||
|
echo -e "${BOLD}Total:${RESET} $SUITES_RAN suites | ${GREEN}$SUITES_PASSED passed${RESET} | ${RED}$SUITES_FAILED failed${RESET} | ${YELLOW}$SUITES_SKIPPED skipped${RESET}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
if [[ $SUITES_FAILED -gt 0 ]]; then
|
||||||
|
echo -e "${RED}${BOLD}❌ TESTS FAILED${RESET}"
|
||||||
|
return 1
|
||||||
|
else
|
||||||
|
echo -e "${GREEN}${BOLD}✅ ALL TESTS PASSED${RESET}"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
###############################################################################
|
||||||
|
# TEST SUITE FUNCTIONS
|
||||||
|
###############################################################################
|
||||||
|
|
||||||
|
run_backend_tests() {
|
||||||
|
log_suite "Backend (FastAPI + pytest)"
|
||||||
|
|
||||||
|
local start_time=$(date +%s)
|
||||||
|
|
||||||
|
if [[ ! -d "$ROOT/backend" ]]; then
|
||||||
|
log_skip "backend/ directory not found"
|
||||||
|
record_result "backend" "SKIP" "0s"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
    cd "$ROOT/backend"

    # Detect Python
    local PY=python3
    if ! have python3; then
        if have python; then
            PY=python
        else
            log_error "Python not found"
            record_result "backend" "FAIL" "0s"
            return 1
        fi
    fi

    log_info "Using Python: $($PY --version 2>&1)"

    # Set up a virtual environment for isolation
    local VENV_DIR=".venv-tests"
    if [[ ! -d "$VENV_DIR" ]]; then
        log_info "Creating test virtual environment..."
        $PY -m venv "$VENV_DIR" >/dev/null 2>&1
    fi

    # shellcheck disable=SC1091
    source "$VENV_DIR/bin/activate"

    # Install dependencies
    log_info "Installing dependencies..."
    pip install --upgrade pip >/dev/null 2>&1
    if [[ -f requirements.txt ]]; then
        pip install -r requirements.txt >/dev/null 2>&1
    fi

    # Run pytest
    if have pytest && [[ -f pytest.ini || -d tests ]]; then
        log_info "Running pytest..."

        # Export test environment variables
        export TEST_DATABASE_URL="${TEST_DATABASE_URL:-sqlite+aiosqlite:///./test.db}"
        export ENVIRONMENT="testing"
        export ALLOWED_ORIGINS="http://localhost:3000,http://localhost:8000"

        # Capture pytest's exit code explicitly so it cannot be masked
        local exit_code=0
        if [[ "$VERBOSE" == "true" ]]; then
            pytest -v --maxfail=1 || exit_code=$?
        else
            pytest -q --maxfail=1 || exit_code=$?
        fi

        local end_time
        end_time=$(date +%s)
        local duration=$((end_time - start_time))

        if [[ $exit_code -eq 0 ]]; then
            log_success "Backend tests passed"
            record_result "backend" "PASS" "${duration}s"
        else
            log_error "Backend tests failed (exit code: $exit_code)"
            record_result "backend" "FAIL" "${duration}s"
            deactivate 2>/dev/null || true
            cd "$ROOT"
            return 1
        fi
    else
        log_skip "pytest not available or no tests found"
        record_result "backend" "SKIP" "0s"
    fi

    deactivate 2>/dev/null || true
    cd "$ROOT"
}

run_agents_tests() {
    log_suite "Agents (200+ AI agent ecosystem)"

    local start_time
    start_time=$(date +%s)

    if [[ ! -d "$ROOT/agents/tests" ]]; then
        log_skip "agents/tests/ directory not found"
        record_result "agents" "SKIP" "0s"
        return 0
    fi

    cd "$ROOT"

    local PY=python3
    if ! have python3; then
        PY=python
    fi

    # Install agents dependencies
    if [[ -f agents/requirements.txt ]]; then
        log_info "Installing agent dependencies..."
        $PY -m pip install -r agents/requirements.txt >/dev/null 2>&1
    fi

    if have pytest; then
        log_info "Running agent tests..."

        local exit_code=0
        if [[ "$VERBOSE" == "true" ]]; then
            pytest agents/tests/ -v || exit_code=$?
        else
            pytest agents/tests/ -q || exit_code=$?
        fi

        local end_time
        end_time=$(date +%s)
        local duration=$((end_time - start_time))

        if [[ $exit_code -eq 0 ]]; then
            log_success "Agent tests passed"
            record_result "agents" "PASS" "${duration}s"
        else
            log_error "Agent tests failed"
            record_result "agents" "FAIL" "${duration}s"
            return 1
        fi
    else
        log_skip "pytest not available"
        record_result "agents" "SKIP" "0s"
    fi

    cd "$ROOT"
}

run_operator_tests() {
    log_suite "Operator Engine (GitHub automation)"

    local start_time
    start_time=$(date +%s)

    if [[ ! -d "$ROOT/operator_engine" ]]; then
        log_skip "operator_engine/ directory not found"
        record_result "operator" "SKIP" "0s"
        return 0
    fi

    cd "$ROOT/operator_engine"

    local PY=python3
    if ! have python3; then
        PY=python
    fi

    # Install dependencies if requirements.txt exists
    if [[ -f requirements.txt ]]; then
        log_info "Installing operator dependencies..."
        $PY -m pip install -r requirements.txt >/dev/null 2>&1
    fi

    if have pytest && [[ -d tests ]]; then
        log_info "Running operator tests..."

        local exit_code=0
        if [[ "$VERBOSE" == "true" ]]; then
            pytest tests/ -v || exit_code=$?
        else
            pytest tests/ -q || exit_code=$?
        fi

        local end_time
        end_time=$(date +%s)
        local duration=$((end_time - start_time))

        if [[ $exit_code -eq 0 ]]; then
            log_success "Operator tests passed"
            record_result "operator" "PASS" "${duration}s"
        else
            log_error "Operator tests failed"
            record_result "operator" "FAIL" "${duration}s"
            cd "$ROOT"
            return 1
        fi
    else
        log_skip "pytest not available or no tests found"
        record_result "operator" "SKIP" "0s"
    fi

    cd "$ROOT"
}

run_sdk_python_tests() {
    log_suite "SDK: Python (Official Python SDK)"

    local start_time
    start_time=$(date +%s)

    if [[ ! -d "$ROOT/sdk/python" ]]; then
        log_skip "sdk/python/ directory not found"
        record_result "sdk-python" "SKIP" "0s"
        return 0
    fi

    cd "$ROOT/sdk/python"

    local PY=python3
    if ! have python3; then
        PY=python
    fi

    log_info "Installing SDK in editable mode..."
    # Quote the extras spec so the shell cannot glob-expand ".[dev]"
    $PY -m pip install -e ".[dev]" >/dev/null 2>&1

    if have pytest; then
        log_info "Running Python SDK tests..."

        local exit_code=0
        if [[ "$VERBOSE" == "true" ]]; then
            pytest -v || exit_code=$?
        else
            pytest -q || exit_code=$?
        fi

        local end_time
        end_time=$(date +%s)
        local duration=$((end_time - start_time))

        if [[ $exit_code -eq 0 ]]; then
            log_success "Python SDK tests passed"
            record_result "sdk-python" "PASS" "${duration}s"
        else
            log_error "Python SDK tests failed"
            record_result "sdk-python" "FAIL" "${duration}s"
            cd "$ROOT"
            return 1
        fi
    else
        log_skip "pytest not available"
        record_result "sdk-python" "SKIP" "0s"
    fi

    cd "$ROOT"
}

run_sdk_typescript_tests() {
    log_suite "SDK: TypeScript (Official TypeScript/JavaScript SDK)"

    local start_time
    start_time=$(date +%s)

    if [[ ! -d "$ROOT/sdk/typescript" ]]; then
        log_skip "sdk/typescript/ directory not found"
        record_result "sdk-typescript" "SKIP" "0s"
        return 0
    fi

    if ! have node; then
        log_skip "Node.js not installed"
        record_result "sdk-typescript" "SKIP" "0s"
        return 0
    fi

    cd "$ROOT/sdk/typescript"

    log_info "Using Node: $(node --version)"

    # Install dependencies
    if [[ -f package.json ]]; then
        log_info "Installing npm dependencies..."
        npm install >/dev/null 2>&1
    fi

    # Check that package.json declares a test script without actually running
    # it (running "npm run test" as the check would execute the suite twice
    # and would conflate "no test script" with "tests failed")
    if node -e 'const s = require("./package.json").scripts; process.exit(s && s.test ? 0 : 1)' 2>/dev/null; then
        log_info "Running TypeScript SDK tests (Jest)..."

        local exit_code=0
        if [[ "$VERBOSE" == "true" ]]; then
            npm test || exit_code=$?
        else
            # Capture output first so npm's exit code is not masked by
            # the grep filter or by "|| true"
            local npm_output
            npm_output="$(npm test -- --silent 2>&1)" || exit_code=$?
            echo "$npm_output" | grep -E "(PASS|FAIL|Test Suites)" || true
        fi

        local end_time
        end_time=$(date +%s)
        local duration=$((end_time - start_time))

        if [[ $exit_code -eq 0 ]]; then
            log_success "TypeScript SDK tests passed"
            record_result "sdk-typescript" "PASS" "${duration}s"
        else
            log_error "TypeScript SDK tests failed"
            record_result "sdk-typescript" "FAIL" "${duration}s"
            cd "$ROOT"
            return 1
        fi
    else
        log_skip "No test script found in package.json"
        record_result "sdk-typescript" "SKIP" "0s"
    fi

    cd "$ROOT"
}

run_frontend_tests() {
    log_suite "Frontend (Vanilla JavaScript validation)"

    local start_time
    start_time=$(date +%s)

    # Note: the frontend currently has CI validation only (no unit tests).
    # This suite validates that the frontend files are present and valid.

    if [[ ! -d "$ROOT/backend/static" ]]; then
        log_skip "backend/static/ directory not found"
        record_result "frontend" "SKIP" "0s"
        return 0
    fi

    log_info "Validating frontend structure..."

    local errors=0

    # Check for essential files
    if [[ ! -f "$ROOT/backend/static/index.html" ]]; then
        log_error "Missing: backend/static/index.html"
        errors=$((errors + 1))
    else
        log_success "Found: index.html"
    fi

    if [[ ! -d "$ROOT/backend/static/js" ]]; then
        log_error "Missing: backend/static/js/ directory"
        errors=$((errors + 1))
    else
        log_success "Found: js/ directory"
    fi

    # Basic JavaScript syntax check (if node is available)
    if have node && [[ -d "$ROOT/backend/static/js" ]]; then
        log_info "Running JavaScript syntax validation..."

        local js_errors=0
        while IFS= read -r -d '' file; do
            if ! node --check "$file" 2>/dev/null; then
                log_error "Syntax error in: $file"
                js_errors=$((js_errors + 1))
            fi
        done < <(find "$ROOT/backend/static/js" -name "*.js" -print0)

        if [[ $js_errors -eq 0 ]]; then
            log_success "JavaScript syntax validation passed"
        else
            log_error "JavaScript syntax validation failed ($js_errors files)"
            errors=$((errors + 1))
        fi
    fi

    local end_time
    end_time=$(date +%s)
    local duration=$((end_time - start_time))

    if [[ $errors -eq 0 ]]; then
        log_success "Frontend validation passed"
        record_result "frontend" "PASS" "${duration}s"
    else
        log_error "Frontend validation failed ($errors errors)"
        record_result "frontend" "FAIL" "${duration}s"
        return 1
    fi
}
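
# ---------------------------------------------------------------------------
# Sketch (not wired in): each run_*_tests function above repeats the same
# start_time/end_time/duration/record_result boilerplate. One possible
# refactor is a shared helper like the one below; a suite function would then
# reduce to its setup steps plus a single `run_timed <suite> <command...>`
# call. The name `run_timed` is illustrative and not part of the current API.
run_timed() {
    local suite="$1"
    shift
    local t0 t1 rc=0
    t0=$(date +%s)
    "$@" || rc=$?                      # run the suite command, keep its status
    t1=$(date +%s)
    if [[ $rc -eq 0 ]]; then
        record_result "$suite" "PASS" "$((t1 - t0))s"
    else
        record_result "$suite" "FAIL" "$((t1 - t0))s"
    fi
    return "$rc"
}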

###############################################################################
# COMMAND-LINE PARSING
###############################################################################

show_help() {
    cat << EOF
${BOLD}BlackRoad Operating System - Test Orchestrator${RESET}

${BOLD}USAGE:${RESET}
    ./test_all.sh [OPTIONS]

${BOLD}OPTIONS:${RESET}
    --strict          Fail on first test suite failure (default: best-effort)
    --suite <name>    Run a specific test suite only
    --verbose, -v     Show verbose test output
    --help, -h        Show this help message

${BOLD}AVAILABLE SUITES:${RESET}
    backend           Backend FastAPI tests (pytest)
    agents            AI agent ecosystem tests (pytest)
    operator          Operator engine tests (pytest)
    sdk-python        Python SDK tests (pytest)
    sdk-typescript    TypeScript SDK tests (Jest)
    frontend          Frontend validation (structure + syntax)

${BOLD}EXAMPLES:${RESET}
    ./test_all.sh                               # Run all suites, best-effort mode
    ./test_all.sh --strict                      # Run all suites, fail-fast mode
    ./test_all.sh --suite backend               # Run only backend tests
    ./test_all.sh --suite sdk-python --verbose  # Run Python SDK tests with verbose output

${BOLD}EXIT CODES:${RESET}
    0    All tests passed
    1    One or more test suites failed

${BOLD}NOTES:${RESET}
    - In best-effort mode, all suites run even if some fail
    - In strict mode, execution stops at the first failure
    - Suite results are summarized at the end
    - Use --verbose for detailed test output

EOF
}

while [[ $# -gt 0 ]]; do
    case $1 in
        --strict)
            STRICT_MODE=true
            shift
            ;;
        --suite)
            if [[ -z "${2:-}" ]]; then
                echo -e "${RED}--suite requires a suite name${RESET}"
                exit 1
            fi
            SPECIFIC_SUITE="$2"
            shift 2
            ;;
        --verbose|-v)
            VERBOSE=true
            shift
            ;;
        --help|-h)
            show_help
            exit 0
            ;;
        *)
            echo -e "${RED}Unknown option: $1${RESET}"
            echo "Use --help for usage information"
            exit 1
            ;;
    esac
done

###############################################################################
# MAIN EXECUTION
###############################################################################

log_header "BlackRoad Operating System - Test Orchestrator"

if [[ "$STRICT_MODE" == "true" ]]; then
    log_info "Mode: ${RED}${BOLD}STRICT${RESET} (fail-fast)"
else
    log_info "Mode: ${GREEN}${BOLD}BEST-EFFORT${RESET} (run all suites)"
fi

if [[ -n "$SPECIFIC_SUITE" ]]; then
    log_info "Running suite: ${BOLD}$SPECIFIC_SUITE${RESET}"
fi

echo ""

# Run suites
if [[ -z "$SPECIFIC_SUITE" ]]; then
    # Run all suites
    run_backend_tests || true
    run_agents_tests || true
    run_operator_tests || true
    run_sdk_python_tests || true
    run_sdk_typescript_tests || true
    run_frontend_tests || true
else
    # Run the requested suite only
    case $SPECIFIC_SUITE in
        backend)
            run_backend_tests
            ;;
        agents)
            run_agents_tests
            ;;
        operator)
            run_operator_tests
            ;;
        sdk-python)
            run_sdk_python_tests
            ;;
        sdk-typescript)
            run_sdk_typescript_tests
            ;;
        frontend)
            run_frontend_tests
            ;;
        *)
            log_error "Unknown suite: $SPECIFIC_SUITE"
            echo "Use --help to see available suites"
            exit 1
            ;;
    esac
fi

# Print summary and exit with appropriate code
print_summary
exit $?