3 Commits

Alexa Louise
522752a6fd Add 5 Copilot PR review agents
- 🤖 Code Review Agent
- 🛡️ Security Audit Agent
- 📚 Documentation Agent
- 🧪 Test Coverage Agent
- ⚡ Performance Agent

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-30 10:36:13 -06:00
Alexa Louise
9376570aa7 Add auto-deploy on PR workflow 2025-11-30 07:47:58 -06:00
Alexa Louise
5070a3b0b6 Add BlackRoad Completion Framework
- GitHub Actions workflows (auto-merge, branch-tracker, issue-to-board, stale-cleanup)
- Issue templates (agent-task, bug, task)
- PR template
- Automation scripts (slack-to-github, create-issue)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-30 07:40:24 -06:00
17 changed files with 1805 additions and 36 deletions

.github/ISSUE_TEMPLATE/agent-task.yml (new file, 85 lines)
@@ -0,0 +1,85 @@
name: "🤖 Agent Task"
description: "For Codex or other agents to pick up"
title: "[AGENT] "
labels: ["agent-task", "automated"]
body:
  - type: markdown
    attributes:
      value: |
        ## Agent-Executable Task
        This issue is designed to be picked up and executed by Codex or another AI agent.
        Keep instructions clear, specific, and executable.
  - type: dropdown
    id: priority
    attributes:
      label: Priority
      options:
        - "P0 - Immediate"
        - "P1 - Today"
        - "P2 - This week"
        - "P3 - When available"
    validations:
      required: true
  - type: textarea
    id: instruction
    attributes:
      label: Instruction
      description: What should the agent do? Be specific and executable.
      placeholder: |
        Create a new API endpoint at /api/users/preferences that:
        - Accepts GET and PUT requests
        - GET returns current user preferences from database
        - PUT updates preferences with validation
        - Follow existing patterns in /api/users/profile
    validations:
      required: true
  - type: textarea
    id: files
    attributes:
      label: Files to touch
      description: Which files should be created or modified?
      placeholder: |
        - Create: src/pages/api/users/preferences.ts
        - Modify: src/types/user.ts (add PreferencesType)
        - Modify: prisma/schema.prisma (if needed)
    validations:
      required: false
  - type: textarea
    id: done
    attributes:
      label: Definition of done
      description: How does the agent know it's complete?
      placeholder: |
        - Endpoint responds correctly to GET/PUT
        - TypeScript compiles without errors
        - Tests pass (if tests exist for this area)
    validations:
      required: true
  - type: textarea
    id: constraints
    attributes:
      label: Constraints / Don'ts
      description: What should the agent avoid?
      placeholder: |
        - Don't modify the auth middleware
        - Don't add new dependencies
        - Follow existing code style
    validations:
      required: false
  - type: dropdown
    id: agent
    attributes:
      label: Assigned agent
      options:
        - "Codex"
        - "Any available agent"
        - "Specific agent (note in context)"
    validations:
      required: false

.github/ISSUE_TEMPLATE/bug.yml (new file, 71 lines)

@@ -0,0 +1,71 @@
name: "🐛 Bug"
description: "Something is broken"
title: "[BUG] "
labels: ["bug"]
body:
  - type: markdown
    attributes:
      value: |
        ## Bug Report
        What's broken? Let's fix it.
  - type: dropdown
    id: severity
    attributes:
      label: Severity
      description: How bad is it?
      options:
        - "🔥 Critical - Production down, data loss, security issue"
        - "🟠 High - Major feature broken, no workaround"
        - "🟡 Medium - Feature broken but has workaround"
        - "🟢 Low - Minor issue, cosmetic, edge case"
    validations:
      required: true
  - type: textarea
    id: what
    attributes:
      label: What's broken?
      description: One sentence description
      placeholder: "Login button doesn't work on mobile Safari"
    validations:
      required: true
  - type: textarea
    id: expected
    attributes:
      label: Expected behavior
      description: What should happen?
      placeholder: "Clicking login should open the auth modal"
    validations:
      required: true
  - type: textarea
    id: actual
    attributes:
      label: Actual behavior
      description: What actually happens?
      placeholder: "Nothing happens. Console shows: TypeError..."
    validations:
      required: true
  - type: textarea
    id: repro
    attributes:
      label: How to reproduce
      description: Steps to trigger the bug
      placeholder: |
        1. Open site on iPhone Safari
        2. Tap Login button
        3. Nothing happens
    validations:
      required: true
  - type: textarea
    id: context
    attributes:
      label: Environment / Context
      description: Browser, OS, device, account type, etc.
      placeholder: "iOS 17, Safari, iPhone 15 Pro"
    validations:
      required: false

.github/ISSUE_TEMPLATE/config.yml (modified)

@@ -1,5 +1,5 @@
 blank_issues_enabled: true
 contact_links:
-  - name: BlackRoad OS Docs
-    url: https://github.com/BlackRoad-OS/blackroad-os-docs
-    about: Check the docs before opening an issue
+  - name: "💬 Quick question / discussion"
+    url: https://github.com/YOUR_USERNAME/YOUR_REPO/discussions
+    about: "For questions, ideas, or discussions that aren't actionable tasks yet"

.github/ISSUE_TEMPLATE/task.yml (new file, 65 lines)

@@ -0,0 +1,65 @@
name: "🎯 Task"
description: "Standard work item - features, changes, improvements"
title: "[TASK] "
labels: ["task"]
body:
  - type: markdown
    attributes:
      value: |
        ## Quick Task Creation
        Keep it simple. One task = one thing to do.
  - type: dropdown
    id: priority
    attributes:
      label: Priority
      description: How urgent is this?
      options:
        - "P0 - Do it now (blocks everything)"
        - "P1 - Today"
        - "P2 - This week"
        - "P3 - Backlog"
    validations:
      required: true
  - type: textarea
    id: what
    attributes:
      label: What needs to happen?
      description: One sentence. If it's more than one sentence, split into multiple issues.
      placeholder: "Add a logout button to the nav bar"
    validations:
      required: true
  - type: textarea
    id: done
    attributes:
      label: Definition of done
      description: How do we know this is complete?
      placeholder: |
        - Logout button visible in nav
        - Clicking it clears session and redirects to /login
        - Works on mobile
    validations:
      required: true
  - type: textarea
    id: context
    attributes:
      label: Context (optional)
      description: Any additional info that helps
      placeholder: "Related to issue #45. See Figma design at..."
    validations:
      required: false
  - type: dropdown
    id: assignee
    attributes:
      label: Who should handle this?
      description: Human or agent?
      options:
        - "Codex / Agent"
        - "Human (me)"
        - "Unassigned - triage needed"
    validations:
      required: false

Pull request template (modified)

@@ -1,39 +1,22 @@
-## Summary
-<!-- What does this PR do? One paragraph max. -->
+## What
+<!-- One line: what does this PR do? -->
+
+## Linked Issue
+Fixes #
+<!-- The "Fixes #123" syntax auto-closes the issue when merged -->
 
 ## Changes
-<!-- Bulleted list of specific changes -->
--
-
-## Type
-<!-- Check one -->
-- [ ] Feature
-- [ ] Bug fix
-- [ ] Infrastructure / CI
-- [ ] Documentation
-- [ ] Refactor
-- [ ] Config change
-
-## Tests
-<!-- How was this tested? -->
-- [ ] Unit tests pass
-- [ ] Integration tests pass
-- [ ] Manual testing (describe below)
-- [ ] N/A (docs only)
-
-**Manual testing steps:**
-
-## Risk / Impact
-<!-- What could go wrong? What's the blast radius? -->
-- Risk level: Low / Medium / High
-- Affected services:
-
-## Checklist
-- [ ] Code follows project conventions
-- [ ] No secrets or credentials committed
-- [ ] README updated (if applicable)
-- [ ] This is a single logical change (atomic PR)
-
-## Related Issues
-<!-- Closes #123, Relates to #456 -->
+<!-- Optional: bullet list of key changes if not obvious -->
+
+---
+
+<!--
+This PR will auto-merge when CI passes.
+No manual approval required.
+If CI fails, fix it or add the 'blocked' label if you need help.
+-->

.github/workflows/agent-code-review.yml (new file, 111 lines)

@@ -0,0 +1,111 @@
name: "🤖 Agent: Code Review"

on:
  pull_request:
    types: [opened, synchronize, reopened]

permissions:
  contents: read
  pull-requests: write

jobs:
  code-review:
    name: Code Review Agent
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v44
        with:
          files: |
            **/*.ts
            **/*.tsx
            **/*.js
            **/*.jsx
            **/*.py
            **/*.go

      - name: Analyze code quality
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: actions/github-script@v7
        with:
          script: |
            const changedFiles = '${{ steps.changed-files.outputs.all_changed_files }}'.split(' ').filter(f => f);
            let reviewComments = [];
            let summary = '## 🤖 Code Review Agent Report\n\n';
            summary += `Analyzed **${changedFiles.length}** changed files.\n\n`;

            // Case-insensitive substring patterns to flag
            const patterns = {
              'console.log': { severity: 'warning', msg: 'Consider removing debug logs before merging' },
              'TODO': { severity: 'info', msg: 'Found TODO comment - ensure this is tracked' },
              'FIXME': { severity: 'warning', msg: 'FIXME found - should be addressed' },
              // ': any' rather than bare 'any', which would match words like 'company'
              ': any': { severity: 'warning', msg: 'Avoid using `any` type - prefer specific types' },
              'eslint-disable': { severity: 'info', msg: 'ESLint rule disabled - ensure this is intentional' },
              'password': { severity: 'error', msg: '⚠️ Possible hardcoded credential detected' },
              'secret': { severity: 'error', msg: '⚠️ Possible secret in code' },
            };

            let issues = { error: 0, warning: 0, info: 0 };
            for (const file of changedFiles) {
              try {
                const content = require('fs').readFileSync(file, 'utf8');
                const lines = content.split('\n');
                lines.forEach((line, idx) => {
                  for (const [pattern, config] of Object.entries(patterns)) {
                    if (line.toLowerCase().includes(pattern.toLowerCase())) {
                      issues[config.severity]++;
                      reviewComments.push(`- **${file}:${idx + 1}** [${config.severity.toUpperCase()}] ${config.msg}`);
                    }
                  }
                });
              } catch (e) {
                // File might not exist in the working directory (e.g. deleted in this PR)
              }
            }

            // Build summary
            if (issues.error > 0) {
              summary += `### ❌ Errors: ${issues.error}\n`;
            }
            if (issues.warning > 0) {
              summary += `### ⚠️ Warnings: ${issues.warning}\n`;
            }
            if (issues.info > 0) {
              summary += `### Info: ${issues.info}\n`;
            }
            if (reviewComments.length > 0) {
              summary += '\n### Details\n\n';
              summary += reviewComments.slice(0, 20).join('\n');
              if (reviewComments.length > 20) {
                summary += `\n\n*...and ${reviewComments.length - 20} more issues*`;
              }
            } else {
              summary += '\n✅ No issues found! Code looks good.\n';
            }
            summary += '\n\n---\n*🤖 Automated review by Code Review Agent*';

            // Post comment
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: summary
            });

            // Fail if errors found
            if (issues.error > 0) {
              core.setFailed(`Found ${issues.error} error(s) in code review`);
            }

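The review step above boils down to a case-insensitive substring scan over each changed file's lines. A minimal standalone sketch of that loop (patterns trimmed to two entries, sample lines made up):

```javascript
// Sketch of the Code Review Agent's scan loop, outside the workflow.
// Patterns and sample lines are illustrative, not the full workflow set.
const patterns = {
  'console.log': { severity: 'warning', msg: 'Consider removing debug logs' },
  'password':    { severity: 'error',   msg: 'Possible hardcoded credential' },
};

function scanLines(lines) {
  const findings = [];
  lines.forEach((line, idx) => {
    for (const [pattern, config] of Object.entries(patterns)) {
      // Case-insensitive substring match, as in the workflow
      if (line.toLowerCase().includes(pattern.toLowerCase())) {
        findings.push({ line: idx + 1, severity: config.severity, msg: config.msg });
      }
    }
  });
  return findings;
}

const sample = ['const x = 1;', 'console.log(x);', 'const PASSWORD = "hunter2";'];
console.log(scanLines(sample));
```

Substring matching keeps the scan fast and dependency-free, at the cost of false positives; that trade-off is why findings are posted as a comment rather than inline review annotations.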
Documentation agent workflow (new file)

@@ -0,0 +1,137 @@
name: "📚 Agent: Documentation"

on:
  pull_request:
    types: [opened, synchronize, reopened]

permissions:
  contents: read
  pull-requests: write

jobs:
  documentation:
    name: Documentation Agent
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v44

      - name: Analyze documentation coverage
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const path = require('path');
            const changedFiles = '${{ steps.changed-files.outputs.all_changed_files }}'.split(' ').filter(f => f);
            let report = '## 📚 Documentation Agent Report\n\n';
            let suggestions = [];
            let stats = {
              codeFiles: 0,
              docFiles: 0,
              hasJsdoc: 0,
              missingJsdoc: 0,
              readmeUpdated: false
            };

            // Categorize files
            const codeExtensions = ['.ts', '.tsx', '.js', '.jsx', '.py', '.go'];
            const docExtensions = ['.md', '.mdx', '.rst', '.txt'];
            for (const file of changedFiles) {
              const ext = path.extname(file);
              if (codeExtensions.includes(ext)) {
                stats.codeFiles++;
                // Check for JSDoc/docstrings
                try {
                  const content = fs.readFileSync(file, 'utf8');
                  // Check for exported functions without documentation
                  const exportedFunctions = content.match(/export\s+(async\s+)?function\s+\w+/g) || [];
                  const jsdocBlocks = content.match(/\/\*\*[\s\S]*?\*\//g) || [];
                  if (exportedFunctions.length > jsdocBlocks.length) {
                    stats.missingJsdoc++;
                    suggestions.push(`📝 **${file}**: Consider adding JSDoc comments to exported functions`);
                  } else if (jsdocBlocks.length > 0) {
                    stats.hasJsdoc++;
                  }
                  // Check for complex functions that need docs
                  const lines = content.split('\n').length;
                  if (lines > 200 && jsdocBlocks.length === 0) {
                    suggestions.push(`📖 **${file}**: Large file (${lines} lines) without documentation`);
                  }
                } catch (e) {
                  // File might not exist in the working directory
                }
              }
              if (docExtensions.includes(ext)) {
                stats.docFiles++;
              }
              if (file.toLowerCase().includes('readme')) {
                stats.readmeUpdated = true;
              }
            }

            // Build report
            report += '### 📊 Documentation Stats\n\n';
            report += `| Metric | Value |\n`;
            report += `|--------|-------|\n`;
            report += `| Code files changed | ${stats.codeFiles} |\n`;
            report += `| Doc files changed | ${stats.docFiles} |\n`;
            report += `| Files with JSDoc | ${stats.hasJsdoc} |\n`;
            report += `| Files needing docs | ${stats.missingJsdoc} |\n`;
            report += `| README updated | ${stats.readmeUpdated ? '✅' : '❌'} |\n\n`;

            // Calculate documentation score
            const docScore = stats.codeFiles > 0
              ? Math.round((stats.hasJsdoc / stats.codeFiles) * 100)
              : 100;
            report += `### 📈 Documentation Score: ${docScore}%\n\n`;
            if (docScore >= 80) {
              report += '✅ Great documentation coverage!\n\n';
            } else if (docScore >= 50) {
              report += '⚠️ Documentation could be improved.\n\n';
            } else {
              report += '❌ Documentation coverage is low. Please add docs.\n\n';
            }

            // Suggestions
            if (suggestions.length > 0) {
              report += '### 💡 Suggestions\n\n';
              suggestions.slice(0, 10).forEach(s => report += `- ${s}\n`);
              if (suggestions.length > 10) {
                report += `\n*...and ${suggestions.length - 10} more suggestions*\n`;
              }
            }

            // Tips
            if (stats.codeFiles > 0 && !stats.readmeUpdated) {
              report += '\n### 💡 Tip\n';
              report += 'Consider updating the README if this PR introduces new features or API changes.\n';
            }
            report += '\n---\n*📚 Automated review by Documentation Agent*';

            // Post comment
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: report
            });

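The documentation score above comes from comparing a count of `export function` declarations against a count of `/** … */` blocks. The heuristic in isolation (the sample module source is made up):

```javascript
// Heuristic used by the Documentation Agent: a file counts as "documented"
// when it has at least as many JSDoc blocks as exported function declarations.
function jsdocCoverage(content) {
  const exported = content.match(/export\s+(async\s+)?function\s+\w+/g) || [];
  const jsdoc = content.match(/\/\*\*[\s\S]*?\*\//g) || [];
  return {
    exported: exported.length,
    jsdoc: jsdoc.length,
    covered: jsdoc.length >= exported.length,
  };
}

// Hypothetical module source: two exported functions, one JSDoc block
const src = [
  '/** Adds two numbers. */',
  'export function add(a, b) { return a + b; }',
  'export function sub(a, b) { return a - b; }',
].join('\n');
console.log(jsdocCoverage(src));
```

Worth knowing before trusting the score: the regex only sees `function` declarations, so `export const f = () => …` is invisible to it, and a JSDoc block anywhere in the file counts even if it documents nothing exported.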
.github/workflows/agent-performance.yml (new file, 180 lines)

@@ -0,0 +1,180 @@
name: "⚡ Agent: Performance"

on:
  pull_request:
    types: [opened, synchronize, reopened]

permissions:
  contents: read
  pull-requests: write

jobs:
  performance:
    name: Performance Agent
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Get PR stats
        id: pr-stats
        run: |
          # Get diff stats
          ADDITIONS=$(git diff --shortstat origin/${{ github.base_ref }}...HEAD | grep -oP '\d+(?= insertion)' || echo 0)
          DELETIONS=$(git diff --shortstat origin/${{ github.base_ref }}...HEAD | grep -oP '\d+(?= deletion)' || echo 0)
          FILES_CHANGED=$(git diff --name-only origin/${{ github.base_ref }}...HEAD | wc -l)
          echo "additions=$ADDITIONS" >> $GITHUB_OUTPUT
          echo "deletions=$DELETIONS" >> $GITHUB_OUTPUT
          echo "files_changed=$FILES_CHANGED" >> $GITHUB_OUTPUT

      - name: Analyze bundle size impact
        id: bundle
        run: |
          # Check if package.json exists and count dependencies
          if [ -f "package.json" ]; then
            DEPS=$(cat package.json | jq '.dependencies | length // 0')
            DEV_DEPS=$(cat package.json | jq '.devDependencies | length // 0')
            echo "deps=$DEPS" >> $GITHUB_OUTPUT
            echo "dev_deps=$DEV_DEPS" >> $GITHUB_OUTPUT
          else
            echo "deps=0" >> $GITHUB_OUTPUT
            echo "dev_deps=0" >> $GITHUB_OUTPUT
          fi

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v44

      - name: Performance analysis
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const path = require('path');
            const changedFiles = '${{ steps.changed-files.outputs.all_changed_files }}'.split(' ').filter(f => f);
            const additions = parseInt('${{ steps.pr-stats.outputs.additions }}') || 0;
            const deletions = parseInt('${{ steps.pr-stats.outputs.deletions }}') || 0;
            const filesChanged = parseInt('${{ steps.pr-stats.outputs.files_changed }}') || 0;
            const deps = parseInt('${{ steps.bundle.outputs.deps }}') || 0;
            const devDeps = parseInt('${{ steps.bundle.outputs.dev_deps }}') || 0;

            let report = '## ⚡ Performance Agent Report\n\n';
            let warnings = [];
            let suggestions = [];

            // PR Size Analysis
            report += '### 📦 PR Size Analysis\n\n';
            report += `| Metric | Value |\n`;
            report += `|--------|-------|\n`;
            report += `| Files changed | ${filesChanged} |\n`;
            report += `| Lines added | +${additions} |\n`;
            report += `| Lines removed | -${deletions} |\n`;
            report += `| Net change | ${additions - deletions > 0 ? '+' : ''}${additions - deletions} |\n`;
            report += `| Dependencies | ${deps} |\n`;
            report += `| Dev Dependencies | ${devDeps} |\n\n`;

            // PR Size Rating
            const totalChanges = additions + deletions;
            let sizeRating = '';
            if (totalChanges < 100) {
              sizeRating = '🟢 Small PR - Easy to review';
            } else if (totalChanges < 500) {
              sizeRating = '🟡 Medium PR - Moderate review effort';
            } else if (totalChanges < 1000) {
              sizeRating = '🟠 Large PR - Consider breaking down';
              warnings.push('Large PR detected. Consider splitting into smaller PRs for easier review.');
            } else {
              sizeRating = '🔴 Very Large PR - Difficult to review';
              warnings.push('Very large PR! This will be difficult to review. Strongly consider breaking into smaller PRs.');
            }
            report += `**Size Rating:** ${sizeRating}\n\n`;

            // Performance patterns check
            report += '### 🔍 Performance Patterns\n\n';
            const perfPatterns = [
              { pattern: /\.forEach\s*\(/g, msg: 'forEach loop - consider for...of for better performance', severity: 'info' },
              { pattern: /JSON\.parse\s*\(.*JSON\.stringify/g, msg: 'Deep clone via JSON - consider structuredClone()', severity: 'warning' },
              { pattern: /new\s+RegExp\s*\(/g, msg: 'Dynamic RegExp creation - consider caching if used repeatedly', severity: 'info' },
              { pattern: /document\.querySelector.*loop|for.*querySelector/gi, msg: 'DOM query in loop - cache selectors outside loop', severity: 'warning' },
              { pattern: /\bawait\b.*\bawait\b.*\bawait\b/g, msg: 'Multiple sequential awaits - consider Promise.all()', severity: 'warning' },
              { pattern: /\.filter\(.*\)\.map\(/g, msg: 'filter().map() chain - consider reduce() or single pass', severity: 'info' },
              { pattern: /useEffect.*\[\s*\]/g, msg: 'Empty dependency array - ensure this is intentional', severity: 'info' },
              { pattern: /new\s+Date\(\).*loop|for.*new\s+Date/gi, msg: 'Date creation in loop - cache Date object', severity: 'warning' },
            ];
            let patternFindings = [];
            for (const file of changedFiles) {
              try {
                const content = fs.readFileSync(file, 'utf8');
                for (const { pattern, msg, severity } of perfPatterns) {
                  // match() rather than test(): /g regexes carry lastIndex state
                  // between test() calls, which would skip matches across files
                  if (content.match(pattern)) {
                    patternFindings.push({ file, msg, severity });
                  }
                }
                // Check file size
                const lines = content.split('\n').length;
                if (lines > 500) {
                  warnings.push(`\`${file}\` has ${lines} lines - consider splitting into smaller modules`);
                }
              } catch (e) {}
            }
            if (patternFindings.length > 0) {
              patternFindings.slice(0, 10).forEach(({ file, msg, severity }) => {
                const icon = severity === 'warning' ? '⚠️' : '';
                report += `- ${icon} **${file}**: ${msg}\n`;
              });
              if (patternFindings.length > 10) {
                report += `\n*...and ${patternFindings.length - 10} more findings*\n`;
              }
            } else {
              report += '✅ No performance anti-patterns detected!\n';
            }
            report += '\n';

            // Warnings
            if (warnings.length > 0) {
              report += '### ⚠️ Warnings\n\n';
              warnings.forEach(w => report += `- ${w}\n`);
              report += '\n';
            }

            // Bundle impact estimation
            report += '### 📊 Impact Assessment\n\n';
            // Check for new dependencies in package.json changes
            const pkgChanged = changedFiles.some(f => f.includes('package.json'));
            if (pkgChanged) {
              report += '⚠️ `package.json` was modified - bundle size may be affected.\n';
              report += 'Consider running bundle analysis after merging.\n\n';
            }

            // Recommendations
            report += '### 💡 Recommendations\n\n';
            if (totalChanges > 500) {
              report += '- Consider breaking this PR into smaller, focused changes\n';
            }
            if (patternFindings.some(f => f.severity === 'warning')) {
              report += '- Review the performance warnings above\n';
            }
            report += '- Run performance tests before and after merging\n';
            report += '- Monitor production metrics after deployment\n';
            report += '\n---\n*⚡ Automated analysis by Performance Agent*';

            // Post comment
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: report
            });

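One of the perf-pattern warnings flags `JSON.parse(JSON.stringify(...))` deep clones. A small illustration of why `structuredClone()` (built in since Node 17) is the suggested replacement:

```javascript
// JSON round-trip cloning silently mangles types that structuredClone preserves.
function jsonClone(obj) {
  return JSON.parse(JSON.stringify(obj));
}

const original = { when: new Date(0), tags: new Map([['a', 1]]) };

const viaJson = jsonClone(original);
// Dates become ISO strings, Maps become empty objects
console.log(typeof viaJson.when, viaJson.tags);

const viaStructured = structuredClone(original);
// Types survive: still a Date, and the Map entry is intact
console.log(viaStructured.when instanceof Date, viaStructured.tags.get('a'));
```

The JSON trick also throws on circular references and drops `undefined` values, so the swap is usually a strict improvement unless you specifically want JSON-safe output.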
Security audit workflow (new file)

@@ -0,0 +1,127 @@
name: "🛡️ Agent: Security Audit"

on:
  pull_request:
    types: [opened, synchronize, reopened]

permissions:
  contents: read
  pull-requests: write
  security-events: write

jobs:
  security-audit:
    name: Security Audit Agent
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Get PR diff
        id: diff
        run: |
          git diff origin/${{ github.base_ref }}...HEAD > pr_diff.txt
          echo "diff_size=$(wc -l < pr_diff.txt)" >> $GITHUB_OUTPUT

      - name: Run security patterns check
        id: security-check
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const diff = fs.readFileSync('pr_diff.txt', 'utf8');
            const securityPatterns = [
              { pattern: /eval\s*\(/gi, severity: 'high', msg: 'Dangerous eval() usage detected' },
              { pattern: /innerHTML\s*=/gi, severity: 'medium', msg: 'innerHTML assignment - potential XSS' },
              { pattern: /document\.write/gi, severity: 'medium', msg: 'document.write usage - potential XSS' },
              { pattern: /dangerouslySetInnerHTML/gi, severity: 'medium', msg: 'React dangerouslySetInnerHTML - ensure sanitized' },
              { pattern: /exec\s*\(/gi, severity: 'high', msg: 'Shell exec detected - potential command injection' },
              { pattern: /subprocess|os\.system/gi, severity: 'high', msg: 'System command execution detected' },
              { pattern: /localStorage\.setItem.*password/gi, severity: 'high', msg: 'Storing password in localStorage' },
              { pattern: /Bearer\s+[A-Za-z0-9\-_]+\.[A-Za-z0-9\-_]+/gi, severity: 'critical', msg: '⚠️ Possible JWT token in code' },
              { pattern: /sk-[A-Za-z0-9]{32,}/gi, severity: 'critical', msg: '⚠️ Possible API key detected' },
              { pattern: /AKIA[0-9A-Z]{16}/gi, severity: 'critical', msg: '⚠️ Possible AWS key detected' },
              { pattern: /ghp_[A-Za-z0-9]{36}/gi, severity: 'critical', msg: '⚠️ Possible GitHub token detected' },
              { pattern: /-----BEGIN (RSA |EC |DSA |OPENSSH )?PRIVATE KEY-----/gi, severity: 'critical', msg: '⚠️ Private key detected!' },
              { pattern: /sql.*['"]\s*\+/gi, severity: 'high', msg: 'Potential SQL injection - use parameterized queries' },
              { pattern: /password\s*[=:]\s*['"][^'"]+['"]/gi, severity: 'high', msg: 'Hardcoded password detected' },
            ];
            let findings = { critical: [], high: [], medium: [], low: [] };
            let totalIssues = 0;
            for (const { pattern, severity, msg } of securityPatterns) {
              const matches = diff.match(pattern);
              if (matches) {
                findings[severity].push({ msg, count: matches.length });
                totalIssues += matches.length;
              }
            }

            // Build report
            let report = '## 🛡️ Security Audit Agent Report\n\n';
            if (totalIssues === 0) {
              report += '✅ **No security issues detected!**\n\n';
              report += 'The changes in this PR passed all security checks.\n';
            } else {
              report += `⚠️ **Found ${totalIssues} potential security issue(s)**\n\n`;
              if (findings.critical.length > 0) {
                report += '### 🚨 Critical\n';
                findings.critical.forEach(f => report += `- ${f.msg} (${f.count} occurrence(s))\n`);
                report += '\n';
              }
              if (findings.high.length > 0) {
                report += '### 🔴 High\n';
                findings.high.forEach(f => report += `- ${f.msg} (${f.count} occurrence(s))\n`);
                report += '\n';
              }
              if (findings.medium.length > 0) {
                report += '### 🟠 Medium\n';
                findings.medium.forEach(f => report += `- ${f.msg} (${f.count} occurrence(s))\n`);
                report += '\n';
              }
            }
            report += '\n### Checks Performed\n';
            report += '- [x] Secrets & API keys scan\n';
            report += '- [x] SQL injection patterns\n';
            report += '- [x] XSS vulnerability patterns\n';
            report += '- [x] Command injection patterns\n';
            report += '- [x] Hardcoded credentials\n';
            report += '\n---\n*🛡️ Automated audit by Security Agent*';

            // Post comment
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: report
            });

            // Fail on critical findings
            if (findings.critical.length > 0) {
              core.setFailed('Critical security issues found! Please review before merging.');
            }
            return { findings, totalIssues };

      - name: Run npm audit (if package.json exists)
        continue-on-error: true
        run: |
          if [ -f "package.json" ]; then
            npm audit --audit-level=high || echo "Vulnerabilities found"
          fi

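The audit step is a set of regexes run over the raw PR diff. A standalone sketch of the secret-scanning core, with a subset of the patterns (the "credentials" below are fabricated and non-functional):

```javascript
// Subset of the Security Audit Agent's patterns, applied to raw diff text.
const secretPatterns = [
  { name: 'AWS access key', re: /AKIA[0-9A-Z]{16}/g },
  { name: 'GitHub token',   re: /ghp_[A-Za-z0-9]{36}/g },
];

function scanDiff(diff) {
  const hits = [];
  for (const { name, re } of secretPatterns) {
    // String.match with a /g regex returns all matches and avoids the
    // lastIndex statefulness of calling re.test() repeatedly
    const matches = diff.match(re);
    if (matches) hits.push({ name, count: matches.length });
  }
  return hits;
}

// Fabricated example lines - not real credentials
const diff = [
  '+const key = "AKIAABCDEFGHIJKLMNOP";',
  '+const ok = process.env.AWS_ACCESS_KEY_ID;',
].join('\n');
console.log(scanDiff(diff));
```

Scanning the diff rather than the whole tree keeps the check proportional to the PR and avoids flagging pre-existing code the PR didn't touch.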
Test coverage workflow (new file)

@@ -0,0 +1,173 @@
name: "🧪 Agent: Test Coverage"

on:
  pull_request:
    types: [opened, synchronize, reopened]

permissions:
  contents: read
  pull-requests: write

jobs:
  test-coverage:
    name: Test Coverage Agent
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
        continue-on-error: true

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v44
        with:
          files: |
            **/*.ts
            **/*.tsx
            **/*.js
            **/*.jsx

      - name: Analyze test coverage
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const path = require('path');
            const changedFiles = '${{ steps.changed-files.outputs.all_changed_files }}'.split(' ').filter(f => f);
            let report = '## 🧪 Test Coverage Agent Report\n\n';
            let stats = {
              sourceFiles: [],
              testFiles: [],
              missingTests: [],
              hasTestFramework: false
            };

            // Check for test framework
            try {
              const packageJson = JSON.parse(fs.readFileSync('package.json', 'utf8'));
              const deps = { ...packageJson.dependencies, ...packageJson.devDependencies };
              stats.hasTestFramework = !!(deps.jest || deps.vitest || deps.mocha || deps['@testing-library/react']);
            } catch (e) {}

            // Analyze changed files
            for (const file of changedFiles) {
              const isTest = file.includes('.test.') ||
                file.includes('.spec.') ||
                file.includes('__tests__') ||
                file.includes('test/') ||
                file.includes('tests/');
              if (isTest) {
                stats.testFiles.push(file);
              } else {
                stats.sourceFiles.push(file);
                // Check if corresponding test exists
                const basename = path.basename(file, path.extname(file));
                const dirname = path.dirname(file);
                const testPatterns = [
                  `${dirname}/${basename}.test${path.extname(file)}`,
                  `${dirname}/${basename}.spec${path.extname(file)}`,
                  `${dirname}/__tests__/${basename}.test${path.extname(file)}`,
                  `__tests__/${basename}.test${path.extname(file)}`,
                ];
                let hasTest = false;
                for (const testPath of testPatterns) {
                  if (fs.existsSync(testPath)) {
                    hasTest = true;
                    break;
                  }
                }
                if (!hasTest && !file.includes('index.') && !file.includes('.d.ts')) {
                  stats.missingTests.push(file);
                }
              }
            }

            // Calculate coverage percentage
            const coveragePercent = stats.sourceFiles.length > 0
              ? Math.round(((stats.sourceFiles.length - stats.missingTests.length) / stats.sourceFiles.length) * 100)
              : 100;

            // Build report
            report += '### 📊 Test Analysis\n\n';
            report += `| Metric | Value |\n`;
            report += `|--------|-------|\n`;
            report += `| Source files changed | ${stats.sourceFiles.length} |\n`;
            report += `| Test files changed | ${stats.testFiles.length} |\n`;
            report += `| Files with tests | ${stats.sourceFiles.length - stats.missingTests.length} |\n`;
            report += `| Files missing tests | ${stats.missingTests.length} |\n`;
            report += `| Test framework | ${stats.hasTestFramework ? '✅ Detected' : '❌ Not found'} |\n\n`;
            report += `### 📈 Test Coverage Score: ${coveragePercent}%\n\n`;

            // Progress bar
            const filled = Math.round(coveragePercent / 10);
            const empty = 10 - filled;
            report += `\`[${'█'.repeat(filled)}${'░'.repeat(empty)}]\`\n\n`;
            if (coveragePercent >= 80) {
              report += '✅ Excellent test coverage!\n\n';
            } else if (coveragePercent >= 50) {
              report += '⚠️ Consider adding more tests for better coverage.\n\n';
            } else {
              report += '❌ Low test coverage. Please add tests for your changes.\n\n';
            }

            // Missing tests
            if (stats.missingTests.length > 0) {
              report += '### 🔍 Files Missing Tests\n\n';
              stats.missingTests.slice(0, 10).forEach(f => {
                report += `- \`${f}\`\n`;
              });
              if (stats.missingTests.length > 10) {
                report += `\n*...and ${stats.missingTests.length - 10} more files*\n`;
              }
              report += '\n';
            }

            // Recommendations
            report += '### 💡 Recommendations\n\n';
            if (!stats.hasTestFramework) {
              report += '- Consider adding a test framework (Jest, Vitest, etc.)\n';
            }
            if (stats.testFiles.length === 0 && stats.sourceFiles.length > 0) {
              report += '- No test files in this PR - consider adding tests\n';
            }
            if (stats.missingTests.length > 0) {
              report += '- Add unit tests for the files listed above\n';
            }
            if (coveragePercent === 100) {
              report += '- All changed files have corresponding tests! 🎉\n';
            }
            report += '\n---\n*🧪 Automated analysis by Test Coverage Agent*';

            // Post comment
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: report
            });

      - name: Run tests (if available)
        continue-on-error: true
        run: |
          if [ -f "package.json" ]; then
            npm ci --ignore-scripts 2>/dev/null || npm install --ignore-scripts 2>/dev/null || true
            npm test 2>/dev/null || echo "No tests configured or tests failed"
          fi

.github/workflows/auto-deploy-pr.yml (new file, 242 lines)

@@ -0,0 +1,242 @@
name: Auto Deploy PR
on:
pull_request:
types: [opened, synchronize, reopened]
push:
branches: [main, master, RAILWAY-DEPLOY-BRANCH]
env:
RAILWAY_TOKEN: ${{ secrets.RAILWAY_TOKEN }}
CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
jobs:
detect-and-deploy:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
deployments: write
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Detect Project Type
id: detect
run: |
# Detect what kind of project this is
if [ -f "railway.toml" ] || [ -f "railway.json" ]; then
echo "platform=railway" >> $GITHUB_OUTPUT
elif [ -f "wrangler.toml" ] || [ -f "wrangler.json" ]; then
echo "platform=cloudflare-workers" >> $GITHUB_OUTPUT
elif [ -f "next.config.js" ] || [ -f "next.config.mjs" ] || [ -f "next.config.ts" ]; then
echo "platform=cloudflare-pages" >> $GITHUB_OUTPUT
echo "framework=next" >> $GITHUB_OUTPUT
elif [ -f "vite.config.ts" ] || [ -f "vite.config.js" ]; then
echo "platform=cloudflare-pages" >> $GITHUB_OUTPUT
echo "framework=vite" >> $GITHUB_OUTPUT
elif [ -f "Dockerfile" ]; then
echo "platform=railway" >> $GITHUB_OUTPUT
elif [ -f "requirements.txt" ] || [ -f "pyproject.toml" ]; then
echo "platform=railway" >> $GITHUB_OUTPUT
echo "framework=python" >> $GITHUB_OUTPUT
elif [ -f "package.json" ]; then
echo "platform=cloudflare-pages" >> $GITHUB_OUTPUT
echo "framework=static" >> $GITHUB_OUTPUT
else
echo "platform=railway" >> $GITHUB_OUTPUT
fi
# Get repo name for service naming
echo "repo_name=${GITHUB_REPOSITORY#*/}" >> $GITHUB_OUTPUT
# Set environment based on event
if [ "${{ github.event_name }}" == "pull_request" ]; then
echo "environment=preview" >> $GITHUB_OUTPUT
echo "pr_number=${{ github.event.pull_request.number }}" >> $GITHUB_OUTPUT
else
echo "environment=production" >> $GITHUB_OUTPUT
fi
- name: Install Dependencies
if: steps.detect.outputs.platform == 'cloudflare-pages' || steps.detect.outputs.platform == 'cloudflare-workers'
run: |
if [ -f "package.json" ]; then
npm ci || npm install
fi
- name: Build (if needed)
if: steps.detect.outputs.platform == 'cloudflare-pages'
run: |
if [ -f "package.json" ]; then
npm run build || true
fi
# ============ RAILWAY DEPLOYMENT ============
- name: Install Railway CLI
if: steps.detect.outputs.platform == 'railway' && env.RAILWAY_TOKEN != ''
run: npm install -g @railway/cli
- name: Deploy to Railway
if: steps.detect.outputs.platform == 'railway' && env.RAILWAY_TOKEN != ''
id: railway
run: |
# Link to existing project or create new one
railway link --environment ${{ steps.detect.outputs.environment }} 2>/dev/null || true
# Deploy
DEPLOY_URL=$(railway up --detach 2>&1 | grep -oP 'https://[^\s]+' | head -1 || echo "")
if [ -n "$DEPLOY_URL" ]; then
echo "url=$DEPLOY_URL" >> $GITHUB_OUTPUT
echo "success=true" >> $GITHUB_OUTPUT
else
# Fall back to railway status for the URL (it may be empty; the PR comment step handles that)
DEPLOY_URL=$(railway status --json 2>/dev/null | jq -r '.deployments[0].url // empty' || echo "")
echo "url=$DEPLOY_URL" >> $GITHUB_OUTPUT
echo "success=true" >> $GITHUB_OUTPUT
fi
# ============ CLOUDFLARE PAGES DEPLOYMENT ============
- name: Deploy to Cloudflare Pages
if: steps.detect.outputs.platform == 'cloudflare-pages' && env.CLOUDFLARE_API_TOKEN != ''
id: cloudflare-pages
run: |
npm install -g wrangler
# Determine output directory
if [ -d "dist" ]; then
OUTPUT_DIR="dist"
elif [ -d "build" ]; then
OUTPUT_DIR="build"
elif [ -d "out" ]; then
OUTPUT_DIR="out"
elif [ -d ".next" ]; then
OUTPUT_DIR=".next"
elif [ -d "public" ]; then
OUTPUT_DIR="public"
else
OUTPUT_DIR="."
fi
PROJECT_NAME="${{ steps.detect.outputs.repo_name }}"
# Create project if it doesn't exist
wrangler pages project create "$PROJECT_NAME" --production-branch main 2>/dev/null || true
# Deploy
if [ "${{ steps.detect.outputs.environment }}" == "preview" ]; then
RESULT=$(wrangler pages deploy "$OUTPUT_DIR" --project-name="$PROJECT_NAME" --branch="pr-${{ steps.detect.outputs.pr_number }}" 2>&1)
else
RESULT=$(wrangler pages deploy "$OUTPUT_DIR" --project-name="$PROJECT_NAME" --branch=main 2>&1)
fi
DEPLOY_URL=$(echo "$RESULT" | grep -oP 'https://[^\s]+\.pages\.dev' | head -1 || echo "")
echo "url=$DEPLOY_URL" >> $GITHUB_OUTPUT
echo "success=true" >> $GITHUB_OUTPUT
# ============ CLOUDFLARE WORKERS DEPLOYMENT ============
- name: Deploy to Cloudflare Workers
if: steps.detect.outputs.platform == 'cloudflare-workers' && env.CLOUDFLARE_API_TOKEN != ''
id: cloudflare-workers
run: |
npm install -g wrangler
if [ "${{ steps.detect.outputs.environment }}" == "preview" ]; then
RESULT=$(wrangler deploy --env preview 2>&1 || wrangler deploy 2>&1)
else
RESULT=$(wrangler deploy 2>&1)
fi
DEPLOY_URL=$(echo "$RESULT" | grep -oP 'https://[^\s]+\.workers\.dev' | head -1 || echo "")
echo "url=$DEPLOY_URL" >> $GITHUB_OUTPUT
echo "success=true" >> $GITHUB_OUTPUT
# ============ COMMENT ON PR ============
- name: Comment Deployment URL on PR
if: github.event_name == 'pull_request' && (steps.railway.outputs.success == 'true' || steps.cloudflare-pages.outputs.success == 'true' || steps.cloudflare-workers.outputs.success == 'true')
uses: actions/github-script@v7
with:
script: |
const railwayUrl = '${{ steps.railway.outputs.url }}';
const pagesUrl = '${{ steps.cloudflare-pages.outputs.url }}';
const workersUrl = '${{ steps.cloudflare-workers.outputs.url }}';
const platform = '${{ steps.detect.outputs.platform }}';
const deployUrl = railwayUrl || pagesUrl || workersUrl || ''; // empty lets the table below fall back to 'Deploying...'
const platformEmoji = {
'railway': '🚂',
'cloudflare-pages': '📄',
'cloudflare-workers': '⚡'
};
const body = `## ${platformEmoji[platform] || '🚀'} Preview Deployment Ready!
| Platform | URL |
|----------|-----|
| **${platform}** | ${deployUrl ? `[${deployUrl}](${deployUrl})` : 'Deploying...'} |
---
🤖 *Auto-deployed by BlackRoad OS*`;
// Find existing comment
const { data: comments } = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
});
const botComment = comments.find(c => c.body.includes('Preview Deployment Ready'));
if (botComment) {
await github.rest.issues.updateComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: botComment.id,
body: body
});
} else {
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: body
});
}
# ============ CREATE GITHUB DEPLOYMENT ============
- name: Create GitHub Deployment Status
if: steps.railway.outputs.success == 'true' || steps.cloudflare-pages.outputs.success == 'true' || steps.cloudflare-workers.outputs.success == 'true'
uses: actions/github-script@v7
with:
script: |
const deployUrl = '${{ steps.railway.outputs.url }}' || '${{ steps.cloudflare-pages.outputs.url }}' || '${{ steps.cloudflare-workers.outputs.url }}';
const environment = '${{ steps.detect.outputs.environment }}';
const deployment = await github.rest.repos.createDeployment({
owner: context.repo.owner,
repo: context.repo.repo,
ref: context.sha,
environment: environment,
auto_merge: false,
required_contexts: []
});
if (deployment.data.id) {
await github.rest.repos.createDeploymentStatus({
owner: context.repo.owner,
repo: context.repo.repo,
deployment_id: deployment.data.id,
state: 'success',
environment_url: deployUrl,
description: 'Deployment successful'
});
}
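The detect step's cascade is easiest to sanity-check outside the runner. Below is a local sketch of the same ordering (the `detect_platform` helper is hypothetical, not part of the workflow): a Next.js or Vite config outranks a bare `package.json`, and Railway is the fallback.

```shell
# Local sketch of the workflow's platform-detection cascade.
detect_platform() {
  if [ -f next.config.js ] || [ -f next.config.mjs ] || [ -f next.config.ts ]; then
    echo "cloudflare-pages/next"
  elif [ -f vite.config.ts ] || [ -f vite.config.js ]; then
    echo "cloudflare-pages/vite"
  elif [ -f Dockerfile ]; then
    echo "railway"
  elif [ -f requirements.txt ] || [ -f pyproject.toml ]; then
    echo "railway/python"
  elif [ -f package.json ]; then
    echo "cloudflare-pages/static"
  else
    echo "railway"
  fi
}

tmp=$(mktemp -d) && cd "$tmp"
touch package.json vite.config.ts
detect_platform   # → cloudflare-pages/vite (the Vite config outranks the bare package.json)
```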

.github/workflows/auto-merge.yml (new file, 81 lines)
name: Auto-Approve and Merge
# This workflow automatically approves and merges PRs when:
# 1. CI passes
# 2. PR is from a trusted source (you, Codex, or designated bots)
#
# No human approval required. CI is the reviewer.
on:
pull_request:
types: [opened, synchronize, reopened]
check_suite:
types: [completed]
workflow_run:
workflows: ["CI"] # Replace with your actual CI workflow name
types: [completed]
permissions:
contents: write
pull-requests: write
jobs:
auto-merge:
runs-on: ubuntu-latest
# Only run for trusted actors
# Add your GitHub username, the Codex bot, and any other trusted sources
if: |
github.actor == 'YOUR_GITHUB_USERNAME' ||
github.actor == 'codex-bot' ||
github.actor == 'dependabot[bot]' ||
github.actor == 'github-actions[bot]'
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Wait for CI to complete
uses: fountainhead/action-wait-for-check@v1.1.0
id: wait-for-ci
with:
token: ${{ secrets.GITHUB_TOKEN }}
checkName: build # Replace with your CI check name
ref: ${{ github.event.pull_request.head.sha }}
timeoutSeconds: 300
intervalSeconds: 10
- name: Auto-approve PR
if: steps.wait-for-ci.outputs.conclusion == 'success'
uses: hmarr/auto-approve-action@v4
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Auto-merge PR
if: steps.wait-for-ci.outputs.conclusion == 'success'
uses: pascalgn/automerge-action@v0.16.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
MERGE_METHOD: squash
MERGE_COMMIT_MESSAGE: pull-request-title
MERGE_DELETE_BRANCH: true
UPDATE_METHOD: rebase
- name: Add blocked label on CI failure
if: steps.wait-for-ci.outputs.conclusion == 'failure'
uses: actions/github-script@v7
with:
script: |
github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.payload.pull_request.number,
labels: ['blocked', 'ci-failed']
});
github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.payload.pull_request.number,
body: '🔴 **CI Failed** - Auto-merge blocked. Check the logs and fix the issue.'
});

.github/workflows/branch-tracker.yml (new file, 83 lines)
name: Track Branch Activity
# When a branch is created that matches an issue number,
# automatically update the project board to show work has started
on:
create:
branches:
- 'issue-*'
- 'fix-*'
- 'feat-*'
push:
branches:
- 'issue-*'
- 'fix-*'
- 'feat-*'
permissions:
issues: write
repository-projects: write
jobs:
track-branch:
runs-on: ubuntu-latest
steps:
- name: Extract issue number from branch name
id: extract
run: |
BRANCH_NAME="${{ github.ref_name }}"
echo "Branch: $BRANCH_NAME"
# Extract issue number from branch name
# Supports: issue-123-description, fix-123-bug, feat-123-feature
ISSUE_NUM=$(echo "$BRANCH_NAME" | grep -oP '(?<=issue-|fix-|feat-)\d+' | head -1)
if [ -n "$ISSUE_NUM" ]; then
echo "issue_number=$ISSUE_NUM" >> $GITHUB_OUTPUT
echo "Found issue number: $ISSUE_NUM"
else
echo "No issue number found in branch name"
echo "issue_number=" >> $GITHUB_OUTPUT
fi
- name: Add 'in-progress' label to linked issue
if: steps.extract.outputs.issue_number != ''
uses: actions/github-script@v7
with:
script: |
const issueNumber = ${{ steps.extract.outputs.issue_number }};
try {
// Add in-progress label
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issueNumber,
labels: ['in-progress']
});
// Remove triage/inbox labels if present
try {
await github.rest.issues.removeLabel({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issueNumber,
name: 'triage'
});
} catch (e) {
// Label might not exist, that's fine
}
// Add comment showing work has started
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issueNumber,
body: `🚀 **Work started** on branch \`${{ github.ref_name }}\`\n\nActor: ${{ github.actor }}`
});
console.log(`Updated issue #${issueNumber} - work in progress`);
} catch (error) {
console.log(`Could not update issue #${issueNumber}: ${error.message}`);
}
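The extraction regex is the load-bearing part of this workflow, so it is worth exercising locally with the same GNU grep `-P` (PCRE) invocation the runner uses. The `extract_issue` wrapper is just for illustration:

```shell
# Exercise the workflow's issue-number extraction with the same PCRE.
# Requires GNU grep (for the -P flag), as on the ubuntu-latest runner.
extract_issue() {
  echo "$1" | grep -oP '(?<=issue-|fix-|feat-)\d+' | head -1
}

extract_issue "issue-123-login-redirect"   # → 123
extract_issue "feat-42-dark-mode"          # → 42
extract_issue "main"                       # → (empty: no issue number found)
```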

.github/workflows/issue-to-board.yml (new file, 56 lines)
name: Add Issue to Project Board
# Automatically adds new issues to your GitHub Project board
# No manual card creation needed
on:
issues:
types: [opened]
permissions:
issues: write
repository-projects: write
env:
# Replace with your Project number (find in Project URL)
# Example: https://github.com/users/YOUR_USERNAME/projects/1 → PROJECT_NUMBER=1
PROJECT_NUMBER: 1
# Replace with your GitHub username or org
PROJECT_OWNER: YOUR_GITHUB_USERNAME
jobs:
add-to-project:
runs-on: ubuntu-latest
steps:
- name: Add issue to project
uses: actions/add-to-project@v0.5.0
with:
project-url: https://github.com/users/${{ env.PROJECT_OWNER }}/projects/${{ env.PROJECT_NUMBER }}
github-token: ${{ secrets.PROJECT_TOKEN }}
# Note: You need a PAT with project permissions, not GITHUB_TOKEN
# Create one at: Settings → Developer settings → Personal access tokens
# Scopes needed: project, repo
- name: Set initial status to Inbox
uses: actions/github-script@v7
with:
github-token: ${{ secrets.PROJECT_TOKEN }}
script: |
// This sets the Status field to "Inbox" on the project board
// You'll need to customize field IDs for your specific project
console.log('Issue #${{ github.event.issue.number }} added to project board');
console.log('Title: ${{ github.event.issue.title }}');
- name: Add triage label if no labels
if: github.event.issue.labels[0] == null
uses: actions/github-script@v7
with:
script: |
github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: ['triage']
});

.github/workflows/stale-cleanup.yml (new file, 69 lines)
name: Stale Issue Cleanup
# Automatically flags and closes stale issues
# Keeps your board clean without manual gardening
on:
schedule:
# Run daily at midnight UTC
- cron: '0 0 * * *'
workflow_dispatch:
# Allow manual trigger
permissions:
issues: write
pull-requests: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- name: Mark and close stale issues
uses: actions/stale@v9
with:
# Issues
stale-issue-message: |
⏰ **This issue has been inactive for 14 days.**
If this is still needed, please comment or update the issue.
Otherwise, it will be closed in 7 days.
To keep this open: add a comment or remove the `stale` label.
close-issue-message: |
🗂️ **Closed due to inactivity.**
If this is still needed, reopen the issue or create a new one.
stale-issue-label: 'stale'
days-before-issue-stale: 14
days-before-issue-close: 7
# PRs - more aggressive since they should flow fast
stale-pr-message: |
⏰ **This PR has been inactive for 7 days.**
If CI is failing, fix it. If it's blocked, add the `blocked` label.
Otherwise, this PR will be closed in 3 days.
close-pr-message: |
🗂️ **Closed due to inactivity.**
If this work is still needed, create a new PR.
stale-pr-label: 'stale'
days-before-pr-stale: 7
days-before-pr-close: 3
# Exemptions
exempt-issue-labels: 'pinned,security,blocked,p0-now'
exempt-pr-labels: 'pinned,security,blocked'
# Exempt any PR that belongs to a milestone
exempt-all-pr-milestones: true
# Operations per run (GitHub API limits)
operations-per-run: 100
- name: Report cleanup stats
uses: actions/github-script@v7
with:
script: |
console.log('Stale cleanup completed');
console.log('Check the Actions log for details on marked/closed items');

scripts/create-issue.sh (new file, 123 lines)
#!/bin/bash
# create-issue.sh
# Quick issue creation from the command line
#
# Usage:
# ./create-issue.sh "Fix the login redirect bug"
# ./create-issue.sh "Add dark mode toggle" --priority p1
# ./create-issue.sh "Refactor auth module" --agent
#
# Requirements:
# - GitHub CLI (gh) installed and authenticated
# - Run from within a git repo
set -e
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Default values
PRIORITY="P2"
LABELS="task"
AGENT_MODE=false
TITLE=""
BODY=""
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--priority|-p)
PRIORITY="${2^^}" # Uppercase
shift 2
;;
--agent|-a)
AGENT_MODE=true
LABELS="agent-task,automated"
shift
;;
--bug|-b)
LABELS="bug"
shift
;;
--body)
BODY="$2"
shift 2
;;
--help|-h)
echo "Usage: $0 \"Issue title\" [options]"
echo ""
echo "Options:"
echo " --priority, -p <P0|P1|P2|P3> Set priority (default: P2)"
echo " --agent, -a Mark as agent task"
echo " --bug, -b Mark as bug"
echo " --body \"text\" Add body text"
echo ""
echo "Examples:"
echo " $0 \"Fix login redirect\""
echo " $0 \"Add dark mode\" --priority p1"
echo " $0 \"Refactor auth\" --agent --priority p0"
exit 0
;;
*)
if [[ -z "$TITLE" ]]; then
TITLE="$1"
fi
shift
;;
esac
done
# Validate
if [[ -z "$TITLE" ]]; then
echo -e "${RED}Error: Issue title required${NC}"
echo "Usage: $0 \"Issue title\" [options]"
exit 1
fi
# Check gh is installed
if ! command -v gh &> /dev/null; then
echo -e "${RED}Error: GitHub CLI (gh) not installed${NC}"
echo "Install: https://cli.github.com/"
exit 1
fi
# Build the issue
echo -e "${YELLOW}Creating issue...${NC}"
# Add priority label
case $PRIORITY in
P0) LABELS="$LABELS,p0-now" ;;
P1) LABELS="$LABELS,p1-today" ;;
P2) LABELS="$LABELS,p2-week" ;;
P3) LABELS="$LABELS,p3-backlog" ;;
esac
# Construct title prefix
if [[ "$LABELS" == *"agent-task"* ]]; then
FULL_TITLE="[AGENT] $TITLE"
elif [[ "$LABELS" == *"bug"* ]]; then
FULL_TITLE="[BUG] $TITLE"
else
FULL_TITLE="[TASK] $TITLE"
fi
# Create the issue
if [[ -n "$BODY" ]]; then
ISSUE_URL=$(gh issue create --title "$FULL_TITLE" --body "$BODY" --label "$LABELS" 2>&1)
else
ISSUE_URL=$(gh issue create --title "$FULL_TITLE" --body "Created via CLI" --label "$LABELS" 2>&1)
fi
echo -e "${GREEN}✅ Issue created${NC}"
echo "$ISSUE_URL"
# Extract issue number for convenience
ISSUE_NUM=$(echo "$ISSUE_URL" | grep -oP '\d+$')
if [[ -n "$ISSUE_NUM" ]]; then
echo ""
echo -e "Branch name: ${YELLOW}issue-$ISSUE_NUM-$(echo "$TITLE" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr -cd 'a-z0-9-' | head -c 30)${NC}"
fi
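The branch-name suggestion at the end of the script is a four-stage `tr` pipeline: lowercase, spaces to dashes, strip anything outside `[a-z0-9-]`, cap at 30 characters. Isolated as a `slugify` helper (the name is ours, not the script's), it behaves like this:

```shell
# The script's branch-name slug, isolated for inspection.
slugify() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | tr -cd 'a-z0-9-' | head -c 30
}

slugify "Fix the Login Redirect Bug!"         # → fix-the-login-redirect-bug
slugify "Add OAuth2 support (Google/GitHub)"  # punctuation is dropped, then truncated to 30 chars
```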

scripts/slack-to-github.js (new file, 183 lines)
/**
* slack-to-github.js
*
* Webhook handler that creates GitHub issues from Slack messages.
* Deploy as: Cloudflare Worker, Vercel Function, or any serverless platform.
*
* Trigger: Slack slash command or bot mention
* Example: /issue Fix the login redirect bug
* Example: @blackroad-bot create issue: Add dark mode toggle
*
* Setup:
* 1. Create a Slack App with slash commands or bot
* 2. Set the request URL to your deployed function
* 3. Set environment variables: GITHUB_TOKEN, GITHUB_OWNER, GITHUB_REPO, SLACK_SIGNING_SECRET
*/
// For Cloudflare Workers
export default {
async fetch(request, env) {
return handleRequest(request, env);
}
};
// For Vercel/other platforms, use:
// export default async function handler(req, res) { ... }
async function handleRequest(request, env) {
// Verify this is a POST request
if (request.method !== 'POST') {
return new Response('Method not allowed', { status: 405 });
}
try {
// Parse the Slack payload
const formData = await request.formData();
const payload = Object.fromEntries(formData);
// For slash commands, the text is in payload.text
// For bot mentions, you'd parse payload differently
const text = payload.text || '';
const userId = payload.user_id || 'unknown';
const userName = payload.user_name || 'unknown';
const channelId = payload.channel_id || '';
if (!text.trim()) {
return slackResponse('Please provide an issue title. Usage: `/issue Fix the login bug`');
}
// Parse priority from text if included
let priority = 'P2';
let title = text;
const priorityMatch = text.match(/\b(p[0-3])\b/i);
if (priorityMatch) {
priority = priorityMatch[1].toUpperCase();
title = text.replace(priorityMatch[0], '').trim();
}
// Detect if this should be an agent task
const isAgentTask = text.toLowerCase().includes('[agent]') ||
text.toLowerCase().includes('--agent');
title = title.replace(/\[agent\]/gi, '').replace(/--agent/gi, '').trim();
// Detect if this is a bug
const isBug = text.toLowerCase().includes('[bug]') ||
text.toLowerCase().includes('--bug');
title = title.replace(/\[bug\]/gi, '').replace(/--bug/gi, '').trim();
// Build labels
const labels = [];
if (isBug) {
labels.push('bug');
} else if (isAgentTask) {
labels.push('agent-task', 'automated');
} else {
labels.push('task');
}
// Add priority label
switch (priority) {
case 'P0': labels.push('p0-now'); break;
case 'P1': labels.push('p1-today'); break;
case 'P2': labels.push('p2-week'); break;
case 'P3': labels.push('p3-backlog'); break;
}
// Build title prefix
let fullTitle = title;
if (isBug) {
fullTitle = `[BUG] ${title}`;
} else if (isAgentTask) {
fullTitle = `[AGENT] ${title}`;
} else {
fullTitle = `[TASK] ${title}`;
}
// Create the GitHub issue
const issue = await createGitHubIssue({
title: fullTitle,
body: `Created from Slack by @${userName}\n\nChannel: <#${channelId}>`,
labels: labels,
env: env
});
// Send success response back to Slack
return slackResponse(
`✅ Issue created: <${issue.html_url}|#${issue.number} ${title}>\n` +
`Priority: ${priority} | Labels: ${labels.join(', ')}`
);
} catch (error) {
console.error('Error:', error);
return slackResponse(`❌ Failed to create issue: ${error.message}`);
}
}
async function createGitHubIssue({ title, body, labels, env }) {
const response = await fetch(
`https://api.github.com/repos/${env.GITHUB_OWNER}/${env.GITHUB_REPO}/issues`,
{
method: 'POST',
headers: {
'Authorization': `Bearer ${env.GITHUB_TOKEN}`,
'Accept': 'application/vnd.github.v3+json',
'Content-Type': 'application/json',
'User-Agent': 'BlackRoad-Slack-Bot'
},
body: JSON.stringify({
title,
body,
labels
})
}
);
if (!response.ok) {
const error = await response.text();
throw new Error(`GitHub API error: ${response.status} - ${error}`);
}
return response.json();
}
function slackResponse(text) {
return new Response(
JSON.stringify({
response_type: 'in_channel', // visible to everyone, or 'ephemeral' for private
text: text
}),
{
headers: {
'Content-Type': 'application/json'
}
}
);
}
/**
* Environment variables needed:
*
* GITHUB_TOKEN - Personal access token with repo scope
* GITHUB_OWNER - Your GitHub username or org
* GITHUB_REPO - Repository name
* SLACK_SIGNING_SECRET - (optional) For verifying Slack requests
*
* To verify Slack requests (recommended for production). Note: this sketch
* uses Node's crypto; on Cloudflare Workers, enable the nodejs_compat
* compatibility flag or port it to the Web Crypto API:
*
* async function verifySlackRequest(request, signingSecret) {
* const timestamp = request.headers.get('x-slack-request-timestamp');
* const signature = request.headers.get('x-slack-signature');
* const body = await request.text();
*
* const baseString = `v0:${timestamp}:${body}`;
* const hmac = crypto.createHmac('sha256', signingSecret);
* hmac.update(baseString);
* const computed = `v0=${hmac.digest('hex')}`;
*
* return crypto.timingSafeEqual(
* Buffer.from(signature),
* Buffer.from(computed)
* );
* }
*/
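Slack's v0 signature (the scheme sketched in the comments above) can also be reproduced from the command line with openssl, which is handy for spot-checking a deployed handler. All values below are placeholders, not real Slack credentials:

```shell
# Reproduce Slack's v0 request signature with openssl,
# mirroring the verifySlackRequest sketch: HMAC-SHA256 over "v0:<ts>:<body>".
sign_slack() {
  secret="$1"; timestamp="$2"; body="$3"
  hex=$(printf 'v0:%s:%s' "$timestamp" "$body" \
    | openssl dgst -sha256 -hmac "$secret" -r | cut -d' ' -f1)
  printf 'v0=%s' "$hex"
}

sign_slack "my-signing-secret" "1531420618" "token=xyz&text=Fix+login"
# prints a "v0="-prefixed 64-hex-digit signature, comparable to x-slack-signature
```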