kpi: auto-update metrics 2026-03-13

RoadChain-SHA2048: c645c1292ab1555e
RoadChain-Identity: alexa@sovereign
2026-03-13 23:16:12 -05:00
parent 0c714c106c
commit ec7b1445b5
25 changed files with 815 additions and 1112 deletions


@@ -8,62 +8,45 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
DevOps engineer operating a 7-node distributed infrastructure spanning edge devices and cloud, with 52 automated tasks, 256 managed systemd services, and 99 Cloudflare deployments. Builds self-healing systems with automated monitoring, fleet-wide power optimization, and daily KPI collection across 9 data sources.
Needed production infrastructure without a team or budget. Built a self-healing 7-node fleet from Raspberry Pis, automated 52 operational tasks, and deployed 99 cloud services — solo, from scratch.
---
## Experience
### BlackRoad OS | Founder & Infrastructure Lead | 2025–Present
### BlackRoad OS | Founder & Senior DevOps Engineer | 2025–Present
**Infrastructure Operations**
- Operate 5 Raspberry Pi nodes + 2 DigitalOcean droplets with WireGuard mesh VPN connecting all endpoints
- Manage 256 systemd services and 35 timers across fleet with self-healing autonomy scripts
- Deploy and maintain 14 Docker containers via Docker Swarm orchestration
- Route 48+ custom domains through 4 Cloudflare tunnels to backend services
**The Problem: Zero Infrastructure, Zero Team**
- No existing infrastructure, no ops team, no vendor contracts — needed production-grade systems running 48+ domains on day one
- Solved by designing a hybrid fleet: 5 Pi nodes + 2 cloud VMs + Cloudflare edge, all connected via WireGuard mesh VPN — total cost under $700 hardware
- Result: 256 systemd services running across fleet, 48 Nginx reverse proxy sites, 14 Docker containers — all managed by one person
**CI/CD & Automation**
- Built 212 CLI tools (121 MB) for infrastructure management and deployment
- Maintain 17 Mac cron jobs + 35 fleet timers for continuous automation
- Operate GitHub Actions CI/CD pipelines and self-hosted Gitea (207 repos)
- Automated GitHub-to-Gitea relay syncing every 30 minutes
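The relay above can be sketched as a bare-mirror push. This is a minimal sketch, not the production relay: the org name and Gitea URL are placeholders, and the injectable `run` stands in for the real subprocess call so the flow can be exercised offline.

```python
import subprocess

def sync_repo(name, github_org, gitea_url, run=subprocess.run):
    """Mirror one repository from GitHub to Gitea.

    Clones a bare mirror of the GitHub repo, then pushes all refs to
    the Gitea remote. `run` is injectable so the logic is testable
    without network access. Returns True only if both steps succeed.
    """
    src = f"https://github.com/{github_org}/{name}.git"
    dst = f"{gitea_url}/{name}.git"
    cmds = [
        ["git", "clone", "--mirror", src, f"{name}.git"],
        ["git", "--git-dir", f"{name}.git", "push", "--mirror", dst],
    ]
    for cmd in cmds:
        result = run(cmd, capture_output=True)
        if result.returncode != 0:
            return False
    return True
```

In a 30-minute cron job this would loop over the repo list and call `sync_repo` per repository.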
**The Bet: Self-Healing Over Manual Ops**
- Fleet nodes crash, services fail, temperatures spike — manual monitoring doesn't scale for a solo operator running 256 services
- Built autonomy scripts: heartbeat every 60 seconds, heal cycle every 5 minutes, automatic service restarts on failure
- Detected a node cooking at 73.8°C from a runaway Ollama loop — auto-isolated the process, dropped temp to 57.9°C without downtime
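A minimal heal-cycle pass might look like the following. It is a sketch under assumptions, not the production autonomy script: `run` is injectable so the parse-and-restart logic can be tested without a live systemd.

```python
import subprocess

def heal_cycle(run=subprocess.run):
    """One heal pass: list failed units, attempt to restart each.

    Parses `systemctl --failed --plain --no-legend` output (first
    column is the unit name) and issues a restart per failed unit.
    Returns the unit names a restart was attempted for.
    """
    out = run(
        ["systemctl", "--failed", "--plain", "--no-legend"],
        capture_output=True, text=True,
    ).stdout
    failed = [line.split()[0] for line in out.splitlines() if line.strip()]
    for unit in failed:
        run(["systemctl", "restart", unit], capture_output=True)
    return failed
```

Run from cron every 5 minutes, this is the "heal" half of the heartbeat/heal pair.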
**Cloud Infrastructure (Cloudflare)**
- Deployed 99 Pages projects, 22 D1 databases, 46 KV namespaces, 11 R2 buckets
- Manage Cloudflare Workers for edge compute and API routing
**Monitoring & Observability**
- Built daily KPI collection system tracking 60+ metrics across 9 collectors
- Implemented fleet power optimization with CPU governor tuning, voltage monitoring (avg 44.8°C)
- Created distributed observability with traces database and per-node SSH health probes
- Manage 48 Nginx reverse proxy sites with health checking
**Reliability**
- Implemented self-healing cron-based autonomy on all nodes (heartbeat 1m, heal 5m)
- Reduced node temperature from 73.8°C to 57.9°C by identifying and disabling runaway services
- Resolved undervoltage throttling via config.txt optimization (+95mV on one node)
**The Multiplier: 212 CLI Tools**
- Every repeated task became a tool. 212 CLI tools (121 MB) in ~/bin — deploy, probe, audit, sync, report
- GitHub-to-Gitea relay syncs 207 repos every 30 minutes. Daily KPI collection tracks 60+ metrics across 10 data sources
- 99 Cloudflare Pages, 23 D1 databases, 47 KV namespaces, 11 R2 buckets — all deployed and maintained through CLI automation
---
## Technical Skills
**Infrastructure:** Linux (Debian/Raspberry Pi OS), Docker, Docker Swarm, systemd, Nginx, WireGuard
**Cloud:** Cloudflare (Pages, Workers, D1, KV, R2, Tunnels), DigitalOcean, Tailscale
**CI/CD:** GitHub Actions, Gitea, shell scripting, cron automation
**Languages:** Bash (91 scripts, 212 CLI tools), Python, JavaScript/TypeScript
**Monitoring:** Custom KPI system, power monitoring, thermal management, distributed tracing
**Databases:** PostgreSQL (11 DBs), SQLite (230 DBs), D1, KV stores
Linux/Debian, Docker Swarm, systemd, Nginx, WireGuard, Cloudflare, GitHub Actions, Bash, Python
---
## Metrics
| Metric | Value |
|--------|-------|
| Nodes managed | 7 |
| Systemd services | 256 |
| Automated tasks | 52 |
| Docker containers | 14 |
| Cloudflare deployments | 99 |
| CLI tools built | 212 |
| Repos managed | 1,603 GitHub + 207 Gitea |
| Metric | Value | Source |
|--------|-------|--------|
| Systemd Services | *live* | services.sh — systemctl list-units via SSH |
| Docker Containers | *live* | services.sh — docker ps via SSH |
| Fleet Nodes | *live* | fleet.sh — SSH probe to all nodes |
| CF Pages | *live* | cloudflare.sh — wrangler pages list |
| CLI Tools | *live* | local.sh — ls ~/bin \| wc -l |
| Total Repos | *live* | github-all-orgs.sh — gh api repos (17 owners) |
| Nginx Sites | *live* | services.sh — /etc/nginx/sites-enabled via SSH |
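A collector in the spirit of the `services.sh` rows above could parse `--plain --no-legend` output over SSH. The host name and exact commands here are illustrative assumptions, not the real scripts; `run` is injectable so the parsing is testable offline.

```python
import subprocess

def collect_node(host, run=subprocess.run):
    """Collect per-node KPIs over SSH (sketch).

    Counts running systemd services and docker containers by counting
    non-empty lines of machine-friendly command output.
    """
    def ssh(cmd):
        return run(["ssh", host, cmd], capture_output=True, text=True).stdout

    services = ssh("systemctl list-units --type=service --state=running --plain --no-legend")
    containers = ssh("docker ps --quiet")
    return {
        "host": host,
        "systemd_services": sum(1 for l in services.splitlines() if l.strip()),
        "docker_containers": sum(1 for l in containers.splitlines() if l.strip()),
    }
```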


@@ -8,64 +8,44 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
AI/ML engineer operating 27 deployed language models (48.1 GB) across a distributed edge fleet with 52 TOPS of dedicated AI acceleration. Builds custom model pipelines, inference APIs, and AI-powered automation across a 7-node infrastructure serving 48+ domains.
Cloud AI APIs are expensive and you don't own the data. Deployed 27 language models on-premise across edge hardware with 52 TOPS of dedicated acceleration — full inference sovereignty at a fraction of the cost.
---
## Experience
### BlackRoad OS | Founder & AI Lead | 2025–Present
### BlackRoad OS | Founder & AI/ML Engineer | 2025–Present
**Model Deployment & Operations**
- Deploy and manage 27 Ollama models (48.1 GB) across 3 edge nodes
- Fine-tuned 4 custom CECE personality models for domain-specific generation
- Operate 2× Hailo-8 NPUs (26 TOPS each = 52 TOPS total) for accelerated inference
- Built Ollama Bridge SSE proxy for streaming model responses to web clients
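The SSE framing such a bridge relies on is simple. This helper is a generic sketch of the server-sent-events wire format, not the actual Ollama Bridge code.

```python
def sse_event(data, event=None):
    """Frame one server-sent event: an optional `event:` line, one
    `data:` line per payload line, terminated by a blank line."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    for part in data.splitlines() or [""]:
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"
```

A streaming endpoint would yield `sse_event(token)` per generated token, letting browsers consume the stream via `EventSource` or `fetch`.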
**The Problem: AI Without Vendor Lock-In**
- Needed persistent, private AI inference without per-token API costs or data leaving the network
- Deployed 27 Ollama models (48.1 GB) across 3 Pi 5 nodes — installed 2× Hailo-8 NPUs (52 TOPS total) for hardware acceleration
- Fine-tuned 4 custom CECE personality models for domain-specific generation — models that don't exist anywhere else
**AI Infrastructure**
- Designed multi-node inference routing across Pi 5 fleet (20 GB RAM total)
- Built AI image generation hub (images.blackroad.io) with 4 backend agents (DALL-E, Flux, SDXL)
- Manage model lifecycle: deployment, monitoring, memory optimization, thermal throttle prevention
- Reduced inference node temperature from 73.8°C to 57.9°C by isolating runaway generation loops
**The Challenge: Thermals Kill Edge AI**
- Inference on $80 hardware generates heat. A runaway generation loop pushed one node to 73.8°C — approaching thermal shutdown
- Built power monitoring (cron every 5 min), CPU governor tuning, and voltage optimization — stabilized fleet at 42°C average
- Reduced GPU memory allocation from 256MB to 16MB on headless nodes, capped frequencies, applied conservative governors — no inference quality loss
**APIs & Integration**
- Built CECE API (FastAPI) for custom model interaction and TTS generation
- Created code search engine indexing 354 repos with FTS5 full-text search
- Operate 11 PostgreSQL databases for AI application state
- Deploy AI services via Cloudflare Workers (99 Pages, 22 D1 databases)
**Data & Knowledge Systems**
- Manage 230 SQLite databases (1.4 GB) for agent memory, metrics, and state
- Built FTS5 knowledge index across 156,675 memory entries
- 111 registered systems tracked in systems database
- Custom RoadC programming language with tree-walking interpreter
**Automation & Monitoring**
- Daily automated KPI collection tracking 60+ AI and infrastructure metrics
- Power optimization across all AI nodes (CPU governors, voltage tuning)
- Self-healing autonomy scripts ensuring model availability
**The Stack: From Model to API to User**
- Built Ollama Bridge SSE proxy for streaming model responses to web clients in real-time
- AI image generation hub with 4 backend agents (DALL-E, Flux, SDXL, FAL) — single API, best-model routing
- FTS5 knowledge index across 156,675 memory entries — models can search their own history across 230 SQLite databases
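An FTS5 memory index like the one described can be sketched with Python's stdlib `sqlite3`. The `memory` table and `body` column are illustrative, not the production schema.

```python
import sqlite3

def build_index(entries):
    """Create an in-memory FTS5 index over a list of text entries."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE VIRTUAL TABLE memory USING fts5(body)")
    db.executemany("INSERT INTO memory(body) VALUES (?)", [(e,) for e in entries])
    return db

def search(db, query, limit=5):
    """Full-text search, best matches first via FTS5's rank column."""
    rows = db.execute(
        "SELECT body FROM memory WHERE memory MATCH ? ORDER BY rank LIMIT ?",
        (query, limit),
    ).fetchall()
    return [r[0] for r in rows]
```

The same pattern scales to on-disk databases; FTS5 keeps lookups fast even across hundreds of thousands of entries.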
---
## Technical Skills
**AI/ML:** Ollama, Hailo-8 NPU, DALL-E, Flux, SDXL, custom fine-tuning, LLM orchestration
**Languages:** Python (470 repos), JavaScript (114 repos), TypeScript (85 repos), Bash, Go, C
**Frameworks:** FastAPI, Next.js, React, Node.js
**Infrastructure:** Raspberry Pi 5 fleet, Docker Swarm, WireGuard, Cloudflare
**Databases:** PostgreSQL, SQLite/FTS5, Cloudflare D1, KV stores, Qdrant (vector)
**Tools:** Ollama, Docker, GitHub Actions, Gitea, Wrangler
Ollama, Hailo-8 NPU, DALL-E, Flux, SDXL, FastAPI, Python, FTS5, Docker
---
## Metrics
| Metric | Value |
|--------|-------|
| Models deployed | 27 (48.1 GB) |
| AI acceleration | 52 TOPS |
| Custom models | 4 fine-tuned |
| Image gen agents | 4 |
| Databases | 241 (11 PG + 230 SQLite) |
| Lines of code | 7,212,320 |
| Repos | 1,810 |
| Metric | Value | Source |
|--------|-------|--------|
| AI Models | *live* | services.sh — ollama list via SSH |
| Model Size (GB) | *live* | services.sh — ollama list via SSH |
| Lines of Code | *live* | loc.sh — cloc + fleet SSH |
| Total Repos | *live* | github-all-orgs.sh — gh api repos (17 owners) |
| SQLite DBs | *live* | local.sh — find ~/.blackroad -name '*.db' |
| Docker Containers | *live* | services.sh — docker ps via SSH |


@@ -8,66 +8,46 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
SRE managing a 7-node distributed fleet with 256 systemd services, 52 automated tasks, and self-healing autonomy. Maintains 48+ production domains, 99 Cloudflare deployments, and a daily KPI system tracking 60+ reliability metrics across 9 data sources.
Running 256 services across distributed hardware with no on-call team. Built observability from scratch, resolved 10+ production incidents solo, and automated reliability into the infrastructure itself.
---
## Experience
### BlackRoad OS | Founder & SRE Lead | 2025–Present
### BlackRoad OS | Founder & Site Reliability Engineer | 2025–Present
**Reliability & Uptime**
- Operate 5 Raspberry Pi edge nodes + 2 cloud VMs with WireGuard mesh connectivity
- Implement self-healing cron automation: heartbeat every 1 minute, heal cycle every 5 minutes
- Monitor and resolve 12 failed systemd units across fleet with automated restart policies
- Manage 48 Nginx reverse proxy sites routing traffic to backend services
**The Reality: Solo On-Call for Everything**
- One person responsible for 256 services, 48 domains, 7 nodes, 283 databases — every incident is yours
- Built a 10-collector KPI system tracking 60+ metrics daily: fleet health, service status, temperatures, swap, processes, connections
- Day-over-day delta tracking catches regressions before they become outages — automated Slack notifications on anomalies
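Day-over-day delta checking reduces to comparing two snapshots. The threshold and dict shape here are assumptions for illustration, not the production collector's format.

```python
def kpi_deltas(yesterday, today, threshold=0.2):
    """Compare two daily KPI snapshots and flag regressions.

    Returns metrics whose value dropped by more than `threshold`
    (as a fraction of yesterday's value), mapped to (prev, cur).
    Metrics missing today or zero yesterday are skipped.
    """
    alerts = {}
    for key, prev in yesterday.items():
        cur = today.get(key)
        if cur is None or prev == 0:
            continue
        change = (cur - prev) / prev
        if change < -threshold:
            alerts[key] = (prev, cur)
    return alerts
```

A notifier would post the returned dict to Slack whenever it is non-empty.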
**Incident Response**
- Identified and resolved thermal throttling (73.8°C → 57.9°C) caused by runaway Ollama loops
- Fixed undervoltage issues across Pi fleet via config.txt tuning (+95mV recovery)
- Discovered and removed obfuscated cron dropper (security incident on Cecilia)
- Resolved swap exhaustion (100% on Cecilia) by identifying memory-hungry services
- Migrated leaked credentials from plaintext crontabs to secured env files (chmod 600)
**The Incidents: Real Problems, Real Fixes**
- Node at 73.8°C — identified runaway Ollama generation loop via power monitoring, killed and disabled the service, temp dropped to 57.9°C
- Swap at 100% on Cecilia — found 4 concurrent rclone instances syncing the same Google Drive, consolidated to 1, freed 2 GB of swap
- Obfuscated cron dropper discovered on Cecilia — exec'ing from /tmp/op.py. Removed the malware, audited all nodes, rotated credentials fleet-wide
- Leaked GitHub PAT found in systemd service file — removed from config, rotated token, migrated all secrets to chmod 600 env files
**Monitoring & Observability**
- Built 9-collector KPI system: GitHub, Gitea, fleet, services, autonomy, LOC, local, Cloudflare, deep GitHub
- Track 60+ metrics daily: commits, fleet health, temperatures, swap, processes, connections
- Distributed tracing database with nanosecond-precision spans
- Per-node SSH health probes with Python-based remote execution
- Power monitoring deployed to all nodes (cron every 5 minutes, persistent logs)
**Infrastructure Management**
- 14 Docker containers via Docker Swarm with leader election
- 11 PostgreSQL databases with automated backup
- 9 Tailscale mesh peers for secure cross-network access
- 4 Cloudflare tunnels routing 48+ domains to fleet services
**Capacity Planning**
- Fleet: 20 GB RAM, 707 GB storage, 52 TOPS AI compute
- Identified and disabled 16 skeleton microservices freeing 800 MB RAM
- Cleaned 19 GB of stale GitHub Actions runner directories
- Power optimization: conservative CPU governors, WiFi power management, GPU memory reduction
**The System: Reliability as Code**
- Self-healing autonomy: heartbeat every 60s detects down services, heal cycle every 5m auto-restarts them
- Power monitoring on every node (cron */5, persistent logs) — voltage, throttle state, temperature, governor all tracked
- Distributed tracing database with nanosecond-precision spans — can trace any request across any node
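Nanosecond-precision spans can be captured with a monotonic clock and a small SQLite table. Schema and names here are hypothetical, not the production traces database.

```python
import sqlite3
import time
from contextlib import contextmanager

def trace_db():
    """Create a minimal spans table (sketch schema)."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE spans (name TEXT, node TEXT, start_ns INTEGER, end_ns INTEGER)")
    return db

@contextmanager
def span(db, name, node):
    """Record one span with nanosecond monotonic timestamps."""
    start = time.monotonic_ns()
    try:
        yield
    finally:
        db.execute(
            "INSERT INTO spans VALUES (?, ?, ?, ?)",
            (name, node, start, time.monotonic_ns()),
        )
```

Wrapping an SSH probe in `with span(db, "probe", host):` is enough to make per-node latency queryable after the fact.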
---
## Technical Skills
**SRE:** systemd, cron, Nginx, Docker Swarm, WireGuard, Tailscale, Cloudflare Tunnels
**Monitoring:** Custom KPI collection, distributed tracing, thermal/voltage monitoring, SSH probes
**Incident Response:** Root cause analysis, credential rotation, service isolation, capacity recovery
**Languages:** Bash (212 CLI tools), Python, JavaScript
**Cloud:** Cloudflare (99 Pages, 22 D1, 46 KV, 11 R2), DigitalOcean
systemd, cron, Nginx, Docker Swarm, WireGuard, Tailscale, distributed tracing, Bash, Python
---
## Metrics
| Metric | Value |
|--------|-------|
| Services managed | 256 |
| Automated tasks | 52 |
| Domains served | 48+ |
| KPI metrics tracked | 60+ |
| Fleet nodes | 7 |
| Incident resolutions | 10+ |
| Docker containers | 14 |
| Metric | Value | Source |
|--------|-------|--------|
| Systemd Services | *live* | services.sh — systemctl list-units via SSH |
| Failed Units | *live* | services.sh — systemctl --failed via SSH |
| Fleet Nodes | *live* | fleet.sh — SSH probe to all nodes |
| Nodes Online | *live* | fleet.sh — SSH probe to all nodes |
| Avg Temp | *live* | fleet.sh — /sys/class/thermal via SSH |
| Docker Containers | *live* | services.sh — docker ps via SSH |
| Nginx Sites | *live* | services.sh — /etc/nginx/sites-enabled via SSH |


@@ -8,64 +8,43 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Platform engineer who built and operates a full-stack developer platform spanning 7 nodes, 99 Cloudflare deployments, and 1,810 repos (1,603 GitHub + 207 Gitea). Designed self-service tooling (212 CLI tools), automated deployment pipelines, and a daily metrics system tracking 60+ KPIs.
No platform team, no internal tools budget. Built a complete developer platform from scratch: 212 CLI tools, self-hosted Git, code search, CI/CD pipelines, and automated observability — because waiting for someone else wasn't an option.
---
## Experience
### BlackRoad OS | Founder & Platform Lead | 2025–Present
### BlackRoad OS | Founder & Platform Engineer | 2025–Present
**Developer Platform**
- Built 212 CLI tools (121 MB) for deployment, monitoring, and infrastructure management
- Operate self-hosted Gitea with 207 repos across 7 organizations as internal code platform
- Maintain GitHub-to-Gitea sync relay running every 30 minutes
- Created code search engine indexing 354 repos with FTS5 full-text search (index.blackroad.io)
**The Gap: No Developer Platform Exists**
- 1,603 repos across 17 GitHub orgs + 207 Gitea repos — needed unified tooling to manage code, deploy, search, and monitor across all of it
- Built 212 CLI tools (121 MB) — every common workflow is a single command: deploy, probe, audit, sync, collect, report
- Self-hosted Gitea on the fleet with 207 repos across 7 orgs — full Git sovereignty with GitHub-to-Gitea relay syncing every 30 minutes
**Infrastructure Platform**
- Manage 99 Cloudflare Pages projects with automated deployment from git
- Operate 22 D1 databases, 46 KV namespaces, 11 R2 buckets as platform data layer
- 14 Docker containers on Docker Swarm with service discovery
- 48 Nginx sites as reverse proxy layer for fleet services
**The Platform: Search, Deploy, Observe**
- Code search engine indexing 354 repos with FTS5 full-text search — find anything across the entire codebase in milliseconds
- 99 Cloudflare Pages projects with git-push deployment — every commit triggers build and deploy automatically
- 10-collector KPI system generates daily observability: fleet health, code velocity, cloud inventory, service status
**Edge Computing Platform**
- 5 Raspberry Pi nodes + 2 cloud VMs connected via WireGuard mesh VPN
- RoadNet carrier-grade WiFi mesh (5 access points, dedicated subnets per node)
- 52 TOPS AI compute (2× Hailo-8 NPUs) available as platform capability
- 27 Ollama models (48.1 GB) deployed as shared AI services
**Observability Platform**
- Built 9-collector KPI system running daily across all infrastructure
- 60+ metrics: code velocity, fleet health, services, autonomy, Cloudflare, local system
- Distributed tracing with span-level observability
- Self-healing autonomy scripts on all nodes
**Data Platform**
- 11 PostgreSQL databases for application state
- 230 SQLite databases (1.4 GB) for agent memory and configuration
- 46 KV namespaces for edge state
- 22 D1 databases for serverless applications
**Why It Matters**
- A solo developer operating at the output of a small team needs tools that multiply effort, not add friction
- 326 commits/day sustained velocity. 4,019 PRs merged. 20 languages. That throughput requires a platform, not heroics
---
## Technical Skills
**Platform:** Cloudflare (Pages, Workers, D1, KV, R2), Docker Swarm, Gitea, GitHub Actions
**Infrastructure:** Linux, systemd (256 services), Nginx, WireGuard, Tailscale
**Languages:** JavaScript (114 repos), Python (470 repos), TypeScript (85 repos), Shell (160 repos), Go (2 repos)
**Databases:** PostgreSQL, SQLite, D1, KV, Qdrant
**Tools:** 212 custom CLI tools, Wrangler, Docker, cron (52 automated tasks)
Cloudflare Pages/Workers, Gitea, GitHub Actions, Docker Swarm, CLI tooling, Bash, Python, FTS5
---
## Metrics
| Metric | Value |
|--------|-------|
| CLI tools built | 212 |
| Total repos | 309 (102 GH + 207 Gitea) |
| Cloudflare Pages | 99 |
| Data stores | 79 (22 D1 + 46 KV + 11 R2) |
| Lines of code | 7,212,320 |
| Systemd services | 256 |
| Automated tasks | 52 |
| Metric | Value | Source |
|--------|-------|--------|
| CLI Tools | *live* | local.sh — ls ~/bin \| wc -l |
| Total Repos | *live* | github-all-orgs.sh — gh api repos (17 owners) |
| CF Pages | *live* | cloudflare.sh — wrangler pages list |
| Systemd Services | *live* | services.sh — systemctl list-units via SSH |
| Docker Containers | *live* | services.sh — docker ps via SSH |
| Lines of Code | *live* | loc.sh — cloc + fleet SSH |


@@ -8,65 +8,44 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Full-stack engineer with 7.2 million lines of code across 1,603 repositories in 20 languages. Builds end-to-end applications from React/Next.js frontends to Python/Node.js APIs, deployed across 99 Cloudflare Pages and 7 backend nodes serving 48+ domains.
Designed, built, and shipped end-to-end: 7.2M lines of code, 20 languages, 99 deployed sites, FastAPI backends, 283 databases, and a brand system powering 75 templates — because "full-stack" means owning the entire vertical.
---
## Experience
### BlackRoad OS | Founder & Lead Engineer | 2025–Present
### BlackRoad OS | Founder & Full-Stack Engineer | 2025–Present
**Frontend**
- 75 design templates (HTML/JSX) with brand-locked design system
- 99 Cloudflare Pages projects deployed across 48+ custom domains
- Built dashboard, landing, auth, settings, status, blog, and portfolio page templates
- React/Next.js applications with real-time WebSocket integration
**The Frontend: 99 Live Sites, One Design System**
- 75 design templates with brand-locked system — gradient spectrum, golden ratio spacing, Space Grotesk + JetBrains Mono typography
- 99 Cloudflare Pages projects deployed across 48+ custom domains — every site is live, every domain has SSL
- 15 page types covering the full SaaS surface: landing, pricing, blog, docs, dashboard, auth, portfolio, settings, status, changelog
**Backend**
- FastAPI services (CECE API, TTS API, Lucidia API) on edge fleet
- Node.js/Express services for API routing and webhooks
- 48 Nginx reverse proxy sites routing to backend services
- 14 Docker containers running production workloads
**The Backend: APIs That Power Everything**
- CECE API (FastAPI) for custom LLM interaction and TTS. Lucidia API for application platform. Fleet health APIs for monitoring
- AI image generation API with 4 backend agents — single endpoint, automatic model routing between DALL-E, Flux, SDXL
- 48 Nginx reverse proxy sites routing traffic to the right backend across the fleet — zero-trust via Cloudflare tunnels
**Databases**
- 11 PostgreSQL databases for relational data
- 230 SQLite databases (1.4 GB) for application state and agent memory
- 22 Cloudflare D1 databases for serverless applications
- 46 KV namespaces for edge caching and configuration
- FTS5 full-text search indexing 354 repos
**APIs & Services**
- AI image generation API with 4 backend agents (DALL-E, Flux, SDXL)
- Code search engine API indexing 354 repos
- Fleet health and KPI collection APIs
- GitHub-to-Gitea relay for cross-platform sync
**DevOps**
- 212 CLI tools for deployment and management
- GitHub Actions CI/CD + self-hosted Gitea (207 repos)
- Docker Swarm orchestration across Pi fleet
- 52 automated tasks (cron + systemd timers)
**The Data Layer: 283 Databases, 5 Engines**
- 11 PostgreSQL for relational data, 230 SQLite (1.4 GB) for app state, 23 D1 for serverless, 47 KV for edge config, Qdrant for vectors
- FTS5 full-text search across 156K entries — sub-millisecond lookups across the entire knowledge base
---
## Technical Skills
**Frontend:** React, Next.js, HTML/CSS, JSX, Cloudflare Pages
**Backend:** Python/FastAPI, Node.js, Express, Bash
**Databases:** PostgreSQL, SQLite/FTS5, Cloudflare D1, KV, R2, Qdrant
**Infrastructure:** Docker, Nginx, Linux, WireGuard, Cloudflare Tunnels
**Languages:** JavaScript (114), Python (470), HTML (314), TypeScript (85), Shell (160), Go (2), C (2)
React, Next.js, FastAPI, Node.js, PostgreSQL, SQLite, Cloudflare D1/KV/R2, Docker, Nginx
---
## Metrics
| Metric | Value |
|--------|-------|
| Lines of code | 7,212,320 |
| Repositories | 1,810 |
| Languages | 20 |
| Pages deployed | 99 |
| Databases | 283 total |
| API services | 6+ |
| Domains | 48+ |
| Metric | Value | Source |
|--------|-------|--------|
| Lines of Code | *live* | loc.sh — cloc + fleet SSH |
| Total Repos | *live* | github-all-orgs.sh — gh api repos (17 owners) |
| CF Pages | *live* | cloudflare.sh — wrangler pages list |
| PostgreSQL DBs | *live* | services.sh — psql -l via SSH |
| SQLite DBs | *live* | local.sh — find ~/.blackroad -name '*.db' |
| Nginx Sites | *live* | services.sh — /etc/nginx/sites-enabled via SSH |
| Docker Containers | *live* | services.sh — docker ps via SSH |


@@ -8,67 +8,43 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Cloud engineer managing a hybrid edge-cloud infrastructure: 99 Cloudflare Pages, 22 D1 databases, 46 KV namespaces, 11 R2 buckets, 2 DigitalOcean droplets, and 5 edge nodes connected via WireGuard mesh VPN. Serves 48+ domains through 4 Cloudflare tunnels.
Needed global reach without global infrastructure costs. Architected a hybrid edge-cloud stack: Cloudflare serverless for global distribution, Pi fleet for sovereignty, WireGuard mesh for secure connectivity — 178 cloud resources managed solo.
---
## Experience
### BlackRoad OS | Founder & Cloud Architect | 2025–Present
### BlackRoad OS | Founder & Cloud Engineer | 2025–Present
**Cloudflare Platform**
- Deploy and manage 99 Pages projects with git-based CI/CD
- Operate 22 D1 serverless databases (40 MB total) for application state
- Manage 46 KV namespaces for edge configuration and caching
- Maintain 11 R2 object storage buckets for assets, models, and artifacts
- Route 48+ custom domains through 4 Cloudflare tunnels to fleet services
**The Strategy: Edge + Cloud, Not Either/Or**
- Pure cloud is expensive and you don't own the compute. Pure edge is limited and hard to reach. This architecture combines both
- 99 Pages for global CDN, 23 D1 for serverless databases, 47 KV for edge config, 11 R2 for object storage — all on Cloudflare
- 5 Pi edge nodes for persistent compute, AI inference, and data sovereignty. WireGuard mesh connects everything. 4 tunnels route 48+ domains
**Hybrid Cloud Architecture**
- 5 Raspberry Pi edge nodes (4× Pi 5, 1× Pi 400) with 20 GB RAM, 707 GB storage
- 2 DigitalOcean droplets (NYC regions) for WireGuard hub and public services
- WireGuard mesh VPN connecting all nodes (10.8.0.x subnet)
- Tailscale overlay network with 9 peers for management access
- RoadNet WiFi mesh (5 APs) for local device connectivity
**The Architecture: Zero Open Ports**
- No port forwarding, no exposed services. All external traffic flows through Cloudflare tunnels to fleet
- WireGuard mesh (10.8.0.x) for encrypted inter-node communication. Tailscale overlay (9 peers) for management access
- RoadNet WiFi mesh (5 APs) provides local device connectivity — devices on the mesh can reach the fleet directly
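A peer stanza for one edge node might look like the following. Keys, endpoint hostname, and the exact host address are placeholders; only the 10.8.0.x addressing scheme comes from the text.

```ini
# Hypothetical wg0.conf on an edge node; all values below are
# placeholders except the 10.8.0.x mesh subnet described above.
[Interface]
PrivateKey = <node-private-key>
Address = 10.8.0.5/24

[Peer]
# DigitalOcean hub
PublicKey = <hub-public-key>
Endpoint = hub.example.com:51820
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25
```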
**Compute & Services**
- 14 Docker containers via Docker Swarm (leader on Octavia)
- 256 systemd services managed across fleet
- 48 Nginx reverse proxy sites
- 11 PostgreSQL databases, 230 SQLite databases
**Security & Networking**
- Cloudflare tunnels for zero-trust access to fleet services
- WireGuard encryption for inter-node communication
- UFW firewall policies on edge nodes
- Credential management via secured env files (chmod 600)
**Monitoring**
- Daily KPI collection across all cloud and edge infrastructure
- 60+ metrics tracked: deployments, database sizes, fleet health
- Power and thermal monitoring on all edge nodes
- Automated alerting for service failures
**The Numbers**
- 178 total Cloudflare resources deployed and maintained. 48+ custom domains with automated SSL/TLS
- Cloudflare Workers for edge compute and API routing — millisecond response times at the edge, heavy processing on fleet
---
## Technical Skills
**Cloud:** Cloudflare (Pages, Workers, D1, KV, R2, Tunnels, DNS), DigitalOcean
**Networking:** WireGuard, Tailscale, Nginx, DNS (Pi-hole, PowerDNS), Cloudflare DNS
**Containers:** Docker, Docker Swarm, container orchestration
**Infrastructure:** Linux, systemd, Raspberry Pi, edge computing
**IaC:** Shell automation (212 CLI tools), cron, GitHub Actions
Cloudflare Pages/Workers/D1/KV/R2/Tunnels, DigitalOcean, WireGuard, Tailscale, Docker, Nginx
---
## Metrics
| Metric | Value |
|--------|-------|
| Cloudflare Pages | 99 |
| D1 databases | 22 |
| KV namespaces | 46 |
| R2 buckets | 11 |
| Domains managed | 48+ |
| CF tunnels | 4 |
| Edge nodes | 5 |
| Cloud VMs | 2 |
| Metric | Value | Source |
|--------|-------|--------|
| CF Pages | *live* | cloudflare.sh — wrangler pages list |
| D1 Databases | *live* | cloudflare.sh — wrangler d1 list --json |
| KV Namespaces | *live* | cloudflare.sh — wrangler kv list |
| R2 Buckets | *live* | cloudflare.sh — wrangler r2 bucket list |
| Fleet Nodes | *live* | fleet.sh — SSH probe to all nodes |
| Nginx Sites | *live* | services.sh — /etc/nginx/sites-enabled via SSH |


@@ -8,59 +8,43 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Infrastructure engineer building and operating a 7-node distributed compute fleet with 707 GB storage, 20 GB RAM, 52 TOPS AI acceleration, and WireGuard mesh networking. Manages 256 systemd services, 14 Docker containers, and 48+ production domains with automated monitoring and self-healing.
Built a production fleet from single-board computers. 5 Raspberry Pis, 2 cloud VMs, 52 TOPS of AI acceleration, 707 GB distributed storage — proving that serious infrastructure doesn't require serious budgets.
---
## Experience
### BlackRoad OS | Founder & Infrastructure Lead | 2025–Present
### BlackRoad OS | Founder & Infrastructure Engineer | 2025–Present
**Physical Infrastructure**
- Built and maintain 5 Raspberry Pi edge nodes (4× Pi 5 with NVMe, 1× Pi 400)
- Installed and configured 2× Hailo-8 AI accelerators (52 TOPS combined)
- Managed hardware: NVMe drives, Pironman5 enclosures, OLED displays (I2C), UART adapters
- Resolved undervoltage issues via config.txt tuning, power supply assessment, overclock removal
**The Thesis: Commodity Hardware, Production Workloads**
- A Raspberry Pi 5 costs $80. A Hailo-8 NPU costs $100. Together they deliver 26 TOPS of AI inference with 8 GB RAM
- Built a 7-node fleet for under $700 total hardware cost — runs 256 systemd services, 14 Docker containers, 27 AI models, 48 Nginx sites
- Same fleet handles production traffic across 48+ domains serving real users through Cloudflare tunnels
**Network Infrastructure**
- WireGuard mesh VPN (10.8.0.x) connecting all nodes through DigitalOcean hub
- RoadNet WiFi mesh: 5 access points on channels 1/6/11, dedicated 10.10.x.0/24 subnets per node
- 4 Cloudflare tunnels for zero-trust external access to 48+ domains
- 48 Nginx reverse proxy configurations
- DNS infrastructure: Pi-hole (Alice), PowerDNS (Lucidia), custom dnsmasq zones (Cecilia)
**The Hard Part: Power, Heat, and Storage**
- Pi 5 + Hailo-8 + NVMe draws more than a standard 5V/3A PSU can deliver — diagnosed undervoltage (0.75V), tuned config.txt, recovered +95mV
- Reduced GPU memory from 256MB to 16MB on headless nodes. Applied conservative CPU governors. Disabled 16 skeleton microservices, freeing 800 MB of RAM
- Fleet averages 42°C now. Power monitoring runs every 5 minutes on all nodes, logging voltage, throttle state, and governor
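The throttle state that power monitoring logs comes from `vcgencmd get_throttled`, whose bitmask decodes to the documented Raspberry Pi firmware flags. A small decoder:

```python
# Bit positions follow the documented Raspberry Pi firmware flags for
# `vcgencmd get_throttled` (low bits: current state; bits 16+: history).
FLAGS = {
    0: "under-voltage detected",
    1: "arm frequency capped",
    2: "currently throttled",
    3: "soft temperature limit active",
    16: "under-voltage has occurred",
    17: "arm frequency capping has occurred",
    18: "throttling has occurred",
    19: "soft temperature limit has occurred",
}

def decode_throttled(raw):
    """Turn 'throttled=0x50005' into a list of human-readable flags."""
    value = int(raw.strip().split("=")[1], 16)
    return [msg for bit, msg in FLAGS.items() if value & (1 << bit)]
```

Logging the decoded flags alongside voltage and temperature makes incidents like the 73.8°C runaway loop diagnosable from the monitoring history alone.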
**Compute Management**
- 256 systemd services and 35 timers across fleet
- 14 Docker containers via Docker Swarm
- 27 Ollama models (48.1 GB) across 3 inference nodes
- 11 PostgreSQL databases, 230 SQLite databases
**Automation**
- 212 CLI tools for fleet management
- 52 automated tasks (17 Mac cron + 35 fleet timers)
- Self-healing autonomy on all nodes (heartbeat + heal cycles)
- Fleet power optimization: CPU governors, swappiness, dirty_ratio, WiFi PM, GPU memory reduction
**The Network: Every Node Reachable, Every Path Encrypted**
- WireGuard mesh VPN (10.8.0.x) connects all nodes. RoadNet WiFi mesh (5 APs, 5 subnets) provides local coverage
- 4 Cloudflare tunnels route 48+ domains to fleet services. Tailscale overlay (9 peers) for remote management
---
## Technical Skills
**Hardware:** Raspberry Pi 5, Hailo-8 NPU, NVMe, I2C, GPIO, USB, PCIe
**Networking:** WireGuard, Tailscale, Nginx, DNS (Pi-hole, PowerDNS, dnsmasq), Cloudflare Tunnels
**OS:** Debian/Raspberry Pi OS, systemd, udev, kernel tuning (cmdline.txt, config.txt, sysctl)
**Containers:** Docker, Docker Swarm
**Monitoring:** Custom KPI system (60+ metrics), thermal monitoring, voltage tracking
Raspberry Pi, Linux, WireGuard, Nginx, systemd, Docker Swarm, Hailo-8, NVMe
---
## Metrics
| Metric | Value |
|--------|-------|
| Fleet nodes | 7 |
| Total storage | 707 GB |
| Total RAM | 20 GB |
| AI compute | 52 TOPS |
| Systemd services | 256 |
| Network connections | 106 |
| Domains served | 48+ |
| Metric | Value | Source |
|--------|-------|--------|
| Fleet Nodes | *live* | fleet.sh — SSH probe to all nodes |
| Nodes Online | *live* | fleet.sh — SSH probe to all nodes |
| Fleet Storage (GB) | *live* | fleet.sh — df via SSH |
| Fleet RAM (MB) | *live* | fleet.sh — /proc/meminfo via SSH |
| Systemd Services | *live* | services.sh — systemctl list-units via SSH |
| Nginx Sites | *live* | services.sh — /etc/nginx/sites-enabled via SSH |


@@ -8,62 +8,40 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Backend engineer with 7.2M lines of code across 1,603 repos, operating 11 PostgreSQL databases, 230 SQLite databases, 14 Docker services, and multiple API backends (FastAPI, Node.js) serving 48+ domains through Nginx reverse proxy.
Every feature needs an API. Built 6+ production services, unified 283 databases across 5 engines, and designed data architectures that run on $80 hardware — because the backend doesn't care how much you spent on it.
---
## Experience
### BlackRoad OS | Founder & Backend Lead | 2025Present
### BlackRoad OS | Founder & Backend Engineer | 2025Present
**API Development**
- CECE API (FastAPI): Custom AI model interaction with streaming responses
- TTS API: Text-to-speech generation service
- Lucidia API (FastAPI): Application backend with Clerk auth integration
- Image generation API: 4 backend agents (DALL-E, Flux, SDXL, Together)
- Code search API: FTS5-powered search across 354 repos
- KPI collection API: 9 collectors aggregating 60+ daily metrics
**The APIs: Each One Solving a Real Problem**
- CECE API (FastAPI) — needed custom LLM interaction with personality. Built TTS generation endpoint. Runs on Pi 5 at the edge
- AI image generation API — 4 backend agents (DALL-E, Flux, SDXL, FAL) behind a single endpoint. Automatic model routing based on prompt type
- Code search engine — needed to find anything across 354 repos instantly. Built FTS5 index, sub-millisecond lookups across entire codebase
- Fleet health APIs — SSH-based probes collect metrics from every node. Powers the KPI dashboard and automated alerting
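The image endpoint routes each prompt to one of the four backends. The production routing rules are not documented here, so this sketch uses a hypothetical keyword heuristic just to show the shape of the dispatch:

```python
# Route an image-generation prompt to one of four backend agents.
# The keyword heuristic is illustrative only; the real router's rules
# are not part of this document.
BACKENDS = ("dalle", "flux", "sdxl", "fal")

def route(prompt: str) -> str:
    p = prompt.lower()
    if "photo" in p:                        # e.g. "photorealistic"
        return "flux"                       # assumed realism specialist
    if "logo" in p or "icon" in p:
        return "dalle"                      # assumed graphic-design strength
    if "anime" in p or "illustration" in p:
        return "sdxl"
    return "fal"                            # assumed low-latency default

print(route("a photorealistic street at dusk"))  # flux
```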
**Database Engineering**
- 11 PostgreSQL databases across 3 fleet nodes
- 230 SQLite databases (1.4 GB) for state management and agent memory
- 22 Cloudflare D1 databases for serverless backends
- 46 KV namespaces for edge caching
- FTS5 full-text search index across code and knowledge bases
**Service Architecture**
- 14 Docker containers on Docker Swarm with service discovery
- 48 Nginx reverse proxy sites with upstream health checking
- WebSocket real-time streaming for AI model responses
- SSE (Server-Sent Events) proxy for Ollama model streaming
- GitHub-to-Gitea relay for cross-platform repository sync
**Data Processing**
- Daily KPI aggregation across GitHub, Gitea, fleet SSH, Cloudflare APIs
- Power monitoring data collection (5-minute intervals across fleet)
- Distributed tracing with nanosecond-precision span tracking
- Cron-based batch processing (52 automated tasks)
**The Data: Right Database for the Right Job**
- 11 PostgreSQL for transactional data. 230 SQLite (1.4 GB) for agent memory and local state — embedded, zero-config, fast
- 23 Cloudflare D1 for serverless applications. 47 KV namespaces for edge configuration and caching. Qdrant for vector search
- FTS5 full-text search across 156K entries — the entire knowledge base is searchable in under a millisecond
---
## Technical Skills
**Languages:** Python (470 repos), JavaScript (114 repos), TypeScript (85 repos), Bash, Go, C
**Frameworks:** FastAPI, Node.js/Express, Next.js API routes
**Databases:** PostgreSQL, SQLite/FTS5, Cloudflare D1/KV, Qdrant (vector)
**Infrastructure:** Docker, Nginx, systemd, Linux
**APIs:** REST, WebSocket, SSE, Cloudflare Workers
Python/FastAPI, Node.js, PostgreSQL, SQLite/FTS5, D1, KV, Docker, Nginx, Redis
---
## Metrics
| Metric | Value |
|--------|-------|
| Lines of code | 7,212,320 |
| PostgreSQL DBs | 11 |
| SQLite DBs | 230 |
| D1 databases | 22 |
| API services | 6+ |
| Docker containers | 14 |
| Repos | 1,810 |
| Metric | Value | Source |
|--------|-------|--------|
| PostgreSQL DBs | *live* | services.sh — psql -l via SSH |
| SQLite DBs | *live* | local.sh — find ~/.blackroad -name *.db |
| D1 Databases | *live* | cloudflare.sh — wrangler d1 list --json |
| KV Namespaces | *live* | cloudflare.sh — wrangler kv list |
| Docker Containers | *live* | services.sh — docker ps via SSH |
| Lines of Code | *live* | loc.sh — cloc + fleet SSH |


@@ -8,66 +8,43 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Systems engineer managing 256 systemd services across a 7-node Linux fleet. Deep expertise in kernel tuning, power optimization, thermal management, device drivers (Hailo-8 NPU, I2C, GPIO, UART), and distributed system reliability with automated monitoring and self-healing.
When your production fleet is single-board computers, every kernel parameter matters. Tuned CPU governors, stabilized voltage, integrated PCIe AI accelerators, and squeezed production workloads from hardware that fits in your hand.
---
## Experience
### BlackRoad OS | Founder & Systems Lead | 2025Present
### BlackRoad OS | Founder & Systems Engineer | 2025Present
**Linux Systems**
- Manage 5 Raspberry Pi nodes (Debian Bookworm) + 2 Ubuntu VMs
- Configure and tune 256 systemd services and 35 timers
- Kernel parameter optimization: CPU governors, swappiness, dirty_ratio, WiFi power management
- Custom udev rules for device persistence (vcio, Hailo-8, USB peripherals)
- Boot configuration: cmdline.txt, config.txt, /etc/sysctl.d/ for persistent tuning
**The Constraint: Maximum Work from Minimum Hardware**
- A Pi 5 has 8 GB RAM, a quad-core ARM, and a 30W power budget. It needs to run Docker, Ollama, Nginx, PostgreSQL, and 50+ systemd services simultaneously
- Tuned swappiness to 10 and dirty_ratio to 40, applied conservative CPU governors, and capped frequency at 2 GHz; workloads stable, temperatures safe
- GPU memory reduced from 256MB to 16MB on headless nodes — freed RAM for actual compute. Disabled cups, rpcbind, nfs, lightdm across fleet
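Tunables like these drift as nodes get reimaged. A sketch that diffs desired values against `sysctl -a`-style output to catch that drift; the desired set mirrors the values above:

```python
# Compare desired kernel tunables against lines in the format printed by
# `sysctl -a` ("vm.swappiness = 10") and report any drift as
# {key: (desired, current)}.
DESIRED = {"vm.swappiness": "10", "vm.dirty_ratio": "40"}

def find_drift(sysctl_output: str) -> dict[str, tuple[str, str]]:
    current = {}
    for line in sysctl_output.splitlines():
        if " = " in line:
            key, value = line.split(" = ", 1)
            current[key.strip()] = value.strip()
    return {k: (v, current.get(k, "<unset>"))
            for k, v in DESIRED.items() if current.get(k) != v}

sample = "vm.swappiness = 60\nvm.dirty_ratio = 40\n"
print(find_drift(sample))  # {'vm.swappiness': ('10', '60')}
```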
**Hardware & Device Drivers**
- 2× Hailo-8 AI accelerators: PCIe driver installation, /dev/hailo0 device management
- I2C devices: Pironman5 OLED display (0x3c), sensor interfaces
- UART/serial: CP2102 adapters, BL808 RISC-V dev boards
- GPIO: IR receivers, custom peripheral control
- NVMe storage: 1 TB drive on Octavia for high-speed workloads
**The Integration: Making Hardware Talk**
- 2× Hailo-8 NPUs via PCIe — installed drivers, firmware, verified /dev/hailo0 on both nodes. 52 TOPS of AI acceleration, zero cloud cost
- NVMe SSD on Octavia (1TB) — faster I/O for Gitea, Docker images, and model weights. USB peripherals: UART, keyboards, microphones, OLED displays
- Overclock on one node caused undervoltage (0.75V) — removed overclock, tuned config.txt, recovered +95mV. Fleet-wide voltage monitoring deployed
**Performance Engineering**
- Resolved thermal throttling: identified runaway processes, reduced 73.8°C → 57.9°C
- Fixed undervoltage: removed overclock settings, tuned voltage (+95mV recovery)
- Power optimization deployed fleet-wide: conservative governors, GPU memory reduction (256→16 MB)
- Boot-persistent config via tmpfiles.d, rc.local, sysctl.d
**Networking**
- WireGuard mesh VPN across all nodes
- DNS infrastructure: Pi-hole, PowerDNS (Docker), dnsmasq with custom zones (.cece, .blackroad)
- Nginx (48 sites), Cloudflare tunnels (4), Tailscale (9 peers)
- Network diagnostics: 106 active connections monitored, 867 processes tracked
**Security**
- Credential rotation: migrated secrets from crontabs to env files (chmod 600)
- Identified and removed obfuscated cron dropper (malware)
- SSH key audit: 50+ keys on Alice/Octavia
- UFW firewall management on edge nodes
**The Discipline: 256 Services, Zero Chaos**
- 256 systemd services and 35 timers across fleet — each one has a purpose, a health check, and an owner
- Self-healing watchdogs restart failed services. Power monitoring logs every 5 minutes. Everything persistent across reboots via sysctl.d and tmpfiles.d
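A health check across that many units has to be mechanical. A sketch summarizing unit state from `systemctl list-units --output=json` (JSON output needs a reasonably recent systemd); the sample payload below stands in for real fleet output:

```python
import json

# Summarize unit health from `systemctl list-units --output=json`.
# Each JSON object carries at least "unit" and "active" fields.
def summarize(units_json: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for unit in json.loads(units_json):
        state = unit["active"]
        counts[state] = counts.get(state, 0) + 1
    return counts

sample = json.dumps([
    {"unit": "nginx.service", "active": "active"},
    {"unit": "ollama.service", "active": "active"},
    {"unit": "kpi-collect.service", "active": "failed"},
])
print(summarize(sample))  # {'active': 2, 'failed': 1}
```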
---
## Technical Skills
**OS:** Debian, Raspberry Pi OS, Ubuntu, systemd, udev, kernel tuning
**Hardware:** Hailo-8 NPU, Raspberry Pi 5, I2C, GPIO, UART, PCIe, NVMe
**Networking:** WireGuard, Tailscale, Nginx, DNS (Pi-hole, PowerDNS, dnsmasq)
**Security:** SSH hardening, credential management, firewall (UFW/nftables)
**Scripting:** Bash (212 tools, 91 scripts), Python
Linux kernel, systemd, sysctl, PCIe, I2C, GPIO, Hailo-8, NVMe, Bash, Python
---
## Metrics
| Metric | Value |
|--------|-------|
| Systemd services | 256 |
| Fleet nodes | 7 |
| AI acceleration | 52 TOPS |
| Processes managed | 867 |
| Network connections | 106 |
| Automated tasks | 52 |
| CLI tools | 212 |
| Metric | Value | Source |
|--------|-------|--------|
| Systemd Services | *live* | services.sh — systemctl list-units via SSH |
| Systemd Timers | *live* | services.sh — systemctl list-timers via SSH |
| Fleet Nodes | *live* | fleet.sh — SSH probe to all nodes |
| Avg Temp | *live* | fleet.sh — /sys/class/thermal via SSH |
| Fleet RAM (MB) | *live* | fleet.sh — /proc/meminfo via SSH |
| Fleet Storage (GB) | *live* | fleet.sh — df via SSH |


@@ -8,64 +8,43 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Edge computing engineer operating a 5-node Raspberry Pi fleet with 52 TOPS AI acceleration, 27 deployed models, WireGuard mesh networking, and carrier-grade WiFi mesh. Builds edge-native services with self-healing automation, thermal management, and hybrid edge-cloud architecture.
Cloud inference is someone else's computer running your data. Deployed 27 AI models on-device across 5 Pi nodes with 52 TOPS acceleration, built a WiFi mesh for local connectivity, and kept it all running with self-healing automation.
---
## Experience
### BlackRoad OS | Founder & Edge Lead | 2025Present
### BlackRoad OS | Founder & Edge Computing Engineer | 2025Present
**Edge Fleet**
- 5 Raspberry Pi nodes: 4× Pi 5 (8 GB RAM, NVMe), 1× Pi 400 (4 GB RAM)
- 2× Hailo-8 NPUs (26 TOPS each) for on-device AI inference
- 707 GB total fleet storage, 20 GB total RAM
- Docker Swarm orchestration with automatic service placement
**The Vision: AI at the Edge, Not in the Cloud**
- 27 Ollama models (48.1 GB) running on 3 Pi 5 nodes — inference happens on-premise, data never leaves the network
- 2× Hailo-8 NPUs (52 TOPS total) for hardware-accelerated inference — PCIe integration, driver management, firmware updates
- 4 custom fine-tuned CECE models — personality, voice, and domain expertise that can't be replicated with off-the-shelf models
**Edge AI**
- 27 Ollama models (48.1 GB) running locally across 3 nodes
- 4 custom fine-tuned models for domain-specific inference
- SSE proxy for streaming model responses to web clients
- Image generation pipeline with 4 backend agents
**The Network: Mesh Connectivity Without Internet**
- RoadNet WiFi mesh: 5 APs on channels 1/6/11, 5 subnets (10.10.x.0/24), NAT through wlan0 — devices connect to fleet directly
- WireGuard mesh for encrypted node-to-node communication. Tailscale overlay (9 peers) for remote management from anywhere
- Pi-hole DNS for local resolution + custom zones (.cece, .blackroad) — edge services discoverable by name, not IP
**Edge Networking**
- RoadNet: 5 WiFi access points (channels 1/6/11), dedicated 10.10.x.0/24 subnets
- WireGuard mesh VPN (10.8.0.x) connecting all nodes to cloud hub
- 4 Cloudflare tunnels for secure external access
- Pi-hole DNS, PowerDNS, custom dnsmasq zones at edge
**Edge Reliability**
- Self-healing cron automation on every node
- Power optimization: CPU governors, voltage tuning, thermal throttle prevention
- Avg fleet temperature: 44.8°C (down from 73.8°C peak after optimization)
- 256 systemd services managed across fleet
**Hybrid Architecture**
- Edge nodes handle AI inference, local services, DNS, monitoring
- Cloud (Cloudflare) handles 99 Pages deployments, 22 D1 databases, CDN
- DigitalOcean VMs as WireGuard hubs and public endpoints
- Tailscale overlay (9 peers) for cross-network management
**The Challenge: Keeping Edge Alive**
- Edge hardware fails differently than cloud — SD cards degrade, power supplies sag, thermal throttling kills inference mid-response
- Self-healing autonomy on every node. Power monitoring every 5 minutes. Automatic service restarts. Temperature alerts before shutdown
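On a Pi, SoC temperature is exposed at `/sys/class/thermal/thermal_zone0/temp` in millidegrees Celsius. A pure-function sketch of the alert check, with the 70 °C threshold as an assumed example:

```python
# Temperature alert check as the fleet monitor might run it. The sysfs
# file reports millidegrees Celsius; the 70 C threshold is illustrative.
def temp_alert(millidegrees: int, limit_c: float = 70.0) -> tuple[float, bool]:
    temp_c = millidegrees / 1000
    return temp_c, temp_c >= limit_c

# On a node this would be fed from sysfs:
#   millidegrees = int(open("/sys/class/thermal/thermal_zone0/temp").read())
print(temp_alert(57900))  # (57.9, False)
print(temp_alert(73800))  # (73.8, True)
```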
---
## Technical Skills
**Edge:** Raspberry Pi 5, Hailo-8 NPU, NVMe, PCIe, GPIO, I2C
**Networking:** WireGuard, WiFi mesh, Cloudflare Tunnels, DNS (Pi-hole, PowerDNS)
**AI:** Ollama, Hailo-8 inference, custom model fine-tuning
**Containers:** Docker, Docker Swarm
**Automation:** systemd (256 services), cron (52 tasks), self-healing scripts
Raspberry Pi, Hailo-8, Ollama, WireGuard, WiFi mesh, Pi-hole, Docker, Linux
---
## Metrics
| Metric | Value |
|--------|-------|
| Edge nodes | 5 |
| AI acceleration | 52 TOPS |
| Models deployed | 27 (48.1 GB) |
| WiFi APs | 5 |
| Fleet storage | 707 GB |
| Avg temperature | 44.8°C |
| Services | 256 |
| Metric | Value | Source |
|--------|-------|--------|
| Fleet Nodes | *live* | fleet.sh — SSH probe to all nodes |
| Nodes Online | *live* | fleet.sh — SSH probe to all nodes |
| AI Models | *live* | services.sh — ollama list via SSH |
| Avg Temp | *live* | fleet.sh — /sys/class/thermal via SSH |
| Tailscale Peers | *live* | services.sh — tailscale status via SSH |
| Fleet Storage (GB) | *live* | fleet.sh — df via SSH |


@@ -8,65 +8,43 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Automation engineer who built 212 CLI tools, 52 scheduled tasks, and self-healing fleet automation across a 7-node infrastructure. Automates KPI collection (60+ metrics daily), deployment pipelines, fleet monitoring, power optimization, and cross-platform repository sync.
A solo operator can't manually manage 256 services, 1,603 repos, and 7 nodes. Built 212 CLI tools and 52 scheduled automations that turn a one-person operation into a self-sustaining system.
---
## Experience
### BlackRoad OS | Founder & Automation Lead | 2025Present
### BlackRoad OS | Founder & Automation Engineer | 2025Present
**CLI Tooling**
- Built 212 CLI tools (121 MB) in ~/bin covering deployment, monitoring, stats, and fleet management
- 91 shell scripts for infrastructure automation
- Custom RoadC programming language with lexer, parser, and tree-walking interpreter
- `br-stats`, `br-upload`, `br-generate`, `roadid`, `carpool` — production CLI tools
**The Philosophy: If You Did It Twice, Automate It**
- 212 CLI tools (121 MB) in ~/bin — every deployment, probe, audit, sync, and report is a single command
- 91 shell scripts for fleet management. Custom brand compliance auditing. Mass update tooling across all 99 sites
- GitHub-to-Gitea relay syncs 207 repos every 30 minutes — cross-platform Git without manual intervention
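One way to sketch the relay's sync decision: compare head commit SHAs per repository and push only where they differ. The dicts stand in for API responses from the two platforms; the actual relay logic is not shown in this document:

```python
# Decide which repositories the GitHub-to-Gitea relay should push on a
# sync cycle by comparing head commit SHAs on each side. A repo missing
# from the Gitea side also needs a push.
def repos_to_sync(github_heads: dict[str, str],
                  gitea_heads: dict[str, str]) -> list[str]:
    return sorted(
        repo for repo, sha in github_heads.items()
        if gitea_heads.get(repo) != sha
    )

github = {"blackroad-os": "a1b2", "cece-api": "c3d4", "roadnet": "e5f6"}
gitea  = {"blackroad-os": "a1b2", "cece-api": "0000"}
print(repos_to_sync(github, gitea))  # ['cece-api', 'roadnet']
```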
**Scheduled Automation**
- 17 Mac cron jobs: health monitoring (5m), sync (15m), website automation (30m), backups (daily/6h/12h)
- 35 systemd timers across fleet for service monitoring and data collection
- Self-healing autonomy: heartbeat (1m) + heal (5m) cycles on all Pi nodes
- GitHub-to-Gitea relay: 30-minute sync across 207 repos
**The Schedule: 52 Tasks Running Without You**
- 17 Mac cron jobs + 35 fleet systemd timers = 52 automated tasks running daily, hourly, and every 5 minutes
- Daily KPI collection at 6 AM: 10 collectors pull from GitHub API, fleet SSH, Cloudflare CLI, local Mac — aggregated into daily report
- Self-healing autonomy: heartbeat every 60s, heal every 5m, power monitor every 5m — fleet maintains itself overnight
**KPI Automation**
- 9-collector daily KPI system: GitHub, Gitea, fleet, services, autonomy, LOC, local, Cloudflare, deep GitHub
- Automated fleet probing via SSH with Python-based remote execution
- Daily Slack reporting (block kit formatting)
- Markdown report generation for GitHub README auto-update
- Day-over-day delta tracking with visual progress bars
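The delta tracking above can be sketched as a pure function: compute the day-over-day change and render a text progress bar. Bar width, glyphs, and the target column are illustrative choices:

```python
# Render one KPI report line: current value, day-over-day delta, and a
# text progress bar toward an arbitrary target. Formatting is illustrative.
def delta_line(name: str, today: float, yesterday: float,
               target: float, width: int = 20) -> str:
    delta = today - yesterday
    sign = "+" if delta >= 0 else ""
    filled = min(width, round(width * today / target)) if target else 0
    bar = "#" * filled + "-" * (width - filled)
    return f"{name:<16} [{bar}] {today:g} ({sign}{delta:g})"

print(delta_line("systemd services", 256, 254, 300))
```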
**Deployment Automation**
- 99 Cloudflare Pages with git-push deployment
- GitHub Actions CI/CD pipelines
- Docker Swarm service orchestration
- Fleet-wide power optimization with --apply mode
**Fleet Automation**
- Power monitoring deployed to all nodes (5-minute intervals)
- CPU governor persistence via tmpfiles.d
- Automated service discovery and restart
- Credential rotation from plaintext to secured env files
**The Pipeline: Data That Updates Itself**
- 10 collectors generate snapshots → aggregated into daily JSON → pushed to Cloudflare KV → live resume dashboards update automatically
- Every number on this page came from an automated collector, not a human typing it. Updated daily. Verified by source
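The aggregate step can be sketched as namespacing each collector's metrics under its own prefix so keys never collide in the merged daily document. Collector names and values below are illustrative:

```python
import json

# Merge per-collector JSON snapshots into one daily document, prefixing
# each metric with its collector name. Mirrors the aggregate step of the
# pipeline; the snapshot contents are stand-ins.
def aggregate(snapshots: dict[str, dict], date: str = "2026-03-13") -> dict:
    daily = {"date": date}
    for collector, metrics in snapshots.items():
        for key, value in metrics.items():
            daily[f"{collector}.{key}"] = value
    return daily

snaps = {
    "fleet": {"nodes_online": 7, "avg_temp_c": 42.0},
    "cloudflare": {"pages": 99, "d1_databases": 23},
}
print(json.dumps(aggregate(snaps), indent=2))
```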
---
## Technical Skills
**Scripting:** Bash (212 tools), Python, JavaScript
**Automation:** cron, systemd timers, GitHub Actions, Docker Swarm
**Tools:** Wrangler, gh CLI, SSH, rsync, rclone
**Monitoring:** Custom KPI system, Slack webhooks, distributed tracing
**Infrastructure:** Linux, Docker, Cloudflare, WireGuard
Bash, Python, cron, systemd timers, GitHub Actions, SSH automation, jq, curl
---
## Metrics
| Metric | Value |
|--------|-------|
| CLI tools | 212 |
| Automated tasks | 52 |
| KPI metrics | 60+ |
| Data collectors | 9 |
| Shell scripts | 91 |
| Cron jobs (Mac) | 17 |
| Fleet timers | 35 |
| Metric | Value | Source |
|--------|-------|--------|
| CLI Tools | *live* | local.sh — ls ~/bin \| wc -l |
| Shell Scripts | *live* | local.sh — find ~/ -name *.sh |
| Mac Crons | *live* | local.sh — crontab -l \| wc -l |
| Systemd Timers | *live* | services.sh — systemctl list-timers via SSH |
| Fleet Crons | *live* | autonomy.sh — crontab -l via SSH |
| Total Repos | *live* | github-all-orgs.sh — gh api repos (17 owners) |


@@ -8,67 +8,40 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Database engineer managing 283 databases across 5 engines: 11 PostgreSQL, 230 SQLite (1.4 GB), 22 Cloudflare D1, 46 KV namespaces, and Qdrant vector DB. Designs schemas for AI agent memory, metrics collection, fleet state, and full-text search with FTS5.
Different data needs different storage. Designed and operate 283 databases across 5 engines: PostgreSQL for transactions, SQLite for embedded state, D1 for serverless, KV for edge config, Qdrant for vectors. Each one chosen for a reason.
---
## Experience
### BlackRoad OS | Founder & Data Lead | 2025Present
### BlackRoad OS | Founder & Database Engineer | 2025Present
**PostgreSQL**
- 11 production databases across 3 fleet nodes (Alice, Cecilia, Lucidia)
- Application state for CECE API, Lucidia API, and fleet services
- Automated backup and monitoring
**The Decision: Why 5 Engines, Not 1**
- PostgreSQL (11 DBs) for relational data that needs ACID guarantees — user state, application data, fleet metadata
- SQLite (230 DBs, 1.4 GB) for embedded, zero-config storage — agent memory, metrics history, local state. No server process, instant access
- Cloudflare D1 (23 DBs) for serverless apps at the edge — data lives next to the Workers that query it. Millisecond reads globally
- KV (47 namespaces) for configuration and caching — edge-distributed, eventually consistent, perfect for feature flags and session data
**SQLite / FTS5**
- 230 SQLite databases (1.4 GB total) in ~/.blackroad/
- Key databases: metrics.db (6 tables), systems.db (111 entries), analytics.db, agents.db
- FTS5 full-text search index for code search across 354 repos
- Agent memory databases for AI conversation state
- Observability traces database with nanosecond-precision spans
**Cloudflare D1**
- 22 serverless databases (40 MB total) for edge applications
- images-blackroad: AI image generation metadata
- index-blackroad: code search index (2,524 files)
- blackroad-auth, blackroad-saas, blackroad-logs, and 17 more
**KV / Key-Value**
- 46 Cloudflare KV namespaces for edge state and configuration
- Agent inboxes, config, telemetry, templates, users, world state
- Low-latency edge reads for Cloudflare Workers
**Vector Database**
- Qdrant on Alice for semantic search and embeddings
- Knowledge retrieval for AI agent systems
**Schema Design**
- Metrics schema: metrics, counters, alerts, system_metrics, custom_metrics, metric_definitions
- Inventory schema: inventory, movements, locations
- Observability schema: traces, spans (parent-child), metrics, logs
- Dashboard schema: services, nodes, activity logs
**The Search: Finding Anything Instantly**
- FTS5 full-text search across 156,675 memory entries — the entire knowledge base searchable in under a millisecond
- Code search engine indexing 354 repos — find any function, any file, any pattern across the whole codebase
- 111 registered systems tracked in a systems database — every device, service, and endpoint has a record
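A minimal FTS5 sketch of that search path, using an in-memory index and a MATCH query (requires an SQLite build with the FTS5 extension, which most distributions include):

```python
import sqlite3

# Build a tiny in-memory FTS5 index and run a full-text MATCH query,
# the same shape as the knowledge-base search. Entries are illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE memory USING fts5(entry)")
db.executemany("INSERT INTO memory(entry) VALUES (?)", [
    ("WireGuard mesh connects all fleet nodes",),
    ("Hailo-8 NPU provides 26 TOPS per device",),
    ("Pi-hole handles local DNS resolution",),
])
rows = db.execute(
    "SELECT entry FROM memory WHERE memory MATCH ?", ("mesh",)
).fetchall()
print(rows)  # [('WireGuard mesh connects all fleet nodes',)]
```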
---
## Technical Skills
**Databases:** PostgreSQL, SQLite, Cloudflare D1, KV stores, Qdrant
**Search:** FTS5 full-text search, vector similarity search
**Languages:** Python, SQL, JavaScript, Bash
**Infrastructure:** Linux, Docker, Cloudflare Workers
**Tools:** psql, sqlite3, Wrangler, custom CLI tools
PostgreSQL, SQLite/FTS5, Cloudflare D1, KV stores, Qdrant, SQL, Python, database design
---
## Metrics
| Metric | Value |
|--------|-------|
| Total databases | 283 |
| PostgreSQL DBs | 11 |
| SQLite DBs | 230 (1.4 GB) |
| D1 databases | 22 (40 MB) |
| KV namespaces | 46 |
| Systems tracked | 111 |
| Repos indexed | 354 |
| Metric | Value | Source |
|--------|-------|--------|
| PostgreSQL DBs | *live* | services.sh — psql -l via SSH |
| SQLite DBs | *live* | local.sh — find ~/.blackroad -name *.db |
| Total DB Rows | *live* | local.sh — sqlite3 row count across 230 DBs |
| D1 Databases | *live* | cloudflare.sh — wrangler d1 list --json |
| KV Namespaces | *live* | cloudflare.sh — wrangler kv list |
| FTS5 Entries | *live* | local.sh — sqlite3 FTS5 count |


@@ -8,65 +8,40 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Network engineer operating a multi-layer network stack: WireGuard mesh VPN, Tailscale overlay (9 peers), RoadNet WiFi mesh (5 APs), 4 Cloudflare tunnels serving 48+ domains, DNS infrastructure (Pi-hole + PowerDNS + dnsmasq), and 48 Nginx reverse proxy sites across a 7-node fleet.
Connecting 7 nodes across 3 physical locations with zero open ports. Built a multi-layer network: WireGuard mesh for encryption, Cloudflare tunnels for zero-trust access, RoadNet WiFi mesh for local coverage, and Pi-hole DNS for control.
---
## Experience
### BlackRoad OS | Founder & Network Lead | 2025Present
### BlackRoad OS | Founder & Network Engineer | 2025Present
**VPN & Mesh Networking**
- WireGuard mesh VPN (10.8.0.x subnet) with DigitalOcean hub routing to all edge nodes
- Tailscale overlay network: 9 peers for management access across network boundaries
- RoadNet WiFi mesh: 5 access points on non-overlapping channels (1, 6, 11)
- Dedicated subnets: 10.10.{1-5}.0/24 per node with NAT through wlan0
**The Layers: Defense in Depth**
- Layer 1 — WireGuard mesh VPN (10.8.0.x): encrypted tunnels between all nodes. Every packet between nodes is encrypted, period
- Layer 2 — Cloudflare tunnels (4 active): 48+ domains routed to fleet with zero open ports. External traffic never touches a public IP
- Layer 3 — Tailscale overlay (9 peers): management access from anywhere. MagicDNS for node resolution. Exit nodes for remote debugging
- Layer 4 — RoadNet WiFi mesh: 5 APs on non-overlapping channels, 5 subnets, NAT, auto-failover — local devices talk to fleet directly
**DNS Infrastructure**
- Pi-hole on Alice: fleet-wide ad blocking and DNS resolution
- PowerDNS on Lucidia (Docker): authoritative DNS for custom zones
- dnsmasq on Cecilia: custom DNS zones (.cece, .blackroad, .entity, .soul, .dream)
- Cloudflare DNS for 48+ public domains
**Reverse Proxy & Load Balancing**
- 48 Nginx sites routing traffic to fleet services
- 4 Cloudflare tunnels for zero-trust external access
- Per-service routing: API, web, git, AI inference endpoints
- SSL/TLS termination via Cloudflare
**Monitoring & Diagnostics**
- 106 active network connections monitored across fleet
- 867 processes tracked for network resource usage
- SSH-based fleet probing with health checks
- Daily KPI collection including connection counts and peer status
**Security**
- Zero-trust architecture via Cloudflare tunnels (no exposed ports)
- WireGuard encryption for all inter-node traffic
- UFW firewall on edge nodes (INPUT DROP policy)
- SSH key management across 50+ authorized keys
**The DNS: Names, Not Numbers**
- Pi-hole for ad blocking and local DNS resolution. PowerDNS Docker for custom authoritative zones
- Custom DNS zones: .cece, .blackroad, .entity, .soul, .dream — edge services discoverable by domain name within the network
- 48 Nginx reverse proxy sites with health checking — each domain routes to the right backend on the right node
---
## Technical Skills
**VPN:** WireGuard, Tailscale
**DNS:** Pi-hole, PowerDNS, dnsmasq, Cloudflare DNS
**Proxy:** Nginx, Cloudflare Tunnels
**WiFi:** hostapd, RoadNet mesh configuration
**Security:** UFW, nftables, SSH hardening, TLS
**Monitoring:** Custom probes, ss, connection tracking
WireGuard, Tailscale, Nginx, Cloudflare Tunnels, Pi-hole, PowerDNS, UFW, iptables
---
## Metrics
| Metric | Value |
|--------|-------|
| VPN nodes | 7 (WireGuard) |
| Tailscale peers | 9 |
| WiFi APs | 5 |
| Domains | 48+ |
| Nginx sites | 48 |
| CF tunnels | 4 |
| Connections | 106 |
| Metric | Value | Source |
|--------|-------|--------|
| Nginx Sites | *live* | services.sh — /etc/nginx/sites-enabled via SSH |
| Tailscale Peers | *live* | services.sh — tailscale status via SSH |
| Fleet Nodes | *live* | fleet.sh — SSH probe to all nodes |
| CF Pages | *live* | cloudflare.sh — wrangler pages list |
| Net Connections | *live* | services.sh — ss -tun via SSH |
| Systemd Services | *live* | services.sh — systemctl list-units via SSH |


@@ -8,64 +8,43 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Security engineer who identified and remediated malware, credential leaks, and misconfigurations across a 7-node distributed fleet. Implements zero-trust networking via Cloudflare tunnels, WireGuard encryption, firewall policies, and credential management across 256 managed services.
Found a crypto miner, a cron dropper, and a leaked PAT in my own infrastructure. Cleaned all of it, rotated credentials fleet-wide, and rebuilt security from zero-trust architecture up — because the hardest incidents are the ones inside your own network.
---
## Experience
### BlackRoad OS | Founder & Security Lead | 2025Present
### BlackRoad OS | Founder & Security Engineer | 2025Present
**Incident Response**
- Discovered and removed obfuscated cron dropper executing from /tmp/op.py (Cecilia)
- Identified leaked GitHub PAT (gho_Gfu...) in Lucidia service file, initiated rotation
- Found and investigated xmrig crypto miner service configuration on Lucidia
- Migrated credentials from plaintext crontabs to secured env files (chmod 600) fleet-wide
**The Incidents: What I Found and How I Fixed It**
- Obfuscated cron dropper on Cecilia — exec'ing from /tmp/op.py every 5 minutes. Traced it, removed the cron entry, cleaned /tmp, audited all nodes
- xmrig crypto miner service configured on Lucidia — unit file referencing mining pool. Service removed, system audited for persistence mechanisms
- Leaked GitHub PAT (gho_Gfu...) embedded in a systemd service file on Lucidia — removed from config, token revoked on GitHub, all secrets migrated to chmod 600 env files
- 50+ SSH authorized keys on some nodes — audited every key, identified which ones are active, locked down access paths
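The dropper pattern, a scheduled command executing from a world-writable path, is easy to scan for mechanically. A sketch auditing crontab text for it; the sample crontab is illustrative:

```python
import re

# Flag crontab entries that execute from world-writable locations, the
# pattern the dropper used (a script run from /tmp on a schedule).
SUSPICIOUS = re.compile(r"(?:/tmp/|/dev/shm/|/var/tmp/)\S+")

def audit_crontab(text: str) -> list[str]:
    hits = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and SUSPICIOUS.search(line):
            hits.append(line)
    return hits

crontab = """\
# m h dom mon dow command
0 6 * * * /home/pi/bin/kpi-collect.sh
*/5 * * * * python3 /tmp/op.py
"""
print(audit_crontab(crontab))  # ['*/5 * * * * python3 /tmp/op.py']
```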
**Network Security**
- Zero-trust architecture: all external access through 4 Cloudflare tunnels (no exposed ports)
- WireGuard encryption for all inter-node communication (10.8.0.x mesh)
- UFW firewall with INPUT DROP policy on edge nodes
- Tailscale ACLs for management access (9 peers)
**The Architecture: Trust Nothing by Default**
- Zero open ports — all external access through Cloudflare tunnels. No port forwarding, no exposed SSH, no public APIs
- WireGuard encryption for all inter-node traffic. UFW with INPUT DROP policy on edge nodes. Credential rotation enforced fleet-wide
- GitHub security scanning workflows check for AWS keys, tokens, passwords on every push — catches secrets before they ship
**Access Management**
- SSH key audit: identified 50+ keys on Alice and Octavia requiring cleanup
- NOPASSWD sudo policies documented across all nodes
- Identified 3 Tailscale ghost nodes (offline 15+ days) for decommissioning
- Per-user cron job audit across all fleet nodes
**Infrastructure Hardening**
- Disabled 16 unused skeleton microservices (freed 800 MB RAM, reduced attack surface)
- Masked crash-looping services (rpi-connect-wayvnc) to prevent service abuse
- Removed overclock settings causing instability
- Secured GitHub relay credentials in ~/.github-relay.env (chmod 600)
**Monitoring & Detection**
- Self-healing autonomy scripts detecting and restarting failed services
- 12 failed systemd units tracked and investigated daily
- Fleet-wide power monitoring detecting anomalous CPU usage
- Daily KPI collection tracking security-relevant metrics
**The Lesson**
- Security isn't a feature you add — it's what you find when you actually look. Every fleet needs an adversarial audit, not just a firewall
---
## Technical Skills
**Security:** Incident response, credential management, malware removal, hardening
**Networking:** WireGuard, Cloudflare Tunnels (zero-trust), UFW, nftables, Tailscale
**Linux:** systemd, SSH, file permissions, audit, service isolation
**Monitoring:** Custom KPI system, anomaly detection, SSH probes
**Tools:** Bash (212 CLI tools), Python, GitHub CLI
Incident response, malware analysis, credential rotation, WireGuard, Cloudflare Tunnels, UFW, SSH, Linux hardening
---
## Metrics
| Metric | Value |
|--------|-------|
| Incidents remediated | 5+ |
| Services managed | 256 |
| Firewall policies | UFW + nftables |
| VPN tunnels | 4 CF + 7 WG |
| Services disabled | 16+ |
| Credentials rotated | 4+ |
| Fleet nodes secured | 7 |
| Metric | Value | Source |
|--------|-------|--------|
| Failed Units | *live* | services.sh — systemctl --failed via SSH |
| Fleet Nodes | *live* | fleet.sh — SSH probe to all nodes |
| Systemd Services | *live* | services.sh — systemctl list-units via SSH |
| Tailscale Peers | *live* | services.sh — tailscale status via SSH |
| Nginx Sites | *live* | services.sh — /etc/nginx/sites-enabled via SSH |
| Nodes Online | *live* | fleet.sh — SSH probe to all nodes |


@@ -8,66 +8,43 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Data engineer building pipelines that collect, aggregate, and report 60+ KPIs daily across 9 data sources. Manages 283 databases (PostgreSQL, SQLite, D1, KV), FTS5 search indexes, and distributed tracing infrastructure across a 7-node fleet.
Needed to prove every metric on every resume. Built a 10-collector pipeline that pulls from GitHub API, SSH fleet probes, Cloudflare CLI, and local system — 80+ KPIs aggregated daily, pushed to KV, served live on 20 dashboards.
---
## Experience
### BlackRoad OS | Founder & Data Lead | 2025–Present
### BlackRoad OS | Founder & Data Engineer | 2025–Present
**Data Pipelines**
- Built 9-collector KPI pipeline: GitHub API, Gitea API, SSH fleet probes, Cloudflare API, local system metrics
- Daily aggregation of 60+ metrics into timestamped JSON snapshots
- Day-over-day delta computation for trend tracking
- Multi-format output: terminal report, Slack (block kit), markdown, raw JSON
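The day-over-day delta pass can be sketched as follows (a minimal illustration under assumed snapshot shapes; the real collectors emit far richer keys):

```python
def compute_deltas(today: dict, yesterday: dict) -> dict:
    """Compare two KPI snapshots and report per-metric change.

    Metrics that are missing or non-numeric in either snapshot
    get a delta of None instead of a number.
    """
    deltas = {}
    for key, value in today.items():
        prev = yesterday.get(key)
        if isinstance(value, (int, float)) and isinstance(prev, (int, float)):
            deltas[key] = value - prev
        else:
            deltas[key] = None
    return deltas

def format_delta(d):
    """Render a delta with a trend indicator for the terminal report."""
    if d is None:
        return "·"
    return f"▲ +{d}" if d > 0 else (f"▼ {d}" if d < 0 else "= 0")
```

The same delta dict feeds every output format; only the rendering differs per target.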
**The Problem: Unverifiable Claims Don't Get Hired**
- Resumes say "managed 200+ services" but nobody can verify it. Needed machine-verified metrics with traceable sources
- Built 10 automated collectors: GitHub, GitHub-deep, all-orgs, Gitea, fleet, services, autonomy, LOC, local, Cloudflare
- Each collector runs independently, outputs JSON snapshots. Daily aggregation merges into a single file with 80+ keys. Every number has a source
**Data Collection**
- GitHub collector: commits, PRs, events, languages, stars, forks via `gh` CLI API
- Fleet collector: Python probes piped over SSH to each node (CPU, RAM, disk, Docker, Ollama)
- Services collector: deep inspection of Ollama, Docker, PostgreSQL, Nginx, systemd per node
- Autonomy collector: self-healing events, cron jobs, timers, service restarts
- Cloudflare collector: D1 databases, KV namespaces, R2 buckets, Pages projects
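A minimal sketch of how the Cloudflare collector can count D1 databases, assuming `wrangler d1 list --json` prints a JSON array of database objects (true of recent wrangler versions):

```python
import json
import subprocess

def count_from_json(payload: str) -> int:
    """Count entries in a JSON-array payload from a CLI tool."""
    return len(json.loads(payload))

def count_d1_databases() -> int:
    """Shell out to wrangler and count the D1 databases it reports."""
    out = subprocess.run(
        ["wrangler", "d1", "list", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return count_from_json(out)
```

Keeping the parse step as its own function lets the collector be tested without a live Cloudflare account.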
**The Pipeline: Collect → Aggregate → Serve**
- Fleet probes: Python scripts piped over SSH stdin to remote nodes — avoids shell quoting issues, runs on any node without installing anything
- Cloudflare inventory: wrangler CLI queries Pages, D1, KV, R2 counts. GitHub API: paginated queries across 17 organizations, deduped
- Daily JSON pushed to Cloudflare KV → Worker serves 20 live resume dashboards. Every number on this page updated automatically at 6 AM
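The stdin-piping trick the fleet probes rely on can be sketched like this (the probed fields are illustrative; the real probes also collect Docker and Ollama stats):

```python
import json
import subprocess

# Probe script sent to the remote node's stdin; it only needs a stock
# python3 on the far side -- nothing is installed on the node.
PROBE = r"""
import json, os, shutil
total, used, free = shutil.disk_usage("/")
print(json.dumps({
    "load": os.getloadavg()[0],
    "disk_free_gb": round(free / 1e9, 1),
}))
"""

def probe_node(host: str) -> dict:
    """Run the probe on `host` by piping it to `python3 -` over SSH.

    Sending the script on stdin sidesteps shell-quoting entirely: the
    remote shell never sees the probe's source code as arguments.
    """
    out = subprocess.run(
        ["ssh", host, "python3", "-"],
        input=PROBE, capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out.strip())
```

Swapping `["ssh", host, ...]` for `["python3", "-"]` runs the identical probe locally, which is how the local collector shares code with the fleet one.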
**Data Storage**
- 11 PostgreSQL databases across fleet
- 230 SQLite databases (1.4 GB) with 111 registered systems
- 22 D1 databases (40 MB) for serverless applications
- 46 KV namespaces for edge state
- FTS5 index across 354 repos and 156,675 memory entries
**Data Quality**
- All KPI metrics machine-verified from live sources
- Automated collection via cron (6 AM daily) + GitHub Actions
- Snapshot retention: daily JSON files with full source data
- Distributed tracing with nanosecond-precision spans
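A span recorder with nanosecond precision can be as small as this (the span tuple shape here is an assumption, not the production trace format):

```python
import time
from contextlib import contextmanager

SPANS = []  # collected spans: (name, start_ns, duration_ns)

@contextmanager
def span(name: str):
    """Record a monotonic, nanosecond-precision span around a block."""
    start = time.monotonic_ns()
    try:
        yield
    finally:
        SPANS.append((name, start, time.monotonic_ns() - start))
```

Usage: `with span("collect.github"): run_collector()` — nesting spans gives a call-tree for free, since inner spans close before outer ones.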
**Reporting**
- Terminal dashboard with color-coded metrics and progress bars
- Slack webhook integration with block kit formatting
- Markdown report for GitHub README auto-update
- Historical comparison with delta indicators
**The Scale: 283 Databases, One Pipeline**
- 283 databases across PostgreSQL, SQLite, D1, KV, Qdrant — each one discovered, counted, and tracked by the collectors
- FTS5 full-text search across 156K entries. 111 registered systems. Day-over-day deltas show trends, not just snapshots
---
## Technical Skills
**Pipelines:** Bash, Python, SSH, cron, GitHub Actions
**Databases:** PostgreSQL, SQLite/FTS5, Cloudflare D1, KV
**APIs:** GitHub API, Cloudflare API, Gitea API, Ollama API
**Formats:** JSON, Markdown, Slack Block Kit
**Tools:** gh CLI, Wrangler, Python (json, urllib), Bash
Python, PostgreSQL, SQLite/FTS5, Cloudflare D1, data pipelines, SSH probes, JSON, Bash
---
## Metrics
| Metric | Value |
|--------|-------|
| Daily KPIs tracked | 60+ |
| Data collectors | 9 |
| Databases managed | 283 |
| Data pipeline runs | Daily automated |
| Repos indexed | 354 |
| Memory entries | 156,675 |
| Report formats | 4 |
| Metric | Value | Source |
|--------|-------|--------|
| Lines of Code | *live* | loc.sh — cloc + fleet SSH |
| Total Repos | *live* | github-all-orgs.sh — gh api repos (17 owners) |
| PostgreSQL DBs | *live* | services.sh — psql -l via SSH |
| SQLite DBs | *live* | local.sh — find ~/.blackroad -name *.db |
| Total DB Rows | *live* | local.sh — sqlite3 row count across 230 DBs |
| D1 Databases | *live* | cloudflare.sh — wrangler d1 list --json |


@@ -8,64 +8,43 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Solutions architect who designed and built a complete hybrid edge-cloud platform: 7 compute nodes, 99 Cloudflare deployments, 1,810 repositories, 283 databases, 27 AI models, and 48+ production domains. Architects systems spanning infrastructure, application, data, and AI layers.
Designed a hybrid architecture that combines $700 in edge hardware with Cloudflare's global network — 178 cloud resources, 48+ domains, 7 nodes, 52 TOPS AI compute, all working as one system. The proof is that it's running right now.
---
## Experience
### BlackRoad OS | Founder & Architect | 2025–Present
### BlackRoad OS | Founder & Solutions Architect | 2025–Present
**Architecture Design**
- Hybrid edge-cloud: 5 Pi edge nodes + 2 cloud VMs + Cloudflare serverless
- Zero-trust networking: Cloudflare tunnels for external access, WireGuard for internal mesh
- Multi-database strategy: PostgreSQL (relational), SQLite (embedded), D1 (serverless), KV (cache), Qdrant (vector)
- AI inference tier: 27 models distributed across 3 nodes with NPU acceleration (52 TOPS)
**The Design Decision: Why Hybrid**
- Pure cloud: fast to start, expensive to scale, no data sovereignty. Pure edge: cheap to run, limited reach, hard to expose
- Combined both: Cloudflare for global CDN, edge compute, and serverless databases. Pi fleet for persistent workloads, AI inference, and data ownership
- WireGuard mesh connects everything. Cloudflare tunnels expose services. Tailscale provides management plane. Three networking layers, one unified system
**Platform Scale**
- 7,212,320 lines of code across 1,603 GitHub repos (20 languages)
- 207 Gitea repos across 7 organizations (self-hosted)
- 99 Cloudflare Pages, 22 D1, 46 KV, 11 R2 for serverless layer
- 256 systemd services, 14 Docker containers, 48 Nginx sites
**The Stack: 178 Cloudflare Resources + 7 Fleet Nodes**
- 99 Pages (global CDN) + 22 D1 (serverless SQL) + 46 KV (edge config) + 11 R2 (object storage) = 178 managed resources
- 5 Pi nodes for persistent compute: Docker, Ollama, PostgreSQL, Nginx. 2 cloud VMs for VPN hub and public services
- AI inference distributed across 3 nodes with 52 TOPS — requests route to the node with the right model loaded
**Integration Design**
- GitHub ↔ Gitea sync relay (30-minute automated mirror)
- Cloudflare tunnel routing: 4 tunnels mapping 48+ domains to fleet services
- AI pipeline: model deployment → inference API → SSE streaming → web client
- KPI pipeline: 9 collectors → aggregation → Slack + markdown + terminal reports
**Brand & Product**
- 75 design templates with brand-locked design system (gradient, fonts, rules)
- 15 page template types covering full SaaS application surface
- Code search engine: 354 repos indexed with FTS5
- AI image generation hub with 4 backend agents
**Operational Architecture**
- Self-healing: heartbeat/heal cycles on all nodes
- Observability: 60+ KPIs, distributed tracing, power monitoring
- Security: credential rotation, firewall policies, zero-trust access
- Capacity: 707 GB storage, 20 GB RAM, 52 TOPS AI, auto-scaling via Cloudflare
**The Validation**
- This architecture runs 48+ production domains, serves real traffic, and costs under $50/month in cloud spend. The rest is hardware you own
- 283 databases across 5 engines — each one placed where the latency and consistency requirements demand it
---
## Technical Skills
**Architecture:** Hybrid edge-cloud, microservices, event-driven, zero-trust
**Cloud:** Cloudflare (full stack), DigitalOcean, Tailscale
**Compute:** Raspberry Pi fleet, Docker Swarm, systemd, Hailo-8 NPU
**Data:** PostgreSQL, SQLite, D1, KV, R2, Qdrant, FTS5
**Languages:** Python, JavaScript, TypeScript, Bash, Go, C
system design, Cloudflare, WireGuard, distributed systems, edge computing, AI infrastructure
---
## Metrics
| Metric | Value |
|--------|-------|
| Total LOC | 7,212,320 |
| Repos | 1,810 (1,603 GH + 207 Gitea) |
| Databases | 283 |
| CF resources | 178 (99 Pages + 22 D1 + 46 KV + 11 R2) |
| AI models | 27 (48.1 GB) |
| Domains | 48+ |
| Services | 256 |
| Metric | Value | Source |
|--------|-------|--------|
| CF Pages | *live* | cloudflare.sh — wrangler pages list |
| D1 Databases | *live* | cloudflare.sh — wrangler d1 list --json |
| KV Namespaces | *live* | cloudflare.sh — wrangler kv list |
| R2 Buckets | *live* | cloudflare.sh — wrangler r2 bucket list |
| Fleet Nodes | *live* | fleet.sh — SSH probe to all nodes |
| Total Repos | *live* | github-all-orgs.sh — gh api repos (17 owners) |


@@ -8,7 +8,7 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Technical lead building BlackRoad OS from zero to 7.2M LOC, 1,810 repos, and a 7-node production fleet. Drives technical direction across AI/ML, infrastructure, cloud, and developer tooling. 326 commits/day velocity with 4,019 PRs merged and 60+ KPIs tracked daily.
326 commits/day. 4,019 PRs merged. 1,603 repos across 17 organizations. 20 languages. 7.2M lines of code. This is what sustained technical velocity looks like when you architect for speed and automate everything that slows you down.
---
@@ -16,56 +16,34 @@ Technical lead building BlackRoad OS from zero to 7.2M LOC, 1,810 repos, and a 7
### BlackRoad OS | Founder & Technical Lead | 2025–Present
**Technical Vision & Execution**
- Architected full-stack platform: edge compute (5 Pis) + cloud (Cloudflare + DO) + AI (27 models)
- Grew codebase to 7,212,320 LOC across 1,603 GitHub repos in 20 languages
- Maintained 326 commits/day development velocity
- Merged 4,019 pull requests across all repositories
**The Velocity: Why These Numbers Are Real**
- 326 commits/day isn't sprinting — it's the natural output of 212 CLI tools, automated pipelines, and infrastructure that doesn't fight you
- 4,019 PRs merged across all repos. Every change goes through a PR, even solo. The discipline of code review applies to yourself
- 1,603 GitHub repos across 17 organizations — each org has a purpose (AI, Cloud, Hardware, Education, etc.). 207 more on self-hosted Gitea
**Infrastructure Leadership**
- Designed hybrid edge-cloud architecture with 7 compute nodes
- 99 Cloudflare Pages, 22 D1 databases, 46 KV, 11 R2 buckets
- WireGuard mesh VPN + Cloudflare tunnels for secure distributed operations
- 256 systemd services, 14 Docker containers, 48 Nginx sites
**The Breadth: 20 Languages, One Person**
- Python (470 repos), JavaScript (114), HTML (314), Shell (160), TypeScript (85), Go, C, MDX, Dockerfile, CSS — the right language for the right job
- 7.2M lines of code tracked daily by automated LOC collector — not vanity, verification. Every line is accounted for
- Custom programming language (RoadC) with full interpreter: lexer, parser, tree-walking evaluator — because sometimes the right tool doesn't exist yet
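RoadC's grammar isn't reproduced here, but the lexer → parser → tree-walking evaluator pattern it uses can be sketched for a toy arithmetic language (everything below is illustrative, not RoadC itself):

```python
import re

# Toy expression language: integers, + - * /, parentheses.
_TOKEN = re.compile(r"\s*(?:(\d+)|([-+*/()]))")

def tokenize(src: str):
    """Lexer: split source into a flat list of token strings."""
    src = src.strip()
    tokens, pos = [], 0
    while pos < len(src):
        m = _TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad char at {pos}")
        tokens.append(m.group(1) or m.group(2))
        pos = m.end()
    return tokens

def parse(tokens):
    """Recursive-descent parser producing a nested-tuple AST."""
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] in "+-":
            op = tokens[i]
            rhs, i = term(i + 1)
            node = (op, node, rhs)
        return node, i
    def term(i):
        node, i = atom(i)
        while i < len(tokens) and tokens[i] in "*/":
            op = tokens[i]
            rhs, i = atom(i + 1)
            node = (op, node, rhs)
        return node, i
    def atom(i):
        if tokens[i] == "(":
            node, i = expr(i + 1)
            return node, i + 1  # skip ")"
        return int(tokens[i]), i + 1
    node, _ = expr(0)
    return node

def evaluate(node):
    """Tree-walking evaluator: recurse into tuples, return leaves."""
    if isinstance(node, int):
        return node
    op, lhs, rhs = node
    a, b = evaluate(lhs), evaluate(rhs)
    return {"+": a + b, "-": a - b, "*": a * b, "/": a // b}[op]
```

The same three-stage shape scales to statements and scopes by adding node types, which is all a full interpreter is.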
**AI/ML Direction**
- Deployed 27 Ollama models (48.1 GB) with 52 TOPS hardware acceleration
- Fine-tuned 4 custom models for domain-specific applications
- Built AI image generation pipeline with 4 backend agents
- SSE streaming proxy for real-time model interaction
**Developer Experience**
- Built 212 CLI tools (121 MB) for team productivity
- Self-hosted Gitea with 207 repos across 7 organizations
- Code search engine indexing 354 repos with FTS5
- Automated deployment pipelines via GitHub Actions and Cloudflare
**Operational Excellence**
- Built KPI system tracking 60+ metrics across 9 data sources
- Self-healing automation on all fleet nodes
- Security hardening: malware removal, credential rotation, zero-trust networking
- Power and thermal optimization across edge fleet
**The Principle**
- Technical leadership isn't about managing people. It's about building systems so well that one person can operate what usually takes a team
---
## Technical Skills
**Leadership:** Architecture design, technical roadmap, operational excellence
**Stack:** Python, JavaScript/TypeScript, Bash, Go | FastAPI, Next.js, React
**Infrastructure:** Linux, Docker, Cloudflare, WireGuard, Raspberry Pi, Hailo-8
**Data:** PostgreSQL, SQLite, D1, KV, R2, FTS5, Qdrant
**AI:** Ollama, custom fine-tuning, NPU inference, multi-model orchestration
Python, JavaScript, TypeScript, Bash, Go, C, React, FastAPI, system design, mentorship
---
## Metrics
| Metric | Value |
|--------|-------|
| Lines of code | 7,212,320 |
| Commits/day | 326 |
| PRs merged | 4,019 |
| Repos | 1,810 |
| AI models | 27 |
| Fleet nodes | 7 |
| KPIs tracked | 60+ |
| Metric | Value | Source |
|--------|-------|--------|
| Commits Today | *live* | github.sh — gh api events |
| PRs Merged | *live* | github.sh — gh api search/issues |
| Total Repos | *live* | github-all-orgs.sh — gh api repos (17 owners) |
| Lines of Code | *live* | loc.sh — cloc + fleet SSH |
| Languages | *live* | github-all-orgs.sh — repo language field |
| CLI Tools | *live* | local.sh — ls ~/bin \| wc -l |


@@ -8,65 +8,41 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Python developer with 470 Python repositories building FastAPI services, CLI tools, data pipelines, AI model interfaces, and automation scripts. Operates 11 PostgreSQL databases, 230 SQLite databases, and production APIs serving 27 AI models across a distributed fleet.
470 Python repos. FastAPI services handling AI inference, fleet probes, and data pipelines. Python isn't just a language in this stack — it's the glue that holds 7 nodes, 27 models, and 283 databases together.
---
## Experience
### BlackRoad OS | Founder & Python Lead | 2025–Present
### BlackRoad OS | Founder & Python Developer | 2025–Present
**API Development (FastAPI)**
- CECE API: AI model interaction service with streaming responses
- TTS API: Text-to-speech generation backend
- Lucidia API: Full application backend with auth integration
- KPI collection service: 9 data collectors with JSON aggregation
**The Services: Python in Production**
- CECE API (FastAPI) — custom LLM personality engine with text-to-speech. Runs on Pi 5, serves inference over HTTP
- Lucidia API (FastAPI) — application platform backend. CarPool (Next.js + Clerk) frontend, Python API layer
- Fleet probes — Python scripts piped over SSH stdin to remote nodes. No installation needed. Collects CPU, RAM, disk, Docker, Ollama, systemd stats
- KPI aggregation pipeline — 10 collectors output JSON, Python merges into daily summary with 80+ keys, pushes to KV
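The merge step of the aggregation pipeline can be sketched as follows (directory layout and key names are illustrative; the KV push is a separate wrangler call not shown here):

```python
import json
from pathlib import Path

def merge_snapshots(snapshots: list[dict]) -> dict:
    """Merge collector outputs in order; later collectors win on key collisions."""
    summary: dict = {}
    for snap in snapshots:
        summary.update(snap)
    return summary

def aggregate(snapshot_dir: str) -> dict:
    """Read every *.json collector snapshot in `snapshot_dir` and merge them."""
    paths = sorted(Path(snapshot_dir).glob("*.json"))
    return merge_snapshots([json.loads(p.read_text()) for p in paths])
```

Sorting the paths makes collisions deterministic: the alphabetically last collector always wins, so a disagreement between two sources is at least reproducible.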
**Data & Automation**
- 230 SQLite databases (1.4 GB) managed via Python sqlite3
- FTS5 full-text search across 354 repos and 156,675 memory entries
- Daily KPI pipeline: GitHub/Gitea API collection, SSH fleet probing, metric aggregation
- Fleet probe scripts: Python-based remote execution piped over SSH
**AI/ML Integration**
- Ollama API integration for 27 model endpoints
- Custom fine-tuning pipeline for 4 CECE personality models
- Image generation orchestration across 4 backends (DALL-E, Flux, SDXL, Together)
- SSE/WebSocket streaming for real-time model responses
**CLI & Scripting**
- RoadC interpreter: lexer, parser, tree-walking evaluator in Python
- `roadid`: fleet identity management CLI
- `carpool`: agent routing and load balancing tool
- Fleet power optimization scripts with --apply mode
- Observability system with distributed tracing
**Infrastructure**
- 35 pip packages installed for production use
- Docker containerization of Python services
- GitHub Actions CI/CD for Python projects
- PostgreSQL (psycopg2) and SQLite database management
**The Tools: Python Solving Real Problems**
- FTS5 search engine — Python + SQLite full-text search across 156K memory entries. Sub-millisecond lookups
- RoadC interpreter — custom language with Python-style indentation. Lexer, parser, and tree-walking evaluator, all in Python
- AI image generation hub — Python orchestrating 4 backend agents (DALL-E, Flux, SDXL, FAL), automatic model selection
- Automated reporting — terminal dashboards, Slack notifications, markdown reports, resume generation. All Python
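The FTS5 engine above boils down to a virtual table plus a ranked query; a minimal sketch with an assumed (path, content) schema:

```python
import sqlite3

def build_index(rows):
    """Create an in-memory FTS5 index over (path, content) rows."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE VIRTUAL TABLE docs USING fts5(path, content)")
    db.executemany("INSERT INTO docs VALUES (?, ?)", rows)
    return db

def search(db, query, limit=5):
    """Rank matches with FTS5's built-in bm25 ordering (ORDER BY rank)."""
    cur = db.execute(
        "SELECT path FROM docs WHERE docs MATCH ? ORDER BY rank LIMIT ?",
        (query, limit),
    )
    return [row[0] for row in cur]
```

The production index is file-backed rather than in-memory, but the query shape is identical; sub-millisecond lookups come from FTS5's inverted index, not from anything clever in the Python layer.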
---
## Technical Skills
**Python:** FastAPI, sqlite3, json, urllib, subprocess, asyncio
**Databases:** PostgreSQL (psycopg2), SQLite/FTS5, Cloudflare D1
**AI/ML:** Ollama, model fine-tuning, inference APIs
**APIs:** REST, WebSocket, SSE, GitHub API, Cloudflare API
**Tools:** Docker, GitHub Actions, pip, venv
Python, FastAPI, SQLite, PostgreSQL, Ollama, asyncio, subprocess, json, data pipelines
---
## Metrics
| Metric | Value |
|--------|-------|
| Python repos | 470 |
| APIs built | 4+ |
| SQLite DBs | 230 |
| PostgreSQL DBs | 11 |
| pip packages | 35 |
| Models integrated | 27 |
| Lines of code | 7,212,320 |
| Metric | Value | Source |
|--------|-------|--------|
| Lines of Code | *live* | loc.sh — cloc + fleet SSH |
| Total Repos | *live* | github-all-orgs.sh — gh api repos (17 owners) |
| PostgreSQL DBs | *live* | services.sh — psql -l via SSH |
| SQLite DBs | *live* | local.sh — find ~/.blackroad -name *.db |
| AI Models | *live* | services.sh — ollama list via SSH |
| Systems Registered | *live* | local.sh — sqlite3 systems count |


@@ -8,7 +8,7 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Product engineer who built BlackRoad OS from concept to production: 99 deployed web properties, 48+ custom domains, 75 design templates, a daily KPI dashboard, and AI-powered tools. Ships across the full stack — frontend templates, backend APIs, infrastructure, and monitoring.
99 live sites, but no design team. Built a brand-locked design system with 75 templates, 15 page types, and automated compliance auditing — every site ships on-brand because the system won't let you ship off-brand.
---
@@ -16,56 +16,35 @@ Product engineer who built BlackRoad OS from concept to production: 99 deployed
### BlackRoad OS | Founder & Product Engineer | 2025–Present
**Product Development**
- Built and deployed 99 Cloudflare Pages projects across 48+ custom domains
- Created 75 design templates (15 page types) with brand-locked design system
- AI image generation hub (images.blackroad.io) with 4 backend agents
- Code search engine (index.blackroad.io) indexing 354 repos
- Daily KPI dashboard tracking 60+ metrics with Slack integration
**The System: Brand as Code**
- Gradient spectrum locked: #FF6B2B → #FF2255 → #CC00AA → #8844FF → #4488FF → #00D4FF. No other colors in containers with text
- Typography locked: Space Grotesk for display, JetBrains Mono for code, Inter for body. Golden ratio spacing (φ = 1.618)
- Automated brand compliance auditing — tooling scans all 99 sites for violations. Mass update tooling applies fixes fleet-wide
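The core of a palette audit like this is just a hex scan against the locked spectrum (a sketch: the real auditor also whitelists neutrals like black and white, which this version would flag):

```python
import re

# Locked gradient spectrum from the design system.
PALETTE = {"#FF6B2B", "#FF2255", "#CC00AA", "#8844FF", "#4488FF", "#00D4FF"}

HEX = re.compile(r"#[0-9a-fA-F]{6}\b")

def audit(html: str) -> list[str]:
    """Return every hex color in a page that is not in the locked palette."""
    found = {m.group(0).upper() for m in HEX.finditer(html)}
    return sorted(found - PALETTE)
```

Running this over all 99 sites is a loop over fetched pages; any non-empty result is a violation to feed the mass-update tooling.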
**Frontend**
- 75 HTML/JSX templates: landing, dashboard, auth, settings, blog, portfolio, status, docs, pricing
- Brand system: 6-color gradient, 3 font families (Space Grotesk, JetBrains Mono, Inter)
- React/Next.js applications with real-time streaming
**The Coverage: 15 Page Types, Every SaaS Surface**
- Landing (hero, light alt), pricing, blog (listing + article), docs, dashboard, auth, portfolio, contact, error-404, status, settings, team, changelog
- 75 design templates (HTML/JSX) — each one brand-locked, responsive, and production-ready. Plug in content and deploy
- 99 Cloudflare Pages projects across 48+ custom domains — every site is live, every domain has SSL, every page loads in under 2 seconds
**Backend & APIs**
- FastAPI services: CECE (AI), TTS (speech), Lucidia (application)
- 14 Docker containers serving production workloads
- 11 PostgreSQL + 230 SQLite databases for application state
- WebSocket/SSE streaming for AI model interaction
**Infrastructure**
- 7-node hybrid edge-cloud fleet
- 27 AI models (48.1 GB) for product features
- 256 systemd services, 52 automated tasks
- Self-healing autonomy on all production nodes
**Metrics & Growth**
- 7,212,320 lines of code across 1,603 repos
- 326 commits/day development velocity
- 4,019 PRs merged
- 60+ KPIs tracked daily across 9 collectors
**The Product: AI Image Generation**
- images.blackroad.io — AI image generation hub with 4 backend agents, R2 storage, D1 metadata, single API endpoint
- Users request images by prompt. System routes to best model (DALL-E for quality, Flux for speed). Results stored and served from R2
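The routing decision can be sketched as a stub (backend names follow the text; the production router also weighs queue depth and model availability):

```python
def route(prompt: str, priority: str = "quality") -> str:
    """Pick a backend agent for an image request.

    Quality requests go to DALL-E, speed requests to Flux,
    anything else falls back to SDXL.
    """
    if priority == "quality":
        return "dalle"
    if priority == "speed":
        return "flux"
    return "sdxl"
```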
---
## Technical Skills
**Frontend:** React, Next.js, HTML/CSS, JSX, Cloudflare Pages
**Backend:** Python/FastAPI, Node.js, Docker
**Design:** Brand systems, template architecture, responsive design
**Data:** PostgreSQL, SQLite, D1, KV, FTS5
**Infrastructure:** Cloudflare, Docker Swarm, Linux, WireGuard
React, Next.js, HTML/CSS, Cloudflare Pages, design systems, brand management, Figma
---
## Metrics
| Metric | Value |
|--------|-------|
| Deployed sites | 99 |
| Design templates | 75 |
| Domains | 48+ |
| AI models | 27 |
| Lines of code | 7,212,320 |
| Commits/day | 326 |
| KPIs tracked | 60+ |
| Metric | Value | Source |
|--------|-------|--------|
| CF Pages | *live* | cloudflare.sh — wrangler pages list |
| Templates | *live* | local.sh — ls ~/Desktop/templates |
| Total Repos | *live* | github-all-orgs.sh — gh api repos (17 owners) |
| Lines of Code | *live* | loc.sh — cloc + fleet SSH |
| Nginx Sites | *live* | services.sh — /etc/nginx/sites-enabled via SSH |
| CLI Tools | *live* | local.sh — ls ~/bin \| wc -l |


@@ -1,6 +1,6 @@
# Alexa Amundson
**CTO / Technical Co-Founder**
**Startup CTO**
amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/blackboxprogramming)
@@ -8,87 +8,51 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary
Technical founder who built BlackRoad OS from zero to a production platform: 7.2M LOC, 1,810 repos, 7-node fleet, 99 cloud deployments, 27 AI models, 283 databases, and 48+ live domains. Operates the entire technical stack — infrastructure, backend, frontend, AI, DevOps, security, and observability — solo.
Built BlackRoad OS from nothing — no team, no funding, no existing code. One person, 7.2M lines of code, 1,810 repos, 7-node fleet, 27 AI models, 283 databases, 48+ live domains. The entire company's technical stack, soup to nuts, solo.
---
## Experience
### BlackRoad OS | Founder & CTO | 2025–Present
### BlackRoad OS | Founder & Startup CTO | 2025–Present
**Built From Zero**
- 7,212,320 lines of code across 1,603 GitHub + 207 Gitea repos
- 20 programming languages: JavaScript, Python, TypeScript, HTML, Shell, Dockerfile, Go, C, MDX, CSS
- 326 commits/day sustained velocity, 4,019 PRs merged
- 212 CLI tools (121 MB) built for operational efficiency
**From Zero to Production — Alone**
- Started with an idea and a credit card. Now: 7.2M lines of code, 1,603 GitHub repos across 17 orgs, 207 Gitea repos across 7 more
- 326 commits/day sustained velocity. 4,019 PRs merged. 20 programming languages. 212 CLI tools built for every operational workflow
- No investors, no employees, no outsourcing — every line of code, every server config, every DNS record is my work
**Infrastructure (designed & operated)**
- 5 Raspberry Pi edge nodes + 2 DigitalOcean VMs + Cloudflare serverless
- WireGuard mesh VPN + 4 Cloudflare tunnels + Tailscale (9 peers)
- 256 systemd services, 14 Docker containers, 48 Nginx sites
- 707 GB storage, 20 GB RAM, 52 TOPS AI compute (2× Hailo-8)
**The Infrastructure Decision: Own Everything**
- 5 Raspberry Pi edge nodes + 2 cloud VMs + Cloudflare serverless — total hardware cost under $700, cloud spend under $50/month
- 256 systemd services, 14 Docker containers, 48 Nginx sites, 27 Ollama models (48.1 GB), 52 TOPS AI compute (2x Hailo-8)
- WireGuard mesh + 4 Cloudflare tunnels + Tailscale overlay — three networking layers ensuring everything talks to everything, encrypted
**AI/ML (deployed & tuned)**
- 27 Ollama models (48.1 GB) across 3 inference nodes
- 4 custom fine-tuned models
- AI image generation with 4 backend agents
- Streaming inference via SSE proxy
**The Cloud Platform: 178 Managed Resources**
- 99 Pages, 22 D1, 46 KV, 11 R2 — Cloudflare is the global layer. Fleet is the sovereign layer. Both managed through CLI automation
- 283 databases across 5 engines. 48+ custom domains. 52 automated tasks. 60+ KPIs tracked daily across 10 collectors
**Cloud (Cloudflare full-stack)**
- 99 Pages projects, 22 D1 databases, 46 KV namespaces, 11 R2 buckets
- 48+ custom domains with SSL/TLS
- Serverless Workers for edge compute
**Data (283 databases)**
- 11 PostgreSQL, 230 SQLite (1.4 GB), 22 D1, 46 KV
- FTS5 search across 354 repos
- Distributed tracing with nanosecond precision
**Automation & Ops**
- 52 automated tasks (17 Mac cron + 35 fleet timers)
- Self-healing autonomy on all nodes
- Daily KPI collection: 60+ metrics across 9 data sources
- Slack integration for operational reporting
**Security**
- Malware discovery and removal
- Credential rotation fleet-wide
- Zero-trust architecture via Cloudflare tunnels
**Product**
- 75 design templates with brand-locked system
- 15 page types covering full SaaS surface
- Custom programming language (RoadC) with interpreter
**Why It Matters**
- This isn't a portfolio project — it's a production system serving real traffic. Every metric on this page is collected from live infrastructure, right now
- A CTO who built the whole stack understands every layer. I don't delegate debugging because I wrote the code that's breaking
---
## The Numbers
## Technical Skills
| Category | Metric | Value |
|----------|--------|-------|
| **Code** | Lines of code | 7,212,320 |
| | Commits/day | 326 |
| | PRs merged | 4,019 |
| | Repositories | 1,810 |
| | Languages | 20 |
| **Infra** | Fleet nodes | 7 |
| | Systemd services | 256 |
| | Docker containers | 14 |
| | Nginx sites | 48 |
| **AI** | Models deployed | 27 (48.1 GB) |
| | AI acceleration | 52 TOPS |
| | Custom models | 4 |
| **Cloud** | CF Pages | 99 |
| | D1 databases | 22 |
| | KV namespaces | 46 |
| | R2 buckets | 11 |
| | Domains | 48+ |
| **Data** | Total databases | 283 |
| | PostgreSQL | 11 |
| | SQLite | 230 (1.4 GB) |
| **Ops** | CLI tools | 212 |
| | Automated tasks | 52 |
| | KPIs tracked | 60+ |
| **Local** | Brew packages | 293 |
| | Templates | 75 |
| | Shell scripts | 91 |
Python, JavaScript, TypeScript, Bash, Go, C, React, Next.js, FastAPI, Docker, Linux, Nginx, WireGuard, Cloudflare, PostgreSQL, SQLite, systemd, Hailo-8, Ollama, GitHub Actions
---
## Metrics
| Metric | Value | Source |
|--------|-------|--------|
| Lines of Code | *live* | loc.sh — cloc + fleet SSH |
| Total Repos | *live* | github-all-orgs.sh — gh api repos (17 owners) |
| Commits Today | *live* | github.sh — gh api events |
| PRs Merged | *live* | github.sh — gh api search/issues |
| Fleet Nodes | *live* | fleet.sh — SSH probe to all nodes |
| AI Models | *live* | services.sh — ollama list via SSH |
| CF Pages | *live* | cloudflare.sh — wrangler pages list |
| Docker Containers | *live* | services.sh — docker ps via SSH |
| Systemd Services | *live* | services.sh — systemctl list-units via SSH |
| Nginx Sites | *live* | services.sh — /etc/nginx/sites-enabled via SSH |