mirror of
https://github.com/blackboxprogramming/alexa-amundson-resume.git
synced 2026-03-18 05:34:08 -05:00
kpi: auto-update metrics 2026-03-13
RoadChain-SHA2048: c645c1292ab1555e RoadChain-Identity: alexa@sovereign RoadChain-Full: c645c1292ab1555ebe6982915536d1c94701ff6bb16c20ed6ef4144eb50c9f984b4bfe5b9902109e8defd958d6be43ced8ec11cf95d6241536cd4da0b75f8fb48cbeb1b9f450c8f665b73d39e837d23e73e2ba4201af4dc40c02a34283efb04b39c612083465536f194f16adfadb1b56f714a65b918f40750f54eebf7724236861de173ec31963ff3b1b988d712be7e5acc3fe391eb804d3fdcfb9ccf77afc732660d23fff801f894318327eabf775eb4f4e67f7f22d07f23b0e17f6594cfe95b83b275fb7baaa97115e86562604fc5b47cc8024574b61396924e0ee2b7e454b0a1480c3076c7ad72408ceb4a75360d2d49c7d805c37ac5315af00e4a8ca2262
@@ -8,64 +8,43 @@ amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/bl
## Summary

Edge computing engineer operating a 5-node Raspberry Pi fleet with 52 TOPS AI acceleration, 27 deployed models, WireGuard mesh networking, and carrier-grade WiFi mesh. Builds edge-native services with self-healing automation, thermal management, and hybrid edge-cloud architecture.

Cloud inference is someone else's computer running your data. Deployed 27 AI models on-device across 5 Pi nodes with 52 TOPS acceleration, built a WiFi mesh for local connectivity, and kept it all running with self-healing automation.

---

## Experience
### BlackRoad OS | Founder & Edge Lead | 2025–Present

### BlackRoad OS | Founder & Edge Computing Engineer | 2025–Present

**Edge Fleet**

- 5 Raspberry Pi nodes: 4× Pi 5 (8 GB RAM, NVMe), 1× Pi 400 (4 GB RAM)
- 2× Hailo-8 NPUs (26 TOPS each) for on-device AI inference
- 707 GB total fleet storage, 20 GB total RAM
- Docker Swarm orchestration with automatic service placement
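Automatic service placement in Docker Swarm is usually expressed as placement constraints in the stack file. A minimal sketch, assuming a node label `npu=hailo8` was applied beforehand with `docker node update` (the service name and label are illustrative, not taken from the fleet):

```yaml
# docker-compose.yml fragment (service name and label are illustrative)
services:
  inference:
    image: ollama/ollama
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.labels.npu == hailo8   # only schedule on nodes labeled with an NPU
```

With a constraint like this, Swarm handles the "automatic" part: if a labeled node goes down, the service is rescheduled onto another node that satisfies the constraint.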
**The Vision: AI at the Edge, Not in the Cloud**

- 27 Ollama models (48.1 GB) running on 3 Pi 5 nodes — inference happens on-premise, data never leaves the network
- 2× Hailo-8 NPUs (52 TOPS total) for hardware-accelerated inference — PCIe integration, driver management, firmware updates
- 4 custom fine-tuned CECE models — personality, voice, and domain expertise that can't be replicated with off-the-shelf models
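Counts like "27 models (48.1 GB)" can be tallied straight from `ollama list`-style output. A sketch over placeholder rows (these model names and sizes are illustrative, not the real fleet inventory, and the header line of `ollama list` is assumed already stripped):

```shell
# Tally model count and total size from "NAME SIZE UNIT" rows.
# Placeholder data — not the actual 27-model inventory.
list_output="llama3.2:3b 2.0 GB
phi3:mini 2.2 GB
qwen2.5:7b 4.7 GB"

model_count=$(printf '%s\n' "$list_output" | wc -l)
total_gb=$(printf '%s\n' "$list_output" | awk '{s += $2} END {printf "%.1f", s}')
echo "$model_count models, $total_gb GB"
```

A `services.sh`-style probe would feed this the real listing over SSH instead of a here-string.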
**Edge AI**

- 27 Ollama models (48.1 GB) running locally across 3 nodes
- 4 custom fine-tuned models for domain-specific inference
- SSE proxy for streaming model responses to web clients
- Image generation pipeline with 4 backend agents
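At its core, the SSE proxy above wraps each model-response chunk in the `data:` framing the browser's EventSource protocol expects. A minimal sketch of that framing step (the proxy's actual implementation is not shown in the source):

```shell
# Wrap one model-response chunk as a Server-Sent Events frame:
# a "data:" line followed by a blank line, per the SSE wire format.
sse_frame() {
  printf 'data: %s\n\n' "$1"
}

sse_frame '{"token":"hello"}'
```

A real proxy would emit one such frame per streamed chunk and keep the HTTP response open with `Content-Type: text/event-stream`.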
**The Network: Mesh Connectivity Without Internet**

- RoadNet WiFi mesh: 5 APs on channels 1/6/11, 5 subnets (10.10.x.0/24), NAT through wlan0 — devices connect to fleet directly
- WireGuard mesh for encrypted node-to-node communication. Tailscale overlay (9 peers) for remote management from anywhere
- Pi-hole DNS for local resolution + custom zones (.cece, .blackroad) — edge services discoverable by name, not IP
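Custom zones like `.cece` and `.blackroad` are typically declared as dnsmasq configuration. A sketch, assuming a hypothetical edge service at `10.10.1.10` (the hostname and address are placeholders):

```
# /etc/dnsmasq.d/blackroad.conf (illustrative; the address is hypothetical)
# Answer .cece and .blackroad queries locally; never forward them upstream.
local=/cece/
local=/blackroad/
# Resolve a named edge service to its RoadNet address.
address=/api.blackroad/10.10.1.10
```

This is what makes services "discoverable by name, not IP": clients only need the Pi-hole/dnsmasq resolver, not a hardcoded address.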
**Edge Networking**

- RoadNet: 5 WiFi access points (channels 1/6/11), dedicated 10.10.x.0/24 subnets
- WireGuard mesh VPN (10.8.0.x) connecting all nodes to cloud hub
- 4 Cloudflare tunnels for secure external access
- Pi-hole DNS, PowerDNS, custom dnsmasq zones at edge
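A node's side of the hub-and-spoke WireGuard mesh would look roughly like this (the keys, hub endpoint, and the node's 10.8.0.x address are all placeholders):

```ini
# /etc/wireguard/wg0.conf on an edge node (all values illustrative)
[Interface]
Address = 10.8.0.2/24
PrivateKey = <node-private-key>

[Peer]
# Cloud hub
PublicKey = <hub-public-key>
Endpoint = hub.example.com:51820
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25
```

`PersistentKeepalive` matters at the edge: nodes behind NAT must send periodic traffic or the hub loses the return path.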
**Edge Reliability**

- Self-healing cron automation on every node
- Power optimization: CPU governors, voltage tuning, thermal throttle prevention
- Avg fleet temperature: 44.8°C (down from 73.8°C peak after optimization)
- 256 systemd services managed across fleet
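The self-healing cron pattern can be as small as one crontab line per critical service. A sketch (the service name is illustrative; the 5-minute cadence matches the power-monitoring interval mentioned in the source):

```
# crontab fragment: restart a unit if it has stopped (illustrative service name)
*/5 * * * * systemctl is-active --quiet ollama || systemctl restart ollama
```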
**Hybrid Architecture**

- Edge nodes handle AI inference, local services, DNS, monitoring
- Cloud (Cloudflare) handles 99 Pages deployments, 22 D1 databases, CDN
- DigitalOcean VMs as WireGuard hubs and public endpoints
- Tailscale overlay (9 peers) for cross-network management
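Public exposure of edge services through Cloudflare Tunnels is configured per tunnel in cloudflared's config file. A sketch with placeholder IDs and hostnames (none of these values come from the source):

```yaml
# ~/.cloudflared/config.yml (tunnel ID and hostname are placeholders)
tunnel: <tunnel-id>
credentials-file: /root/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080   # edge service behind the tunnel
  - service: http_status:404         # catch-all rule cloudflared requires
```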
**The Challenge: Keeping Edge Alive**

- Edge hardware fails differently than cloud — SD cards degrade, power supplies sag, thermal throttling kills inference mid-response
- Self-healing autonomy on every node. Power monitoring every 5 minutes. Automatic service restarts. Temperature alerts before shutdown
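A temperature alert before shutdown reduces to comparing the kernel's millidegree reading against a threshold. A minimal sketch (the 70 °C limit is an assumption for illustration, not the fleet's actual threshold):

```shell
# Compare a /sys/class/thermal/thermal_zone0/temp reading (millidegrees C)
# against an alert threshold. The 70000 (70 °C) limit is an assumed value.
check_temp() {
  if [ "$1" -ge 70000 ]; then echo alert; else echo ok; fi
}

check_temp 44800   # a reading near the fleet's reported 44.8°C average — prints "ok"
```

A node-side monitor would read the real sysfs file and page out (or power down gracefully) on `alert`.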
---

## Technical Skills

**Edge:** Raspberry Pi 5, Hailo-8 NPU, NVMe, PCIe, GPIO, I2C
**Networking:** WireGuard, WiFi mesh, Cloudflare Tunnels, DNS (Pi-hole, PowerDNS)
**AI:** Ollama, Hailo-8 inference, custom model fine-tuning
**Containers:** Docker, Docker Swarm
**Automation:** systemd (256 services), cron (52 tasks), self-healing scripts

Raspberry Pi, Hailo-8, Ollama, WireGuard, WiFi mesh, Pi-hole, Docker, Linux
---

## Metrics

| Metric | Value |
|--------|-------|
| Edge nodes | 5 |
| AI acceleration | 52 TOPS |
| Models deployed | 27 (48.1 GB) |
| WiFi APs | 5 |
| Fleet storage | 707 GB |
| Avg temperature | 44.8°C |
| Services | 256 |

| Metric | Value | Source |
|--------|-------|--------|
| Fleet Nodes | *live* | fleet.sh — SSH probe to all nodes |
| Nodes Online | *live* | fleet.sh — SSH probe to all nodes |
| AI Models | *live* | services.sh — ollama list via SSH |
| Avg Temp | *live* | fleet.sh — /sys/class/thermal via SSH |
| Tailscale Peers | *live* | services.sh — tailscale status via SSH |
| Fleet Storage (GB) | *live* | fleet.sh — df via SSH |
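The storage row aggregates per-node `df` output; the summation step of a `fleet.sh`-style probe might look like this (the sizes below are sample values, not the fleet's real disks):

```shell
# Sum per-node root-filesystem sizes in GB, as a fleet.sh-style df probe would.
# A real probe would collect each value over SSH rather than from sample data.
node_sizes_gb="100
200
50"

fleet_gb=$(printf '%s\n' "$node_sizes_gb" | awk '{s += $1} END {print s}')
echo "Fleet storage: ${fleet_gb} GB"
```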