# Alexa Amundson
**AI/ML Engineer**
amundsonalexa@gmail.com | [github.com/blackboxprogramming](https://github.com/blackboxprogramming)
---
## Summary
AI/ML engineer operating 27 deployed language models (48.1 GB) across a distributed edge fleet with 52 TOPS of dedicated AI acceleration. Builds custom model pipelines, inference APIs, and AI-powered automation across a 7-node infrastructure serving 48+ domains.
---
## Experience
### BlackRoad OS | Founder & AI Lead | 2025–Present
**Model Deployment & Operations**
- Deploy and manage 27 Ollama models (48.1 GB) across 3 edge nodes
- Fine-tuned 4 custom CECE personality models for domain-specific generation
- Operate 2× Hailo-8 NPUs (26 TOPS each = 52 TOPS total) for accelerated inference
- Built Ollama Bridge SSE proxy for streaming model responses to web clients
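The core of an SSE bridge like the one above is re-wrapping Ollama's newline-delimited JSON stream as Server-Sent Events. A minimal sketch of that translation step, assuming Ollama's documented streaming shape (one JSON object per line with a `response` token and a `done` flag); the function name is illustrative, not the actual bridge API:

```python
import json

def ndjson_to_sse(lines):
    """Convert Ollama's newline-delimited JSON stream into SSE frames.

    Ollama's /api/generate endpoint streams one JSON object per line,
    each carrying a "response" token and a "done" flag. An SSE bridge
    re-wraps every token as a `data:` frame so browsers can consume
    the stream with EventSource.
    """
    for raw in lines:
        chunk = json.loads(raw)
        yield f"data: {json.dumps({'token': chunk['response']})}\n\n"
        if chunk.get("done"):
            yield "data: [DONE]\n\n"  # sentinel frame so the client can close
            return

# Two chunks in the shape Ollama emits them
stream = [
    '{"response": "Hello", "done": false}',
    '{"response": "!", "done": true}',
]
frames = list(ndjson_to_sse(stream))
```

In a real proxy this generator would feed a streaming HTTP response (e.g. FastAPI's `StreamingResponse` with `media_type="text/event-stream"`), with the input lines read from the upstream Ollama connection.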
**AI Infrastructure**
- Designed multi-node inference routing across Pi 5 fleet (20 GB RAM total)
- Built AI image generation hub (images.blackroad.io) with 4 backend agents, including DALL-E, Flux, and SDXL
- Manage model lifecycle: deployment, monitoring, memory optimization, thermal throttle prevention
- Reduced inference node temperature from 73.8°C to 57.9°C by isolating runaway generation loops
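Thermal throttle prevention on a Pi fleet usually comes down to reading the kernel's thermal zone and gating generation jobs on it. A minimal sketch, assuming hypothetical thresholds (the 57.9 °C / 73.8 °C figures above bracket the "ok" and "kill" cases):

```python
def read_temp_c(path="/sys/class/thermal/thermal_zone0/temp"):
    """Read the SoC temperature in °C (the kernel reports millidegrees)."""
    with open(path) as f:
        return int(f.read().strip()) / 1000.0

def should_throttle(temp_c, soft_limit=65.0, hard_limit=70.0):
    """Return 'ok', 'defer', or 'kill' for a pending generation job.

    The limits here are illustrative; real values depend on the
    enclosure, cooling, and sustained-load behavior of each node.
    """
    if temp_c >= hard_limit:
        return "kill"    # stop runaway generation loops immediately
    if temp_c >= soft_limit:
        return "defer"   # queue the job until the node cools down
    return "ok"
```

A scheduler can call `should_throttle(read_temp_c())` before dispatching each inference request, deferring or killing work instead of letting the SoC throttle itself mid-generation.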
**APIs & Integration**
- Built CECE API (FastAPI) for custom model interaction and TTS generation
- Created code search engine indexing 354 repos with FTS5 full-text search
- Operate 11 PostgreSQL databases for AI application state
- Deploy AI services via Cloudflare Workers (99 Pages projects, 22 D1 databases)
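The FTS5-backed code search above boils down to a virtual table plus a `MATCH` query. A self-contained sketch with an in-memory database and made-up rows (the real index spans 354 repos):

```python
import sqlite3

# In-memory stand-in for the repo index; repo/path/content are the
# columns a code search needs to report a hit and its location.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE files USING fts5(repo, path, content)")
db.executemany(
    "INSERT INTO files VALUES (?, ?, ?)",
    [
        ("ollama-bridge", "proxy.py", "async def stream_sse(prompt): ..."),
        ("cece-api", "tts.py", "def synthesize(text, voice): ..."),
    ],
)

# MATCH runs a tokenized full-text query across all indexed columns;
# ORDER BY rank sorts hits by FTS5's built-in BM25 relevance score.
rows = db.execute(
    "SELECT repo, path FROM files WHERE files MATCH ? ORDER BY rank",
    ("synthesize",),
).fetchall()
```

FTS5 handles tokenization and ranking internally, so indexing 354 repos is mostly a matter of walking the file trees and bulk-inserting, then re-indexing on change.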
**Data & Knowledge Systems**
- Manage 230 SQLite databases (1.4 GB) for agent memory, metrics, and state
- Built FTS5 knowledge index across 156,675 memory entries
- Track 111 registered systems in a central systems database
- Designed and implemented the custom RoadC programming language with a tree-walking interpreter
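RoadC's internals aren't shown here, but the tree-walking approach it names is standard: the parser builds an AST, and evaluation recursively walks the nodes rather than compiling them. An illustrative sketch for arithmetic only (the node types and operator set are generic, not RoadC's actual grammar):

```python
import operator
from dataclasses import dataclass

@dataclass
class Num:
    value: float

@dataclass
class BinOp:
    op: str
    left: object
    right: object

# Map operator symbols to functions so only the needed op is evaluated.
OPS = {
    "+": operator.add,
    "-": operator.sub,
    "*": operator.mul,
    "/": operator.truediv,
}

def evaluate(node):
    """Tree-walking evaluation: recurse into children, then apply the op."""
    if isinstance(node, Num):
        return node.value
    if isinstance(node, BinOp):
        return OPS[node.op](evaluate(node.left), evaluate(node.right))
    raise TypeError(f"unknown node: {node!r}")

# (2 + 3) * 4
tree = BinOp("*", BinOp("+", Num(2), Num(3)), Num(4))
result = evaluate(tree)
```

A full interpreter adds node types for variables, conditionals, and calls, each handled by another branch of `evaluate` (or a visitor method per node class), plus an environment for name bindings.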
**Automation & Monitoring**
- Daily automated KPI collection tracking 60+ AI and infrastructure metrics
- Power optimization across all AI nodes (CPU governors, voltage tuning)
- Self-healing autonomy scripts ensuring model availability
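A daily KPI pipeline like the one above typically reduces to collectors that return name/value pairs and a recorder that appends timestamped rows. A minimal sketch against an in-memory SQLite store; the collector values are sample figures from this resume, and the schema is illustrative, not the actual pipeline's:

```python
import datetime
import sqlite3

def collect_kpis():
    """Hypothetical collectors; the real pipeline tracks 60+ metrics."""
    return {
        "models_deployed": 27,
        "model_storage_gb": 48.1,
        "sqlite_databases": 230,
    }

def record(db, kpis, ts=None):
    """Append one timestamped row per metric to the kpi table."""
    ts = ts or datetime.datetime.now(datetime.timezone.utc).isoformat()
    db.execute(
        "CREATE TABLE IF NOT EXISTS kpi (ts TEXT, name TEXT, value REAL)"
    )
    db.executemany(
        "INSERT INTO kpi VALUES (?, ?, ?)",
        [(ts, name, float(value)) for name, value in kpis.items()],
    )

db = sqlite3.connect(":memory:")
record(db, collect_kpis())
count = db.execute("SELECT COUNT(*) FROM kpi").fetchone()[0]
```

Run from cron once a day, an append-only table like this gives each metric a time series for free, which is what powers trend lines and alerting thresholds downstream.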
---
## Technical Skills
**AI/ML:** Ollama, Hailo-8 NPU, DALL-E, Flux, SDXL, custom fine-tuning, LLM orchestration
**Languages:** Python (470 repos), JavaScript (114 repos), TypeScript (85 repos), Bash, Go, C
**Frameworks:** FastAPI, Next.js, React, Node.js
**Infrastructure:** Raspberry Pi 5 fleet, Docker Swarm, WireGuard, Cloudflare
**Databases:** PostgreSQL, SQLite/FTS5, Cloudflare D1, KV stores, Qdrant (vector)
**Tools:** Ollama, Docker, GitHub Actions, Gitea, Wrangler
---
## Metrics
| Metric | Value |
|--------|-------|
| Models deployed | 27 (48.1 GB) |
| AI acceleration | 52 TOPS |
| Custom models | 4 fine-tuned |
| Image gen agents | 4 |
| Databases | 241 (11 PG + 230 SQLite) |
| Lines of code | 7,212,320 |
| Repos | 1,810 |