feat: LinkedIn, Dev.to, onboarding emails, landing page
LinkedIn: 7 posts (founder story, psychology, market thesis, culture, contrarian, sovereignty, education)
Dev.to: 3 articles (Pi setup guide, architecture deep-dive, psychology cross-post)
Onboarding: 3-email post-deploy sequence (welcome → expand → identity)
Landing page: 'Own Your Stack' full page copy with A/B test plan

RoadChain-SHA2048: acf9a01550f3cb91
RoadChain-Identity: alexa@sovereign
RoadChain-Full: acf9a01550f3cb91373a2a43d02efc1a6ed557778584f96778e6db55f59d3f9be7579ac754e7851ae2384c4909ab2e3c354f15b72918a6c65b22a2003531290b41e80a375437714826a03df976f32e0b2e3f8124c0a3c1e9ae9ffeac6f9f1d4c21196c3e2afd67f01ec224425e75c23d1a89020123953925768d54384553962622d1527ca1a18501defbb22c1415e165a9d8201087ca9c99fb684648edb0c7ba709697602a9b6938325a01d19ec71178899568cbf5e01c14257be63aef06244157e9bb14320a2cc9cfbf8a610b66f262a08b3f234f0ba649cdc550ffab24088829a7a01519e546f467541f493cc87b6f7b90a352f69d9c8f88d228af2a486359
ads/email-sequences/onboarding-3-email.md (new file, 155 lines)
# Onboarding Email Sequence (Post-First-Deploy)

**Principle:** Behavioral Attitude Formation — do → believe → stay
**Trigger:** User completes first agent deploy
**Sequence:** 3 emails over 7 days
**Sender:** Alexa Amundson <alexa@blackroad.io>

---

## Email 1: WELCOME WIN (Day 0 — Immediate after first deploy)

**Principle:** Self-Perception Theory — reinforce the behavior they just took

**Subject:** You just deployed your first agent. Here's what happened under the hood.

**Body:**

```
You did it.

Your first BlackRoad agent is running. Here's exactly what just happened on your hardware:

1. Ollama loaded the model into memory on your Pi
2. The agent registered with the NATS mesh
3. It subscribed to its task topic
4. It's now listening for work — on YOUR hardware, on YOUR network

No cloud. No API call. No metered billing. The inference is happening on silicon you physically own.

Here's what you can do right now:

→ Query your agent: [COMMAND]
→ Check its status: [COMMAND]
→ See the logs: [COMMAND]

And here's the thing nobody tells you about self-hosted AI:

The second deploy is easier than the first. The third is automatic. By the fifth, you'll wonder why you ever rented compute.

If anything broke or feels unclear, reply to this email. I'll fix it personally.

— Alexa
```

---

## Email 2: FIRST WIN EXPANSION (Day 3)

**Principle:** Commitment/Consistency — they deployed one, now deploy another

**Subject:** 3 things your agent can do that you probably haven't tried yet

**Body:**

```
Your agent has been running for 3 days. Here are three things you can do with it right now that most new users don't discover until week two:

**1. Chain it with another agent**
Deploy a second agent on the same node (or a different Pi). Connect them via NATS pub/sub:

[COMMAND TO DEPLOY SECOND AGENT]
[COMMAND TO CONNECT VIA NATS]

Now Agent A can delegate tasks to Agent B. That's an AI pipeline running entirely on your hardware.

**2. Add RAG (Retrieval-Augmented Generation)**
Point your agent at a folder of documents:

[COMMAND TO INDEX DOCUMENTS]
[COMMAND TO QUERY WITH RAG]

Your agent now answers questions using YOUR data — not the internet's data, not a training set, YOUR documents.

**3. Set up a webhook trigger**
Make your agent respond to external events (GitHub push, form submission, cron schedule):

[COMMAND TO SET UP WEBHOOK]

Now your agent works while you sleep. On your hardware. For $0/month.

Each of these takes under 5 minutes. Which one are you trying first?

— Alexa
```
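For internal reference, the "Agent A delegates to Agent B" pattern Email 2 describes can be sketched in a few lines. This is an illustration only: a real deployment would use a NATS client, and the subject names (`tasks.agent-b`, `results.agent-b`) are hypothetical, so a tiny in-process bus stands in for the mesh here.

```python
from collections import defaultdict

class MiniBus:
    """Toy in-process stand-in for NATS pub/sub (illustration only)."""
    def __init__(self):
        self.subs = defaultdict(list)  # subject -> list of handler callables

    def subscribe(self, subject, handler):
        self.subs[subject].append(handler)

    def publish(self, subject, msg):
        for handler in self.subs[subject]:
            handler(msg)

bus = MiniBus()
results = []

# Agent B: a "worker" that listens on its task subject and publishes results
bus.subscribe("tasks.agent-b", lambda text: bus.publish("results.agent-b", text.upper()))
# Agent A: delegates work and collects the result
bus.subscribe("results.agent-b", results.append)
bus.publish("tasks.agent-b", "summarize this")

print(results)  # ['SUMMARIZE THIS']
```

The point of the pattern: neither agent calls the other directly, so either one can move to a different Pi without changing the code.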

---

## Email 3: DEEPER ENGAGEMENT (Day 7)

**Principle:** Identity formation — "I am a BlackRoad operator"

**Subject:** You've been running self-hosted AI for a week. Here's what that makes you.

**Body:**

```
One week ago you deployed your first agent.

Here's what's different now:

- You have AI inference running on hardware you own
- Your data hasn't left your network once
- You've paid $0 in cloud compute
- You have a system that runs whether or not any vendor decides to change their pricing, terms, or API

You're not a BlackRoad "user." You're an operator.

The difference: users consume a service. Operators own infrastructure. You own yours now.

Here's where operators go from here:

**Level 1: Single agent** ← you are here
→ One agent, one model, one task

**Level 2: Agent mesh**
→ Multiple agents communicating via NATS pub/sub
→ Specialized agents (summarizer, classifier, coder, monitor)

**Level 3: Full stack**
→ Qdrant for vector search / RAG
→ PostgreSQL for persistent state
→ Pi-hole for network-level security
→ WireGuard for encrypted mesh
→ Monitoring + alerting

**Level 4: Production**
→ Multiple Pis in a Swarm
→ Hailo-8 accelerators for 26+ TOPS per node
→ Gitea for self-hosted git
→ Custom skills and workflows

The full architecture guide for Levels 3-4 is here:
[LINK TO ARCHITECTURE PAGE]

And if you want to see what Level 4 looks like in production — that's what blackroad.io runs on. Five Pis. Thirty websites. Fifty skills. Zero cloud.

Welcome to the fleet.

— Alexa

BlackRoad OS — Pave Tomorrow.
```

---

## Sequence Metrics

| Email | Success Metric | Target |
|-------|---------------|--------|
| 1 | Log command usage (did they check their agent?) | 60%+ |
| 2 | Second deploy rate | 30%+ |
| 3 | Architecture page visit | 40%+ |

## Segmentation Rules

- If they deploy a second agent after Email 2 → send "Advanced Patterns" guide
- If they don't open Email 2 → resend with alternate subject: "Your agent is lonely. Deploy a friend."
- If they visit the architecture page after Email 3 → flag as "power user" for product feedback requests
ads/landing-pages/own-your-stack.md (new file, 178 lines)
# Landing Page: "Own Your Stack"

**URL:** blackroad.io/own-your-stack
**Principle:** ELM (Peripheral hero → Central scroll) + Primacy/Recency
**Purpose:** Campaign landing page for Q2 2026 "Own Your Stack"

---

## HERO (Peripheral — 3 seconds)

```
# 52 TOPS of AI inference. $200 of hardware. $0/month.

Your AI should run on your hardware, answer to your rules,
and cost you nothing after the first week.

[Deploy Your First Agent →] [See the Architecture →]
```

**Design:** Black background. White text. Hot pink CTA button. Terminal cursor animation behind the headline. No images — just typography and the gradient shape.

---

## PROBLEM (Bridge — stat hook)

```
## 94% of IT leaders fear vendor lock-in. You should too.

A 2026 Parallels survey found that 94% of organizations are concerned
about vendor lock-in — and 42% are actively moving workloads back on-premises.

The math is simple:

| | Cloud GPU | BlackRoad |
|---|---|---|
| Cost | $3.90/hour | $99 one-time (Hailo-8) |
| Annual | $33,696 | ~$70 electricity |
| Data privacy | Policy-dependent | Physics-guaranteed |
| Vendor dependency | Complete | Zero |
| Cancellation | Lose access | You own the hardware |

The self-hosted cloud market hit $18.48 billion in 2025.
Edge AI is growing at 21.7% CAGR.

This isn't early. This is right on time.
```

---

## SOLUTION (Central Route — arguments that survive counterarguing)

```
## What BlackRoad OS Actually Does

BlackRoad is a self-hosted AI operating system that runs on Raspberry Pis.
Not a demo. Production infrastructure.

### 16 AI Models, Locally
Ollama serves Llama 3, Mistral, CodeLlama, Phi-3, Gemma, and 11 more.
On your Pi. On your network. No API key.

### 52 TOPS Neural Inference
Two Hailo-8 accelerators ($99 each) deliver 52 trillion operations per second.
Classification, detection, embedding, and inference — all on-device.

### 50 AI Skills Across 6 Modules
Summarization. Classification. Code generation. Content creation.
Analysis. Monitoring. Pre-built and extensible.

### Agent Mesh Network
NATS v2.12.3 connects agents across nodes via pub/sub messaging.
Agents delegate, collaborate, and report — without a central server.

### Complete Infrastructure
Gitea (207 repos). PostgreSQL. Qdrant vector search.
Pi-hole DNS. WireGuard encryption. Docker Swarm orchestration.

Everything you need. Nothing you rent.
```

---

## PROOF (Social Validation + Authority)

```
## This Is What It Looks Like in Production

Five Raspberry Pis serve:
→ 30 websites across 20 domains
→ 207 git repositories on Gitea
→ 50 AI skills across 6 modules
→ A billing system processing real payments (RoadPay)
→ An auth system with 42 users (auth.blackroad.io)
→ 334 web applications
→ 8 AI agents responding to GitHub events across 69 repos

Total power consumption: 46 watts.
Total monthly cloud bill: $0.

This page was served by this infrastructure.
```

---

## COMPARISON (Two-sided messaging — honesty builds credibility)

```
## Where Cloud Wins vs. Where BlackRoad Wins

We'll be straight with you.

CLOUD IS BETTER FOR:
✓ Training large models (you need H100 clusters)
✓ Burst capacity (100 GPUs for 2 hours)
✓ Global distribution (12 regions simultaneously)

BLACKROAD IS BETTER FOR:
✓ Always-on inference ($99 once vs. $33,696/year)
✓ Privacy (data never leaves your hardware)
✓ Vendor independence (no API keys, no ToS changes)
✓ Latency (on-device = no network round trip)
✓ Steady-state cost (hardware pays for itself in days)

Most AI workloads are inference, not training.
If you're running models 24/7, you're overpaying by 100x.
```

---

## CTA (Commitment/Consistency — small ask)

```
## Deploy Your First Agent in Under 10 Minutes

No credit card. No sales call. No "enterprise pricing" email.

One command. Your Pi. Your model. Your data. Your agent.

[Deploy Now →]

What happens when you click:
1. You'll get the setup guide (Pi 5 + Ollama + Hailo-8)
2. You'll run one command to deploy your first agent
3. Your agent starts listening for tasks — on YOUR hardware
4. You'll wonder why you ever rented compute

Already have a Pi? Skip to step 2. → [Quick Deploy →]
```

---

## FOOTER (Recency — last thing they remember)

```
BlackRoad OS — Pave Tomorrow.

"Pick up your agent. Ride the BlackRoad together.
Remember the Road. Pave Tomorrow."
```

---

## Page Specs

- **Load time target:** <1 second (served via Cloudflare)
- **Mobile-first:** All sections stack cleanly
- **No JavaScript required** for core content (progressive enhancement only)
- **Dark mode only** — black background, white text, hot pink CTAs
- **Fonts:** Space Grotesk (headlines), JetBrains Mono (code/stats), Inter (body)
- **Analytics:** stats-blackroad Worker + KV (self-hosted, no Google Analytics)

## A/B Test Plan

| Test | Variable | Hypothesis |
|------|----------|------------|
| 1 | Hero headline: "52 TOPS..." vs. "Your AI shouldn't need permission" | Concrete stat outperforms abstract claim (vividness principle) |
| 2 | CTA text: "Deploy Now" vs. "Deploy Your First Agent" | Specific verb + object outperforms generic (commitment specificity) |
| 3 | Comparison section: present vs. absent | Two-sided messaging increases conversion (credibility boost) |
ads/social/dev-to-articles.md (new file, 303 lines)
# BlackRoad Dev.to / Hashnode Cross-Posts

**Principle:** Authority + Reciprocity (give technical depth, earn credibility)
**Rule:** These are developer audiences. Lead with code and architecture, not marketing.

---

## Article 1: Setup Guide

**Title:** How I Run 16 AI Models on a Raspberry Pi 5 + Hailo-8 (Complete Setup Guide)

**Tags:** #ai #raspberrypi #selfhosted #tutorial

**Body:**

```markdown
I run 16 language models on a Raspberry Pi 5 with a Hailo-8 AI accelerator. This isn't a benchmark post — this is the production setup I use daily for BlackRoad OS.

Total hardware cost: $206. Total cloud bill: $0/month.

Here's exactly how to set it up.

## Hardware

| Component | Price | Purpose |
|-----------|-------|---------|
| Raspberry Pi 5 (8GB) | $80 | Compute |
| Hailo-8 M.2 AI Kit | $99 | 26 TOPS neural inference |
| 128GB microSD (A2) | $15 | Storage |
| USB-C power supply (27W) | $12 | Power |
| **Total** | **$206** | |

Optional but recommended:
- NVMe SSD via Pi 5 HAT ($30-50) — faster model loading
- Ethernet cable — more reliable than WiFi for serving

## OS Setup

Flash Raspberry Pi OS (64-bit, Bookworm) with the Raspberry Pi Imager.

Enable SSH during flashing (set hostname, username, password).

Boot and SSH in:

```bash
ssh pi@<your-pi-ip>
```

Update everything:

```bash
sudo apt update && sudo apt upgrade -y
```

## Install Ollama

One command:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Verify:

```bash
ollama --version
```

## Load Models

Pull the models you want. Here's what I run:

```bash
ollama pull llama3
ollama pull mistral
ollama pull codellama
ollama pull phi3
ollama pull gemma:2b
ollama pull nomic-embed-text
```

List loaded models:

```bash
ollama list
```

You can load as many as your storage allows. Ollama swaps models in and out of memory efficiently — only one runs at a time in RAM, but switching is fast.

## Install Hailo-8 Drivers

Install the Hailo driver and runtime packages (shipped in Raspberry Pi OS's repositories):

```bash
sudo apt install -y hailo-all
```

Verify the accelerator is detected:

```bash
hailortcli fw-control identify
```

You should see: `Board Name: Hailo-8` and `26 TOPS`.

## Test Inference

Run a quick test:

```bash
ollama run llama3 "What is edge AI inference?"
```

For the Hailo-8 specifically, you can run object detection and classification models natively:

```bash
hailortcli run <model.hef>
```

## Serving Over the Network

Ollama serves on port 11434 by default. To expose it to other nodes on your network:

```bash
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```
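To make that binding persistent when Ollama runs as a systemd service (the official installer sets one up), the usual approach is a drop-in override. A sketch, assuming the service is named `ollama`:

```
# /etc/systemd/system/ollama.service.d/override.conf
# (create via: sudo systemctl edit ollama)
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

Then `sudo systemctl daemon-reload && sudo systemctl restart ollama` to pick up the change.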

Now any machine on your LAN can query:

```bash
curl http://<pi-ip>:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Explain edge computing in one sentence."
}'
```
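The same endpoint is easy to call from Python with only the standard library. A sketch: the host address and model name are assumptions carried over from the curl example, and `stream: false` asks Ollama for a single JSON object instead of a stream of chunks.

```python
import json
import urllib.request

def build_generate_request(host: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"http://{host}:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage on your LAN (the non-streaming response carries the text in "response"):
# with urllib.request.urlopen(build_generate_request(
#         "192.168.1.96", "llama3",
#         "Explain edge computing in one sentence.")) as resp:
#     print(json.loads(resp.read())["response"])
```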

## What I Actually Run

In production, I have 5 Pis orchestrated with Docker Swarm:

- Alice: gateway, DNS, databases
- Cecilia: AI inference (this setup + Hailo-8)
- Octavia: git server + second Hailo-8
- Aria: agent runtime + NATS messaging
- Lucidia: web apps + CI/CD

Two Hailo-8s = 52 TOPS combined. WireGuard mesh encrypts everything. Cloudflare Tunnels provide ingress without opening ports.

This serves 30 websites, 50 AI skills, and 207 git repositories. Monthly cost: electricity (~$5-8).

## The Economics

| | Cloud GPU (H100) | Raspberry Pi + Hailo-8 |
|---|---|---|
| Cost | $3.90/hr | $206 one-time |
| Annual cost | $33,696 | ~$70 electricity |
| Inference | 65 TOPS | 26 TOPS |
| Privacy | Vendor-dependent | Absolute |
| Vendor lock-in | Yes | No |

The cloud GPU is faster. But for inference workloads that run 24/7, the Pi pays for itself in 53 hours of avoided cloud compute.
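The break-even figure is simple arithmetic: hardware cost divided by the hourly cloud rate, rounded up.

```python
import math

HARDWARE_COST = 206.00  # Pi 5 + Hailo-8 kit + storage + PSU, one-time (USD)
CLOUD_RATE = 3.90       # H100 on-demand, per hour (USD)

# Hours of avoided cloud compute until the hardware has paid for itself
break_even_hours = math.ceil(HARDWARE_COST / CLOUD_RATE)
print(break_even_hours)  # 53
```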

## Questions?

I'm happy to answer anything about this setup. The full architecture is at blackroad.io.

The project is open and the infrastructure is real — this post was served through it.
```

---

## Article 2: Architecture Deep-Dive

**Title:** I Replaced My Cloud Infrastructure with 5 Raspberry Pis — Here's the Full Architecture

**Tags:** #architecture #selfhosted #devops #ai

**Body:**

```markdown
This is the complete production architecture for BlackRoad OS. Not a demo. Not a weekend project. The actual infrastructure serving 30 websites, 207 git repos, and 50 AI skills.

## Network Topology

```
Internet
│
├── Cloudflare (DNS, CDN, Tunnels, Pages)
│     ├── 95+ Pages sites
│     ├── 40 KV namespaces
│     ├── 8 D1 databases
│     └── 10 R2 buckets
│
├── Cloudflare Tunnel → Alice (.49)
│     ├── Pi-hole (DNS filtering, 120+ blocked domains)
│     ├── PostgreSQL
│     ├── Qdrant (vector search)
│     └── Gateway / reverse proxy
│
├── Cloudflare Tunnel → Cecilia (.96)
│     ├── Ollama (16 models)
│     ├── Hailo-8 (26 TOPS)
│     └── Embedding engine (nomic-embed-text)
│
├── Cloudflare Tunnel → Octavia (.101)
│     ├── Gitea (207 repos, primary git host)
│     ├── Docker Swarm manager
│     └── Hailo-8 (26 TOPS)
│
├── Cloudflare Tunnel → Aria (.98)
│     ├── Agent runtime
│     └── NATS v2.12.3 (pub/sub mesh)
│
└── Cloudflare Tunnel → Lucidia (.38)
      ├── 334 web applications
      └── GitHub Actions runner
```

## Internal Mesh

All nodes connect over WireGuard:

```
Alice ←→ Cecilia ←→ Octavia ←→ Aria ←→ Lucidia
  ↕         ↕          ↕         ↕
    WireGuard encrypted mesh (10.8.0.x)
```

NATS v2.12.3 provides pub/sub messaging between 4 nodes. Agents on any node can publish tasks and subscribe to results.

## Key Design Decisions

**Why Gitea instead of GitHub?**
Gitea is primary. GitHub is a mirror. We own the git server. If GitHub changes pricing, policies, or terms — our repos don't move. Gitea runs on Octavia and syncs downstream to 17 GitHub orgs every 30 minutes.

**Why Cloudflare Tunnels?**
Zero open ports on any Pi. All ingress goes through authenticated Cloudflare Tunnels. No port scanning surface. No firewall rules to maintain (except Lucidia, which runs UFW).

**Why Pi-hole?**
Network-level DNS filtering. 120+ tracking/advertising domains blocked for every device on the network. This isn't ad-blocking — it's a security boundary.

**Why Hailo-8 instead of cloud GPUs?**
52 TOPS for $198 total vs. $33,696/year for one cloud H100. The Hailo-8 handles classification, detection, and inference acceleration. Ollama handles language model serving. Together, they cover the full inference stack.

**Why NATS?**
Lightweight pub/sub messaging designed for distributed systems. Agents on different Pis communicate by publishing to topics and subscribing to results. No heavyweight broker to maintain. No Kafka cluster to babysit.
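NATS routes messages by dot-separated subjects, where `*` matches exactly one token and `>` matches one or more trailing tokens. A minimal sketch of that matching rule (an illustration of the semantics, not the server's actual implementation):

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """NATS-style subject matching: '*' matches exactly one token,
    '>' matches one or more trailing tokens."""
    p_tokens = pattern.split(".")
    s_tokens = subject.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":
            return len(s_tokens) > i  # at least one token must remain
        if i >= len(s_tokens):
            return False
        if p != "*" and p != s_tokens[i]:
            return False
    return len(p_tokens) == len(s_tokens)

print(subject_matches("tasks.*", "tasks.summarize"))    # True
print(subject_matches("tasks.>", "tasks.code.review"))  # True
print(subject_matches("tasks.*", "tasks.code.review"))  # False
```

This hierarchy is what lets one monitor agent subscribe to `tasks.>` and see every task on the mesh without any agent knowing it exists.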

## Services Map

| Service | Node | Port | Purpose |
|---------|------|------|---------|
| Pi-hole | Alice | 53, 80 | DNS filtering |
| PostgreSQL | Alice | 5432 | Primary database |
| Qdrant | Alice | 6333 | Vector search / RAG |
| Ollama | Cecilia | 11434 | LLM inference (16 models) |
| Gitea | Octavia | 3100 | Git hosting (207 repos) |
| NATS | Aria | 4222 | Agent messaging |
| Web apps | Lucidia | various | 334 applications |

## Monitoring & Automation

Cron jobs on the Mac (Alexandria, .28):
- Every 5 min: health check + fleet collector
- Every 15 min: Cecilia model sync
- Every 30 min: website checks + git sync
- Daily 3am: backups + sovereign mesh check
- Every 6h: Google Drive sync (rclone)
- Every 12h: rsync to DigitalOcean droplet
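That schedule maps directly onto crontab syntax. A sketch with placeholder script paths (the actual script names and locations aren't published here):

```
# m  h   dom mon dow  command                 (paths are placeholders)
*/5  *   *   *   *    /opt/blackroad/bin/health-check.sh
*/15 *   *   *   *    /opt/blackroad/bin/model-sync.sh
*/30 *   *   *   *    /opt/blackroad/bin/site-check.sh
0    3   *   *   *    /opt/blackroad/bin/backup.sh
0    */6 *   *   *    /opt/blackroad/bin/drive-sync.sh
0    */12 *  *   *    /opt/blackroad/bin/offsite-rsync.sh
```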

## Replication

This entire architecture is reproducible. Total hardware cost: ~$400. Setup time: a few hours with the guide.

Full setup guide: [link to Article 1]

Questions welcome — happy to share configs, scripts, or troubleshooting for any of these services.
```

---

## Article 3: The Psychology Post (Cross-post from blog)

**Title:** 80% of Advertisements Are Misunderstood — I Studied Why, Then Built a Marketing System That Can't Be

**Tags:** #marketing #psychology #transparency #startup

*(Cross-post blog/02-advertisements-misunderstood.md with Dev.to formatting)*

---

## Publishing Schedule

| Week | Article | Platform |
|------|---------|----------|
| 1 | Setup Guide | Dev.to + Hashnode |
| 2 | Architecture Deep-Dive | Dev.to + Hashnode |
| 3 | Psychology of Advertising | Dev.to |
| 4+ | Cross-post new blog posts as they publish | Both |

## Dev.to-Specific Rules

- Use their canonical URL feature to point back to blackroad.io/blog (SEO benefit)
- Include a "series" tag to link articles together
- Respond to every comment within 24 hours
- Don't use cover images that look like marketing — use terminal screenshots or architecture diagrams
ads/social/linkedin-posts.md (new file, 276 lines)
# BlackRoad LinkedIn Posts
|
||||||
|
|
||||||
|
**Principle:** Authority + Value-Expressive + Central Route (LinkedIn audience processes deeply)
|
||||||
|
**Tone:** Professional but not corporate. Founder voice. Data-driven.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Post 1: The Founder Story
|
||||||
|
|
||||||
|
```
|
||||||
|
I left finance to build AI infrastructure on Raspberry Pis.
|
||||||
|
|
||||||
|
Not because I couldn't get a cloud budget.
|
||||||
|
Because I did the math.
|
||||||
|
|
||||||
|
One NVIDIA H100 on AWS: $3.90/hour.
|
||||||
|
Run it 24/7 for a year: $33,696.
|
||||||
|
For one GPU.
|
||||||
|
|
||||||
|
Two Hailo-8 accelerators: $198 total.
|
||||||
|
52 trillion operations per second.
|
||||||
|
Run them forever. No bill. No vendor.
|
||||||
|
|
||||||
|
I have a Series 7. A Series 24. A Series 65. A Series 66.
|
||||||
|
|
||||||
|
I passed the same licensing exams Wall Street requires — then used that financial rigor to ask a simple question: "Why are we renting compute we could own?"
|
||||||
|
|
||||||
|
The answer is that nobody told small teams they could own it. The cloud providers certainly didn't.
|
||||||
|
|
||||||
|
So I built BlackRoad OS: self-hosted AI that runs on $400 of Raspberry Pi hardware. Five nodes. Sixteen models. Thirty websites. A billing system. A git server. Fifty AI skills.
|
||||||
|
|
||||||
|
Monthly cloud bill: $0.
|
||||||
|
|
||||||
|
The self-hosted cloud market hit $18.48 billion in 2025 (Grand View Research). Edge AI is growing at 21.7% CAGR.
|
||||||
|
|
||||||
|
This isn't contrarian. This is where the math points.
|
||||||
|
|
||||||
|
#SelfHosted #EdgeAI #AI #Infrastructure #Founder
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Post 2: The Psychology Angle
|
||||||
|
|
||||||
|
```
|
||||||
|
I studied the Psychology of Advertising at the University of Minnesota (JOUR 4251, Dr. Claire Segijn).
|
||||||
|
|
||||||
|
One stat changed how I build marketing:
|
||||||
|
|
||||||
|
80% of advertisements are misunderstood by their audience.
|
||||||
|
|
||||||
|
Not rejected. Not ignored. Misunderstood.
|
||||||
|
|
||||||
|
The audience processes the ad and walks away believing something the advertiser never intended.
|
||||||
|
|
||||||
|
Four mechanisms cause this:
|
||||||
|
1. Omitted comparisons — "The best AI platform" (better than what?)
|
||||||
|
2. Pragmatic inference — "May be the best" (may also be the worst)
|
||||||
|
3. Juxtaposition — "Smart people choose X" (implies causation)
|
||||||
|
4. Affirmation of consequent — "If you want Y, you need X" (false logic)
|
||||||
|
|
||||||
|
So we made a rule at BlackRoad:
|
||||||
|
|
||||||
|
Every claim must be literally, specifically, and verifiably true.
|
||||||
|
|
||||||
|
"52 TOPS of neural inference" — true, measured.
|
||||||
|
"16 Ollama models on a Pi 5" — true, run `ollama list`.
|
||||||
|
"$0/month cloud bill" — true, there is no cloud.
|
||||||
|
|
||||||
|
We even killed our own "30K agents" marketing copy when we realized it was aspirational, not factual.
|
||||||
|
|
||||||
|
The truth effect (Fennis & Stroebe) says: the more people see a claim, the more true it seems. This works on lies AND truths.
|
||||||
|
|
||||||
|
So we repeat truths. Obsessively. With sources.
|
||||||
|
|
||||||
|
It's slower than hype marketing. It's also why our users stay.
|
||||||
|
|
||||||
|
#MarketingPsychology #Advertising #Transparency #AI
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Post 3: The Market Thesis
|
||||||
|
|
||||||
|
```
|
||||||
|
Three markets are converging on the same conclusion:
|
||||||
|
|
||||||
|
1. Self-hosted cloud: $18.48B in 2025, growing 11.9% CAGR → $49.67B by 2034
|
||||||
|
2. Edge AI: $24.91B in 2025, growing 21.7% CAGR → $118.69B by 2033
|
||||||
|
3. AI inference: $106.15B in 2025, growing 19.2% CAGR → $254.98B by 2030
|
||||||
|
|
||||||
|
The conclusion: inference belongs on the edge.
|
||||||
|
|
||||||
|
Training requires cloud-scale compute. Always will.
|
||||||
|
|
||||||
|
But inference — running models, answering queries, classifying data, powering agents — runs cheaper, faster, and more privately on hardware you own.
|
||||||
|
|
||||||
|
A Hailo-8 accelerator costs $99 and delivers 26 TOPS.
|
||||||
|
A cloud GPU costs $3.90/hour — you exceed the Hailo's price in 26 hours.
|
||||||
|
|
||||||
|
94% of IT leaders fear vendor lock-in (Parallels 2026). 42% are moving workloads back on-premises.
|
||||||
|
|
||||||
|
This isn't a prediction. This is happening right now, backed by $150B+ in market activity.
|
||||||
|
|
||||||
|
BlackRoad OS sits at the intersection: self-hosted AI inference on commodity edge hardware.
|
||||||
|
|
||||||
|
The shift doesn't require faith. It requires arithmetic.
|
||||||
|
|
||||||
|
Sources: Grand View Research, MarketsandMarkets, Parallels
|
||||||
|
|
||||||
|
#EdgeAI #SelfHosted #Infrastructure #AI #MarketAnalysis
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Post 4: The Hiring/Culture Post
|
||||||
|
|
||||||
|
```
Our entire AI infrastructure runs on 5 Raspberry Pis.

Here's what that says about how we build:

1. We solve problems with architecture, not budget.
A $55 computer is a datacenter node if you know Docker, WireGuard, and DNS.

2. We own everything.
Git server. Billing system. AI models. DNS filtering. Auth. Search.
Nothing is rented. Nothing can be revoked.

3. We measure in watts, not invoices.
46 watts total. That's less than a light bulb.
Not because we're cheap. Because efficiency is a design value.

4. We verify before we claim.
Every stat in our marketing is sourced. Every number is measured.
We killed our own copy when it wasn't accurate.

5. We document everything.
207 repos on Gitea. Every decision, every config, every architecture choice.
If I get hit by a bus, the system runs itself.

This is the culture that builds BlackRoad OS.

We're not hiring right now. But when we do, this is what we look for:
People who'd rather own $400 of hardware than rent $33,696 of someone else's.

#Culture #Startup #Engineering #AI #SelfHosted
```

---

## Post 5: The Contrarian Take

```
Unpopular opinion: most AI startups are paying 100x too much for inference.

Not training. Inference.

Training a foundation model requires H100 clusters, thousands of GPUs, millions of dollars. Fair.

But RUNNING a model? Answering a query? Classifying a document? Embedding text for search?

That runs on a $99 accelerator plugged into a $55 computer.

The AI inference market is $106 billion. Most of it is cloud inference — metered by the hour, billed by the token, scaled by the credit card.

But inference doesn't need the cloud. Inference needs:
- A model (free, open-source, download it)
- Hardware (Hailo-8: $99, Pi 5: $55)
- Software (Ollama: free, one command)

Total: $154. Runs forever.

The cloud GPU business model depends on you not knowing this.

AWS doesn't want you to run `ollama serve` on a Raspberry Pi. That's a $33,696/year customer they lose.

The edge AI market is growing at 21.7% because the secret is getting out.

BlackRoad OS is how we prove it works — 16 models, 50 skills, 30 websites, $0/month.

Not a pitch. A proof.

#AI #Inference #EdgeComputing #SelfHosted #Startups
```
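The one-time-versus-metered comparison above is worth working through. A sketch using the post's figures ($154 edge stack, $33,696/year cloud bill); both numbers are the post's claims, not independently priced, and power and model-quality differences are ignored:

```python
EDGE_STACK_COST = 154         # Hailo-8 ($99) + Pi 5 ($55), one-time
CLOUD_COST_PER_YEAR = 33_696  # the post's figure for an always-on cloud GPU

# Days of cloud billing needed to exceed the one-time edge cost
payback_days = EDGE_STACK_COST / CLOUD_COST_PER_YEAR * 365
print(round(payback_days, 1))  # 1.7

# Nominal five-year difference for the same always-on workload
five_year_cloud = CLOUD_COST_PER_YEAR * 5
print(five_year_cloud - EDGE_STACK_COST)  # 168326
```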

---

## Post 6: The Data Sovereignty Angle

```
Your AI vendor's privacy policy is not a technical guarantee.

It's a promise. Promises change.

Technical guarantees look like this:
- Pi-hole DNS filtering blocks 120+ tracking domains at the network level
- WireGuard encrypts all inter-node traffic
- No port is open to the internet (Cloudflare Tunnels for ingress only)
- Models run on hardware in your physical possession
- Data never transits a third-party network

"Never leaves your network" is a physics statement, not a policy statement.

The difference matters when:
- A vendor changes their ToS (they will)
- A government requests data (they can)
- A breach exposes your prompts (it happens)
- An acquirer gets your usage data (it's an asset)

Self-hosted AI is not about distrust. It's about architecture.

The same way HTTPS doesn't mean "I don't trust the internet" — it means "I'm not relying on trust when I can use encryption."

Self-hosted means: I'm not relying on policy when I can use physics.

#DataSovereignty #Privacy #AI #Security #SelfHosted
```
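The "WireGuard encrypts all inter-node traffic" guarantee amounts to a short config per node. A minimal sketch for one peer; the interface name, the 10.0.0.0/24 subnet, the port, the hostname, and the placeholder keys are all illustrative, not BlackRoad's actual values:

```ini
# /etc/wireguard/wg0.conf (one node in the mesh; illustrative values)
[Interface]
Address = 10.0.0.1/24            ; this node's address on the private mesh
PrivateKey = <this-node-private-key>
ListenPort = 51820

[Peer]                           ; repeat one [Peer] block per other node
PublicKey = <peer-public-key>
AllowedIPs = 10.0.0.2/32         ; route only that peer's mesh address here
Endpoint = peer.example.lan:51820
PersistentKeepalive = 25         ; keep NAT mappings alive between nodes
```

Bringing it up is `wg-quick up wg0`; after that, any traffic between mesh addresses is encrypted at the network layer regardless of what the applications on top do.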

---

## Post 7: The Education Flex

```
Most marketing teams optimize for clicks.

We optimize for accurate comprehension.

Sounds the same. It's not.

Optimizing for clicks means:
- Sensational headlines
- Vague claims that imply more than they state
- CTAs designed to create urgency
- Metrics: impressions, CTR, conversions

Optimizing for comprehension means:
- Headlines with verified stats and named sources
- Claims that are literally, specifically true
- CTAs that reduce uncertainty instead of creating urgency
- Metrics: reply rate, deploy rate, retention

Why? Because the psychology (ELM, Fennis & Stroebe) says:

Central-route persuasion — where the audience thinks carefully — produces attitude change that is DEEP and LASTING.

Peripheral-route persuasion — where the audience uses shortcuts — produces change that is REAL but TEMPORARY.

Clicks are peripheral. Deploys are central.

We'd rather have 100 users who understand what BlackRoad does than 10,000 who clicked a flashy ad and bounced.

That's not idealism. It's customer acquisition cost math.

#Marketing #Psychology #ContentStrategy #AI #Startup
```

---

## Posting Schedule

| Day | Post Type | Frequency |
|-----|-----------|-----------|
| Monday | Market data / thesis | Weekly |
| Wednesday | Technical / architecture | Weekly |
| Friday | Culture / founder story | Biweekly |
| Varies | Contrarian take | Monthly |
| Varies | Psychology / education | Monthly |

## Engagement Rules

- Reply to every comment within 4 hours
- Like every comment (signal that you're paying attention)
- Never argue — redirect with data
- If someone asks a question you can't answer honestly, say "I don't know" (builds more credibility than guessing)
- Cross-reference blog posts when relevant (drives owned traffic)