feat: LinkedIn, Dev.to, onboarding emails, landing page
LinkedIn: 7 posts (founder story, psychology, market thesis, culture, contrarian, sovereignty, education)
Dev.to: 3 articles (Pi setup guide, architecture deep-dive, psychology cross-post)
Onboarding: 3-email post-deploy sequence (welcome → expand → identity)
Landing page: 'Own Your Stack' full page copy with A/B test plan
RoadChain-SHA2048: acf9a01550f3cb91
RoadChain-Identity: alexa@sovereign
RoadChain-Full: acf9a01550f3cb91373a2a43d02efc1a6ed557778584f96778e6db55f59d3f9be7579ac754e7851ae2384c4909ab2e3c354f15b72918a6c65b22a2003531290b41e80a375437714826a03df976f32e0b2e3f8124c0a3c1e9ae9ffeac6f9f1d4c21196c3e2afd67f01ec224425e75c23d1a89020123953925768d54384553962622d1527ca1a18501defbb22c1415e165a9d8201087ca9c99fb684648edb0c7ba709697602a9b6938325a01d19ec71178899568cbf5e01c14257be63aef06244157e9bb14320a2cc9cfbf8a610b66f262a08b3f234f0ba649cdc550ffab24088829a7a01519e546f467541f493cc87b6f7b90a352f69d9c8f88d228af2a486359
ads/social/dev-to-articles.md (303 lines, new file)
@@ -0,0 +1,303 @@
# BlackRoad Dev.to / Hashnode Cross-Posts

**Principle:** Authority + Reciprocity (give technical depth, earn credibility)
**Rule:** These are developer audiences. Lead with code and architecture, not marketing.

---

## Article 1: Setup Guide

**Title:** How I Run 16 AI Models on a Raspberry Pi 5 + Hailo-8 (Complete Setup Guide)

**Tags:** #ai #raspberrypi #selfhosted #tutorial

**Body:**

```markdown
I run 16 language models on a Raspberry Pi 5 with a Hailo-8 AI accelerator. This isn't a benchmark post — this is the production setup I use daily for BlackRoad OS.

Total hardware cost: $206. Total cloud bill: $0/month.

Here's exactly how to set it up.
## Hardware

| Component | Price | Purpose |
|-----------|-------|---------|
| Raspberry Pi 5 (8GB) | $80 | Compute |
| Hailo-8 M.2 AI Kit | $99 | 26 TOPS neural inference |
| 128GB microSD (A2) | $15 | Storage |
| USB-C power supply (27W) | $12 | Power |
| **Total** | **$206** | |

Optional but recommended:
- NVMe SSD via Pi 5 HAT ($30-50) — faster model loading
- Ethernet cable — more reliable than WiFi for serving
## OS Setup

Flash Raspberry Pi OS (64-bit, Bookworm) with the Raspberry Pi Imager.

Enable SSH during flashing (set hostname, username, password).

Boot and SSH in:

```bash
ssh pi@<your-pi-ip>
```

Update everything:

```bash
sudo apt update && sudo apt upgrade -y
```
## Install Ollama

One command:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Verify:

```bash
ollama --version
```
## Load Models

Pull the models you want. Here's what I run:

```bash
ollama pull llama3
ollama pull mistral
ollama pull codellama
ollama pull phi3
ollama pull gemma:2b
ollama pull nomic-embed-text
```

List loaded models:

```bash
ollama list
```

You can load as many models as your storage allows. Ollama swaps them in and out of memory efficiently — only one is resident in RAM at a time, but switching is fast. On recent versions, `ollama ps` shows which model is currently resident.
## Install Hailo-8 Drivers

On Raspberry Pi OS Bookworm, the Hailo packages ship in the standard repository:

```bash
sudo apt install -y hailo-all
```

Reboot, then verify the accelerator is detected:

```bash
hailortcli fw-control identify
```

You should see `Board Name: Hailo-8` in the output — that's the 26 TOPS accelerator.
## Test Inference

Run a quick test:

```bash
ollama run llama3 "What is edge AI inference?"
```

For the Hailo-8 specifically, you can run object detection and classification models natively:

```bash
hailortcli run <model.hef>
```
## Serving Over the Network

Ollama serves on port 11434 by default. To expose it to other nodes on your network:

```bash
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```
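
If you want that binding to survive reboots, a systemd drop-in should do it (this assumes the `ollama.service` unit the install script creates; adjust if your install differs):

```ini
# /etc/systemd/system/ollama.service.d/override.conf (illustrative path)
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

Then `sudo systemctl daemon-reload && sudo systemctl restart ollama`.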

Now any machine on your LAN can query:

```bash
curl http://<pi-ip>:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Explain edge computing in one sentence."
}'
```
## What I Actually Run

In production, I have 5 Pis orchestrated with Docker Swarm:

- Alice: gateway, DNS, databases
- Cecilia: AI inference (this setup + Hailo-8)
- Octavia: git server + second Hailo-8
- Aria: agent runtime + NATS messaging
- Lucidia: web apps + CI/CD

Two Hailo-8s = 52 TOPS combined. WireGuard mesh encrypts everything. Cloudflare Tunnels provide ingress without opening ports.

This serves 30 websites, 50 AI skills, and 207 git repositories. Monthly cost: electricity (~$5-8).
## The Economics

| Metric | Cloud GPU (H100) | Raspberry Pi + Hailo-8 |
|---|---|---|
| Cost | $3.90/hr | $206 one-time |
| Annual cost | $33,696 | ~$70 electricity |
| Inference | 65 TOPS | 26 TOPS |
| Privacy | Vendor-dependent | Absolute |
| Vendor lock-in | Yes | No |

The cloud GPU is faster. But for inference workloads that run 24/7, the Pi pays for itself in 53 hours of avoided cloud compute.
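
You can sanity-check that break-even figure yourself:

```bash
# Hours of H100 rental at $3.90/hr that equal the $206 hardware cost
awk 'BEGIN { printf "%.1f hours\n", 206 / 3.90 }'   # → 52.8 hours
```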

## Questions?

I'm happy to answer anything about this setup. The full architecture is at blackroad.io.

The project is open and the infrastructure is real — this post was served through it.
```
---

## Article 2: Architecture Deep-Dive

**Title:** I Replaced My Cloud Infrastructure with 5 Raspberry Pis — Here's the Full Architecture

**Tags:** #architecture #selfhosted #devops #ai

**Body:**

```markdown
This is the complete production architecture for BlackRoad OS. Not a demo. Not a weekend project. The actual infrastructure serving 30 websites, 207 git repos, and 50 AI skills.

## Network Topology

```
Internet
│
├── Cloudflare (DNS, CDN, Tunnels, Pages)
│   ├── 95+ Pages sites
│   ├── 40 KV namespaces
│   ├── 8 D1 databases
│   └── 10 R2 buckets
│
├── Cloudflare Tunnel → Alice (.49)
│   ├── Pi-hole (DNS filtering, 120+ blocked domains)
│   ├── PostgreSQL
│   ├── Qdrant (vector search)
│   └── Gateway / reverse proxy
│
├── Cloudflare Tunnel → Cecilia (.96)
│   ├── Ollama (16 models)
│   ├── Hailo-8 (26 TOPS)
│   └── Embedding engine (nomic-embed-text)
│
├── Cloudflare Tunnel → Octavia (.101)
│   ├── Gitea (207 repos, primary git host)
│   ├── Docker Swarm manager
│   └── Hailo-8 (26 TOPS)
│
├── Cloudflare Tunnel → Aria (.98)
│   ├── Agent runtime
│   └── NATS v2.12.3 (pub/sub mesh)
│
└── Cloudflare Tunnel → Lucidia (.38)
    ├── 334 web applications
    └── GitHub Actions runner
```
## Internal Mesh

All nodes connect over WireGuard:

```
Alice ←→ Cecilia ←→ Octavia ←→ Aria ←→ Lucidia
  ↕         ↕          ↕         ↕
     WireGuard encrypted mesh (10.8.0.x)
```
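
Each node carries a small WireGuard config along these lines (keys, addresses, and endpoint below are placeholders, not our real values):

```ini
# /etc/wireguard/wg0.conf (illustrative)
[Interface]
Address = 10.8.0.2/24
PrivateKey = <this-node-private-key>
ListenPort = 51820

[Peer]
# Alice
PublicKey = <alice-public-key>
AllowedIPs = 10.8.0.1/32
Endpoint = <alice-lan-ip>:51820
PersistentKeepalive = 25
```

One `[Peer]` block per node gives the full mesh.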

NATS v2.12.3 provides pub/sub messaging between 4 nodes. Agents on any node can publish tasks and subscribe to results.
## Key Design Decisions

**Why Gitea instead of GitHub?**
Gitea is primary. GitHub is a mirror. We own the git server. If GitHub changes pricing, policies, or terms — our repos don't move. Gitea runs on Octavia and syncs downstream to 17 GitHub orgs every 30 minutes.
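
The downstream sync is deliberately boring: a cron entry driving `git push --mirror` per repo covers it (the script path here is illustrative, not our actual layout):

```bash
# Illustrative crontab entry; the script name/path is a placeholder
*/30 * * * * /usr/local/bin/mirror-to-github.sh >> /var/log/git-mirror.log 2>&1
```

(Gitea's built-in push mirrors can do the same job from each repo's settings.)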

**Why Cloudflare Tunnels?**
Zero open ports on any Pi. All ingress goes through authenticated Cloudflare Tunnels. No port scanning surface. No firewall rules to maintain (except Lucidia, which runs UFW).
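
For reference, a minimal `cloudflared` config for one of these tunnels looks roughly like this (the tunnel ID and hostname are placeholders):

```yaml
# /etc/cloudflared/config.yml (illustrative)
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: git.blackroad.io
    service: http://localhost:3100
  - service: http_status:404
```

The catch-all `http_status:404` rule is required as the last ingress entry.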

**Why Pi-hole?**
Network-level DNS filtering. 120+ tracking/advertising domains blocked for every device on the network. This isn't ad-blocking — it's a security boundary.

**Why Hailo-8 instead of cloud GPUs?**
52 TOPS for $198 total vs. $33,696/year for one cloud H100. The Hailo-8 handles classification, detection, and inference acceleration. Ollama handles language model serving. Together, they cover the full inference stack.

**Why NATS?**
Lightweight pub/sub messaging designed for distributed systems. Agents on different Pis communicate by publishing to topics and subscribing to results. No heavyweight message broker to maintain. No Kafka cluster to babysit.
## Services Map

| Service | Node | Port | Purpose |
|---------|------|------|---------|
| Pi-hole | Alice | 53, 80 | DNS filtering |
| PostgreSQL | Alice | 5432 | Primary database |
| Qdrant | Alice | 6333 | Vector search / RAG |
| Ollama | Cecilia | 11434 | LLM inference (16 models) |
| Gitea | Octavia | 3100 | Git hosting (207 repos) |
| NATS | Aria | 4222 | Agent messaging |
| Web apps | Lucidia | various | 334 applications |
## Monitoring & Automation

Cron jobs on the Mac (Alexandria, .28):
- Every 5 min: health check + fleet collector
- Every 15 min: Cecilia model sync
- Every 30 min: website checks + git sync
- Daily 3am: backups + sovereign mesh check
- Every 6h: Google Drive sync (rclone)
- Every 12h: rsync to DigitalOcean droplet
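
As a crontab, that schedule looks roughly like this (script names and paths are placeholders for illustration):

```bash
# Illustrative crontab; actual script names/paths differ
*/5  * * * *  /opt/blackroad/bin/health-check.sh
*/15 * * * *  /opt/blackroad/bin/cecilia-model-sync.sh
*/30 * * * *  /opt/blackroad/bin/site-and-git-check.sh
0 3  * * *    /opt/blackroad/bin/backup-and-mesh-check.sh
0 */6 * * *   /opt/blackroad/bin/gdrive-sync.sh      # rclone
0 */12 * * *  /opt/blackroad/bin/droplet-rsync.sh    # rsync offsite
```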

## Replication

This entire architecture is reproducible. Total hardware cost: ~$400. Setup time: a few hours with the guide.

Full setup guide: [link to Article 1]

Questions welcome — happy to share configs, scripts, or troubleshooting for any of these services.
```
---

## Article 3: The Psychology Post (Cross-post from blog)

**Title:** 80% of Advertisements Are Misunderstood — I Studied Why, Then Built a Marketing System That Can't Be

**Tags:** #marketing #psychology #transparency #startup

*(Cross-post blog/02-advertisements-misunderstood.md with Dev.to formatting)*

---

## Publishing Schedule

| Week | Article | Platform |
|------|---------|----------|
| 1 | Setup Guide | Dev.to + Hashnode |
| 2 | Architecture Deep-Dive | Dev.to + Hashnode |
| 3 | Psychology of Advertising | Dev.to |
| 4+ | Cross-post new blog posts as they publish | Both |

## Dev.to-Specific Rules

- Use their canonical URL feature to point back to blackroad.io/blog (SEO benefit)
- Include a "series" tag to link articles together
- Respond to every comment within 24 hours
- Don't use cover images that look like marketing — use terminal screenshots or architecture diagrams
ads/social/linkedin-posts.md (276 lines, new file)
@@ -0,0 +1,276 @@
# BlackRoad LinkedIn Posts

**Principle:** Authority + Value-Expressive + Central Route (LinkedIn audience processes deeply)
**Tone:** Professional but not corporate. Founder voice. Data-driven.

---

## Post 1: The Founder Story

```
I left finance to build AI infrastructure on Raspberry Pis.

Not because I couldn't get a cloud budget.
Because I did the math.

One NVIDIA H100 on AWS: $3.90/hour.
Run it 24/7 for a year: $33,696.
For one GPU.

Two Hailo-8 accelerators: $198 total.
52 trillion operations per second.
Run them forever. No bill. No vendor.

I have a Series 7. A Series 24. A Series 65. A Series 66.

I passed the same licensing exams Wall Street requires — then used that financial rigor to ask a simple question: "Why are we renting compute we could own?"

The answer is that nobody told small teams they could own it. The cloud providers certainly didn't.

So I built BlackRoad OS: self-hosted AI that runs on $400 of Raspberry Pi hardware. Five nodes. Sixteen models. Thirty websites. A billing system. A git server. Fifty AI skills.

Monthly cloud bill: $0.

The self-hosted cloud market hit $18.48 billion in 2025 (Grand View Research). Edge AI is growing at 21.7% CAGR.

This isn't contrarian. This is where the math points.

#SelfHosted #EdgeAI #AI #Infrastructure #Founder
```
---

## Post 2: The Psychology Angle

```
I studied the Psychology of Advertising at the University of Minnesota (JOUR 4251, Dr. Claire Segijn).

One stat changed how I build marketing:

80% of advertisements are misunderstood by their audience.

Not rejected. Not ignored. Misunderstood.

The audience processes the ad and walks away believing something the advertiser never intended.

Four mechanisms cause this:
1. Omitted comparisons — "The best AI platform" (better than what?)
2. Pragmatic inference — "May be the best" (may also be the worst)
3. Juxtaposition — "Smart people choose X" (implies causation)
4. Affirming the consequent — "If you want Y, you need X" (false logic)

So we made a rule at BlackRoad:

Every claim must be literally, specifically, and verifiably true.

"52 TOPS of neural inference" — true, measured.
"16 Ollama models on a Pi 5" — true, run `ollama list`.
"$0/month cloud bill" — true, there is no cloud.

We even killed our own "30K agents" marketing copy when we realized it was aspirational, not factual.

The truth effect (Fennis & Stroebe) says: the more people see a claim, the more true it seems. This works on lies AND truths.

So we repeat truths. Obsessively. With sources.

It's slower than hype marketing. It's also why our users stay.

#MarketingPsychology #Advertising #Transparency #AI
```
---

## Post 3: The Market Thesis

```
Three markets are converging on the same conclusion:

1. Self-hosted cloud: $18.48B in 2025, growing 11.9% CAGR → $49.67B by 2034
2. Edge AI: $24.91B in 2025, growing 21.7% CAGR → $118.69B by 2033
3. AI inference: $106.15B in 2025, growing 19.2% CAGR → $254.98B by 2030

The conclusion: inference belongs on the edge.

Training requires cloud-scale compute. Always will.

But inference — running models, answering queries, classifying data, powering agents — runs cheaper, faster, and more privately on hardware you own.

A Hailo-8 accelerator costs $99 and delivers 26 TOPS.
A cloud GPU costs $3.90/hour — you exceed the Hailo's price in 26 hours.

94% of IT leaders fear vendor lock-in (Parallels 2026). 42% are moving workloads back on-premises.

This isn't a prediction. This is happening right now, backed by $150B+ in market activity.

BlackRoad OS sits at the intersection: self-hosted AI inference on commodity edge hardware.

The shift doesn't require faith. It requires arithmetic.

Sources: Grand View Research, MarketsandMarkets, Parallels

#EdgeAI #SelfHosted #Infrastructure #AI #MarketAnalysis
```
---

## Post 4: The Hiring/Culture Post

```
Our entire AI infrastructure runs on 5 Raspberry Pis.

Here's what that says about how we build:

1. We solve problems with architecture, not budget.
A $55 computer is a datacenter node if you know Docker, WireGuard, and DNS.

2. We own everything.
Git server. Billing system. AI models. DNS filtering. Auth. Search.
Nothing is rented. Nothing can be revoked.

3. We measure in watts, not invoices.
46 watts total. That's less than a light bulb.
Not because we're cheap. Because efficiency is a design value.

4. We verify before we claim.
Every stat in our marketing is sourced. Every number is measured.
We killed our own copy when it wasn't accurate.

5. We document everything.
207 repos on Gitea. Every decision, every config, every architecture choice.
If I get hit by a bus, the system runs itself.

This is the culture that builds BlackRoad OS.

We're not hiring right now. But when we do, this is what we look for:
People who'd rather own $400 of hardware than rent $33,696 of someone else's.

#Culture #Startup #Engineering #AI #SelfHosted
```
---

## Post 5: The Contrarian Take

```
Unpopular opinion: most AI startups are paying 100x too much for inference.

Not training. Inference.

Training a foundation model requires H100 clusters, thousands of GPUs, millions of dollars. Fair.

But RUNNING a model? Answering a query? Classifying a document? Embedding text for search?

That runs on a $99 accelerator plugged into a $55 computer.

The AI inference market is $106 billion. Most of it is cloud inference — metered by the hour, billed by the token, scaled by the credit card.

But inference doesn't need the cloud. Inference needs:
- A model (free, open-source, download it)
- Hardware (Hailo-8: $99, Pi 5: $55)
- Software (Ollama: free, one command)

Total: $154. Runs forever.

The cloud GPU business model depends on you not knowing this.

AWS doesn't want you to run `ollama serve` on a Raspberry Pi. That's a $33,696/year customer they lose.

The edge AI market is growing at 21.7% because the secret is getting out.

BlackRoad OS is how we prove it works — 16 models, 50 skills, 30 websites, $0/month.

Not a pitch. A proof.

#AI #Inference #EdgeComputing #SelfHosted #Startups
```
---

## Post 6: The Data Sovereignty Angle

```
Your AI vendor's privacy policy is not a technical guarantee.

It's a promise. Promises change.

Technical guarantees look like this:
- Pi-hole DNS filtering blocks 120+ tracking domains at the network level
- WireGuard encrypts all inter-node traffic
- No port is open to the internet (Cloudflare Tunnels for ingress only)
- Models run on hardware in your physical possession
- Data never transits a third-party network

"Never leaves your network" is a physics statement, not a policy statement.

The difference matters when:
- A vendor changes their ToS (they will)
- A government requests data (they can)
- A breach exposes your prompts (it happens)
- An acquirer gets your usage data (it's an asset)

Self-hosted AI is not about distrust. It's about architecture.

The same way HTTPS doesn't mean "I don't trust the internet" — it means "I'm not relying on trust when I can use encryption."

Self-hosted means: I'm not relying on policy when I can use physics.

#DataSovereignty #Privacy #AI #Security #SelfHosted
```
---

## Post 7: The Education Flex

```
Most marketing teams optimize for clicks.

We optimize for accurate comprehension.

Sounds the same. It's not.

Optimizing for clicks means:
- Sensational headlines
- Vague claims that imply more than they state
- CTAs designed to create urgency
- Metrics: impressions, CTR, conversions

Optimizing for comprehension means:
- Headlines with verified stats and named sources
- Claims that are literally, specifically true
- CTAs that reduce uncertainty instead of creating urgency
- Metrics: reply rate, deploy rate, retention

Why? Because the psychology (ELM, Fennis & Stroebe) says:

Central-route persuasion — where the audience thinks carefully — produces attitude change that is DEEP and LASTING.

Peripheral-route persuasion — where the audience uses shortcuts — produces change that is REAL but TEMPORARY.

Clicks are peripheral. Deploys are central.

We'd rather have 100 users who understand what BlackRoad does than 10,000 who clicked a flashy ad and bounced.

That's not idealism. It's customer acquisition cost math.

#Marketing #Psychology #ContentStrategy #AI #Startup
```
---

## Posting Schedule

| Day | Post Type | Frequency |
|-----|-----------|-----------|
| Monday | Market data / thesis | Weekly |
| Wednesday | Technical / architecture | Weekly |
| Friday | Culture / founder story | Biweekly |
| Varies | Contrarian take | Monthly |
| Varies | Psychology / education | Monthly |

## Engagement Rules

- Reply to every comment within 4 hours
- Like every comment (signal that you're paying attention)
- Never argue — redirect with data
- If someone asks a question you can't answer honestly, say "I don't know" (builds more credibility than guessing)
- Cross-reference blog posts when relevant (drives owned traffic)