feat: complete ad suite — search, social, display, video, email
Paid Search: 5 ad groups, 10 ads, negative keywords, budget allocation
Social: 5 Twitter threads, Reddit/HN posts for 5 subreddits + Show HN
Display: 5 banner sets × 4 formats (leaderboard, rectangle, skyscraper, mobile)
Video: 60s 'Own Your Stack' script + 30s 'The Math' script
Email: 5-email nurture sequence with segmentation rules

All backed by verified stats and named psych principles

RoadChain-SHA2048: 3d877b6d5e4827c0
RoadChain-Identity: alexa@sovereign
RoadChain-Full: 3d877b6d5e4827c0bb309d360e8632b675ea561b7af44fb5671f649dce88197b5f435f485de674ef1dd68ff273639d46c11fd6c966054ca43a3a28a64211ccab064271fb6cfde2bd2000032e5f94c01cc38717091db36c163cf268e477e6376926a9da420c622bdb1d7b44674dde3d646944f22ce86717d2d3131e4ebf8fe65263e6cbf3fad5d758f950d71f9d6eeea98fa005463907064ac161058d57e05b078b4983241f929afaf1ad5e9234f8292ecb394d5222a118b39fee751f18600308d3548b468314aac374ac2eee30890c48008a109fbb2d732f59cd03fd0b8a6439e5aa5a152540167ac8e376a043c9d5500c6f5fdd7343c7e6103f2b826ffd7565
ads/social/reddit-hn-posts.md (new file, 193 lines)
@@ -0,0 +1,193 @@
# BlackRoad Reddit & Hacker News Posts

**Principle:** Authority + Social Validation + Central Route (these audiences think deeply and counterargue)

**Critical rule:** These audiences HATE marketing. Lead with technical substance. Never sound like an ad.

---

## Hacker News: Show HN Post

**Title:** Show HN: I run 16 AI models on 5 Raspberry Pis — 52 TOPS, $0/month cloud bill

**Body:**

```
I've been building self-hosted AI infrastructure on Raspberry Pis for the past year. Wanted to share what a production setup actually looks like.

The stack:
- 5x Raspberry Pi (4x Pi 5, 1x Pi 4)
- 2x Hailo-8 M.2 AI accelerators (26 TOPS each = 52 TOPS total)
- Ollama serving 16 models (Llama 3, Mistral, CodeLlama, Phi, Gemma, etc.)
- Qdrant for vector search / RAG
- NATS v2.12.3 for agent-to-agent pub/sub messaging
- Gitea hosting 207 repos (primary git — GitHub is a mirror)
- Docker Swarm for orchestration
- WireGuard mesh for encryption
- Cloudflare Tunnels for ingress (no open ports)
- Pi-hole for DNS filtering (120+ blocked domains)
- PostgreSQL for primary database

This serves 30 websites across 20 domains, runs a billing system (RoadPay), processes 50 AI skills, and hosts all our code.

Total hardware cost: ~$400
Monthly cloud bill: $0
Power consumption: ~46 watts

For context: one H100 on AWS is $3.90/hr = $34,164/year. Two Hailo-8s cost $198 total and run forever.

The project is called BlackRoad OS. Everything is at blackroad.io. Happy to answer questions about the architecture, the Hailo-8 performance, Ollama on Pi, or anything else.
```
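The economics in the post are easy to sanity-check. A minimal sketch, using only the figures quoted above (the $3.90/hr H100 rate and the $99 Hailo-8 price):

```python
# Sanity-check the cloud-vs-edge math quoted in the post.
import math

H100_HOURLY_USD = 3.90  # AWS on-demand rate quoted above
HAILO8_USD = 99         # one-time hardware cost per accelerator

annual_cloud = H100_HOURLY_USD * 24 * 365
breakeven_hours = math.ceil(HAILO8_USD / H100_HOURLY_USD)

print(f"One H100, 24/7 for a year: ${annual_cloud:,.0f}")       # $34,164
print(f"Cloud hours to exceed one Hailo-8: {breakeven_hours}")  # 26
```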
---

## Hacker News: Blog Post Submission

**Title:** 94% of IT leaders fear vendor lock-in — the self-hosted market just hit $18.48B

**URL:** `https://blackroad.io/blog/vendor-lock-in`

*(No body text for URL submissions on HN)*

---

## Reddit: r/selfhosted

**Title:** I replaced my entire cloud infrastructure with 5 Raspberry Pis — here's the full architecture

**Body:**

```
Been running this setup for a year now. Figured I'd share since I see a lot of "is self-hosting AI actually viable?" questions here.

**Hardware:**
- Alice (Pi 5) — gateway, Pi-hole, PostgreSQL, Qdrant
- Cecilia (Pi 5 + Hailo-8) — 16 Ollama models, embedding engine
- Octavia (Pi 5 + Hailo-8) — Gitea (207 repos), Docker Swarm
- Aria (Pi 5) — agent runtime, NATS messaging
- Lucidia (Pi 4) — 334 web apps, CI/CD

**Networking:**
- WireGuard mesh between all nodes
- Cloudflare Tunnels for external access (zero open ports)
- Pi-hole DNS filtering fleet-wide

**AI Stack:**
- Ollama serves Llama 3, Mistral, CodeLlama, Phi-3, Gemma, and more
- 2x Hailo-8 = 52 TOPS of neural inference
- Qdrant + nomic-embed-text for RAG/semantic search
- NATS pub/sub for agent-to-agent communication

**What it runs:**
- 30 websites (20 domains)
- 50 AI skills across 6 modules
- Billing system (Stripe for cards, D1 for everything else)
- Auth system (JWT, 42 users)
- Full CI/CD pipeline
- 207 git repositories on Gitea

**Cost:**
- Hardware: ~$400 one-time
- Monthly: electricity only (~$5-8)
- Cloud bill: $0

Happy to answer questions. The project is BlackRoad OS — blackroad.io
```
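The "NATS pub/sub for agent-to-agent communication" bullet leans on NATS subject-based routing. A minimal in-process sketch of how NATS-style subjects match; a real deployment would run a NATS server with a client library, and the `agents.*` subject names here are hypothetical:

```python
# NATS-style subject matching: subjects are '.'-separated tokens,
# '*' matches exactly one token, '>' matches one or more trailing tokens.
def subject_matches(pattern: str, subject: str) -> bool:
    p_toks, s_toks = pattern.split("."), subject.split(".")
    for i, tok in enumerate(p_toks):
        if tok == ">":  # '>' must swallow at least one remaining token
            return len(s_toks) > i
        if i >= len(s_toks) or (tok != "*" and tok != s_toks[i]):
            return False
    return len(p_toks) == len(s_toks)

# Hypothetical subjects an agent mesh might publish on:
assert subject_matches("agents.*.tasks", "agents.cecilia.tasks")
assert subject_matches("agents.>", "agents.aria.results.42")
assert not subject_matches("agents.*.tasks", "agents.cecilia.results")
```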
---

## Reddit: r/homelab

**Title:** My homelab runs a company — 5 Pis, 52 TOPS AI, 30 websites, $0/month

**Body:**

```
I know "homelab to production" posts get mixed reactions, but this one's been running stable for a year so I figured I'd share.

[PHOTO OF PI CLUSTER]

**The nodes:**

| Node | Hardware | Role |
|------|----------|------|
| Alice | Pi 5 8GB | Gateway, Pi-hole, PostgreSQL, Qdrant |
| Cecilia | Pi 5 + Hailo-8 | 16 AI models (Ollama), embeddings |
| Octavia | Pi 5 + Hailo-8 | Gitea (207 repos), Docker Swarm |
| Aria | Pi 5 | Agent runtime, NATS pub/sub |
| Lucidia | Pi 4 | 334 web apps, GitHub Actions |

**Total power:** ~46W
**Total compute:** 52 TOPS neural inference

This serves real production traffic — 30 websites, a billing system, auth, AI inference, CI/CD, the works.

The Hailo-8 has been the game changer. $99 for 26 TOPS of inference, plugs into the Pi 5 via M.2. Two of them outperform the economics of any cloud GPU for inference workloads.

AMA about the setup, Hailo-8 performance, Ollama on Pi, or the network architecture.
```
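The "~$5-8/month" electricity estimate from the r/selfhosted post checks out at common residential rates for a ~46 W fleet. A quick sketch; the $0.15 and $0.25 per kWh rates are assumptions, not figures from the posts:

```python
# Monthly electricity cost for a 46 W cluster running 24/7.
WATTS = 46
HOURS_PER_MONTH = 24 * 30

kwh_per_month = WATTS * HOURS_PER_MONTH / 1000  # 33.12 kWh

for rate in (0.15, 0.25):  # assumed $/kWh; varies by region
    print(f"${kwh_per_month * rate:.2f}/month at ${rate}/kWh")
```

At those rates the output brackets the quoted range, roughly $4.97 to $8.28 per month.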
---

## Reddit: r/LocalLLaMA

**Title:** Running 16 Ollama models on Raspberry Pi 5 + Hailo-8 — benchmarks and setup guide

**Body:**

```
Setup: Pi 5 (8GB) + Hailo-8 M.2 (26 TOPS), running Ollama.

**Models currently loaded:**
- Llama 3 8B
- Mistral 7B
- CodeLlama 7B
- Phi-3 Mini
- Gemma 2B
- Plus 11 more specialized models

**What works well:**
- Inference speed is surprisingly usable for 7-8B models
- Hailo-8 handles classification/detection tasks natively at full 26 TOPS
- Multiple models can be loaded (Ollama swaps efficiently)
- Embedding (nomic-embed-text) runs smoothly for RAG

**The real value:**
Running two of these nodes (52 TOPS combined) with NATS pub/sub means agents on different Pis can communicate and delegate tasks. One node runs the LLM, another handles embeddings, a third does classification.

It's not replacing an A100 for training. But for inference, RAG, and agent orchestration? It's production-viable and costs $200 in hardware total.

Full architecture at blackroad.io if you want the deep dive. Happy to share configs.
```
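For readers who want to reproduce the setup: Ollama exposes an HTTP API on port 11434. A minimal sketch of a non-streaming `/api/generate` call using only the standard library; the model name and prompt are placeholders:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_generate_request(prompt: str, model: str = "llama3") -> dict:
    """Payload for Ollama's /api/generate; stream=False returns one JSON object."""
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt: str, model: str = "llama3") -> str:
    data = json.dumps(build_generate_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# ollama_generate("Why self-host inference?")  # needs a running Ollama daemon
```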
---

## Reddit: r/raspberry_pi

**Title:** 68 million Pis sold worldwide. Here's what 5 of them do when you treat them like a data center.

**Body:**

```
I've been running my entire company infrastructure on Raspberry Pis for a year. Not as a project. As production.

- 30 websites across 20 domains
- 207 git repositories on Gitea
- 16 AI models via Ollama
- 52 TOPS of neural inference (2x Hailo-8)
- Full billing system
- Vector database for semantic search
- Agent mesh network (NATS pub/sub)
- Automated CI/CD pipeline
- Pi-hole DNS filtering

All on 5 Pis drawing ~46 watts total.

The gap between "hobby project" and "production infrastructure" isn't hardware. It's architecture. Docker Swarm, WireGuard mesh, Cloudflare Tunnels, proper monitoring — and suddenly a $55 SBC is a datacenter node.

Happy to share the full setup. The project is called BlackRoad OS.
```
---

## Posting Rules

1. **Never sound like an ad.** These communities will downvote anything that smells like marketing. Lead with technical substance.
2. **Answer every comment.** Engagement in comments drives visibility on both HN and Reddit.
3. **Be honest about limitations.** "This isn't replacing an A100 for training" builds more credibility than overclaiming.
4. **Include the photo.** r/homelab and r/raspberry_pi are visual. Show the actual hardware.
5. **Time the posts.** HN: Tuesday-Thursday, 9-11am ET. Reddit: varies by sub, but weekday mornings.
6. **Don't cross-post simultaneously.** Stagger by 2-3 days so you can customize based on what resonated.
ads/social/twitter-threads.md (new file, 386 lines)
@@ -0,0 +1,386 @@
# BlackRoad Twitter/X Threads

**Principle:** Peripheral Route + Social Validation + Build-in-Public

**Rule:** 80% educate, 20% promote

---

## Thread 1: The Stat-Flip (Vendor Lock-In)

**Type:** Educate (80%)

```
1/ 94% of IT leaders fear vendor lock-in with their cloud provider.

Not mildly concerned. Afraid.

42% are considering moving workloads back on-premises.

Here's the math nobody's talking about: 🧵

2/ An H100 on AWS costs $3.90/hour.

Run it 24/7 for a year: $34,164.

For ONE GPU.

3/ A Hailo-8 AI accelerator costs $99.

It delivers 26 TOPS of neural inference.

It plugs into a Raspberry Pi 5.

It runs 24/7 forever on pennies of electricity.

4/ In 26 hours of cloud GPU time, you've spent more than the Hailo-8 costs to OWN.

26 hours vs. forever.

That's not a pricing comparison. That's a different economic model.

5/ We run 16 AI models on two of these.

52 TOPS total.

5 Raspberry Pis.

30 websites. 50 AI skills. 207 git repos.

Total monthly cloud bill: $0.

6/ The self-hosted cloud market hit $18.48B in 2025.

Growing at 11.9% CAGR.

Edge AI growing at 21.7%.

This isn't a hobby. It's where the market is going because the math requires it.

Sources: Grand View Research, Parallels 2026 Survey
```
---

## Thread 2: Build-in-Public (Infrastructure Tour)

**Type:** Educate (80%)

```
1/ People ask what "self-hosted AI" actually looks like in production.

Here's our full infrastructure — every node, every service, every port.

Nothing hidden. 🧵

2/ NODE 1: Alice (.49)
- Gateway router
- Pi-hole DNS (blocks 120+ tracking domains)
- PostgreSQL database
- Qdrant vector database for RAG

Hardware: Raspberry Pi 5, 8GB RAM
Power: ~8 watts

3/ NODE 2: Cecilia (.96)
- 16 Ollama models (Llama, Mistral, CodeLlama, Phi, Gemma)
- Embedding engine (nomic-embed-text)
- Hailo-8 accelerator: 26 TOPS

Hardware: Raspberry Pi 5 + Hailo-8 M.2
Power: ~12 watts

4/ NODE 3: Octavia (.101)
- Gitea: 207 repositories (PRIMARY git host)
- Docker Swarm manager
- Hailo-8 accelerator: 26 TOPS

Hardware: Raspberry Pi 5 + Hailo-8 M.2
Power: ~12 watts

5/ NODE 4: Aria (.98)
- Agent runtime
- NATS v2.12.3 pub/sub messaging
- Agent-to-agent communication

Hardware: Raspberry Pi 5
Power: ~8 watts

6/ NODE 5: Lucidia (.38)
- 334 web applications
- GitHub Actions runner
- CI/CD pipeline

Hardware: Raspberry Pi 4
Power: ~6 watts

7/ THE MESH:
- WireGuard encrypts everything
- NATS connects 4 nodes for agent messaging
- Cloudflare Tunnels expose services (no open ports)
- Pi-hole filters DNS fleet-wide

Total power: ~46 watts
Total monthly bill: $0

8/ This serves 30 websites across 20 domains.

It processes 50 AI skills.

It hosts 207 repos on Gitea.

It runs a billing system (RoadPay) that processes real payments.

All on $400 of hardware.

9/ The question isn't whether this works.

You're reading this tweet on a device that loaded content served by this infrastructure.

It works.

The question is why you're still paying hourly for something that costs $400 once.

blackroad.io
```
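The per-node wattages in tweets 2-6 sum to the headline figure in tweet 7. A quick consistency check using only numbers from the thread:

```python
# Per-node power draw as quoted in the thread, in watts.
NODE_WATTS = {
    "Alice": 8,     # Pi 5: gateway, Pi-hole, PostgreSQL, Qdrant
    "Cecilia": 12,  # Pi 5 + Hailo-8: Ollama models, embeddings
    "Octavia": 12,  # Pi 5 + Hailo-8: Gitea, Docker Swarm
    "Aria": 8,      # Pi 5: agent runtime, NATS
    "Lucidia": 6,   # Pi 4: web apps, CI/CD
}

total = sum(NODE_WATTS.values())
print(f"Fleet power draw: ~{total} W")  # 46 W, matching tweet 7
```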
---

## Thread 3: Psychology of Advertising (Educate)

**Type:** Educate (80%)

```
1/ I studied the Psychology of Advertising at the University of Minnesota.

Here are 7 things I learned that changed how I think about every ad I see:

🧵

2/ 80% OF ADS ARE MISUNDERSTOOD.

Not ignored. Misunderstood.

The audience sees the ad, processes it, and walks away believing something the advertiser didn't intend.

(Fennis & Stroebe, Psychology of Advertising)

3/ There are TWO PROCESSING ROUTES.

Central Route: you think carefully, evaluate arguments, counterargue.

Peripheral Route: you use shortcuts — design, social proof, brand recognition.

Most ads are designed for peripheral. Most claims need central.

4/ THE TRUTH EFFECT.

The more you see a claim, the more true it seems.

This works on true AND false claims.

Ethical play: repeat things that are actually true, frequently, everywhere.

5/ COMPLIANCE PRINCIPLE: COMMITMENT/CONSISTENCY.

Once you say yes to a small thing, you're more likely to say yes to a bigger thing.

"Star this repo" → "try a deploy" → "become a user" → "become a customer."

Every funnel is a commitment ladder.

6/ 94% OF IT LEADERS FEAR VENDOR LOCK-IN.

Not because of a marketing campaign.

Because the math is bad and the contracts are worse.

The best marketing amplifies a truth people already feel.

7/ PERSONALIZATION HAS A CREEPY THRESHOLD.

"For developers who self-host" = good.

"Hey [name], we noticed you visited our pricing page 3 times" = creepy.

Segment by role, not by surveillance.

8/ THE MOST POWERFUL MARKETING ISN'T PERSUASION.

It's accurate comprehension.

A customer who understands what they're getting stays.

A customer who was tricked leaves — and tells everyone.

We cite our sources. We verify our stats. We show our infrastructure.

blackroad.io/blog
```
---

## Thread 4: Product Launch (RoadPay)

**Type:** Promote (20%)

```
1/ We built our own billing system.

Not because Stripe is bad. Because Stripe is the card charger — not the billing brain.

RoadPay is live. Here's what it does: 🧵

2/ RoadPay runs on Cloudflare D1.

4 plans. 4 add-ons. Usage tracking. Invoice generation.

Stripe handles the card charge. RoadPay handles everything else.

3/ Why not just use Stripe Billing?

Because Stripe Billing is $0.50/invoice + 0.4% of revenue.

At scale, your billing platform takes a cut of every dollar.

RoadPay costs $0/month. It runs on a D1 database. We own it.

4/ The stack:
- D1 (Cloudflare) for the database
- Workers for the API
- Stripe for card processing only
- Auth at auth.blackroad.io (JWT, 42 users)

5/ 4 plans:

Starter → Builder → Pro → Enterprise

Each tier unlocks more agents, more compute, more skills.

No "contact sales" wall. No enterprise pricing email. Pick a plan. Start building.

6/ This is what "own your stack" means in practice.

We don't rent our billing system.
We don't rent our git hosting.
We don't rent our AI inference.
We don't rent our DNS.

We built it. We own it. We run it.

RoadPay is at tollbooth.blackroad.io
```
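Tweet 3's Stripe Billing math can be made concrete. A sketch assuming the quoted $0.50/invoice + 0.4% pricing; the invoice volume and average amount are hypothetical:

```python
def stripe_billing_fee(invoice_amount: float) -> float:
    """Quoted platform cost per invoice: $0.50 flat + 0.4% of the amount."""
    return 0.50 + 0.004 * invoice_amount

# Hypothetical month: 1,000 invoices averaging $49 each.
invoices, avg_amount = 1000, 49.0
monthly_fee = invoices * stripe_billing_fee(avg_amount)
print(f"Billing platform cut: ${monthly_fee:,.2f}/month")  # $696.00
```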
---

## Thread 5: Edge AI Market (Educate)

**Type:** Educate (80%)

```
1/ The edge AI market is about to 5x.

$24.91 billion in 2025.
$118.69 billion by 2033.
21.7% CAGR.

Here's why — and why the hardware costs $99: 🧵

2/ LATENCY.

Cloud inference = network round trip.

Edge inference = on-device.

For real-time AI (agents, sensors, interactive), the speed of light is too slow when your data center is 2,000 miles away.

3/ PRIVACY.

Edge inference = data never leaves the device.

Not "encrypted in transit."
Not "processed in a secure enclave."

Never. Leaves. The. Device.

4/ COST.

Cloud inference: metered, billed hourly, scales linearly.

Edge inference: buy once, run forever. The more you use it, the cheaper per inference.

A Hailo-8 costs $99. An H100 on AWS costs $3.90/hour.

In 26 hours, the cloud costs more than owning the edge hardware forever.

5/ The AI inference market is $106B in 2025.

Most of that is cloud inference — metered by the hour.

Edge AI hardware is $26B, growing at 17.6%.

The shift is happening because the economics are undeniable.

6/ We run 52 TOPS of edge inference on two $99 accelerators.

16 language models. 50 AI skills. Production workloads.

On Raspberry Pis. In a closet. In Minnesota.

The future of inference is local. It always should have been.

Sources: Grand View Research, MarketsandMarkets
```
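Tweet 1's "about to 5x" follows from the quoted CAGR. A quick check that the three numbers are mutually consistent (the small gap from the quoted $118.69B comes from rounding the CAGR):

```python
# Edge AI market figures quoted in tweet 1: $24.91B (2025) at 21.7% CAGR to 2033.
start_b, cagr, years = 24.91, 0.217, 2033 - 2025

projected_b = start_b * (1 + cagr) ** years
multiple = projected_b / start_b

print(f"2033 projection: ${projected_b:.1f}B")  # close to the quoted $118.69B
print(f"Growth multiple: {multiple:.1f}x")      # roughly 5x
```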
---

## Single Posts (Rotation)

### Build-in-Public
```
Shipped today: [FEATURE/FIX/IMPROVEMENT]

[ONE LINE: what it does]

[SCREENSHOT]
```

### Stat-Flip
```
[X]% of [people] [do something painful].

We [do the opposite]. Here's the result: [NUMBER].

[LINK]
```

### Community Highlight
```
[USER] just [deployed/built/created] [THING] with BlackRoad.

[THEIR QUOTE — 1 sentence]

This is what "own your stack" looks like.
```

### 80/20 Educate
```
Things I wish I knew before self-hosting AI:

1. [INSIGHT]
2. [INSIGHT]
3. [INSIGHT]

Learned from running 16 models on Raspberry Pis for [MONTHS].
```

---

## Posting Schedule

| Day | Type | Ratio |
|-----|------|-------|
| Monday | Educate (thread or insight) | 80% |
| Tuesday | Build-in-public | 80% |
| Wednesday | Educate (stat or framework) | 80% |
| Thursday | Promote (product/feature) | 20% |
| Friday | Engage (question/poll/community) | 80% |