Series · 8 parts
Labs
Local AI deployments — from hardware and OS choice to Ollama, Tailscale, and agent tooling. Written for developers who want models on their own hardware.
Read in order for the full arc, or jump to any topic.
- Part 1 of 8 · hardware · ~7 min
Hardware Requirements for an AI Lab
If you want to run AI models locally, the first question isn't which model to use — it's whether your hardware can handle it.
- Part 2 of 8 · local AI · ~7 min
Deploying LLMs and AI Agents Locally — Is It Actually Worth It?
Running AI models locally sounds appealing in theory — no API costs, no data leaving your machine, full control. But is it actually practical?
- Part 3 of 8 · fedora · ~6 min
Why Fedora Workstation Is a Great Option for Local AI
When setting up a local AI environment, your choice of operating system matters more than most tutorials acknowledge.
- Part 4 of 8 · tools · ~7 min
Core Tools and Ecosystem
The local AI ecosystem has matured quickly. A handful of tools handle the hard parts well — here's what's worth knowing.
- Part 5 of 8 · networking · ~6 min
Remote Access with Tailscale — Your Local AI, Available Anywhere
A local AI setup is only as useful as it is reachable. Tailscale makes yours available from anywhere.
- Part 6 of 8 · agents · ~8 min
The AI Agent Landscape — Hermes Agent, OpenClaw, and Kilocode
AI agents go beyond Q&A — this post maps where Hermes, Kilocode, and OpenClaw fit and how they differ.
- Part 7 of 8 · tools · ~5 min
Additional Tools Worth Knowing
Beyond the core stack: OpenCode and Oh My Open Agent in the local AI ecosystem.
- Part 8 of 8 · guide · ~12 min
The Full Setup Guide — Everything Connected
A single, working local AI setup: Fedora, Ollama, Tailscale, Kilocode, and OpenCode — end to end.