Part 5 of 8 · networking · ~6 min

Remote Access with Tailscale — Your Local AI, Available Anywhere

A local AI setup is only as useful as its accessibility. If you can only reach your models when you're physically at your workstation, that's a significant limitation. Tailscale solves this cleanly — here's how to use it to make your local AI available from anywhere, 24/7.

What Tailscale is

Tailscale is a VPN built on WireGuard that creates a private network between your devices without requiring you to open ports, configure a router, or manage certificates. Every device you add to your Tailscale network gets a stable private IP address, and they can communicate with each other as if they're on the same local network, regardless of where they physically are.

It's free for personal use (up to 100 devices), takes about five minutes to set up, and works reliably across NAT, firewalls, and mobile networks.

The setup

On your Fedora workstation — the machine running Ollama or your model server:

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

Authenticate with your Tailscale account, and the machine joins your private network. Note your machine's Tailscale IP (something like 100.x.x.x) from the Tailscale dashboard, or by running tailscale ip -4 on the machine itself.

Install Tailscale on your other devices — laptop, phone, iPad — and authenticate them to the same account. They're now all on the same private network.
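
A quick sanity check before moving on: from any other device on the tailnet, confirm the workstation is visible and reachable. The IP below is a placeholder for your workstation's actual Tailscale address:

```shell
# List peers on your tailnet and their 100.x addresses
tailscale status

# Confirm the workstation responds (substitute its real Tailscale IP)
ping -c 3 100.x.x.x
```

If the ping fails, check that both devices show as connected in tailscale status before debugging anything else.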

Accessing Ollama remotely

By default, Ollama listens on localhost:11434, which means it only accepts connections from the local machine. To make it accessible over Tailscale, you need to bind it to the Tailscale network interface.

Set the OLLAMA_HOST environment variable to your Tailscale IP before starting Ollama:

OLLAMA_HOST=100.x.x.x:11434 ollama serve

Or add it to your systemd service file to make it persistent. Now, from any device on your Tailscale network, you can point an OpenAI-compatible client at http://100.x.x.x:11434 and reach your local models.
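
As a concrete sketch, here's what a remote request looks like using Ollama's OpenAI-compatible endpoint. The IP and model name are placeholders — substitute your workstation's Tailscale IP and a model you've pulled:

```shell
curl http://100.x.x.x:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "Hello from my phone"}]
  }'
```

Any client library that accepts a custom base URL (most OpenAI SDKs do) works the same way — set the base URL to http://100.x.x.x:11434/v1 and the API key to any non-empty string.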

Security considerations

Since Tailscale creates a private network rather than exposing ports to the public internet, the attack surface is narrow — only devices authenticated to your Tailscale account can reach your machine. That said, it's still worth thinking about what's exposed:

  • Don't run Ollama as root.
  • If you're using an agent framework that exposes additional endpoints, be deliberate about which endpoints are bound to the Tailscale interface and which stay on localhost only.
  • Enable Tailscale's ACL (Access Control List) feature if you're sharing your network with other people — it lets you restrict which devices can reach which services.
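
On that last point, ACLs are defined as a HuJSON policy in the Tailscale admin console. A minimal sketch of the idea — the email and IP are placeholders for your own account and workstation:

```json
{
  "acls": [
    // Only devices owned by this account may reach Ollama's port.
    {
      "action": "accept",
      "src":    ["you@example.com"],
      "dst":    ["100.x.x.x:11434"]
    }
  ]
}
```

With a rule like this in place, other users you've shared the tailnet with can't reach the Ollama port even though they can see the machine.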

Making it persistent

For a setup that's available 24/7, you want both Tailscale and Ollama to start automatically on boot. Tailscale installs a systemd service by default. For Ollama, create or edit its systemd service file to include the OLLAMA_HOST environment variable and ensure it's set to start on boot.
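
One way to do this, assuming Ollama was installed with its standard ollama.service unit, is a systemd drop-in override (sudo systemctl edit ollama opens one for you). The IP is a placeholder for your Tailscale address:

```ini
# /etc/systemd/system/ollama.service.d/override.conf
[Unit]
# Start after tailscaled so the Tailscale interface exists when Ollama binds to it
After=tailscaled.service

[Service]
Environment="OLLAMA_HOST=100.x.x.x:11434"
```

Then reload and enable it:

```shell
sudo systemctl daemon-reload
sudo systemctl enable --now ollama
```

The After= ordering matters: if Ollama starts before tailscaled has brought the interface up, binding to the 100.x address can fail.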

With that in place, your Fedora workstation becomes a personal AI server accessible from anywhere — without any cloud dependency, monthly inference costs, or data leaving your control.
