Seamlessly access local services on LAN and Tailnet
As I am passionate about self-hosting, I have been setting up various services in my homelab, in addition to those on my cloud servers. I have also been using Tailscale to access my devices and services while away from home. So I wanted a seamless way to access these services, irrespective of whether I am on my home local area network (LAN) or connected to it via Tailscale. Below are my requirements for such a setup.
- All the devices/services should be accessible using a fully-qualified domain name (FQDN), under a domain that I own and control. This rules out the auto-generated Tailscale subdomains.
- I have a LinuxServer.io SWAG reverse proxy in front of all the services in my homelab, and it provides TLS termination. So I would like to access the existing services using TLS at all times.
- While I could set up a Tailscale subnet router that allows access to my LAN, I do not want to allow the devices on my Tailnet full access to my LAN. And I do not want to redo my home LAN setup to isolate things to be able to do this.
- The FQDNs of the exposed services should resolve to a LAN IP address when I am in my home LAN and to a Tailnet-specific address when I am not at home and connected to my Tailnet.
- It should be possible to expose more services using this setup in the future, even if they are not behind the SWAG reverse proxy.
- For this setup to work, the base domain I use must not have any publicly accessible DNS records pointing to private IP addresses.
- The resulting setup should integrate into my existing `docker-compose` configuration.
The Tailscale docker documentation illustrates a way to expose LAN services on a Tailnet, but the example on that page makes the service(s) accessible only over the Tailnet. So it doesn’t work for me.
To start, I added a Tailscale docker container to my `compose.yaml` file using a configuration like the following.

```yaml
  tailscale:
    image: tailscale/tailscale
    container_name: tailscale
    hostname: <tailnet device name>
    environment:
      - TS_ACCEPT_DNS=true
      - TS_AUTHKEY=<authkey or OAuth2 client secret>
      - TS_EXTRA_ARGS=--advertise-tags=tag:docker
      - TS_ROUTES=172.21.0.0/24
    volumes:
      - ./config/tailscale/state:/var/lib/tailscale
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - net_admin
      - sys_module
    networks:
      tailnet-subnet:
        ipv4_address: 172.21.0.11
    restart: unless-stopped

networks:
  tailnet-subnet:
    ipam:
      config:
        - subnet: 172.21.0.0/24
```
For this to work, I had to define a tag named `docker` and add it to my Tailscale ACLs. I also added an ACL to auto-approve the routes advertised by this container.
```jsonc
{
  // other configuration
  "tagOwners": {
    "tag:docker": ["autogroup:admin"],
  },
  "autoApprovers": {
    "routes": {
      "172.21.0.0/24": ["tag:docker"],
    },
  },
  // other configuration
}
```
With this, all the containers that get added to the `tailnet-subnet` network and have an IP address in the `172.21.0.0/24` subnet will be accessible over my Tailnet. So I updated the configuration of the `swag` container to add it to the `tailnet-subnet` network.
```yaml
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - var1=value1
      - var2=value2
    volumes:
      - ./config/swag:/config
    ports:
      - 443:443
      - 80:80
    networks:
      tailnet-subnet:
        ipv4_address: 172.21.0.12
      default:
    restart: unless-stopped
```
In the above snippet, I added the `tailnet-subnet` network to the `networks` key and assigned the container a static IP address in its subnet, `172.21.0.12`. Since the `default` network was implicitly included before, and attaching a container to a different network removes that implicit inclusion, I have also explicitly added the `default` network.
With these configuration changes, the `swag` container was accessible at the `172.21.0.12` IP address over my Tailnet. But I still needed to set up DNS to access the services by domain name.

Tailscale provides a way to add a restricted nameserver for a specific domain using split DNS. So I needed a DNS server that resolved the domains of the services hosted on the `swag` container to its Tailnet subnet IP address, `172.21.0.12`.
For this, I took inspiration from jpillora/dnsmasq and created a custom `Dockerfile` that sets up a `dnsmasq` resolver.
```dockerfile
FROM alpine:latest
LABEL maintainer="email@domain.tld"

RUN apk update \
    && apk --no-cache add dnsmasq

RUN mkdir -p /etc/default \
    && echo -e "ENABLED=1\nIGNORE_RESOLVCONF=yes" > /etc/default/dnsmasq

COPY dnsmasq.conf /etc/dnsmasq.conf

EXPOSE 53/udp

ENTRYPOINT ["dnsmasq", "--no-daemon"]
```
Then I created a `dnsmasq.conf` configuration file that looks like the following snippet.

```
log-queries
no-resolv
address=/domain1.fqdn/172.21.0.12
address=/domain2.fqdn/172.21.0.12
```
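If many services need records, the `address=` lines can be generated instead of written by hand. A minimal sketch, using the placeholder domains from the snippet above (substitute your own FQDNs):

```shell
# Emit one dnsmasq address= record per exposed domain, all pointing at
# the swag container's Tailnet-subnet IP. The domain list is illustrative.
SWAG_IP=172.21.0.12
for domain in domain1.fqdn domain2.fqdn; do
    echo "address=/${domain}/${SWAG_IP}"
done
```

Appending the output to the mounted `dnsmasq.conf` and restarting the container picks up the new records.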
Then I added the following snippet to my `compose.yaml` file to add the `dnsmasq` container.

```yaml
  dnsmasq:
    build: "./build/dnsmasq"
    container_name: dnsmasq
    restart: unless-stopped
    volumes:
      - ./config/dnsmasq/dnsmasq.conf:/etc/dnsmasq.conf
    networks:
      tailnet-subnet:
        ipv4_address: 172.21.0.3
```
Then I ran `docker compose build` to build the container image, and `docker compose up -d dnsmasq` to start it. With that, I had a DNS resolver to resolve my domain names in the Tailnet.
You might notice error messages in the `dnsmasq` container’s logs that look like `dnsmasq: config error is REFUSED (EDE: not ready)`. This happens because we have not defined any upstream servers that `dnsmasq` can use. But since we want this `dnsmasq` instance to resolve only our own domain names, this is okay and the error can be ignored.
Then, on my Tailscale admin dashboard, I added a custom nameserver for my domain name and configured `172.21.0.3`, the IP address of the `dnsmasq` container, as the address of the server to use. Now all the devices on my Tailnet could access the services on my `swag` container by domain name.
I have an existing DNS setup on my home LAN that resolves the same domain names to the LAN IP addresses. So now, with this setup for Tailscale, my devices can seamlessly access the private services on my LAN and Tailnet.
If I want to add a new service to this setup, it is as easy as attaching it to the `tailnet-subnet` network, and adding the DNS records to the `dnsmasq` docker container’s configuration file and to the resolver in my home LAN.
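For example, exposing a hypothetical new service that is not behind SWAG might look like the snippet below. The service name, image, and the address `172.21.0.20` are placeholders; any unused address in the `172.21.0.0/24` subnet works.

```yaml
  newservice:
    image: vendor/newservice        # hypothetical image
    networks:
      tailnet-subnet:
        ipv4_address: 172.21.0.20   # any unused address in 172.21.0.0/24
      default:
    restart: unless-stopped
```

with a matching record in the `dnsmasq.conf` file:

```
address=/newservice.domain.fqdn/172.21.0.20
```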