
As I am passionate about self-hosting, I have been setting up various services in my homelab, in addition to those on my cloud servers. I have also been using Tailscale to access my devices and services while away from home. So I wanted a seamless way to access these services, irrespective of whether I am on my home local area network (LAN) or connected to it via Tailscale. Below are my requirements for such a setup.

  • All the devices/services should be accessible using a fully-qualified domain name (FQDN), under a domain that I own and control. This rules out the auto-generated Tailscale subdomains.
  • I have a LinuxServer.io SWAG reverse proxy in front of all the services in my homelab, and it provides TLS termination. So I would like to access the existing services using TLS at all times.
  • While I could set up a Tailscale subnet router that allows access to my LAN, I do not want to allow the devices on my Tailnet full access to my LAN. And I do not want to redo my home LAN setup to isolate things to be able to do this.
  • The FQDNs of the exposed services should resolve to a LAN IP address when I am in my home LAN and to a Tailnet-specific address when I am not at home and connected to my Tailnet.
  • It should be possible to expose more services using this setup in the future, even if they are not behind the SWAG reverse proxy.
  • The base domain that I want to use for this should not have any publicly accessible DNS records pointing to private IP addresses for this setup to work.
  • The resulting setup should integrate into my existing docker-compose configuration.

The Tailscale Docker documentation illustrates a way to expose LAN services on a Tailnet, but the example on that page makes the service(s) accessible only over the Tailnet. So it doesn’t work for me.

To start, I added a Tailscale docker container to my compose.yaml file using a configuration like

  tailscale:
    image: tailscale/tailscale
    container_name: tailscale
    hostname: <tailnet device name>
    environment:
      - TS_ACCEPT_DNS=true
      - TS_AUTHKEY=<authkey or OAuth2 client secret>
      - TS_EXTRA_ARGS=--advertise-tags=tag:docker
      - TS_ROUTES=172.21.0.0/24
    volumes:
      - ./config/tailscale/state:/var/lib/tailscale
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    networks:
      tailnet-subnet:
        ipv4_address: 172.21.0.11
    restart: unless-stopped
networks:
  tailnet-subnet:
    ipam:
      config:
        - subnet: 172.21.0.0/24

For this to work, I had to define a tag named docker in my Tailscale ACL policy file. I also added an auto-approver so that the routes advertised by this container are approved automatically.

{
    // other configuration
    "tagOwners": {
        "tag:docker": ["autogroup:admin"],
    },
    "autoApprovers": {
        "routes": {
            "172.21.0.0/24": ["tag:docker"],
        },
    },
    // other configuration
}
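With the tag and auto-approver in place, the container can be started and checked. The commands below are a sketch that assumes the compose service and container are both named tailscale, as in the snippet above, and that the auth key in the configuration is valid; the status output requires a live Tailnet connection.

```shell
# Start the Tailscale container defined in compose.yaml
docker compose up -d tailscale

# Verify that the node has joined the Tailnet; the advertised
# subnet route (172.21.0.0/24) should show as approved in the
# admin console if the auto-approver is working
docker exec tailscale tailscale status
```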

With this, all the containers that get added to the tailnet-subnet network and have an IP address in the 172.21.0.0/24 subnet will be accessible over my Tailnet. So I updated the configuration of the swag container to add it to the tailnet-subnet network.

  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - var1=value1
      - var2=value2
    volumes:
      - ./config/swag:/config
    ports:
      - 443:443
      - 80:80
    networks:
      tailnet-subnet:
        ipv4_address: 172.21.0.12
      default:
    restart: unless-stopped

In the above snippet, I attached the swag container to the tailnet-subnet network and assigned it a static IP address in that subnet, 172.21.0.12. Because explicitly listing any network removes the implicit attachment to the default network, I have also listed the default network explicitly.

With these configuration changes, the swag container was accessible at the 172.21.0.12 IP address over my Tailnet. But I still needed to set up DNS to access the services by domain name.

Tailscale provides a way to add a restricted nameserver for a specific domain using split DNS. So I needed a DNS server that resolved the domains of the services hosted on the swag container to its Tailnet subnet IP address, 172.21.0.12.

For this, I took inspiration from jpillora/dnsmasq and created a custom Dockerfile that set up a dnsmasq resolver.

FROM alpine:latest
LABEL maintainer="email@domain.tld"
RUN apk --no-cache add dnsmasq
RUN mkdir -p /etc/default \
    && echo -e "ENABLED=1\nIGNORE_RESOLVCONF=yes" > /etc/default/dnsmasq
COPY dnsmasq.conf /etc/dnsmasq.conf
EXPOSE 53/udp
ENTRYPOINT ["dnsmasq", "--no-daemon"]

Then I created a dnsmasq.conf configuration file that looks like the following snippet.

log-queries
no-resolv
address=/domain1.fqdn/172.21.0.12
address=/domain2.fqdn/172.21.0.12

Then I added the following snippet to my compose.yaml file to add the dnsmasq container.

  dnsmasq:
    build: "./build/dnsmasq"
    container_name: dnsmasq
    restart: unless-stopped
    volumes:
      - ./config/dnsmasq/dnsmasq.conf:/etc/dnsmasq.conf
    networks:
      tailnet-subnet:
        ipv4_address: 172.21.0.3

Then I ran docker compose build to build the image, and docker compose up -d dnsmasq to start the container. With that, I had a DNS resolver for my domain names on the Tailnet.

You might notice error messages in the dnsmasq container’s logs that look like dnsmasq: config error is REFUSED (EDE: not ready). This happens because we have not defined any upstream servers that dnsmasq can use. But since we want this dnsmasq instance to resolve only our domain names, this is okay and the error can be ignored.

Then on my Tailscale admin dashboard, I added a custom nameserver for my domain name and configured 172.21.0.3, the IP address of the dnsmasq container, as the address of the server to use. Now, all the devices on my Tailnet could access the services on my swag container by domain name.
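The setup can be verified from any device on the Tailnet. This is a sketch assuming the containers are running; domain1.fqdn is a placeholder for one of the real domains configured in dnsmasq.conf.

```shell
# Query the dnsmasq container directly to confirm it resolves
# the domain to the swag container's Tailnet subnet address
dig +short @172.21.0.3 domain1.fqdn
# should print 172.21.0.12 if dnsmasq is configured as above

# Once split DNS is configured in the Tailscale admin console,
# plain name resolution and TLS via SWAG should work end to end
curl -I https://domain1.fqdn
```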

I have an existing DNS setup on my home LAN that resolves the same domain names to the LAN IP addresses. So now, with this setup for Tailscale, my devices can seamlessly access the private services on my LAN and Tailnet.

If I want to add a new service to this setup, it is as easy as attaching it to the tailnet-subnet network, and adding the DNS records to the dnsmasq container’s configuration file and to the resolver on my home LAN.
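As a sketch, adding a hypothetical service would look like the following compose fragment; the service name, image, and IP address are placeholders, not part of my actual setup.

  newservice:
    image: example/newservice
    container_name: newservice
    networks:
      tailnet-subnet:
        ipv4_address: 172.21.0.20
    restart: unless-stopped

And the matching line in dnsmasq.conf:

address=/newservice.fqdn/172.21.0.20

After a docker compose up -d newservice and a restart of the dnsmasq container, the new service would be reachable by name over the Tailnet.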

Matrix is a modern, decentralized, federated real-time communication protocol and open standard. It has a thriving ecosystem of servers, clients, and applications. Synapse is the reference server and Element is the reference client for the web, desktop and mobile platforms.

Matrix protocol logo

This is something I have been interested in using and self-hosting for a few years now. I have had an account on the main matrix.org instance for a while and wanted to switch to a self-hosted instance.

Since I have been using docker, docker-compose, and Ansible to deploy and run a wide range of self-hosted applications for my personal use, the spantaleev/matrix-docker-ansible-deploy project was my choice for setting up my instance. I chose Synapse over Dendrite, the second-generation server, because although Dendrite is lightweight, it is not yet feature-complete. At the time of writing this post, all the other third-party implementations have a lot of catching up to do as well.

I learned a bit of Terraform in my previous job, but never had a chance to learn it properly or build something from scratch using it. So armed with my little knowledge of Terraform, I created a small Terraform project to automate setting up a new Matrix instance. It provisions the DNS records needed for Matrix on Namecheap, my domain registrar and DNS host; provisions an appropriately sized Hetzner Cloud instance with a floating IP address; and runs the deployment playbook in the matrix-docker-ansible-deploy repository with the provided Ansible variables file. I used the hcloud and the namecheap Terraform providers to do this.
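For readers unfamiliar with these providers, the core of such a project is a sketch like the following. The provider sources are the real hetznercloud/hcloud and namecheap/namecheap providers, but the resource arguments (server name, image, type, location) are illustrative placeholders, not the exact configuration from my project.

terraform {
  required_providers {
    hcloud = {
      source = "hetznercloud/hcloud"
    }
    namecheap = {
      source = "namecheap/namecheap"
    }
  }
}

provider "hcloud" {
  # API token supplied via a variable, not hard-coded
  token = var.hcloud_token
}

# The cloud server that will run the Matrix stack
resource "hcloud_server" "matrix" {
  name        = "matrix"
  image       = "debian-12"
  server_type = "cx22"
}

# A floating IP so the DNS records survive server rebuilds
resource "hcloud_floating_ip" "matrix" {
  type          = "ipv4"
  home_location = "nbg1"
}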

With this, I was able to provision and set up my Matrix instance in under 10 minutes by just running

$ terraform plan -out=matrix-plan
$ terraform apply "matrix-plan"

I have released the source code for this project here on GitLab under the GNU Affero General Public License v3.0 (AGPLv3) or later. Since this project contains the matrix-docker-ansible-deploy repository as a git submodule, running git submodule update --init should automatically pull in a known good commit of that repository to use for the deployment. The README file has the instructions for using the project to set up Matrix instances from scratch.

I hope it is useful for those who are looking to set up a new Matrix instance.