Category: Self-hosting

  • Redirect all outgoing DNS requests to the local Pi-hole instance using OPNsense

    In my previous post, I explained how I set up an N100 mini PC as an OPNsense firewall in my home network. One of the main purposes of having such a firewall was to (transparently) redirect all outgoing DNS requests to the local Pi-hole instance.

    I have a Pi-hole instance serving DHCP and DNS in my LAN, and most client devices respect the DNS settings provided by the DHCP server. But Android devices (I have an Android TV at home) ignore them and use the Google DNS servers instead. So I need this redirect in place to keep network-level ad-blocking working.

    Pi-hole logo

    On my previous firewall and gateway devices, I had to set up a DNAT rule that redirected all outgoing traffic to port 53, except traffic originating from the Pi-hole itself, to the Pi-hole instance’s address on port 53. To prevent the “unexpected source” errors that can occur after adding the DNAT rule, I also had to masquerade the redirected packets with SNAT so that they carried the firewall/gateway device’s IP address as their source.
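
    On a Linux-based device, that pair of rules looks roughly like the following sketch (a minimal illustration rather than my exact old configuration; it assumes a LAN interface named br0, the Pi-hole at 192.168.2.3, and UDP only, so TCP would need a matching pair of rules).

    # Redirect outgoing DNS, except from the Pi-hole itself, to the Pi-hole
    iptables -t nat -A PREROUTING -i br0 -p udp --dport 53 \
        ! -s 192.168.2.3 -j DNAT --to-destination 192.168.2.3:53
    # Masquerade the redirected packets so replies flow back through the
    # gateway, preventing "unexpected source" errors on the clients
    iptables -t nat -A POSTROUTING -p udp --dport 53 \
        -d 192.168.2.3 -j MASQUERADE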

    OPNsense, however, is built on top of FreeBSD, whose pf firewall works differently from Linux’s iptables. So this is what I had to do to get the DNS redirect working.

    I started with a fresh installation of OPNsense, so your mileage may vary if you start from an existing configuration. I enabled the ‘Automatic outbound NAT for Reflection’ option in Firewall > Settings > Advanced. This is necessary for automatically masquerading the source address while performing the redirect, since the default for outbound NAT in OPNsense is ‘Automatic outbound NAT rule generation (no manual rules can be used)’.

    Then I went to Firewall > NAT > Port Forward and clicked the + icon to add a new rule that looks like the screenshot below.

    Add a port forward rule

    Here are the relevant settings that I changed before saving the rule.

    • Interface: LAN
    • TCP/IP Version: IPv4
    • Protocol: TCP/UDP
    • Source/Invert: (checked)
    • Source: Single host or Network, 192.168.2.3/32 (this is the IP address of my Pi-hole instance)
    • Redirect target port: DNS (I apologize for the screenshot partially hiding this)
    • Description: Redirect all external DNS requests to PiHole
    • NAT Reflection: Enable
    • Filter rule association: Add associated filter rule

    Then I went to the Firewall > Rules > LAN page and moved the custom associated filter rule to the top of the rule list, so that it looks similar to the screenshot below.

    Associated firewall rule moved to the top of the rule list

    Now, when I queried the DNS record for a domain using dig, the response came from my Pi-hole instance. When I queried Google DNS for a domain blocked in my Pi-hole, using a command like dig blocked-domain.com @8.8.8.8, the query was redirected to the Pi-hole DNS server and I received a 0.0.0.0 address in the response, as if it had come from the Google DNS resolver at 8.8.8.8. So it all worked! 🎉
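
    For reference, the two checks look roughly like this (example.com and blocked-domain.com stand in for an actual allowed domain and an actual blocked domain).

    # Query the Pi-hole directly; this is answered locally with the real record
    $ dig example.com @192.168.2.3
    # Query Google DNS for a blocked domain; the firewall transparently
    # redirects this to the Pi-hole, which answers 0.0.0.0 as if it were 8.8.8.8
    $ dig blocked-domain.com @8.8.8.8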

    Bonus reading: https://labzilla.io/blog/force-dns-pihole. That post does a similar thing using pfSense, which OPNsense is a fork of; the rules differ slightly because they start from a different baseline, but they achieve the same result. It also explains a way to block all outgoing DNS-over-HTTPS (DoH) using pfSense. The linked Hacker News thread has insightful comments on this topic as well.

  • How I replaced my home firewall with an x86 OPNsense setup

    I have been working remotely for the past 6+ years, and my wife has been working remotely for the past few years. So we have two internet connections at our home, with one configured as the primary and the other as a backup on a TP-Link ER605 load balancer. The load balancer is configured to fail over automatically to the backup connection when the primary goes down.

    In our home, we have run Ethernet cables through the walls, with one port in each of the rooms and living rooms. All these cables terminate in a switch at a central hub, which connects through a firewall router to the load balancer and on to the internet. A high-level view of this setup looks like the following diagram.

    A high-level diagram of my home network and how it connects to the internet

    As shown in the above diagram, I also have a Pi-hole instance that acts as the DHCP and DNS server for my home LAN. It works well to provide network-level ad-blocking for all devices on the LAN. However, some devices, Android devices in particular, often ignore the DNS server provided by DHCP and use the hard-coded Google DNS instead, bypassing the ad-blocking. Even that is okay in many cases, except for a few. We have a Sony X90H smart television that runs the Android TV operating system, and without network-level ad-blocking, it shows a lot of non-dismissible advertisements for content from apps that we haven’t installed or used. So I have always used a firewall device of some sort to force the usage of Pi-hole as the DNS server in my LAN. I have done this in the past with a Netgear Nighthawk R7000 router running the FreshTomato firmware, a Seeed Studio reRouter, and, since last evening, a GL.iNet Beryl AX travel router.

    Speaking of that, the reRouter device, which I have used for 2+ years now, has been crashing and boot-looping frequently in the past few months and causing internet disconnections. I have been planning to replace that with a more reliable and powerful x86 mini-PC with OPNsense on it. I ordered the Skullsaints Onyx Intel 12th Gen N100 Mini PC last night for this new project. This was an easy choice since I have been hearing good things about N100 mini PCs on the Late Night Linux family of podcasts. While I waited for the delivery, I set up the GL.iNet Beryl AX travel router as a stop-gap replacement.

    Skullsaints Onyx Mini PC

    I bought this specific product because it has four 2.5G Ethernet ports, which would allow me to do internet load balancing too in the future and simplify my networking setup. It came with 8 GB of RAM and a no-name 256 GB M.2 NVMe SSD preloaded with Windows 11 Pro. As I had read reviews about this device heating up due to missing or dried-out thermal paste, I checked and confirmed that the thermal paste was intact.

    Then I downloaded the latest OPNsense image, dd’ed it to a USB flash drive (a sketch of that step follows this paragraph), and installed OPNsense on this device. I then opened the OPNsense web interface and went through the setup wizard to configure the firewall. When I installed it in place of my previous firewall, nothing worked and I had no idea why. I took help from the friendly folks on the #OPNsense IRC channel on libera.chat to correct my mistakes and get the configuration working the way I wanted it to. Below are the details of how I did it.
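
    For reference, the image-writing step looks roughly like this (the image file name and the /dev/sdX device node are placeholders for the actual values).

    # Write the OPNsense installer image to the USB flash drive; double-check
    # the target device node first, as this destroys all data on the drive
    $ sudo dd if=opnsense-installer.img of=/dev/sdX bs=4M status=progress conv=fsync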

    OPNsense wizard page showing the general system configuration options.
    General System Information configuration

    On the above page, I configured the hostname, the domain, and the DNS servers used by OPNsense. I specified 192.168.2.3, the IP address of my Pi-hole instance, as the primary DNS server and added the Google DNS address as the secondary. Even though it wasn’t necessary, I left the built-in Unbound resolver enabled.

    OPNsense wizard page showing the Time Server configuration options.
    Time Server configuration

    I configured my timezone on this page.

    OPNsense wizard page with configuration options for the WAN interface
    WAN interface configuration

    This page had a lot of options for configuring the WAN interface (I will need to revisit these when doing the multi-WAN load balancer setup in the future). I set up a static IP for the WAN interface in the 192.168.0.0/24 subnet, since that is what I had used in the previous setup. I also disabled the WAN-side blocks on RFC1918 networks and bogon networks (this was not strictly necessary), since this device doesn’t connect directly to the internet.

    OPNsense wizard page with options for configuring the LAN interface
    LAN interface configuration

    On this page, I configured the LAN interface address to be the same as in the previous setup. On the following page, I configured the root password and completed the wizard to apply the changes. With this setup, I had a working router between my LAN and the load balancer.

    Since the metal top of the mini PC’s case acts as a passive heat sink, I could feel it getting very hot even though the OPNsense thermal sensors showed a low, static temperature. I will monitor this in the coming days to make sure that there are no thermal issues.

    I still had to configure the firewall to force redirect all outgoing DNS requests to the local Pi-hole server, the details of which I will share in the next blog post — Redirect all outgoing DNS requests to the local Pi-hole instance using OPNsense.

  • Seamlessly access local services on LAN and Tailnet

    As I am passionate about self-hosting, I have been setting up various services in my homelab, in addition to those on my cloud servers. I have also been using Tailscale to access my devices and services while not at home. So I have wanted to have a seamless way to access the services, irrespective of whether I am on my home local area network (LAN) or connected to it via Tailscale. Below are my requirements for such a setup.

    • All the devices/services should be accessible using a fully-qualified domain name (FQDN), under a domain that I own and control. This rules out the auto-generated Tailscale subdomains.
    • I have a LinuxServer.io SWAG reverse proxy in front of all the services in my homelab, and it provides TLS termination. So I would like to access the existing services using TLS at all times.
    • While I could set up a Tailscale subnet router that allows access to my LAN, I do not want to allow the devices on my Tailnet full access to my LAN. And I do not want to redo my home LAN setup to isolate things to be able to do this.
    • The FQDNs of the exposed services should resolve to a LAN IP address when I am in my home LAN and to a Tailnet-specific address when I am not at home and connected to my Tailnet.
    • It should be possible to expose more services using this setup in the future, even if they are not behind the SWAG reverse proxy.
    • The base domain that I want to use for this should not have any publicly accessible DNS records pointing to private IP addresses for this setup to work.
    • The resulting setup should integrate into my existing docker-compose configuration.

    The Tailscale docker documentation illustrates a way to expose LAN services on a Tailnet, but the example on that page makes the service(s) accessible only over the Tailnet. So it doesn’t work for me.

    To start, I added a Tailscale docker container to my compose.yaml file using a configuration like the following.

      tailscale:
        image: tailscale/tailscale
        container_name: tailscale
        hostname: <tailnet device name>
        environment:
          - TS_ACCEPT_DNS=true
          - TS_AUTHKEY=<authkey or OAuth2 client secret>
          - TS_EXTRA_ARGS=--advertise-tags=tag:docker
          - TS_ROUTES=172.21.0.0/24
        volumes:
          - ./config/tailscale/state:/var/lib/tailscale
          - /dev/net/tun:/dev/net/tun
        cap_add:
          - net_admin
          - sys_module
        networks:
          tailnet-subnet:
            ipv4_address: 172.21.0.11
        restart: unless-stopped
    networks:
      tailnet-subnet:
        ipam:
          config:
            - subnet: 172.21.0.0/24
    

    For this to work, I had to define a tag named docker and add it to my Tailscale ACLs. I also added an ACL to auto-approve the routes advertised by this container.

    {
        // other configuration
        "tagOwners": {
            "tag:docker": ["autogroup:admin"],
        },
        "autoApprovers": {
            "routes": {
                "172.21.0.0/24": ["tag:docker"],
            },
        },
        // other configuration
    }
    

    With this, all the containers that get added to the tailnet-subnet network and have an IP address in the 172.21.0.0/24 subnet will be accessible over my Tailnet. So I updated the configuration of the swag container to add it to the tailnet-subnet network.

      swag:
        image: lscr.io/linuxserver/swag
        container_name: swag
        cap_add:
          - NET_ADMIN
        environment:
          - var1=value1
          - var2=value2
        volumes:
          - ./config/swag:/config
        ports:
          - 443:443
          - 80:80
        networks:
          tailnet-subnet:
            ipv4_address: 172.21.0.12
          default:
        restart: unless-stopped
    

    In the above snippet, I added the tailnet-subnet network to the networks key and assigned the container a static IP address in its subnet, 172.21.0.12. Since the default network is only attached implicitly when no networks are specified, and adding a different network removes that implicit attachment, I have also explicitly added the default network.

    With these configuration changes, the swag container was accessible at the 172.21.0.12 IP address over my Tailnet. But I still needed to set up DNS to access the services by domain name.
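
    A quick way to confirm this from any device on the Tailnet is a sketch like the following (the -k flag skips certificate verification, which is expected to fail when connecting by IP address instead of a domain name).

    # Check that the reverse proxy answers on its Tailnet subnet address
    $ curl -skI https://172.21.0.12 | head -n 1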

    Tailscale provides a way to add a restricted nameserver for a specific domain using split DNS. So I needed a DNS server that resolved the domains of the services hosted on the swag container to its Tailnet subnet IP address, 172.21.0.12.

    For this, I took inspiration from jpillora/dnsmasq and created a custom Dockerfile that sets up a dnsmasq resolver.

    FROM alpine:latest
    LABEL maintainer="email@domain.tld"
    RUN apk update \
        && apk --no-cache add dnsmasq
    RUN mkdir -p /etc/default \
        && echo -e "ENABLED=1\nIGNORE_RESOLVCONF=yes" > /etc/default/dnsmasq
    COPY dnsmasq.conf /etc/dnsmasq.conf
    EXPOSE 53/udp
    ENTRYPOINT ["dnsmasq", "--no-daemon"]
    

    Then I created a dnsmasq.conf configuration file that looks like the following snippet.

    log-queries
    no-resolv
    address=/domain1.fqdn/172.21.0.12
    address=/domain2.fqdn/172.21.0.12
    

    Then I added the following snippet to my compose.yaml file to add the dnsmasq container.

      dnsmasq:
        build: "./build/dnsmasq"
        container_name: dnsmasq
        restart: unless-stopped
        volumes:
          - ./config/dnsmasq/dnsmasq.conf:/etc/dnsmasq.conf
        networks:
          tailnet-subnet:
            ipv4_address: 172.21.0.3
    

    Then I ran docker compose build to build the container, and docker compose up -d dnsmasq to start it. With that, I had a DNS resolver to resolve my domain names in the Tailnet.
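
    From any device on the Tailnet, the resolver can be tested directly (domain1.fqdn is the placeholder name from the configuration file above).

    # Ask the dnsmasq container to resolve one of the configured names;
    # it should answer with the swag container's Tailnet subnet address
    $ dig +short domain1.fqdn @172.21.0.3
    172.21.0.12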

    You might notice error messages in the dnsmasq container’s logs that look like dnsmasq: config error is REFUSED (EDE: not ready). This happens because we have not defined any upstream servers for dnsmasq to use. But since we want this dnsmasq instance to resolve only our own domain names, this is okay and the error can be ignored.

    Then on my Tailscale admin dashboard, I added a custom nameserver for my domain name and configured 172.21.0.3, the IP address of the dnsmasq container, as the address of the server to use. Now, all the devices on my Tailnet could access the services on my swag container by domain name.

    I have an existing DNS setup on my home LAN that resolves the same domain names to the LAN IP addresses. So now, with this setup for Tailscale, my devices can seamlessly access the private services on my LAN and Tailnet.

    If I want to add a new service to this setup, it is as easy as adding the tailnet-subnet network to it, adding the DNS record to the dnsmasq container’s configuration file, and adding it to the resolver in my home LAN, as the sketch below shows.
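
    For example, exposing a hypothetical new service that has its own address in the tailnet-subnet network would look roughly like this (the name and address are illustrative).

    # Add a record for the new service and restart the resolver
    $ echo "address=/newservice.fqdn/172.21.0.42" >> ./config/dnsmasq/dnsmasq.conf
    $ docker compose restart dnsmasq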

  • ActivityPub integration

    This blog now integrates with the Fediverse using the ActivityPub protocol. This means that you can follow this blog by searching for lguruprasad@www.lguruprasad.in and following that account from any of the supported platforms mentioned here! 🎉

  • How I set up my self-hosted Matrix instance

    Matrix is a modern, decentralized, federated real-time communication protocol and open standard. It has a thriving ecosystem of servers, clients, and applications. Synapse is the reference server and Element is the reference client for the web, desktop and mobile platforms.

    Matrix protocol logo

    This is something that I have been interested in using and self-hosting for a few years. I have had an account on the main matrix.org instance for a while and wanted to switch to a self-hosted instance.

    Since I have been using docker, docker-compose, and Ansible to deploy and run a wide range of self-hosted applications for my personal use, the spantaleev/matrix-docker-ansible-deploy project was my choice for setting up my instance. I chose Synapse over Dendrite, the second-generation server, because although Dendrite is lightweight, it is not yet feature-complete. All the other third-party implementations have a lot of catching up to do as well, at the time of writing this post.

    I learned a bit of Terraform in my previous job, but never had a chance to learn it properly or build something from scratch with it. So, armed with my little knowledge of Terraform, I created a small Terraform project to automate setting up a new Matrix instance. It provisions the DNS records needed for Matrix on Namecheap (my domain registrar and DNS host), provisions an appropriately sized Hetzner cloud instance with a floating IP address, and runs the deployment playbook from the matrix-docker-ansible-deploy repository with the provided Ansible variables file. I used the hcloud and namecheap Terraform providers to do this.

    With this, I was able to provision and set up my Matrix instance in under 10 minutes by just running

    $ terraform plan -out=matrix-plan
    $ terraform apply "matrix-plan"

    I have released the source code for this project here on GitLab under the GNU Affero General Public License v3.0 (AGPLv3) or later. Since this project contains the matrix-docker-ansible-deploy repository as a git submodule, running git submodule update --init should automatically pull in a known good commit of that repository to use for the deployment. The README file has the instructions for using the project to set up Matrix instances from scratch.
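
    End to end, the setup steps before the plan/apply commands shown above look roughly like this (the repository URL and directory name are placeholders for the actual GitLab project).

    # Clone the project, pull in the pinned deployment playbook,
    # and initialize the Terraform providers
    $ git clone <gitlab-repo-url> && cd <project-directory>
    $ git submodule update --init
    $ terraform init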

    I hope it is useful for those who are looking to set up a new Matrix instance.