I’ve been using NixOS for over a year now for most of my server stuff, and I’m loving it. One of the missing services was my email setup, originally deployed on Debian 10 and later upgraded to Debian 11. I followed a setup guide back then and was quite happy, tweaking it over the years to my needs. Roughly, the software stack consists of Postfix as the MTA (“SMTP server”), Dovecot as the MDA (“IMAP server”), and rspamd, together with its dependency redis, for spam protection and SPF/DKIM/DMARC verification as well as DKIM signing of outgoing mail. I avoided a webmail service to keep things simple; I already use standard mail clients across my personal devices anyway.

Originally, I started hosting email myself to have a bigger part of the mail transmission pipeline under my control, and to force valid encryption against MITM attacks; that’s my alternative to using PGP. Recently, I wanted to move at least the mail storage to my personal server at home, so that no old mails lurk around on some hosted server indefinitely. So I actually had two incentives to change my mail server setup:

  1. Get the server config documented by moving to NixOS
  2. Move stored mail to my personal server for better privacy

Unfortunately, hosting a mail server from home is difficult. Many ISPs don’t provide public IPv4 addresses to their customers by default, which is still essential for email delivery. A lot of ISPs outright block port 25, which is needed for server-to-server connections, so not even receiving incoming mail is possible. Even some popular cloud hosting companies block port 25 by default, although you probably have a better chance requesting an unblock there than asking your ISP.

Mail server config

After looking for ways to build a relatively standard mail server config for NixOS, I settled on the nixos-mailserver project. Its easy configuration for a usual mail server setup is just a joy. The basic config for my server is the following:

  imports = [
    # commit is bound elsewhere to a pinned nixos-mailserver revision;
    # the sha256 has to match what builtins.fetchTarball reports for it
    (builtins.fetchTarball {
      url = "https://gitlab.com/simple-nixos-mailserver/nixos-mailserver/-/archive/${commit}/nixos-mailserver-${commit}.tar.gz";
      sha256 = "sha256:0h35al73p15z9v8zb6hi5nq987sfl5wp4rm5c8947nlzlnsjl61x";
    })
  ];
  mailserver = {
    enable = true;
    # Only offer the implicit-TLS ("SSL") variants of IMAP and submission,
    # and no POP3 at all
    enableImap = false;
    enableImapSsl = true;
    enablePop3 = false;
    enablePop3Ssl = false;
    enableSubmission = false;
    enableSubmissionSsl = true;
    fqdn = "mail.mynacol.xyz";
    domains = [ "mynacol.xyz" ];
    # Scheme 3: fetch the certificate via ACME/Let's Encrypt
    certificateScheme = 3;
    dkimKeyBits = 3072;
    dmarcReporting.enable = false;
    loginAccounts = {
      "[user]@mynacol.xyz" = {
        # bcrypt hash, e.g. generated with mkpasswd -sm bcrypt
        hashedPasswordFile = "/var/lib/dovecot/passwords/[user]@mynacol.xyz";
        aliases = [
          # Catch-all: all mail for the domain ends up in this account
          "@mynacol.xyz"
        ];
      };
    };
    mailDirectory = "/var/vmail";
    messageSizeLimit = 104857600; # 100 MiB
    localDnsResolver = true;
  };
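
For completeness: the import above assumes a commit binding in scope. A minimal sketch with a placeholder value (whichever revision you pin, the sha256 above has to be updated to match):

  let
    # Placeholder: pin this to a concrete nixos-mailserver release or commit hash
    commit = "master";
  in {
    # imports = [ ... ]; and mailserver = { ... }; from above
  }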

Of course, this deploys both the MTA (which needs a public IP with unrestricted port 25 access) and the MDA (which has access to the stored mails) on the same server. But I wanted the MDA, and with it the stored mails, on my private server at home!

Tunneling entire public IPs

My solution is to provide unrestricted public IP addresses to my server at home via tunneling. Email doesn’t use TLS directly by default, so tunneling solutions that rely on the TLS SNI header to forward traffic are not an option here. Simply forwarding all traffic on port 25 is not enough either, as mail servers not only reply to client requests, but also initiate connections of their own when sending mail to other mail servers. Those outgoing connections have to use the right source IP addresses as defined by your SPF policy and, again, need to be able to reach port 25, which is strictly blocked by my home ISP. Therefore, I settled on a solution that provides a virtual network interface: WireGuard.

After some fiddling around, I’m able to “forward” public IPv4 and IPv6 addresses originally allocated to a vServer to my private server at home, and I even managed to do that without NAT or other dirty tricks. The two important parts are to set the client IPs in WireGuard to the public IPs the vServer originally had (after removing them from the vServer), and to enable forwarding and ARP/NDP proxying on the server side with a few sysctl settings. This means you need multiple public IPs (IPv6 works too) allocated to the vServer.

Server config:

  boot.kernel.sysctl = {
    # Forward packets between the WireGuard tunnel and the uplink
    "net.ipv4.conf.default.forwarding" = 1;
    "net.ipv4.conf.all.forwarding" = 1;
    "net.ipv6.conf.default.forwarding" = 1;
    "net.ipv6.conf.all.forwarding" = 1;
    # Answer ARP/NDP requests for the forwarded IPs on behalf of the
    # tunnel client, so the hoster keeps routing them to this vServer
    "net.ipv4.conf.default.proxy_arp" = 1;
    "net.ipv4.conf.all.proxy_arp" = 1;
    "net.ipv6.conf.default.proxy_ndp" = 1;
    "net.ipv6.conf.all.proxy_ndp" = 1;
  };

  networking.wireguard.interfaces = {
    wg0 = {
      # Just some private IPs for the tunnel endpoints themselves
      ips = [ "10.0.0.1/32" "fdfd:1234:fedc::1/128" ];
      listenPort = 51800;
      privateKeyFile = "/wg0.privkey";
      peers = [
        {
          publicKey = "OV6Z2a2Q2Fgc2BOVAiGiPGzB5B9Ppzidm7qzTNVpKUc=";
          # Route the forwarded public IPs through the tunnel to the home server
          allowedIPs = [ "[public IPv4 to forward]" "[public IPv6 to forward]" ];
        }
      ];
    };
  };
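
One thing the snippet doesn’t show: assuming the default NixOS firewall is enabled on the vServer, the WireGuard listen port also has to be opened:

  # Accept incoming WireGuard handshakes on the listen port configured above
  networking.firewall.allowedUDPPorts = [ 51800 ];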

Client config:

  networking.wireguard.interfaces = {
    wg0 = {
      # The forwarded public IPs become the home server's own addresses
      ips = [ "[public IPv4 to forward]" "[public IPv6 to forward]" ];
      #interfaceNamespace = "wg";
      privateKeyFile = "/wg0.privkey";
      peers = [
        {
          publicKey = "Bk7rs7l3+3aOBgPYkHCGw1tP9aQZ4zcm9GH3kSsBQ3g=";
          # Forward all the traffic via VPN.
          allowedIPs = [ "0.0.0.0/0" "::/0" ];
          # host:port; the port must match the server's listenPort
          endpoint = "[server IP]:51800";
        }
      ];
    };
  };

After I deployed this config, I noticed it didn’t fully work. Outgoing traffic was routed through the tunnel as intended, but curl -4 https://wtfismyip.com/text revealed that all my IPv4 traffic, not just mail, now left through the vServer. Incoming traffic, on the other hand, was weird: IPv4 was just fine, but pings to the tunneled IPv6 address weren’t answered. At first I assumed firewall settings or routing at the cloud hosting company were responsible, but nothing fixed it. So I started tshark, the CLI variant of Wireshark, to capture the traffic on my server. There I noticed that the server did try to answer connection requests, but with the wrong source IPv6 address. That’s the reason responses weren’t coming through.

Using Network Namespaces in NixOS

At that point I decided to isolate the network interfaces to avoid any routing (mis-)configuration altogether. This would also get rid of the unwanted IPv4 default route through the tunnel. While I could have used containers for this isolation, and I believe the NixOS approach to containers is really nice, I didn’t want to mess with bind mounts just so the container could store the mails in a host directory. Instead, I used network namespaces directly.
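
For comparison, a rough and hypothetical sketch of the container route I decided against, with the mail storage bind-mounted from the host:

  containers.mail = {
    autoStart = true;
    # Bind-mount the host's mail directory into the container
    bindMounts."/var/vmail" = {
      hostPath = "/var/vmail";
      isReadOnly = false;
    };
    config = { ... }: {
      # the mailserver = { ... }; config would live in here
    };
  };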

Conveniently, the NixOS module for WireGuard interfaces gained network namespace support some time ago. Less conveniently, it requires an already existing second network namespace, which you have to create yourself. Fortunately, I found configs of other people doing exactly that.

While trying this method I repeatedly stumbled over errors such as RTNETLINK answers: File exists. After some research I learned that this happens when you try to create a network interface with an already taken name. My server still had a wg0 interface in the default namespace, and the new config first creates the WireGuard interface there before moving it into the other network namespace, so I kept triggering this error and thought my config was wrong. I rebooted my server (deleting the stale interface with ip link del wg0 would have done the job, too), and my config finally worked.

In the end I added a separate systemd service that creates and deletes the namespace as required. The WireGuard interface service depends on the netns service, and networking.wireguard.interfaces.wg0.interfaceNamespace automatically moves the wg0 interface into the second network namespace. Finally, I modified the postfix, dovecot, acme and kresd service configurations to exclusively have access to the second network namespace, and consequently to the mail-reserved public IPs. For redis I avoid IP sockets altogether by using unix sockets instead.

Full config:

{ config, lib, pkgs, ... }:
let
  # Shared snippet: run a service inside the "wg" network namespace,
  # and only start it once the WireGuard tunnel is up
  moveNs = {
    requires = [ "wireguard-wg0.service" ];
    after = [ "wireguard-wg0.service" ];
    serviceConfig.NetworkNamespacePath = "/var/run/netns/wg";
  };
in {
  # Move postfix into the wg network namespace
  systemd.services.postfix = moveNs;
  # Move dovecot too; for the certificate it shares the same domain name,
  # and therefore the same IP, with postfix
  systemd.services.dovecot2 = moveNs;
  # Don't use the nginx webroot for the ACME challenge; nginx might already
  # be running in the default namespace. Let ACME listen on port 80 itself.
  security.acme.certs."mail.mynacol.xyz".listenHTTP = ":80";
  security.acme.certs."mail.mynacol.xyz".webroot = null;
  # Certificate retrieval has to happen inside the network namespace as well
  systemd.services."acme-mail.mynacol.xyz" = moveNs;
  # Same for DNS resolution via the local resolver
  systemd.services."kresd@" = moveNs;

  # Use unix sockets for redis: disable the TCP port and point rspamd
  # at the socket instead
  services.redis.servers.rspamd.port = 0;
  services.rspamd.locals."redis.conf".text = lib.mkForce ''
    servers = "${config.services.redis.servers.rspamd.unixSocket}";
  '';
  # Grant rspamd access to redis' unix socket
  systemd.services.rspamd.serviceConfig.SupplementaryGroups = config.users.users."${config.services.redis.servers.rspamd.user}".group;

  # Template service that creates the network namespace %i
  # and deletes it again when stopped
  systemd.services."netns@" = {
    description = "%I network namespace";
    before = [ "network.target" ];
    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = true;
      ExecStart = "${pkgs.iproute2}/bin/ip netns add %i";
      # Bring up the loopback interface, else localhost connections are broken
      ExecStartPost = "${pkgs.iproute2}/bin/ip netns exec %i ${pkgs.iproute2}/bin/ip link set dev lo up";
      ExecStop = "${pkgs.iproute2}/bin/ip netns del %i";
    };
  };

  # Move the wg0 interface into the "wg" namespace once both exist
  networking.wireguard.interfaces.wg0.interfaceNamespace = "wg";

  # Create wg namespace for wireguard connection
  systemd.services.wireguard-wg0 = {
    bindsTo = [ "netns@wg.service" ];
    after = [ "netns@wg.service" ];
  };
}
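
To verify the isolation after deployment, ip netns exec wg ip addr on the home server should list only the loopback device and wg0 carrying the forwarded public IPs, while the default namespace no longer sees those addresses at all.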