tldr: I’d like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few selfhosted services on the internet, but I’m not sure what the best/safest way to do it is. Asking my partner to use Tailscale or WireGuard is asking too much unfortunately. I was curious to know what you all recommend.
I have some services running on my LAN that I currently access via tailscale. Some of these services would see some benefit from being accessible on the internet (ex. Immich sharing via a link, switching over from Plex to Jellyfin without requiring my family to learn how to use a VPN, homeassistant voice stuff, etc.) but I’m kind of unsure what the best approach is. Hosting services on the internet has risk and I’d like to reduce that risk as much as possible.
-
I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share cpu resources with other users and get a dedicated box?
-
Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?
-
What’s the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.
-
Any other tips or info you care to share would be greatly appreciated.
-
Feel free to talk me out of it as well.
I use traefik with a wildcard domain pointing to a Tailscale IP for services I don’t want to be public. For the services I want to be publicly available I use cloudflare tunnels.
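For anyone unfamiliar with Cloudflare tunnels, the public side is driven by a small cloudflared config. A minimal sketch (the tunnel ID, hostnames and backend ports here are placeholders, not the commenter’s actual setup):

  # /etc/cloudflared/config.yml
  tunnel: <tunnel-id>
  credentials-file: /etc/cloudflared/<tunnel-id>.json
  ingress:
    - hostname: immich.example.com
      service: http://localhost:2283
    - hostname: jellyfin.example.com
      service: http://localhost:8096
    # anything else gets a 404 instead of reaching the LAN
    - service: http_status:404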
Why is it too much to ask your partner to use WireGuard? I installed WireGuard for my wife on her iPhone; she can access everything on our home network as if she were at home, and she doesn’t even know that she is using a VPN.
A few reasons
- My partner has plenty of hobbies but sys-admin isn’t one of them. I know I’ll show them how to turn off wireguard to troubleshoot why “the internet isn’t working” but eventually they would forget. Shit happens, sometimes servers go down and sometimes turning off wireguard would allow the internet to work lol
- I’m a worrier. If there was an emergency, my partner needed to access the internet but couldn’t because my DNS server went down, my wireguard server went down, my ISP shit the bed, our home power went out, etc., and they forgot about the VPN, I’d feel terrible.
- I was a little too ambitious when I first got into self hosting. I set up services and shared them before I was ready and ended up resetting them constantly for various reasons. For example, my Plex server is on its 12th iteration. My partner is understandably wary of trying stuff I’ve set up. I’m at a point where I don’t introduce them to a service I set up unless accessing it is no different than using an app (like the Home Assistant app) or visiting a website. That intermediary step of ensuring the VPN is on and functional before accessing the service is more than I’d prefer to ask of them.
Telling my partner to visit a website seems easy; they visit websites every day, but they don’t use a VPN every day and they don’t care to.
you’re talking to a community of admins that force their family to “use the thing”. they can’t understand why anyone can’t debug tech issues because they have surrounded themselves with people who can.
I get it, my wife isn’t technical at all. she gets online about once a week to check email. I couldn’t even begin to explain to her how to debug her connection problems past turn it off and on again.
so, to simplify things, she doesn’t connect to the home network outside of the home network. but I was able to teach her how to download movies/shows from Plex to her phone and I was able to explain why ads show up on her apps when she’s out of the house.
it’s not perfect, but it’s the best I can give her with her understanding of the technology. knowing the limitations of your user base is just as important as developing the tools they will use and how they will access them.
nixos with nginx services does all proxying and ssl stuff, fail2ban is there as well
I know I should learn NixOS, I even tried for a few hours one evening but god damn, the barrier to entry is just a little too high for me at the moment 🫤
i guess you were able to install the os ok? are you using proxmox or regular servers?
i can post an example configuration.nix for the proxy and container servers that might help. i have to admit debugging issues with configurations can be very tricky.
in terms of security i was always worried about getting hacked. the only protection for that was to make regular backups of data and config so i can restore services, and to create a dmz behind my isp router with a vlan switch and a small router just for my services to protect the rest of my home network
i guess you were able to install the os ok? are you using proxmox or regular servers?
I was. It was learning the Nix way of doing things that was just taking more time than i had anticipated. I’ll get around to it eventually though
I tried out proxmox years ago but besides the web interface, I didn’t understand why I should use it over Debian or Ubuntu. At the moment, I’m just using Ubuntu and docker containers. In previous setups, I was using KVMs too.
Correct me if I’m wrong, but don’t you have to reboot every time you change your Nix config? That was what was painful. Once it’s set up the way you want, it seemed great but getting to that point for a beginner was what put me off.
I would be interested to see the config though
i have found this reference very useful https://mynixos.com/options/
yeah proxmox is not necessary unless you need lots of separate instances to play around with
this is my container config for element/matrix. podman containers do not run as root so you have to get the file privileges right on the volumes mapped into the containers. i used
top
to find out what user the services were running as. you can see there are some settings there where you can change the user if you are having permissions problems

{ pkgs, modulesPath, ... }:
{
  imports = [ (modulesPath + "/virtualisation/proxmox-lxc.nix") ];
  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];
  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = false;
  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };
  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };
  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;
  environment.etc = {
    "fail2ban/filter.d/matrix-synapse.local".text = pkgs.lib.mkDefault (pkgs.lib.mkAfter ''
      [Definition]
      failregex = .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Failed password login.*
                  .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Attempted to login as.*\n.*Invalid username or password.*
    '');
  };
  services.fail2ban = {
    enable = true;
    maxretry = 3;
    bantime = "10m";
    bantime-increment = {
      enable = true;
      multipliers = "1 2 4 8 16 32 64";
      maxtime = "168h";
      overalljails = true;
    };
    jails = {
      matrix-synapse.settings = {
        filter = "matrix-synapse";
        action = "%(known/action)s";
        logpath = "/srv/logs/synapse.json.log";
        backend = "auto";
        findtime = 600;
        bantime = 600;
        maxretry = 2;
      };
    };
  };
  virtualisation.oci-containers = {
    containers = {
      postgres = {
        autoStart = false;
        environment = {
          POSTGRES_USER = "XXXXXX";
          POSTGRES_PASSWORD = "XXXXXX";
          LANG = "en_US.utf8";
        };
        image = "docker.io/postgres:14";
        ports = [ "5432:5432" ];
        volumes = [ "/srv/postgres:/var/lib/postgresql/data" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };
      synapse = {
        autoStart = false;
        environment = {
          LANG = "C.UTF-8";
          # UID="0";
          # GID="0";
        };
        # user = "1001:1000";
        image = "ghcr.io/element-hq/synapse:latest";
        ports = [ "8008:8008" ];
        volumes = [ "/srv/synapse:/data" ];
        log-driver = "json-file";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--log-opt" "max-size=10m"
          "--log-opt" "max-file=1"
          "--log-opt" "path=/srv/logs/synapse.json.log"
          "--pull=newer"
        ];
        dependsOn = [ "postgres" ];
      };
      element = {
        autoStart = true;
        image = "docker.io/vectorim/element-web:latest";
        ports = [ "8009:80" ];
        volumes = [ "/srv/element/config.json:/app/config.json" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
        # dependsOn = [ "synapse" ];
      };
      call = {
        autoStart = true;
        image = "ghcr.io/element-hq/element-call:latest-ci";
        ports = [ "8080:8080" ];
        volumes = [ "/srv/call/config.json:/app/config.json" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };
      livekit = {
        autoStart = true;
        image = "docker.io/livekit/livekit-server:latest";
        ports = [
          "7880:7880"
          "7881:7881"
          "50000-60000:50000-60000/udp"
          "5349:5349"
          "3478:3478/udp"
        ];
        cmd = [ "--config" "/etc/config.yaml" ];
        entrypoint = "/livekit-server";
        volumes = [ "/srv/livekit:/etc" ];
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };
      livekitjwt = {
        autoStart = true;
        image = "ghcr.io/element-hq/lk-jwt-service:latest-ci";
        ports = [ "7980:8080" ];
        environment = {
          LK_JWT_PORT = "8080";
          LIVEKIT_URL = "wss://livekit.xxxxxx.dynu.net/";
          LIVEKIT_KEY = "XXXXXX";
          LIVEKIT_SECRET = "XXXXXX";
        };
        entrypoint = "/lk-jwt-service";
        extraOptions = [ "--label" "io.containers.autoupdate=registry" "--pull=newer" ];
      };
    };
  };
}
this is my nginx config for my element/matrix services
as you can see i am using a proxmox NixOS with an old 23.11 nix channel but i’m sure the config can be used in other NixOS environments
{ pkgs, modulesPath, ... }:
{
  imports = [ (modulesPath + "/virtualisation/proxmox-lxc.nix") ];
  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];
  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = true;
  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };
  networking.firewall.allowedTCPPorts = [ 80 443 ];
  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };
  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;
  security.acme = {
    acceptTerms = true;
    defaults.email = "XXXXXX@yahoo.com";
  };
  services.nginx = {
    enable = true;
    virtualHosts._ = {
      default = true;
      extraConfig = "return 500; server_tokens off;";
    };
    virtualHosts."XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/_matrix/federation/v1" = {
        proxyPass = "http://192.168.10.131:8008/";
        extraConfig =
          "client_max_body_size 300M;"
          + "proxy_set_header X-Forwarded-For $remote_addr;"
          + "proxy_set_header Host $host;"
          + "proxy_set_header X-Forwarded-Proto $scheme;";
      };
      locations."/" = {
        extraConfig = "return 302 https://element.xxxxxx.dynu.net/;";
      };
      extraConfig = "proxy_http_version 1.1;";
    };
    virtualHosts."matrix.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      extraConfig = "proxy_http_version 1.1;";
      locations."/" = {
        proxyPass = "http://192.168.10.131:8008/";
        extraConfig =
          "client_max_body_size 300M;"
          + "proxy_set_header X-Forwarded-For $remote_addr;"
          + "proxy_set_header Host $host;"
          + "proxy_set_header X-Forwarded-Proto $scheme;";
      };
    };
    virtualHosts."element.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8009/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."call.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8080/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."livekit.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/wss" = {
        proxyPass = "http://192.168.10.131:7881/";
        # proxyWebsockets = true;
        extraConfig =
          "proxy_http_version 1.1;"
          + "proxy_set_header X-Forwarded-For $remote_addr;"
          + "proxy_set_header Host $host;"
          + "proxy_set_header Connection \"upgrade\";"
          + "proxy_set_header Upgrade $http_upgrade;";
      };
      locations."/" = {
        proxyPass = "http://192.168.10.131:7880/";
        # proxyWebsockets = true;
        extraConfig =
          "proxy_http_version 1.1;"
          + "proxy_set_header X-Forwarded-For $remote_addr;"
          + "proxy_set_header Host $host;"
          + "proxy_set_header Connection \"upgrade\";"
          + "proxy_set_header Upgrade $http_upgrade;";
      };
    };
    virtualHosts."livekit-jwt.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:7980/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."turn.XXXXXX.dynu.net" = {
      enableACME = true;
      http2 = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:5349/";
      };
    };
  };
}
you only need to reboot Nix when something low level has changed. i honestly don’t know where that line is drawn, so i reboot quite a lot when i’m setting up a Nix server and then hardly reboot it at all from then on, even with auto-updates running. oh, and if i make small changes to the services i just run
sudo nixos-rebuild switch
and don’t reboot
A fairly common setup is something like this:
Internet -> nginx -> backend services.
nginx is the https endpoint and has all the certs. You can manage the certs with letsencrypt on that system. This box now handles all HTTPS traffic to and within your network.
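As a rough sketch of that cert management on a Debian/Ubuntu box (the domain names are placeholders), certbot’s nginx plugin can do most of the work:

  sudo apt install certbot python3-certbot-nginx
  sudo certbot --nginx -d x.example.com -d y.example.com
  # certbot sets up automatic renewal; confirm it works with
  sudo certbot renew --dry-run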
The more paranoid will have parts of this setup all over the world, connected through VPNs so that “your IP is safe”. But it’s not necessary and costs more. Limit your exposure, ensure your services are up-to-date, and monitor logs.
fail2ban can give some peace-of-mind for SSH scanning and the like. If you’re using certs to authenticate rather than passwords though you’ll be okay either way.
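If it helps, key-only SSH plus the stock fail2ban sshd jail looks roughly like this (a sketch, paths as on Debian/Ubuntu, values to tune yourself):

  # /etc/ssh/sshd_config.d/hardening.conf
  PasswordAuthentication no
  PermitRootLogin prohibit-password

  # /etc/fail2ban/jail.local
  [sshd]
  enabled  = true
  maxretry = 5
  bantime  = 1h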
Update your servers daily. Automate it so you don’t need to remember. Even a simple “doupdates” script that just does “apt-get update && apt-get upgrade && reboot” will be fine (though you can make it smarter about when it actually needs to reboot). Have its output mailed to you so that you see if there are failures.
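A minimal version of that kind of script might look like this (the mail address is a placeholder, and it assumes a working local MTA for mail delivery):

  #!/bin/sh
  # doupdates: run from a daily cron job, mail the result, reboot only if required
  LOG=$(apt-get update 2>&1; DEBIAN_FRONTEND=noninteractive apt-get -y upgrade 2>&1)
  echo "$LOG" | mail -s "updates: $(hostname)" you@example.com
  # Debian/Ubuntu create this flag file when a reboot is actually needed
  [ -f /var/run/reboot-required ] && reboot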
You can register a cheap domain pretty easily, and then you can sub-domain the different services. nginx can point “x.example.com” to backend service X and “y.example.com” to backend service Y based on the hostname requested.
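A stripped-down pair of server blocks for that hostname-based routing might look like this (IPs, ports, and names are illustrative, and the cert paths assume certbot/letsencrypt defaults):

  server {
      listen 443 ssl;
      server_name x.example.com;
      ssl_certificate     /etc/letsencrypt/live/x.example.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/x.example.com/privkey.pem;
      location / {
          proxy_pass http://192.168.1.10:8080;   # backend service X
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $remote_addr;
      }
  }
  server {
      listen 443 ssl;
      server_name y.example.com;
      ssl_certificate     /etc/letsencrypt/live/y.example.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/y.example.com/privkey.pem;
      location / {
          proxy_pass http://192.168.1.11:9090;   # backend service Y
      }
  }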
I would recommend automating only daily security updates, not all updates.
Ubuntu and Debian have “unattended-upgrades” for this. RPM-based distros have an equivalent.
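On Debian/Ubuntu that amounts to roughly this (a sketch of the stock configuration, not a complete file):

  # apt install unattended-upgrades
  # /etc/apt/apt.conf.d/20auto-upgrades
  APT::Periodic::Update-Package-Lists "1";
  APT::Periodic::Unattended-Upgrade "1";

  # /etc/apt/apt.conf.d/50unattended-upgrades -- security origins only
  Unattended-Upgrade::Allowed-Origins {
      "${distro_id}:${distro_codename}-security";
  };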
Agree - good point.
I’ve tried 3 times so far in Python/gradio/Oobabooga and never managed to get certs to work, or to find a complete visual reference guide that demonstrates a working example like what I’m looking for on a home network. (Only really commenting to subscribe to watch this post develop, and to solicit advice. :)
I’ve played around with reverse proxies and ssl certs and the easiest method I’ve found so far was docker. Just haven’t put anything in production yet. If you don’t know how to use docker, learn, it’s so worth it.
Here is the tutorial I used and the note I left for myself. You’ll need a domain to play around with. Once you figure out how to get NGINX and certbot set up, replacing the helloworld container with a different one is relatively straightforward.
DO NOT FORGET: you must give certbot read/write permissions in the docker-compose.yml file, which isn't shown in this tutorial.

-----EXAMPLE, NOT PRODUCTION CODE----

  nginx:
    container_name: nginx
    restart: unless-stopped
    image: nginx
    depends_on:
      - helloworld
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./certbot/conf:/etc/letsencrypt:ro
      - ./certbot/www:/var/www/certbot:ro

  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt:rw
      - ./certbot/www:/var/www/certbot:rw
    command: certonly --webroot -w /var/www/certbot --keep-until-expiring --email *email* -d *domain1* -d *domain2* --agree-tos
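The nginx.conf mounted above isn't shown here either; a minimal sketch that answers the certbot webroot challenge and proxies everything else to the helloworld container (assuming it listens on port 80, with example.com as a placeholder domain) could look like:

  events {}
  http {
      server {
          listen 80;
          server_name example.com;
          # ACME http-01 challenge files written by the certbot container
          location /.well-known/acme-challenge/ {
              root /var/www/certbot;
          }
          location / {
              return 301 https://$host$request_uri;
          }
      }
      server {
          listen 443 ssl;
          server_name example.com;
          ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
          ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
          location / {
              proxy_pass http://helloworld:80;
              proxy_set_header Host $host;
          }
      }
  }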