Admiral Patrick

I’m surprisingly level-headed for being a walking knot of anxiety.

Ask me anything.

Special skills include: Knowing all the “na na na nah nah nah na” parts of the Three’s Company theme.

I also develop Tesseract UI for Lemmy/Sublinks

Avatar by @SatyrSack@feddit.org

  • 15 Posts
  • 219 Comments
Joined 3 years ago
Cake day: June 6th, 2023


  • I’m now running 9 of the Dell equivalents to those, and they’re doing well. They average 15-20 watts at normal load and usually no more than 30-35 watts running full tilt. Five of them are unprovisioned, but I got a good deal on them at $25 each, so I couldn’t pass them up :shrug:.

    Attempting to cable-manage the power bricks for more than one of these is the worst part of using them. The only life pro tip I can offer is to ditch the power bricks and buy a 65W USB-C power delivery adapter in the “wall wart” style, plus one of the USB-C to Lenovo power adapter cords. Those make cable management so much better.

    Wall Wart

    Adapter Cable (these are for my Dells but they make them for most brands/styles)





  • I downgraded from used enterprise gear to those ultra small form factor PCs. They sip power well enough on their own that I haven’t really bothered tuning anything. I suppose I could cap the frequency with cpufrequtils and set the governor to conservative rather than ondemand (I do this with my battery-powered RasPi projects; a rough sketch is at the end of this comment), but I’m not sure how much difference that would make for my servers.

    In the past, I had Docker Swarm set up, along with automation to collapse the swarm down to a single machine (powering the others down and waking them back up with WoL), but that was more trouble than it was worth. Under average load, the USFF PCs run at about 15 watts and don’t usually peak above 30 unless they’re rebooting or doing something very heavy. Even transcoding doesn’t break 20 watts since I’m using hardware acceleration.

    The biggest power savings that were worth the effort came from just getting rid of the enterprise gear, switching from VMs to Docker containers where possible, and dropping stuff I’m not using (or only running it on demand).

    The only enterprise power hog I have left is my managed switch. It’s a 2005-era dinosaur that’s loud and power hungry, but it’s been such a workhorse that I’m having a hard time parting with it.
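
    For reference, the frequency/governor tuning mentioned above would look roughly like this on a Debian/Ubuntu-family install. It’s only a sketch: the 1.8GHz cap is an arbitrary example value, and I haven’t actually applied this to my servers.

    # Sketch: cap the CPU frequency and switch to the conservative governor
    sudo apt install cpufrequtils

    # Apply immediately to all related cores (1.8GHz is just an example ceiling)
    sudo cpufreq-set -r -g conservative -u 1.8GHz

    # Persist the governor choice across reboots
    echo 'GOVERNOR="conservative"' | sudo tee /etc/default/cpufrequtils

    # Verify
    cpufreq-info | grep -E 'governor|hardware limits'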



  • Like you’re thinking: put HAProxy on your OpenWRT router.

    That’s what I do. The HAProxy setup is kind of “dumb”: L4/TCP only (rather than HTTP/S), since I wanted all of my logic in the Nginx services. The main things HAProxy does are, like you’re looking for, putting the SPOF alongside the other unavoidable SPOF (the router) and wrapping requests in PROXY protocol so the downstream Nginx services see the correct client IP.

    Flow is basically:

    LAN/WAN/VPN -> HAProxy -> Two Nginx Instances -> Apps
    

    With HAProxy on the router, I can also point internal DNS records for my apps at the router’s LAN IP. A rough sketch of the relevant HAProxy/Nginx bits is below.
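
    This is only an illustration of that kind of L4 pass-through with PROXY protocol, not a copy of my config: the backend names and 192.168.1.x addresses are placeholders, and the Nginx part assumes the realip module is compiled in (it is in most distro packages).

    # haproxy.cfg (sketch): dumb TCP pass-through, wrapping connections in PROXY protocol
    frontend tls_in
        bind :443
        mode tcp
        option tcplog
        default_backend nginx_nodes

    backend nginx_nodes
        mode tcp
        balance roundrobin
        server nginx1 192.168.1.11:443 check send-proxy-v2
        server nginx2 192.168.1.12:443 check send-proxy-v2

    # nginx (sketch): accept PROXY protocol and recover the real client IP
    server {
        listen 443 ssl proxy_protocol;
        set_real_ip_from 192.168.1.1;     # the router/HAProxy address
        real_ip_header   proxy_protocol;
        # ...usual ssl_certificate, locations, etc...
    }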





  • I’ve got bot detection set up in Nginx on my VPS which used to return 444 (Nginx’s special code for “close the connection and waste no more resources processing it”), but I recently started piping that traffic to Nepenthes to return gibberish data for them to train on.

    I documented a rough guide in the comment here. Of relevance to you are the two .conf files at the bottom. In deny-disallowed.conf, change the return 301 ... line to return 444.

    I also use a firewall and fail2ban on the VPS to block bad actors, overly aggressive scrapers, password brute-forcing, etc., so the link between the VPS and my homelab equipment never sees that traffic.

    In the case of a DDoS, I’ve done the following:

    • Enable aggressive rate limits in Nginx (everything may be slow for everyone, but it’s still up; rough sketch below)
    • Just stop either WireGuard or Nginx on the VPS until the storm blows over (crude, but useful to avoid bandwidth overages if you’re charged for inbound traffic).

    Granted, I’m not running anything mission-critical, just some services for friends and family, so I can deal with a little downtime.
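
    For the rate-limit option above, this is roughly the shape of it in Nginx. The zone name, 5r/s rate, and burst value are made-up examples to tune for your own traffic, not what I actually run.

    # nginx (sketch): basic per-IP request rate limiting
    # In the http {} block:
    limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

    # In the server {} or location {} block(s) you want to protect:
    limit_req zone=perip burst=10 nodelay;
    limit_req_status 429;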



  • I have never used it, so take this with a grain of salt, but last I read, the free tier would not let you secure traffic between yourself and Cloudflare with your own certs, which implies they can decrypt and read that traffic. What, if anything, they do with that capability I do not know. I just do not trust my hosted assets to be secured with certs/keys I do not control.

    There are other things CF can do (bot detection, DDoS protection, etc.), but if you just want to avoid exposing your home IP, a cheap VPS running Nginx can work the same way as a CF tunnel. Set up WireGuard on the VPS and point your Nginx backends at your home assets over that tunnel. If the VPS is the “server” side of the WG tunnel, you don’t have to open any ports on your home router at all. I’ve been doing that, originally with OpenVPN, since before CF tunnels were ever offered as a service.

    Edit: You don’t even need WG, really. If you set up a persistent SSH tunnel and forward/bind a port to your VPS, you can tunnel the traffic over that (rough sketch below).
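
    As an illustration of that SSH option only: the port numbers, hostname, and autossh usage here are placeholder assumptions, not a copy of my setup.

    # Sketch: persistent reverse SSH tunnel from a home box to the VPS.
    # Binds port 8443 on the VPS and forwards it back to a local web service.
    autossh -M 0 -N \
        -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
        -R 8443:127.0.0.1:443 tunnel@vps.example.com

    # By default the forwarded port only listens on the VPS loopback, which is fine:
    # Nginx on the VPS can proxy_pass to https://127.0.0.1:8443.
    # To bind it more widely, set "GatewayPorts clientspecified" in the VPS sshd_config.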



  • In reply to “What’s your uptime?” on Ask Lemmy@lemmy.world:

    My record is 9.9 years (and going). This is my old ThinkPad T420, which lives on my equipment rack to act as an SSH / web console for my equipment. I just close the lid and put it to sleep when it’s not in use. It doesn’t even connect to the internet, just my isolated management VLAN.

    My HomeAssistant server (also an old ThinkPad) is the next longest at just under a year. It also lives on an isolated VLAN.

    Both of these are repurposed laptops with batteries in good condition, so they have a built-in UPS (in addition to the UPS they’re plugged into).

    The rest average about 4-7 months, depending on power outages (rare, but when they do occur they last longer than my UPS can cover) and reboots for kernel updates.