Admiral Patrick

I’m surprisingly level-headed for being a walking knot of anxiety.

Ask me anything.

Special skills include: Knowing all the “na na na nah nah nah na” parts of the Three’s Company theme.

I also develop Tesseract UI for Lemmy/Sublinks

Avatar by @SatyrSack@feddit.org

  • 15 Posts
  • 232 Comments
Joined 3 years ago
Cake day: June 6th, 2023





  • I also run (well, ran) a local registry. It ended up being more trouble than it was worth.

    Would you have to docker load them all when rebuilding a host?

    Only if you want to ensure you bring the replacement stack back up with the exact same version of everything, or need to bring it up while you’re offline. I’m bad about using the :latest tag, so this is my way of version-controlling. I’ve had things break (cough Authelia cough) when I moved a service to another server and it pulled a newer image that had breaking config changes.

    For me, it’s about having everything I need on hand to quickly move a service or restore it from a backup. It also depends on what your needs are and the challenges you’re trying to overcome. For example, when I started doing this style of deployment, I had slow, unreliable, and heavily data-capped internet. Even when my connection was up, pulling a bunch of images was time-consuming and ate away at my measly satellite internet data cap, so the ability to rebuild stuff offline was a hard requirement. That’s no longer a limitation, but I like the way this works, so I’ve stuck with it.

    Everything a service (or stack of services) needs is all in my deploy directory which looks like this:

    /apps/{app_name}/
        docker-compose.yml
        .env
        build/
            Dockerfile
            {build assets}
        data/
            {app_name}
            {app2_name}  # If there are multiple applications in the stack
            ...
        conf/                   # If separate from the app data
            {app_name}
            {app2_name}
            ...
        images/
            {app_name}-{tag}-{arch}.tar.gz
            {app2_name}-{tag}-{arch}.tar.gz
    

    When I run backups, I tar.gz the whole base {app_name} folder, which includes the deploy file, data, config, and dumps of the services’ images, and pipe that over SSH to my backup server (rsync also works for this). The only ones I handle differently are stacks with in-stack databases that need a consistent snapshot.
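    A minimal sketch of that backup pipe, under the assumption of a placeholder BACKUP_HOST and an illustrative app name (neither is from the original post):

```shell
#!/bin/sh
# Sketch of the "tar the whole app dir and pipe it over SSH" backup.
# APP, the paths, and BACKUP_HOST are illustrative placeholders.
APP=authelia
DEST="backups/${APP}-$(date +%Y%m%d).tar.gz"

# BACKUP_HOST intentionally defaults to empty so nothing runs until you set it.
BACKUP_HOST="${BACKUP_HOST:-}"
if [ -n "$BACKUP_HOST" ]; then
  # Stream the compressed deploy dir (compose file, .env, data, conf, image
  # dumps) straight over SSH -- nothing is written to local disk first.
  tar -C /apps -czf - "$APP" | ssh "$BACKUP_HOST" "cat > $DEST"
fi
echo "$DEST"
```

    The rsync equivalent mentioned above would be something like rsync -az /apps/{app_name} backup-host:backups/.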

    When I pull new images to update the stack, I move the old image dumps aside and docker save the now-current ones. The old dumps get deleted once the update is considered successful (usually within 3-5 days).
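    A hedged sketch of that rotation, with illustrative image names (nginx:1.27 stands in for whatever the stack actually runs):

```shell
#!/bin/sh
# Sketch of the update flow: park the old dump, save the new one, and only
# delete the parked copy after the update has proven itself. Names are
# illustrative, not from the original post.
APP=nginx
TAG=1.27
ARCH=amd64
IMG_DIR=./images
CUR="${IMG_DIR}/${APP}-${TAG}-${ARCH}.tar.gz"

mkdir -p "${IMG_DIR}/old"
# 1. Move the previous dump aside instead of deleting it outright.
[ -f "$CUR" ] && mv "$CUR" "${IMG_DIR}/old/"

# 2. Pull and dump the now-current image (skipped here if docker is absent).
if command -v docker >/dev/null 2>&1; then
  docker pull "${APP}:${TAG}"
  docker save "${APP}:${TAG}" | gzip -9 > "$CUR"
fi

# 3. Days later, once the update is considered successful:
#    rm -rf "${IMG_DIR}/old"
echo "$CUR"
```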

    A local registry would work, but you would have to re-tag all of the pre-made images to your registry (e.g. docker tag library/nginx docker.example.com/nginx) in order to push them to it. That makes updates more involved and was a frequent cause of me running 2+ year old versions of some images.
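    For comparison, the re-tag/push cycle a private registry forces on every upstream image might look like this (docker.example.com follows the example above; the image and tag are illustrative):

```shell
#!/bin/sh
# Sketch of the per-image overhead of a private registry: every upstream
# image has to be re-tagged under your registry's name and pushed to it.
SRC_IMAGE=nginx:1.27                 # illustrative upstream image
REGISTRY=docker.example.com          # registry host from the example above
TARGET="${REGISTRY}/${SRC_IMAGE}"

if command -v docker >/dev/null 2>&1; then
  docker pull "$SRC_IMAGE"
  docker tag "$SRC_IMAGE" "$TARGET"  # repeat on every update...
  docker push "$TARGET"              # ...which is how stale images pile up
fi
echo "$TARGET"
```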

    Plus, you’d need the registry server and any infrastructure it depends on (DNS, file server, reverse proxy, etc.) before you could bootstrap anything else. And if you’re deploying your stack to a different environment outside your own, your registry server might not be reachable at all.

    Bottom line: I’m a big fan of using Docker to make my complex stacks easy to port around, back up, and restore. There are many ways to do that, but this is what works best for me.


  • Yep. I’ve got a bunch of apps that work offline, so I back up the currently deployed version of the image in case of hardware or other failure that requires me to re-deploy it. I also have quite a few custom-built images that take a while to build, so having a backup of the built image is convenient.

    I structure my Docker-based apps into dedicated folders with all of their config and data directories inside a main container directory so everything is kept together. I also make an images directory which holds backup dumps of the images for the stack.

    • Backup: docker save {image}:{tag} | gzip -9 > ./images/{image}-{tag}-{arch}.tar.gz
    • Restore: docker load < ./images/{image}-{tag}-{arch}.tar.gz

    It will back up and restore the image with the tag used during the save step. The load step accepts a gzipped tar, so you don’t even need to decompress it first. My older dumps don’t have the architecture in the filename, but I’ve started adding it now that I have a mix of amd64 and arm64 machines.
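    A small sketch of deriving that architecture suffix automatically, so amd64 and arm64 dumps of the same image never collide (image name and tag are illustrative):

```shell
#!/bin/sh
# Sketch: build the {image}-{tag}-{arch} filename from uname so dumps made
# on different machines don't collide. Image name and tag are illustrative.
IMAGE=nginx
TAG=1.27
ARCH=$(uname -m)
case "$ARCH" in
  x86_64)  ARCH=amd64 ;;   # map kernel arch names to Docker's
  aarch64) ARCH=arm64 ;;
esac
OUT="./images/${IMAGE}-${TAG}-${ARCH}.tar.gz"

if command -v docker >/dev/null 2>&1; then
  docker pull "${IMAGE}:${TAG}" >/dev/null &&
    docker save "${IMAGE}:${TAG}" | gzip -9 > "$OUT"
fi
echo "$OUT"
```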


  • I use the web version rather than the app, but I want to say the app can store the library on the SD card if you have one of sufficient size lying around and if the Redmi has the slot for one. But as someone else said, there are smaller versions you can download if you can’t fit the full one.

    Not trying to push Kiwix on you, but I just can’t emphasize enough how handy it is to have offline Wikipedia always on hand.


    • Termux has lots of possibilities
    • Pair it with a Meshtastic node and make it a dedicated communicator
    • I run HomeAssistant and Emby and have several old smartphones to work with, so one lives in each room and acts as a remote for those
    • Set up Asterisk and make a VoIP system using old smartphones and SIP clients as handsets
    • Check if PostmarketOS supports it. I haven’t used it, but it basically turns your phone into a Linux machine if I understand correctly
    • Use it as your “ugh, I have to use an app for [THIS]?!” phone. Basically things that require an app for setup or one-off apps you can’t avoid using.
    • Make your own little portable Library of Alexandria. Install Kiwix and download a bunch of ZIMs from their library. If you’ve got at least 130 GB to work with, you can even fit the entire Wikipedia dump with images and have that locally.










    I’m now running 9 of the Dell equivalents to those, and they’re doing well. They average 15-20 watts at normal load and usually no more than 30-35 watts running full tilt. 5 of them are unprovisioned, but I got a good deal at $25 each so I couldn’t pass them up :shrug:.

    Attempting to cable-manage the power bricks for more than one of these is the worst part of using them. The only life pro tip I can offer is to ditch the power bricks and buy a 65W USB-C power delivery adapter in the “wall wart” style, plus one of the USB-C-to-Lenovo power adapter cords. Those make cable management so much better.

    Wall Wart

    Adapter Cable (these are for my Dells but they make them for most brands/styles)