Mama told me not to come.

She said, that ain’t the way to have fun.

  • 3 Posts
  • 249 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • Makes sense.

    I’m more interested in cutting off-site backup costs, so my NAS has a RAID mirror to reduce the chance of total failure, and the off-site backup only stores the important data. I don’t even back up the bulk of it (ripped movies and whatnot), just the stuff that matters.

    Restore from backup looks like this for my NAS:

    1. Install OS + podman
    2. Restore data from backup
    3. Build and deploy containers from config

    Personal devices are similar, but installing packages is manual (perhaps I’ll back up my explicitly installed package list or something to speed it up a little). Setup takes longer than your method, but I think it’s worth the reduced storage costs; I’ve never actually needed to do it, and a few hours of downtime is totally fine for me.
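
    For the package list bit, the sketch below is roughly what I have in mind (assumes Arch/pacman; the file location is made up):

```python
#!/usr/bin/env python3
"""Hypothetical sketch: save/restore the explicitly installed package list.
Assumes Arch/pacman; the list file just gets included in the normal backup."""
import subprocess
from pathlib import Path

PKG_LIST = Path.home() / "backup" / "pkglist.txt"  # made-up path


def save_package_list() -> None:
    # `pacman -Qqe` prints one explicitly installed package name per line
    out = subprocess.run(["pacman", "-Qqe"], capture_output=True, text=True, check=True).stdout
    PKG_LIST.parent.mkdir(parents=True, exist_ok=True)
    PKG_LIST.write_text(out)


def restore_package_list() -> None:
    # `--needed` skips anything that's already installed
    pkgs = PKG_LIST.read_text().split()
    subprocess.run(["sudo", "pacman", "-S", "--needed", *pkgs], check=True)


if __name__ == "__main__":
    save_package_list()
```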



  • Your options are only as limited as your imagination and the complexity of your requirements.

    If you’re only using it on your network, just use HTTP with mDNS (or static routes from your router or something, but you said you don’t want that) so you don’t have to remember IP addresses. If you want TLS, you can borrow someone else’s domain with a service like FreeDNS.afraid.org (5 free subdomains). Or, if you control the devices completely, you can make a root CA, add it to each device’s trusted CA list, and then sign your own certs to eliminate MITM attacks.

    You have options, and most are overkill. The simplest secure solution is HTTP on your local network or over a VPN you trust (if you have a publicly accessible IP, just host your own WireGuard server on or via your router).
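
    If you go the root CA route, the gist is something like this (shelling out to openssl; the file names and CNs are made up, and in practice you’d also add a subjectAltName to the server cert):

```python
#!/usr/bin/env python3
"""Rough sketch of running your own root CA with openssl.
All names here are hypothetical; adjust for your own hosts."""
import subprocess


def run(*args: str) -> None:
    subprocess.run(args, check=True)


# 1. Root CA key + self-signed CA cert (valid ~10 years).
run("openssl", "genrsa", "-out", "ca.key", "4096")
run("openssl", "req", "-x509", "-new", "-nodes", "-key", "ca.key",
    "-sha256", "-days", "3650", "-subj", "/CN=Home Lab CA", "-out", "ca.crt")

# 2. Key + CSR for the service (e.g. the NAS), signed by the CA.
run("openssl", "genrsa", "-out", "nas.key", "2048")
run("openssl", "req", "-new", "-key", "nas.key", "-subj", "/CN=nas.local", "-out", "nas.csr")
run("openssl", "x509", "-req", "-in", "nas.csr", "-CA", "ca.crt", "-CAkey", "ca.key",
    "-CAcreateserial", "-days", "825", "-sha256", "-out", "nas.crt")
# (Browsers also want a subjectAltName; pass an -extfile with one in real use.)

# 3. Add ca.crt to each device's trusted CA store; serve HTTPS with nas.crt + nas.key.
```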



  • What do you mean by “separately be able to clear completed tasks”?

    I just mean keep completed tasks on the list (crossed out, just like Google Keep does) until I manually hit a clear button, and only remove them at that point.

    Basically, I sometimes mark tasks done by accident, and sometimes I’ll carry the extra tasks over to the next trip.
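
    In code terms, the behavior I’m after is roughly this (made-up names, just to illustrate):

```python
"""Tiny sketch of the "keep completed until explicitly cleared" behavior."""
from dataclasses import dataclass, field


@dataclass
class Item:
    text: str
    done: bool = False  # done items stay on the list, just crossed out


@dataclass
class ShoppingList:
    items: list[Item] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.items.append(Item(text))

    def toggle(self, index: int) -> None:
        # Marking done is reversible, so an accidental tap is easy to undo.
        self.items[index].done = not self.items[index].done

    def clear_completed(self) -> None:
        # Only this explicit action removes completed items.
        self.items = [item for item in self.items if not item.done]
```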

    Basically it’s the same thing as text notes, just with a few more formatting options.

    It has a lot more formatting options:

    • positionable images, w/ text flow options
    • drawings
    • tables and charts
    • print settings, like margins, header/footer, etc

    You could get something pretty useful by just making a collaborative Markdown editor, but then it’s not really a Docs replacement, more of an Etherpad replacement.

    That’s fine; I guess I’m more concerned about scope creep ultimately killing the project.

    there must always be a protocol behind it

    Sure. I guess my point is that Matrix is targeting text, audio, and video chat with hundreds if not thousands of simultaneous users in one room, all with E2EE enabled.

    A Google Keep replacement doesn’t even need to be real-time collaborative, and it certainly doesn’t need to support hundreds of simultaneous users on a given document. It’s like using a chainsaw to trim a bush: way overkill, and there’s a decent chance that protocol changes break stuff for you since you don’t need most of the features.

    The backend for this just needs to notify other clients of a change; real time isn’t necessary or even particularly helpful.

    And you’d still need an application server to handle the storage and retrieval of the data, no? So all Matrix is buying you is synchronization, which is just a simple pub/sub.
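
    To be concrete, this is about the level of pub/sub I mean; an in-process sketch with made-up names, not a real protocol:

```python
"""In-memory sketch: the "sync" part is just fanning a change out to other clients."""
from collections import defaultdict
from typing import Callable, Optional

Handler = Callable[[dict], None]


class NoteBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Handler]] = defaultdict(list)

    def subscribe(self, note_id: str, handler: Handler) -> None:
        self._subscribers[note_id].append(handler)

    def publish(self, note_id: str, change: dict, sender: Optional[Handler] = None) -> None:
        # Notify everyone with the note open, except whoever made the change.
        for handler in self._subscribers[note_id]:
            if handler is not sender:
                handler(change)


# The application server stores the change, then calls publish(); clients
# subscribe (e.g. over a websocket) for the notes they currently have open.
```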

    What’s the difference between chat and data?

    You don’t really need a list of changes for a shared TODO app. The data is going to be small and going back in time isn’t that useful.

    Maybe it makes sense for something with revision history, like a DIY git. But TODO lists are ephemeral, and I really don’t care about them after I’m done with my shopping trip.

    the user X is currently typing

    Seems like overkill to me.

    Maybe it makes sense for something more fancy like an Etherpad or Confluence replacement, but not for a shopping list.

    Build it however you like and prove me wrong; I’ll check it out if it solves my problem.




  • You can have btrfs subvolumes that contain only the stuff you care about. In the article, they have a subvolume for untracked files, and this doesn’t get backed up.

    I have one each for /home and /srv (my self-hosted stuff), and those are the ones I care about backing up. For everything else, I keep manual copies in either /home or /srv (e.g. edits to system configs and container configs live in a git repo). So I can back up those two subvolumes and get 99% of what I care about, and hope I’ve done a good job documenting the other 1%, which is reproducible w/ a bit of time spent searching how to configure stuff.

    This also keeps backups efficient, since you can ask the filesystem to send only the changes instead of walking every file to figure out what changed, which is probably a lot faster.
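
    “Only send the changes” concretely means an incremental send/receive, roughly like this sketch (paths and snapshot names are made up; assumes read-only snapshots and a btrfs-formatted backup target):

```python
#!/usr/bin/env python3
"""Sketch of an incremental btrfs backup: snapshot, then send only the delta."""
import subprocess

SUBVOL = "/home"                 # the subvolume being backed up
SNAP_DIR = "/home/.snapshots"    # where read-only snapshots live
BACKUP = "/mnt/backup"           # btrfs filesystem on the backup target


def snapshot(name: str) -> str:
    dest = f"{SNAP_DIR}/{name}"
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SUBVOL, dest], check=True)
    return dest


def send_incremental(parent: str, child: str) -> None:
    # `btrfs send -p` emits only the difference between the two snapshots;
    # `btrfs receive` replays that delta on the backup filesystem.
    send = subprocess.Popen(["btrfs", "send", "-p", parent, child], stdout=subprocess.PIPE)
    subprocess.run(["btrfs", "receive", BACKUP], stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("btrfs send failed")
```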


    you most likely need a backup of your /bin/, /lib/, /usr/ and /etc/ folders, because they contain files essential for your system operation

    I guess we disagree about the point of backups then.

    For me, backups are about data, as in, if I switch operating systems or something, it’s what I’ll need to get set up. /bin and /lib will just cause more problems than they solve since they likely won’t be compatible w/ whatever system libraries are there. /usr is only relevant for the config files, and 90% of those are at whatever the distribution’s defaults are (and on Arch, everything is in /usr anyway). /etc is useful for configs, but like /usr/etc, 90% of it is just defaults.

    If I’m restoring from backup, I don’t want random breakage from config files, libraries, and binaries being incompatible. Most of the stuff outside /home is irrelevant, and the parts I do care about can be backed up manually as I improve my processes.

    For example, I stick my self-hosted stuff into a git repo, and any changes I make get committed. I have scripts that deploy the changes wherever they need to go (e.g. I use quadlet, so this means systemd files), and those can be tweaked if something changes in a different OS. 99% of the things I host are containerized, so there is zero system-level impact (i.e. anything that can’t easily live in git or /home). For the rest, I largely accept that I’ll probably have to fix a few things when restoring after an emergency, such as any configs for system services (e.g. /etc/sub(uid|gid), podman tweaks, users and groups, etc). I try to keep these changes extremely limited, and I keep notes in the README of my git repo, but it’s never going to be 100% accurate since I often just want to get whatever it is working and promise myself that I’ll update the docs later.
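
    The deploy script itself doesn’t have to be fancy; mine is basically “copy units out of the repo and reload”, something like this sketch (the repo layout is made up):

```python
#!/usr/bin/env python3
"""Sketch of a quadlet deploy script: copy unit files from the git repo to where
podman/systemd look for them, then reload so the services get (re)generated."""
import shutil
import subprocess
from pathlib import Path

REPO_UNITS = Path.home() / "selfhosted" / "quadlet"                # *.container, *.volume, *.network
QUADLET_DIR = Path.home() / ".config" / "containers" / "systemd"   # rootless quadlet location


def deploy() -> None:
    QUADLET_DIR.mkdir(parents=True, exist_ok=True)
    for unit in sorted(REPO_UNITS.iterdir()):
        if unit.is_file():
            shutil.copy2(unit, QUADLET_DIR / unit.name)
    # Quadlet regenerates the systemd services from the unit files on reload.
    subprocess.run(["systemctl", "--user", "daemon-reload"], check=True)


if __name__ == "__main__":
    deploy()
```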

    That said, the most important thing you can do, IMO, is test restoring from backup. I’m moving my systems toward containerization, which gives me a chance to fix all the stuff I totally forgot about. I keep the old system around for a while to reference (on an old drive or something), pulling files (and properly backing them up) as I go, and then I’ll nuke it some months later. The important data is all in /home, which gets backed up, so it’s mostly about service configuration (which I can look up later).

    snapper

    Snapper absolutely rocks. I’m spoiled by the openSUSE line of distributions, where this is baked in. When an upgrade goes sideways on my Tumbleweed desktop, I simply roll back and retry the upgrade a couple of days later. It’s magical.

    snapborg

    Looks really cool! I won’t be backing up most of /; instead, I’ll be a bit selective about which subvolumes I back up. I already use btrfs everywhere, so this should be relatively drop-in.


  • Far more than I can reasonably support:

    • self-hosted things I use - Caddy, The Document Foundation, Jellyfin, Forgejo, etc
    • Android apps - F-Droid, NewPipe, Signal, RethinkDNS, etc
    • desktop apps - Flatpak, For, Godot, etc
    • infrastructure stuff - Let’s Encrypt, OpenSSH, Linux distros (mine doesn’t accept donations unfortunately), etc

    But the short list for now is:

    • Let’s Encrypt
    • Signal
    • F-Droid

    And I’ll probably run a Tor relay or something as well.


    I test stuff on my laptop or desktop and then push it to my NAS. Everything is also containerized and snapshotted, so the risk of breaking anything is pretty small.

    Also, I’ll have it run on my boot drive in “prod” for a bit before exposing it to my RAID, just in case there’s some weird issue with shared data. Switching it over is just a copy plus changing the container volumes.


  • 32GB of swap is huge; I wouldn’t bother with that much unless you’re suspending to disk regularly. I personally just suspend to RAM and call it a day.

    On my laptop with 24GB RAM, I have about 20GB of swap, which matches the usable system memory (the rest is dedicated to my APU). On my desktop with 16GB RAM, I have 16GB of swap, because that’s what my OS picked (I would otherwise have gone with 10GB). On both systems, I usually only have 2-3GB in swap anyway.

    On my server, I have no swap at all and 16GB RAM (usually use <5GB at any time). I think swap is overrated and overused.




  • No, the current salary for all representatives outside of a handful of leadership positions is $174,000. The figure being discussed is the cumulative cost of everything else (staff, expenses, etc).

    I personally think $174k is a bit low for a salary, especially given the COL in the area and the importance of the role. I’d like to see that increase, while also raising our expectations of our reps (i.e. jail time for bought politicians).



  • Simple Notes - Replacement for Google Keep

    Yes please!

    My use case is a shopping list for my SO and me. I want to be able to add items throughout the day, cross them off as I grab them from the shelf, and separately be able to clear the completed tasks.

    Do that and I’ll try selling my SO on it.

    Simple Chat - Replacement for WhatsApp/Slack/Meet/Teams

    Why not just use one of the other Matrix clients?

    Simple Docs - Replacement for Google Docs

    This seems overly ambitious. The thing I like about Google Docs is all the rendering features and whatnot; I don’t really use the collaborative editing part. If you really want the collaborative bits, those already exist.

    do you think piggybacking the Matrix protocol would be a good choice?

    No. Matrix is designed for chat, not data, and self-hosting it requires a fair amount of resources.

    For something this simple, I don’t think there’s an “out of the box” solution here, nor do I think there needs to be one. The main things you need to handle are:

    • state changes - same as a non-collaborative app, but you also need to pick who wins
    • users and their locations in a document - you can even send unsaved snippets

    I’d personally just DIY it since it’s really not a ton of logic; no reason to bring a whole protocol into this.
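
    For the “pick who wins” part, I’d just do last-write-wins per item, something like this (field names made up, not a real protocol):

```python
"""Sketch of last-write-wins conflict resolution for a shared list item."""
from dataclasses import dataclass


@dataclass
class ItemState:
    text: str
    done: bool
    updated_at: float  # client timestamp; plenty good enough for a shopping list
    client_id: str     # stable tie-breaker when two edits land at the same time


def merge(local: ItemState, remote: ItemState) -> ItemState:
    # Newest edit wins; on a timestamp tie, fall back to ordering by client id.
    if (remote.updated_at, remote.client_id) > (local.updated_at, local.client_id):
        return remote
    return local
```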

    But hey, maybe it’ll work out great. Having more options is generally a good thing.




  • Let’s put some numbers on all of this. The OTF’s total budget slowly rose from ~$10M in 2019 to a grand total of $40M in 2023, almost half of all the funds allocated by Congress to promote Internet freedom globally.

    That’s an incredibly tiny amount of money.

    More recently, Congress had directed that - for both the fiscal years of 2024 and 2025 - the funding should be “not less than $43,500,000”, which guaranteed an income stream for 2025 too.

    Looks like some spicy legal action will suck up a lot of those cuts…