• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: January 17th, 2024


  • Danitos@reddthat.com to Selfhosted@lemmy.world · “goodbye plex”
    4 days ago

    This is probably the wrong post to ask this question, so sorry in advance.

    I have a dual-boot Linux + Windows setup. Jellyfin runs wonderfully on my Linux partition with docker-compose. Does anybody know how I can clone it to my Windows partition, such that configs, metadata and accounts remain the same? I’ve failed to do this, and only the media volume remains identical on both OSes.
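    One way this can work is to keep Jellyfin’s bind mounts on a partition both OSes can read, so each compose file points at the same on-disk config and metadata. A minimal sketch (all paths here are assumptions; on Windows you’d substitute the drive-letter equivalents, e.g. `D:\jellyfin\config`):

    ```yaml
    # docker-compose.yml — hedged sketch, not a tested setup.
    # /mnt/shared is assumed to be a partition (e.g. NTFS) mounted by
    # both the Linux and the Windows install, so config, cache and
    # media resolve to the same files under either OS.
    services:
      jellyfin:
        image: jellyfin/jellyfin
        volumes:
          - /mnt/shared/jellyfin/config:/config   # settings, accounts, database
          - /mnt/shared/jellyfin/cache:/cache     # transcodes, images
          - /mnt/shared/media:/media:ro           # the media library itself
        ports:
          - "8096:8096"
        restart: unless-stopped
    ```

    If only the media volume stays identical today, the likely cause is that the `/config` mount lives on an OS-local path rather than on the shared partition.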



    1. Not a node, but a proxy. Entry nodes’ IPs in Tor are publicly known, so they are easy to censor. With Snowflake you create a proxy (bridge) between a censored user and an entry node, and since your IP is not listed as a node, you help the user bypass the censorship.

    2. In theory, nope. But if the user is doing something bad, a prosecutor could argue you helped them do so. I don’t know of any case like this involving Snowflake, and I am not a lawyer. You could be a target if you were hosting material, which is not the case with Snowflake.

    In case it helps, I’ve been running the extension with no trouble that I’m aware of for a few years.
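    Besides the browser extension, the Tor Project also publishes a standalone proxy image on Docker Hub, which does the same job unattended. A minimal compose sketch (image name as published; treat the exact tag and options as assumptions to check against the Tor docs):

    ```yaml
    # Hedged sketch of a standalone Snowflake proxy via docker-compose.
    services:
      snowflake-proxy:
        image: thetorproject/snowflake-proxy:latest
        network_mode: host   # lets the proxy receive WebRTC connections directly
        restart: unless-stopped
    ```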










  • Back in university, I studied basically all day long, which was tiresome after long sessions, even when studying with friends. My great superpower is that it used to take me just ~10 seconds of resting with my eyes closed to feel a huuuuge boost of energy that lasted 1-2 hours. After that boost expired, I just did it again.

    Incredibly useful.






  • The 1.5B version, which can be run on basically anything. My friend runs it on his shitty laptop with a 512MB iGPU and 8GB of RAM (inference takes ~30 seconds).

    You don’t even need a GPU with a lot of VRAM, as you can offload layers to RAM (slower inference, though).

    I’ve run the 14B version on my AMD 6700 XT GPU and it only takes ~9GB of VRAM (inference over 1k tokens takes ~20 seconds). The 8B version takes around 5-6GB of VRAM (inference over 1k tokens takes ~5 seconds).

    The numbers in your second link are waaaaaay off.
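    A quick back-of-the-envelope check of those figures: weight memory is roughly parameter count × bits per weight. The sketch below assumes ~4-bit quantization and ignores KV cache and runtime overhead, which add a couple of GB on top.

    ```python
    # Rough VRAM estimate for quantized LLM weights. This is a sketch:
    # the 4-bit quant level is an assumption, and real usage adds KV
    # cache, activations and runtime overhead on top of the weights.
    def weight_gib(params_billion: float, bits_per_weight: float) -> float:
        """GiB needed just to hold the weights at a given quantization."""
        return params_billion * 1e9 * bits_per_weight / 8 / 2**30

    # 14B at 4-bit: ~6.5 GiB of weights; with cache/overhead that lands
    # near the ~9GB observed on the 6700 XT.
    print(round(weight_gib(14, 4), 1))   # 6.5
    # 8B at 4-bit: ~3.7 GiB of weights, consistent with 5-6GB total.
    print(round(weight_gib(8, 4), 1))    # 3.7
    # 1.5B at 4-bit: ~0.7 GiB, which is why it runs even on a tiny iGPU
    # with layers offloaded to system RAM.
    print(round(weight_gib(1.5, 4), 1))  # 0.7
    ```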