
  • In addition to what others have said, Brave has been involved in some shady stuff like ad substitution/injection. See:

    https://thelibre.news/no-really-dont-use-brave/

    Similarly, Brendan Eich’s feed contains some worrying content, in my opinion, ranging from (again) retweets of right-wing activists to weird Republican propaganda. He claims to be independent and not a Republican, but that doesn’t make me any less worried about the kinds of ideas he follows.

    But yeah, if you are a big fan of AI and crypto, are okay with advertisements in the user interface out of the box, can overlook past attempts to skim money from websites and to collect donations for creators who wouldn’t necessarily even receive them, and can put up with occasional privacy mistakes… use Brave!


  • Why use Brave when you can use Helium or Cromite?

    Helium has the full (not Lite) uBlock Origin built in, and no junk. That’s the gold standard of ad blocking.

    Cromite has rather obsessive anti-fingerprinting, making it much harder to track you than Brave does.


    Basically, people use Brave because of SEO: unfortunately, it’s the first result when people search for “adblock browser”.


  • The real answer is that it’s the default.

    People use whatever the default is. It’s that simple. For most folks, knowing how images are encoded is just unnecessary information.

    There are a ton of technical and usage-based arguments around image formats, plus political complications, but ultimately JPEG’s eternal dominance comes down to people using their apps’ defaults and wanting stuff to just work. And PNGs are sometimes big enough to cause technical issues or incompatibilities, as the sketch below illustrates.
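
    To make that size point concrete, here’s a quick sketch (my own, with made-up test data) that encodes the same photo-like image as JPEG and PNG via Pillow and compares the byte counts. Exact numbers vary a lot with content; noisy, photographic images are PNG’s worst case and also the common case for what people actually share:

    ```python
    # Compare JPEG vs PNG size for the same photo-like image.
    import io

    import numpy as np
    from PIL import Image

    rng = np.random.default_rng(0)
    h, w = 512, 512
    # Synthetic "photo": a smooth gradient plus sensor-like noise.
    gradient = np.linspace(0, 255, w, dtype=np.float32)[None, :].repeat(h, axis=0)
    noise = rng.normal(0, 12, size=(h, w)).astype(np.float32)
    img = Image.fromarray(np.clip(gradient + noise, 0, 255).astype(np.uint8)).convert("RGB")

    def encoded_size(image, fmt, **kwargs):
        buf = io.BytesIO()
        image.save(buf, format=fmt, **kwargs)
        return buf.tell()

    print("JPEG (q=85):", encoded_size(img, "JPEG", quality=85), "bytes")
    print("PNG:        ", encoded_size(img, "PNG"), "bytes")
    # On runs like this, the PNG comes out several times larger than the JPEG.
    ```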




  • This happens to any fandom over time. The folks engaged enough to stick around that long tend to obsess over whatever the “original” is. They put every little blemish of a new release under a microscope before it even comes out.

    Toxicity breeds.

    Compared to old forums, I think the structure of places like Reddit/Discord inflames it. It’s cultish.

    So do YouTubers and such, since video algorithms like punchy ragebait more than (say) nerdy lore analysis.


    I first experienced this in the Avatar community, and seeing so much hate breed makes me sad.

    See: the Korra/NATLA hate videos. The Reddit groupthink that parrots the same points, including straight-up misinformation. Breathless worship of ATLA these days, with waning interest in deeper lore like the novels and longfics. Even the idea of an upscale/touch-up gets shot down, because the video artifacts in the original are “perfect.”

    And a lot of that history is archived. For instance, read through the comments in the Korra episode premiere threads, and it’s nothing like the fandom now.


    But the pattern seems to be everywhere.

    And I think it’s partially a mirage. I suspect a lot of fans (like me) are just not engaged with fandom social media. And many more are “dormant” and will come back whenever something big and new drops.










  • Friend, I’m going to be blunt: I think you may have spent time creating this with help from an LLM, and it told you too much of what you wanted to hear, because that’s what these models are literally trained to do.

    As an example: “relativistic coherence”? Computational cycles, SHA512 checksums, bit flips, and prime instances? You are mixing modern technical terms with highly speculative, theoretical concepts in ways that just don’t fit together.

    And the text, from what I can parse, is similar. It mixes a lot of contemporary “anthropic” concepts (money, the 24-hour day, and so on), terms that loosely apply to text LLMs, and a few highly speculative concepts that may or may not even apply to the future.


    If you are concerned about AI safety, I think you should split your attention between contemporary, concrete systems we have now and the more abstract, philosophical research that’s been going on even before the LLM craze started. Not mix them together.

    Look into what local LLM tweakers are doing with, for instance, alignment datasets, experiments on “raw” pretrains, or more cutting-edge abliteration tools like heretic: https://github.com/p-e-w/heretic
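
    To make the abliteration idea concrete: as I understand it (and this is my own toy sketch, not heretic’s actual implementation), these tools estimate a “refusal direction” from activation differences between prompt sets, then project that direction out of the model’s weights. The bare linear algebra, on random stand-in data:

    ```python
    # Toy directional-ablation sketch: remove a "refusal direction" r from a
    # weight matrix W so that W can no longer write along r. Real tools do
    # this per layer on a transformer; the shapes here are stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 64  # hidden size (toy)

    # Pretend these are residual-stream activations on refused vs. normal prompts.
    acts_refused = rng.normal(0.5, 1.0, size=(100, d))
    acts_normal = rng.normal(0.0, 1.0, size=(100, d))

    # Refusal direction: normalized difference of mean activations.
    r = acts_refused.mean(axis=0) - acts_normal.mean(axis=0)
    r /= np.linalg.norm(r)

    W = rng.normal(size=(d, d))        # a weight matrix writing to the stream
    W_abl = W - np.outer(r, r) @ W     # project out the component along r

    print(np.abs(r @ W_abl).max())     # ~0: no output along r anymore
    ```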

    In other words, look at the concrete, and how actual safety systems can be applied now. Outlines like yours are interesting, but they can’t actually be applied or enforced.

    And on the philosophical side, basically ignore any institute or effort started after 2021; the “Tech Bro” hype and the late-2022 launch of ChatGPT (running GPT-3.5) muddied the waters. But there was plenty of safety research going on before then, and there are already many documents/ideas similar to what you’re getting at in your outlines: https://en.wikipedia.org/wiki/AI_safety





  • Eh, I disagree with the power-usage point, specifically. Don’t listen to Altman lying through his teeth; generation and training should be dirt cheap.

    See the recent Z Image, which was trained on a shoestring budget and costs basically nothing to run: https://arxiv.org/html/2511.22699v2

    The energy per generated image is less than what it took me to type out this comment.
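
    Back-of-envelope, with numbers I’m assuming rather than pulling from the paper:

    ```python
    # Rough energy comparison; every number here is an assumption, not a
    # measurement. Tweak them and the conclusion barely moves.
    gpu_watts = 350        # assumed: one consumer GPU under load
    gen_seconds = 3        # assumed: seconds per image for a small model
    image_joules = gpu_watts * gen_seconds               # ~1 kJ

    laptop_watts = 50      # assumed: laptop + screen while typing
    typing_minutes = 5     # assumed: time to write a comment like this one
    typing_joules = laptop_watts * typing_minutes * 60   # ~15 kJ

    print(f"image: {image_joules / 1000:.1f} kJ, typing: {typing_joules / 1000:.1f} kJ")
    # Under these assumptions, typing the comment costs ~14x the image.
    ```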


    As for whether we “need” it: yeah, that’s a good point, and it’s what I was curious about.

    But then again… I don’t get why people use a lot of porn services. As an example, I just don’t see the appeal of OF, yet it’s a colossal enterprise.