Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit before joining the Threadiverse as well.

  • 0 Posts
  • 67 Comments
Joined 1 year ago
Cake day: March 3rd, 2024


  • There’s a famous literary analysis essay about this, The Death of the Author, that argues for the latter. I happen to believe strongly in this view.

    I decide what a work of fiction means to me, and since it’s a work of fiction there is no “higher” meaning than that. Other people can of course present their ideas about what it means, and if I like those ideas I’ll adopt them into my own thoughts on the matter. The creator can be one of those “other people” but he gets no special role in the argument; he has to make his case just like anyone else and I feel free to say “no, that’s dumb. I think it means something else.”





  • I occasionally deal with a mouse or two in my house, and I much prefer these kinds of traps. They’re slightly more expensive, but you don’t need many and they’re reusable, so that doesn’t really matter much. The advantages are:

    • Super easy to set: just pull the jaw open by the little handle and it clicks into place. No need to touch the dead mouse; it plops right out into a garbage can.
    • I’ve never had mice successfully steal the bait; the cover forces them to put their heads in exactly the right place for the kill bar to come down on them.
    • This also means that I’ve never seen a mouse fail to get instantly and painlessly killed.

    The best places to put mousetraps are often dark and hard to see, and the bright red kill bar makes it easy to tell at a glance whether it’s triggered.




  • I was cutting a cardboard box up with a box cutter, holding the box steady with my off hand while pushing the blade downward through the cardboard. I realized that my hand was below the blade and therefore there was a risk I’d cut myself if the blade suddenly moved more quickly through the cardboard than anticipated. Safety first! So I stopped cutting, leaving the blade in the cardboard, and lifted my hand to grip the cardboard above where I was cutting instead.

    Slammed my thumb right into the blade as I moved my hand, peeling a nasty slice of skin off. Took a lot of stitches to tack it back in place, still have a scar from that.





  • However, a human would also need to verify that the generated solution actually solves a problem.

    That’s already an issue with human-generated answers to problems. :)

    “Verification” could be done by an AI agent too, though, as I described above. Depends on the sort of problem. A programming solution can be tested in a simple sandbox, a medical solution would require a bit more effort to validate (whether by human or by AI).
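    To illustrate the “tested in a simple sandbox” case, here’s a minimal sketch. The `verify_candidate` helper and the example solution are hypothetical, and a subprocess with a timeout is only a crude sandbox; real isolation would need containers or similar.

    ```python
    import subprocess
    import sys

    def verify_candidate(solution_code: str, test_code: str, timeout: float = 5.0) -> bool:
        """Run a generated solution plus its tests in a separate process.

        Catches wrong answers (failed assertions), crashes, and hangs.
        """
        program = solution_code + "\n" + test_code
        try:
            result = subprocess.run(
                [sys.executable, "-c", program],
                capture_output=True,
                timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            return False  # hung: treat as failing verification
        return result.returncode == 0  # tests passed iff exit code is 0

    # Example: a hypothetical generated answer and a test for it.
    generated = "def add(a, b):\n    return a + b\n"
    tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
    print(verify_candidate(generated, tests))  # True
    ```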

    I just don’t think current LLMs are quite smart enough yet.

    Certainly, we’re both speculating about future developments here.




  • I did suggest a possible solution to this - the AI search agent itself could post a question in a forum somewhere if it has been unable to find an answer.

    This isn’t a feature of mainstream AI search agents yet, but I’ve been following development and this sort of thing is already being done by hobbyists. Agentic AI workflows can be a lot more sophisticated than a simple “do a search, summarize results.” An AI agent could even try to solve the problem itself - reading source code, running tests in a sandbox, and so forth. If it figures out a solution that it didn’t find online, maybe it could even post answers to some of those unanswered forum questions. Assuming the forum doesn’t ban AI, of course.
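    As a rough sketch of what such an agentic loop could look like - all the step functions below are hypothetical stubs standing in for a search API, an LLM call, and a code sandbox:

    ```python
    # Hypothetical agentic search loop, going beyond "do a search,
    # summarize results". Each stub marks where a real agent would
    # call out to a search engine, a model, or a sandboxed test run.

    def web_search(question):
        return []  # stub: pretend no existing answer was found

    def attempt_solution(question):
        return "candidate fix"  # stub: agent derives its own answer

    def test_in_sandbox(solution):
        return True  # stub: e.g. run the proposed fix in isolation

    def post_answer(question, solution):
        print(f"Posting answer for {question!r}: {solution}")

    def answer(question):
        results = web_search(question)
        if results:
            return results[0]                  # found online: use it
        solution = attempt_solution(question)  # else: try to solve it directly
        if test_in_sandbox(solution):
            post_answer(question, solution)    # give the answer back to the forum
            return solution
        return None  # unsolved: could post the question instead

    answer("Why does my build fail?")
    ```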

    Basically, I think this is a case of extrapolating problems without also extrapolating the possibilities of solutions. Like the old Malthusian scenario, where Malthus projected population growth without also accounting for the fact that as demand for food rose, new technologies to increase food production would also be developed. We won’t get to a situation where most people are using LLMs for answers without LLMs being good at giving answers.


  • Thanks for showing that you have no actual arguments.

    You did it first, by jumping to “think of the children!” and analogizing running a program to cannibalism.

    They have no real benefit.

    No need to ban them, then. Nobody will use them if this is true.

    They have insane energy requirements, insane hardware requirements.

    I run them locally on my computer, so I know through direct experience that this is factually incorrect.

    Personal experience aside, if running an LLM query really required “insane” energy and hardware expenditures then why are companies like Google so eager to do it for free? These are public companies whose mandates are to generate a profit. Whatever they’re getting out of running those LLM queries must be worth the cost of running them.

    We are working on saving our planet

    I see you’ve switched from “think of the children!” to “think of the environment!”


  • Depends which 90%.

    It’s ironic that this thread is on the Fediverse, which I’m sure has much less than 10% of the population of Reddit or Facebook or such. Is the Fediverse “dead”?

    This is one of the biggest problems with AI. If it becomes the easiest way to get good answers for most things

    If it’s the easiest way to get good answers for most things, that doesn’t seem like a problem to me. If it isn’t the easiest way to get good answers, then why are people switching to it en masse anyway in this scenario?