• Redjard@lemmy.dbzer0.com · 2 months ago

    If that is true, how does the brain work?

    Call everything you have ever experienced the finite dataset.
    Constructing your brain from DNA happens in a timely manner.
    Training it does too: you get visibly smarter over time, roughly on a linear scale.

    • xthexder@l.sw0.com · 2 months ago

      I think part of the problem is that LLMs stop learning at the end of the training phase, while a human never stops taking in new information.
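      The difference can be sketched with a toy example (illustrative numbers only, nothing like a real LLM): a "pretrained" model whose parameters are frozen after training, versus an online learner that keeps updating on every new observation.

```python
def train_once(data):
    # "Pretraining": fit once (here, just the mean), then freeze forever.
    return sum(data) / len(data)

class OnlineLearner:
    """Keeps learning after deployment, like a brain that never
    stops taking in new information."""
    def __init__(self):
        self.estimate = 0.0
        self.n = 0

    def update(self, x):
        # Incremental running mean: each new example shifts the estimate.
        self.n += 1
        self.estimate += (x - self.estimate) / self.n

frozen = train_once([1.0, 2.0, 3.0])      # stuck at 2.0 forever
online = OnlineLearner()
for x in [1.0, 2.0, 3.0, 10.0, 10.0]:     # the world keeps producing data
    online.update(x)

print(frozen)                  # 2.0
print(round(online.estimate, 2))  # 5.2
```

      The frozen model never sees the later observations; the online learner tracks them. Doing the online version at LLM scale is exactly the real-time-training compute problem described below.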

      Part of why I think AGI is so far away is that running training in real time like a human would take more compute than currently exists. The focus should be on doing more with less compute, finding new and more efficient algorithms and architectures, not on throwing more and more GPUs at the problem. Right now, 10x the GPUs gets you maybe 5-10% better accuracy on whatever benchmark, which is not a sustainable direction.
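      To see why that's unsustainable, here is a back-of-the-envelope sketch. If benchmark scores followed a rough power law in compute (a hypothetical curve with a made-up exponent, not measured data), 10x the hardware buys only a few percentage points:

```python
def score(compute, alpha=0.05):
    # Hypothetical diminishing-returns curve: score = 1 - compute^(-alpha).
    # alpha is an assumed exponent chosen for illustration only.
    return 1 - compute ** (-alpha)

base = score(1e3)   # some baseline compute budget
tenx = score(1e4)   # 10x the GPUs

print(f"{(tenx - base) * 100:.1f} pp gain for 10x compute")  # 7.7 pp
```

      Under that assumed curve, every additional 10x of compute yields a smaller absolute gain, so scaling hardware alone runs out of road; algorithmic efficiency is the lever that doesn't.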