

That’s how they work now: trained on bad data and designed to always come back with some kind of positive-sounding answer.
They absolutely can be trained on actual data, trained to give less confident answers, and have an error-checking pass run on their output after they formulate an answer.
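Roughly the kind of thing I mean, as a minimal sketch (call_model is just a hypothetical placeholder for whatever model API you actually use, not any specific vendor's SDK):

```python
# Minimal generate-then-verify sketch. `call_model` is a hypothetical
# stand-in for whatever LLM API you use; wire it up yourself.
def call_model(prompt: str) -> str:
    raise NotImplementedError("connect this to your model of choice")

def answer_with_check(question: str) -> str:
    draft = call_model(f"Answer concisely: {question}")

    # Second pass: have the model (or a separate checker model) list the
    # factual claims in the draft and mark any it cannot support.
    critique = call_model(
        "List each factual claim in the following answer and mark it "
        "SUPPORTED or UNSUPPORTED:\n" + draft
    )

    if "UNSUPPORTED" in critique:
        # Revise instead of shipping a confident-sounding guess: drop or
        # hedge the claims the checker flagged.
        return call_model(
            "Rewrite this answer, removing or qualifying the unsupported "
            f"claims listed below.\n\nAnswer:\n{draft}\n\nCritique:\n{critique}"
        )
    return draft
```

That second pass costs extra compute per answer, which is exactly the kind of money-and-effort tradeoff I’m talking about.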
Perfect? Who said anything about perfect data? I said actually fact-checked data. You keep moving the bar on what’s possible as an excuse to not even try.
They could indeed build models trained on actual data from expert sources, and then have their agents check those sources for accuracy when they generate an answer. They don’t want to, for all the same reasons I’ve already stated.
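Again as a hedged sketch, not a specific product: here’s what “check the expert sources when creating an answer” could look like. search_corpus and call_model are hypothetical placeholders for a vetted retrieval index and a model API:

```python
# Hypothetical sketch of grounding an answer in a curated expert corpus.
from typing import List

def search_corpus(query: str) -> List[str]:
    """Return passages from a vetted source index relevant to the query."""
    raise NotImplementedError("back this with your own retrieval layer")

def call_model(prompt: str) -> str:
    raise NotImplementedError("connect this to your model of choice")

def grounded_answer(question: str) -> str:
    passages = search_corpus(question)
    if not passages:
        # No expert source covers it: say so rather than improvising.
        return "I couldn't find a vetted source for that, so I won't guess."
    context = "\n\n".join(passages)
    return call_model(
        "Answer using ONLY the sources below, and cite which one you used. "
        "If they don't answer the question, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
```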
It’s possible, and it doesn’t “doom” LLMs; it massively increases their accuracy and actual utility, at the cost of money, effort, and killing the VC hype cycle.