

Hell, I don’t submit help requests without a confident understanding of what’s wrong.
Hi Amazon. My cart, ID xyz123, failed to check out. Your browser JavaScript seems to be throwing a “null is not an object” error on line 173. I think this is because the variable is overwritten on line 124, but only when the number of items AND the total cart price are prime.
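(For the curious: the kind of bug that ticket describes might look something like the sketch below. Everything in it is made up for illustration; the cart object, the prime-check helper, and the function names are mine, not Amazon's.)

```javascript
// Hypothetical sketch of the bug in the mock ticket above: a shared
// variable gets clobbered on a rarely-hit branch, so later code
// dereferences null. None of this is real checkout code.

function isPrime(n) {
  if (n < 2 || !Number.isInteger(n)) return false;
  for (let i = 2; i * i <= n; i++) {
    if (n % i === 0) return false;
  }
  return true;
}

let checkoutSession = { cartId: "xyz123" };

function applyDiscounts(itemCount, totalPrice) {
  // "line 124": the session variable is reused as a scratch value,
  // but only when both numbers happen to be prime.
  if (isPrime(itemCount) && isPrime(totalPrice)) {
    checkoutSession = null; // oops
  }
}

function checkout(itemCount, totalPrice) {
  applyDiscounts(itemCount, totalPrice);
  // "line 173": blows up with "null is not an object" (Safari) or
  // "Cannot read properties of null" (Chrome) on that rare branch.
  return checkoutSession.cartId;
}

console.log(checkout(4, 20)); // fine
console.log(checkout(3, 7));  // TypeError
```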
Generally, by the time I’ve written up my full support request, I have either solved my problem or solved theirs.




I tripped over this awesome analogy that I feel compelled to share: “[AI/LLMs are] a blurry JPEG of the web.”
This video pointed me to this article (paywalled).
The headline gets the major point across. LLMs are like taking the whole web as an analog image and lossily digitizing it: you can make out the general shape, but details go missing and compression artifacts creep in. Asking an LLM is, in effect, googling your question in more natural language… but instead of getting the source material or memes back as a result, you get a lossy reconstruction of those sources. And it’s random by design, so ‘how do I fix this bug?’ could return ‘rm -rf’ one time and something that looks like an actual fix the next.
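A toy way to picture the “random by design” part: the model samples its next output from a probability distribution, so the same prompt can come back with a different answer each time. The snippet below is a made-up sketch of that idea (the candidate answers and their probabilities are invented), not how any real model actually works.

```javascript
// Toy "LLM": pick a completion at random, weighted by probability.
const completions = [
  { text: "check that the variable isn't reassigned before line 173", p: 0.6 },
  { text: "wrap it in a try/catch and move on", p: 0.3 },
  { text: "rm -rf / (please don't)", p: 0.1 },
];

function sample(options) {
  let r = Math.random();
  for (const option of options) {
    if ((r -= option.p) <= 0) return option.text;
  }
  return options[options.length - 1].text;
}

// Ask the same question a few times, get different "answers".
for (let i = 0; i < 3; i++) {
  console.log("how do I fix this bug? ->", sample(completions));
}
```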
Gamers Nexus just did a piece about how YouTube’s AI summaries could be manipulative. I think that’s a real possibility and the risk is real (go look at how many times elmo has said he’ll fix Grok for real this time), but another big takeaway was how bad LLMs still are at numbers, or at tokens that have data encoded in them: there was a segment where Steve called out the inconsistent model names, and how the AI would mistake a 9070 for a 970, etc., or make up its own models.
Just like googling a question might give you a troll answer, querying an AI might give you a regurgitated, low-res troll answer. ew.