

Well done at making a claim without any rationale. So proud. Now show how it’s done.
YouTube blocks any attempt to comment with links
Did you learn about linear algebra through ChatGPT? WTF even is “fit a curve”? Who says that? No, there’s just not enough data to actually make that determination. There’s barely any information on the Y axis, and the resolution of both axes is too low to make a precise reading of their values. You can only estimate.
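A minimal sketch of that data-sufficiency point, using made-up readings (the numbers below are assumptions, not values from the actual chart): with only a handful of coarse points, a straight line and an exponential describe the data about equally well, so the shape of the growth can’t be pinned down.

```python
# Made-up readings standing in for a few coarse points eyeballed off a low-resolution chart.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 2.0, 4.0, 6.0, 9.0])

# Linear model: y ~ m*x + c
m, c = np.polyfit(x, y, 1)
linear_sse = np.sum((y - (m * x + c)) ** 2)

# Exponential model fitted in log space: log(y) ~ k*x + b, i.e. y ~ exp(b) * exp(k*x)
k, b = np.polyfit(x, np.log(y), 1)
exp_sse = np.sum((y - np.exp(b + k * x)) ** 2)

print(f"linear SSE: {linear_sse:.2f}  exponential SSE: {exp_sse:.2f}")
# Both errors land in the same ballpark for data this sparse and coarse,
# which is the point: you can only estimate, not determine the curve type.
```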
Yes, those are dots
You’re both discussing maths lmao
No, since you still seem to think it’s the same as linear
Not enough information for a meaningful answer
What exactly are you trying to do with this comment?
The uncertainty comes from reverse-engineering how a specific output relates to the prompt input. It uses extremely fuzzy logic to compute the answer to “What is the closest planet to the Sun?” We can’t know which nodes in the neural network were triggered or in what order, so we can’t precisely say how the answer was computed.
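A toy illustration of that opacity, with completely made-up weights (this is not a real language model, just a sketch): even in a tiny network, all you can observe are numeric activations, and which “nodes” were triggered doesn’t read as a human-traceable account of how the answer was computed.

```python
# Toy illustration (hypothetical weights, not a real language model):
# you only see numeric activations, not a readable account of "why".
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))    # made-up first-layer weights
W2 = rng.normal(size=(4, 3))    # made-up second-layer weights

x = rng.normal(size=8)          # stand-in for an embedded prompt
hidden = np.maximum(0, x @ W1)  # ReLU hidden layer: the "triggered nodes"
logits = hidden @ W2            # scores over three stand-in answers

print("hidden activations:", np.round(hidden, 2))
print("chosen answer index:", int(np.argmax(logits)))
# The activations are just numbers; nothing here explains why that index won.
```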
What exponential growth fundamentally is.
No, no, and no. Exponential growth is always exponential.
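For reference, the standard distinction being argued about here: exponential growth multiplies by a fixed factor at each step, while linear growth adds a fixed amount at each step.

```latex
% Exponential growth: constant ratio between successive steps
f(t) = f(0)\,b^{t}, \quad b > 1, \qquad \frac{f(t+1)}{f(t)} = b
% Linear growth: constant difference between successive steps
g(t) = g(0) + m t, \qquad g(t+1) - g(t) = m
```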
You are, you just don’t know enough about the subject.
Pathologically pretentious
Oh, they’re coming round in popularity, particularly among Gen Z.
I never said discussing LLMs was itself philosophical. I said that as soon as you ask the question “but does it really know?” you are immediately entering the territory of the theory of knowledge, whether you’re talking about humans, about dogs, about bees, or, yes, about AI.
I’ll preface by saying I agree that AI doesn’t really “know” anything and is just a randomised Chinese Room. However…
Acting like the entire history of the philosophy of knowledge is just some attempt to make “knowing” seem more nuanced is extremely arrogant. The question of what knowledge is is not just relevant to the discussion of AI; it is fundamental to understanding how our own minds work. When you form arguments about how AI doesn’t know things, you’re basing them purely on the human experience of knowing things. But that calls into question how you can be sure you even know anything at all. We can’t just take it for granted that our perceptions are a perfect example of knowledge; we have to interrogate that and see what it is that we can do that AIs can’t, or worse, discover that our assumptions about knowledge, and perhaps even about our own abilities, are flawed.
When you debate whether a being truly knows something or not, you are, in fact, engaging in epistemology. You can no more avoid epistemology when discussing knowledge than you can avoid physics when describing the flight of a baseball.
The theory of knowledge (epistemology) is a distinct and storied area of philosophy, not a debate about semantics.
There remains to this day strong philosophical debate on how we can be sure we really “know” anything at all, and thought experiments such as the Chinese Room illustrate that “knowing” is far, far more complex than we might believe.
For instance, is it simply following a set path like a river in a gorge? Is it ever actually “considering” anything, or just doing what it’s told?
Not even remotely.
Was the ellipsis indicative of your deep reasoning?