

This isn’t just a function; it’s a bold restatement of what it means to write code — a symphony of characters, questioning the very nature of the cutting-edge language models that I want to beat with hammers.


I mean, because it’s a risk that’s obvious even to me, and it’s not my job to think about it all day. I guess they could just be stupid. 🤷


> I’m not sure I understand what you’re saying. By “the commenter”
I was talking about you, but not /srs, that was an attempt @ satire. I’m dismissing the results by appealing to the fact that there’s a process.
> negative reward
Reward is an AI maths term: it’s the value the weights are updated to maximise, similar to “loss” or “error”, if you’ve heard those (rough sketch at the bottom of this comment).
I don’t believe this makes sense either way because if the model was producing garbage tokens, it would be obvious and caught during training.
Yes this is also possible, it depends on minute details of the training set, which we don’t know.
Edit: As I understand it, these models are trained in multiple modes: one where the model tries to predict text (supervised learning), but there are also others where it’s given a prompt and the response is sent to another system to be graded, e.g. for factual accuracy. It could learn to identify which “training mode” it’s in and behave differently. Although I’m sure the ML guys have already thought of that & tried to prevent it.
> it still does not make it sentient (or even close).
I agree, I noted this in my comment. Just saying, this isn’t evidence either way.
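
A rough sketch of what reward does mechanically, as a toy two-armed bandit trained with REINFORCE in plain Python. Everything here (the bandit, the single-weight policy, the update rule) is made up for illustration and has nothing to do with how any production model is actually trained.

import math
import random

theta = 0.0  # the single "weight" of our toy policy
lr = 0.1     # learning rate

def p_arm1(theta):
    # Logistic squashing: maps the weight to a probability of pulling arm 1.
    return 1 / (1 + math.exp(-theta))

for step in range(2000):
    p = p_arm1(theta)
    arm = 1 if random.random() < p else 0
    reward = 1.0 if arm == 1 else -1.0  # arm 1 is secretly the good arm
    # REINFORCE update for a Bernoulli policy: d/dtheta log pi(arm) = arm - p.
    # The reward scales (and signs) this weight update -- that's all it is.
    theta += lr * reward * (arm - p)

print(f"p(arm 1) after training: {p_arm1(theta):.3f}")  # approaches 1.000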


You cannot know this a priori. The commenter is clearly producing a stochastic average of the explanations that up the advantage for their material conditions.
For instance, many SoTA models are trained using reinforcement learning, so it’s plausible that they’ve learned that spamming meaningless tokens can delay negative reward (this isn’t even particularly complex). There’s no observable difference in the response; without probing the weights we’re just yapping.
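
To make the “delay negative reward” bit concrete: under the usual discounted return, the same penalty counts for less the later it arrives, so padding with filler tokens can look like a win to the optimiser. Toy numbers with an assumed discount factor of 0.9; no claim about any real model’s training setup.

gamma = 0.9  # discount factor: future rewards count for less

def discounted_return(rewards):
    # G = sum over t of gamma**t * r_t
    return sum(gamma**t * r for t, r in enumerate(rewards))

print(discounted_return([-1.0]))           # -1.0    (penalty right away)
print(discounted_return([0, 0, 0, -1.0]))  # ~ -0.729 (same penalty, padded)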


Yeah I mean it’s definitely possible to write a mostly sensible string-number equality function that only breaks in edge cases (rough sketch below), but at this point it’s all kinda vibes-based mush, and the real question is like… Why would you want to do that? What are you really trying to achieve?
The most likely case is that it’s a novice who doesn’t understand what they’re doing, and the Python setup you describe does a better job of setting up guardrails.
I don’t really see the connection to concatenation; that’s kind of its own thing.
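
For what it’s worth, a “mostly sensible” version in Python might look like this; loose_eq and its rules are invented for illustration, and the last line shows the kind of edge case that quietly breaks it.

def loose_eq(a, b):
    # Made-up helper: parse both sides as floats and compare numerically,
    # falling back to ordinary equality if parsing fails.
    try:
        return float(a) == float(b)
    except (TypeError, ValueError):
        return a == b

print(loose_eq("100", 100))  # True  - the happy path
print(loose_eq("1e2", 100))  # True  - already debatable
print(loose_eq("", 0))       # False here, True in some other languages
print(loose_eq("9007199254740993", 9007199254740992))  # True (!) - float precision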


Not quite. As the previous commenter said, every number has more than one string representation (e.g. 100 -> “100”, “1e2”). So there’s no sensible way to write a pure function handling that; you’re just cooked no matter what you do.
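
Concretely, in Python (strings picked arbitrarily): parsing is many-to-one, so there’s no canonical string for a number to compare against.

# Many spellings, one value: string -> number is many-to-one.
for s in ["100", "1e2", "100.0", "+100", "  100  "]:
    print(repr(s), "->", float(s))  # every one of these prints 100.0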


gooner sex porn dot net
Amazing.
What are your thoughts on the 1999 War in Dagestan?


Diabolical


Lmao I think I remember that one! Big Cart has a network of Lemmy shills, trust no one. 🕵️‍♂️
Because he’s a politician and he understands that his rhetoric has to ramp up slowly to convince his audience. Go to your local pub/bar, say that to someone who isn’t a terminally online ML, and see how they react.


Commie-site keys??? LIBERAL!!! GUT IT!


Nah, they’re supposed to be unique. If it were that, it’d at least be a design choice potentially worthy of criticism. But consider: who’s more likely to have fucked up a database task, Musk, or the designer, someone with a degree (a real degree) in this topic?
Don’t worry: since he’s so big on transparency, I’m sure he’ll release the schema so we can check his work… 🙄


Sounds like he got confused looking at a view of a join.
SELECT holder_name, amount
FROM account JOIN transaction ON transaction.account_id = account.id;
-- WTF!! THERE'S DUPLICATES!!!
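-- (not actually duplicates: holder_name repeats once per matching
--  transaction, which is exactly what a one-to-many join does)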
real 😭
You’re absolutely right! I used more than 10 words in my prompt. Cry about it.