

I’m not sure I understand the question
Don’t DM me without permission please


I used to watch Lunduke’s “Linux Sucks” talk every year it came out. What happened to that guy? Was he always the worst and I just didn’t notice until 2020?


Well that’s embarrassing


This is encouraging!


I suppose you have a point; I’m not sure how my school would feel about me open sourcing my project code though 😅
Once I have more time for personal projects, I plan to open source everything.


You know we don’t like corn syrup, right?


Cleaning isn’t the goal; appearing clean and absorbing excess oil is the goal, which it achieves easily.


That’s why they call it cornhole 😉


I prefer offpunk.


Some of them, yeah. Actually, probably most of them, as long as they’re not the sugar-free ones; those all taste like fake sugar.


You don’t need to be rude.
My original comment was in reply to someone looking for this type of information; the conversation then continued.
I’m disengaging: frankly, I don’t want to deal with this today, and I don’t have time for rude people.


I like when it insists I’m using escape characters in my text when I absolutely am not, and I have to convince a machine I didn’t type a certain string of characters because, on its end, those are absolutely the characters it received.
The other day I argued with a bot for 10 minutes that I typed a literal right caret and not the HTML escape sequence that decodes to one. Then I realized I was arguing with a bot, went outside for a bit, and finished my project without the slot machine.
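For anyone who hasn’t hit this: the mixup is between the literal character and its HTML entity. A quick illustration in Python:

```python
# The literal character vs. its HTML entity: they render the same after
# decoding, but they are different strings on the wire.
import html

caret = ">"       # what I typed
entity = "&gt;"   # what the bot claimed it received

print(caret == entity)                  # False: different strings
print(caret == html.unescape(entity))   # True: the entity decodes to ">"
```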


Yes, precisely.
If you’re trying to use large models, you need more RAM than consumer-grade NVIDIA products can supply. Without system RAM sharing, the models error out and start repeating themselves, or just crash and need to be restarted.
This can be worked around with CPU inferencing, but that’s much slower.
An 8b model will run fine on an RTX 30-series card; a 70b model absolutely will not. BUT you can do CPU inferencing with the 70b model if you don’t mind the wait.
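Rough back-of-envelope on why, assuming typical precisions (my numbers, weights only; the KV cache and activations add more on top):

```python
# Back-of-envelope memory estimate for LLM weights. Real usage is higher:
# the KV cache, activations, and framework overhead add several GB.

def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Raw weight storage in GB: parameter count times bytes per weight."""
    return params_billion * bits_per_weight / 8

for params in (8, 70):
    for bits, label in ((16, "fp16"), (4, "4-bit quant")):
        print(f"{params}b @ {label}: ~{weight_memory_gb(params, bits):.0f} GB of weights")

# 8b  @ fp16 -> ~16 GB;  8b @ 4-bit -> ~4 GB (fits an 8 GB RTX 30-series card)
# 70b @ fp16 -> ~140 GB; 70b @ 4-bit -> ~35 GB (exceeds every consumer GPU)
```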


Today I made vegetarian Salisbury steaks using Impossible patties, store-bought broth, and fresh veggies and herbs (and some stuff I had lying around). I spent less than $15 total (Costco, price per unit) on the ingredients. It took 2 hours of cooking.
Assuming a wage of $25/hr, lower than adequate but relatively high in US service fields (the people for whom delivery is super tempting), my meal cost me $65 including my labor. That’s less than delivery of a similar meal would cost, it’s higher quality than anything I could get delivered, and I’ve got leftovers for tomorrow, which I wouldn’t get with delivery.
Delivery is a scam. Gig-economy-based delivery doubly so.
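The math, for anyone checking (all figures from above):

```python
# Sanity check on the all-in cost figure above.
ingredients = 15.00   # Costco ingredient cost, upper bound
hours = 2             # cooking time
wage = 25.00          # assumed hourly wage
print(f"${ingredients + hours * wage:.2f}")  # -> $65.00
```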


8b-parameter models are relatively fast on 3rd-gen RTX hardware with at least 8 GB of VRAM; CPU inferencing is slower and requires boatloads of RAM, but it’s doable on older hardware. These really aren’t designed to run on consumer hardware, but the 8b model should do fine on relatively powerful consumer hardware.
If you have something that would’ve been a high-end gaming rig 4 years ago, you’re good.
If you wanna be more specific, check Hugging Face; they have charts. If you’re using Linux with NVIDIA hardware, you’ll be better off doing CPU inferencing (sketch after the sources).
Edit: Omg y’all, I didn’t think I needed to include my sources, but this is quite literally a huge issue on NVIDIA. NVIDIA works fine on Linux, but you’re limited to whatever VRAM is on your video card; there’s no RAM sharing. Y’all can disagree all you want, but those are the facts. That’s why AMD and CPU inferencing are more reliable and allow for higher context limits. They are not faster, though.
Sources for the NVIDIA stuff:
https://forums.developer.nvidia.com/t/shared-vram-on-linux-super-huge-problem/336867/
https://github.com/NVIDIA/open-gpu-kernel-modules/issues/758
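If you want to try CPU inferencing, here’s a minimal sketch using llama-cpp-python, which is just one of several options; the model filename is a placeholder for whatever quantized GGUF you download:

```python
# Minimal CPU-only inferencing sketch using llama-cpp-python
# (pip install llama-cpp-python). Grab a quantized GGUF from
# Hugging Face first; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=0,  # 0 = pure CPU inferencing, so no VRAM limit to hit
    n_ctx=4096,      # context window; more context needs more system RAM
)

out = llm("Explain VRAM vs system RAM in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```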


Just pay attention, and whenever you see or hear something that makes you say “wait, what?”, go look it up and educate yourself.
You don’t necessarily need to participate in the conversation; observe it, gather opinions, and cross-reference with trusted sources.
Of course, in person you can’t just whip out your phone and look stuff up, or shit, I guess you can, but you can always say “I’m not informed enough to speak on this, can I get back to you?” and in my experience people will respect that, giving you a chance to go educate yourself on whatever the topic at hand was.


After a few years of living downtown in a not-great city, my automatic on-the-street response has become a short “Fuck off, bud.”
I’ve never been followed after saying it, only after trying not to be a bitch.


Oh, well, I mean, arson isn’t normally cute, is it?