There are some videos on YouTube of people running local LLMs on the newer M4 chips, which have pretty good AI performance. Obviously, a 5090 is going to destroy it in raw compute, but the large unified memory on Apple Silicon is nice.
That said, there are plenty of small ITX cases in the 13-15L range that can fit a large Nvidia GPU.