  • Running an LLM locally is entirely possible with fairly decent modern hardware; you just won’t be running the largest versions of the models. You’ll be running ones intended for local use, almost certainly quantized versions, which are meant to cover 90% of use cases. Most people aren’t doing super complicated shit with these models anyway. They’re asking the same questions they typed into Google before, just with the phrasing they used 20+ years ago with Ask Jeeves.
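
    For the curious, here’s a minimal sketch of what local inference can look like. It assumes the llama-cpp-python bindings and a quantized GGUF model file you’ve already downloaded; the filename below is a placeholder, not a specific recommendation.

    ```python
    # Minimal local inference with a quantized model via llama-cpp-python.
    # pip install llama-cpp-python; the .gguf path below is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # any quantized GGUF model
        n_ctx=2048,        # context window; bigger costs more RAM
        n_gpu_layers=0,    # raise this if you have a GPU with spare VRAM
    )

    out = llm("Q: What's a good recipe for pancakes? A:", max_tokens=128)
    print(out["choices"][0]["text"])
    ```

    A 4-bit quantized 7B model like this runs in roughly 4–6 GB of RAM, which is why “fairly decent modern hardware” really is enough.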


  • It’s also very likely that a significant number of their corporate customers are actively saying they won’t purchase AI-oriented hardware for security reasons, so they’re publicly spinning the consumer angle to grab the holdouts everyone else is obviously abandoning or ignoring as a side effect. That may be giving them too much credit, but despite being merely okay at just about everything, they’re still one of the large OEMs that has survived.