Dell is now shifting its focus this year away from being ‘all about the AI PC.’

  • halcyoncmdr@lemmy.world
    3 days ago

    Running an LLM locally is entirely possible with fairly decent modern hardware. You just won’t be running the largest versions of the models. You’ll be running ones intended for local use, almost certainly quantized versions. Those are usually intended to cover 90% of use cases. Most people aren’t really doing super complicated shit with these advanced models. They’re asking it the same questions they typed into Google before, just using the phrasing they used 20+ years ago with Ask Jeeves.
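
    As a rough illustration of how little this takes, here’s a minimal sketch of local inference using llama-cpp-python with a quantized GGUF model. The model filename is just a placeholder for whatever quantized weights you’ve downloaded; any similar GGUF file works.

    ```python
    # Minimal sketch: local inference with a quantized model via llama-cpp-python.
    # The model path below is hypothetical; point it at any downloaded GGUF file.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
        n_ctx=4096,       # context window size
        n_gpu_layers=-1,  # offload all layers to the GPU if one is available
    )

    response = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What's a good recipe for pancakes?"}],
        max_tokens=256,
    )
    print(response["choices"][0]["message"]["content"])
    ```

    A Q4-quantized 7–8B model like the one assumed above fits comfortably in the RAM or VRAM of a mid-range consumer machine, which is the whole point: for everyday question-answering you don’t need the full-precision, datacenter-sized version.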