Why AI hasn’t made the smart home smarter
The smart home has been broken for over a decade. From day one the goal was to lock users into an ecosystem and invade their privacy. Actually providing useful, reliable products never even registered as a goal.
The one way to do decent home automation is with a locally run Home Assistant and Zigbee or Z-Wave. It should only rely on the Internet for resources that are truly non-local, like weather reports.
Thread/Matter might also be becoming an option. At this point I’m still watching to see what they do with it.
Yeah, Home Assistant is the way to go, but it’s been a slow progression because every company is more interested in proprietary lock-in than in pushing for standards like Z-Wave. It’s cloud-based bullshit everywhere, which is exactly the wrong thing for in-home privacy. There needs to be a harder push for standard APIs and local wireless protocols.
This shit should be fucking easy. HVAC systems are still wired like it’s the 1930s, and all it takes is one company to swoop in and create an all-in-one solution that uses standards and monitors indoor/outdoor/room temps, humidity, occupancy, etc. It could control smart vents to close off rooms that aren’t in use, turn on the humidifier when humidity is too low and it isn’t too cold outside, and hook into other rules from HA.
Doing the right thing could earn them millions, but nobody wants to bother actually doing it.
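To make the “this should be easy” point concrete, here’s a minimal Python sketch of the kind of rule being described: close vents in unoccupied rooms, and run the humidifier only when indoor humidity is low and it isn’t too cold outside. All names, thresholds, and sensor values here are made up for illustration; in a real setup the readings would come from Home Assistant entities over Zigbee/Z-Wave (e.g. via pyscript or an automation), not hard-coded values.

```python
from dataclasses import dataclass

# Hypothetical snapshot of what the sensors report. In practice these values
# would be pulled from Home Assistant entities, not hard-coded.
@dataclass
class RoomState:
    name: str
    temp_c: float
    humidity_pct: float
    occupied: bool

def vent_should_close(room: RoomState) -> bool:
    # Close the vent when nobody is using the room.
    return not room.occupied

def humidifier_should_run(indoor_humidity_pct: float, outdoor_temp_c: float) -> bool:
    # Only humidify when the air is dry AND it isn't so cold outside that the
    # added moisture would condense on windows and exterior walls.
    # The 35% / -5 C thresholds are arbitrary illustrative numbers.
    return indoor_humidity_pct < 35.0 and outdoor_temp_c > -5.0

if __name__ == "__main__":
    rooms = [
        RoomState("office", 21.5, 32.0, occupied=True),
        RoomState("guest room", 19.0, 30.0, occupied=False),
    ]
    outdoor_temp_c = 2.0

    for room in rooms:
        action = "close vent" if vent_should_close(room) else "keep vent open"
        print(f"{room.name}: {action}")

    avg_humidity = sum(r.humidity_pct for r in rooms) / len(rooms)
    print("humidifier:", "on" if humidifier_should_run(avg_humidity, outdoor_temp_c) else "off")
```

That’s the entire “hard” part: a handful of comparisons over local sensor data. Nothing about it needs a cloud round trip or an LLM.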
I have a gripe with this article, and it’s the way their “expert” Riedl talks about AI: the anthropomorphic personification inherent in the language he uses.
AI doesn’t think. It can’t overthink. It doesn’t “misunderstand”. It doesn’t understand. It doesn’t do context. So while I understand that this person is trying to communicate the differences between these two types of technology, this language invites an unreasonable overestimation of the tech’s capabilities, leading some people to believe the tech is more than it is.
Some people on another thread about the same article were upset that this writer bought a coffee machine with AI integration. But that’s to be expected of people who write about tech. They try that tech out. Experience it so they can write about it. See what it does. What it’s good at. What it’s bad at. This is how we get reviews.
Yep, “AI” is a very fancy database query masquerading as science fiction.
Society has been steadily forgetting the importance of reliability, all in the name of convenience. And in the end, you get neither.
“They don’t make it like they used to”. Sure. Sure. Old man yelling at clouds. Blah blah. But when your light switches stop working because of some overly complex system that requires the switching data to travel twice around the world just to fucking turn a light on (or an AI to invent 15 Python scripts and a mathematical proof just to add two integers together), you’ve got a really fucking fragile system.
And you know what isn’t convenient? Fucking fragile products that break as soon as you touch them. Who the fuck wants a hammer made out of salami? Sure, it might look like a hammer, it might taste great, but it can’t drive a nail for shit. That’s a garbage product that belongs in the garbage.
An LLM can tell me a (lame) joke. So can Bob. Bob can also turn on the lights, and is pretty good at that. But those things together don’t automatically mean an LLM is good at turning on lights. LLMs are fragile by design, just like the salami hammer!
Stay in your fucking lane tech companies.