@gzrrt @pineapple Yeah - ideally, any voice control processing or recordings should never leave the device it’s used on. At worst, the local network.
It’s so annoying that the tech for voice recognition became usable before mobile processing power caught up but after mobile bandwidth was enough to offload the processing to someone else’s computer.
There doesn’t need to be a keyboard.
Just good hand gestures which can’t be performed by accident, and good face recognition software.
If the Apple headset has this, I’m gonna go bankrupt.
Don’t think anything can actually replace the power and expressiveness of keyboards and text interfaces - that’s always going to be the bottom layer for a productive setup (i.e., you need to actually be able to write code, shell scripts, etc. to control your machine).
Guess what I really want is just some kind of Unix machine that hums along 24/7 in the background, with many different paradigms for interacting with it when you don’t have (or want) a standard keyboard and display. Putting a display over my face feels like a giant leap in the wrong direction.
I got to try messing around with a HoloLens a couple of years back. The hand tracking wasn’t perfect but it was pretty cool. It read my “typing in the air” gestures to set a WPA2 key very accurately (much to my surprise). The demo I was playing around in (picking up and moving virtual packages around a model city to control drones flying around that part of the convention center) was pretty fun, too.
I don’t see any downside at all if it’s layered on top of some other (very capable) keyboard-driven UI that can do all the same things.
The downside is that no existing tech company has enough self-control to actually keep these kinds of recordings private.
That’s why we need something open-source and self-hosted.
Several such solutions already exist. Problem is, only folks like us mess around with it. Non-geeks, not so much.
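For example, here’s a minimal sketch of the fully-local idea in Python, assuming the open-source openai-whisper package (plus ffmpeg) is installed and “command.wav” is just a placeholder recording - the transcription happens entirely on your own machine:

    # transcribe a recording entirely on the local machine;
    # nothing is sent over the network
    import whisper

    model = whisper.load_model("base")        # small model, runs fine on CPU
    result = model.transcribe("command.wav")  # placeholder audio file
    print(result["text"])

Vosk is another fully offline option along the same lines if you want streaming recognition on weaker hardware.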
Yeah, a keyboard is crucial when you want to code - 100% agree there.