• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: June 1st, 2023

  • I try to use both equally, because I’m always on the hook for picking the “doomed” standard in any 50/50 contest.

    I can relate to that. It usually isn’t a coin flip for me, though. I’ll align with one technology over another because I can genuinely see an advantage, even if that technology is the underdog from the beginning. Consider that we’re evaluating Firefish vs. Lemmy vs. Kbin when all of them combined are the underdog against more established social platforms. I engage with all three (and others still), because I don’t know the future.



  • I think a human might consider the meaning of what is being said, whereas an LLM is only going to consider which token is the best one to use next. Humans might not be infallible, but they are presently better at detecting obvious BS that would slip past an AI undetected.
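
    To illustrate the token-picking point, here is a toy sketch of greedy decoding; `next_token_probs` is a made-up stand-in for a real model’s forward pass, and the scores are fabricated. Nothing in it checks whether a continuation is true, only whether it scores as likely:

    ```python
    # Toy greedy decoder: always append the highest-scoring token.
    # next_token_probs() is hypothetical -- a real LLM would score every
    # vocabulary token given the context so far.
    def next_token_probs(context: str) -> dict[str, float]:
        return {"Glorbo": 0.41, "the": 0.33, "a": 0.26}  # fabricated scores

    def greedy_complete(prompt: str, steps: int = 3) -> str:
        text = prompt
        for _ in range(steps):
            probs = next_token_probs(text)
            text += " " + max(probs, key=probs.get)  # plausibility, not truth
        return text

    print(greedy_complete("Players are excited about"))
    ```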

    Maybe this is an opportunity we haven’t considered: the chance to create a Turing CAPTCHA test. We can’t use Glorbo for it, because it has already been written down, but perhaps it makes sense to have a nonsensical code phrase people can use to identify AIs: markers intentionally planted in LLM training data and buried in articles written by human authors, plus a challenge/response which is never written down and is only passed verbally through real human-to-human interaction.



  • We have, and there are still things to solve before this is completely practical. This is still different from connecting to a mainframe over a 3270 terminal. A closer example of how this would work is forwarding X11 from a remote system, or using SSH to tunnel to a server where I’ve run screen (sketched below). If I’ve connected to a GUI application running on a server, or reconnected to my SSH session, it matters less where I’m connecting from. Extending this concept to Windows, you wouldn’t even need local storage for most needs.

    It won’t be practical for places with poor network connectivity, but where the network is reliable, high bandwidth, and low latency, it won’t be very discernible from local use for most business applications. This is probably the biggest driving force behind XCloud. If Microsoft can make games run across networks with minimal problems, business applications are going to do just fine. XCloud works great for me, allowing me to stream with few problems. That’s less true for others in my family, so clearly this isn’t something which can roll out to everyone, everywhere, all at once.

    I think it would be great to be able to spin up additional vCPU cores, or grow drive space or system RAM, as needed per process, so that I’m not wasting cycles or underutilizing my hardware. It seems like this would become possible with this sort of platform.
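
    As a concrete sketch of the “session lives on the server” idea: assuming OpenSSH on the client and a detached screen session named “work” already running on the server (both illustrative assumptions), reattaching from any machine could look like this:

    ```python
    # Minimal sketch: reattach a persistent server-side session from any client.
    # Assumes OpenSSH is installed locally and the server already has a
    # screen session named "work" -- both are illustrative assumptions.
    import subprocess

    HOST = "user@server.example"  # hypothetical server

    # -X forwards X11 so remote GUI apps draw on the local display;
    # -t allocates a terminal so screen can attach;
    # `screen -dr work` detaches the session elsewhere and reattaches it here.
    subprocess.run(["ssh", "-X", "-t", HOST, "screen", "-dr", "work"])
    ```

    The same running programs follow you from client to client; only the display moves.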


  • For a business, I see this design as a strong benefit. The work done for a company is the property of that company under most hiring contracts, so the work done on a remote system can be tightly controlled. At the same time, it would allow someone to use their own thin client for both professional and personal work and keep the two isolated. For someone doing freelance work, it makes sharing a natural extension of the process, and access can be granted or revoked as contracts dictate. That seems like an advantage to corporate IT departments.

    As for individuals, I don’t see how this takes away ownership. Regulations will be updated so users can request their data in compliance with the GDPR, so nothing would become completely locked up. Should that ever be challenged, I don’t think any jurisdiction would say that Microsoft owns the data. What a user will be able to do with the bits they receive is a different question.


  • Long term, there is some benefit to this sort of concept. You aren’t going to have as much freedom to turn your cloud-based OS into a custom build, but what you will have is a machine which never has downtime for patches and security updates. The user will be running their apps remotely, using all the power and hardware of a data center, and an app’s instance can migrate from one host PC to another seamlessly, without the end user ever noticing. Furthermore, a user can access all their applications and data from whatever client they are using, and their session will migrate from their terminal, to their phone, to their AR HMD.

    It isn’t going to be a change which happens overnight, and it will be more like how car engines have become less user-serviceable but more reliable and efficient. It will be a different experience for sure, but it has potential value beyond being a way to charge people subscriptions.
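
    A toy sketch of the device-hopping part: if all session state lives server-side, any client just attaches to it. The store and names here are invented for illustration:

    ```python
    # Toy server-side session store: the session outlives any single client.
    sessions: dict[str, dict] = {}  # hypothetical in-memory store

    def attach(user: str, device: str) -> dict:
        # The first client creates the session; later clients resume the same state.
        state = sessions.setdefault(user, {"open_apps": []})
        print(f"{device} attached: {state}")
        return state

    desk = attach("alice", "terminal")  # terminal attached: {'open_apps': []}
    desk["open_apps"].append("editor")
    attach("alice", "phone")            # phone attached: {'open_apps': ['editor']}
    ```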






  • It got popular and lost its sense of community. It used to be common, even in large subreddits, to see someone’s username and recognize it again in another subreddit on the site. I made friends this way. Less often, you would see someone you knew IRL: do you let them know?

    Once karma became a currency, sometimes exchanged for real currency to buy influence, that’s when Reddit started changing. Like many things, it fell to commercialization.



  • For Mixed Reality devices, it is in the same price range as the HoloLens 2 or Magic Leap 2, the biggest difference being passthrough AR vs. optical AR. A more comparable device is the Quest Pro, but the AR experience is probably much higher resolution, and with binocular color cameras and depth sensors it hopefully avoids distortion.

    Yes, it is a lot for a VR device, but for a standalone MR device it is comparable, with higher fidelity, so it isn’t significantly more expensive than other devices in the same category… arguably it is the best value in the category if the cameras and screens have a wide FOV with high resolution and can accurately reproduce the HDR of an environment.