Considering how common and easy eye tracking is, this seems like some shitty science.
whaaaat surely BYU, the school that claimed to have done cold fusion, is an upstanding pillar of academic research
i hate defending byu, but wasn’t that UofU?
I recently watched a BobbyBroccoli video on it. The controversy mostly surrounded UofU; a quick search shows that Pons and Fleischmann are from UofU. The video also mentioned that BYU claimed to discover cold fusion too, but not the energy-of-the-future, self-sustaining kind.
i must have missed byu's announcement. no worries, there have been a lot of hoaxes in that area.
UwU?
Nah, UwU was a community college back then
Shitty science at BYU? Surely not!
Study designed around a conclusion using a borderline invalid method.
This would be the perfect use case for that fancy Apple VR headset they released a year or two ago. Since it has built-in eye tracking, it would be easy to set up a test in a controlled environment where participants navigate the scene while looking around.
Navigating that scene in real life (or even simulated) would make the data orders of magnitude more annoying to interpret. On a static image you can just overlay all eye movements and produce a heatmap. But for a subject that's actually (or virtually) moving, none of the data would line up, and you'd have to manually work out which focus points actually coincided.
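To illustrate the static-image case: since every participant sees the exact same frame, their gaze samples share one coordinate system and can just be binned together. A minimal sketch (the coordinates and image size here are made up for illustration):

```python
import numpy as np

def gaze_heatmap(points, width, height, bins=50):
    """Overlay gaze samples from all participants on one static image
    by binning them into a 2D histogram (a simple heatmap).
    `points` is a list of (x, y) gaze coordinates in pixels."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Row index = y, column index = x, matching image layout.
    heat, _, _ = np.histogram2d(
        ys, xs,
        bins=bins,
        range=[[0, height], [0, width]],
    )
    return heat

# Hypothetical samples: three viewers fixated the same spot,
# one looked elsewhere.
samples = [(100, 120), (100, 120), (100, 120), (400, 300)]
heat = gaze_heatmap(samples, width=640, height=480)
# The shared fixation point lands in a single bin with count 3.
```

For a moving subject, no such shared coordinate system exists, which is exactly why the overlay trick stops working.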
Put the subject in a self-driving kart and make it follow the same path for all of them.
Sure, but any decent webcam and monitor can do this.
I feel like eye tracking would be used if they were to study this concept more deeply. That data would be more complicated to sift through, given how much data and how many variables might come into play. Definitely more telling, but also harder to analyze.
How so?
https://sopuli.xyz/post/41573721/22043739
Thanks. But you can use eye tracking on static images with just a good webcam on a monitor.
Also, in a live environment presumed static (no people or traffic, etc.), image stabilization tech makes things much simpler.