I was wondering why there isn't, or whether there is, an open-source (free or not) Street View like Google Maps has. I like geography as a hobby, and I found GeoHub, which sparked my interest as a free GeoGuessr alternative. I just don't understand why Google is the only mapping service people go to. Is Google the only one that has street view? Is this a hard business model to achieve?
Please, I'd love to know :)
No, sorry, that's a completely different amount of data. Look at how large (or rather, how comparatively tiny) the OSM data set is, then consider how much photo data you'd need for that same coverage.
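A back-of-envelope sketch of the scale (every number below is an assumption, purely to show the order of magnitude):

```python
# Back-of-envelope estimate: street-level imagery vs. the OSM planet file.
# Every input below is a rough assumption, not a measured figure.

osm_planet_gb = 80            # assumed size of a recent OSM planet PBF, in GB

road_network_km = 60_000_000  # assumed total length of the world's roads
capture_interval_m = 10       # assumed one panorama every 10 metres
panorama_mb = 10              # assumed compressed size of one 360° panorama

panoramas = road_network_km * 1000 / capture_interval_m
imagery_pb = panoramas * panorama_mb / 1_000_000_000  # MB -> PB

print(f"panoramas needed: {panoramas:,.0f}")      # ~6 billion
print(f"imagery volume:   {imagery_pb:,.0f} PB")  # ~60 PB
print(f"OSM planet:       {osm_planet_gb} GB")
```

Even with those charitable assumptions you land in the tens of petabytes, several orders of magnitude above the OSM planet file.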
you have a point there.
and yet we have the internet archive… so it seems to be possible.
Even the internet archive is nothing in comparison to the image data used for street view.
It's of course totally "technically" possible, but it would require some veeery generous donations from some pretty rich people.
Even if you got people to use their phones to just record everything around them, geo-tag it, and upload it, all that data would still have to be stitched together by some big-ass GPU cluster doing things that currently only big tech companies can do properly at scale.
honest curiosity, don't want to start a flame war: do you have numbers for that?
stitching is no longer a requirement because of 360° cameras, is it? it could also be done on the client side if really needed. if people can use JOSM to contribute to OSM, they can use some other software for stitching?!
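for what it's worth, the stitching part itself already runs on consumer hardware; a minimal sketch with OpenCV's built-in stitcher (the filenames are placeholders, and this assumes overlapping shots taken from roughly one viewpoint):

```python
# minimal client-side stitching sketch using OpenCV's high-level Stitcher
import cv2

# placeholder filenames: overlapping shots taken while rotating the camera
paths = ["shot_0.jpg", "shot_1.jpg", "shot_2.jpg"]
images = [img for p in paths if (img := cv2.imread(p)) is not None]

stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print(f"stitching failed with status {status}")
```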
have you seen that the internet archive also has quite high-res book scans and videos?
if you're aiming to cover every small street of the whole world tomorrow, you are right: it won't work. but nothing stops us from starting with a single city or a region?
let's agree to disagree :-)
Is the current generation of street view still just snapshots from different positions with a 360° camera? I thought it was proper 3D scans with images mapped onto them by now. I admit I haven't actually used it in years.
But yeah, if it's just isolated 2D images then it's probably not as much data as I thought. The processing would still be tough, I think, but I don't know enough to even guess that properly.
I think doing small-scale demonstrations would be cool. The community could probably learn and improve a lot from it, and eventually it could be scaled up.
If it was encoded as super-low-framerate 360° video files and you just seek to the place in the stream that matches the coordinates, it could use less storage and bandwidth than other solutions, but it's still going to be a lot of bytes.
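A toy sketch of that lookup; the index layout, the filenames, and the 1 fps figure are all invented here:

```python
# Sketch of "seek a low-framerate 360° video by coordinates".
# The index format, filenames, and 1 fps assumption are made up for the demo.
import math
import subprocess

FPS = 1.0  # assumed: one panorama frame per second of video

# hypothetical index: frame number -> (lat, lon) where that frame was captured
frame_positions = {
    0: (48.8584, 2.2945),
    1: (48.8585, 2.2947),
    2: (48.8586, 2.2949),
}

def nearest_frame(lat: float, lon: float) -> int:
    """Naive nearest-neighbour search over the index (fine for a demo)."""
    return min(frame_positions,
               key=lambda f: math.dist(frame_positions[f], (lat, lon)))

def extract_panorama(video: str, lat: float, lon: float, out: str = "view.jpg"):
    ts = nearest_frame(lat, lon) / FPS
    # ffmpeg's -ss seeks to the timestamp; -frames:v 1 grabs a single frame
    subprocess.run(["ffmpeg", "-y", "-ss", str(ts), "-i", video,
                    "-frames:v", "1", out], check=True)

extract_panorama("street_tour.mp4", 48.8585, 2.2947)
```

With HTTP range requests a client could seek the same way without downloading the whole file, which is where the bandwidth savings would come from.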
The one from Apple definitely feels 3D while moving the camera around, but I guess that's just post-processing effects.