That’s an interesting question: does rendering at smaller sizes actually let the decoder skip parts of the image?

First, the file that’s served is normally loaded in full, because that’s simply how file transfer over the web works. Until fairly recently there was only one size on offer; serving multiple sizes only became widespread with Apple’s ‘Retina’ displays and their different dots-per-inch, hidpi mostly being twice the linear resolution of standard dpi. Some sites, like Wikipedia, also resize images on the fly to a requested target dimension, but that just produces a new, smaller JPEG (or other format). In any case, as far as I know, JPEG itself has no way to send only every second row or anything like that, so you always receive a file of a predetermined size. (Progressive JPEG does let a client stop downloading early and show a coarser approximation, but that’s still the same full-size image, just with less detail.)
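
For illustration, here’s roughly what an on-the-fly resizer like Wikipedia’s thumbnailer has to do. This is a minimal sketch using Pillow; the function name, width cap and quality setting are made up. Note that the server still decodes the whole original, the saving is only in what goes over the wire:

```python
from io import BytesIO
from PIL import Image

def resize_to_width(original_bytes: bytes, max_width: int) -> bytes:
    """Decode the full original, scale it down, re-encode a brand-new JPEG."""
    im = Image.open(BytesIO(original_bytes))
    im.thumbnail((max_width, 1 << 30))   # cap the width; height follows the aspect ratio
    out = BytesIO()
    im.convert("RGB").save(out, format="JPEG", quality=85)
    return out.getvalue()
```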

First-and-a-half, individual web apps can implement their own schemes for serving lower- or higher-res versions that they prepare in advance. E.g. a local analogue to Facebook almost certainly serves pre-generated low-res copies for browsing in the app or on the site, while keeping the full-resolution original available on request via a menu.
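
A sketch of that “prepare in advance” approach, again with Pillow; the preset widths and the naming scheme are hypothetical:

```python
from PIL import Image

PRESET_WIDTHS = (320, 640, 1280)   # hypothetical rendition sizes

def prepare_renditions(path: str) -> list[str]:
    """Pre-generate low-res copies once, e.g. at upload time."""
    src = Image.open(path).convert("RGB")
    outputs = []
    for w in PRESET_WIDTHS:
        copy = src.copy()
        copy.thumbnail((w, 1 << 30))   # cap width, keep aspect ratio
        name = f"{path.rsplit('.', 1)[0]}_{w}.jpg"
        copy.save(name, format="JPEG", quality=80)
        outputs.append(name)
    return outputs
```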

Second, I would imagine that JPEG decoding always produces an image at the original dimensions, which is then dynamically resized to the viewport of the target display, particularly since many apps allow zooming in and out of the image on the fly. Specifically, I think decoding a JPEG yields a raw, lossless bitmap similar to BMP or somesuch (essentially just a 2D array of pixel colors), which is then handed to the OS’s rendering machinery and takes quite a chunk of memory. Of course, by now much of this is hardware-accelerated, with the common pipelines prepared to render raw pixels, JPEG, and a whole bunch of other formats.
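
That “chunk of memory” is easy to put a number on. A sketch with Pillow (the file name is made up): the decoded bitmap costs width × height × 3 bytes for RGB regardless of how small the JPEG file was, and any display-time scaling then operates on that bitmap, not on the JPEG stream:

```python
from PIL import Image

im = Image.open("photo.jpg")              # hypothetical file
rgb = im.convert("RGB")                   # full decode into a raw pixel buffer
raw_bytes = rgb.width * rgb.height * 3    # e.g. a 4000x3000 photo is ~36 MB raw
print(rgb.size, raw_bytes, "bytes uncompressed")

# Zooming / fitting to the viewport resamples the decoded bitmap.
preview = rgb.resize((rgb.width // 4, rgb.height // 4))
```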

It would be quite interesting if file decoding itself could just skip some of the rows or columns, but I don’t think that’s quite how compression works in current formats (at least in lossy ones, where later data depends on earlier data). Although, afaik JPEG encodes the image in small independent blocks (8x8 DCT blocks, grouped into 16x16 macroblocks when chroma subsampling is used), so it could be that whole chunks can be skipped, or at least decoded only partially.
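
As it happens, JPEG decoders can do something close to this: because each 8x8 block is stored as DCT coefficients, libjpeg can reconstruct blocks at 1/2, 1/4 or 1/8 of their size by using only the low-frequency coefficients, which genuinely is less work per block. Pillow exposes this via draft(); a sketch, with a made-up file name:

```python
from PIL import Image

im = Image.open("photo.jpg")              # hypothetical file
print("full size:", im.size)

# Request a reduced-scale decode before any pixel data is read. For JPEG this
# maps onto libjpeg's scaled inverse DCT (1/2, 1/4 or 1/8 of full size), so the
# decoder skips most of the per-block work instead of decoding and then shrinking.
im.draft("RGB", (im.width // 8, im.height // 8))
small = im.convert("RGB")                 # decodes at roughly 1/8 scale
print("decoded size:", small.size)
```

So the block structure is exactly what makes this kind of chunk-level skipping possible, even though you still have to download the whole file first.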