LCP and low-entropy images

3 min read

Since Chrome 112, Largest Contentful Paint (LCP) ignores low-entropy images: an image below roughly 0.05 bits of encoded image data per displayed pixel is no longer considered an LCP candidate.

So what is image entropy? It measures the amount of information or disorder in an image. Put differently: how much randomness is in a picture, based on how different its pixels are. Low entropy means the image has many repeating patterns, is predictable, and carries little visual complexity.

Calculating LCP for low-entropy images doesn’t make sense because they contain very little meaningful information.

If you’re after the scientific definition of entropy: H = -Σ (p_i * log2(p_i)), where H is the entropy, the sum Σ runs over all possible pixel values i, and p_i is the probability of a pixel in the image having the value i.
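As a sketch, here is that formula in JavaScript, applied to a plain array of pixel values. The `imageEntropy` helper is my own illustration, not part of the browser's heuristic:

```javascript
// Shannon entropy of an array of pixel values (e.g. 0–255 grayscale):
// H = -Σ p_i * log2(p_i), summed over each distinct value i.
function imageEntropy(pixels) {
  // Count how often each pixel value occurs.
  const counts = new Map();
  for (const value of pixels) {
    counts.set(value, (counts.get(value) ?? 0) + 1);
  }
  // Sum -p * log2(p) over the observed values.
  let entropy = 0;
  for (const count of counts.values()) {
    const p = count / pixels.length;
    entropy -= p * Math.log2(p);
  }
  return entropy;
}

imageEntropy([0, 0, 0, 0]); // a flat, single-color "image": 0 bits
imageEntropy([0, 255, 0, 255]); // two equally likely values: 1 bit
```

A solid-color placeholder scores 0 bits, while a noisy photo with many distinct values scores much higher, which is exactly the intuition behind ignoring low-entropy images for LCP.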

Can you determine whether an image has low entropy? Yes, at least the way Chrome approximates it: a bits per pixel (bpp) calculation, the encoded file size in bits divided by the number of displayed pixels. The following snippet works directly in the browser console:

console.table(
  [...document.images]
    // Skip images without a resolved source and inlined data: URIs.
    .filter(
      (img) => img.currentSrc !== '' && !img.currentSrc.includes('data:image')
    )
    .map((img) => [
      img.currentSrc,
      // Encoded bytes * 8 = bits, divided by the displayed pixel count.
      // encodedBodySize is 0 for cross-origin resources served without a
      // Timing-Allow-Origin header, and the entry may be missing entirely,
      // which the filter below accounts for.
      (performance.getEntriesByName(img.currentSrc)[0]?.encodedBodySize * 8) /
        (img.width * img.height),
    ])
    .filter(([, bpp]) => Number.isFinite(bpp) && bpp !== 0)
);

Kudos to Joan León’s webperf snippets.

Here’s what the output looks like when run against a page with several images:

[Screenshot: console.table output listing each image URL alongside its bits-per-pixel value]

Run this against any site and you’ll see results immediately. But if you try to drop this script into your application, you’ll get an empty list. Why?

document.images returns a collection of images in the current HTML document. If your images are loaded dynamically, that collection is empty when the script runs. You need to extend the approach with a MutationObserver:

function observeAndConvertImages(callback) {
  const trackedImages = new Set();

  const observer = new MutationObserver((mutationsList) => {
    for (const mutation of mutationsList) {
      if (mutation.type !== 'childList') continue;
      for (const addedNode of mutation.addedNodes) {
        if (!(addedNode instanceof Element)) continue;
        // Collect the node itself if it is an <img>, plus any <img>
        // descendants, regardless of the container they were inserted into.
        const images =
          addedNode instanceof HTMLImageElement
            ? [addedNode]
            : Array.from(addedNode.getElementsByTagName('img'));
        for (const image of images) {
          trackedImages.add(image);
          if (!image.complete) {
            image.addEventListener(
              'load',
              () => {
                if (allImagesLoaded()) {
                  callback(getAllImages());
                }
              },
              { once: true }
            );
          }
        }
        // Cached images may already be complete when they are added, in
        // which case no load event will fire, so check immediately too.
        if (images.length > 0 && allImagesLoaded()) {
          callback(getAllImages());
        }
      }
    }
  });

  observer.observe(document, { childList: true, subtree: true });

  function allImagesLoaded() {
    for (const image of trackedImages) {
      if (!image.complete) {
        return false;
      }
    }
    return true;
  }

  function getAllImages() {
    return [...trackedImages];
  }
}

observeAndConvertImages((allImages) => {
  console.table(
    allImages
      .filter(
        (img) => img.currentSrc !== '' && !img.currentSrc.includes('data:image')
      )
      .map((img) => [
        img.currentSrc,
        (performance.getEntriesByName(img.currentSrc)[0]?.encodedBodySize * 8) /
          (img.width * img.height),
      ])
      .filter(([, bpp]) => Number.isFinite(bpp) && bpp !== 0)
  );
});

This works both in the console and within your JS files, even with dynamically loaded images.

One more thing: loading images dynamically (via the Fetch API, for example) is a performance red flag, especially if that image ends up being the LCP element, because the browser's preload scanner can't discover it early.
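If the image URL is known at build time, a preload hint lets the browser start the download before any script runs. The `/hero.jpg` path here is a made-up example:

```html
<link rel="preload" as="image" href="/hero.jpg" fetchpriority="high">
```

That way even a dynamically rendered LCP image is fetched as early as a plain `<img>` in the initial HTML would be.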