Progressive JPEG: What It Is and When to Use It
A progressive JPEG loads as a full-size blurry placeholder that sharpens across several passes as more data arrives. A baseline JPEG loads top-to-bottom — a band of pixels paints downward until the full image appears. Same file format, same extension, entirely different loading behavior. The difference lives in a single flag in the JPEG bitstream.
For large images on slow connections, that distinction matters more than almost any other image optimization decision you can make.
Progressive vs Baseline JPEG
| Property | Progressive JPEG | Baseline JPEG |
|---|---|---|
| Loading behavior | Full-size blurry preview, sharpens in passes | Top-to-bottom band fills downward |
| File size (< 10KB) | Slightly larger (multi-scan overhead) | Smaller |
| File size (> 10KB) | 2–10% smaller on average | Larger |
| Decoding complexity | Higher (browser decodes multiple scans) | Lower |
| Perceived performance | Faster — content visible sooner | Slower — users see blank space |
| Browser support | All modern browsers, IE 6+ | Universal |
| LCP impact | Positive (image appears earlier) | Neutral |
| Server/CDN handling | No special treatment needed | No special treatment needed |
How Progressive JPEG Encoding Works
JPEG compression is based on the Discrete Cosine Transform (DCT). For a full walkthrough of the algorithm, see the image compression algorithm guide. The short version: every 8×8 pixel block is decomposed into frequency components — low frequencies represent broad areas of color and brightness, high frequencies represent sharp edges and fine detail.
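To make the DC/AC coefficient split concrete, here is a naive 2-D DCT-II over a single 8×8 block — pure stdlib, O(N⁴), illustration only (real encoders use fast DCTs and level-shift samples by 128 first; that shift is omitted here for clarity). A flat gray block puts all of its energy into the single DC coefficient, which is exactly the data a progressive decoder can use for its first blurry pass:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block (for illustration, not speed)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

flat = [[128] * 8 for _ in range(8)]  # a uniform gray block
coeffs = dct2(flat)
print(round(coeffs[0][0]))  # DC carries all the energy: 1024
print(round(coeffs[0][1]))  # every AC coefficient is 0
```

A block with edges or texture would instead spread energy into the AC coefficients — the data that progressive scans deliver later.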
In a baseline JPEG, those frequency components are encoded once, sequentially, block by block, from top to bottom. The decoder outputs pixels as each block arrives, which is why you see the image fill in as a vertical sweep.
Progressive JPEG reorganizes the same frequency data across multiple scans:
Spectral Selection
DCT produces 64 coefficients per 8×8 block (one DC coefficient and 63 AC coefficients). Spectral selection controls which coefficient indices go into each scan. A typical scan script puts the DC coefficient in scan 1, AC coefficients 1–5 in scan 2, and the remaining AC coefficients in scans 3+. The decoder can render a blurry but recognizable image from just the DC and first few AC coefficients — that is your first-pass preview.
Successive Approximation
The other progressive mode. Instead of splitting by coefficient index, successive approximation sends the high-order bits of all coefficients first, then progressively refines with lower-order bits. The first pass looks coarse — the reduced bit precision shows up as mild banding and blockiness — and each subsequent pass refines the image uniformly. Most encoders (mozjpeg in particular) mix spectral selection and successive approximation in their scan scripts for better compression.
Scan Scripts
The exact scan ordering is controlled by a scan script — a text file that tells the encoder which coefficients go in which scan and at what bit precision. mozjpeg ships with a heavily optimized default scan script that was tuned to minimize file size while maintaining the progressive loading effect. You can also write your own if you have exotic requirements, but the mozjpeg defaults are hard to beat.
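As an illustration, a minimal hand-written spectral-selection script in the cjpeg/jpegtran `-scans` syntax might look like the sketch below. Each line reads `component-list: Ss-Se, Ah, Al` — the spectral band followed by the successive-approximation bit range — and the JPEG spec requires that progressive AC scans cover only one component at a time:

```text
# hypothetical 7-scan script for a 3-component (YCbCr) image
0,1,2: 0-0, 0, 0;   # scan 1: DC coefficients, all components interleaved
0: 1-5, 0, 0;       # scans 2-4: low-frequency AC, one component per scan
1: 1-5, 0, 0;
2: 1-5, 0, 0;
0: 6-63, 0, 0;      # scans 5-7: remaining AC coefficients
1: 6-63, 0, 0;
2: 6-63, 0, 0;
```

You would pass this with `cjpeg -scans myscript.txt`; mozjpeg's built-in default is considerably more elaborate (it also uses the Ah/Al bit-range fields) and usually compresses better.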
What the Browser Sees
From the browser's perspective, each scan is a complete (if low-quality) rendering of the full image. As scans arrive, the browser repaints the image. This is why progressive JPEG looks like a blur-to-sharp transition, not a top-to-bottom sweep. The total data transferred is the same — it is just sequenced differently for faster visual feedback.
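You can simulate this behavior offline. The sketch below uses Pillow to encode a progressive JPEG in memory, then decodes only the first 15% of its bytes (the cutoff is an arbitrary choice for the demo) — the result is still a full-size image, just missing the later refinement scans:

```python
from io import BytesIO
from PIL import Image, ImageFile

# Build a progressive JPEG in memory from a synthetic gradient.
src = Image.new("RGB", (640, 480))
src.putdata([(x % 256, y % 256, 128) for y in range(480) for x in range(640)])
buf = BytesIO()
src.save(buf, format="JPEG", progressive=True, quality=80)
data = buf.getvalue()

# Decode only the first 15% of the file, as if the rest hadn't arrived yet.
ImageFile.LOAD_TRUNCATED_IMAGES = True  # tolerate the missing later scans
partial = Image.open(BytesIO(data[: len(data) * 15 // 100]))
partial.load()  # decodes whatever scans arrived: a full-size, low-detail frame
print(partial.size)  # full dimensions, even though most bytes are missing
```

Truncating a baseline JPEG the same way yields only the top band of the image — a direct way to see the difference in byte ordering.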
Baseline vs Progressive Loading Behavior
The difference is stark on a slow 3G connection with a 400KB hero image:
Baseline: The browser displays nothing for the top portion of the image slot, then paints a horizontal band that moves downward over 1–2 seconds. Users see an incomplete image that looks broken until it finishes.
Progressive: Within the first 5–10% of data received, the browser renders a full-size blurry version of the entire image. It sharpens steadily as the rest of the data arrives. Users see the composition, colors, and overall shape immediately — even if detail is low.
The psychological effect is significant. Research on perceived performance consistently shows that users tolerate longer waits when they receive early feedback. An image that is "loading but visible" feels faster than one that is "absent then suddenly there." For above-the-fold hero images, this is a Largest Contentful Paint (LCP) optimization — the element paints earlier, and the browser can report it sooner. See the image optimization for SEO guide for how LCP ties into search ranking.
File Size: Progressive Is Usually Smaller Above 10KB
Progressive JPEG adds scan overhead — the file header records multiple scan boundaries instead of one. For very small images (under 10KB), this overhead makes progressive files slightly larger. Above that threshold, the reorganization of DCT coefficients typically allows for better Huffman table optimization, and the multi-scan structure gives encoders like mozjpeg more opportunity to apply trellis quantization across scans.
Practical numbers from a typical web image set:
- 50KB thumbnail set: progressive ~3% larger
- 200KB product photos: progressive 2–6% smaller
- 800KB hero images: progressive 5–10% smaller
The crossover point varies by image content. High-frequency images (text, sharp edges, fine patterns) compress differently than low-frequency images (smooth gradients, sky, blurred backgrounds). If file size is your primary concern, measure on your actual image set rather than relying on averages.
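A quick way to run that measurement with Pillow — a sketch, where the synthetic gradient stands in for your real files:

```python
from io import BytesIO
from PIL import Image

def jpeg_sizes(img, quality=80):
    """Return (baseline_bytes, progressive_bytes) for one PIL image."""
    out = []
    for progressive in (False, True):
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=quality,
                 progressive=progressive, optimize=True)
        out.append(buf.tell())
    return tuple(out)

# Demo image; swap in Image.open(path) over your actual image set.
demo = Image.new("RGB", (640, 480))
demo.putdata([(x % 256, y % 256, (x + y) % 256)
              for y in range(480) for x in range(640)])
base, prog = jpeg_sizes(demo)
print(f"baseline: {base} B, progressive: {prog} B, "
      f"delta: {100 * (prog - base) / base:+.1f}%")
```

Run it over a representative sample and aggregate — the per-image deltas vary enough that a single test file tells you little.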
For a broader view of compression tradeoffs, the lossy vs lossless compression guide covers when each approach is appropriate.
When to Use Progressive JPEG
Use progressive for:
- Web images above 10KB. Perceived performance gain is real, file size is equal or better. The decision is easy.
- Above-the-fold hero images. The LCP benefit is most valuable for the largest image in the initial viewport.
- Image galleries and portfolios. Users browsing a grid see recognizable thumbnails faster before the full resolution loads.
- E-commerce product images. A blurry product photo while loading is better than an empty placeholder — customers know what they are looking at sooner.
- Any page where perceived load time affects conversion. If users bounce while waiting for images, progressive encoding is a free improvement.
Avoid progressive for:
- Thumbnails under 10KB. The scan overhead makes files larger, and the blur-to-sharp effect is too brief to matter at small sizes. Baseline is better.
- Images decoded on the server or in Node.js pipelines. Many server-side image libraries decode progressive JPEGs by buffering all scans before outputting pixels, eliminating the progressive benefit entirely. Check your specific library.
- Very high throughput server-side processing. Progressive decoding requires more CPU per image than baseline. At 10,000 decodes/second, that difference adds up. Profile before committing.
- Lossless archival workflows. If you are archiving master copies, use lossless formats like PNG or TIFF. Progressive JPEG is still lossy — the encoding mode does not change that.
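The guidance above collapses into a small decision rule. This is a sketch — `should_use_progressive` and the 10KB threshold are our own names and heuristics, not anything from a standard:

```python
THRESHOLD = 10 * 1024  # bytes; the approximate size crossover discussed above

def should_use_progressive(encoded_size: int, server_side_decode: bool = False) -> bool:
    """Heuristic: progressive for web images above ~10 KB, unless the image is
    decoded server-side (many server libraries buffer all scans anyway)."""
    if server_side_decode:
        return False
    return encoded_size > THRESHOLD

print(should_use_progressive(200 * 1024))  # True: typical product photo
print(should_use_progressive(6 * 1024))    # False: small thumbnail
```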
How to Create Progressive JPEGs
Five tools, all capable, each suited to different contexts.
jpegtran (libjpeg-turbo 3.0.3 — IJG/BSD/zlib License)
jpegtran performs lossless progressive conversion — it reorganizes the existing JPEG data without re-encoding. No quality loss whatsoever. The right tool when you already have compressed JPEGs and want to convert them in bulk.
# Install on Ubuntu/Debian
sudo apt-get install libjpeg-turbo-progs
# Install on macOS
brew install jpeg-turbo
# Convert a single file to progressive JPEG
jpegtran -optimize -progressive -copy none -outfile output.jpg input.jpg
# Batch convert all JPEGs in a directory (write to a temp file, then replace —
# safer than pointing -outfile at the input, which can lose data on failure)
for f in *.jpg *.jpeg; do
  [ -e "$f" ] || continue  # skip unmatched glob patterns
  jpegtran -optimize -progressive -copy none -outfile "$f.tmp" "$f" && mv "$f.tmp" "$f"
done
The -copy none flag strips all metadata (Exif, ICC profiles, comments). Use -copy all if you need to preserve metadata. The -optimize flag rebuilds Huffman tables for the new scan structure — always include it.
ImageMagick 7.1.1 (Apache 2.0 License)
ImageMagick's -interlace flag controls JPEG scan type. Plane produces progressive; None produces baseline. Useful when you are already using ImageMagick for resizing or format conversion.
# Convert to progressive JPEG at quality 80
magick input.jpg -interlace Plane -quality 80 output.jpg
# Batch convert and resize for web
magick mogrify -interlace Plane -quality 80 -resize '1920x1080>' *.jpg
Note: mogrify edits files in-place. Always test on a copy first.
mozjpeg 4.1.5 (BSD 3-Clause License)
mozjpeg is Mozilla's fork of libjpeg with an optimized encoder. It produces progressive JPEGs by default with a carefully tuned scan script. At the same quality setting, mozjpeg files are typically 5–15% smaller than standard libjpeg output — the combination of trellis quantization and the optimized scan script squeezes more out of the format than any other JPEG encoder.
# Install on macOS
brew install mozjpeg
# Compress and encode as progressive at quality 80
# (mozjpeg is keg-only in Homebrew; on Apple Silicon the prefix is /opt/homebrew/opt/mozjpeg)
/usr/local/opt/mozjpeg/bin/cjpeg -quality 80 -outfile output.jpg input.jpg
# Compress with explicit progressive flag (default in mozjpeg, explicit for clarity)
/usr/local/opt/mozjpeg/bin/cjpeg -quality 80 -progressive -outfile output.jpg input.jpg
mozjpeg's cjpeg reads raw PPM/BMP/TIFF input, not JPEG. If you are starting from a JPEG, pipe through djpeg first — note this decodes and re-encodes, so it is lossy; prefer jpegtran when you only need a lossless mode conversion:
djpeg input.jpg | cjpeg -quality 80 -progressive -outfile output.jpg
For Node.js integration, the @squoosh/lib package wraps mozjpeg and is usable in build pipelines. See the compress JPEG guide for a detailed comparison of encoder output quality.
Python Pillow 10.4 (HPND License)
Pillow sets the progressive parameter in the save() call. Practical for build scripts, data pipelines, or batch processing in Python environments.
from PIL import Image

def save_progressive_jpeg(input_path: str, output_path: str, quality: int = 80) -> None:
    """Convert an image to progressive JPEG."""
    with Image.open(input_path) as img:
        # Convert to RGB if needed (PNG with alpha, etc.)
        if img.mode in ("RGBA", "P"):
            img = img.convert("RGB")
        img.save(
            output_path,
            format="JPEG",
            progressive=True,
            quality=quality,
            optimize=True,  # rebuild Huffman tables
        )

# Batch convert a directory
from pathlib import Path

input_dir = Path("images/originals")
output_dir = Path("images/web")
output_dir.mkdir(parents=True, exist_ok=True)
for src in input_dir.glob("*.jpg"):
    dest = output_dir / src.name
    save_progressive_jpeg(str(src), str(dest), quality=80)
    print(f"{src.name}: {src.stat().st_size} → {dest.stat().st_size} bytes")
Install: pip install Pillow==10.4.0
Sharp 0.33 (Apache 2.0 License)
Sharp is the standard Node.js image processing library. It wraps libvips and exposes progressive JPEG encoding through the .jpeg() method. Fast enough for production server-side use.
import sharp from 'sharp'; // sharp@0.33.5

// Single file
await sharp('input.jpg')
  .jpeg({
    quality: 80,
    progressive: true,
    optimiseCoding: true, // optimize Huffman tables
    mozjpeg: true, // use mozjpeg encoder for better compression
  })
  .toFile('output.jpg');

// Batch conversion with progress reporting
import { mkdir, readdir } from 'fs/promises';
import path from 'path';

await mkdir('images/web', { recursive: true }); // ensure the output dir exists
const files = (await readdir('images')).filter(f => /\.(jpg|jpeg)$/i.test(f));
await Promise.all(
  files.map(async (file) => {
    const input = path.join('images', file);
    const output = path.join('images/web', file);
    const info = await sharp(input)
      .jpeg({ quality: 80, progressive: true, mozjpeg: true })
      .toFile(output);
    console.log(`${file}: ${info.size} bytes`);
  })
);
The mozjpeg: true flag switches Sharp's underlying encoder to mozjpeg, combining the convenience of the Sharp API with mozjpeg's superior compression. Requires Sharp 0.32+ and the mozjpeg encoder to be available in the libvips build (it is included in Sharp's prebuilt binaries on npm).
How to Detect If a JPEG Is Progressive
Using ImageMagick
# Returns 'Interlace: Plane' for progressive, 'Interlace: None' for baseline
magick identify -verbose image.jpg | grep Interlace
Using exiftool / file
# exiftool reports the mode under 'Encoding Process'
# (e.g. 'Progressive DCT, Huffman coding' vs 'Baseline DCT, Huffman coding')
exiftool image.jpg | grep 'Encoding Process'
# the file utility also distinguishes 'progressive' from 'baseline'
file image.jpg
Using the SOF Marker (Hex Dump)
The JPEG Start of Frame marker determines encoding type. In the binary stream:
- FF C0 — SOF0: Baseline DCT (non-progressive)
- FF C2 — SOF2: Progressive DCT
# Check the SOF marker directly (plain hex output avoids xxd's column grouping,
# which hides markers that straddle a byte-pair boundary)
# Caveat: an embedded Exif thumbnail's own SOF marker may match first
xxd -p image.jpg | tr -d '\n' | grep -o 'ffc[02]' | head -1
# Or with Python
python3 -c "
with open('image.jpg', 'rb') as f:
    data = f.read()
# Scan for SOF0 (baseline) or SOF2 (progressive)
i = 0
while i < len(data) - 1:
    if data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xC0:
            print('Baseline JPEG (SOF0)')
            break
        elif marker == 0xC2:
            print('Progressive JPEG (SOF2)')
            break
    i += 1
"
In Node.js
import { createReadStream } from 'fs';

async function isProgressiveJpeg(filePath) {
  const stream = createReadStream(filePath, { end: 65535 }); // read first 64KB
  const chunks = [];
  for await (const chunk of stream) chunks.push(chunk);
  const buf = Buffer.concat(chunks);
  for (let i = 0; i < buf.length - 1; i++) {
    if (buf[i] === 0xff) {
      if (buf[i + 1] === 0xc0) return false; // SOF0 = baseline
      if (buf[i + 1] === 0xc2) return true;  // SOF2 = progressive
    }
  }
  return null; // no SOF marker found -- not a valid baseline/progressive JPEG
}

console.log(await isProgressiveJpeg('image.jpg'));
Web Performance and Core Web Vitals
Progressive JPEG's relationship with Core Web Vitals is nuanced.
Largest Contentful Paint (LCP): The LCP metric records when the largest content element in the viewport becomes visible to the user. For a progressive JPEG that is the hero image, the browser can report an LCP event as soon as the first scan renders — even though the full image has not loaded. Browsers vary in when exactly they fire the LCP event for progressive images, but in practice, progressive encoding tends to move LCP earlier for large above-the-fold images.
Cumulative Layout Shift (CLS): No impact. CLS measures layout shifts, and a progressive JPEG that sharpens in place does not shift layout. As long as you set explicit width and height attributes on the <img> element (which you always should), there is no CLS risk from progressive encoding.
Interaction to Next Paint (INP): Indirect impact. If progressive decoding causes additional render work, it could theoretically delay interaction response times. In practice, this is negligible on modern hardware.
Practical LCP guidance:
<!-- Hero image — explicit dimensions prevent CLS, fetchpriority speeds LCP -->
<img
src="/hero.jpg"
width="1920"
height="1080"
alt="Descriptive alt text for your hero image"
fetchpriority="high"
loading="eager"
/>
For hero images, combine progressive encoding with fetchpriority="high" to ensure the browser prioritizes the fetch. Do not use loading="lazy" on above-the-fold images — it delays the fetch and hurts LCP regardless of encoding type. The optimize images for SEO guide covers the full set of image markup best practices that affect Core Web Vitals.
CDN and Server Considerations
Most CDNs and web servers treat progressive and baseline JPEGs identically — they are both image/jpeg content. No special configuration needed. A few specific cases to be aware of:
Image optimization CDNs (Cloudflare Images, Imgix, Cloudinary): These services re-encode images on delivery. Check whether their output defaults to progressive or baseline. Cloudflare Images outputs progressive by default. Imgix has an &fm=pjpg parameter for progressive output. If your CDN is re-encoding your carefully optimized progressive JPEGs as baseline on delivery, you lose the benefit — and potentially get worse compression than you started with.
HTTP/2 and HTTP/3: Early arguments against progressive JPEG cited multiplexing making it less necessary. The argument was that under HTTP/2, multiple small resources load simultaneously, so the single-connection waterfall that made progressive JPEG valuable was less of an issue. This is partially true but misses the point: progressive JPEG's benefit is about within-file byte ordering, not across-request parallelism. A large JPEG is a large single resource regardless of how many other resources load alongside it.
Server-side image processing (thumbnails, transforms): If you are generating images dynamically, make sure your image library emits progressive output. Sharp 0.33 with { progressive: true } does this. Check your library's defaults — some default to baseline even if the input was progressive.
To convert images at scale before uploading, Pixotter's converter tool handles JPEG format operations client-side with no server round-trip.
Progressive JPEG and the JPEG Standard
Progressive encoding is part of the original JPEG specification (ITU-T T.81, 1992). It is not an extension or a vendor feature — it is a first-class encoding mode alongside baseline sequential. The JPEG file format and history covers the standard in depth.
The relevant distinction within the spec:
- Baseline Sequential (SOF0): Each 8×8 block fully encoded before the next. Standard libjpeg behavior.
- Extended Sequential (SOF1): Like baseline but supports 12-bit samples. Rare.
- Progressive (SOF2): Multiple scans, spectral selection or successive approximation. What this article covers.
- Lossless (SOF3): Different algorithm entirely. Rarely used.
mozjpeg and libjpeg-turbo both implement SOF2 progressive encoding. If you are choosing between image formats for new projects, the best image format for web guide compares JPEG, WebP, AVIF, and PNG across file size, browser support, and use cases — JPEG progressive encoding keeps the format competitive for photographic content even in 2026.
Frequently Asked Questions
Does every browser support progressive JPEG?
Yes — every browser that displays JPEG can decode progressive JPEG, so the format is universally safe, and no polyfill or feature detection is needed. One historical nuance: very old browsers (IE 8 and earlier) decoded progressive JPEGs correctly but displayed them only once the full file had arrived, so those users saw no multi-pass preview. No modern browser has that limitation.
Will converting my existing JPEGs to progressive cause quality loss?
Only if you re-encode them. The jpegtran tool converts between baseline and progressive modes losslessly — it reorganizes the existing DCT coefficients without re-running the quantization step. No quality loss. If you use ImageMagick, mozjpeg, Pillow, or Sharp to convert, those re-encode the image and apply your specified quality setting — that is lossy. Use jpegtran for zero-loss conversion.
Should I use progressive JPEG or WebP?
Different question. WebP typically compresses 25–35% better than JPEG at equivalent quality and supports both lossy and lossless modes. If browser support and CDN compatibility are not concerns, WebP is usually the better choice for photographic web images. But JPEG with progressive encoding still has advantages in: broader toolchain support, better compatibility with very old browsers, and contexts where the ecosystem (e.g., e-commerce platforms) requires JPEG. See the best image format for web guide for a full comparison.
What is the difference between progressive JPEG and interlaced PNG?
Both produce a progressive loading effect, but they work differently and serve different use cases. Interlaced PNG uses the Adam7 algorithm: seven passes over a repeating 8×8 pixel grid, starting with 1 pixel in 64 and filling in progressively denser samples. Progressive JPEG uses DCT scan ordering. PNG interlacing actually makes files larger (unlike JPEG, where progressive often reduces size) because it disrupts the row-by-row filtering PNG compression relies on. Interlaced PNG is generally not recommended for web use — the file size penalty is not worth the loading effect. Use WebP or AVIF instead of interlaced PNG for photographic content.
Does progressive JPEG affect image quality?
The encoding mode has no effect on maximum quality. At the same quantization settings, a progressive JPEG and a baseline JPEG are visually identical when fully decoded. The difference is purely in transmission ordering, not in the final rendered image.
How many scan passes does a typical progressive JPEG have?
It depends on the scan script. A simple 3-scan progressive JPEG has: scan 1 (DC coefficients), scan 2 (low-frequency AC), scan 3 (high-frequency AC). mozjpeg's default scan script uses 8–12 scans, which produces better compression at the cost of slightly higher decoding complexity. The browser handles all of this automatically — you do not control how many passes the user sees in practice, since it depends on network speed and how quickly data arrives.
Is progressive JPEG still worth it when using HTTP/2?
Yes, but the reasoning shifts. HTTP/2 multiplexing eliminates the head-of-line blocking that made progressive especially valuable over HTTP/1.1. However, for large images (100KB+), the within-file byte ordering still matters — the first scan of a progressive JPEG arrives earlier relative to the full file size regardless of how many other resources are downloading in parallel. The file size benefit (2–10% for images above 10KB) is a separate, HTTP-version-agnostic win.
Can I set the number of progressive scans?
With jpegtran and cjpeg, you can supply a custom scan script file via the -scans flag. mozjpeg's default scan script is publicly available and well-documented if you want to study or modify it. For most use cases, the default scan scripts in mozjpeg or Sharp produce near-optimal results without customization.