COMPUTING

Your GPU Can Now Read How Complex an Image Is — In Milliseconds

Scientists in Spain used everyday file compression to measure visual complexity on a regular laptop GPU — faster and simpler than the 40-year-old technique the field still relies on.

Fig. 1 — Fractal geometry: a branching pattern that repeats at every scale of zoom
Fractals show the same intricate detail whether you zoom in or out. Measuring how complex that detail is has always been slow and error-prone — until now.

In This Article

  1. The Old Method and Its Hidden Flaw
  2. Using Compression as a Measuring Tool
  3. How Does Compression Actually Measure Complexity?
  4. Why This Matters Beyond the Lab
  5. What It Still Can't Do

Think about a doctor looking at a lung scan, trying to spot early signs of disease. The patterns — the branching, the rough edges, the textures — carry real information. Turning those patterns into a clean, reliable number has always been slow, fussy work. Researchers at Universidad de Valladolid in Spain just changed that. Their study in Fractal and Fractional shows a GPU-powered method that measures image complexity quickly and accurately — and skips a preprocessing step that scientists always assumed was unavoidable.

The Old Method and Its Hidden Flaw

The standard way to measure visual complexity is called fractal dimension — a number describing how much detail an image holds at different zoom levels. A smooth circle scores around 1. A jagged coastline scores around 1.3. Healthy and cancerous tissue score differently, which is why this matters in medicine and materials science.

The go-to technique for calculating it, called box-counting, has been around since the 1980s. Before it can run, though, the image must be converted to pure black and white — every pixel becomes either on or off. That step, called binarization, loses information and forces a judgment call: which brightness threshold separates signal from background? Different researchers make that call differently. And on high-resolution images, running the multi-scale grid operations is genuinely slow.
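For intuition, box-counting fits in a few lines of NumPy. This is an illustrative sketch, not the paper's code; the threshold and box sizes here are arbitrary choices:

```python
import numpy as np

def box_count_dimension(img, threshold=0.5, sizes=(2, 4, 8, 16, 32)):
    """Classic box-counting: binarize, then count occupied grid boxes
    at several box sizes and fit a slope on log-log axes."""
    binary = img > threshold                     # the lossy binarization step
    counts = []
    for s in sizes:
        # Trim so the grid divides evenly, then flag boxes holding any "on" pixel
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        boxes = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    # Slope of log(count) vs log(1/size) is the box-counting dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# A filled square is a surface, so it should score 2
print(round(box_count_dimension(np.ones((256, 256))), 2))   # 2.0
```

Note the `threshold` argument: change it and the score can change too, which is exactly the judgment call the compression approach removes.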

What Is Fractal Dimension? A single number describing how much detail a pattern contains across zoom levels. Smooth lines score near 1, filled surfaces score near 2, and most natural textures — bark, tissue, terrain — land somewhere in between. The higher the number, the more structurally complex the image.


Fig. 2 — Inside a GPU: thousands of cores all working at the same time


Modern graphics cards are built for exactly this kind of parallelism — running the same operation on many chunks of data simultaneously. The team pushed image compression entirely onto the GPU, so images never leave its memory during processing.

Illustration: NavsoraTimes

Using Compression as a Measuring Tool

The Valladolid team built on an idea from a 2016 paper by Pedro Chamorro-Posada — one of the same researchers. The insight is counterintuitive but elegant: you can use a file compression algorithm as a ruler.

Here's the logic. Compression finds patterns and eliminates redundancy. A simple, repetitive image compresses down to almost nothing. A complex, chaotic one barely shrinks. So: take an image, scale it to nine different sizes, compress each version, and watch how the file size changes. The slope of that curve is the fractal dimension. No binarization. No grid overlays. No judgment calls — just compress and measure.
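That recipe can be sketched directly. In this sketch, zlib stands in for the paper's GPU compressor and the scale ladder is my own choice, so treat the numbers as indicative only:

```python
import zlib
import numpy as np

def compression_dimension(img, scales=(64, 91, 128, 181, 256, 362, 512, 724, 1024)):
    """Compression-as-ruler sketch: rescale the image to several sizes,
    compress each, and fit the slope of compressed size vs. scale on
    log-log axes. zlib stands in for the paper's GPU compressor, and
    the exact scale ladder here is an assumption."""
    sizes = []
    h, w = img.shape
    for s in scales:
        # Nearest-neighbour resample to an s x s grid
        rows = np.arange(s) * h // s
        cols = np.arange(s) * w // s
        resized = img[np.ix_(rows, cols)].astype(np.uint8)
        sizes.append(len(zlib.compress(resized.tobytes(), 9)))
    slope, _ = np.polyfit(np.log(scales), np.log(sizes), 1)
    return slope

# Pure noise is incompressible, so compressed size grows like area (~ scale^2)
rng = np.random.default_rng(0)
noise = rng.integers(0, 256, (512, 512), dtype=np.uint8)
print(round(compression_dimension(noise, scales=(64, 91, 128, 181, 256, 362, 512)), 1))   # 2.0
```

Noise fills the plane with detail at every scale, so it scores like a surface; a smooth image compresses ever harder as it grows and scores lower.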

  • 96 — Test images across 6 resolutions
  • <10% — Error vs. known exact values
  • <50 ms — Processing time at 1,000 px

How Does Compression Actually Measure Complexity?


Fig. 3 — The core idea: plot compressed sizes against scale, fit a line, read the slope

At each scale, the algorithm compresses the image and records the file size. Those nine points go onto a log-log graph. The slope of the best-fit line is the fractal dimension — no ambiguous threshold decisions anywhere in the process.

Chart: NavsoraTimes, based on Díaz-Herrezuelo & Chamorro-Posada (2026)

The team used NVIDIA's nvCOMP library for on-device GPU compression — images stay in the GPU's memory the entire time. They also found that compressing each scaled image twice, rather than once, improved consistency at high resolutions with only a modest speed penalty.
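The two-stage trick is easy to express. Here is a minimal stand-in using zlib; the paper's pipeline runs NVIDIA's nvCOMP on device memory, not zlib on the CPU:

```python
import zlib

def compressed_size(data: bytes, stages: int = 2) -> int:
    """Byte count after `stages` sequential compression passes.
    zlib is a stand-in for the paper's on-GPU compressor."""
    for _ in range(stages):
        data = zlib.compress(data, 9)
    return len(data)

# Compare one pass against two on some highly regular data
payload = bytes(range(256)) * 1000
print(len(payload), compressed_size(payload, 1), compressed_size(payload, 2))
```

The second pass squeezes residual structure out of the first pass's output; the team's finding was that this steadies the size-vs-scale curve at high resolutions.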


Tested against Julia sets — mathematical fractals with known, exact complexity values — the error stayed below 10% across nearly all 96 test images and all six resolutions. That's on par with traditional box-counting, which had the built-in advantage of being designed specifically for these kinds of patterns.
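Julia sets work as ground truth because their complexity is known analytically, and one can be rendered in a few lines. The constant `c` below (the Douady rabbit) is an illustrative choice, not necessarily one of the paper's 96 test images:

```python
import numpy as np

def julia_image(c=-0.123 + 0.745j, size=512, iters=100):
    """Binary image of a filled Julia set: pixels whose orbit under
    z -> z**2 + c stays bounded are set to 255, escapees to 0."""
    ax = np.linspace(-1.6, 1.6, size)
    z = ax[None, :] + 1j * ax[:, None]          # grid of complex starting points
    alive = np.ones(z.shape, dtype=bool)        # not yet escaped
    for _ in range(iters):
        z[alive] = z[alive] ** 2 + c
        alive &= np.abs(z) < 2.0                # once |z| >= 2, escape is certain
    return np.where(alive, 255, 0).astype(np.uint8)

img = julia_image()
print(img.min(), img.max())   # 0 255: both escaped and bounded regions present
```

Feeding images like this into an estimator gives an immediate accuracy check, since the true dimension is known in advance.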

"The GPU implementation with two compression stages offers a balanced trade-off between accuracy, stability, and execution time, especially at higher resolutions."

— Díaz-Herrezuelo & Chamorro-Posada · Universidad de Valladolid · Fractal and Fractional, 2026

Why This Matters Beyond the Lab


Fig. 4 — How fractal dimension separates healthy from abnormal tissue

Different tissue types produce different fractal dimension scores. A pipeline that works directly on raw grayscale scans — no preprocessing choices — removes one more source of human-introduced error from clinical analysis.

Illustration: NavsoraTimes

Two things stand out about the practical value. First, the speed: at 1,000 × 1,000 pixels, the pipeline processes an image in under 50 milliseconds — fast enough for video analysis, satellite imaging, or automated screening. Second, it works directly on grayscale images with no preprocessing. No researcher has to decide which gray pixels count as "structure." That's a judgment call the old method has always required, and removing it removes a hidden source of variability.

The team also tested the pipeline on a consumer laptop GPU, an NVIDIA GeForce RTX 4050. It worked: accuracy held up, with only minor memory-management tweaks needed. That means a hospital, a field research station, or a university lab could run this on hardware they already own, not a supercomputer.

  • 30× — Repeated runs per test image
  • 16 — Distinct fractal shapes tested
  • 13 — Real-world textures validated

Tested on Real Images Too

Beyond mathematically clean fractals, the team ran the pipeline on 13 textures from the classic Brodatz photography dataset — bark, wool, brick, sand. Scores stayed consistent across different sections of the same texture, and different textures scored distinctly. Exactly what you'd need for a practical classification tool.

What It Still Can't Do

The method returns one number per image — a single global complexity score. That's useful for many tasks, but some structures don't cooperate. A brain scan might be highly complex in one region and smooth in another. The current approach treats the whole image as one thing and misses local variation. Extending it to capture that kind of spatial unevenness is the most important open problem the team identifies.

Blur is the other weak spot. When the team deliberately smoothed their test images, error climbed predictably — slowly at first, then sharply. At a Gaussian blur of sigma = 5, error approached 40%. The lesson is straightforward: this pipeline needs sharp source images. Degraded or heavily processed input will produce unreliable results.
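The failure mode is easy to reproduce: smoothing strips exactly the fine detail the compressor reads as complexity. Below is a quick demonstration with zlib and a crude repeated-average blur, both stand-ins for the paper's Gaussian test:

```python
import zlib
import numpy as np

def smooth(img, passes=5):
    """Crude blur: repeatedly average each pixel with its 4 neighbours
    (wrap-around edges). A stand-in for a proper Gaussian filter."""
    out = img.astype(np.float64)
    for _ in range(passes):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out.astype(np.uint8)

rng = np.random.default_rng(0)
noise = rng.integers(0, 256, (256, 256), dtype=np.uint8)
sharp_size = len(zlib.compress(noise.tobytes(), 9))
blurred_size = len(zlib.compress(smooth(noise).tobytes(), 9))

# Blur makes the image more compressible, so a compression-based
# estimator reads it as less complex than it really was
print(blurred_size < sharp_size)   # True
```

The compressed size shrinks as blur increases, dragging the fitted slope down with it, which is why the estimated dimension drifts away from the true value.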

  • No binarization required — Works directly on grayscale images, removing decades of hidden subjectivity from fractal analysis.
  • Runs on a laptop GPU — A consumer NVIDIA GeForce RTX 4050 matched a data-centre H100, making this practical without specialized infrastructure.
  • Blur degrades accuracy — Heavy smoothing pushes error toward 40%; best results come from sharp, high-quality source images.

"This approach establishes a reproducible evaluation framework that supports the practical deployment of compression-based fractal dimension estimation in large-scale and time-constrained image analysis systems." — Díaz-Herrezuelo & Chamorro-Posada, Fractal and Fractional, 2026.


📄 Source & Citation

Primary Source: Díaz-Herrezuelo Á, Chamorro-Posada P. (2026). GPU-Accelerated Fractal Compression Dimension Estimation. Fractal and Fractional, 10(3), 174. https://doi.org/10.3390/fractalfract10030174

Authors & Affiliations: Ángel Díaz-Herrezuelo and Pedro Chamorro-Posada, Universidad de Valladolid, Spain (LaDIS — Laboratory for Disruptive Interdisciplinary Science)

Data & Code: Julia set dataset openly available via UVaDOC Repository — uvadoc.uva.es/handle/10324/81641

Key Themes: Fractal Dimension · GPU Computing · Image Compression · CUDA · Medical Imaging

Supporting References:

[1] Chamorro-Posada P. (2016). A simple method for estimating fractal dimension from digital images: the compression dimension. Chaos, Solitons & Fractals, 91:562–572.

[2] Lopes R, Betrouni N. (2009). Fractal and multifractal analysis: a review. Medical Image Analysis, 13:634–649.

[3] Ruiz de Miras J et al. (2023). Fast computation of fractal dimension for 2D, 3D and 4D data. Journal of Computational Science, 66:101908.

Written by
Sanjay Verma
Founder & Editor-in-Chief · Science Journalist · 8+ Years Covering Research

Sanjay Verma is the founder and editor-in-chief of NavsoraTimes. He reports on peer-reviewed research across molecular biology, AI, space science and medicine — translating complex findings into clear, accurate language for a general audience.
