In This Article
- The Old Method and Its Hidden Flaw
- Using Compression as a Measuring Tool
- How Does Compression Actually Measure Complexity?
- Why This Matters Beyond the Lab
- What It Still Can't Do
Think about a doctor looking at a lung scan, trying to spot early signs of disease. The patterns — the branching, the rough edges, the textures — carry real information. Turning those patterns into a clean, reliable number has always been slow, fussy work. Researchers at Universidad de Valladolid in Spain just changed that. Their study in Fractal and Fractional shows a GPU-powered method that measures image complexity quickly and accurately — and skips a preprocessing step that scientists always assumed was unavoidable.
The Old Method and Its Hidden Flaw
The standard way to measure visual complexity is called fractal dimension — a number describing how much detail an image holds at different zoom levels. A smooth circle scores around 1. A jagged coastline scores around 1.3. Healthy and cancerous tissue score differently, which is why this matters in medicine and materials science.
The go-to technique for calculating it, called box-counting, has been around since the 1980s. Before it can run, though, the image must be converted to pure black and white — every pixel becomes either on or off. That step, called binarization, loses information and forces a judgment call: which brightness threshold separates signal from background? Different researchers make that call differently. And on high-resolution images, running the multi-scale grid operations is genuinely slow.
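For readers who want to see the classic approach concretely, here is a minimal Python sketch of box-counting on an already-binarized image. The function name, the choice of scales, and the filled-square check are illustrative; this is not the pipeline from the paper:

```python
import math

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Classic box counting on a binary mask (list of lists of 0/1):
    count occupied s-by-s boxes at each scale, then fit
    log(count) against log(1/s) by least squares."""
    n = len(mask)
    xs, ys = [], []
    for s in sizes:
        boxes = 0
        for by in range(0, n, s):
            for bx in range(0, n, s):
                # a box is "occupied" if any pixel inside it is on
                if any(mask[y][x]
                       for y in range(by, min(by + s, n))
                       for x in range(bx, min(bx + s, n))):
                    boxes += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(boxes))
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Sanity check: a filled square should come out at dimension 2
square = [[1] * 64 for _ in range(64)]
print(round(box_count_dimension(square), 2))  # 2.0
```

Note that the function only accepts a 0/1 mask: the binarization decision has already been made before this code ever runs, which is exactly the hidden judgment call the article describes.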
Fig. 2 — Inside a GPU: thousands of cores all working at the same time
Modern graphics cards are built for exactly this kind of parallelism — running the same operation on many chunks of data simultaneously. The team pushed image compression entirely onto the GPU, so images never leave its memory during processing.
Illustration: NavsoraTimes
Using Compression as a Measuring Tool
The Valladolid team built on an idea from a 2016 paper by Pedro Chamorro-Posada — one of the same researchers. The insight is counterintuitive but elegant: you can use a file compression algorithm as a ruler.
Here's the logic. Compression finds patterns and eliminates redundancy. A simple, repetitive image compresses down to almost nothing. A complex, chaotic one barely shrinks. So: take an image, scale it to nine different sizes, compress each version, and watch how the file size changes. The slope of that curve is the fractal dimension. No binarization. No grid overlays. No judgment calls — just compress and measure.
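A toy version of that logic fits in a few lines of Python. This is a sketch of the general idea only: zlib stands in for the paper's GPU codecs, and the synthetic test image, nearest-neighbour downscaling, and choice of scales are this sketch's assumptions, not the authors':

```python
import math
import random
import zlib

def make_image(n=256, seed=0):
    """Synthetic grayscale image: a gradient with per-pixel noise."""
    rng = random.Random(seed)
    return [[((x + y) % 256) ^ rng.randrange(32) for x in range(n)]
            for y in range(n)]

def downscale(img, factor):
    """Nearest-neighbour downscaling by an integer factor."""
    return [row[::factor] for row in img[::factor]]

def compressed_size(img):
    """Serialize pixels to bytes and return the zlib-compressed length."""
    raw = bytes(p for row in img for p in row)
    return len(zlib.compress(raw, 9))

def compression_dimension(img, factors=(1, 2, 4, 8)):
    """Slope of log(compressed size) vs log(side length) across scales."""
    xs, ys = [], []
    for f in factors:
        small = downscale(img, f)
        xs.append(math.log(len(small)))
        ys.append(math.log(compressed_size(small)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

print(round(compression_dimension(make_image()), 2))
```

Notice what is absent: no threshold, no binarization, no grid bookkeeping. The only inputs are the image and a set of scales.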
How Does Compression Actually Measure Complexity?
Fig. 3 — The core idea: plot compressed sizes against scale, fit a line, read the slope
At each scale, the algorithm compresses the image and records the file size. Those nine points go onto a log-log graph. The slope of the best-fit line is the fractal dimension — no ambiguous threshold decisions anywhere in the process.
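The fitting step itself is ordinary least squares on those log-log points. In the idealized case below, where the compressed size grows exactly as a power of the scale (the constant 40 and exponent 1.3 are invented for illustration), the fit recovers the exponent exactly:

```python
import math

def fit_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Nine scales, as in the paper's pipeline; pretend the compressed
# size grows as 40 * s**1.3 (both numbers invented for this demo)
scales = [2 ** k for k in range(9)]        # 1, 2, 4, ..., 256
sizes = [40 * s ** 1.3 for s in scales]    # idealized, noise-free

slope = fit_slope([math.log(s) for s in scales],
                  [math.log(z) for z in sizes])
print(round(slope, 3))  # 1.3
```

Real compressed sizes are noisier than this, which is why the method fits a line through nine points rather than trusting any single pair of scales.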
Chart: NavsoraTimes, based on Díaz-Herrezuelo & Chamorro-Posada (2026)
The team used NVIDIA's nvCOMP library for on-device GPU compression — images stay in the GPU's memory the entire time. They also found that compressing each scaled image twice, rather than once, improved consistency at high resolutions with only a modest speed penalty.
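One plausible reading of "compressing twice" is feeding the compressor's output through the compressor a second time. The sketch below shows that reading only, with zlib standing in for nvCOMP's GPU codecs; it is an assumption about the scheme, not a description of the paper's pipeline:

```python
import zlib

def two_stage_size(raw: bytes) -> int:
    """Compress, then compress the result again; return the final length.
    zlib is a CPU stand-in for a GPU codec here."""
    return len(zlib.compress(zlib.compress(raw, 9), 9))

data = bytes(range(256)) * 64  # 16 KiB of highly regular data
print(len(data), len(zlib.compress(data, 9)), two_stage_size(data))
```

For already-compressed data the second pass typically changes the size only slightly, which is consistent with the authors reporting a modest, not dramatic, speed penalty.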
Tested against Julia sets — mathematical fractals with known, exact complexity values — the error stayed below 10% across nearly all 96 test images and all six resolutions. That's on par with traditional box-counting, which had the built-in advantage of being designed specifically for these kinds of patterns.
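Part of what makes Julia sets good benchmarks is that they are trivial to generate. Below is a minimal escape-time renderer; the parameter c, the grid, and the iteration cap are illustrative choices, not the paper's 96-image dataset:

```python
def julia_image(n=128, c=complex(-0.8, 0.156), max_iter=64):
    """Escape-time rendering of the Julia set for z -> z**2 + c
    on the square [-1.5, 1.5]^2; returns per-pixel iteration counts."""
    img = []
    for yi in range(n):
        row = []
        for xi in range(n):
            z = complex(-1.5 + 3.0 * xi / (n - 1),
                        -1.5 + 3.0 * yi / (n - 1))
            k = 0
            while abs(z) <= 2.0 and k < max_iter:
                z = z * z + c
                k += 1
            row.append(k)  # max_iter means "never escaped"
        img.append(row)
    return img

img = julia_image()
print(sum(row.count(64) for row in img))  # pixels that never escaped
```

Because the true fractal dimension of such sets is known analytically or to high numerical precision, any estimator can be scored against ground truth rather than against another estimator.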
"The GPU implementation with two compression stages offers a balanced trade-off between accuracy, stability, and execution time, especially at higher resolutions."
— Díaz-Herrezuelo & Chamorro-Posada · Universidad de Valladolid · Fractal and Fractional, 2026
Why This Matters Beyond the Lab
Fig. 4 — How fractal dimension separates healthy from abnormal tissue
Different tissue types produce different fractal dimension scores. A pipeline that works directly on raw grayscale scans — no preprocessing choices — removes one more source of human-introduced error from clinical analysis.
Illustration: NavsoraTimes
Two things stand out about the practical value. First, the speed: at 1,000 × 1,000 pixels, the pipeline processes an image in under 50 milliseconds — fast enough for video analysis, satellite imaging, or automated screening. Second, it works directly on grayscale images with no preprocessing. No researcher has to decide which gray pixels count as "structure." That's a judgment call the old method has always required, and eliminating it closes off a hidden source of variability.
The team also tested the pipeline on a consumer laptop GPU, an NVIDIA RTX 4050. It worked: accuracy held up, with only minor memory-management tweaks needed. That means a hospital, a field research station, or a university lab could run this on hardware it already owns, not a supercomputer.
What It Still Can't Do
The method returns one number per image — a single global complexity score. That's useful for many tasks, but some structures don't cooperate. A brain scan might be highly complex in one region and smooth in another. The current approach treats the whole image as one thing and misses local variation. Extending it to capture that kind of spatial unevenness is the most important open problem the team identifies.
Blur is the other weak spot. When the team deliberately smoothed their test images, error climbed predictably — slowly at first, then sharply. At a Gaussian blur of sigma = 5, error approached 40%. The lesson is straightforward: this pipeline needs sharp source images. Degraded or heavily processed input will produce unreliable results.
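Why does blur hurt a compression-based measure? Smoothing makes neighbouring pixels more alike, so the data grows more redundant and compressed sizes shrink, dragging the fitted slope away from the true value. Here is a one-dimensional illustration of that effect, with a simple moving-average blur standing in for the Gaussian blur used in the paper's tests:

```python
import random
import zlib

def box_blur(vals, radius):
    """Simple 1-D moving-average blur (a crude stand-in for a Gaussian)."""
    n = len(vals)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(vals[lo:hi]) // (hi - lo))
    return out

rng = random.Random(1)
signal = [rng.randrange(256) for _ in range(4096)]

# Compressed size shrinks as blur radius grows
for r in (0, 2, 8):
    print(r, len(zlib.compress(bytes(box_blur(signal, r)), 9)))
```

The blurred signal carries less information per byte, so the compressor's "ruler" reads short: the input looks simpler than the underlying structure really was.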
- No binarization required — Works directly on grayscale images, removing decades of hidden subjectivity from fractal analysis.
- Runs on a laptop GPU — A consumer NVIDIA RTX 4050 matched a data-centre H100, making this practical without specialized infrastructure.
- Blur degrades accuracy — Heavy smoothing pushes error toward 40%; best results come from sharp, high-quality source images.
"This approach establishes a reproducible evaluation framework that supports the practical deployment of compression-based fractal dimension estimation in large-scale and time-constrained image analysis systems." — Díaz-Herrezuelo & Chamorro-Posada, Fractal and Fractional, 2026.
📄 Source & Citation
Primary Source: Díaz-Herrezuelo Á, Chamorro-Posada P. (2026). GPU-Accelerated Fractal Compression Dimension Estimation. Fractal and Fractional, 10(3), 174. https://doi.org/10.3390/fractalfract10030174
Authors & Affiliations: Ángel Díaz-Herrezuelo and Pedro Chamorro-Posada, Universidad de Valladolid, Spain (LaDIS — Laboratory for Disruptive Interdisciplinary Science)
Data & Code: Julia set dataset openly available via UVaDOC Repository — uvadoc.uva.es/handle/10324/81641
Key Themes: Fractal Dimension · GPU Computing · Image Compression · CUDA · Medical Imaging
Supporting References:
[1] Chamorro-Posada P. (2016). A simple method for estimating fractal dimension from digital images: the compression dimension. Chaos, Solitons & Fractals, 91:562–572.
[2] Lopes R, Betrouni N. (2009). Fractal and multifractal analysis: a review. Medical Image Analysis, 13:634–649.
[3] Ruiz de Miras J et al. (2023). Fast computation of fractal dimension for 2D, 3D and 4D data. Journal of Computational Science, 66:101908.