🔗 Floyd-Steinberg Dithering Algorithm


Floyd–Steinberg dithering is an image dithering algorithm first published in 1976 by Robert W. Floyd and Louis Steinberg. It is commonly used by image manipulation software, for example when an image is converted into the GIF format, which is restricted to a maximum of 256 colors.

The algorithm achieves dithering using error diffusion, meaning it pushes (adds) the residual quantization error of a pixel onto its neighboring pixels, to be dealt with later. It spreads the debt out according to the distribution (shown as a map of the neighboring pixels):

\begin{bmatrix}
       &              & *            & \frac{7}{16} & \ldots \\
\ldots & \frac{3}{16} & \frac{5}{16} & \frac{1}{16} & \ldots
\end{bmatrix}

The pixel marked with a star (*) is the pixel currently being scanned, and the blank entries are the previously scanned pixels. The algorithm scans the image from left to right, top to bottom, quantizing pixel values one by one. At each step the quantization error is transferred to the neighboring pixels that have not yet been quantized, while the already-quantized pixels are left untouched. Hence, if a number of pixels have been rounded downwards, it becomes more likely that the next pixel is rounded upwards, so that on average the quantization error is close to zero.
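The distribution above can also be written down directly as code. The following Python sketch encodes it as (dx, dy, weight) offsets relative to the current pixel; the name FS_KERNEL and the tuple layout are illustrative choices, not part of the original description:

# The Floyd-Steinberg error-distribution map, as (dx, dy, weight) offsets
# relative to the pixel (*) currently being scanned.
FS_KERNEL = [
    (+1, 0, 7 / 16),  # pixel to the right
    (-1, 1, 3 / 16),  # pixel below, one step to the left
    ( 0, 1, 5 / 16),  # pixel directly below
    (+1, 1, 1 / 16),  # pixel below, one step to the right
]
# The weights sum to 1 (all are exact binary fractions), so no error is lost or invented.
assert sum(w for _, _, w in FS_KERNEL) == 1.0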

The diffusion coefficients have the property that if the original pixel values are exactly halfway between the nearest available colors, the dithered result is a checkerboard pattern. For example, 50% grey data could be dithered as a black-and-white checkerboard pattern. For optimal dithering, the quantization errors should be tracked with sufficient accuracy to prevent rounding errors from affecting the result.

In some implementations, the horizontal direction of scan alternates between lines; this is called "serpentine scanning" or boustrophedon transform dithering.
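As a sketch of the serpentine variant (an illustration under stated assumptions, not a canonical implementation), the following Python function scans odd rows right to left and mirrors the horizontal offsets of the distribution accordingly. It assumes a greyscale image stored as a list of rows of floats in [0, 1] and a two-color palette {0, 1}:

def fs_dither_serpentine(img):
    # Dither in place with serpentine scanning; img is a list of rows of
    # floats in [0, 1], quantized to the two-color palette {0.0, 1.0}.
    # Error diffused past the image edges is simply dropped.
    height, width = len(img), len(img[0])
    for y in range(height):
        reverse = (y % 2 == 1)                 # odd rows are scanned right to left
        xs = range(width - 1, -1, -1) if reverse else range(width)
        step = -1 if reverse else 1            # mirrors the horizontal kernel offsets
        for x in xs:
            old = img[y][x]
            new = 1.0 if old >= 0.5 else 0.0   # nearest of the two palette values
            img[y][x] = new
            err = old - new
            for dx, dy, w in ((step, 0, 7/16), (-step, 1, 3/16), (0, 1, 5/16), (step, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    img[ny][nx] += err * w
    return img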

The following pseudocode implements the algorithm described above. It works for any approximately linear encoding of pixel values, such as 8-bit integers, 16-bit integers, or real numbers in the range [0,1]. For clarity, bounds checks at the image edges (where some of the neighboring pixels do not exist) are omitted.

for each y from top to bottom do
    for each x from left to right do
        oldpixel := pixel[x][y]
        newpixel := find_closest_palette_color(oldpixel)
        pixel[x][y] := newpixel
        quant_error := oldpixel - newpixel
        pixel[x + 1][y    ] := pixel[x + 1][y    ] + quant_error × 7 / 16
        pixel[x - 1][y + 1] := pixel[x - 1][y + 1] + quant_error × 3 / 16
        pixel[x    ][y + 1] := pixel[x    ][y + 1] + quant_error × 5 / 16
        pixel[x + 1][y + 1] := pixel[x + 1][y + 1] + quant_error × 1 / 16
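The pseudocode leaves the edge handling open; the sketch below is one possible way to fill that in, using Python with NumPy (an assumption, not something the article prescribes). Neighbors outside the image are skipped, so their share of the error is discarded, and the error is carried in floating point:

import numpy as np

def floyd_steinberg(image, find_closest_palette_color):
    # Dither a 2-D float array in place, scanning left to right, top to bottom.
    # find_closest_palette_color maps one value to the nearest palette value.
    height, width = image.shape
    for y in range(height):
        for x in range(width):
            oldpixel = image[y, x]
            newpixel = find_closest_palette_color(oldpixel)
            image[y, x] = newpixel
            quant_error = oldpixel - newpixel
            # Distribute the error with the 7/16, 3/16, 5/16, 1/16 weights,
            # skipping neighbors that fall outside the image.
            for dx, dy, weight in ((1, 0, 7/16), (-1, 1, 3/16), (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    image[ny, nx] += quant_error * weight
    return image

# Dithering a uniform 50% grey image against a black-and-white palette
# should reproduce the checkerboard pattern described above.
grey = np.full((8, 8), 0.5)
print(floyd_steinberg(grey, lambda v: 1.0 if v >= 0.5 else 0.0))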

When converting 16-bit greyscale to 8-bit, find_closest_palette_color() may perform just a simple rounding, for example:

find_closest_palette_color(oldpixel) = round(oldpixel / 256)

The pseudocode can result in pixel values exceeding the valid values (such as greater than 1 in a [0,1] representation). Such values should ideally be clipped by the find_closest_palette_color() function, rather than clipping the intermediate values, since a subsequent error may bring the value back into range. However, if fixed-width integers are used, wrapping of intermediate values would cause inversion of black and white, and so should be avoided.
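A sketch of that recommendation for the 16-bit-to-8-bit case above: the clipping happens inside find_closest_palette_color(), so intermediate pixel values (stored in floats or sufficiently wide integers) are free to drift out of range without wrapping. This is one plausible reading of the text, not a prescribed implementation:

def find_closest_palette_color(oldpixel):
    # Simple rounding from 16-bit to 8-bit greyscale, as in the example above,
    # with the clipping folded into the palette lookup so that out-of-range
    # intermediate values are handled here rather than wrapped or pre-clipped.
    value = round(oldpixel / 256)
    return min(255, max(0, value))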
