What is a good, optimized C/C++ algorithm for converting a 24-bit bitmap to 16-bit with dithering?

2022-01-06 | Tags: c, c++, bitmap, gdi+, dithering

I've been looking for an optimized (i.e., quick) algorithm that converts a 24-bit RGB bitmap to a 16-bit (RGB565) bitmap using dithering. I'm looking for something in C/C++ where I can actually control how the dithering is applied. GDI+ seems to provide some methods, but I can't tell if they dither or not. And, if they do dither, what mechanism are they using (Floyd-Steinberg?)


Does anyone have a good example of bitmap color-depth conversion with dithering?

Accepted Answer

As you mentioned, the Floyd-Steinberg dithering method is popular because it's simple and fast. For the subtle differences between 24-bit and 16-bit color, the results will be nearly optimal visually.
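To make the idea concrete, here is a minimal, self-contained sketch of Floyd-Steinberg error diffusion on a grayscale image, quantizing 8-bit values down to pure black/white. It is an illustration only (the function name and 1-bit target are my own choices, not part of the original answer), but the 7/16, 3/16, 5/16, 1/16 weights are the classic Floyd-Steinberg kernel used in the RGB565 code further down.

```cpp
#include <vector>

// Dither an 8-bit grayscale image (row-major, width*height values) to the
// two levels 0 and 255 using Floyd-Steinberg error diffusion.
std::vector<int> DitherTo1Bit(std::vector<int> px, int width, int height)
{
    std::vector<int> out(px.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            int i = y * width + x;
            int old = px[i];
            int q = old < 128 ? 0 : 255;   // nearest of the two output levels
            out[i] = q;
            int err = old - q;             // quantization error for this pixel
            // Push the error onto the not-yet-processed neighbors:
            // 7/16 right, 3/16 below-left, 5/16 below, 1/16 below-right.
            if (x + 1 < width)      px[i + 1]         += err * 7 / 16;
            if (y + 1 < height)
            {
                if (x > 0)          px[i + width - 1] += err * 3 / 16;
                                    px[i + width]     += err * 5 / 16;
                if (x + 1 < width)  px[i + width + 1] += err * 1 / 16;
            }
        }
    return out;
}
```

Run on a flat 50% gray, this produces an alternating checker-like black/white pattern whose average stays close to the input level, which is exactly the point of dithering.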

It was suggested that I use the sample picture Lena, but I decided against it; despite its long history as a test image, I consider it too sexist for modern sensibilities. Instead I present a picture of my own. First up is the original, followed by the conversion to dithered RGB565 (and converted back to 24-bit for display).

And the code, in C++:

#include <vector>
#include <windows.h>   // for BYTE; without Windows headers: typedef unsigned char BYTE;

// Clamp an intermediate value back into the 0..255 byte range.
inline BYTE Clamp(int n)
{
    n = n > 255 ? 255 : n;
    return n < 0 ? 0 : n;
}

// Accumulated quantization error for one pixel, per channel.
struct RGBTriplet
{
    int r;
    int g;
    int b;
    RGBTriplet(int _r = 0, int _g = 0, int _b = 0) : r(_r), g(_g), b(_b) {}
};

// Convert 24-bit BGR input to 16-bit RGB565 (little endian) with
// Floyd-Steinberg error diffusion. Errors are kept scaled by 16 and only
// divided down when consumed, so no fractional precision is lost.
void RGB565Dithered(const BYTE * pIn, int width, int height, int strideIn, BYTE * pOut, int strideOut)
{
    std::vector<RGBTriplet> oldErrors(width + 2);       // errors diffused down from the previous row
    for (int y = 0;  y < height;  ++y)
    {
        std::vector<RGBTriplet> newErrors(width + 2);   // errors collected for the next row
        RGBTriplet errorAhead;                          // the 7/16 share carried to the pixel on the right
        for (int x = 0;  x < width;  ++x)
        {
            // Add the incoming error (scaled by 16) to each source channel.
            int b = (int)(unsigned int)pIn[3*x]     + (errorAhead.b + oldErrors[x+1].b) / 16;
            int g = (int)(unsigned int)pIn[3*x + 1] + (errorAhead.g + oldErrors[x+1].g) / 16;
            int r = (int)(unsigned int)pIn[3*x + 2] + (errorAhead.r + oldErrors[x+1].r) / 16;
            // Quantize to 5/6/5 bits and pack as R5G6B5, low byte first.
            int bAfter = Clamp(b) >> 3;
            int gAfter = Clamp(g) >> 2;
            int rAfter = Clamp(r) >> 3;
            int pixel16 = (rAfter << 11) | (gAfter << 5) | bAfter;
            pOut[2*x] = (BYTE) pixel16;
            pOut[2*x + 1] = (BYTE) (pixel16 >> 8);
            // Distribute each channel's quantization error with the
            // Floyd-Steinberg weights: 7 right, 3 below-left, 5 below, 1 below-right.
            // The error is measured against the value the 5- or 6-bit level
            // expands back to (v * 255 / 31 or v * 255 / 63).
            int error = r - ((rAfter * 255) / 31);
            errorAhead.r = error * 7;
            newErrors[x].r += error * 3;
            newErrors[x+1].r += error * 5;
            newErrors[x+2].r = error * 1;   // first write to x+2, so plain assignment is safe
            error = g - ((gAfter * 255) / 63);
            errorAhead.g = error * 7;
            newErrors[x].g += error * 3;
            newErrors[x+1].g += error * 5;
            newErrors[x+2].g = error * 1;
            error = b - ((bAfter * 255) / 31);
            errorAhead.b = error * 7;
            newErrors[x].b += error * 3;
            newErrors[x+1].b += error * 5;
            newErrors[x+2].b = error * 1;
        }
        pIn += strideIn;
        pOut += strideOut;
        oldErrors.swap(newErrors);
    }
}

I won't guarantee this code is perfect; I already had to fix one of those subtle errors that I alluded to in another comment. However, it did generate the results above. It takes 24-bit pixels in BGR order as used by Windows, and produces R5G6B5 16-bit pixels in little-endian order.
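For clarity, the 5/6/5 packing and the inverse expansion used when computing the error above can be isolated into a small self-contained sketch (the helper names here are my own, not from the original answer):

```cpp
// Pack 8-bit R, G, B into an R5G6B5 pixel by truncating to 5, 6 and 5 bits.
unsigned short PackRGB565(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned short)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

// Expand an R5G6B5 pixel back to 8-bit channels; v * 255 / 31 (or / 63 for
// the 6-bit green) maps the reduced range back onto the full 0..255 range,
// which is the same reconstruction the dithering code measures error against.
void UnpackRGB565(unsigned short p, unsigned char &r, unsigned char &g, unsigned char &b)
{
    r = (unsigned char)((((p >> 11) & 0x1F) * 255) / 31);
    g = (unsigned char)((((p >>  5) & 0x3F) * 255) / 63);
    b = (unsigned char)(( (p        & 0x1F) * 255) / 31);
}
```

Note that pure white and pure black round-trip exactly (0xFFFF and 0x0000), while intermediate values lose their low bits, which is precisely the error the dithering spreads to neighboring pixels.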
