Do I need to gamma correct the final color output on a modern computer/monitor


I've been under the assumption that my gamma correction pipeline should be as follows:

  • Use sRGB format for all textures loaded in (GL_SRGB8_ALPHA8)
1 Answer

2020-12-13 08:07

    First of all you must understand that the nonlinear mapping applied to the color channels is often more than just a simple power function. The sRGB nonlinearity can be approximated by a power of roughly 2.2, but that's not the real deal: the actual mapping is piecewise, with a short linear segment near black followed by a power segment with exponent 2.4. Anyway, your primary assumptions are more or less correct.
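
    To make that concrete, here is the decoding direction (encoded value to linear intensity) as defined by the sRGB specification; a minimal reference sketch in C:

    #include <math.h>
    
    /* sRGB decoding per IEC 61966-2-1: a short linear segment near
       black, then a power segment with exponent 2.4. The overall
       curve only resembles a pure power function. */
    double srgb_to_linear(double x)
    {
        return (x <= 0.04045) ? x / 12.92
                              : pow((x + 0.055) / 1.055, 2.4);
    }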

    If your textures are stored in the more common image file formats, they will contain the values exactly as they are presented to the graphics scanout. Now there are a few common hardware scenarios:

    • The scanout interface outputs a linear signal and the display device then internally applies a nonlinear mapping. Old CRT monitors were nonlinear due to their physics: the amplifiers could put only so much current into the electron beam, the phosphor saturated, and so on. That's why the whole gamma thing was introduced in the first place: to model the nonlinearities of CRT displays.

    • Modern LCD and OLED displays either use resistor ladders in their driver amplifiers, or they have gamma ramp lookup tables in their image processors.

    • Some devices, however, are linear and ask the image-producing device to supply a LUT on the scanout that matches the desired output color profile.

    On most computers the effective scanout LUT is linear! What does this mean though? A little detour:


    For illustration I quickly hooked up my laptop's analogue display output (VGA connector) to my analogue oscilloscope: blue channel onto scope channel 1, green channel onto scope channel 2, external triggering on the line synchronization signal (HSync). A quick and dirty OpenGL program, deliberately written in immediate mode, was used to generate a linear color ramp:

    #include <GL/glut.h>
    
    void display(void)
    {
        /* glutGet returns a plain int */
        int win_width  = glutGet(GLUT_WINDOW_WIDTH);
        int win_height = glutGet(GLUT_WINDOW_HEIGHT);
    
        glViewport(0, 0, win_width, win_height);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 1, 0, 1, -1, 1);
    
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    
        /* one quad: black on the left edge, white on the right;
           the rasterizer interpolates the color linearly in between */
        glBegin(GL_QUAD_STRIP);
            glColor3f(0.f, 0.f, 0.f);
            glVertex2f(0.f, 0.f);
            glVertex2f(0.f, 1.f);
            glColor3f(1.f, 1.f, 1.f);
            glVertex2f(1.f, 0.f);
            glVertex2f(1.f, 1.f);
        glEnd();
    
        glutSwapBuffers();
    }
    
    int main(int argc, char *argv[])
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    
        glutCreateWindow("linear");
        glutFullScreen();
        glutDisplayFunc(display);
    
        glutMainLoop();
    
        return 0;
    }
    

    The graphics output was configured with the Modeline

    "1440x900_60.00"  106.50  1440 1528 1672 1904  900 903 909 934 -HSync +VSync
    

    (because that's the same mode the flat panel runs in, and I was using cloning mode)

    I then set up the scanout LUTs as follows (a way to reproduce this is sketched after the list):

    • a gamma=2 LUT on the green channel
    • a linear (gamma=1) LUT on the blue channel
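
    On an X11 system such per-channel LUTs can be set, for example, with the xgamma utility. A sketch, with the caveat that xgamma specifies the display gamma to be compensated (the server builds a ramp of roughly x^(1/gamma)), so the x² ramp used here corresponds to a value of 0.5:

    xgamma -ggamma 0.5 -bgamma 1.0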

    This is what the signals of a single scanout line look like (upper curve: Ch2 = green, lower curve: Ch1 = blue):

    [Figure: analogue video signals of one scanout line, gamma=2 (green) and gamma=1 (blue)]

    You can clearly see the x⟼x² and x⟼x mappings (the parabolic and linear shapes of the curves).


    After this little detour we know that the pixel values which go to the main framebuffer go there as they are: the OpenGL linear ramp underwent no further changes, and only when a nonlinear scanout LUT was applied did the signal sent to the display change.

    Either way, the values you present to the scanout (which means the on-screen framebuffers) will undergo a nonlinear mapping at some point in the signal chain. And for all standard consumer devices this mapping follows the sRGB standard, because sRGB is the lowest common denominator: images represented in the sRGB color space can be reproduced on most output devices.

    Since most programs, like web browsers, assume the output undergoes an sRGB-to-display mapping, they simply copy the pixel values of the standard image file formats to the on-screen framebuffer as they are, without performing a color space conversion. This implies that the color values within those images are in the sRGB color space (or the programs merely convert to sRGB if the image's color profile is something else). The correct thing to do (if, and only if, the color values written to the framebuffer are scanned out to the display unaltered, i.e. the scanout LUT is considered part of the display) would be a conversion to the color profile the display expects.

    But this implies that the on-screen framebuffer itself is in sRGB color space (I don't want to split hairs about how idiotic that is, let's just accept it as fact).

    How does this fit together with OpenGL? First of all, OpenGL does all its color operations linearly. However, since the scanout is expected to be in some nonlinear color space, the end result of OpenGL's rendering operations must somehow be brought into the on-screen framebuffer's color space.

    This is where the ARB_framebuffer_sRGB extension (which went core with OpenGL-3) enters the picture. It introduced new flags for the configuration of window pixel formats:

    New Tokens
    
        Accepted by the <attribList> parameter of glXChooseVisual, and by
        the <attrib> parameter of glXGetConfig:
    
            GLX_FRAMEBUFFER_SRGB_CAPABLE_ARB             0x20B2
    
        Accepted by the <piAttributes> parameter of
        wglGetPixelFormatAttribivEXT, wglGetPixelFormatAttribfvEXT, and
        the <piAttribIList> and <pfAttribIList> of wglChoosePixelFormatEXT:
    
            WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB             0x20A9
    
        Accepted by the <cap> parameter of Enable, Disable, and IsEnabled,
        and by the <pname> parameter of GetBooleanv, GetIntegerv, GetFloatv,
        and GetDoublev:
    
            FRAMEBUFFER_SRGB                             0x8DB9
    

    So if you have a window configured with such an sRGB pixel format and you enable sRGB rasterization with glEnable(GL_FRAMEBUFFER_SRGB);, the results of the linear-colorspace rendering operations will be transformed into the sRGB color space on write.
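
    A minimal sketch of that path, assuming the window was created with an sRGB-capable pixel format (e.g. via the GLX/WGL attributes quoted above); the encoding query needs OpenGL-3.0 or the extension:

    GLint encoding = GL_LINEAR;
    glGetFramebufferAttachmentParameteriv(
        GL_FRAMEBUFFER, GL_BACK_LEFT,
        GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &encoding);
    
    if (encoding == GL_SRGB) {
        /* linear shader outputs are now converted to sRGB on write,
           and blending operates on the linear values */
        glEnable(GL_FRAMEBUFFER_SRGB);
    }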

    Another way would be to render everything into an off-screen FBO and do the color conversion in a postprocessing shader, as sketched below.
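
    A minimal sketch of such a pass: render the scene into a linear off-screen FBO, then draw a fullscreen quad with a fragment shader that applies the sRGB encoding by hand (GLSL source given as a C string; the sampler name fbo_texture is just an assumed example):

    const char *srgb_encode_fs =
        "#version 330 core\n"
        "uniform sampler2D fbo_texture; /* the linear off-screen color buffer */\n"
        "in  vec2 uv;\n"
        "out vec4 color;\n"
        "vec3 linear_to_srgb(vec3 c) {\n"
        "    vec3 lo = c * 12.92;\n"
        "    vec3 hi = 1.055 * pow(c, vec3(1.0/2.4)) - 0.055;\n"
        "    return mix(lo, hi, step(vec3(0.0031308), c));\n"
        "}\n"
        "void main() {\n"
        "    vec4 texel = texture(fbo_texture, uv);\n"
        "    color = vec4(linear_to_srgb(texel.rgb), texel.a);\n"
        "}\n";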

    But that's only the output side of the rendering signal chain. You've also got input signals, in the form of textures, and those are usually images with their pixel values stored nonlinearly. So before they can be used in linear image operations, such images must first be brought into a linear color space. Let's ignore for the time being that mapping nonlinear color spaces into linear ones opens several cans of worms in itself; that is, in fact, why the sRGB color space is so ridiculously small: to avoid those problems.

    So to address this, the EXT_texture_sRGB extension was introduced, which turned out to be so vital that it never went through an ARB stage but went straight into the OpenGL specification itself: behold the GL_SRGB… internal texture formats.

    A texture loaded with such an internal format undergoes an sRGB-to-linear-RGB colorspace transformation when samples are sourced from it. This gives linear pixel values suitable for linear rendering operations, and the result can then be validly transformed to sRGB on its way to the main on-screen framebuffer.
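
    For example (a sketch; width, height and pixels stand for data decoded from an ordinary, i.e. sRGB-encoded, image file):

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_SRGB8_ALPHA8,  /* sRGB internal format: decoded on sampling */
                 width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    /* shader samples now yield linear RGB; alpha is left untouched */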



    A personal note on the whole issue: presenting images on the on-screen framebuffer in the target device's color space is, IMHO, a huge design flaw. There's no way to do everything right in such a setup without going insane.

    What one really wants is to have the on-screen framebuffer in a linear contact color space; the natural choice would be CIE XYZ. Rendering operations would naturally take place in the same contact color space. Doing all graphics operations in contact color spaces avoids the aforementioned cans of worms involved in trying to push a square peg named linear RGB through a nonlinear round hole named sRGB.
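
    For reference, going from linear sRGB to CIE XYZ (D65 white point) is a single matrix multiplication, which is what would make such a compositing pipeline cheap; a sketch:

    /* standard linear-sRGB to CIE XYZ matrix (D65), per IEC 61966-2-1 */
    void linear_srgb_to_xyz(const double rgb[3], double xyz[3])
    {
        static const double M[3][3] = {
            { 0.4124, 0.3576, 0.1805 },
            { 0.2126, 0.7152, 0.0722 },
            { 0.0193, 0.1192, 0.9505 },
        };
        for (int i = 0; i < 3; ++i)
            xyz[i] = M[i][0]*rgb[0] + M[i][1]*rgb[1] + M[i][2]*rgb[2];
    }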

    And although I don't like the design of Weston/Wayland very much, it at least offers the opportunity to actually implement such a display system: have the clients render and the compositor operate in a contact color space, and apply the output device's color profile in a final postprocessing step.

    The only drawback of contact color spaces is that deep color (i.e. more than 12 bits per color channel) becomes imperative. In fact, 8 bits are completely insufficient, even with nonlinear RGB (the nonlinearity helps a bit to cover up the lack of perceptible resolution).
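
    A quick back-of-the-envelope illustration: in a linear 8-bit framebuffer the darkest nonzero value is 1/255 ≈ 0.0039, which sRGB-encodes to about 0.05, i.e. roughly code value 13 out of 255. The very first linear step thus spans what sRGB resolves with about a dozen distinct dark shades, which is why linear storage needs considerably more bits to keep up.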


    Update

    I've loaded a few images (in my case both .png and .bmp images) and examined the raw binary data. It appears to me as though the images are actually in the RGB color space: if I compare the values of pixels in an image editing program with the byte array I get in my program, they match up perfectly. Since my image editor is giving me RGB values, this would indicate the images are stored in RGB.

    Yes, indeed. If somewhere in the signal chain a nonlinear transform is applied, but all the pixel values travel unmodified from the image to the display, then that nonlinearity has already been pre-applied to the image's pixel values. Which means the image is already in a nonlinear color space.

    2 - "On most computers the effective scanout LUT is linear! What does this mean though?

    I'm not sure I can find where this thought is finished in your response.

    This thought is elaborated in the section that immediately follows, where I show how the values you put into a plain (OpenGL) framebuffer go to the monitor unmodified. The idea of sRGB is: put the values into the images exactly as they are sent to the monitor, and build consumer displays to follow that sRGB color space.

    From what I can tell, having experimented, all monitors I've tested on output linear values.

    How did you measure the signal response? Did you use a calibrated power meter or a similar device to measure the light intensity emitted by the monitor in response to the signal? You can't trust your eyes with that, because like all our senses, our eyes have a roughly logarithmic response.


    Update 2

    To me the only way I could see what you're saying to be true then is if the image editor was giving me values in sRGB space.

    That's indeed the case. Because color management was added to all the widespread graphics systems as an afterthought, most image editors edit pixel values in their destination color space. Note that one particular design goal of sRGB was to retroactively specify the unmanaged, direct-value-transfer color operations as they were (and mostly still are) done on consumer devices. Since no color management happens at all, the values contained in images and manipulated in editors must already be in sRGB. This works as long as images are not synthetically created in a linear rendering process; in the case of the latter, the rendering system has to take the destination color space into account.

    I take a screenshot and then use an image editing program to see what the values of the pixels are

    Which of course gives you only the raw values in the scanout buffer, without the gamma LUT and the display nonlinearity applied.
