Color spaces, gamma and image enhancement


Unfortunately, OpenGL by itself doesn't define a colour space. It only specifies that the RGB values passed to it form a linear vector space. The values in the rendered framebuffer are then sent to the display device as they are; OpenGL simply passes them through.

Gamma serves two purposes:

  • Sensory perception is nonlinear
  • In the old days, display devices had a nonlinear response

The gamma correction is used to compensate for both.

The transformation is just "linear value v raised to some power gamma", i.e. y(v) = v^gamma.

Colorspace transformations involve the complete chain from input values to what's sent to the display, so this includes the gamma correction. This also implies that you should not manipulate the gamma ramp yourself.

For a long time the typical gamma value used to be 2.2. However, this causes some undesirable quantisation of low values, so a new colour space called sRGB was introduced (by HP and Microsoft), which has a linear part for low values and a power function with an exponent of about 2.4 for the higher values. Most display devices these days use sRGB, and most image files these days are in sRGB as well.

So if you have an sRGB image and display it as-is on an sRGB display device with a linear gamma ramp configured in the video driver (i.e. driver gamma = 1), you're fine simply using sRGB texturing and an sRGB framebuffer and not doing anything else; see the sketch below.
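As a minimal sketch of what that looks like in practice (assuming a GL 3.x+ context and a loader header such as glad or GLEW already set up; the helper names here are illustrative, not from the original answer):

```c
/* Illustrative helpers; assume a current OpenGL 3.x+ context and a loader
 * header (e.g. glad or GLEW) included ahead of this code. */

/* Upload 8-bit sRGB pixel data so that shader reads return linear values. */
GLuint create_srgb_texture(int width, int height, const unsigned char *pixels)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* GL_SRGB8_ALPHA8 declares the data as sRGB-encoded, so sampling it
     * in a shader yields linearized values. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

/* Ask OpenGL to convert linear shader outputs back to sRGB on write,
 * provided the bound framebuffer is sRGB-capable. */
void enable_srgb_output(void)
{
    glEnable(GL_FRAMEBUFFER_SRGB);
}
```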

EDIT due to comments

Just to summarize:

If your system does not support sRGB framebuffers, you have to apply the gamma correction yourself, typically as the last step in your fragment shader; a sketch follows.
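A rough sketch of that manual correction, using the simplified 2.2 exponent discussed further down (the shader source is illustrative, not taken from the answer):

```c
/* GLSL fragment shader source (as a C string) that applies the simplified
 * 2.2 gamma encoding manually when no sRGB framebuffer is available. */
static const char *gamma_correct_fs =
    "#version 330 core\n"
    "in vec2 uv;\n"
    "out vec4 frag_color;\n"
    "uniform sampler2D scene;           // linear-light input\n"
    "void main() {\n"
    "    vec3 linear_rgb = texture(scene, uv).rgb;\n"
    "    vec3 encoded = pow(linear_rgb, vec3(1.0 / 2.2)); // gamma encode\n"
    "    frag_color = vec4(encoded, 1.0);\n"
    "}\n";
```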

Wait, all this information for what? If I can use an sRGB framebuffer and load sRGB textures, and process those textures without sRGB conversions... why should I correct gamma?

Generally, you don't. The purpose of the sRGB texturing and framebuffers is so that you don't have to manually do gamma correction. Reads from sRGB textures are converted to a linear colorspace, and writes to sRGB framebuffers take linear RGB values and convert them to sRGB values. This is all automatic, and more to the point free, performance-wise.

The only time you will need to do gamma correction yourself is if the monitor's gamma does not match sRGB's approximate gamma of 2.2. Rare is the monitor that does this.

Your textures do not have to be in the sRGB colorspace. However, most image creation applications save images in sRGB and work with colors in sRGB, so odds are most of your textures are already in sRGB whether you want them to be or not. The sRGB texture feature simply allows you to actually get the correct color values, rather than the color values you've been getting up until now.

And brightness and contrast: shall they be applied before or after gamma correction?

I don't know what you mean by brightness and contrast. That's something that should be set by the monitor, not your application. But virtually all math operations you will want to do on image data should be done in a linear colorspace. Therefore, if you are given an image in the sRGB colorspace, you need to linearize it before you can do any math on it. The sRGB texture feature makes this free, rather than having to do complex shader math.
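To make the "linearize first" point concrete, here is a small standalone C sketch (using the simplified 2.2 curve described later; the function names are mine) that averages two sRGB values the wrong way and the right way:

```c
#include <math.h>
#include <stdio.h>

/* Simplified sRGB <-> linear conversions (exponent 2.2 approximation). */
static double srgb_to_linear(double s) { return pow(s, 2.2); }
static double linear_to_srgb(double l) { return pow(l, 1.0 / 2.2); }

int main(void)
{
    double a = 0.0, b = 1.0;   /* sRGB black and sRGB white */

    /* Wrong: averaging the gamma-encoded values directly. */
    double naive = (a + b) / 2.0;

    /* Right: linearize, average, then re-encode for display. */
    double correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2.0);

    printf("naive average  : %.3f\n", naive);    /* 0.500 -- too dark */
    printf("linear average : %.3f\n", correct);  /* about 0.730 */
    return 0;
}
```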

RGB

RGB: three values normalized in the range [0.0, 1.0], which have the meaning of the intensity of the color components Red, Green, Blue; this intensity is meant as linear, isn't it?

No. RGB values are meaningless numbers unless their relevance to a particular space/encoding is defined. They may be linear, gamma encoded, or log encoded, or use a compound transfer curve like those defined in the Rec709 and sRGB specs.

Also, they are relative to their primaries and whitepoint as defined in the colorspace, so for instance, #00FF00 in sRGB is a different color than #00FF00 in DCI-P3.

To define how an RGB pixel value should be displayed, you need not only the RGB triplet but also the colorspace it is intended for, which includes the primary coordinates, whitepoint, and transfer curve.
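As a rough sketch of the pieces such a definition has to carry (field names are mine, purely illustrative):

```c
/* Sketch of the data needed to interpret an RGB triplet.
 * Chromaticities are CIE xy coordinates. */
typedef struct {
    double red_xy[2];     /* red primary   */
    double green_xy[2];   /* green primary */
    double blue_xy[2];    /* blue primary  */
    double white_xy[2];   /* whitepoint, e.g. D65 = {0.3127, 0.3290} */
    double (*decode)(double encoded);  /* transfer curve: encoded -> linear */
    double (*encode)(double linear);   /* transfer curve: linear -> encoded */
} colorspace;
```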

sRGB is the default "standard" RGB colorspace for the Web and general purpose computing. It is related to Rec709, the standard colorspace for HDTV.

GAMMA aka TRANSFER CURVE

Gamma. As far as I can understand, gamma is a function which maps RGB color components to another value.

Image gamma takes advantage of the non-linearity of human perception to make the best use of the limited data size of 8-bit-per-channel images. The human eye is more sensitive to changes in darker colors, so more bits are used to define the darker colors in a gamma-encoded image.

Before digital, gamma was also used in the NTSC broadcast system, where it suppressed the apparent noise in the signal, in a way similar to how image gamma prevents an 8-bit-per-channel image from having "banding" artifacts.

First, I shall determine the gamma ramp. How could I determine it? (analytically or using lookup tables)

Gamma CURVE. The sRGB gamma curve is easily accessed; the Wikipedia article on sRGB gives the exact piecewise function for going from sRGB to linear. You can also use the "simplified" method, which simply uses a 2.2 exponent curve:

linearVideo = sRGBvideo^2.2

and the simplified inverse, to go back to sRGB:

sRGBvideo = linearVideo^(1/2.2) ≈ linearVideo^0.4545

Using the simplified version will introduce some minor gamma errors; it is advised to use the "correct" curve for critical operations or where an image will be "round tripped" multiple times.
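For reference, here is a sketch of the exact piecewise sRGB transfer functions in C (constants as published in the sRGB spec; the function names are mine):

```c
#include <math.h>

/* Exact sRGB decode: encoded [0,1] -> linear [0,1]. */
double srgb_decode(double s)
{
    return (s <= 0.04045) ? s / 12.92
                          : pow((s + 0.055) / 1.055, 2.4);
}

/* Exact sRGB encode: linear [0,1] -> encoded [0,1]. */
double srgb_encode(double l)
{
    return (l <= 0.0031308) ? l * 12.92
                            : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
}
```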

There's another question. In the case the device gamma is different from the "standard" 2.2, how do I "accumulate" different gamma corrections? I don't know if it is clear: in the case where image RGB values are already corrected for a monitor with a gamma value of 2.2, but the monitor has a gamma value of 2.8, how do I correct gamma?

2.8 ??? What monitor is that? PAL? This is unusual: while the PAL spec says that, 2.8 isn't "practical" in reality. Monitors are typically around 2.3 to 2.5, depending on how they are set up. When you adjust black level and contrast (white level), you are in essence adjusting the perceived gamma to match the viewing environment (room lighting).

Just FYI: while the sRGB "signal" has an encoded gamma of 1/2.2, the monitor normally adds an exponent of about 1.1.

For Rec709, the encoded signal has an effective gamma of roughly 1/1.9, but the monitor in the reference viewing environment is about 2.4.

In both cases there is an intentional system gamma gain.

If you wanted to encode an image with a gamma for a 2.8 display and you wanted no system gamma gain, then the exponent is 1/2.8.
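As a concrete sketch of "accumulating" corrections for the 2.2-encoded-image-on-a-2.8-display case in the question (the function name is mine, and the 2.8 target is the hypothetical from the question):

```c
#include <math.h>

/* Re-encode a simplified-2.2-encoded value for a hypothetical gamma 2.8
 * display, assuming no intentional system gamma gain is wanted. */
double reencode_for_gamma28(double srgb_encoded)
{
    double linear = pow(srgb_encoded, 2.2);   /* undo the 2.2 encoding */
    return pow(linear, 1.0 / 2.8);            /* encode for the 2.8 display */
}
/* The net exponent applied to the already-encoded value is 2.2/2.8, about 0.786. */
```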

The "highest" gamma in common use is for digital cinema (and also Rec2020), at 2.6 For those of you thinking PAL & 2.8, I encourage you to read Poynton on that subject:

HIGHLY RECOMMENDED READING

Charles Poynton's Gamma FAQ is an easy read and completely describes these issues and why they are important in an image pipeline. Also read his Color FAQ, which he publishes alongside it.

A FEW WORDS ON LINEAR vs sRGB

Working on images in a linear workspace is typically ideal, as it not only simplifies the math but also matches how light behaves in the real world: light combines linearly (additively). But if you work in linear, you need adequate bit depth, and 8 bits is not enough.

Human perception is NON-linear. Image gamma encoding takes advantage of that non-linearity to make the most use of 8-bit image containers. When you convert to linear, YOU NEED MORE BITS. 12 bits per channel is considered a minimum, but 16-bit float is the minimum "recommended best practice" for linear workspaces.
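To illustrate why, a small C sketch (reusing the exact sRGB decode above; entirely illustrative) that counts how many distinct 8-bit linear codes the darkest quarter of 8-bit sRGB codes collapses into:

```c
#include <math.h>
#include <stdio.h>

/* Exact sRGB decode: encoded [0,1] -> linear [0,1]. */
static double srgb_decode(double s)
{
    return (s <= 0.04045) ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4);
}

int main(void)
{
    int seen[256] = {0};
    int distinct = 0;

    /* Map the darkest quarter of 8-bit sRGB codes (0..63) to 8-bit linear. */
    for (int code = 0; code < 64; ++code) {
        int linear8 = (int)floor(srgb_decode(code / 255.0) * 255.0 + 0.5);
        if (!seen[linear8]) { seen[linear8] = 1; ++distinct; }
    }

    /* 64 sRGB shadow codes survive as only around 14 distinct linear codes,
     * which is why 8-bit linear images band in the shadows. */
    printf("64 dark sRGB codes -> %d distinct 8-bit linear codes\n", distinct);
    return 0;
}
```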

If using textures in a linear rendering environment, those textures need to be transformed to a linear space (and often a deeper bit depth). While the added bits increase data bandwidth, the simplified math often allows faster computation.

sRGB is a DISPLAY REFERRED space: it is intended for DISPLAY PURPOSES and for storing images in a compact, "display ready" state. Black is 0 and white is 255, and the transfer curve is close to 1/2.2.

sRGB is based on Rec709 (HDTV) and uses identical primaries and whitepoint, but the transfer curve and data encoding are different. Rec709 is intended for display on a higher-gamma monitor in a darkened living room, and it encodes black at 16 and white at 235.
