Rendering in linear space is good because it is simple; lighting contributions sum, material reflectance values multiply. Everything in the linear world is simple and intuitive - inhabiting the linear world is a sure way of preventing your brain from squirting out through your nose.
If the output of our display monitors was linear then this would be the end of the story. Alas, this is not the case...
Non-linear Outputs

The graph below shows how the output intensity of a typical monitor (the orange line) compares to linear intensity (the blue line). A monitor's response curve sags away from linear such that a pixel with a linear intensity of 0.5 appears about one fifth as bright as a pixel with a linear intensity of 1 (not half as bright, as we might have expected).
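To make that "one fifth" concrete, here's a minimal Python sketch. It assumes a simple power-law response with a gamma of 2.2 - a common approximation; real displays follow the sRGB curve, which is close to but not exactly a pure 2.2 power law:

```python
# Model the monitor's response as a power law with gamma = 2.2 (an
# approximation of a real display's sRGB-ish response curve).
GAMMA = 2.2

def display_response(pixel_value):
    """Intensity the monitor actually emits for a pixel value in [0, 1]."""
    return pixel_value ** GAMMA

# A pixel value of 0.5 is emitted at roughly a fifth of full brightness:
print(round(display_response(0.5), 3))  # ~0.218, not the 0.5 we might expect
```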
The result of this sag is that an uncorrected linear image sent to the display will appear much darker than it should.

The fix is gamma correction: before output, raise each channel to the power 1/2.2, the inverse of the display's response curve. This effectively cancels out the display's response curve, maintaining a linear relationship between different intensities in the output. Problem solved. Well, not quite...
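The correction step can be sketched in Python (again assuming a pure 2.2 power-law display response):

```python
GAMMA = 2.2

def gamma_correct(linear):
    """Pre-distort a linear value with the inverse of the display's response
    curve so that the two cancel and the viewer sees the intended intensity."""
    return linear ** (1.0 / GAMMA)

# The display raises whatever we feed it to the power GAMMA, so the
# corrected value comes out of the monitor at its original linear intensity:
emitted = gamma_correct(0.5) ** GAMMA
print(round(emitted, 3))  # 0.5
```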
Non-linear Inputs

It is highly likely that some of the inputs to our linear rendering will be textures, and that those textures will have been created from non-linear photographs and/or manipulated to look 'right' on a non-linear monitor. Hence these input textures are themselves non-linear; they are innately 'pre-corrected' for the display on which they were created. This actually turns out to be a good thing (especially if we're using an 8-bit-per-channel format), as it increases precision at the lower intensities to which the human eye is more sensitive.
We can't use these non-linear textures directly as inputs to a linear shading function (e.g. lighting) - the results would simply be incorrect. Instead we need to linearize texels as they are fetched by applying the inverse of the output correction, i.e. raising each channel to the power 2.2. This can be done manually in a shader, or the graphics driver can do it automagically for us if we use an sRGB format texture.
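For reference, here's the exact sRGB decode in Python - this is what an sRGB-format texture fetch does for us in hardware (a plain pow(c, 2.2) is a common close approximation):

```python
def srgb_to_linear(c):
    """Decode one sRGB-encoded channel value in [0, 1] to linear intensity,
    using the piecewise sRGB transfer function (linear toe + 2.4 power)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Mid-grey in an sRGB texture is much darker in linear space:
print(round(srgb_to_linear(0.5), 3))  # ~0.214
```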
End of story? Not quite...
Precision

For a deferred renderer there is a pitfall of which programmers should be aware. If we linearize a non-linear input texture and then store the linear result in a g-buffer prior to the lighting stage, we lose all of the low-intensity precision benefits of having non-linear data in the first place. The result is just horrible - take a look at the low-intensity ends of the gradients in the left image below:

The solution is to store such data in the g-buffer in an appropriate non-linear (sRGB) format, just as it was stored in the source texture. What do I mean by 'appropriate'?
To Be (Linear), Or Not To Be (Linear)?

Which parts of the g-buffer require this treatment? It depends on the g-buffer organisation, but in general I'd say that any colour information (diffuse albedo/specular colour) should be treated as non-linear; it was probably prepared (pre-corrected) to 'look right' on a non-linear display. Any geometric or other non-colour information (normals/material properties) should be treated as linear; they don't encode 'intensity' as colour textures do.
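A quick way to see the precision argument, sketched in Python: count how many of the 256 codes in an 8-bit channel land in the darkest part of the linear range under each storage scheme (using pow(v, 2.2) as a stand-in for the sRGB encode):

```python
GAMMA = 2.2

def codes_below(linear_cutoff, decode):
    """Count 8-bit codes whose decoded linear value falls below the cutoff."""
    return sum(1 for i in range(256) if decode(i / 255.0) < linear_cutoff)

store_linear = lambda v: v           # g-buffer holds linear values directly
store_gamma  = lambda v: v ** GAMMA  # g-buffer holds gamma-encoded values

# Codes spent on linear intensities below 0.1:
print(codes_below(0.1, store_linear))  # 26
print(codes_below(0.1, store_gamma))   # 90 - over 3x the shadow detail
```

Storing linearized data in 8 bits leaves only 26 codes for the darkest tenth of the range - hence the banding in the gradients above - while the non-linear encoding spends 90 codes there.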
Think of this post as a sort of quick-reference card; for more in-depth information take a look at the following resources:
"The Importance of Being Linear", Larry Gritz & Eugene d'Eon, GPU Gems 3
"Uncharted 2: HDR Lighting", John Hable's must-read GDC presentation
Wikipedia's gamma correction entry (and donate some money to Wikipedia while you're at it)