We should begin by saying that the point of all of this is not merely to debunk one of the most-repeated photographic myths out there. Debunking is just a tool: an incorrect understanding of how exposure and ISO work, and of their roles during the shooting process, leads to a drop in the quality of the shots photographers get.
So our goal is to show how, by rejecting this myth, you (the now-enlightened user) will be able to more fully utilize the capabilities that your camera has, as well as to improve the quality of your photographs.
Every day one can see threads on photographic forums where members discuss the various modes of automatic exposure, trying to find the perfect one. As a rule, these discussions arrive at the same question: what compensation should one apply to automatic metering to get consistently good exposure? It turns out that no autoexposure mode universally guarantees good out-of-the-box results.
We are planning to demonstrate that one of the ways of getting good exposure is metering while using the in-camera spotmeter on the lightest part of the scene that needs to maintain full detail (white clouds, snow, etc.) and applying the appropriate compensation to the exposure recommended by the spotmeter.
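As a rough back-of-the-envelope sketch of that compensation (our own illustration with assumed constants, not the article's exact numbers): a reflective spot meter renders whatever it reads as a midtone, so placing a spot-metered highlight near the raw clipping point means adding roughly log2(clip/midtone) EV of positive compensation, minus whatever safety headroom one wants to keep:

```python
import math

def highlight_compensation_ev(midtone_fraction=0.18, headroom_stops=0.5):
    """Estimate the EV compensation needed to place a spot-metered highlight
    near clipping.

    A spot meter renders its target as a midtone; 18% and the 0.5-stop safety
    margin are assumed constants for illustration, not measured values for
    any particular camera.
    """
    return math.log2(1.0 / midtone_fraction) - headroom_stops

comp = highlight_compensation_ev()
print(f"suggested compensation: +{comp:.1f} EV")  # about +2 EV with these assumptions
```

With different assumed meter calibration or safety margin, the number shifts accordingly; the point is only that the compensation is a fixed, computable offset, not guesswork.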
They say that "a histogram is a graphical representation of the pixels exposed in your image" or "when judging exposure, the primary areas of the histogram to be concerned with are the right and left edges".
We are going to demonstrate the following:
In-camera histograms don't really allow one to analyze the shadow and highlight zones of an image.
An in-camera histogram changes significantly with changes in the camera settings such as contrast, picture style, brightness, etc.
So, no. By no means can the in-camera histogram be used by a RAW shooter to evaluate exposure.
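To illustrate how a JPEG-derived histogram can misrepresent what is happening in the raw data, here is a toy numpy experiment. Everything in it is synthetic and assumed: the gamma and S-shaped contrast curve merely stand in for some unspecified in-camera rendering, not for any vendor's actual tone curve:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic linear "raw" values on a 0..1 scale, skewed toward the shadows
# the way real scene luminance tends to be.
raw = rng.uniform(0.0, 1.0, 100_000) ** 2.2

# A crude stand-in for an in-camera JPEG rendering: sRGB-like gamma plus an
# S-shaped contrast curve (both are our assumptions, not a real camera's curve).
jpeg = np.clip(raw, 0.0, 1.0) ** (1 / 2.2)
jpeg = 0.5 + np.tanh(4 * (jpeg - 0.5)) / (2 * np.tanh(2))

# Fraction of pixels piled up at the right edge of each histogram.
raw_clipped = np.mean(raw >= 0.99)
jpeg_clipped = np.mean(jpeg >= 0.99)
print(f"fraction at right edge: raw {raw_clipped:.4f}, JPEG-style {jpeg_clipped:.4f}")
```

The rendered data shows many times more pixels jammed against the right edge than the linear raw data does, so a histogram built from the rendering "cries wolf" about highlight clipping that isn't there in the raw file; changing the contrast or picture-style settings reshapes this histogram without touching the raw data at all.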
“How do you know, when you think blue — when you say blue — that you are talking about the same blue as anyone else?"
Christopher Moore, Sacre Bleu: A Comedy d'Art
The goals of this article are twofold: the first is to demonstrate that out-of-camera JPEGs, including in-camera previews, can't be used uncritically, without verification, to evaluate color (as we already know, the in-camera histogram is misleading, too). The second is to show that the converter recommended by the camera manufacturer is not necessarily tuned to match the out-of-camera JPEG.
Some Internet discussions claim that it is easier to push shadows up on one camera model than on another. It turns out that such an impression may result from a certain trick. First we will see the trick at work, and then we will expose it.
If we compare two cameras (or different settings for the same camera; or even the same shot from the same camera but processed in different ways), and we’re doing this by looking at two files in a RAW converter, we need to:
either be sure that those two files were processed identically (and no, that doesn’t mean pushing the same buttons or moving the same sliders in a converter / converters);
or, if they WERE processed in different ways, understand exactly what the difference is and how to get to the lowest common denominator, if that’s the goal.
When a new camera reaches the market, one of the most discussed subjects is whether the color it records is the same, better, or worse compared to a previous model. It often happens that the color is evaluated based on the rendering provided by some RAW converter. That is, an unknown parameter comes into play: the color profiles or transforms that the RAW converter uses for these models. Another problem with such comparisons is that they are often based on shots taken with different lenses, under different light, and with effectively different exposures in RAW (even though the exposure settings themselves may be the same).
Let's have a look at how cameras compare in RAW when the set-up is kept as close to identical as possible and the exposure in RAW is equalized.
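One simple way to equalize exposure in RAW, sketched here under our own assumptions (not necessarily the exact procedure used in the article): photograph the same uniformly lit gray patch with both cameras, subtract each camera's black level, and compute the EV offset from the ratio of the mean raw values:

```python
import math

def raw_exposure_offset_ev(patch_mean_a, patch_mean_b, black_a=0, black_b=0):
    """EV offset between two shots of the same gray patch, from mean raw
    levels with black level subtracted. Positive means shot A received more
    exposure in RAW; apply the offset to bring the two to a common level.
    The numbers below are made-up illustrative raw levels, not measurements.
    """
    return math.log2((patch_mean_a - black_a) / (patch_mean_b - black_b))

print(raw_exposure_offset_ev(3200, 1600))  # -> 1.0 (shot A is one stop up)
```

This works in linear raw values only; once a tone curve has been applied, the simple ratio-of-means relation no longer holds.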
We've already received a lot of feedback attributing the preservation of highlights in the auxiliary subframe to parallax and to the razor-thin shape of the highlights in the still-life shot, rather than to what it really is: a ≈1-stop difference in clipping between the main and auxiliary subframes.
Given the mechanism behind the formation and recording of dual-pixel raw data, there is no relation to the size or shape of the highlight area.
To give an example, please consider this photo by Calle Rosenqvist / Kamera & Bild, a dual-pixel raw taken at ISO 400 (you can download it from page 3 of the article; it is the street-scene shot _91A0045.CR2). This is definitely not a case of razor-thin highlights: it is a rather extensive blown-out area that, as we will see, can be recovered using the data from the auxiliary subframe.
Let's take a close look at a dual-pixel raw file from a Canon 5D Mark IV using RawDigger 1.2.13.
The dual-pixel raw contains two raw data sets; we will call them the main subframe and the auxiliary subframe.
We'll show that the difference between the main and auxiliary subframes is nearly 2x, or 1 stop; that the auxiliary subframe can be used for highlight recovery (an additional 1 stop of highlights is preserved in the auxiliary subframe where the main subframe is already clipped), effectively providing one more stop of highlight headroom; and that the dual-pixel raw file for this camera contains 15 bits of raw data if one considers the main and auxiliary subframes together.
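The recovery idea can be sketched in a toy model. The assumptions here are ours, for illustration only, and not Canon's documented encoding: we take the main subframe to be the sum of both photodiode halves and the auxiliary subframe to be one half, hence about one stop darker and clipping about one stop later in scene terms:

```python
import numpy as np

MAIN_CLIP = 16383  # 14-bit clipping point of the main subframe

def recover_highlights(main, aux):
    """Toy dual-pixel highlight recovery.

    Where the main subframe is clipped, substitute twice the auxiliary
    value (assuming the two photodiode halves saw roughly equal light),
    extending the usable range to ~15 bits. Everywhere else, keep main.
    """
    main = np.asarray(main, dtype=np.int64)
    aux = np.asarray(aux, dtype=np.int64)
    clipped = main >= MAIN_CLIP
    return np.where(clipped, 2 * aux, main)

# Made-up pixel values: unclipped, clipped-but-recoverable, clipped in both.
main = np.array([8000, 16383, 16383])
aux  = np.array([4000,  9000, 16383])
print(recover_highlights(main, aux))  # values: 8000, 18000, 32766
```

Note the last pixel: when the auxiliary subframe is clipped too, the combined value simply saturates at the ~15-bit ceiling; nothing can be recovered beyond that.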
"...Really, why do you even want to look at RAW files? The whole point of RAW is to be processed according to your taste into a JPEG. I never look at RAW files; I never need to. They are loaded into LR, processed, and I look at the processed images.”
So here's a question: is it really necessary to look at RAW when selecting RAW files, whether to convert or to present? Isn't a preview enough? You might not know exactly what settings were applied to it, but so what? What's so untrustworthy about embedded and rendered JPEGs and previews? And what's wrong with the preview and histogram on the back of the camera?
All these questions and more will be answered; on top of that, we intend to show how large a gulf there is between real RAW data and previews of it.
Very often, images that are technically fine get tossed out while technically inferior ones are kept. Why? Because people aren't shown the truth about RAW. Here we intend to show why people need to see and analyze actual RAW data before choosing which images to discard and which to keep and edit.
FastRawViewer is the first and, so far, the only dedicated tool designed and developed specifically for the extremely fast display, visual and technical analysis, basic correction, sorting, and setting aside (or direct transfer for further processing) of RAW images.