about white balance

Sorry if this topic has already been discussed.

How does RawDigger determine the white level and the black level of a raw file?
By reading the EXIF of the file, when already-calculated white/black levels are available there?
But what about the case where they are unavailable?
While calculating the black level with the help of the optical black pixels would be one of the best ways, I have no idea how to calculate the white level.
Could you elaborate on this?

Many thanks in advance for your response.

White balance has little to do with white or black levels. To determine white and black levels we study raw samples from each camera. Information in makernotes, even if present, is not always reliable.

Sorry for the title; that was my complete mistake.

So if I understand correctly, RawDigger has a table of white and black levels acquired by studying each camera, and applies the corresponding levels when decoding a raw file, in consultation with its EXIF (because RawDigger has to identify the camera model).

Is that the story?

We do not always use EXIF to identify the camera - sometimes raw files do not even have EXIF data, sometimes the EXIF is manipulated, and sometimes the same camera has more than one EXIF identification (like Sony clones by Hasselblad, Panasonic clones by Leica, early Samsung/Pentax models, or Canon/Fujifilm/Panasonic camera names that depend on the market).

The values for black and white levels are determined based on our studies, and if a study shows that the makernotes or optical black data are reliable, we happily use them. If not, we use our own values (maybe based on a table, maybe through some formulae). Everything we consider to be of community interest is included in our open-source LibRaw library.
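For illustration, a minimal sketch in C of how such a lookup-with-fallback might be organized. The struct, model names, and values below are hypothetical; the real tables and formulae live in the LibRaw sources.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical per-camera calibration entry; the names and
       values are illustrative, not LibRaw's actual data. */
    typedef struct {
        const char *model;
        int black;   /* black level, in raw DN */
        int white;   /* white (saturation) level, in raw DN */
    } cam_levels_t;

    static const cam_levels_t table[] = {
        { "ExampleCam 100", 512, 15860 },  /* made-up values */
        { "ExampleCam 200", 256,  4050 },  /* made-up values */
    };

    /* Look the camera up; fall back to a formula if it is unknown. */
    static void get_levels(const char *model, int bits,
                           int *black, int *white)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (!strcmp(table[i].model, model)) {
                *black = table[i].black;
                *white = table[i].white;
                return;
            }
        /* Fallback formula: no pedestal, full-scale saturation.
           A real converter would use measured data instead. */
        *black = 0;
        *white = (1 << bits) - 1;
    }

    int main(void)
    {
        int black, white;
        get_levels("ExampleCam 100", 14, &black, &white);
        printf("black=%d white=%d\n", black, white);
        return 0;
    }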

Thanks again for your response.
Just a couple of follow-ups.
Among consumer cameras such as Canon, Nikon, Sony, Pentax, and Olympus, do there exist raw files that do not contain black and white levels in their EXIF, apart from some exceptional cases?
Secondly, how do you identify the camera model of a raw file without reference to EXIF? (Though this is a side question.)

To your first question, yes.

To your second question, the answers are in the source code of LibRaw; please have a look at https://github.com/LibRaw/LibRaw - in dcraw.c, below the comment /* Identify which camera created this file, and set global variables accordingly. */ ;)

For example, which models do not have their black and white levels in the EXIF?
As far as I know, most raw files can be decoded by Rawnalyze with both black and white points.
As regards the second question, do you mean that identification of the camera model is done against an ample database?

To your first question, the exiftool tag references contain the answers: for example, no white level information is known for Nikon; have a look at https://sno.phy.queensu.ca/~phil/exiftool/TagNames/Nikon.html

To your second question, we use any means necessary to identify cameras, including signatures (like "XPDS"), as well as raw data size, dimensions, margins, etc.; have a look at static const libraw_custom_camera_t const_table[].
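To give a feel for table-driven identification, here is a much-simplified sketch; the real libraw_custom_camera_t in LibRaw carries more fields (margins, load flags, etc.), and the entries below are made up:

    #include <stdio.h>

    /* Simplified stand-in for a custom-camera table entry;
       the field set and the values are illustrative only. */
    typedef struct {
        unsigned    fsize;   /* exact raw file size, in bytes */
        unsigned    width;   /* raw frame width */
        unsigned    height;  /* raw frame height */
        const char *make;
        const char *model;
    } custom_camera_t;

    static const custom_camera_t custom_table[] = {
        { 10134620, 3720, 2722, "ExampleMake", "ExampleCam A" },
        {  5314821, 2692, 1974, "ExampleMake", "ExampleCam B" },
    };

    /* Identify an EXIF-less raw purely by its byte size. */
    static const custom_camera_t *identify(unsigned file_size)
    {
        for (size_t i = 0;
             i < sizeof custom_table / sizeof custom_table[0]; i++)
            if (custom_table[i].fsize == file_size)
                return &custom_table[i];
        return NULL;  /* unknown camera */
    }

    int main(void)
    {
        const custom_camera_t *cam = identify(10134620);
        if (cam)
            printf("%s %s (%ux%u)\n", cam->make, cam->model,
                   cam->width, cam->height);
        return 0;
    }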

Continuing on the first question.

Can you confirm that there is not even a slim chance that Nikon puts its white level data in a proprietary zone of the EXIF which exiftool is unable to decode?

The only trusted source for the makernotes format and field meanings is the camera vendor. AFAIK, no vendor publishes this data.

In summary, you have to study and calculate black and white levels because camera vendors never disclose the key data for decoding their raw files. Right?

If you allow me another follow-up question, please let me clarify the pipeline of the decoding process.

First you have to get the crude ADUs of each pixel as recorded. For compressed raws, you must linearize the pixel values. Then scale them: subtract the black level from each pixel, set the white level, and map the values to a logical range of 0-1. Finally, convert to CIE XYZ. Am I right?

There is no need to use XYZ specifically; XYZ is yet another RGB space.

"Crude ADUs" are not so crude, they are "raw ADUs", or raw data numbers, raw DNs.

Sorry for my "crude" wording

Sorry for my "crude" wording :)
Apart from CIE XYZ, is my understanding of the pipeline roughly correct?
I referred to XYZ because I assume that the raw values must be mapped to an internal color space, likely XYZ, before being translated to the final sRGB or Adobe RGB space.

Should your understanding be incorrect, I would say so, or keep silent :)

There are no rules for the final space; it can be BetaRGB, ProPhoto RGB, Lab, CMYK, anything else; it doesn't matter. And there are no rules for the internal colour space, either. Anything colorimetric with a sufficiently wide gamut will do. In general terms, the conversion is done through some profile link: camera RGB to the internal working colour space at the first stage, and working colour space to the output colour space (in case the output space differs from the internal one).
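In matrix terms, the two-stage profile link might be sketched like this; the 3x3 matrices below are identity placeholders, standing in for real profile data:

    #include <stdio.h>

    /* Apply a 3x3 matrix to an RGB triple: out = M * in. */
    static void mat3_apply(const double m[3][3],
                           const double in[3], double out[3])
    {
        for (int r = 0; r < 3; r++)
            out[r] = m[r][0]*in[0] + m[r][1]*in[1] + m[r][2]*in[2];
    }

    int main(void)
    {
        /* Placeholders: identity matrices instead of real profiles. */
        const double cam_to_working[3][3] =
            { {1,0,0}, {0,1,0}, {0,0,1} };  /* camera RGB -> working */
        const double working_to_output[3][3] =
            { {1,0,0}, {0,1,0}, {0,0,1} };  /* working -> output */

        double cam_rgb[3] = { 0.42, 0.35, 0.30 };  /* normalized */
        double work[3], out[3];

        mat3_apply(cam_to_working, cam_rgb, work);  /* stage 1 */
        mat3_apply(working_to_output, work, out);   /* stage 2 */

        printf("%f %f %f\n", out[0], out[1], out[2]);
        return 0;
    }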

Your answer is very helpful for my understanding!
By the way, when does white balance intervene in the course of the pipeline?
Presumably before mapping the black-subtracted values to the logical range of 0-1?

Application of white balance can be done at any stage before displaying the image; the only concern here is not to cause permanent clipping of data. Suppose the initial white balance caused red to be multiplied by 2. As a result, some of the red pixels are clipped. Next, one decides to decrease brightness to bring some of that red back, but that causes colour distortion, as red is permanently clipped (that's why I do not shoot JPEGs; anything clipped in a JPEG, including because of white balance, can't be recovered). So a sane raw converter needs to either maintain a copy of the un-balanced normalized raw data, or keep the data balanced but non-normalized, reserving a few extra bits for white balance and brightness manipulations and tracking the normalization. For 14-bit raw data, that second scheme means something like 20..22 bits per pixel internally.
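A small numeric sketch of the clipping problem just described, assuming hypothetical 14-bit data, a red multiplier of 2, and a later brightness reduction to 0.6:

    #include <stdio.h>

    #define RAW_MAX 16383  /* 14-bit container */

    int main(void)
    {
        int red = 12000;  /* a bright, unclipped 14-bit red DN */
        double wb_mul = 2.0, brightness = 0.6;

        /* Scheme A: apply WB inside the 14-bit container. */
        int a = (int)(red * wb_mul);
        if (a > RAW_MAX) a = RAW_MAX;       /* 24000 clips to 16383 */
        int a_dim = (int)(a * brightness);  /* detail is gone for good */

        /* Scheme B: keep extra headroom bits, clip only at output. */
        long b = (long)(red * wb_mul);       /* 24000 fits in 22 bits */
        long b_dim = (long)(b * brightness); /* 14400: detail preserved */

        printf("A (clipped early): %d\n", a_dim);   /* 9829 */
        printf("B (headroom kept): %ld\n", b_dim);  /* 14400 */
        return 0;
    }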

Sorry for my capricious questions.
How did you obtain the tone curve tables that decode/linearize a compressed raw's curve?
This seems to be even more laborious than the black and white levels.
Always by studying? Or by way of EXIF?

Great! Allow me a chance for the final follow-up for the moment :)
Dcraw has such options as -d, -D, and -W. When executing -d -W, it is assumed that the default white level of the raw in question and "a fixed white level" from -W are applied, respectively. Looking at the results, however, the default is always brighter than the fixed one. Why is this? On the other hand, -D -W produces only "a fixed white level", since -D performs no scaling, thus leaving any camera-specific white level untapped. As far as -D -W is concerned, the fixed white level sits far away, around the upper end of 16-bit/65536, regardless of whether the raw is 12-bit or 14-bit. Why is this? With a view to comparing every raw file within 16 bits in an "impartial" way?

dcraw's -W is 'Don't automatically brighten the image'

Without this switch, dcraw performs auto-brightening at the output stage (the linear to gamma-corrected conversion). All previous steps are unchanged.
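To illustrate the idea only (this is not dcraw's actual code): auto-brightening typically picks a near-maximum percentile of the output histogram as the effective white point, letting a small fraction of pixels clip; with -W that search is skipped and the camera's white level is used as-is. A hedged sketch:

    #include <stdio.h>

    #define LEVELS 65536

    /* Illustrative percentile search, not dcraw's implementation:
       find the value below which `keep` of all pixels fall. */
    static int percentile_white(const unsigned hist[LEVELS],
                                unsigned long npixels, double keep)
    {
        unsigned long limit = (unsigned long)(npixels * keep);
        unsigned long total = 0;
        for (int v = 0; v < LEVELS; v++) {
            total += hist[v];
            if (total >= limit)
                return v;  /* map this value to full white */
        }
        return LEVELS - 1;
    }

    int main(void)
    {
        static unsigned hist[LEVELS];
        /* Toy histogram: most pixels mid-range, a few hot ones. */
        hist[8000] = 990;
        hist[30000] = 10;
        printf("auto-bright white point: %d\n",
               percentile_white(hist, 1000, 0.99));  /* 8000 */
        return 0;
    }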

I understand this option as 'do not scale the output result'. The maximum value used in dcraw raw processing is correct (in most cases), and it is not 65535; it changes from camera to camera.

Hi lexa. Thanks for your intervention. I have difficulty clarifying two things about some of dcraw's options. Firstly, -D does not scale pixel values. Scaling consists, I think, of 5 elements: linearization of compressed raw values if a compressed raw is concerned; setting of the black level; setting of the white level; subtraction of the black level from each pixel; and mapping of each pixel value to a logical range of 0-1. Do you think that -D skips all 5 of these operations? Secondly, how does -W determine the fixed white level for each camera? If you execute -D -W with a 12-bit raw file and a 14-bit one, the result is always that the image from the 14-bit file is brighter, by roughly 4 times, than the 12-bit one, except for compressed or cooked raws. Why is this?
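For what it's worth, the roughly 4x difference has a simple arithmetic reading if one assumes (my assumption, not a statement about dcraw) that -D writes raw DNs into a 16-bit output without scaling:

    #include <stdio.h>

    int main(void)
    {
        int max12 = (1 << 12) - 1;  /*  4095 */
        int max14 = (1 << 14) - 1;  /* 16383 */
        /* Written unscaled into a 16-bit container, full scale
           differs by ~4x: about two stops apparent brightness. */
        printf("ratio = %.2f\n", (double)max14 / max12);  /* ~4.00 */
        return 0;
    }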

The dcraw.c source code is available to everyone.

The -W switch sets the no_auto_bright variable to 1. It is very easy to see where this variable is used.

Sorry, but there is nothing left to guess about dcraw's operation if you turn to the source code.

The RawDigger forum is hardly the place to discuss dcraw.

Hi lexa, I picked up dcraw just to discuss "what raw data is as recorded". So please let me clarify it in a generalized way. My understanding is that the ADUs of each pixel are recorded in a raw file. When decoding them, they are scaled to a logical range of 0-1 by the decoder before being mapped to an inner color space, and then to a final output color space. Do you mean that a second scaling usually takes place when mapping to the inner or final color space too? I had assumed there was no second scaling when the scaled logical-range data are converted by way of a matrix to the inner or final color space.

The ADU (Analog-to-Digital Unit) for a pixel is recorded only if no manipulations are applied (black level pedestal, noise reduction, black frame subtraction / long exposure noise reduction, ...). With ADUs it is presumed that a one-to-one mapping exists between pixel charge and the value in the raw data, and that this mapping is fixed between shots. Thinking in terms of ADUs makes the whole thing ambiguous. Moreover, it exposes backtrack analysis (which is what an attempt to figure out sensor properties based on raw data numbers is) to systematic and random errors, precisely because the mapping changes and must be figured out for each test, especially when ISO speed tests, noise tests, or exposure calibration tests are performed. Here is one of the myths ("intermediate ISO settings are less noisy"), stemming from ignoring changes to the mapping, busted: https://www.rawdigger.com/howtouse/the-riddle-of-intermediate-iso-setting

We usually just say that a raw file contains metadata and "raw" data in the form of raw data numbers, raw DNs. And from the abridged list above of transformations applied before the raw data is recorded, you can see that raw is not "pure" raw.

The place to discuss raw conversion workflow is https://www.libraw.org/forum -- please continue there.
