Digging into Nikon RAW Size S NEFs

For those who don’t want to read all the details of the format analysis, here are our conclusions:

Conclusions

With the D4s camera, Nikon introduced a new file format they refer to as RAW Size S NEF.

  • It contains not RGB but YCbCr data, much like a JPEG;
  • The data is 11-bit;
  • A tone curve is applied to the data;
  • The in-camera white balance is applied to the data;
  • The pixel count is 4 times lower than in regular NEF files;
  • The color information is shared between two adjacent pixels (in other words, chroma is recorded for every other pixel);
  • The file size is only slightly smaller than that of a full-resolution 12-bit lossy compressed NEF;
  • Compared to regular NEFs, the data needs additional processing (linearization) when converting to TIFF/JPEG, which may cause additional problems during the conversion (see Study 3 below) as well as some additional computational errors;
  • There is some loss of color accuracy in shadows, which negatively affects the usable dynamic range if color accuracy is important (see Study 2).

If you want to see how different settings for NEF compression, bits per sample, and image area affect the file size, the list is in the tab-separated D4s_Modes.txt. The NEF files listed are at http://updates.rawdigger.com/data/D4s/.

If you are interested in the detective story, here it goes. We will be using sNEF as shorthand for Nikon’s “RAW Size S” term.

Deciphering and unpacking sNEF format

Looking into EXIF of an sNEF file shot in FX mode we see the following:

  • Image Size is 2464 x 1640 pixels,
  • Image Data Length is 12122880 bytes (as given in Strip Byte Counts field).

> exiftool -imagewidth -imageheight -stripbytecounts D4s_NEF_S_FX.NEF

Image Width       : 2464
Image Height      : 1640
Strip Byte Counts : 12122880

Dividing the Image Data Length (given in bytes) by the image pixel dimensions, we can see that for each pixel there are 3 bytes, or 24 bits: 12122880/2464/1640 = 3. From the camera specifications, the file is uncompressed and 12-bit. For such a file to contain RGB data, one pixel would have to be represented by 12x3 = 36 bits instead of 24. That means we are dealing with something other than an RGB file.
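As a quick sanity check, this arithmetic can be reproduced in a few lines of Python (an illustration only; the variable names are ours, and the values are taken from the exiftool output above):

```python
# values from the exiftool output above
width, height = 2464, 1640      # Image Width / Image Height
strip_bytes = 12122880          # Strip Byte Counts

bytes_per_pixel = strip_bytes / (width * height)
print(bytes_per_pixel)          # 3.0, i.e. 24 bits per pixel

# an uncompressed 12-bit RGB pixel would need 12 * 3 = 36 bits,
# so the data cannot be plain RGB
```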

The next hypothesis we are going to check is that the file contains YYCbCr data, i.e. the luminance component (luma, or Y) is recorded for each pixel, while the color-difference components Cb and Cr (called chroma) are recorded for every other pixel – in other words, each set of color-difference components is shared between 2 pixels. If that is the case, for each two pixels we should have 12x2 bits of luma and 12x2 bits of chroma, for a total of 48 bits or 6 bytes, which averages to 3 bytes per pixel – the number we are after.

To check if our hypothesis holds water, let’s take a totally black frame (ISO 100, Auto WB mode 1, Picture Control set to Neutral modified with the sharpness parameter set to 0, camera set to the Adobe RGB space, the viewfinder and the lens closed, lens set to the smallest aperture of f/22, shutter speed set to the shortest 1/8000 s).

From the EXIF we can read the offset of the RAW image:

> exiftool -stripoffsets Black_AutoWB1.NEF

Strip Offsets : 813568

Bringing the file into a hex editor and entering the offset (813568 = 0xC6A00) to position the cursor at the start of the RAW data, we start trying to make sense of the data. Let’s examine a small chunk, the first 24 bytes, which according to our hypothesis should contain the first 8 pixels of the image (vertical dividers split those 24 bytes into groups of 6 bytes; each group presumably contains the data for two sequential pixels):

01 00 00 00 08 80 | 00 00 00 00 08 80 | 01 20 00 00 08 80 | 00 00 00 00 08 80

The YYCbCr hypothesis suggests that in each 2-pixel group represented by 6 bytes we should have two 12-bit luma values that are very close to each other (because luma hardly varies in a black frame), followed by two 12-bit values for the Cb and Cr chroma channels containing values very close to neutral. We can see this condition met with the following permutation of nibbles (4-bit groups):

n0n1 n2n3 n4n5 n6n7 n8n9 n10n11 => n3n0n1 n4n5n2 n9n6n7 n10n11n8,

which is sort of logical as the file is recorded using little endian. What we have after applying the above alchemy recipe is:

001 000 800 800 | 000 000 800 800 | 001 002 800 800 | 000 000 800 800
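In code, this nibble permutation can be sketched as follows (a Python illustration; the function name unpack_group is our own, and the example bytes are the first 6-byte group from the hex dump above):

```python
def unpack_group(b):
    """Unpack one 6-byte group into two 12-bit luma values and 12-bit Cb, Cr."""
    # split the six bytes into nibbles n0..n11, high nibble first,
    # matching the order in which the hex dump is printed
    n = []
    for byte in b:
        n += [byte >> 4, byte & 0xF]
    # apply the permutation: n3n0n1  n4n5n2  n9n6n7  n10n11n8
    y1 = (n[3] << 8) | (n[0] << 4) | n[1]
    y2 = (n[4] << 8) | (n[5] << 4) | n[2]
    cb = (n[9] << 8) | (n[6] << 4) | n[7]
    cr = (n[10] << 8) | (n[11] << 4) | n[8]
    return y1, y2, cb, cr

# first 6-byte group from the hex dump: 01 00 00 00 08 80
print(unpack_group(bytes([0x01, 0x00, 0x00, 0x00, 0x08, 0x80])))
# (1, 0, 2048, 2048) -> luma 0x001, 0x000; chroma 0x800, 0x800
```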

After unpacking the whole image data this way, we can see that the luma is represented mostly by 0s and 1s (as expected for a black frame), while the chroma channels are mostly 0x800 = 2048 (meaning 2048 should represent the neutral value for the chroma channels). The results are presented in the histogram below:

Figure 1

Typically the chroma channels are encoded in such a way that the neutral value is represented by the middle-of-the-road code. That is, for 12-bit coding, which spans from 0 to 4095, the value 2048 is the middle, and we can be fairly certain that it represents the neutral value. Values larger or smaller than 2048 represent non-neutral colors, and the color becomes progressively more saturated as the delta between the chroma value for the pixel and 2048 increases.

The result we have just obtained seems to support our hypothesis. The next step is to unpack some regular shot by separating each 2-pixel group of Y1Y2CbCr into two fully defined pixels, Y1CbCr and Y2CbCr, and converting YCbCr to RGB using the following trivial set of equations:

R = Y + (Cr-2048)

G = Y – (Cb-2048) – (Cr-2048)

B = Y + (Cb-2048)
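A minimal sketch of this first-pass conversion (the function name is ours; no scaling or clipping is attempted at this stage):

```python
def ycbcr_to_rgb(y, cb, cr):
    # trivial first-pass conversion: unit coefficients,
    # neutral chroma assumed at 2048
    r = y + (cr - 2048)
    g = y - (cb - 2048) - (cr - 2048)
    b = y + (cb - 2048)
    return r, g, b

# a neutral pixel (chroma at 2048) keeps R = G = B = Y
print(ycbcr_to_rgb(1000, 2048, 2048))  # (1000, 1000, 1000)
```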

The resulting image is:

Figure 2

Now things are really starting to take shape: the colors are in the correct order, and neutral patches do look neutral (by the way, this means that the YCbCr data is indeed recorded after applying the white balance to the source RGBG data coming from the sensor). However, the image is dark despite the presence of fully blown-out highlights, such as the specular spots on the chrome ball. We can also note that the fully blown-out areas are intensely purple. The contrast of the image is close to what we would expect from high-gamma coding, which means that not only the white balance but also a tone curve is embedded into the YCbCr data.

Now we know enough to implement initial support for the sNEF format in RawDigger. What we can add at this stage is unpacking, as well as display and statistical analysis of the luma and chroma channels for this format.

Normalizing for the brightness and saturation

The camera specifications suggest that sNEF files are encoded in 12 bits. Normally this means that the maximum in the luma channel is 2^12-1 = 4095. The insufficient brightness of an image that nevertheless contains fully blown-out areas suggests otherwise: the full 12 bits are not used for the luma channel. Hence we need to find what the actual maximum is.

To find out, we shot several fully blown-out frames and studied them in RawDigger. Indeed, the maximum value in the luma channel does not reach 4095 and also depends on the white balance setting in the camera. The lowest maximum value of 2392 was obtained with the camera white balance set to 2500K, while the highest, 2549, corresponds to a white balance setting of 10000K. Thus we can say that the luma data is coded with log2(2549) ≈ 11.32 bits; in other words, the coding uses slightly more than 11 bits instead of the full 12.

Normalizing the luma channel Y to 2549, we get brightness much closer to reality:

Figure 3

Now that we have sorted out the brightness, let’s try to figure out the saturation. For now it is low. In most cases the saturation is low when the range used in the chroma channels is smaller than the coding allows. Once again, we need to normalize the chroma values by determining their maximums. We find some reassurance in the idea that the chroma range is narrower than 0..4095, because we have already established that this is the case for luma.

To find the maximums we shot a color target (X-Rite ColorChecker Digital SG), which spans approximately the Adobe RGB gamut, through red and blue color-separation Lee filters, varying the exposure to reach the maximums in the chroma channels.

The resulting images suggest that the chroma channel range is very slightly wider than 2048±1024, which, by the way, also closely corresponds to 11 bits.

Figure 4. Shot through 25 Tricolour Red, Cr channel

Figure 5. Shot through 47B Tricolour Blue, Cb channel

Now, normalizing the chroma channels to 1024, here is what we have:

Figure 6

As we can see, the saturation is improved. If you want to exclude even the slightest possibility of color clipping, your image converter may use the range of 2048±1280 for the chroma channels (please see scYCC-nl). That will cause a slight drop in saturation, but it can easily be compensated for; one of the accurate ways of doing so is by using a color profile.
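The normalization described above can be sketched as follows (an illustration under our measured constants: 2549 is the observed luma maximum and 1024 the measured chroma half-range; substitute 1280 for the latter if following the scYCC-nl-style safety margin):

```python
Y_MAX = 2549              # highest luma maximum we observed (WB set to 10000K)
CHROMA_HALF_RANGE = 1024  # measured chroma half-range around the 2048 neutral point

def normalize(y, cb, cr):
    # scale luma to 0..1 and chroma to roughly -1..1 around neutral
    yn = min(y / Y_MAX, 1.0)
    cbn = (cb - 2048) / CHROMA_HALF_RANGE
    crn = (cr - 2048) / CHROMA_HALF_RANGE
    return yn, cbn, crn

print(normalize(2549, 2048, 2048))  # (1.0, 0.0, 0.0): full brightness, neutral color
```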

Normalizing for Hue

As we have just seen, the YCbCr data already has the white balance and a tone curve applied to it. This suggests that the YCbCr data is a sort of intermediate in the sequence that normally results in a JPEG image. If that is indeed the case, we can try substituting for our trivial formulae the ones normally used when unpacking JPEGs to RGB (the sYCC color space, taking into account that in our case the middle value is 2048 instead of 128):

R = Y + 1.40200*(Cr - 2048)

G = Y - 0.34414*(Cb - 2048) - 0.71414*(Cr - 2048)

B = Y + 1.77200*(Cb - 2048)

It is important to note that YCbCr is actually a representation of a certain RGB color space. If you need to define the conversion to a particular color space, such as Adobe RGB, you can derive the coefficients for the formulae from the chromaticities of that color space, or you can use regression to calculate the coefficients.
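As an illustration, the sYCC-style formulae with the 2048 midpoint could be coded like this (the function name is ours; for a specific target space such as Adobe RGB the coefficients would have to be re-derived as described above):

```python
def sycc_to_rgb(y, cb, cr, mid=2048):
    # sYCC-style coefficients, re-centered from 128 (8-bit JPEG) to 2048 (12-bit)
    r = y + 1.40200 * (cr - mid)
    g = y - 0.34414 * (cb - mid) - 0.71414 * (cr - mid)
    b = y + 1.77200 * (cb - mid)
    return r, g, b

# neutral chroma still maps to R = G = B = Y
print(sycc_to_rgb(1000, 2048, 2048))  # (1000.0, 1000.0, 1000.0)
```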

Let’s see how the image looks now:

Figure 7

We got the color close, but we still need to address the magenta highlights.

In order to understand what causes the unwanted magenta-pinkish tint in the highlights, we need to study the YCbCr data close to the clipping point. To do so, let’s take two shots with the same exposure settings, one as a 14-bit uncompressed NEF and the other in sNEF mode. Our target will be a neutral gray step wedge, the Sekonic Exposure Profile Target. We have chosen this target because it is very neutral, and the 1/6 EV step between patches is the smallest among readily available targets. The exposure setting is such that the lightest gray patch A (the leftmost in the row, marked with the red selection rectangle; the specification sheet for the target refers to it as #2) is very close to clipping in the 14-bit uncompressed NEF shot.

Figure 8

Filling the table with the data from both shots, we get the following (the patches are named left to right; the first column is the patch name, the next three columns – Y, Cb, Cr – are averages extracted from the sNEF shot, and the last four columns – R, G, B, G2 – are averages from the 14-bit uncompressed NEF):

SN   Y        Cb       Cr       R        G         B        G2
A    2167.3   2053.03  2070.01  9456.02  15567.84  9980.3   15597.55
B    2084.57  2043.87  2052.01  8482.34  14623.81  9019.13  14567.09
C    1967.3   2042.36  2053.8   7508.67  12899.58  7933.75  12845.02
D    1858.41  2042.67  2053.73  6647.39  11408.01  7019.2   11366.79
E    1753.19  2044.02  2052.05  5857.83  10074     6214.68  10050.75
F    1689.2   2043.08  2052.32  5419.68  9314.95   5730.37  9295.07
G    1580.99  2042.23  2052.95  4718     8095.99   4969.32  8077.63

As we can see, for the 14-bit uncompressed NEF none of the R, G, B, G2 channels reaches the maximum of 15615 (2^14-1-768 = 15615, where 768 is the black level). However, the chroma channels Cb and Cr on patch A have values that differ significantly from those on the other patches: for Cb we have 2053 instead of 2042..2044, and for Cr we have 2070 instead of 2052..2054.

To check that patch A truly follows the neutrality pattern of the other patches, let’s take a shot of the same target, reducing the exposure by 1 EV. Here is what we got:

SN   Y        Cb       Cr       R        G        B        G2
A    1577.31  2042.2   2052     4643.4   7980.83  4885.13  7953.5
B    1499.64  2044.4   2050.65  4166.45  7183.03  4418.2   7160.95
C    1411.97  2043.48  2052.6   3694.65  6328.99  3889.19  6307.26
D    1330.74  2044.1   2052.81  3273.2   5595.41  3444.71  5577.88
E    1252.22  2045.19  2051.74  2886.6   4942.2   3053.22  4933.94
F    1204.61  2044.66  2052     2671.84  4568.37  2817.11  4563.2
G    1124.21  2044     2052.76  2329.95  3972.55  2444.34  3967.36

Now the Cb and Cr values for patch A are very close to the Cb and Cr values extracted from the rest of the patches. This means that patch A does not differ in neutrality from the other patches, and the skewed values on patch A from the first shot indicate a loss of color linearity. We additionally checked the target with a spectrophotometer, and its readings confirm that patch A is no different from the other patches in terms of color.

The results we have just obtained mean that the loss of color linearity is caused by the high brightness level, even before the clipping point: as soon as the luma component exceeds approximately 2100, the color information becomes fairly meaningless.

On a side note, if you look at the histograms of the 14-bit uncompressed NEF, you will see that the histograms of all the channels on patch A have normal bell-shaped curves and no clipping. The only indicator that things start to go wrong is that the G/R and G/B ratios for patch A differ from the corresponding ratios for the other patches. This is a common situation. For this particular camera at base ISO, the practical exposure limit before the white balance starts to be skewed in the highlights is 1/3 EV below the clipping point.

Looking at the tables above we can deduce that the sNEF format is designed in such a way that the luma channel should be clipped to 2047 at the conversion stage. However, RawDigger is not a converter. Not only are the ultimate accuracy of tone and color not goals here, but it would also be counter-productive to limit our users in researching unmodified data. That is why we decided to raise the limit for the luma channel, setting the clipping point at 2549 (which helps preserve luminosity details in the extreme highlights); but to avoid magenta highlights we still neutralize the chroma channels by forcing them to 2048 as soon as the luma channel exceeds 2047:

Figure 9

We can see that unwanted colorization of highlights is now gone.
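The clipping rule we settled on can be sketched as follows (an illustration; the function and parameter names are ours):

```python
def neutralize_highlights(y, cb, cr, luma_clip=2549, neutral=2048, threshold=2047):
    # keep luminosity detail up to 2549 instead of clipping at the design
    # point of 2047, but force chroma to neutral once luma exceeds 2047,
    # because color information there is no longer meaningful
    y = min(y, luma_clip)
    if y > threshold:
        cb, cr = neutral, neutral
    return y, cb, cr

# a near-clipping pixel keeps its luma but loses its (meaningless) color cast
print(neutralize_highlights(2400, 2070, 2053))  # (2400, 2048, 2048)
```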

Data linearization

To change the white balance and apply exposure correction in a converter without seriously modifying its innards, it is often convenient to have linear RGB data. We have just demonstrated that starting from sNEF YCbCr data we naturally end up with non-linear RGB. In order to facilitate processing in converters we need to linearize the data in LibRaw, which is the foundation of RawDigger.

Generally speaking, the YCC format does not imply that the Y component is necessarily non-linear. It can be linear, in which case a one-stop change in exposure causes a 2x change in luma. When luma is coded linearly, the resulting RGB data is also linear. If the luma component is coded non-linearly, with a tone curve applied, the resulting RGB data will also be non-linear.

To make sure, we are going to shoot a Kodak Q13 gray step wedge. The Kodak datasheet specifies that the step between the patches on the gray scale is about 1/3 EV. If the data is coded linearly, the histogram will show approximately 3 peaks per 1 EV.

Opening the shot in RawDigger, we select the step wedge (the red rectangle marks the selected area) and additionally put a sample (green square) onto the brightest patch A to align the histogram to the maximum brightness of the scale:

Figure 10

Let’s study the histogram of the luma channel Y (displayed in white):

Figure 11

We can see that each stop contains 6-7 peaks instead of 3-4. It means that the luma channel is non-linear, and the tone curve in use compresses the data approximately 2 times.

To restore the linearity in RGB we can use gamma ≈2, which will clip the luma channel close to its design range.
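Applied to the data, such an approximate linearization might look like this (a sketch; gamma 2 is only an approximation of the actual curve, and the function name is ours):

```python
def linearize(value, maximum=2549, gamma=2.0):
    # approximate linearization: normalize to 0..1, undo the roughly
    # gamma-2 tone curve, and rescale back to the coding range
    x = min(value / maximum, 1.0)
    return (x ** gamma) * maximum

# endpoints are preserved; midtones are pushed down, as expected
# when undoing a brightening tone curve
print(linearize(2549), linearize(0))
```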

If we want to avoid luma clipping we can construct our own linearization curve by shooting the gray step wedge in both 14-bit uncompressed NEF and sNEF modes. We need to make several shots in order to have enough data to accommodate 11 or even 14 stops (that is, if you believe that the data below 11 stops from the raw data maximum is reliable). The exposure for the first shot, taken in 14-bit uncompressed mode, needs to be such that the average values for the green channels of the lightest patch are about 1/6 EV below saturation. If you are using a Q13, the next shots may be at -3 EV and -6 EV; to accommodate 14 bits, one more shot at -9 EV is needed. Since the curve is composite, that is, there is one curve for all channels, it can be constructed either as Y -> G_linear or as G(YCC) -> G_linear, and then applied to all RGB channels derived from sNEF data.
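Such a composite curve could be applied as a piecewise-linear lookup, sketched below (the measurement pairs here are made-up placeholders; real values would come from averaging the same patches in the paired sNEF and 14-bit NEF shots):

```python
# HYPOTHETICAL measurement pairs: sNEF luma Y vs. linear green G from the
# 14-bit uncompressed NEF, averaged over the same patches (placeholder values)
pairs = [(0, 0), (500, 800), (1000, 3200), (1500, 7200), (2047, 13400)]

def y_to_linear(y):
    # piecewise-linear interpolation between the measured points
    for (y0, g0), (y1, g1) in zip(pairs, pairs[1:]):
        if y0 <= y <= y1:
            return g0 + (g1 - g0) * (y - y0) / (y1 - y0)
    return pairs[-1][1]  # clamp above the last measured point
```

Once such a table is built, the same curve is applied to all three RGB channels derived from the sNEF data, since there is a single composite curve for all channels.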

Finally, after performing the linearization, we are ready to support RGB render of sNEF files in RawDigger:

Figure 12

Now we can study some features of the sNEF format.

Study #1: does Picture Control affect sNEF data?

This question is triggered by the discovery that sNEF data depends on the in-camera white balance setting. It is possible that other camera settings also change the image data in sNEFs. We will look into two settings – Picture Control and the choice of the in-camera color space, which can be either Adobe RGB or sRGB.

Having fixed the exposure values and using stabilized lights, we shot the X-Rite ColorChecker Digital SG, applying wildly different Picture Control settings in the camera and varying the setting for the in-camera color space. Despite the fact that the embedded JPEGs look very different, the difference between the sNEF shots does not exceed 0.175 deltaE94, which is negligible.

The conclusion is that neither Picture Control nor the choice of color space affects sNEF data.

Study #2: comparing color in shadows between 14-bit uncompressed NEF and sNEF modes

Given the 11-bit data representation in sNEF mode, it is interesting to see how well it holds color in shadows.

To perform the comparison we decided to shoot similarly exposed pairs in sNEF and 14-bit uncompressed NEF modes. The target was the X-Rite ColorChecker Digital SG. This target spans about 6.5 EV. The first pair of shots was in ETTR mode (2/3 EV higher than the in-camera exposure meter suggested). The second pair was one stop below the first, the third pair 2 stops below the first, and the fourth pair 2 2/3 stops below the first. All shots were processed through the “recommended developer”, Nikon Capture NX2. The data was extracted and compared using the CIE94 formula in BabelColor PatchTool.

The table below summarizes the difference within the pairs:

 

                        ETTR    ETTR -1EV   ETTR -2EV   ETTR -2 2/3EV

deltaE94
  10th percentile:      0.05    0.07        0.09        0.11
  Median (50th perc.):  0.15    0.19        0.29        0.49
  90th percentile:      0.58    0.77        1.14        2.19
  95th percentile:      0.71    0.85        1.30        2.44
  Of all samples:       0.90    1.09        1.63        2.88

delta L*
  10th percentile:      0.03    0.02        0.00        0.04
  Median (50th perc.):  0.06    0.06        0.09        0.20
  90th percentile:      0.41    0.30        0.57        0.61
  95th percentile:      0.47    0.36        0.61        0.73
  Of all samples:       0.53    0.63        0.92        0.94

delta a*
  10th percentile:      0.02    0.07        0.08        0.08
  Median (50th perc.):  0.20    0.25        0.34        0.50
  90th percentile:      0.34    0.62        1.04        2.04
  95th percentile:      0.50    0.73        1.11        2.27
  Of all samples:       0.60    1.22        1.49        2.73

delta b*
  10th percentile:      0.01    0.01        0.01        0.02
  Median (50th perc.):  0.06    0.13        0.20        0.32
  90th percentile:      0.31    0.53        0.85        1.23
  95th percentile:      0.46    0.69        0.95        1.50
  Of all samples:       0.81    1.24        1.60        2.16

The table shows the tendency of the difference between 14-bit uncompressed NEF and sNEF files to increase as exposure decreases. Already at ETTR -2 EV (which corresponds to an 8.5 EV scene) the difference becomes significant. It is worth noting that the difference lies not so much in the luminosity L* but mostly in the color channels a* and b*. For the ETTR -2 2/3 EV exposure (that is, for a scene with a little less than 9 stops of dynamic range) the difference exceeds 2 deltaE, while 1 deltaE is considered a “just noticeable difference” (JND).

Study #3: changing white balance on a sNEF may cause a problem

The more a data format deviates from common practice, the more problems it can cause during image processing. We are using Adobe software here just for illustration purposes; other converters may have similar problems as well.

Let’s take 3 shots of the same scene in sNEF mode. The light is the same between the shots, and so is the exposure. The first shot is taken with a custom white balance. For the second shot we set the camera white balance to 2500K, while for the third this setting is 10000K.

First we convert the sNEF files to DNG format using DNG Converter 8.4 Release Candidate, and next we open them in ACR CS5. We set the initial exposure compensation to -0.5 EV to get the correct reading of the green channel on the first patch of the white-balanced shot. To monitor what is going on, we add color samplers to patches 1, M, and B:

Figure 13. Shot with a custom white balance

Figure 14. Shot with white balance set to 2500К

Figure 15. Shot with white balance set to 10000К

Applying the white balance values from the first shot to the two other shots, we can see a very significant overexposure causing the patches up to #3 to blow out. To compensate, we need to apply about -1 EV of additional negative exposure compensation…

Figure 16

 

Figure 17

… and only after that the readings from the samples become close.

The effect is caused by normalization problems, and it is not guaranteed that the fine details in the highlights are fully restored after the additional negative exposure compensation is applied on top of the overflow.

Once again, we used Adobe software in this example just for illustration purposes; any raw converter, including Nikon’s own, can have problems while working with sNEF files. If you are using the sNEF format, it is probably a good idea to test how the raw converter of your choice copes with the task and what additional workflow steps may be necessary to obtain high-quality results.




16 Comments

Thanks for a very informative analysis.
I only shoot 14bit RAW, AdobeRGB for the highest DR & quality possible.

Dear Eric,

Technically, the colour space choice does not affect raw data; but helps a little if you are evaluating histograms on the back screen of the camera.

If you are above ISO 800, 12 bits is OK as the lowest two of 14 are just noise.

--
Best regards,
Iliah Borg

12 vs 14 bits

Hi Iliah, could you tell us more about the number of bits when using NEF files? I did try to see a difference with my D800 and I can't find any....

Dear Thierry,

Can you please describe how you checked? 

--
Best regards,
Iliah Borg

12 vs 14 bits

I did a series of pictures (a simple landscape), under- or over-exposed (from -3 EV to +3 EV), in raw lossless compressed 12 or 14 bits, then I just had a look at the most under- and over-exposed parts of the pictures, correcting the exposure using Capture NX2. However, I did all the pictures at ISO 100. Only the speed was changed.

Dear Thierry,

Nikon Capture NX2 imposes limitations on what can be done, specifically when it comes to exposure compensation and noise in shadows. That comes partially from not-so-accurate arithmetic. Nikon is notorious for using imprecise fixed-point calculations, even in the cameras (white balance pre-conditioning). Funny enough, they are a bit more careful with calculations when the underexposure is caused by Active D-Lighting.

--
Best regards,
Iliah Borg

Is this true for all cameras, or are you talking specifically about the D4s? Wouldn't the quality of ISO 800 be different for different cameras? (this website is amazing - thanks!)

This is true for all cameras, the reason being that ISO 800 is at least 2 stops of amplification over base ISO, thus the last 2 bits of the data are mostly noise.

Thank you for your kind words.

--
Best regards,
Iliah Borg

Nikon sRAW

Thanks for an illuminating analysis. sRAW sounds like a great, higher-quality alternative to JPEGs when the image quality requirements are not as high, while still allowing leeway in post-production work.

> sRAW sounds like a great, higher-quality alternative to jPEGs

Apart from shooting targets, I used sRAW + JPEG option for street shooting, and I do not see any benefits in sRAW; and given the size of full resolution high quality JPEG file - even less.

--
Best regards,
Iliah Borg

Digging into Nikon RAW Size S NEFs

Iliah - Thanks for the first-hand data. Do you find they hold up equally well in post production when changing overall exposure by a stop or so?

Well, not if we speak about D4s - especially if one needs to bump shadows. But it is not a given we will have the exact same sRAW in "D800s".

--
Best regards,
Iliah Borg

Yes, but Nikon perhaps is just getting us used to this alternate sRAW format; it seems to me a pretty nifty solution, particularly if sensor size increases dramatically. The sport shooters may really like this.

> The sport shooters may really like this

placebo Domino in regione vivorum

--
Best regards,
Iliah Borg

Does Nikon sNEF still use non-linear compression?

Hi Iliah,

Thanks for the very informative article, and extensive work you do. Quick question I wasn't able to parse out of the article:

Nikon 12-bit compression uses a non-linear compression curve to take advantage of the fact that higher bits typically just over-quantize shot noise. Therefore, by boosting lower tones more than higher tones (using a gamma-esque curve), they can typically still retain all shadow information despite a 12-bit file that represents a pixel capable of 14-bits of data (or 14 stops of dynamic range). Sony does this as well, right?

Now, I know you're saying that they actually seem to max out at less than 12-bit (i.e. a value of 2549 for the high kelvin rating), but are they still using that non-linear compression curve? Else your dynamic range would drop drastically (to ~11 EV), correct? I'm assuming that doesn't happen...

So my interpretation would be: they're still using the non-linear compression curve they use for 12-bit RAW compression, just with a lower max value? That might ultimately allow them to preserve *almost* as much DR as a 12-bit RAW, but not quite as much if their nonlinear compression curve is optimized to take 14-bit data and preserve it as 12-bit. If they use that same curve but scale to a max of 2549, there'd be some quantization error down in the deep tones, which might explain your result of chroma error in the shadows (that's where quantization error would manifest itself first, I presume, since they're lower signals to begin with).

Many thanks in advance for any clarification, and I've been meaning to follow up with you over at DPR as well :)

Cheers
Rishi

Dear Rishi,

> non-linear compression curve to take advantage of the fact that higher bits typically just over-quantize shot noise

The question of whether that is an acceptable thing is still open. Thing is, the full well depth is not used in most cases, especially when the ISO is set higher than base. So higher bits containing nothing but noise is more of a marketing explanation. At an ISO setting of 200 the midtone is about the same as the highlights at an ISO setting of 1600.

> are they still using that non-linear compression curve?

The Nikon sRAW scheme is based on a gamma-esque tone curve, with white balance pre-applied in the linear domain. It is in fact very similar to an 11-bit JPEG.

 

--
Best regards,
Iliah Borg
