how is conversion to RGB done?

Hi, and thanks for RawDigger; I was pleased to find it as a way to view all the photosites in a raw file. Two questions:

1) When Display is set to 'RGB render', I gather white balancing and exposure may be adjusted, as controlled by Preferences. When building the RGB values for a pixel, is any information other than the four RGBG values used? I ask because the dcraw documentation talks about interpolation, etc., suggesting that it (and thus maybe software based on it) uses values from neighbouring photosites as well as just the four 'local' values.

The reason I'm interested is because I'm seeing an odd pattern in the pixels along a slanted edge transition (see attached) and wonder if it truly reflects the 'real values' or a processing artifact. I'm guessing/hoping that RawDigger is building pixels strictly from their four photosites, so that pattern is truly in the data from the camera? (Thus illustrating a value of RawDigger over other s/w.)

2) I don't get how the coordinates (displayed at the top-center) work. When not in 2x2 mode (i.e., 2x2 off), at 1000% zoom, 'Raw channel' B (say) shows B photosites surrounded by black, nicely illustrating how one B photosite exists per RGB pixel. I'd expect the coordinates to advance one unit per B photosite, but the coordinates advance as the mouse moves over each photosite position, i.e., the coordinates of neighbouring B photosites differ by two, not one. Similarly, in 'Raw composite' mode, each photosite has a coordinate, and the bottom-right corner is not 2x the image dimensions but 1x. Then in 'RGB render' mode, the size of the displayed squares is the same as it was for a photosite and the coordinates increment as for photosites, but the RGBG2 values change only every two coordinates, suggesting each coordinate is a photosite, not a pixel -- yet the bottom-right corner is still 1x the image dimensions.

So I'm confused. In particular, why does the 'RGB render' display show photosite-sized 'pixels' rather than pixels the size of four photosites, i.e., 2x2 size? Isn't an RGB pixel derived from the four photosites, and wouldn't it thus occupy a 2x2-size square?

I'd expect a distinction between pixel addressing (e.g., W x H) and photosite addressing (e.g., 2W x 2H). Alternatively, I could understand a system like 'pixel 1000, photosite B' or '1000B' (or '1000G2', etc.).

What am I missing?

Thanks for your help

Below is the image (a screen capture from RawDigger displaying 'RGB render' at 1000% zoom, such that RGB pixels are visible as squares). There's an odd alternating pattern in the rows.

All numbers in Image

All numbers (in Image statistics, Selection/Sample, and under the mouse cursor) are RAW values processed according to the Data Processing section in Preferences (by default, black is subtracted using the Auto black level setting).

For a Bayer image, the numbers under the mouse cursor are shown according to the '2x2 pixels' setting: when '2x2' is off, only one component is non-zero; otherwise, all 4 numbers in the 2x2 block are displayed.
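
For intuition, here is a tiny sketch (not RawDigger's code; it assumes an RGGB layout, and real cameras vary) of how a photosite coordinate maps to its Bayer channel. It also shows why only one component is non-zero at each photosite, and why neighbouring same-channel photosites are two coordinates apart:

    #include <cstdio>

    // Illustrative only: map a photosite coordinate (row, col) to its
    // Bayer channel, assuming an RGGB layout (0 = R, 1 = G, 2 = B, 3 = G2).
    // The actual layout is camera-specific.
    static const int kRGGB[2][2] = {
        {0, 1},  // even rows: R G R G ...
        {3, 2},  // odd rows:  G2 B G2 B ...
    };

    int bayerChannel(int row, int col) {
        return kRGGB[row & 1][col & 1];
    }

    int main() {
        // Same-channel photosites repeat every 2 in each direction, so the
        // coordinates of neighbouring B photosites differ by two, not one.
        std::printf("(0,0)->%d (0,1)->%d (1,0)->%d (1,1)->%d\n",
                    bayerChannel(0, 0), bayerChannel(0, 1),
                    bayerChannel(1, 0), bayerChannel(1, 1));
        return 0;
    }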

RGB render is for a 'pleasing display' only. It is only displayed (and exported, if requested); RGB-processed numbers are never shown in the top information panels.

RGB render is done in the usual way: black subtraction, white balance, Bayer data interpolation (yes, adjacent pixels are used), cutting off out-of-range values, gamma correction, and finally display.
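
As a rough illustration of those stages, here is a minimal sketch for a single value. The constants, the plain 1/2.2 gamma curve, and the function name are my assumptions for illustration, not RawDigger's actual implementation:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Sketch of the render stages for one value: black subtraction ->
    // white balance -> clip -> gamma -> 8-bit display value. Demosaicing
    // happens between these steps and draws on neighbouring photosites.
    uint8_t renderOneValue(uint16_t raw,
                           uint16_t black,   // black level
                           float    wbMul,   // per-channel WB multiplier
                           uint16_t white) { // saturation (white) level
        // 1. Subtract the black level.
        float v = static_cast<float>(raw > black ? raw - black : 0);
        // 2. Apply white balance.
        v *= wbMul;
        // 3. Normalize and cut off out-of-range values.
        float lin = std::min(v / static_cast<float>(white - black), 1.0f);
        // 4. Gamma-correct (a plain 1/2.2 power curve as a stand-in).
        float g = std::pow(lin, 1.0f / 2.2f);
        // 5. Scale to an 8-bit value for display.
        return static_cast<uint8_t>(std::lround(g * 255.0f));
    }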

If you want to see 'original'

If you want to see 'original' pixel values, use these settings:

 - 2x2 pixels: off
 - Gamma/autoscale: also off (to see the unscaled, linear image)
 - Display mode: Raw composite.

Thanks, I think I get it

Thanks, I think I get it. I didn't realize that photosites are shared, or 'reused', to build an RGB pixel; an individual photosite ends up contributing to more than one RGB pixel. Thus a W x H image is built from W x H photosites.

I gather from the 'interpolation options' on the dcraw man page that there are different methods of 'reusing' the adjacent photosites.

I wonder how RawDigger does it, for 'RGB render'.

Thanks again for cluing me in.

Each 'original' pixel

Each 'original' pixel contains only a single color channel. So, to build a full-color image with the same resolution, one needs to interpolate adjacent pixels to get the missing values.
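
To make the 'reuse' concrete, here is a toy bilinear interpolation step (the simplest method, chosen only for illustration; LibRaw's real interpolators are more sophisticated). Note how each green photosite contributes to up to four neighbouring output pixels:

    #include <cstdint>
    #include <vector>

    // Toy bilinear demosaic step: estimate the missing green value at a red
    // or blue photosite by averaging its four orthogonal neighbours, which
    // are all green in a Bayer layout. Each green photosite is therefore
    // reused by several output pixels, which is how a W x H full-color
    // image is built from W x H photosites.
    float greenAt(const std::vector<uint16_t>& raw,
                  int width, int height, int row, int col) {
        const int dr[4] = {-1, 1, 0, 0};
        const int dc[4] = { 0, 0, -1, 1};
        float sum = 0.0f;
        int n = 0;
        for (int i = 0; i < 4; ++i) {
            int r = row + dr[i], c = col + dc[i];
            if (r >= 0 && r < height && c >= 0 && c < width) {
                sum += raw[r * width + c];
                ++n;
            }
        }
        return n ? sum / n : 0.0f;
    }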

RawDigger does not use dcraw.c

RawDigger does not use dcraw.c 'as is'.

We use our own open-source LibRaw library. The library started from the dcraw.c source (about 8 years ago) but has evolved far from it.
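
For anyone who wants to inspect the same undemosaiced values programmatically, here is a minimal sketch using LibRaw's public API (error handling and per-camera edge cases trimmed; this is not RawDigger's internal code):

    #include <libraw/libraw.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        if (argc < 2) return 1;
        LibRaw proc;
        if (proc.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;
        if (proc.unpack() != LIBRAW_SUCCESS) return 1;
        // Undemosaiced Bayer data: one 16-bit value per photosite.
        unsigned short* raw = proc.imgdata.rawdata.raw_image;
        int rawWidth = proc.imgdata.sizes.raw_width;
        // COLOR(row, col) reports which CFA channel a photosite carries;
        // cdesc names the channels (e.g. "RGBG").
        int row = 100, col = 100;
        std::printf("photosite (%d,%d): value=%u, channel=%d of '%s'\n",
                    row, col, (unsigned)raw[row * rawWidth + col],
                    proc.COLOR(row, col), proc.imgdata.idata.cdesc);
        return 0;
    }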

RawDigger is not intended for studying different demosaicing methods. RGB render is provided only to give a familiar rendering (rather than the greenish 'Raw composite'). RawDigger's purpose is to provide access to the raw data.
