For a long time I’ve wondered what kind of image test could reveal a camera system’s ability to accurately capture color tonality and separation. Grapes, peaches, apples, leaves on a tree, skin and all kinds of things really aren’t just one color but a whole range, often speckled with lots of small bits of color. Some sensors, like the fat-pixel Kodak pro MFDB sensors, do a great job with this when shot at base ISO, but a lot of the newer CMOS sensors for MFDB don’t, and in fact don’t even do so well when shot in multi-shot or micro-step modes. My hunch is this boils down to a design decision to optimize the camera for better high-ISO performance at the cost of color tonality and separation at base. I think the choice of CFA is the biggest change they make, but there could be other design changes. Older fat-pixel backs were made at a time when the market for such expensive sensors was mostly pros who used studio lights and never needed to shoot higher than base.

Anyhow, we have tons of MTF-like tests that basically test luminosity or single-channel performance, but we don’t have many tests that show how well a sensor and camera system can discern changes in color. This article from Jack’s website made me think, and maybe some of what’s there would be helpful. Norman Koren’s Imatest has the spilled coins / dead leaves test, but I’m not sure it’s really the same thing. In the linked article, Jack wonders what the closest two colors are that humans can distinguish, and I’m wondering how to devise a test that shows the closest colors a camera system can distinguish, and with what strength – somewhat like contrast in MTF curves.
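To make the “with what strength” part concrete, here’s a minimal sketch of what such a metric could look like: compare the mean color difference between two nearly identical patches in a capture to the pixel-to-pixel scatter within them, so you get a signal-to-noise style figure rather than a yes/no answer. Everything in it is hypothetical (the patch arrays, the assumption that camera RGB has already been converted to Lab upstream, the use of plain CIE76 delta E); it’s only meant to show the shape of the idea, not a finished test.

```python
import numpy as np

def separation_strength(patch_a, patch_b):
    """Ratio of the mean color difference between two patches to the
    pooled per-pixel scatter within them (a d-prime-like figure of merit).

    patch_a, patch_b: (N, 3) arrays of per-pixel CIELAB values sampled from
    two nearly identical colored patches in the same capture.
    Much greater than 1: the system clearly separates the two colors.
    Around 1 or below: the difference is buried in the noise.
    """
    mean_a = patch_a.mean(axis=0)
    mean_b = patch_b.mean(axis=0)

    # "Signal": distance between the patch means in Lab (plain CIE76 delta E).
    delta_e = np.linalg.norm(mean_a - mean_b)

    # "Noise": pooled RMS scatter of per-pixel Lab values about their patch mean.
    scatter = np.concatenate([
        np.linalg.norm(patch_a - mean_a, axis=1),
        np.linalg.norm(patch_b - mean_b, axis=1),
    ])
    noise = np.sqrt(np.mean(scatter ** 2))

    return delta_e / noise

# Synthetic example: two patches about 2 delta E apart with unit Lab noise.
rng = np.random.default_rng(0)
a = rng.normal([50.0, 10.0, 20.0], 1.0, size=(5000, 3))
b = rng.normal([50.0, 11.5, 21.3], 1.0, size=(5000, 3))
print(f"separation strength: {separation_strength(a, b):.2f}")
```

In spirit this is like reading the contrast off an MTF curve, except the “frequency” axis would be the delta E between the target pairs rather than line pairs per millimeter.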
From the linked article: “While checking some out-of-gamut tones on an xy Chromaticity Diagram I started to wonder how far two tones needed to be in order for an observer to notice a difference. Were the tones in the yello…”
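That question of how far apart two tones need to be before an observer notices maps onto the usual color-difference math: convert each chromaticity to CIELAB and look at the delta E between them, where something in the neighborhood of delta E 1 to 2.3 is commonly quoted as a just-noticeable difference under ideal viewing (MacAdam’s ellipses show the real threshold also depends on direction and on where you are in the diagram). Here’s a rough sketch of that arithmetic; the xy pairs, the equal-luminance assumption, and the D65 white point are all just placeholders.

```python
import numpy as np

D65 = np.array([95.047, 100.0, 108.883])  # reference white XYZ

def xyY_to_XYZ(x, y, Y=50.0):
    # xy chromaticity plus luminance Y -> XYZ tristimulus values.
    return np.array([x * Y / y, Y, (1.0 - x - y) * Y / y])

def XYZ_to_Lab(xyz, white=D65):
    # Standard CIELAB conversion.
    t = xyz / white
    d = 6.0 / 29.0
    f = np.where(t > d ** 3, np.cbrt(t), t / (3 * d ** 2) + 4.0 / 29.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e76(xy1, xy2, Y=50.0):
    # Plain Euclidean (CIE76) delta E between two equal-luminance chromaticities.
    lab1 = XYZ_to_Lab(xyY_to_XYZ(*xy1, Y))
    lab2 = XYZ_to_Lab(xyY_to_XYZ(*xy2, Y))
    return np.linalg.norm(lab1 - lab2)

# Two made-up yellow-green tones a small distance apart on the xy diagram.
print(f"delta E76: {delta_e76((0.42, 0.50), (0.425, 0.50)):.2f}")
```

A camera-system version of the test would then ask how small that delta E can get before the captured patches stop being separable, and with what strength, per the sketch above.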