> The article, and Oklab, is not by a color scientist. He is/was a video game developer taking some time between jobs to do something on a lark.

As a non-color scientist sometimes dealing with color, it would probably be nice if the color scientists came out sometimes and wrote articles that are as readable as what Ottosson produces. You can say CIECAM16 is the solution as much as you want, but just looking at the CIECAM02 page on Wikipedia makes my brain hurt (how do I use any of this for anything? The correlate for chroma is t^0.9 sqrt(J/100) (1.64 - 0.29^n)^0.73, where J comes from some Cthulhu formula?). It's hard enough to try to explain gamma to people writing image scaling code; there's no way ordinary developers can understand all of this until it becomes more easily available somehow. :-) Oklab, OTOH, I can actually relate to and understand, so guess which one I'd pick.
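For the curious, written as code that quoted correlate is just the one line below; the catch is that t, J and n each come from their own chain of formulas earlier in the model, which is where it gets hairy:

    # CIECAM02 chroma correlate, assuming t, J and n have already been
    # produced by the earlier steps of the model
    def ciecam02_chroma(t, J, n):
        return t ** 0.9 * (J / 100) ** 0.5 * (1.64 - 0.29 ** n) ** 0.73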

Mark Fairchild, one of the authors of CIECAM02, recently published a paper that heavily simplified that equation: https://markfairchild.org/PDFs/PAP45.pdf

If the link doesn't work, the paper is called: Brightness, lightness, colorfulness, and chroma in CIECAM02 and CAM16.

Also, if you want a readable introduction to color science, you can check out his book Color Appearance Models.

Thanks for the link! To anyone looking for a summary: Fairchild's paper explains the origin and nature of various arbitrary, empirically/theoretically unjustified, and computationally expensive complications of CAM16 (inherited from Hunt's models of the 80s–90s via CIECAM97s and CIECAM02), which apparently originated as duct-taped workarounds that are no longer relevant but were kept out of inertia. It also proposes alternatives which match the empirical data better.

Mark Fairchild is great in general, and anyone wanting to learn about color science should go skim through his book and papers: he does the serious empirical work needed to justify his conclusions, and writes clearly. It was nice to drop by his office and shake his hand a few years ago.

In an email a couple years ago he explained that he had nothing to do with CAM16 specifically because the CIE wouldn't let him on their committees anymore (even as a volunteer advisor) without signing some kind of IP release giving them exclusive rights to any ideas discussed.

J is the lightness channel and is similar to the lightness formulas in other SDR colorspaces. I.e. the usual idea is to take a lightness formula and then arrange hues/chromas for each value of J.
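As a concrete example of "a lightness formula": CIELAB's L* maps relative luminance onto a 0..100 scale (CIECAM02's J plays the same role, just with a different formula):

    # CIELAB lightness: relative luminance y = Y / Y_white in 0..1 -> L* in 0..100
    def cielab_lightness(y):
        f = y ** (1 / 3) if y > (6 / 29) ** 3 else y / (3 * (6 / 29) ** 2) + 4 / 29
        return 116 * f - 16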

Yea, Jab instead of Lab in ciecam haha. Btw, ciecam is pretty bad at predicting highlights; it was designed for SDR to begin with. The lightness formula in ICtCp is more interesting (there it's called "I").
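A rough sketch of where ICtCp's "I" comes from, assuming linear Rec.2020 RGB in display light scaled so 1.0 means 10000 cd/m2 (the PQ reference); the constants are the BT.2100 ones as I remember them, so double-check against the spec:

    # PQ encoding of a normalized luminance value
    def pq_encode(y):
        m1, m2 = 0.1593017578125, 78.84375
        c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
        ym1 = y ** m1
        return ((c1 + c2 * ym1) / (1 + c3 * ym1)) ** m2

    # Intensity channel of ICtCp: RGB -> LMS (BT.2100 coefficients),
    # PQ-encode the cone responses, then average L' and M'
    def ictcp_intensity(r, g, b):
        l = (1688 * r + 2146 * g + 262 * b) / 4096
        m = (683 * r + 2951 * g + 462 * b) / 4096
        return 0.5 * pq_encode(l) + 0.5 * pq_encode(m)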

But yea, the difficulty of ciecam02 comes from the fact that it tries to work for different conditions: while usual colorspaces just need to worry about how everything works at one color temperature (usually 5500 or 6500K), ciecam02 tries to predict how colors would look at different temperatures and under different viewing conditions (viewing conditions do not contribute much difference though).
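The "different temperatures" part is basically chromatic adaptation. A minimal von Kries-style sketch, using the CAT02 matrix and ignoring CIECAM02's degree-of-adaptation factor D (so this is the idea, not the full model):

    import numpy as np

    # CAT02 matrix (XYZ -> sharpened LMS), as published with CIECAM02
    M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                        [-0.7036, 1.6975,  0.0061],
                        [ 0.0030, 0.0136,  0.9834]])

    def adapt(xyz, xyz_src_white, xyz_dst_white):
        # scale cone responses by destination_white / source_white, then go back to XYZ
        lms, lms_s, lms_d = (M_CAT02 @ v for v in (xyz, xyz_src_white, xyz_dst_white))
        return np.linalg.inv(M_CAT02) @ (lms * lms_d / lms_s)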

Oh, and of course, ciecam02 comes with 3 uniform colorspaces (CAM02-LCD, CAM02-SCD and CAM02-UCS), because it is impossible to arrange the ab channels in a euclidean space :) TL;DR: there is a metric, de2000, for comparing two colors, but that metric defines a non-euclidean space, while every colorspace tries to bend the metric so that it fits into a euclidean space. So we have a lot of spaces that attempt it with different degrees of success.
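To make that concrete, here is the contrast, using the colour-science package for the de2000 side (assumed installed; its delta_E function takes CIELab values):

    import numpy as np
    import colour  # pip install colour-science

    lab_1 = np.array([50.0, 40.0, 10.0])
    lab_2 = np.array([52.0, 36.0, 14.0])

    de76 = np.linalg.norm(lab_1 - lab_2)                      # plain euclidean distance in Lab
    de2000 = colour.delta_E(lab_1, lab_2, method='CIE 2000')  # hue/chroma-weighted formula
    # the two generally disagree; a "uniform" space is one where plain
    # euclidean distance approximates de2000 well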

Cam02 is over-engineered, but it is pretty easy to use if you just care about the CAM02-UCS colorspace (one of those three) and standard viewing conditions.
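If it helps, the UCS step itself is tiny; the heavy lifting is getting the CIECAM02 correlates J, M, h in the first place (e.g. from the colour-science package). The coefficients below are the CAM02-UCS ones from Luo et al. 2006:

    import math

    def cam02_jmh_to_ucs(J, M, h_deg):
        # CAM02-UCS: compress lightness and colorfulness, then go to cartesian a'b'
        c1, c2 = 0.007, 0.0228
        Jp = (1 + 100 * c1) * J / (1 + c1 * J)
        Mp = math.log(1 + c2 * M) / c2
        h = math.radians(h_deg)
        return (Jp, Mp * math.cos(h), Mp * math.sin(h))

    def delta_e_ucs(jab_1, jab_2):
        # in CAM02-UCS the color difference is just euclidean distance
        return math.dist(jab_1, jab_2)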

If you kinda just wanna see the difference between colorspaces, the good comparison papers actually have nice visual graphs. If you want to compare them for color editing, I've implemented a color grading plugin for Photoshop: colorplane (ah, kinda an ad ;)).

Among the most interesting spaces, I would say the colorspaces optimized with machine learning (papers from 2023/2024). But yeah, this means they run on tensorflow, so you need batching when converting from/to RGB. What they did: they took CIELab (yes, that old one), kept its L, and stretched the AB channels to better fit the de2000 metric. Basically the way many other colorspaces are designed, just with machine learning to minimise the errors in a half-automatic way. Heh, someday I should write a looong comparison of colorspaces in plain language with examples :)
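A toy version of that idea (nothing like the actual papers, just to show the shape of it): learn per-channel scale factors for a* and b* so that plain euclidean distance in the stretched space tracks de2000 on random color pairs, here with scipy instead of tensorflow and the colour-science package for de2000:

    import numpy as np
    import colour                      # pip install colour-science
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    lab_a = rng.uniform([10, -60, -60], [90, 60, 60], size=(2000, 3))
    lab_b = lab_a + rng.normal(scale=3.0, size=lab_a.shape)
    target = colour.delta_E(lab_a, lab_b, method='CIE 2000')

    def loss(scales):
        # euclidean distance in a Lab space with stretched a*/b* channels
        s = np.array([1.0, scales[0], scales[1]])
        d = np.linalg.norm((lab_a - lab_b) * s, axis=1)
        return np.mean((d - target) ** 2)

    print(minimize(loss, x0=[1.0, 1.0]).x)   # fitted a*/b* stretch factors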