Add colour vision deficiency simulation #20649
Conversation
This is based on matplotlib#3279 in essence, but rewritten to use colorspacious and modernized somewhat.
Do we already have colorspacious as a dependency? I'm surprised this was on the CZI grant - I guess I need to read in more detail next time. Are we taking on a burden for these being "correct" in perpetuity? Or can we farm complaints off to colorspacious? Because I'm not aware we have any experts in color perception. This seems pretty niche to get a button on our main GUI toolbar, and is the kind of thing I'd expect users to explore with a standalone package.
Oh sorry, I think that was GitHub UI weirdness when I was adding the label; I didn't intend it. Sorry I didn't catch it.
Sorry for the multi-post - I didn't feel that strongly. Internet is flaky here. I appreciate that grants sometimes go in on short deadlines, but I do wonder if newly promised features would benefit from thorough discussion before being added as deliverables. Or we could keep a list of these in our back pockets for use on short notice.
This was listed as one of the possible 'mid scale' projects that would be enabled by CZI. I think that this is a serious accessibility issue that we should address. However, I agree with @QuLogic that we do not want to get into the business of maintaining the correct conversion functions (as we are not experts on colour theory).
I do not think so, and this may be a compelling reason to start making use of the "extras" dependencies; we should provide a nice error message if colorspacious is missing. We currently depend on it as a documentation requirement. How much work would it be to implement a simpler greyscale conversion?
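For context on that last question, here is a minimal sketch of what a simple greyscale conversion could look like, using the standard Rec. 709 luma weights. The function name is hypothetical (not part of this PR), and for simplicity it ignores the sRGB transfer function, which a real implementation would want to account for:

```python
import numpy as np

def to_greyscale(rgb):
    """Convert an (..., 3) sRGB array with values in [0, 1] to greyscale.

    A luminance-only sketch using Rec. 709 luma coefficients; it operates
    directly on the encoded values rather than linearizing first.
    """
    luminance = rgb[..., :3] @ np.array([0.2126, 0.7152, 0.0722])
    # Replicate the single luminance channel back into R, G, and B.
    return np.stack([luminance] * 3, axis=-1)
```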
A slightly more general point of review: this would add yet another mapping of string names to "python objects" (here, callables). We already have plenty of such mappings (projections, scales, colormaps, arrowstyles, etc.), and I think there are, as always in such cases, two questions to ask: 1) would it be simpler to not support string names, but directly expose the python objects as such (here, it would mean adding e.g. (*) An intermediate possibility, in case explicitly typing the imports is considered too onerous (I had mostly thought about this for colormaps, but the idea is the same here), would be to support passing a fully qualified name, e.g.
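To make the alternatives above concrete, here is a hypothetical sketch of the three API styles being contrasted; none of these names (`FILTERS`, `set_filter`, `resolve`) exist in Matplotlib:

```python
import importlib

# Option 1: a string-to-callable registry; users pass "greyscale".
FILTERS = {"greyscale": lambda rgb: rgb}  # identity stands in for a real filter

def set_filter(filter_spec):
    """Accept either a registered name or the callable itself."""
    if isinstance(filter_spec, str):
        filter_spec = FILTERS[filter_spec]
    return filter_spec

# Option 2: no registry; users import and pass the callable directly,
# e.g. set_filter(my_module.to_greyscale).

# Intermediate option: accept a fully qualified name and resolve it lazily,
# avoiding both a registry and an explicit import at the call site.
def resolve(qualified_name):
    module_name, _, attr = qualified_name.rpartition(".")
    return getattr(importlib.import_module(module_name), attr)
```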
This sort of creep is why I am very leery about this. We already have controversy about how to downsample colormapped images (in data space or rgba space?), now we are considering adding another colormapping! When is this to happen? Before or after downsampling? Again I'm not really sure why this isn't more appropriate to an external tool like imagemagick or a graphics package.
While I don't claim to be an expert, I've read extensively on the topic of color perception and color vision deficiencies. Colorspacious implements the method of Machado et al. (2009), which is to my knowledge the best available color vision deficiency simulation method (the only other one I would consider reasonable is that of Brettel et al. (1997)). As for correctness, the method was devised using experimental data with a tiny sample size and what I consider to be insufficient validation, which makes me hesitant to trust it completely. However, the results seem to be reasonable to first order, and I don't know of any better methods (but do have ideas for running experiments that would either validate the existing method or create a better one). This of course sidesteps the question of whether something like this should be included. If it is included, we can at least be confident that the simulation method is reasonable, unlike when Chrome and Firefox first included color vision deficiency simulations, which were both based on a method with no scientific provenance that produced clearly-incorrect results.
Thanks for commenting @mpetroff! I've moved this to 3.6.
@QuLogic I wanted to play a bit with this and rebased it on master; do you mind if I push the rebase?
Also, this seems to map protanopia, deuteranopia, and tritanopia to protanomaly, deuteranomaly, and tritanomaly respectively, even though Wikipedia claims that -opias are different from -anomalies (see https://en.wikipedia.org/wiki/File:Color_blindness.png and the corresponding article). Perhaps @mpetroff or someone else with CVD expertise can weigh in?
@anntzer That image from Wikipedia is misleading, since it suggests that the "-omalies" are discrete conditions. The "-omalies" each represent a spectrum, where the spectral response of one of the cones is shifted, which is parameterized in the Machado et al. (2009) method implemented by Colorspacious as a percentage. Exploring the limiting cases, on one extreme, 0%, there's no shift, corresponding to typical color vision. On the other extreme, 100%, the spectral response of the shifted cone matches that of another cone, i.e., there are only two distinct cones, corresponding to an "-opia." Thus, the "-omaly" simulation with a severity of 100% is an "-opia" simulation.
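To make the severity parameterization concrete, here is a small sketch using colorspacious's documented `cspace_convert` API (this assumes colorspacious is installed; the pixel value and loop are illustrative):

```python
import numpy as np
from colorspacious import cspace_convert

# A single pure-red pixel in sRGB1 (values in [0, 1]).
red = np.array([[[1.0, 0.0, 0.0]]])

# severity is a percentage: 0 leaves colors unchanged; 100 simulates the
# corresponding "-opia" (here, deuteranopia).
for severity in (0, 50, 100):
    cvd_space = {"name": "sRGB1+CVD",
                 "cvd_type": "deuteranomaly",
                 "severity": severity}
    simulated = cspace_convert(red, cvd_space, "sRGB1")
    # Simulated values can fall slightly outside [0, 1], so clip for display.
    print(severity, np.clip(simulated, 0, 1).ravel())
```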
Ah, thanks for the clarification. |
Let's just track the remaining work at #25356. |
PR Summary
This is a spiritual, if not implementation-wise, resurrection of #3279. Unlike that PR though, I do not want to get in the business of implementing such transforms. So this reaches out to `colorspacious` to do the actual simulation (which looks entirely different for tritanopia, so that PR might not have implemented it correctly.) While `colorspacious` does support simulating partial colour blindness, I did not implement that here, as I think it's overkill. I'm also aware that `colorspacious` may be lacking maintainers, but this does not directly expose details, so we can swap that out as necessary later. This PR should be good enough to review the API to get the filters working.

Currently, I have implemented a UI for this in Qt and GTK, and can do the rest if this all seems good. I don't quite like that every backend needs to re-create the list of filters, but the way that `NavigationToolbar.toolitems` is defined does not give much flexibility here. In any case, everything UI-wise is self-contained in the backends right now, so if we refactor to simplify things, we won't be breaking API, and can do that in follow-up PRs.

This is a thing we said I'd get done for the CZI grant, and though it's not fully complete, I'm opening the PR now so I can show it off at SciPy.
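For reference on the `toolitems` point above, the attribute on `NavigationToolbar2` in `matplotlib.backend_bases` is (roughly, and abridged here) a flat tuple of 4-tuples that backends iterate over to build their toolbars, which leaves no obvious hook for attaching a per-item list of filters:

```python
# Each entry is (text, tooltip, image file stem, method name);
# all-None rows are toolbar separators.
toolitems = (
    ('Home', 'Reset original view', 'home', 'home'),
    ('Back', 'Back to previous view', 'back', 'back'),
    ('Forward', 'Forward to next view', 'forward', 'forward'),
    (None, None, None, None),
    ('Save', 'Save the figure', 'filesave', 'save_figure'),
)
```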
PR Checklist
- [ ] Has pytest style unit tests (and `pytest` passes).
- [ ] Is Flake 8 compliant (run `flake8` on changed files to check).
- [ ] Conforms to Matplotlib style conventions (install `flake8-docstrings` and run `flake8 --docstring-convention=all`).
- [ ] New features have an entry in `doc/users/next_whats_new/` (follow instructions in README.rst there).
- [ ] API changes documented in `doc/api/next_api_changes/` (follow instructions in README.rst there).