
Add colour vision deficiency simulation #20649


Closed
wants to merge 4 commits

Conversation

QuLogic
Member

@QuLogic QuLogic commented Jul 15, 2021

PR Summary

This is a spiritual, if not implementation-wise, resurrection of #3279. Unlike that PR, though, I do not want to get into the business of implementing such transforms. So this reaches out to colorspacious to do the actual simulation (which looks entirely different for tritanopia, so that PR might not have implemented it correctly). While colorspacious does support simulating partial colour blindness, I did not implement that here, as I think it's overkill. I'm also aware that colorspacious may be lacking maintainers, but this does not directly expose its details, so we can swap it out as necessary later. This PR should be good enough to review the API to get the filters working.

Currently, I have implemented a UI for this in Qt and GTK, and can do the rest if this all seems good. I don't quite like that every backend needs to re-create the list of filters, but the way that NavigationToolbar.toolitems is defined does not give much flexibility here. In any case, everything UI-wise is self-contained in the backends right now, so if we refactor to simplify things, we won't be breaking API, and can do that in followup PRs.

This is a thing we said I'd get done for the CZI grant, and though it's not fully complete, I'm opening the PR now so I can show it off at SciPy.

PR Checklist

  • Has pytest style unit tests (and pytest passes).
  • Is Flake 8 compliant (run flake8 on changed files to check).
  • New features are documented, with examples if plot related.
  • Documentation is sphinx and numpydoc compliant (the docs should build without error).
  • Conforms to Matplotlib style conventions (install flake8-docstrings and run flake8 --docstring-convention=all).
  • New features have an entry in doc/users/next_whats_new/ (follow instructions in README.rst there).
  • API changes documented in doc/api/next_api_changes/ (follow instructions in README.rst there).

@QuLogic QuLogic added this to the v3.5.0 milestone Jul 15, 2021
@story645 story645 requested a review from anntzer July 15, 2021 05:07
@jklymak
Member

jklymak commented Jul 15, 2021

Do we already have colorspacious as a dependency?

I'm surprised this was on the CZI grant - I guess I need to read for more detail next time. Are we taking on a burden for these being "correct" in perpetuity? Or can we farm complaints off to colorspacious? Because I'm not aware we have any experts in color perception.

This seems pretty niche to get a button on our main GUIs toolbar, and is the kind of thing I'd expect users to have a standalone package to use to explore.

@anntzer
Contributor

anntzer commented Jul 15, 2021

@story645 any specific reason you asked me to review this? I guess my comments mostly follow @jklymak's... Perhaps also consider making this use the MEP22 API instead?

@anntzer anntzer removed their request for review July 15, 2021 18:33
@story645
Member

Oh sorry, I think that was GitHub UI weirdness when I was adding the label; I didn't intend it. Sorry I didn't catch it.

@jklymak
Member

jklymak commented Jul 15, 2021

Sorry for the multi-post; I didn't feel that strongly. Internet is flaky here.

I appreciate that grants sometimes go in on short deadlines, but I do wonder if newly promised features would benefit from fuller discussion before being added as deliverables. Or we could keep a list of such features in our back pockets for use on short notice.

@tacaswell
Member

I'm surprised this was on the CZI grant - I guess I need to read for more detail next time.

This was listed as one of the possible 'mid scale' projects that would be enabled by CZI. I think that this is a serious accessibility issue that we should address. However, I agree with @QuLogic that we do not want to get into the business of maintaining the correct conversion functions (as we are not experts on colour theory).

Do we already have colorspacious as a dependency?

I do not think so, and this may be a compelling reason to start making use of the "extra" dependencies; we should provide a nice error message if colorspacious is missing. We currently depend on it as a documentation requirement.


How much work would it be to implement a simpler greyscale conversion?
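The simpler greyscale conversion asked about here is only a few lines with NumPy; a minimal sketch (using Rec. 709 luma weights on the gamma-encoded values, which is a common shortcut — a strictly correct version would linearize sRGB first):

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an (..., 3) sRGB array in [0, 1] to a matched gray.

    Applies Rec. 709 luma weights directly to the gamma-encoded
    channels; each output pixel repeats the luminance in R, G, and B.
    """
    lum = np.asarray(rgb) @ np.array([0.2126, 0.7152, 0.0722])
    return np.repeat(lum[..., np.newaxis], 3, axis=-1)
```

This would slot in as just another entry in the filter list alongside the CVD simulations.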

@anntzer
Contributor

anntzer commented Jul 17, 2021

A slightly more general point of review: this would add yet another mapping of string names to "python objects" (here, callables). We already have plenty of such mappings (projections, scales, colormaps, arrowstyles, etc.), and I think there are, as always in such cases, two questions to ask: 1) would it be simpler to not support string names, but directly expose the python objects as such (here, it would mean adding e.g. matplotlib.colors.deuteranomaly as a free function), and let the API simply take the function itself rather than its string name (*), and 2) if we really want string names, should the mapping be user-extensible? Do we want to replicate e.g. the colormap machinery for protecting builtin names from being overridden? Etc.

(*) An intermediate possibility, in case explicitly typing the imports is considered too onerous (I had mostly thought about this for colormaps, but the idea is the same here), would be to support passing a fully qualified name, e.g. set_filter("matplotlib.colors.deuteranomaly") or set_colormap("mycmappackage.mycmapname"), and internally perform the import, which may be a reasonable middle ground? (Users don't have to type an extra import, and we don't have to maintain a separate mapping.)
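The fully-qualified-name lookup suggested here can be sketched with importlib; note that set_filter and matplotlib.colors.deuteranomaly are hypothetical names from the comment above, so this only shows the resolution step:

```python
import importlib

def resolve_qualname(name):
    """Resolve a dotted name like 'package.module.attr' to the object.

    Tries progressively shorter module prefixes so that nested
    attributes (e.g. 'pkg.mod.Class.method') also resolve.
    """
    parts = name.split(".")
    for i in range(len(parts) - 1, 0, -1):
        try:
            obj = importlib.import_module(".".join(parts[:i]))
        except ImportError:
            continue
        for attr in parts[i:]:
            obj = getattr(obj, attr)
        return obj
    raise ImportError(f"cannot resolve {name!r}")
```

A hypothetical set_filter could then accept either a callable directly or a string it passes through this resolver, with no separate name registry to maintain.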

@jklymak
Member

jklymak commented Jul 17, 2021

This sort of creep is why I am very leery about this. We already have controversy about how to downsample colormapped images (in data space or rgba space?), now we are considering adding another colormapping! When is this to happen? Before or after downsampling?

Again I'm not really sure why this isn't more appropriate to an external tool like imagemagick or a graphics package.

@mpetroff
Contributor

Are we taking on a burden for these being "correct" in perpetuity? Or can we farm complaints off to colorspacious? Because I'm not aware we have any experts in color perception.

While I don't claim to be an expert, I've read extensively on the topic of color perception and color vision deficiencies. Colorspacious implements the method of Machado et al. (2009), which is to my knowledge the best available color vision deficiency simulation method (the only other one I would consider reasonable is that of Brettel et al. (1997)). As for correctness, the method was devised using experimental data with a tiny sample size and what I consider to be insufficient validation, which makes me hesitant to trust it completely. However, the results seem to be reasonable to first order, and I don't know of any better methods (but do have ideas for running experiments that would either validate the existing method or create a better one).

This of course sidesteps the question of whether something like this should be included. If it is included, we can at least be confident that the simulation method is reasonable, unlike when Chrome and Firefox first included color vision deficiency simulations, which were both based on a method with no scientific provenance that produced clearly incorrect results.

@tacaswell tacaswell modified the milestones: v3.5.0, v3.6.0 Aug 14, 2021
@tacaswell
Member

Thanks for commenting @mpetroff !


I've moved this to 3.6.

@anntzer
Contributor

anntzer commented Jan 2, 2022

@QuLogic I wanted to play a bit with this and rebased it on master; do you mind if I push the rebase?

@anntzer
Contributor

anntzer commented Jan 2, 2022

Also, this seems to map protanopia, deuteranopia, and tritanopia to protanomaly, deuteranomaly, and tritanomaly respectively, even though wikipedia claims that -opias are different from -anomalies (see https://en.wikipedia.org/wiki/File:Color_blindness.png and the corresponding article). Perhaps @mpetroff or someone else with CVD expertise can weigh in?

@mpetroff
Contributor

mpetroff commented Jan 2, 2022

@anntzer That image from Wikipedia is misleading, since it suggests that the "-omalies" are discrete conditions. The "-omalies" each represent a spectrum, where the spectral response of one of the cones is shifted, which is parameterized in the Machado et al. (2009) method implemented by Colorspacious as a percentage. Exploring the limiting cases, on one extreme, 0%, there's no shift, corresponding to typical color vision. On the other extreme, 100%, the spectral response of the shifted cone matches that of another cone, i.e., there are only two distinct cones, corresponding to an "-opia." Thus, the "-omaly" simulation with a severity of 100% is an "-opia" simulation.
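The limiting-case behaviour described above can be illustrated in a few lines. The matrix values are transcribed from the severity-1.0 deuteranomaly entry of the Machado et al. (2009) lookup table, and plain linear interpolation from the identity is a simplification for illustration (the actual method tabulates matrices at 0.1 severity steps), so treat this as a sketch of the concept rather than the real algorithm:

```python
import numpy as np

# Severity-1.0 deuteranomaly matrix from the Machado et al. (2009)
# table -- at 100% severity the "-omaly" simulation is an "-opia".
DEUTERANOPIA = np.array([
    [ 0.367322, 0.860646, -0.227968],
    [ 0.280085, 0.672501,  0.047413],
    [-0.011820, 0.042940,  0.968881],
])

def simulate(rgb, severity):
    """Illustrative CVD simulation on linear-RGB values in [0, 1].

    severity is a percentage in [0, 100]: 0 leaves colours untouched
    (typical vision), 100 applies the full dichromat matrix.
    """
    t = severity / 100
    m = (1 - t) * np.eye(3) + t * DEUTERANOPIA
    return np.clip(np.asarray(rgb) @ m.T, 0, 1)
```

Note that the matrix rows each sum to ~1, so neutral grays (including white) are preserved at every severity, as expected of a plausible simulation.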

@anntzer
Contributor

anntzer commented Jan 5, 2022

Ah, thanks for the clarification.

@QuLogic QuLogic modified the milestones: v3.6.0, v3.7.0 Aug 24, 2022
@tacaswell tacaswell modified the milestones: v3.7.0, v3.8.0 Dec 16, 2022
@anntzer
Contributor

anntzer commented Feb 2, 2023

@QuLogic I think this can mostly be closed as covered by #22316 (perhaps we can specifically release mplcvd as a separate package), but I could use some help to make that example work with gtk4, which I just realized is not working yet.

@anntzer
Contributor

anntzer commented Mar 1, 2023

Let's just track the remaining work at #25356.

@anntzer anntzer closed this Mar 1, 2023
6 participants