
DOC Clarify how mixed array input types handled in array api #31452


Open · wants to merge 5 commits into base: main

Conversation

lucyleeow
Member

@lucyleeow lucyleeow commented May 29, 2025

Reference Issues/PRs

Update documentation now that we have reached consensus on these issues:

closes #31286 (I think no more work is required for this, as currently all converted metrics output arrays in the same namespace/device as the input)

Towards #28668
Towards #31274

I have not said "closes" for these two because we would need to implement conversion of mixed inputs.

What does this implement/fix? Explain your changes.

Clarify in docs:

  • everything follows X for estimators
  • everything follows y_pred for scoring functions
  • when a scoring function outputs several values, an array in the same namespace and on the same device as the input is returned
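A pure-NumPy sketch of the "everything follows" convention described above (the helper name `follow` is hypothetical, not a scikit-learn function; a plain Python list stands in for an array from a second library, since no GPU is assumed here):

```python
import numpy as np

# Hypothetical sketch of "everything follows X": mixed inputs are converted
# to match the reference array's library (here NumPy) and dtype.
# `follow` is an illustrative name, not a scikit-learn API.
def follow(reference, *others):
    return [np.asarray(o, dtype=reference.dtype) for o in others]

X = np.asarray([[1.0, 2.0], [3.0, 4.0]])
y_other = [0, 1]                  # stand-in for an array from another library
(y,) = follow(X, y_other)
print(type(y).__name__, y.dtype)  # y now matches X's library and dtype
```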

I realise that this update to the docs is not yet true, but I think this is okay: I don't think this will get into 1.7, so it will be in the dev docs only, and I think we will have implemented it by the next release.
Otherwise, happy to leave this PR and implement the mixed input type conversion first.

Any other comments?

I tried to give reasons as to why we made these decisions. Hopefully it is clear and concise (and correct...), but happy to change/fix.

cc @ogrisel @betatim maybe @StefanieSenger


github-actions bot commented May 29, 2025

✔️ Linting Passed

All linting checks passed. Your pull request is in excellent shape! ☀️

Generated for commit: 3cbfeea. Link to the linter CI: here

Contributor

@StefanieSenger StefanieSenger left a comment


Thanks for your work @lucyleeow!

I like the clarity of how this new addition is written and I am sure it will help many users. I have left a few minor comments, and I have some more general comments.

  1. What about starting this section with an easy-to-grasp takeaway for our users' mental model, one that has taken us months to uncover: clearly state that other array inputs follow `X` and `y_pred` in terms of array library and device?

  2. Another piece of feedback would be to re-think the heading of this paragraph. It now talks not only about return types, but also about the internal processing of inputs that follow `X` and `y_pred`. Users interested in that might not expect this information under this heading.

  3. And there is something about a sub-part that I do not understand (and which might also trip up users): in lines 194-212 you showcase an example where `X`'s namespace is changed while going through a pipeline, and several things are unclear to me:

  • When a pipeline starts out on user input with X and y both being on CPU, I would not expect scikit-learn to automatically process these on GPU for intermediate steps, since we don't even know if users have a GPU...
  • At first sight it seems that the "y follows X" principle is broken here. On reflection, I could see that this part demonstrates how y, being in a different namespace at the beginning of a new pipeline step, follows X without the user needing to take care of it. But right now there is only a very subtle hint of this in the paragraph, and the idea is not clearly expressed.
  • When you write "Note that scikit-learn pipelines do not allow transformation of y (to avoid
    :ref:leakage <data_leakage>).": does that even include changes in which namespace y is stored, is that the reason you bring this up here?
  • I wonder, why X and y both need to be on CPU for TargetEncoder? Is there a technical reason or a more goal-oriented reason in terms of performance?

Oh my, that has gotten to become a lot of feedback. 😬 (sorry)

Comment on lines +216 to 218
The `predict` and `transform` method subsequently expect
inputs from the same array library and device as the data passed to the `fit`
method.
Contributor


Maybe mention that as of now, scikit-learn does not interfere with how array libraries handle arrays from a different namespace or device passed to predict or transform. Something like:

"If arrays from a different library or on a different device are passed, behavior depends on the array library: it may raise an error or silently convert them."

Member Author


Sorry I am confused here, why would an array library be doing things within our predict and transform methods? 🤔

Contributor

@StefanieSenger StefanieSenger May 30, 2025


When arrays from different namespaces are passed into a function, the namespace that the function comes from determines the handling. For instance, in `xp.add(array1, array2)` the namespace of `xp` could be torch and `array1` a torch array, while `array2` is not. Then torch will handle this or raise an error.
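For example, with NumPy as the namespace that owns the call (behaviour for torch or cupy may differ, and may raise instead of converting):

```python
import numpy as np

# The namespace the function comes from decides how the "foreign" input
# is handled: NumPy silently coerces the list; other libraries may error.
a = np.asarray([1.0, 2.0])
b = [3.0, 4.0]          # not a NumPy array
out = np.add(a, b)      # NumPy converts b internally
print(out)              # [4. 6.]
```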

Member Author


Okay you've raised some good points 🤔 I'm going to raise these back in the issue

Comment on lines +225 to +228
:class:`~sklearn.linear_model.Ridge`. `y` cannot be transformed by the pipeline
(recall scikit-learn pipelines do not allow transformation of `y`) but as
:class:`~sklearn.linear_model.Ridge` is able to accept mixed input types,
this is not a problem and the pipeline is able to be run.
Member Author


At first sight it seems that the "y follows X" principle is broken here. On reflection, I could see that this part demonstrates how y, being in a different namespace at the beginning of a new pipeline step, follows X without the user needing to take care of it. But right now there is only a very subtle hint of this in the paragraph, and the idea is not clearly expressed.

@StefanieSenger this is the only part that explicitly talks about this. I tried to make it clearer, but I'm open to suggestions to improve it. Thanks!

@lucyleeow
Member Author

lucyleeow commented May 30, 2025

These are very good suggestions. I have tried to address them all.

then I would not expect scikit-learn to take any measures on automatically processing these on GPU for intermediate steps, since we don't even know if users have GPU...

Sorry, I have made it more explicit now. The function transformer step is `FunctionTransformer(partial(torch.asarray, device="cuda:0"))`, which explicitly moves `X` to GPU.
For reference, I was mostly copying Olivier's example here: #31274 (comment)

I wonder, why X and y both need to be on CPU for TargetEncoder? Is there a technical reason or a more goal-oriented reason in terms of performance?

In my mind, the categorical (string) data in `X` is target encoded to numbers by `TargetEncoder`, so `X` has to be on CPU (due to it containing string data), and `y` "needs" to be on the same device as `X` for the estimator to work (`TargetEncoder` also uses `y`) - though if we are doing "everything follows X", `y` technically doesn't need to be on CPU. I will try to make things clearer; I was worried I was going into too much detail and making things too long 🤔
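A minimal pure-NumPy sketch of the idea (illustrative only, not scikit-learn's `TargetEncoder` implementation): the string categories pin `X` to CPU, and `y` is consulted to build each category's mean.

```python
import numpy as np

# Illustrative target encoding, not scikit-learn's implementation:
# string categories force X onto CPU; y supplies the per-category means.
X_cat = np.asarray(["a", "b", "a", "b"])           # string data: CPU only
y = np.asarray([1.0, 0.0, 1.0, 1.0])
encoding = {c: y[X_cat == c].mean() for c in np.unique(X_cat)}
X_enc = np.asarray([encoding[c] for c in X_cat])   # purely numerical now
print(X_enc)  # [1.  0.5 1.  0.5]
```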

When you write "Note that scikit-learn pipelines do not allow transformation of y (to avoid
:ref:leakage <data_leakage>).": does that even include changes in which namespace y is stored, is that the reason you bring this up here?

Yes, pipelines can't touch y.

Comment on lines 184 to 186
Estimators and scoring functions are able to accept input arrays
from different array libraries and/or devices. When mixed input arrays are
passed, scikit-learn silently converts them to make them all consistent.
Member


I've just realised that this means that I can/should/could call some_estimator.fit(X_torch_on_gpu0, y_numpy, sample_weights=weights_cupy_gpu1) or some other more obscure combination. Feels like mayhem, but I guess that is "fine"? Maybe somewhere in a guide we can mention that this is technically allowed but that maybe people shouldn't do this?

Contributor


Suggestion (copied and slightly adjusted from the numpy docs):

.. warning::
  While the mixing of array types from different array libraries may be convenient, it is 
  not recommended. It might have unexpected behavior in corner cases. Users should 
  prefer explicitly passing matching types whenever possible.

Member Author

@lucyleeow lucyleeow Jun 5, 2025


I had exactly the same thought. The other consideration is that, currently, implementation-wise:

  • supported arrays -> numpy (`_convert_to_numpy` does a reasonable job)
  • numpy -> supported arrays (`xp.asarray` is reasonable)
  • supported array -> different supported array (you need to convert to numpy and then to the desired array type)

I am wondering if we should issue a warning when doing the latter (especially as it requires 2 conversions)?

Also I have been meaning to check dlpack support in torch and cupy (I heard it is supported in cupy v13.x), which could help our decision. Though even when it is supported, what do we want to do about min versions...?
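The two-hop path in the last bullet can be sketched as follows (hypothetical helper name; a plain Python list stands in for the source and target array libraries, since no second library is assumed here):

```python
import numpy as np

# Hypothetical two-hop conversion between two non-NumPy libraries via
# NumPy as the interchange format. `convert_via_numpy` is illustrative,
# not a scikit-learn function.
def convert_via_numpy(array, to_target):
    interchange = np.asarray(array)   # hop 1: source library -> NumPy
    return to_target(interchange)     # hop 2: NumPy -> target library

src = [1.0, 2.0, 3.0]                               # "library A" array
dst = convert_via_numpy(src, lambda a: a.tolist())  # "library B" = list
print(dst)  # [1.0, 2.0, 3.0]
```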

For scoring functions the rule is **"everything follows `y_pred`"** - mixed array
inputs are converted so they all match the array library and device of `y_pred`.

When a function or method has been called with Array API compatible inputs, the
convention is to return array values of the same array container type and
Member


How about using "array container type" as a way to make it clear that we are talking about numpy, cupy, pytorch, etc arrays and not "an array of float32s"?

The sentences above are already quite long, so I'm not sure if we should do this.

Contributor


I find the expression "array container type" (which is already in the sentence) quite vague. In other parts of the docs we have talked about "array library" and "array namespace", so maybe like that (?):

Suggested change
convention is to return array values of the same array container type and
convention is to return a type from the same array library and

Member Author


Thanks all, I used:

Suggested change
convention is to return array values of the same array container type and
convention is to return arrays from the same array library and on the same
device as the input data.

We rarely return more than one array but I went for plural. Suggestions welcome!

Comment on lines +206 to +209
This allows estimators to accept mixed input types, enabling `X` to be moved
to a different device within a pipeline, without explicitly moving `y`.
Note that scikit-learn pipelines do not allow transformation of `y` (to avoid
:ref:`leakage <data_leakage>`).
Member


Suggested change
This allows estimators to accept mixed input types, enabling `X` to be moved
to a different device within a pipeline, without explicitly moving `y`.
Note that scikit-learn pipelines do not allow transformation of `y` (to avoid
:ref:`leakage <data_leakage>`).
This behaviour enables pipelines to switch from processing on
the CPU to processing on the GPU at a specific point in the pipeline.

This is still a bit clunky :-/ I think it is enough to mention the use-case and then show the example below, which contains a longer explanation/reminder about the fact that you can't move y.

Member Author


What about:
"This behaviour enables pipelines to switch from processing on
the CPU to processing on the GPU within the pipeline."
?

Member

@ogrisel ogrisel left a comment


Some partial feedback.

Contributor

@StefanieSenger StefanieSenger left a comment


These are cool improvements, thank you @lucyleeow!
Upon reading through, I came across some minor things that could be improved.


Comment on lines 216 to 217
* :class:`~sklearn.preprocessing.FunctionTransformer`, with the function
`partial(torch.asarray, device="cuda")`, which moves `X` to GPU, to improve
Contributor


I am not sure if it renders correctly, but in my mind this is a bit more obvious (and spares people from having to look up the usage of FunctionTransformer before they can read on):

Suggested change
* :class:`~sklearn.preprocessing.FunctionTransformer`, with the function
`partial(torch.asarray, device="cuda")`, which moves `X` to GPU, to improve
* :class:`~sklearn.preprocessing.FunctionTransformer`
(func=partial(torch.asarray, device="cuda")), which moves `X` to GPU, to improve

Member Author


Tried using an 'explicit' title, let's see if it renders correctly...

`partial(torch.asarray, device="cuda")`, which moves `X` to GPU, to improve
performance in the next step.
* :class:`~sklearn.linear_model.Ridge`, whose performance can be improved when
passed arrays on GPU, as it only takes numerical inputs.
Contributor


On "as it only takes numerical inputs": that does not in itself make for improved performance on GPU compared to CPU. We could instead name an actual reason (some matrix computation going on within `Ridge`), or leave this out completely?

Member Author


Hmm I said: "as they can handle large matrix operations very efficiently"

Development

Successfully merging this pull request may close these issues.

Clarification of output array type when metrics accept multiclass/multioutput
4 participants