ENH: add a casting option 'same_value' and use it in np.astype #29129


Open — wants to merge 10 commits into base: main

Conversation

@mattip (Member) commented Jun 5, 2025:

Users have asked for an option to raise an error when a cast overflows. This is hard to do with NEP 50-based casting, which only offers blanket rules for casting one dtype to another, without taking the actual values into account. This PR adds a new 'same_value' option to the casting kwarg (implemented in PyArray_CastingConverter) and extends the casting loop functions to raise a ValueError if 'same_value' casting is requested and a value would be changed by the cast. So far this is only implemented for ndarray.astype, i.e. np.array([1000]).astype(np.int8, casting='same_value') will now raise an error.

Performance: since the PR touches the assignment deep in the inner loop, I checked early on that it does not impact performance when not using 'same_value' casting. The loop pseudo-code now looks like this; when compiled with -O3 (via a gcc pragma specific to the loop), the compiler seems smart enough to keep the if condition from impacting performance.

```
int same_value_casting = <check some condition>;
type2 dst_value; type1 src_value;
while (<condition>) {
    <setup dst_value, src_value>
    dst_value = (cast)src_value;
    if (same_value_casting) {
        <do some checking>;
    }
    <iterate>;
}
```
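The effect of that check can be sketched in pure Python (an illustration only; `cast_same_value` and its range arguments are hypothetical names, not NumPy's actual C inner loop):

```python
def cast_same_value(value, lo, hi):
    """Cast an integer into the range [lo, hi] with C-style wraparound,
    raising ValueError when the cast would change the value."""
    width = hi - lo + 1
    result = (value - lo) % width + lo   # wraparound, like a C integer cast
    if result != value:
        raise ValueError("overflow in 'same_value' casting")
    return result

cast_same_value(100, -128, 127)       # fits in int8, returns 100
try:
    # mirrors np.array([1000]).astype(np.int8, casting='same_value')
    cast_same_value(1000, -128, 127)
except ValueError as e:
    print(e)                          # overflow in 'same_value' casting
```

Under plain unsafe casting the wraparound result (-24 for 1000 cast to int8) would be returned silently; 'same_value' turns it into an error instead.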

Protection: This PR only changes ndarray.astype. Future PRs may implement more. I tried to guard the places that use PyArray_CastingConverter to raise an error if 'same_value' is passed in:

  • array_datetime_as_string (by disallowing 'same_value' in can_cast_datetime64_units)
  • array_copyto
  • array_concatenate (by disallowing 'same_value' in PyArray_ConcatenateInto)
  • array_einsum (by disallowing 'same_value' in PyArray_EinsteinSum)
  • ufunc_generic_fastcall

'same_value' is allowed in

  • array_astype (the whole goal of this PR)
  • array_can_cast_safely
  • npyiter_init (I am pretty sure that is OK?)
  • NpyIter_NestedIters (I am pretty sure that is OK?)

Testing: I added tests for ndarray.astype() covering all combinations of built-in types, and also some tests for properly blocking casting='same_value'.
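The type-combination coverage could be outlined like this (a hedged sketch; `INT_RANGES` and `fits` are hypothetical stand-ins, and the real tests call ndarray.astype on NumPy's actual dtypes):

```python
import itertools

# Hypothetical (lo, hi) ranges standing in for a few built-in integer
# dtypes; the real tests exercise NumPy's actual dtypes.
INT_RANGES = {
    "int8": (-128, 127),
    "int16": (-32768, 32767),
    "uint8": (0, 255),
}

def fits(value, dtype):
    """True if `value` is representable in `dtype` unchanged."""
    lo, hi = INT_RANGES[dtype]
    return lo <= value <= hi

# For every (src, dst) pair, the boundary values of src should survive a
# casting='same_value' astype exactly when they also fit in dst.
cases = [
    (src, dst, value, fits(value, dst))
    for src, dst in itertools.product(INT_RANGES, repeat=2)
    for value in (*INT_RANGES[src], 0)
]
# e.g. ('int16', 'int8', 32767, False): the real test asserts that
# astype(..., casting='same_value') raises ValueError for this case.
```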

TODO:

  • disallow astype(..., casting='same_value') with datetime64 or user-defined dtypes?
  • rerun benchmarks against main

@seberg (Member) left a comment:

It is pretty cool that we can do it! That said, I would prefer some more tinkering about the approach.

Basically, I think making it work everywhere is not nearly as hard as it may seem and so I think we have to at least try.

I think the only good way to do this is that the new cast level(s) are reported by the loops (via resolve_descriptors) and integrated into PyArray_MinCastSafety. By doing so, we will achieve two things:

  1. Things should just work everywhere as long as you also set the context.flag correctly everywhere (should also not be too much).
  2. I am very sure that currently things are broken for datatypes that don't implement this flag yet. And unfortunately, that is not great. If a user does casting="same_value" a "safe" cast is fine but we need to reject "unsafe" and "same_kind" casts.

So, I think we really need something for 2, and honestly think the easiest way to solve it is this same path that gives us full support everywhere.
(And yes, I don't love having two "same_value" levels, but I doubt it can be helped.)

(All other things are just nitpicks at this time.)
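The core of the PyArray_MinCastSafety idea is that combining two casts yields the least safe of the two levels. A rough Python sketch of that ordering (illustrative names only; NumPy's real enum values differ, and where 'same_value' actually slots relative to 'same_kind' is exactly the open question in this thread — this just picks one placement):

```python
# Illustrative ordering from most to least restrictive; the placement of
# 'same_value' here is an assumption, not settled by the PR discussion.
ORDER = ["no", "equiv", "safe", "same_value", "same_kind", "unsafe"]

def min_cast_safety(a, b):
    """Combined safety of two chained casts: the less safe of the two."""
    return max(a, b, key=ORDER.index)

print(min_cast_safety("safe", "same_kind"))   # same_kind
print(min_cast_safety("no", "equiv"))         # equiv
```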

```diff
@@ -2461,7 +2466,7 @@ convert_pyobject_to_datetime(PyArray_DatetimeMetaData *meta, PyObject *obj,
      * With unsafe casting, convert unrecognized objects into NaT
      * and with same_kind casting, convert None into NaT
      */
-    if (casting == NPY_UNSAFE_CASTING ||
+    if (casting == NPY_UNSAFE_CASTING || (casting == NPY_SAME_VALUE_CASTING) ||
             (obj == Py_None && casting == NPY_SAME_KIND_CASTING)) {
```
Member:

I think only the None -> NaT thing makes sense here (not including it at all may also make sense).

Member Author:

Right, reverting this change.

```diff
@@ -984,7 +984,7 @@ NpyDatetime_MakeISO8601Datetime(
      * the string representation, so ensure that the data
      * is being cast according to the casting rule.
      */
-    if (casting != NPY_UNSAFE_CASTING) {
+    if ((casting != NPY_UNSAFE_CASTING) && (casting != NPY_SAME_VALUE_CASTING)) {
```
Member:

We are parsing strings, should we actually consider that same value? (seems dubious to me, since this probably doesn't necessarily round-trip, either.)

Member Author:

Right, reverting.

```diff
@@ -107,6 +107,12 @@ typedef struct PyArrayMethod_Context_tag {

     /* Operand descriptors, filled in by resolve_descriptors */
     PyArray_Descr *const *descriptors;
+    void *padding;
```
Member:

Might rename it to _reserved or so to be clearer.

Also, this is public API, so it needs a

```c
#if NPY_FEATURE_VERSION > NPY_2_3_API_VERSION
```

guard.

Member Author:

thanks, fixed.

```diff
@@ -144,6 +150,11 @@ typedef struct {
 #define NPY_METH_contiguous_indexed_loop 9
 #define _NPY_METH_static_data 10

+/*
+ * Constants for same_value casting
```
Member:

These would be constants for special array-method return values, I think (i.e. not just applicable to casts).

Is it really worth it to just keep this inside the inner loop? Yes, you need to grab the GIL there, but on the plus side, you could actually consider reporting the error.

Member Author:

I thought the point of having a return value from the inner loop function is so we can use it to report errors. In general I prefer a code style that tries as much as possible to separate programming metaphors: keep the python c-api calls separate from the "pure" C functions.

@seberg (Member) commented Jun 6, 2025:

I can be convinced to add this. But it is public API and we call these loops from a lot of places so we need machinery to make sure that all of these places give the same errors.

That may be a great refactor! E.g. we could have a single function that does HandleArrayMethodError("name", method_result, method_flags).
But I guess without such a refactor it feels very bolted on to me, because whether or not we check for this return depends on where we call/expect it.

EDIT: I.e. basically, wherever we have PyUFunc_GiveFloatingpointErrors() we would also pass the actual return value and do the needed logic there.

```c
int fpes = npy_get_floatstatus_barrier((char*)&src_data);
if (fpes && PyUFunc_GiveFloatingpointErrors("cast", fpes) < 0) {
    return -1;
}
```

On the hunk:

```c
    }

    return 0;
same_value_overflow:
    PyErr_SetString(PyExc_ValueError, "overflow in 'same_value' casting");
```
Member:

You need NPY_END_THREADS; before this, but I guess only a debug build may actually notice.
(But either way, I would prefer pushing it into the loop, in general. Even if it duplicates things in the loops a bit... in the end we also call these loops from a few places, so...)

Member Author:

Good catch. There is only one goto fail in these functions. Is there a preference for using goto rather than <do cleanup>; return -1 in cases where there is only one goto?

Member:

Sounds fine either way to me. I guess sometimes the goto may make sense, just because it is easier to refactor. (Maybe it also hints the compiler similar to an "unlikely", but that is unlikely to matter...)

```diff
@@ -2088,6 +2088,7 @@ PyUFunc_GeneralizedFunctionInternal(PyUFuncObject *ufunc,
         .caller = (PyObject *)ufunc,
         .method = ufuncimpl,
         .descriptors = operation_descrs,
+        .flags = 0,
```
Member:

If we bother adding the padding, we should probably bother to set it to NULL, I think. That way, if we start using it users can know that the NumPy version is new enough to fill it in.

```c
    cast_info.context.flags |= NPY_SAME_VALUE_CASTING;
    PyErr_SetString(PyExc_NotImplementedError, "'same_value' casting not implemented yet");
    return -1;
}
```
Member:

I think we really shouldn't need any of these checks.

What you need to do is modify PyArray_MinCastSafety() to correctly propagate the new cast level. If we can correctly propagate cast levels, everything else will just drop out for free. Although, you also have to report the correct cast level in the cast loops (that is, the *_resolve_descriptors functions).

What I am not quite sure about is how same-value and same-kind interact, unfortunately.
For the user this doesn't matter, but internally we do the opposite: we say "this cast is SAFETY", which is why I suggested a flag — but maybe it is just two levels: same-value-unsafe and same-value-same-kind.

Before you think what madness begot Sebastian... the two levels would not need to be exposed to Python users at all! The user would just use same_value (translating to same-value-unsafe). But a cast would be able to report same-value-same-kind, so we know that both can_cast(x, y, "same_value") and can_cast(x, y, "same_kind") are valid.

(I might be tempted to make this a "flag", in the sense of making sure there is one bit involved that isn't used by other cast levels.)
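The flag-bit idea in that last parenthesis could be encoded roughly like this (hypothetical constants chosen for illustration; NumPy's real NPY_*_CASTING enum values are defined in the C headers and may differ):

```python
# Hypothetical numeric levels; NumPy's actual NPY_*_CASTING enum differs.
NO, EQUIV, SAFE, SAME_KIND, UNSAFE = 0, 1, 2, 3, 4
SAME_VALUE_FLAG = 1 << 4                       # a bit no plain level uses

# The two internal levels described above:
SAME_VALUE_SAME_KIND = SAME_KIND | SAME_VALUE_FLAG
SAME_VALUE_UNSAFE = UNSAFE | SAME_VALUE_FLAG   # what users spell 'same_value'

def is_same_value(level):
    """True if the level carries the same-value bit."""
    return bool(level & SAME_VALUE_FLAG)

def base_level(level):
    """Strip the same-value bit to recover the plain cast level."""
    return level & ~SAME_VALUE_FLAG

assert is_same_value(SAME_VALUE_SAME_KIND)
assert base_level(SAME_VALUE_SAME_KIND) == SAME_KIND
```

This way a single bit test distinguishes same-value casts while the remaining bits still compare like ordinary cast levels.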

Member Author:

I am not sure I want to expand the scope of this PR to support other things. Here are the other functions that support a casting kwarg via PyArray_CastingConverter; which do you imagine should support same_value in this PR?

  • datetime_as_string
  • copyto
  • concatenate
  • einsum
  • can_cast (agreed, it should "support" same_value like it does unsafe)
  • npyiter_init, NpyIter_NestedIters (??? anyhow, should NOT support it in this PR)
  • convert_ufunc_arguments, py_resolve_dtypes_generic (should NOT support it in this PR)

Member:

I think I was asking for basically two things:

  1. Fix the MinCastSafety logic and return the (unfortunately two) "same_value" casting levels from the resolve_dtypes functions. This:
    • Fixes user-dtype and other-dtype support.
    • Is the only non "just bolt it on" way to fix np.can_cast (which this will automatically fix).
  2. We should pass cast safety to PyArray_GetDTypeTransferFunction so that we can correctly initialize the context there already. This would:
    • Fix any problems with chained casts (this is a lingering bug, but dunno if it is worth much in practice; casting a structured dtype with one field is probably broken with "same_value" without this).
    • I honestly think that if you do this, you will solve everything else. (except I suggest to not do that magic return in this PR, because doing it right would probably touch more code than doing this right.)

Now 1. seems just necessary to me, to be honest?
2. seems hard to avoid long-term, because we want to support the level in more places, I think and also because of lingering small bugs.

I suspect that doing 2. is mostly easy. The one thing that is probably annoying is threading it through the internal/recursive calls to PyArray_GetDTypeTransferFunction.

> convert_ufunc_arguments, py_resolve_dtypes_generic (should NOT support it in this PR)

Well, while I agree it is not important for them to do so, it would also be completely fine if it drops out. I don't think there is an API reason to dislike it.

Member:

I would actually be happy to try and split out 1. into its own PR, although we would have to revert it if we don't follow up. I.e. add support for can_cast() and correct reporting for it, but just ignore it in practice when used?

(Just to keep this more focused, and since I think it's very unlikely we won't see this to its end.)

```c
        npy_clear_floatstatus_barrier((char*)&src_data);
    }

    if (same_value_cast) {
        cast_info.context.flags |= NPY_SAME_VALUE_CASTING;
```
Member:

Can we try to refactor this a bit (maybe I always should have). Basically, if you do a

```
git grep "\.caller ="
```

you find a few places where context is initialized (that is cast_info.context here). Since we need to do more complex things here now, I think it would make sense to create a static inline NPY_init_context() that is passed "casting".

Yes, that will require you to thread the cast level through PyArray_GetDTypeTransferFunction as well. But that is also a feature. While you don't use it, the context is already user-exposed by the time we reach here.

I.e. in principle it could be used to return a specialized loop. That isn't particularly interesting for "same-value" maybe, but I don't see much reason not to right now.
Threading it through may be a bit tedious in some places (I hope not very, though), but it should be the only thing needed to make this work for all functions!

@seberg seberg added the 56 - Needs Release Note. Needs an entry in doc/release/upcoming_changes label Jun 5, 2025
@jorenham (Member) left a comment:

could you add this option to

```python
_CastingKind: TypeAlias = L["no", "equiv", "safe", "same_kind", "unsafe"]
```

@mattip (Member Author) commented Jun 5, 2025:

It seems I touched something in the float -> complex astype. Running

```
$ spin bench --compare -t bench_ufunc.NDArrayAsType.time_astype
```

gives me

```
Change  Before [d52bccb]  After [23dda18]  Ratio  Benchmark (Parameter)
+       2.60±0.04μs       6.45±0.05μs      2.48   bench_ufunc.NDArrayAsType.time_astype(('float32', 'complex64'))
+       4.08±0.01μs       9.21±0.3μs       2.26   bench_ufunc.NDArrayAsType.time_astype(('float64', 'complex128'))
+       3.99±0.04μs       8.62±0.04μs      2.16   bench_ufunc.NDArrayAsType.time_astype(('float32', 'complex128'))
+       15.8±0.4μs        17.8±0.2μs       1.13   bench_ufunc.NDArrayAsType.time_astype(('float16', 'float32'))
+       16.0±0.1μs        18.0±0.3μs       1.13   bench_ufunc.NDArrayAsType.time_astype(('float16', 'int64'))
+       2.14±0μs          2.39±0.03μs      1.11   bench_ufunc.NDArrayAsType.time_astype(('int32', 'float32'))
+       18.0±0.2μs        19.9±0.2μs       1.10   bench_ufunc.NDArrayAsType.time_astype(('float16', 'complex128'))
-       2.20±0.03μs       1.98±0.04μs      0.90   bench_ufunc.NDArrayAsType.time_astype(('int16', 'int32'))
```

@seberg (Member) commented Jun 6, 2025:

> It seems I touched something in the float -> complex astype.

Maybe the compiler just didn't decide to lift the branch for some reason? I have to say that complex -> real casts wouldn't be my biggest concern; there is this weird ComplexWarning on it (in some branches at least), for a reason...

Labels
01 - Enhancement 56 - Needs Release Note. Needs an entry in doc/release/upcoming_changes
3 participants