
NEP 35: Finalize like= argument behaviour before 1.21 release #17075


Closed
seberg opened this issue Aug 12, 2020 · 40 comments

@seberg
Member

seberg commented Aug 12, 2020

Before we release 1.20, we should finalize the behaviour, or possibly hide/replace the API. Unless NEP 35 has been officially accepted, there are still alternative suggestions, such as np.get_array_module, which may win the race.
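
For context, a minimal sketch contrasting the two spellings (np.get_array_module is only the NEP 37 proposal and does not exist in NumPy; the helper names here are made up for illustration):

import numpy as np

def ones_matching_nep37(ref, n):
    xp = np.get_array_module(ref)  # proposed NEP 37 API, not part of NumPy
    return xp.ones(n)

def ones_matching_nep35(ref, n):
    return np.ones(n, like=ref)    # NEP 35: dispatches array creation via __array_function__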

There are also a few issues which require attention if we decide to keep the API largely untouched:

  1. (This is currently NOT valid anymore, so we have the strict version.) l = [1, 2]; np.asarray(l, like=l) was originally valid. However, a modern API may want to accept only array objects as input, and not objects such as lists/tuples, which the user should convert ahead of time. The above spelling could raise an error here, requiring the use of a different function (e.g. np.asduckarray()) to achieve the no-error version, which most existing API currently uses:

    def strict_library(x):
        x = np.asarray(x, like=x)      # raises an error if x is a list

    def non_strict_library(x):
        x = np.asduckarray(x)          # hypothetical helper; also accepts lists/tuples

    The alternative might be to force the use of a different API:

    def strict_library(x):
        x = np.asarray(x, like=x, strict=True)

    (Note that there might be a use for supporting an obj.__asduckarray__() method to mirror obj.__array__(), which would require np.asduckarray(); see the sketch after this list.)

  2. (This seems like an extension; since 1. is strict, it is still possible.) There may be a use for finding a common array-like for functions with multiple inputs:

    def function(arr1, arr2):
        tmp = np.arange(1000, like=np.common_arraylike(arr1, arr2))

    It was previously discussed that this could be achieved by making like= work on tuples: np.arange(1000, like=(arr1, arr2)).

  3. Ensure that the name (currently like=) is thought out and has been discussed sufficiently.

  4. (seberg) Are we sure that some of these functions should never dispatch based on the existing (usually scalar) input? (I guess so, but I thought I saw np.linspace in there, which made me wonder; I suppose it's not there.) In any case, we may want to review each function more individually. (linspace is not included, so this seems fine.)

  5. (We currently dispatch aliases.) Should we dispatch the aliases np.asarray(), etc., or rather only the full np.array() call? The reason for including the aliases is currently that CuPy would have issues with the latter, because it doesn't support the subok argument (there are no CuPy subclasses).
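
A minimal sketch of what the np.asduckarray()/obj.__asduckarray__() idea from point 1 could look like; both the function and the protocol method are hypothetical and do not exist in NumPy:

import numpy as np

def asduckarray(obj):
    if hasattr(obj, "__asduckarray__"):       # hypothetical protocol method
        return obj.__asduckarray__()
    if hasattr(obj, "__array_function__"):    # ndarray or duck array: pass through
        return obj
    return np.asarray(obj)                    # lists, tuples, scalars -> ndarray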

@pentschev
Contributor

pentschev commented Aug 19, 2020

A few comments on the 4 points from the description:

  1. This currently follows the same pattern as __array_function__, where dispatching on a list would return an np.ndarray, for example:

    >>> import numpy as np
    >>> np.ones_like([1, 2, 3])
    array([1, 1, 1])
    >>> np.concatenate(([1, 2], (3, 4)))
    array([1, 2, 3, 4])

    I think changing that would go against the behavior in the rest of __array_function__.

  2. I'm still not certain whether this is a job for NumPy; maybe it would make more sense for the application to handle that? For example, Dask may want to implement a da.common_arraylike and decide what to do if the user passes (cupy_array, sparse_array), or handle any other special cases it wants to support.

  3. The naming like= was mainly chosen, and somewhat loosely agreed upon, given its interchangeability with the name of empty_like and derived functions, where like= behaves similarly to the intent that the _like suffix has in those functions.

  4. I originally mentioned np.linspace in NEP-35 because I failed to notice that it already dispatches on __array_function__; that mention has been removed in NEP: Adjust NEP-35 to make it more user-accessible #17093.

@rgommers
Member

I'm still not certain whether this is a job for NumPy; maybe it would make more sense for the application to handle that?

I agree, a lot of effort was spent on making this work with weird corner cases, but why it was a good idea to do so wasn't spelled out clearly anywhere. There may be a use case where one would like to mix something like a Pint array with a numpy array (not sure if that's a subclass), but things like mixing cupy and sparse arrays should just error out.

@rgommers
Member

  3. The naming like= was mainly chosen, and somewhat loosely agreed upon, given its interchangeability with the name of empty_like and derived functions, where like= behaves similarly to the intent that the _like suffix has in those functions.

That's maybe a little deceiving rather than helpful. One takes over the shape and dtype attributes, the other only the array type.
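
A small illustration of that difference, assuming NumPy >= 1.20 and a CuPy installation that implements both __array_function__ and the like= dispatch:

import numpy as np
import cupy

ref = cupy.arange(6, dtype=cupy.float32).reshape(2, 3)

np.empty_like(ref)               # cupy.ndarray; shape (2, 3) and float32 taken from ref
np.asarray([1, 2, 3], like=ref)  # cupy.ndarray; only the array *type* comes from ref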

@rgommers
Member

I think changing that would go against the behavior in the rest of __array_function__.

It's very different though: like= is a keyword in functions that's clearly user-visible, and therefore should have unambiguous behaviour.

The __array_function__ link is of secondary importance; that's not what a user sees - it's an implementation detail. Saying like=list and then not getting a list back is odd.

@pentschev
Contributor

I'm still not certain whether this is a job for NumPy; maybe it would make more sense for the application to handle that?

I agree, a lot of effort was spent on making this work with weird corner cases, but why it was a good idea to do so wasn't spelled out clearly anywhere. There may be a use case where one would like to mix something like a Pint array with a numpy array (not sure if that's a subclass), but things like mixing cupy and sparse arrays should just error out.

I'm not sure if you're saying common_arraylike should or should NOT be part of NumPy. I agree some specific mixing isn't always desired, but it's still unclear whether this is NumPy's job to decide or not.

@pentschev
Contributor

  3. The naming like= was mainly chosen, and somewhat loosely agreed upon, given its interchangeability with the name of empty_like and derived functions, where like= behaves similarly to the intent that the _like suffix has in those functions.

That's maybe a little deceiving rather than helpful. One takes over the shape and dtype attributes, the other only the array type.

Yes, but taking shape and dtype is part of empty_like's implementation; a downstream library could still choose not to implement those attributes, or to ignore them.

@pentschev
Contributor

I think changing that would go against the behavior in the rest of __array_function__.

It's very different though: like= is a keyword in functions that's clearly user-visible, and therefore should have unambiguous behaviour.

The __array_function__ link is of secondary importance; that's not what a user sees - it's an implementation detail. Saying like=list and then not getting a list back is odd.

But saying np.array(..., like=list) would also be deceiving then: is np.array supposed to give you an array or not?

Note that if we error when like=list, it would render like= virtually useless in trying to make code more agnostic; consider the following example:

def my_pad(arr, padding):
    padding = np.asarray(padding, like=arr)
    return np.concatenate([padding, arr, padding])

my_pad([1, 2, 3], [0, 0])  # Errors because `arr` is a `list`

@rgommers
Member

But saying np.array(..., like=list) would also be deceiving then: is np.array supposed to give you an array or not?

  • np.array is supposed to give you an array.
  • When you use like= I expect "an array of type given by the like= parameter".
  • Returning "an array of type list" makes no sense. That's exactly why it seems logical to raise a TypeError instead.

Note that if we error when like=list, it would render like= virtually useless in trying to make code more agnostic; consider the following example:
...

That's exactly why @seberg talks about __duckarray__.

@rgommers
Member

Returning "an array of type list" makes no sense. That's exactly why it seems logical to raise a TypeError instead.

For more context: for creating new libraries, it would be great if there were a clean way of accepting multiple array types, but not accepting any kind of sequence or generator. So separating the ultra-flexible (and error-prone) array_like stuff from the support for multiple array types seems like a good idea.

@rgommers
Member

I'm not sure if you're saying common_arraylike should or should NOT be part of NumPy. I agree some specific mixing isn't always desired, but it's still unclear whether this is NumPy's job to decide or not.

I'm saying "I agree with you, it's unclear". It would be nice to go back and retroactively do for NEP 18 what you just did for your NEP - add motivation, scope, use cases. That would sort this out.

@seberg
Member Author

seberg commented Aug 21, 2020

I think the like argument discussion is a bit tricky, partially because we either need extremely complete use cases (which are difficult), or we need to sort out the basic concepts and actually map out the differences a bit theoretically (which is also difficult). Ideally, we probably should have both!

The difference in the like argument seems a lot like the difference between implicit and explicit that NEP 37 details. The current (no error) version extends the implicit dispatching. The version which errors for non-array-object inputs fits much better with an explicit concept, where the user picks explicitly what they want.

Now, you may be arguing: that doesn't make sense, both versions pick the output explicitly here! That is true, because there is only a single argument, making implicit and explicit pretty much the same thing.
Thus my opinion on the implementation PR is that we probably don't need to worry too much about it.

But, realizing that differences should show up for multiple arguments, we can remember np.linspace (which was even included in a first NEP draft!):

np.linspace(1, 2, num=10000, like=cupy_array)

very much looks like it should work, but doesn't, because we force you to spell it:

np.linspace(cupy.array(1), 2, num=10000)

Adding a like= argument to linspace is interesting, because the desired behaviour becomes muddled:

np.linspace(cupy.array(1), 2, num=10000, like=numpy_array)

looks a lot like "Try to create a NumPy array, or fail" (although I am sure you can give meaning to returning a cupy array here). But:

np.linspace(cupy.array(1), 2, num=10000, like=[1, 2, 3])

is probably something we can agree on being unclear as to what should be expected.

@seberg
Member Author

seberg commented Aug 21, 2020

But, that in large part circles back to the NEP 37 "We need something explicit" argument. So I don't know if it helps with the specific issue...

With respect to NEP 37, I think a problem is that we do not know how much we need explicit dispatching. E.g. we could also allow writing the above as:

np.call_dispatched(np.linspace, cupy_array)(1, 2, num=1000)

instead of adding like=. That looks like a kludge, but maybe it could be propped into many of the remaining holes where we want explicit dispatching but __array_function__ doesn't offer it. (Although explicit vs. implicit was just one consideration where NEP 37 differs.)
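
Purely as an illustration, a sketch of how such a hypothetical call_dispatched helper could be built on top of the existing protocol; neither the name nor the function is NumPy API:

def call_dispatched(func, like):
    # Return a callable that forces dispatch of `func` through
    # `like.__array_function__` (NEP 18 signature: func, types, args, kwargs).
    def wrapper(*args, **kwargs):
        return like.__array_function__(func, (type(like),), args, kwargs)
    return wrapper

# e.g. call_dispatched(np.linspace, cupy_array)(1, 2, num=1000)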

@rgommers
Member

Let me be more explicit on this example:

def my_pad(arr, padding):
    padding = np.asarray(padding, like=arr)
    return np.concatenate([padding, arr, padding])

This my_pad would accept multiple array types, and nothing else. This is how I think we'd prefer to write future libraries like SciPy or scikit-learn (be more strict than currently, don't accept pretty much anything for array_like) - that's already how things built on top of PyTorch, TensorFlow, CuPy etc. tend to work.

Also think of the situation where one would like to write a new strict library like that - if like= is too forgiving, that is annoying to do - it would need an extra check like

if not hasattr(arr, '__array_function__'):
    raise TypeError("arr isn't a supported array type")

Now, for existing libraries that already use asarray and accept array_like, one would have to write this like:

def my_pad(arr, padding):
    arr = np.asduckarray(arr)   # convert array_like's to numpy arrays
    padding = np.asarray(padding, like=arr)
    return np.concatenate([padding, arr, padding])

When you're modifying existing code to add like=, adding a separate one-line check for array_like's is not much effort.

The difference in the like argument seems a lot like the difference between implicit and explicit that NEP 37 details.
....
But, that in large part circles back to the NEP 37 "We need something explicit" argument

It doesn't; it's completely unrelated. I suggest keeping the discussion to like= and __array_function__ and making that a consistent and well-motivated design.

Saying that like= is implicit is odd; it's a very explicit request. like=instance_of_some_type should give back an instance of some_type.

np.linspace(cupy.array(1), 2, num=10000, like=[1, 2, 3])
is probably something we can agree on being unclear as to what should be expected.

Yes, and if it's ambiguous like that it should error out.

@pentschev
Contributor

I've been thinking about this the past few days, and I see your point @rgommers in dealing with existing vs future libraries. Generally speaking, I understand the point and have no direct objections to requiring something like the following, as per your example:

arr = np.asduckarray(arr)   # convert array_like's to numpy arrays
padding = np.asarray(padding, like=arr)

However, I don't really like that within Dask, for example, we would probably need to call np.asduckarray everywhere and pass the result to like=, which would probably overcomplicate things and eventually lead to many places where people just forget the np.asduckarray call before np.some_array_creation(..., like=arr). Granted, like= is probably easy to forget as well, but requiring both is going to be burdensome for maintainers. The second point is that we'll need to create that new array for all non-arrays, thus increasing the memory footprint for an array that is not even going to be used apart from verifying its type, which seems odd too.

A suboptimal solution would be to have two arguments, like= (for future libraries) and duck_like= (for existing libraries, where this would internally be forgiving to non-arrays), but I emphasize that is suboptimal and very likely undesired as well. Perhaps there's some alternative solution, but I can't think of one now.

@rgommers
Member

However, I don't really like that within Dask, for example, we would probably need to call np.asduckarray everywhere and pass the result to like=,

My intuition says you're not having to add an extra asduckarray call, but are amending an existing check. But I'm not sure - can you point to some examples of code in Dask where you'd like to add like=?

@pentschev
Contributor

However, I don't really like that within Dask, for example, we would probably need to call np.asduckarray everywhere and pass the result to like=,

My intuition says you're not having to add an extra asduckarray call, but are amending an existing check. But I'm not sure - can you point to some examples of code in Dask where you'd like to add like=?

Certainly, for example https://github.com/dask/dask/blob/51509339c1cc476f5f96ae78e61daf125037ca20/dask/array/slicing.py#L878 . Note that it explicitly checks if it's an instance of list or np.ndarray in the line just above, which would probably require extending with or hasattr(ind, "__array_function__") or something similar.

@rgommers
Member

That code is pretty weird; I hope there aren't too many explicit isinstance checks like that. I imagine you want to write it like

elif isinstance(ind, (list, np.ndarray)) or hasattr(ind, "__array_function__"):
    x = np.asanyarray(ind, like=ind)
    if x.dtype == bool:
        ...

and avoid turning a duckarray that's not a subclass into a numpy.ndarray with asanyarray?

There's no reason to use like= at all there; it looks to me like that code would be better written as:

if isinstance(ind, list):
    ind = np.asarray(ind)

if hasattr(ind, "__array_function__"):
    if ind.dtype == bool:
        ...

It likely doesn't matter much, but that way it's also faster - avoids the asanyarray call for array input.

@pentschev
Contributor

Sorry @rgommers for the late reply, I'm still OOO until the end of this week and have been forcing myself to reduce screen time.

I agree that the example above is weird, and perhaps a better one would be in https://github.com/dask/dask/blob/51509339c1cc476f5f96ae78e61daf125037ca20/dask/array/slicing.py#L539-L542 , where we have index being an array (e.g., cupy.ndarray) and chunks as a tuple, and could rewrite it as follows:

    index = np.asanyarray(index)
    cum_chunks = cached_cumsum(chunks)
    cum_chunks = np.asarray(cum_chunks, like=index)

    chunk_locations = np.searchsorted(cum_chunks, index, side="right")

@rgommers
Member

rgommers commented Sep 1, 2020

@pentschev no worries at all, reduced screen time sounds like an excellent idea.

index = np.asanyarray(index)

That will still convert to numpy.ndarray, so that should be either

index = np.asanyarray(index, like=index)

or

# note: this won't quite do asanyarray for older NumPy versions according to NEP 30, so may need
# a tweak - maybe add an as_subclass=False keyword?
index = np.duckarray(index)

The first option is very weird, so I'd say the duckarray one is preferred.

Also there is no cupy.searchsorted yet, but once there is, your code looks about right. The extra np.asarray is indeed needed, to turn tuples into arrays rather than let numpy.searchsorted do that internally. However, it is needed either way, independent of the decision on the more strict like= that would raise an exception as I propose. So it's still not a counter-example of the strict mode being more burdensome for Dask.

@pentschev
Contributor

index = np.asanyarray(index)

That will still convert to numpy.ndarray, so that should be either

index = np.asanyarray(index, like=index)

Sorry @rgommers, indeed, what I meant was index = np.asanyarray(index, like=index); I was focusing specifically on the chunks part and ignored index.

Also there is no cupy.searchsorted yet, but once there is, your code looks about right.

I wasn't focusing on the downstream functionality either, but in this case it is implemented; see cupy/cupy#2726. The documentation was missing and got fixed in cupy/cupy#3908, though.

The extra np.asarray is indeed needed, to turn tuples into arrays rather than let numpy.searchsorted do that internally. However, it is needed either way, independent of the decision on the more strict like= that would raise an exception as I propose. So it's still not a counter-example of the strict mode being more burdensome for Dask.

I agree that wasn't the best example; I would have to look more carefully at all use cases in Dask to quantify how much of a burden that would be. Doing another quick search, one other example I can find that I think would match that is https://github.com/dask/dask/blob/377965addc6be168ad1c697999ded6f9abdf6cce/dask/array/core.py#L3964 . Consider, for instance, that we decide to follow the same ordering as __array_function__, as described in NEP-18:

NumPy will gather implementations of __array_function__ from all specified inputs and call them in order: subclasses before superclasses, and otherwise left to right. Note that in some edge cases involving subclasses, this differs slightly from the current behavior of Python.

If that's how we should handle this case, we would then rewrite it similarly to the following:

args = [np.asarray(a, like=args[0]) if isinstance(a, (list, tuple)) else a for a in args]

The leftmost argument in that case would dominate the array type, and if it's a list or tuple, then we would need to handle the TypeError, probably with np.duckarray, which would be a bit burdensome. Perhaps this case is a bit of a stretch though, as I don't know if left-to-right ordering makes the most sense for this particular case.
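
For concreteness, one way the fallback described above could be spelled under a strict like= (np.duckarray is only the NEP 30 proposal, so np.asarray stands in for it here):

import numpy as np

def coerce_args(args):
    ref = args[0]
    if not hasattr(ref, "__array_function__"):
        # A strict like= would raise TypeError for a plain list/tuple reference,
        # so fall back to an ndarray reference (np.duckarray per NEP 30 would be
        # the nicer spelling, but it is only a proposal).
        ref = np.asarray(ref)
    return [np.asarray(a, like=ref) if isinstance(a, (list, tuple)) else a
            for a in args]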

@pentschev
Contributor

Sorry for the silence here @rgommers and @seberg, but I think I'm now confident about picking this back up to finalize it for 1.20.

I did quite some testing of NEP-35 in Dask (see dask/dask#6738) and it was very positive overall. I was able to cover pretty much everything, with a couple of exceptions (random.choice and some linalg, details in the PR) for unrelated reasons. With that said, I'm confident NEP-35 is able to push Dask's coverage of __array_function__ forward; currently I'm not aware of any __array_function__ limitations that are still holding Dask back.

I also wasn't able to find any cases where lists and tuples would pose a problem with like=, nor make the code more complex due to that, at least not in the already existing Dask tests or the ones that I introduced. Given that, I would be equally supportive of either strict or non-strict usage for those types with like=.

Apart from that, I just opened #17678 . The PR suggests allowing the passage of like= downstream; refer to its description, and let's please try to keep the discussion about that particular change in the PR itself, so nobody who's involved with it misses any important information by it being discussed here.

Finally, I have no relevant updates to my comment in #17075 (comment) -- and the follow-up discussion -- that address the original description of this issue. I see only that a new item (number 5) was added since my reply, addressing whether we should dispatch only np.array or aliases such as np.asarray too, as we briefly discussed in #16935. I still believe we should dispatch aliases as well, because downstream libraries are not required to implement such aliases as functions of np.array itself; some relevant comments begin in #16935 (comment) .

@seberg
Member Author

seberg commented Nov 12, 2020

We have to make a final decision soon for the 1.20 release. From my perspective we were tending towards the strict like= argument (it seems like gh-17678 is unnecessary and superseded now)?

My problem is that I think we need to do something, but I am not sure if this can be a final right solution. I don't really see the like argument being a major hassle/addition API-wise; it should be fairly easy to deprecate again (if annoying, I admit), and it seems like a major improvement for __array_function__, which is not going to go away in the midterm. And even if it is limited to the Dask/CuPy world, I guess the impact may effectively be pretty big, but I can't really judge it.

Dispatching aliases seems fine to me. Did we already have a decision on whether the tuple syntax is useful here?

We could plan a meeting dedicated to these decisions end of next week or so, if it helps. We definitely have to call this experimental, and I am not quite sure what it would mean for dask if we need to modify behaviour.

I am more than happy to do the code fixups, getting the decisions out of the way is the tricky part here.

@hameerabbasi
Contributor

We could plan a meeting dedicated to these decisions end of next week or so, if it helps. We definitely have to call this experimental, and I am not quite sure what it would mean for dask if we need to modify behaviour.

A meeting with all interested parties seems like the best way to go about this.

@pentschev
Contributor

We have to make a final decision soon for the 1.20 release. From my perspective we were tending towards the strict like= argument (it seems like gh-17678 is unnecessary and superseded now)?

I think that seems to be a preference for you and @rgommers at least. From my perspective, either way will work, so I'll defer the final solution to both of you.

My problem is that I think we need to do something, but I am not sure if this can be a final right solution. I don't really see the like argument being a major hassle/addition API-wise; it should be fairly easy to deprecate again (if annoying, I admit), and it seems like a major improvement for __array_function__, which is not going to go away in the midterm. And even if it is limited to the Dask/CuPy world, I guess the impact may effectively be pretty big, but I can't really judge it.

Indeed, this is a major improvement for Dask especially; it will finally allow us to have complete (or at least complete to the extent of what I've seen so far) coverage of dask.array with __array_function__.

Dispatching aliases seems fine to me. Did we already have a decision on whether the tuple syntax is useful here?

I'm not sure I understand what you mean by tuple syntax; are you referring to things like like=(arr1, arr2)?

We could plan a meeting dedicated to these decisions end of next week or so, if it helps. We definitely have to call this experimental, and I am not quite sure what it would mean for dask if we need to modify behaviour.

I'd be ok with that, if others feel that's necessary.

I am more than happy to do the code fixups, getting the decisions out of the way is the tricky part here.

I can also take care of, or help with, that if we have a decision before December. I'll be moving in early-/mid-December, so my availability may be compromised then.

@eric-wieser
Member

@eric-wieser curious do you have a preference in any case? My tendency is still towards strict (and it seems like the more conservative start)

My preference here is towards permissive, but I agree it's easy to make things more permissive later and hard to make them more conservative.

@seberg seberg modified the milestones: 1.20.0 release, 1.21.0 release Nov 23, 2020
@seberg
Member Author

seberg commented Nov 23, 2020

@charris I think we are OK for 1.20; I moved the milestone. I am sure we should revisit it then, maybe even go with the permissive one.

@seberg
Member Author

seberg commented Dec 3, 2020

This may be slightly annoying:

import numpy as np

class myarr(np.ndarray):
    pass

arr = np.empty(5).view(myarr)
res = np.asarray(3, like=arr)

This will not use subok and will return a base-class ndarray. It basically only works due to "optimizations" right now. This also means that the following pattern, which used to work for subclasses, is broken for the like= argument:

def __array_function__(self, func, types, args, kwargs):
    return super().__array_function__(func, types, args, kwargs)

although I doubt there was ever much intention that the above really should work.

The reason why is this:

  • __array_function__ requires there to be a non-dispatching version of a function, currently stored as public_api._implementation
  • The like= argument on np.array must be defined in C; the Python wrapper we use is just unacceptably slow. But providing two implementations in C is annoying, and storing it as public_api._implementation would require custom callable objects.

So, the current solution is inconvenient for these C-implemented functions. __array_ufunc__ solves this differently: it guarantees that the dispatching will never be performed on ndarray.__array_ufunc__ (which could be inherited). This removes the need for an _implementation (by removing the possibility of recursive calls). It also disables the ability to use return super().__array_ufunc__ (it would cause a recursive call).

Basically, the NumPy implementation is special; for one, it has to avoid actually calling ndarray.__array_function__, as that would be far too slow. So... the simplification gained by avoiding the dance __array_ufunc__ does ends up being a bit of a complication, since it adds the requirement of having a non-dispatching "internal" version of the function around (that requirement costs little in Python, but complicates things for C – and possibly some optimization schemes).
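
For readers less familiar with that machinery, a simplified sketch of the Python-level wrapper as it looked around NumPy 1.20; the real code lives in numpy.core.overrides and in C:

from numpy.core.overrides import implement_array_function  # C-level helper

def array_function_dispatch(dispatcher):
    # Simplified version of the decorator in numpy.core.overrides.
    def decorator(implementation):
        def public_api(*args, **kwargs):
            relevant_args = dispatcher(*args, **kwargs)
            # Hands control to any __array_function__ overrides; with only
            # ndarrays involved, `implementation` is called directly.
            return implement_array_function(
                implementation, public_api, relevant_args, args, kwargs)
        # The non-dispatching version that C-implemented functions lack:
        public_api._implementation = implementation
        return public_api
    return decorator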

@seberg
Member Author

seberg commented Dec 3, 2020

Ok, to be honest, the result of dropping the subclass is fine. What bothers me is probably mainly that it only works due to optimizations. And right now the __array_ufunc__ approach seems a bit easier from an implementation standpoint (i.e. more freedom within NumPy).

@pentschev
Contributor

Honestly, I don't see why this is a problem or annoying. As per both NEP-18 and NEP-35, we're dispatching to the downstream library, and the downstream library is responsible for deciding what the appropriate behavior is given its capabilities. For instance, neither CuPy nor Dask implements subok; if we were to force that, it wouldn't be possible to use np.asarray at all.

@seberg
Member Author

seberg commented Dec 3, 2020

Yeah, behaviour wise it probably doesn't matter much. The way __array_ufunc__ works gives us more freedom. I.e. np.array is defined in C, but right now it would be very inconvenient to define np.may_share_memory(a, b) in C.

@charris
Member

charris commented May 4, 2021

ISTR this going by. What is the current status?

@charris charris added the triage review Issue/PR to be discussed at the next triage meeting label May 4, 2021
@seberg
Member Author

seberg commented May 5, 2021

@pentschev do you think there is any usage we have to reconsider here, or must clarify? There is a slippery slope to this becoming de facto non-experimental API unless we revisit it soon enough. (Ideally by accepting the NEP as final.)

@pentschev
Contributor

So far I haven't really encountered anything we need to clarify or reconsider. Examples of real usage of NEP-35 can be found in Dask, particularly dask/dask#6738 which introduces support for NEP-35 and several functions that were impossible to implement before that, as well as percentile support (dask/dask#7162), lstsq/cholesky (dask/dask#7563) and bincount slicing (dask/dask#7391).

IMO, at this time NEP-35 as it stands is already successful. As we approach the NumPy 1.21 release, I think it would be great to consider accepting it as final, unless something presents itself before then.

@charris charris removed the triage review Issue/PR to be discussed at the next triage meeting label May 6, 2021
@charris charris changed the title NEP 35: Finalize like= argument behaviour before 1.20 release NEP 35: Finalize like= argument behaviour before 1.21 release May 6, 2021
@seberg
Member Author

seberg commented May 6, 2021

I guess the main volatility may have been around whether to relax the like= to allow non-array-likes. But, that is the one thing we can also just do without much concern I think.

@pentschev
Contributor

I guess the main volatility may have been around whether to relax the like= to allow non-array-likes. But, that is the one thing we can also just do without much concern I think.

Apologies for the late reply. I agree we could adjust later, but so far I'm not sure even that would be necessary. Therefore, I'm +1 on just accepting NEP-35 as is at this time, and we can revisit it later if a real use case shows up in the future.

seberg added a commit to seberg/numpy that referenced this issue Jun 7, 2021
This accepts NEP 35 as final.  There has been no discussion about it
in a long time.  The current mode is strict about type input
(`like=` must be an array-like).  So that most of the "open" points
are OK to remain open.
Unless we need to discuss the name `like` or the fact that we pass
an array-like itself, the previously noted open points numpygh-17075
all seem not very relevant anymore.
charris pushed a commit to charris/numpy that referenced this issue Jun 7, 2021
@charris charris closed this as completed Jun 8, 2021
@charris charris modified the milestones: 1.22.0 release, 1.21.0 release Jun 8, 2021