BUG: Fix all-NaT when ArrowEA.astype to categorical #62055
Conversation
pandas/core/arrays/base.py
Outdated
        dtype = pandas_dtype(dtype)
        if dtype == self.dtype:
            if not copy:
                return self
            else:
                return self.copy()

        if isinstance(dtype, CategoricalDtype):
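For context, the base-class path being patched here is the generic conversion of an array to a categorical: the resulting `Categorical` has the array's distinct values as categories and integer codes indexing into them. A minimal sketch using only public pandas API:

```python
import pandas as pd

# astype("category") constructs a Categorical: the categories are the
# distinct values and the codes are integer positions into them.
arr = pd.array([1, 2, 1], dtype="Int64")
cat = arr.astype("category")
assert list(cat.codes) == [0, 1, 0]
```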
Pretty weird to do this in the base class. Can you track down where the actual problem is? It could be hiding other bugs, e.g. in `Categorical._from_sequence`.
Ah okay, I'll investigate further.
pandas/core/arrays/categorical.py
Outdated
            if is_datetime64_any_dtype(cat_dtype) or is_timedelta64_dtype(
                cat_dtype
            ):
                values = values.to_numpy()
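A note on the reviewer's dt64tz concern: `is_datetime64_any_dtype` matches tz-aware datetimes as well as naive ones, so this branch also triggers for `dt64tz` values, where `to_numpy()` can be costly. A quick sketch of what the predicates match:

```python
import numpy as np
import pandas as pd
from pandas.api.types import is_datetime64_any_dtype, is_timedelta64_dtype

# is_datetime64_any_dtype matches naive *and* tz-aware datetime dtypes,
# so the branch above fires for dt64tz too.
assert is_datetime64_any_dtype(np.dtype("datetime64[ns]"))
assert is_datetime64_any_dtype(pd.DatetimeTZDtype(tz="UTC"))
assert is_timedelta64_dtype(np.dtype("timedelta64[ns]"))
```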
This can be expensive, particularly for dt64tz. Have you figured out why the problem is happening? I.e., if you step through the code, where does the first wrong result show up?
In the first example, the code does not think they are comparable in `should_compare`.
So we should start by fixing `should_compare`?
Sorry I missed this earlier. The fix I pushed addresses that and resolves the second issue too.
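To illustrate the failure mode under discussion (a sketch using only public API): when the comparability check wrongly reports the dtypes as incomparable, the indexer comes back all `-1`, and a code of `-1` means "missing" in a `Categorical`, which is exactly the all-NaT result:

```python
import numpy as np
import pandas as pd

cats = pd.DatetimeIndex(["2016-01-01", "2016-01-02"])
# If the dtypes are (wrongly) deemed incomparable, get_indexer returns -1
# for every element; code -1 renders as missing, hence the all-NaT output.
codes = np.array([-1, -1], dtype=np.intp)
cat = pd.Categorical.from_codes(codes, categories=cats)
assert cat.isna().all()
```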
@@ -3674,6 +3674,14 @@ def get_indexer(
        orig_target = target
        target = self._maybe_cast_listlike_indexer(target)

        from pandas.api.types import is_timedelta64_dtype
This can go at the top of the file.
@@ -3674,6 +3674,14 @@ def get_indexer(
        orig_target = target
        target = self._maybe_cast_listlike_indexer(target)

        from pandas.api.types import is_timedelta64_dtype

        if target.dtype == "string[pyarrow]" and is_timedelta64_dtype(self.dtype):
I don't think we generally do this implicit casting anymore.
That's fair, but the array is initialized as `string[pyarrow]` instead of a pyarrow time scalar. I'm not sure how to compare a string with `timedelta64` without some coercion step.
Looking at this more closely, I was wrong about not casting strings.
tdi = pd.date_range("2016-01-01", periods=3) - pd.Timestamp("2016-01-01")
target = pd.array(["0 days"], dtype="string[pyarrow]")
>>> tdi.get_indexer(["0 days"])
array([0])
>>> tdi._maybe_cast_listlike_indexer(["0 days"])
TimedeltaIndex(['0 days'], dtype='timedelta64[ns]', freq=None)
>>> tdi._maybe_cast_listlike_indexer(target)
TimedeltaIndex(['0 days'], dtype='timedelta64[ns]', freq=None)
If a patch is needed, can it go in `_maybe_cast_listlike_indexer`?
Yeah, I was thinking about using a `maybe_`-style helper function too.
@@ -384,6 +384,18 @@ def _is_comparable_dtype(self, dtype: DtypeObj) -> bool:
        if self.tz is not None:
            # If we have tz, we can compare to tzaware
            return isinstance(dtype, DatetimeTZDtype)

        from pandas import ArrowDtype
This can go at the top of the file.
        return (
            pa.types.is_date32(dtype.pyarrow_dtype)
            or pa.types.is_date64(dtype.pyarrow_dtype)
I think timestamp is comparable, but date is not.
The original issue was with pyarrow date dtypes, which convert fine via `astype`, so I think they should be treated as comparable here.
dti = pd.date_range("2016-01-01", periods=3)
item = dti[0].date()
>>> (item == dti)[0]
np.False_
We don't have a non-pyarrow date dtype, but if we did, it would not be considered comparable to datetime64.
I think in that case the question is whether we want `astype` with categoricals to succeed here, or whether `astype` between pyarrow date and datetime64 should be disallowed for consistency.
Do we have analogous special-casing for the non-pyarrow dt64 that I'm missing?
Not that I know of
Then I expect it shouldn't be necessary here. I'll take a closer look on Monday.
I think the relevant special-casing is in `Index._maybe_downcast_for_indexing`. Take a look at the `inferred_type` checks.
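The `inferred_type` values those checks key off can be inspected directly (a sketch):

```python
import pandas as pd

idx = pd.Index(["0 days"])
tdi = pd.timedelta_range("0 days", periods=3)
# The special-casing dispatches on Index.inferred_type, e.g.:
assert idx.inferred_type == "string"
assert tdi.inferred_type == "timedelta64"
```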
@@ -160,3 +163,20 @@ def test_astype_category_readonly_mask_values(self):
        result = arr.astype("category")
        expected = array([0, 1, 2], dtype="Int64").astype("category")
        tm.assert_extension_array_equal(result, expected)

    def test_arrow_array_astype_to_categorical_dtype_temporal(self):
Can you also test the intermediate steps that used to fail?
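One of those intermediate steps from the thread can be asserted directly; a sketch showing the plain-list target (the `string[pyarrow]`-target variant is the one that used to fail and should resolve to the same positions):

```python
import pandas as pd

tdi = pd.date_range("2016-01-01", periods=3) - pd.Timestamp("2016-01-01")
# The list-target lookup from the discussion: "0 days" is at position 0.
assert list(tdi.get_indexer(["0 days"])) == [0]
```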
- Added type annotations to new arguments/methods/functions.
- Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.