[MRG] Ignore and pass-through NaNs in RobustScaler and robust_scale #11308
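In short, with this change fit computes its statistics on the non-NaN values of each column and transform leaves NaNs untouched. A minimal illustration (assuming a scikit-learn release that includes this change, i.e. 0.20 or later):

import numpy as np
from sklearn.preprocessing import RobustScaler

X = np.array([[1.0, np.nan],
              [2.0, 3.0],
              [3.0, 4.0],
              [np.nan, 5.0]])

scaler = RobustScaler().fit(X)  # median and quantiles are computed while ignoring NaNs
print(scaler.center_)           # finite values, no NaN
print(scaler.transform(X))      # NaN entries in X are passed through unchanged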
Merged
Changes from all commits (16 commits)
b1c5a21  EHN accept to fit sparse matrices (glemaitre)
196892e  fix (glemaitre)
36273e0  iter (glemaitre)
ed25163  TST check attributes and corner case sparse matrix (glemaitre)
d97fee5  DOC whats new entry (glemaitre)
296bfd0  TST check equivalence between sparse and dense (glemaitre)
b6f1df6  FIX back-port nanmedian (glemaitre)
de147a4  Merge remote-tracking branch 'origin/master' into nan_robust_scaler (glemaitre)
db7cb4b  TST add more test case for sparse matrices (glemaitre)
f884532  TST additional test for random sparse matrix (glemaitre)
0e4e8de  Merge remote-tracking branch 'origin/master' into nan_robust_scaler (glemaitre)
7f22b3f  address comments (glemaitre)
cea6b8b  Merge remote-tracking branch 'origin/master' into nan_robust_scaler (glemaitre)
eecb39a  address comments (glemaitre)
02a9811  joel comments (glemaitre)
86cf707  Update data.py (glemaitre)
@@ -24,7 +24,7 @@
from ..utils import check_array
from ..utils.extmath import row_norms
from ..utils.extmath import _incremental_mean_and_var
from ..utils.fixes import boxcox, nanpercentile
from ..utils.fixes import boxcox, nanpercentile, nanmedian
from ..utils.sparsefuncs_fast import (inplace_csr_row_normalize_l1,
                                      inplace_csr_row_normalize_l2)
from ..utils.sparsefuncs import (inplace_column_scale,
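nanmedian is imported from sklearn.utils.fixes rather than from NumPy directly because the oldest NumPy releases supported at the time predate np.nanmedian (hence the "FIX back-port nanmedian" commit). A rough sketch of what such a backport can look like, not the actual code in utils.fixes, and covering only the 2-D, axis=0 case needed here:

import numpy as np

if hasattr(np, 'nanmedian'):  # np.nanmedian exists from NumPy 1.9 onwards
    nanmedian = np.nanmedian
else:
    def nanmedian(X, axis=0):
        # Column-wise median that ignores NaNs (only 2-D input with axis=0 is handled).
        X = np.asarray(X, dtype=float)
        medians = []
        for column in X.T:
            column = column[~np.isnan(column)]
            medians.append(np.median(column) if column.size else np.nan)
        return np.array(medians)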
@@ -1092,18 +1092,6 @@ def __init__(self, with_centering=True, with_scaling=True,
        self.quantile_range = quantile_range
        self.copy = copy

    def _check_array(self, X, copy):
        """Makes sure centering is not enabled for sparse matrices."""
        X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy,
                        estimator=self, dtype=FLOAT_DTYPES)

        if sparse.issparse(X):
            if self.with_centering:
                raise ValueError(
                    "Cannot center sparse matrices: use `with_centering=False`"
                    " instead. See docstring for motivation and alternatives.")
        return X

    def fit(self, X, y=None):
        """Compute the median and quantiles to be used for scaling.

@@ -1113,39 +1101,60 @@ def fit(self, X, y=None):
            The data used to compute the median and quantiles
            used for later scaling along the features axis.
        """
        if sparse.issparse(X):
            raise TypeError("RobustScaler cannot be fitted on sparse inputs")
        X = self._check_array(X, self.copy)
        # at fit, convert sparse matrices to csc for optimized computation of
        # the quantiles
        X = check_array(X, accept_sparse='csc', copy=self.copy, estimator=self,
                        dtype=FLOAT_DTYPES, force_all_finite='allow-nan')

        q_min, q_max = self.quantile_range
        if not 0 <= q_min <= q_max <= 100:
            raise ValueError("Invalid quantile range: %s" %
                             str(self.quantile_range))

        if self.with_centering:
            self.center_ = np.median(X, axis=0)
            if sparse.issparse(X):
                raise ValueError(
                    "Cannot center sparse matrices: use `with_centering=False`"
                    " instead. See docstring for motivation and alternatives.")
            self.center_ = nanmedian(X, axis=0)
        else:
            self.center_ = None

        if self.with_scaling:
            q_min, q_max = self.quantile_range
            if not 0 <= q_min <= q_max <= 100:
                raise ValueError("Invalid quantile range: %s" %
                                 str(self.quantile_range))
            quantiles = []
            for feature_idx in range(X.shape[1]):
                if sparse.issparse(X):
                    column_nnz_data = X.data[X.indptr[feature_idx]:
                                             X.indptr[feature_idx + 1]]
                    column_data = np.zeros(shape=X.shape[0], dtype=X.dtype)
                    column_data[:len(column_nnz_data)] = column_nnz_data
                else:
                    column_data = X[:, feature_idx]

            q = np.percentile(X, self.quantile_range, axis=0)
            self.scale_ = (q[1] - q[0])
                quantiles.append(nanpercentile(column_data,
                                               self.quantile_range))

            quantiles = np.transpose(quantiles)

            self.scale_ = quantiles[1] - quantiles[0]
            self.scale_ = _handle_zeros_in_scale(self.scale_, copy=False)
        else:
            self.scale_ = None

        return self

    def transform(self, X):
        """Center and scale the data.

        Can be called on sparse input, provided that ``RobustScaler`` has been
        fitted to dense input and ``with_centering=False``.

        Parameters
        ----------
        X : {array-like, sparse matrix}
            The data used to scale along the specified axis.
        """
        if self.with_centering:
            check_is_fitted(self, 'center_')
        if self.with_scaling:
            check_is_fitted(self, 'scale_')
        X = self._check_array(X, self.copy)
        check_is_fitted(self, 'center_', 'scale_')
        X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy,
                        estimator=self, dtype=FLOAT_DTYPES,
                        force_all_finite='allow-nan')
(Inline review comment on the check_array call above: FYI this affected dask-ml. Previously the logic in this transformer was equally applicable to numpy and dask arrays. Now it auto-converts dask arrays to numpy arrays. A toy illustration follows after this hunk.)

        if sparse.issparse(X):
            if self.with_scaling:
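As an aside on the dask-ml remark above: check_array coerces generic (non-sparse) array-likes through NumPy, so a dask array ends up materialised in memory. A toy illustration of the conversion being described (requires dask; this snippet is only illustrative and is not part of the PR):

import numpy as np
import dask.array as da

X = da.ones((10, 3), chunks=(5, 3))
X_np = np.asarray(X)   # the kind of coercion performed during input validation
print(type(X_np))      # <class 'numpy.ndarray'>: the dask array is computed and converted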
@@ -1165,11 +1174,10 @@ def inverse_transform(self, X):
        X : array-like
            The data used to scale along the specified axis.
        """
        if self.with_centering:
            check_is_fitted(self, 'center_')
        if self.with_scaling:
            check_is_fitted(self, 'scale_')
        X = self._check_array(X, self.copy)
        check_is_fitted(self, 'center_', 'scale_')
        X = check_array(X, accept_sparse=('csr', 'csc'), copy=self.copy,
                        estimator=self, dtype=FLOAT_DTYPES,
                        force_all_finite='allow-nan')

        if sparse.issparse(X):
            if self.with_scaling:
@@ -1242,7 +1250,8 @@ def robust_scale(X, axis=0, with_centering=True, with_scaling=True,
    (e.g. as part of a preprocessing :class:`sklearn.pipeline.Pipeline`).
    """
    X = check_array(X, accept_sparse=('csr', 'csc'), copy=False,
                    ensure_2d=False, dtype=FLOAT_DTYPES)
                    ensure_2d=False, dtype=FLOAT_DTYPES,
                    force_all_finite='allow-nan')
    original_ndim = X.ndim

    if original_ndim == 1:
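To see what the per-column loop in fit does for sparse input: a CSC matrix only stores the nonzero entries of each column, so the column is rebuilt as a dense vector of zeros with the stored values copied in, and the quantiles are then computed with the NaN-aware percentile. Percentiles depend only on the sorted values, so where the padded zeros sit in the vector does not matter. A standalone sketch of that step (using np.nanpercentile in place of the backported helper):

import numpy as np
from scipy import sparse

X = sparse.random(100, 5, density=0.3, format='csc', random_state=0)
quantile_range = (25.0, 75.0)

quantiles = []
for j in range(X.shape[1]):
    nnz = X.data[X.indptr[j]:X.indptr[j + 1]]      # stored values of column j
    column = np.zeros(X.shape[0], dtype=X.dtype)   # start from the implicit zeros
    column[:len(nnz)] = nnz                        # overwrite with the stored values
    quantiles.append(np.nanpercentile(column, quantile_range))

quantiles = np.transpose(quantiles)                # shape (2, n_features)
scale = quantiles[1] - quantiles[0]                # interquartile range per column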
Review comment:
I assume you realise that there should be a more asymptotically efficient way to handle the sparse case, as it should be easy to work out whether a percentile is zero, positive or negative, then adjust the quantile parameter...
But this is fine in the first instance.
Reply:
To be honest, I don't know about it.
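For reference, the idea in the review comment above is to avoid materialising the implicit zeros at all: sort only the stored values of a column, count its zeros, and read the percentile off the virtual sorted column. A rough sketch under those assumptions (no NaN handling, linear interpolation as in np.percentile; not code from this PR):

import numpy as np

def sparse_column_percentile(nnz_data, n_rows, q):
    """Percentile of a sparse column without materialising its implicit zeros."""
    data = np.sort(np.asarray(nnz_data, dtype=float))
    n_zeros = n_rows - data.size                   # implicit zeros of the column
    n_neg = np.searchsorted(data, 0, side='left')  # negatives sort before the zeros

    def value_at(i):
        # i-th entry of the virtual sorted column: [negatives, zeros, positives]
        if i < n_neg:
            return data[i]
        if i < n_neg + n_zeros:
            return 0.0
        return data[i - n_zeros]

    pos = q / 100.0 * (n_rows - 1)                 # same position rule as np.percentile
    lo, hi = int(np.floor(pos)), int(np.ceil(pos))
    frac = pos - lo
    return (1 - frac) * value_at(lo) + frac * value_at(hi)

# Sanity check against the densified column:
# np.percentile(np.concatenate([nnz_data, np.zeros(n_rows - len(nnz_data))]), q)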