[MRG] Create new Mahalanobis mixin #96
Conversation
- Make them inherit from Mahalanobis Mixin, and implement the `metric_` property
- Improve the `metric_` property by checking if it exists and raising the appropriate warning if not
- Make tests work, by replacing `metric()` with `metric_`
metric_learn/base_metric.py
Outdated
    @@ -28,7 +18,7 @@ def transformer(self):
            -------
            L : upper triangular (d x d) matrix
            """
    -       return cholesky(self.metric()).T
    +       return cholesky(self.metric_).T
To access this member, it seems we'd need to inherit from the mahalanobis mixin.
You're right, it should not be there. I was waiting to implement `ExplicitMixin` so that I could move them there, and I left it here to pass tests, but I should rather have copied it in every Metric Learner class to be coherent with inheritance. Anyway, now that we've decided to implement everything in `MahalanobisMixin` for now (I updated the PR message), I'll just need to move `transformer` as well as `transform` into `MahalanobisMixin`.
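A minimal sketch of that layout (editor's illustration; the method bodies are assumptions based on the diff above, not code from the PR):

```python
from numpy.linalg import cholesky


class MahalanobisMixin(object):

    def transformer(self):
        # derive the transformer L from the learned metric M = L^T L
        return cholesky(self.metric_).T

    def transform(self, X):
        # embed points so that euclidean distances in the embedding match
        # the learned Mahalanobis distances in the input space
        return X.dot(self.transformer().T)
```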
metric_learn/base_metric.py
Outdated
            The learned Mahalanobis matrix.
            """

        @property
Is the eventual idea to expose a read-only property `foo.metric`?
I guess I saw the `@property` decorator as a way to enforce children to have a `metric_` attribute, but I forgot that it would make `metric_` read-only. Indeed, I do not see any reason why it would be better to have it read-only, so maybe we could do it another way? Like set a `metric_` argument to `None` in the `__init__`? Or maybe after all there is no need to enforce it: we could just document it, for instance putting `metric_` in `MahalanobisMixin`'s docstring under `Attributes`, and doing it also for children Metric Learners, like for instance in scikit-learn, where `BaseEnsemble` documents an attribute `estimators_` that is also documented in `RandomForestClassifier` (see
https://github.com/scikit-learn/scikit-learn/blob/7a23b2493fbe094db2528114d75edc2e9739da17/sklearn/ensemble/base.py#L60-L85 and https://github.com/scikit-learn/scikit-learn/blob/4d9a12d175a38f2bcb720389ad2213f71a3d7697/sklearn/ensemble/forest.py#L738-L862)
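For illustration, the documentation-only approach could look like this minimal sketch (assuming numpydoc conventions, mirroring the `BaseEnsemble` / `RandomForestClassifier` pattern):

```python
class MahalanobisMixin(object):
    """Mixin for algorithms that learn a Mahalanobis-type metric.

    Attributes
    ----------
    metric_ : numpy.ndarray, shape=(n_features, n_features)
        The learned Mahalanobis matrix.
    """
```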
metric_learn/nca.py
Outdated
    @property
    def metric_(self):
        check_is_fitted(self, 'A_')
        return self.A_.T.dot(self.A_)
It seems a lot of these implementations could be simplified by deferring to `transformer()` and having that method check if things have been fitted.
Yes, I agree. We would then have something like this (looking more like what is currently in master):

    from numpy.linalg import cholesky
    from sklearn.exceptions import NotFittedError
    from sklearn.utils.validation import check_array, _is_arraylike


    class MahalanobisMixin:

        def metric(self):
            if hasattr(self, 'metric_'):
                return self.metric_  # in this case the estimator is fitted
            elif hasattr(self, 'A0') and _is_arraylike(self.A0):
                return check_array(self.A0)
            elif hasattr(self, 'L0') or hasattr(self, 'transformer_'):
                return self.metric_from_transformer(self.transformer())
            else:  # extracted from scikit-learn's check_is_fitted function
                msg = ("This %(name)s instance is not fitted yet, and is neither "
                       "initialized with an explicit matrix. Call 'fit' with appropriate"
                       " arguments before using this method, or initialize the metric_ "
                       "with ``prior`` equals a matrix, not None.")
                raise NotFittedError(msg % {'name': type(self).__name__})

        def transformer(self):
            if hasattr(self, 'transformer_'):
                return self.transformer_  # in this case the estimator is fitted
            elif hasattr(self, 'L0') and _is_arraylike(self.L0):
                return check_array(self.L0)
            elif hasattr(self, 'A0') or hasattr(self, 'metric_'):
                return self.transformer_from_metric(self.metric())
            else:  # extracted from scikit-learn's check_is_fitted function
                msg = ("This %(name)s instance is not fitted yet, and is neither "
                       "initialized with an explicit matrix. Call 'fit' with appropriate"
                       " arguments before using this method, or initialize the metric_ "
                       "with ``prior`` equals a matrix, not None.")
                raise NotFittedError(msg % {'name': type(self).__name__})

        def metric_from_transformer(self, transformer):
            return transformer.T.dot(transformer)

        def transformer_from_metric(self, metric):
            return cholesky(metric).T

This way children classes only need to have a `metric_` or `transformer_` attribute set (or an explicit `A0`/`L0` init argument). What do you think?
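To make the intent concrete, a hypothetical child class could then look like this (editor's sketch building on the snippet above; the covariance-based fit is only an example):

```python
import numpy as np


class Covariance(MahalanobisMixin):

    def fit(self, X):
        # setting metric_ at fit time is all a child class needs to do;
        # transformer() is then derived automatically as cholesky(metric_).T
        self.metric_ = np.linalg.inv(np.cov(X, rowvar=False))
        return self
```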
I like that idea. I think it covers a lot of common cases in a consistent way, with room to specialize where needed. Instead of using an …
    # Conflicts:
    #	metric_learn/base_metric.py
    #	metric_learn/covariance.py
    #	metric_learn/itml.py
    #	metric_learn/lfda.py
    #	metric_learn/lmnn.py
    #	metric_learn/lsml.py
    #	metric_learn/mlkr.py
    #	metric_learn/mmc.py
    #	metric_learn/nca.py
    #	metric_learn/rca.py
    #	metric_learn/sdml.py
… new check_no_attributes_set_in_init. This new function was introduced through PR scikit-learn/scikit-learn#9450 in scikit-learn. It also allows passing tests that would otherwise not pass: indeed, having abstract attributes as properties threw an error, but the new test function handles this property inheritance well.
…scikit-learn#9450 Indeed, in the PR this function is modified to support Python 2. This should solve the CI error.
After discussing with William, we came to the conclusion that it would be better to have a standard attribute/property that represents the metric for all Mahalanobis learners (regardless of whether they learn a PSD matrix …)
Yes, and it would simplify the above code by having to implement only the …
I don't think …
The problem is that if we want to be scikit-learn compliant (and therefore pass checks that are copied from scikit-learn), we cannot write:

    def __init__(self, L0):
        self.transformer_ = L0

Calling an init argument …
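A sketch of the compliant pattern (the class and attribute names here are hypothetical; the convention itself is scikit-learn's):

```python
import numpy as np


class SomeMetricLearner(object):

    def __init__(self, L0=None):
        # __init__ only stores hyper-parameters verbatim: no validation, no
        # derived attributes (check_no_attributes_set_in_init enforces this)
        self.L0 = L0

    def fit(self, X, y=None):
        # estimated attributes get a trailing underscore, set at fit time only
        self.transformer_ = (np.eye(X.shape[1]) if self.L0 is None
                             else np.asarray(self.L0))
        return self
```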
Yes, that's right.
I'm thinking maybe we could also allow the user to choose the decomposition type? Because for large matrices (like 4096x4096 for instance, like the olivetti faces dataset), eigenvalue decomposition is quite a bit more expensive:

    In [32]: import numpy as np
        ...: M = np.random.randn(4096, 4096)
        ...: M = M.dot(M.T)
        ...: %timeit np.linalg.eigh(M)
        ...: %timeit np.linalg.cholesky(M)
        ...:
    26 s ± 7.13 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
    1.05 s ± 31.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Good point. For now, maybe we should always use cholesky of …
Yes, I agree, I just added it to the todo.
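For reference, a hedged sketch of what such a decomposition switch could look like (the helper name and `mode` parameter are hypothetical; both branches return an L with `L.T.dot(L)` equal to the metric):

```python
import numpy as np


def transformer_from_metric(metric, mode='cholesky'):
    if mode == 'cholesky':
        # fast, but requires a strictly positive definite metric
        return np.linalg.cholesky(metric).T
    # eigendecomposition: slower, but also handles PSD (possibly singular) metrics
    w, V = np.linalg.eigh(metric)
    return np.sqrt(np.maximum(w, 0.))[:, np.newaxis] * V.T
```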
…() in MahalanobisMixin
… computed from the transformer
- Make MahalanobisMixin inherit from BaseMetricLearner to give a concrete implementation of score_pairs
- Use score_pairs to compute predict more easily
- Add docstring
- TST: for every algorithm:
  - test that using score_pairs pairwise returns a euclidean distance matrix
  - test that score_pairs works for 3D arrays of several pairs as well as 2D arrays of one pair (and there returns only a scalar)
  - test that score_pairs always returns a finite output
- add the function and docstring
- use it for score_pairs
- TST:
  - should be finite
  - have right output dimension
  - embedding should be linear
  - should work on a toy example
- remove unusual s to test functions
- remove redundant parentheses
I just updated the docstrings for the … I think the PR is now ready to merge, which will make it easier to read the diff of PR #117.
metric_learn/_util.py
Outdated
""" | ||
# If input is scalar raise error | ||
if len(tuples.shape) == 0: |
That's right
Done
metric_learn/base_metric.py
Outdated
    L = cholesky(M).T

    class MetricTransformer():
subclass from `object`?
Yes
Done
metric_learn/base_metric.py
Outdated
    @@ -154,4 +255,5 @@ def score(self, quadruplets, y=None):
            score : float
                The quadruplets score.
            """
    -       return - np.mean(np.sign(self.decision_function(quadruplets)))
    +       quadruplets = check_tuples(quadruplets)
    +       return - np.mean(self.predict(quadruplets))
The space between the `-` and `np.mean` is a little confusing.
Agreed
Done
test/test_mahalanobis_mixin.py
Outdated
    assert np.array_equal(pairwise, pairwise.T)  # symmetry
    assert (pairwise.diagonal() == 0).all()  # identity
    # triangular inequality
    for i in range(pairwise.shape[1]):
How about this:

    rows, cols = pairwise.shape
    tol = 1e-3  # seems kinda large, but we can tune this
    for i in range(rows):
        for j in range(i, cols):
            direct_dist = pairwise[i, j]
            triangle_dist = np.add.outer(pairwise[i], pairwise[:, j])
            assert (direct_dist <= triangle_dist + tol).all()
That's better indeed, I didn't know about …
test/test_mahalanobis_mixin.py
Outdated
    -assert (pairwise[i, j] - (pairwise[i, k] + pairwise[k, j]) <= 0 +
    -        1e-3).all()
    +rows, cols = pairwise.shape
    +tol = np.finfo(float).eps
Maybe here we can start with the smallest value possible, and if some distance function fails to pass the test, we could investigate more and decide what is a reasonable value?

This change breaks the test; in fact, I think it is not exactly what we want to test. It checks the triangle inequality but also some inequalities like d(x1, x2) <= d(x1, x3) + d(x4, x2), which have no reason to be true, have they?
test/test_mahalanobis_mixin.py
Outdated
    -assert (direct_dist <= triangle_dist + tol).all()
    +tol = 1e-15
    +assert (pairwise <= pairwise[:, :, np.newaxis]
    +        + pairwise[:, np.newaxis, :] + tol).all()
I think this would work, checking that each `pairwise[i, j] <= pairwise[i, :] + pairwise[:, j]` (the strict version of this would be with `pairwise.T[:, :, np.newaxis] + pairwise[:, np.newaxis, :]`, but since `pairwise` is already checked to be symmetric we can avoid the transpose).

For the value `1e-15`: the test sometimes failed with 1e-16 for lsml, depending on the run. I think 1e-15 is a reasonable value to set for now, but I will also make a commit to set all random_states (this made me realize they were not set).
Ah, you're right, we don't want the outer add there. I should have just suggested … (But in general, I do like using explicit "outer" rather than indexing, as it can sometimes be confusing what the intent is, especially for readers without a lot of experience with numpy.) The full vectorized version is fine, as long as the total size of the matrix is relatively small.
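As a sanity check, the vectorized assertion can be exercised on a toy euclidean distance matrix (editor's sketch; the data and the conservative tolerance are assumptions):

```python
import numpy as np

rng = np.random.RandomState(42)
X = rng.randn(20, 3)
# euclidean pairwise distance matrix of 20 random points
pairwise = np.linalg.norm(X[:, None] - X[None, :], axis=-1)

tol = 1e-10
# entry [i, j, k] of the right-hand side is d(i, j) + d(i, k); by symmetry
# this asserts d(j, k) <= d(j, i) + d(i, k) for every intermediate point i
assert (pairwise <= pairwise[:, :, np.newaxis]
        + pairwise[:, np.newaxis, :] + tol).all()
```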
As this PR isn't going to affect the master branch, I think it's ready to merge now. Any objections?
I can take a last look early next week if you want, I think I missed the last few commits.
Sounds good. I just didn't want this PR to stall if it's ready to go.
Since for now we decided to use `transform` even for `PairsClassifierMixin` and such, shouldn't `MahalanobisMixin` be a `TransformerMixin`? This would avoid having to write explicitly that supervised algorithms inherit from both.
The problem is that …
…es/metric-learn into feat/mahalanobis_class
(Sorry for the last commits 779a93a and 75d4ad2: I did some changes on a local repo that was not up to date, so I had to do a merge afterwards.) The commit is basically just making the …
Since it is an abstract class, it already raises an error at instantiation:

    TypeError: Can't instantiate abstract class BaseMetricLearner with abstract methods score_pairs
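This is standard `abc` behaviour; a minimal reproduction (editor's sketch in plain Python 3 syntax):

```python
from abc import ABCMeta, abstractmethod


class BaseMetricLearner(metaclass=ABCMeta):

    @abstractmethod
    def score_pairs(self, pairs):
        """Return a score for every pair in ``pairs``."""


BaseMetricLearner()
# TypeError: Can't instantiate abstract class BaseMetricLearner
# with abstract methods score_pairs
```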
OK, then I think this is ready to merge. Whether or not to keep …
Alright, merging.
* WIP create MahalanobisMixin
* ENH Update algorithms with Mahalanobis Mixin:
  - Make them inherit from Mahalanobis Mixin, and implement the metric_ property
  - Improve metric_ property by checking if it exists and raising the appropriate warning if not
  - Make tests work, by replacing metric() with metric_
* FIX: add missing import
* FIX: update sklearn's function check_no_fit_attributes_set_in_init to new check_no_attributes_set_in_init. This new function was introduced through PR scikit-learn/scikit-learn#9450 in scikit-learn. It also allows passing tests that would otherwise not pass: indeed, having abstract attributes as properties threw an error, but the new test function handles this property inheritance well.
* FIX: take function ``_get_args`` from scikit-learn's PR scikit-learn/scikit-learn#9450. Indeed, in the PR this function is modified to support Python 2. This should solve the CI error.
* ENH: add transformer_ attribute and improve docstring
* WIP: move transform() in BaseMetricLearner to transformer_from_metric() in MahalanobisMixin
* WIP: refactor metric to original formulation: a function, with result computed from the transformer
* WIP: make all Mahalanobis Metric Learner algorithms have transformer_ and metric()
* ENH Add score_pairs function
  - Make MahalanobisMixin inherit from BaseMetricLearner to give a concrete implementation of score_pairs
  - Use score_pairs to compute predict more easily
  - Add docstring
  - TST: for every algorithm:
    - test that using score_pairs pairwise returns a euclidean distance matrix
    - test that score_pairs works for 3D arrays of several pairs as well as 2D arrays of one pair (and there returns only a scalar)
    - test that score_pairs always returns a finite output
* TST add test on toy example for score_pairs
* ENH Add embed function
  - add the function and docstring
  - use it for score_pairs
  - TST:
    - should be finite
    - have right output dimension
    - embedding should be linear
    - should work on a toy example
* FIX fix error in slicing of quadruplets
* FIX minor corrections
* FIX minor corrections
  - remove unusual s to test functions
  - remove redundant parentheses
* FIX fix PEP8 errors
* FIX remove possible one-sample scoring from docstring for now
* REF rename n_features_out to num_dims to be more coherent with current algorithms
* MAINT: Address #96 (review)
  - replace embed by transform and always add the input X in calling the function
  - mutualize _transformer_from_metric not to be overwritten in MMC
  - improve test_mahalanobis_mixin.test_score_pairs_pairwise according to #96 (comment)
  - improve test_mahalanobis_mixin.check_is_distance_matrix
  - correct typos and nitpicks
* ENH: Add check_tuples
* FIX: fix parentheses
* ENH: First commit adding a preprocessor
* ENH: Improve check_tuples with more comments and deal better with ensure_min_features
* STY: remove unexpected spaces
* FIX: Raise more appropriate error message. The previous error message would have said "[...], shape=(shape_of_the_2D_array_extracted_from_3D)", but it is clearer to print the shape of the actual 3D initial array (tuples in input)
* FIX: fix string formatting and refactor name and context to use context in custom functions but to give name to check_array
* FIX: only allow 2D if preprocessor and 3D if not preprocessor
* FIX: put format arguments in the right order
* MAINT: better to say the preprocessor than a preprocessor in messages
* FIX: numeric should be default if NO preprocessor
* FIX: preprocessor argument has to be a boolean (before this change it was a callable or an array)
* FIX: fix preprocessor argument passed in check_tuples in base_metric
* MAINT: say a preprocessor rather than the preprocessor
* DOC: fix docstring of t in check_tuples
* MAINT: make error messages better by only printing presence of preprocessor if the error is because not 2D or 3D shape
* TST: Add tests for check_tuples
* TST: simplify tests by removing the test for messages with the estimator name, but incorporating it in all other tests through parametrization
* STY: remove unnecessary parentheses
* FIX: put back else statement that probably was wrongfully merged
* TST: add tests for weakly supervised estimators and preprocessor that fetches indices
* TST: add tests for preprocessor
* FIX: remove redundant metric and transformer function, wrongly merged
* MAINT: rename format_input into preprocess_tuples and input into tuples
* MAINT: fixes and enhancements
  - fix the format_input previous incomplete refactoring
  - mutualize check_tuples for Weakly Supervised Algorithms
  - fix test of string representations
* MAINT: mutualize check_tuples
* MAINT: refactor SimplePreprocessor into ArrayIndexer
* MAINT: improve check_tuples and tests
* TST: add random seed for _Supervised classes
* TST: Adapt test pipeline
  - use random state for _Supervised classes
  - call transform only for pipeline with a TransformerMixin
* TST: fix test_progress_message_preprocessor_tuples by making func return an np.array
* Remove deprecated cross_validation import and put model_selection instead
* WIP replace checks by unique check_input function
* Fixes some tests:
  - fixes dtype checks and conversion by ensuring the checks and conversions are made on the preprocessed array and not the input array (which can be an array of indices)
  - fix tests that were failing due to the wrong error message
* TST: Cherry pick from new sklearn version ac0e230 [MRG] Quick fix of failed tests due to new scikit-learn version (0.20.0) (#130)
* TST: Quick fix of failed tests due to new scikit-learn version (0.20.0)
* FIX update values to pass test
* FIX: get changes from master to pass test iris for NCA
* FIX fix tests that were failing due to the error message. FIX check_tuples at the end and only at the end, because the number of tuples should not be modified from the beginning, and we want to check them also at the end
* TST: fix test_check_input_invalid_t that changed since we test t at the end now
* TST fix NCA's iris test taking code from master
* FIX fix tests:
  - use check_array instead of conversion to numpy array to ensure that sparse arrays are kept as such and not wrapped into a regular numpy array
  - check_array the metric with the right arguments for covariance in order not to fail if the array is a scalar (for one-feature samples)
* FIX fix previous modification that removed self.X_ but was modifying the input at fit time
* FIX ensure at least 2d only for checking the metric, because after check_array of inputs the metric should be a scalar or an array, so we only need to ensure it is an array (2D array)
* STY: Fix PEP8 violations
* MAINT: Refactor error messages with the help of numerical codes
* MAINT: mutualize check_preprocessor and check_input for every estimator
* FIX: remove format_map for Python 2.7 compatibility
* DOC: Add docstring for check_input and fix some bugs
* DOC: add docstrings
* MAINT: Remove changes not related to this PR, and fix previous probably unsuccessful merge
* STY: Fix PEP8 errors
* STY: fix indent problems
* Fix docstring spaces
* DOC: add preprocessor docstring when missing
* STY: PEP8 fixes
* MAINT: refactor the global check function into _prepare_input
* FIX: fix quadruplets scoring and delete useless comments
* MAINT: remove some enhancements to be coherent with previous code and simplify review
* MAINT: Improve test messages
* MAINT: reorganize tests
* FIX: fix typo in LMNN shogun and clean todo for the equivalent code in python_LMNN
* MAINT: Rename inputs and input into input_data
* STY: add backticks to None
* MAINT: add more detailed comment of first checks and remove old comment
* MAINT: improve comments for checking num_features
* MAINT: Refactor t into tuple_size
* MAINT: Fix small PEP8 error
* MAINT: FIX remaining t into tuple_size and replace hasattr if None by getattr with default None
* MAINT: remove misplaced comment
* MAINT: Put back/add docstrings for decision_function/predict
* MAINT: remove unnecessary ellipsis and update docstring of decision_function
* Add comments in LMNN for arguments useful for the shogun version that are not used in the python version
* MAINT: Remove useless mock_preprocessor
* MAINT: Remove useless loop
* MAINT: refactor test_dict_unchanged
* MAINT: remove _get_args copied from scikit-learn and replace it by an import
* MAINT: Fragment check_input by extracting blocks into check_input_classic and check_input_tuples
* MAINT: ensure min_samples=2 for supervised learning algorithms (we should at least have two datapoints to learn something)
* ENH: Return custom error when some error is due to the preprocessor
* MAINT: Refactor algorithms preprocessing steps
  - extract preprocessing steps in main fit function
  - remove self.X_ when found and replace it by X (Fixes #134)
  - extract the function to test collapsed pairs as _utils.check_collapsed_pairs and test it
  - note that the function is now not used where it was used before, but the user should be responsible for having coherent input (if they want, they can use the helper function as a preprocessing step)
* MAINT: finish the work of the previous commit
* TST: add test for cross-validation: comparison of manual cross-val and scikit-learn cross-val
* ENH: put y=None by default in LSML for better compatibility. This also allows simplifying some tests by removing the need for a separate case for lsml
* ENH: add error message when type of inputs is not some expected type
* TST: add test that checks that 'classic' is the default behaviour
* TST: remove unnecessary conversion to vertical vector of y
* FIX: remove wrong condition hasattr 'score' at top of loop
* MAINT: Add comment to explain why we return twice X for build_regression and build_classification
* ENH: improve test for preprocessor and return error message if the given preprocessor has the bad type
* FIX: fix wrong type_of_inputs in a test
* FIX: deal with the case where preprocessor is None
* WIP refactor build_dataset
* MAINT: refactor bool preprocessor to with_preprocessor
* FIX: fix build_pairs and build_quadruplets because 'only named arguments should follow expression'
* STY: fix PEP8 error
* MAINT: mutualize test_same_with_or_without_preprocessor_tuples and test_same_with_or_without_preprocessor_classic
* TST: give better names in test_same_with_or_without_preprocessor
* MAINT: refactor list_estimators into metric_learners
* TST: uniformize names input_data - tuples, labels - y
* FIX: fix build_pairs and build_quadruplets
* MAINT: remove forgotten code duplication
* MAINT: address #117 (review)
This PR creates a new Mahalanobis Mixin (cf. issue #91), which is a common interface for all algorithms that learn a Mahalanobis-type (pseudo) distance, of the form `(x - x')^T M (x - x')` (right now all algorithms are of this form, but there might be others in the future). This interface will enforce that an attribute `metric_` exists, add documentation for it in the docstring of child classes, and will allow factorizing the computation of `embed` functions (similar to what is done now with `transform`) and the `score_pairs` function (these functions will come in later PRs, therefore right now this Mixin seems a bit artificial, but it is temporary).

I also used the opportunity of this PR to improve the way the attribute `metric_` is returned, checking that the matrix indeed exists (i.e. it has been explicitly initialized or the estimator has been fitted), and raising a warning otherwise. Don't hesitate to comment on this last part, or to tell me if it should belong to a separate PR.

TODO:

- [x] `metric_` property
- [x] Fix docstrings to render nicely (as if `metric_` was a regular `Attribute` of the class): done by copying, see [MRG] Create new Mahalanobis mixin #96 (comment)

EDIT:
Initially we were thinking of doing also an `ExplicitMixin` that would be for metric learning algorithms that have a way to embed points in a space where the metric is the learned one. Since all algorithms are of this form for now, we will not implement it but rather implement all the functions in `MahalanobisMixin` (see #95 (comment)).

- [ ] Add `embed` function => EDIT: for now we will keep only a `transform` function (see [MRG] Create new Mahalanobis mixin #96 (comment))
- [ ] Add `score_pairs` function
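To illustrate the relation the mixin relies on (editor's sketch, not code from the PR): with `M = L^T L`, the quadratic form above equals the squared euclidean distance between the embedded points `Lx` and `Lx'`.

```python
import numpy as np

rng = np.random.RandomState(0)
L = rng.randn(3, 5)   # a transformer, as a metric learner might produce
M = L.T.dot(L)        # the corresponding Mahalanobis matrix
x, x_prime = rng.randn(5), rng.randn(5)

diff = x - x_prime
d2_quadratic = diff.dot(M).dot(diff)                     # (x - x')^T M (x - x')
d2_embedding = np.sum((L.dot(x) - L.dot(x_prime)) ** 2)  # euclidean in embedding
assert np.isclose(d2_quadratic, d2_embedding)
```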