sample_weight in LinearSVR.fit #6862
Comments
Thanks @imaculate. Would you consider doing the same for …
Sure, I'll look into it.
This was referenced Jun 24, 2016 (closed).
olologin pushed a commit to olologin/scikit-learn that referenced this issue on Aug 24, 2016:
… and documentation. Fixes scikit-learn#6862 (scikit-learn#6907)
* Make KernelCenterer a _pairwise operation. Replicate solution to scikit-learn@9a52077, except that `_pairwise` should always be `True` for `KernelCenterer` because it's supposed to receive a Gram matrix. This should make `KernelCenterer` usable in `Pipeline`s. Happy to add tests, just tell me what should be covered.
* Adding test for PR scikit-learn#6900
* Simplifying imports and test
* updating changelog links on homepage (scikit-learn#6901)
* first commit
* changed binary average back to macro
* changed binomialNB to multinomialNB
* emphasis on "higher return values are better..." (scikit-learn#6909)
* fix typo in comment of hierarchical clustering (scikit-learn#6912)
* [MRG] Allows KMeans/MiniBatchKMeans to use float32 internally by using cython fused types (scikit-learn#6846)
* Fix sklearn.base.clone for all scipy.sparse formats (scikit-learn#6910)
* DOC If git is not installed, need to catch OSError. Fixes scikit-learn#6860
* DOC add what's new for clone fix
* fix a typo in ridge.py (scikit-learn#6917)
* pep8
* TST: Speed up: cv=2. This is a smoke test, hence there is no point having cv=4
* Added support for sample_weight in LinearSVR, including tests and documentation
* Changed assert to assert_allclose and assert_almost_equal, reduced the test tolerance
* Fixed pep8 violations and sample_weight format
* rebased with upstream
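The commit above mentions tests for the new `sample_weight` support. A minimal sketch of the kind of property those tests check (not the PR's actual test code): fitting `LinearSVR` with integer sample weights should closely match fitting on a dataset where each row is repeated that many times. It assumes a scikit-learn release that includes this change (0.18 or later); the loose tolerance is an assumption, since the liblinear solver only matches up to its convergence tolerance.

```python
# Sketch: integer sample_weight vs. explicitly repeated rows should give
# nearly the same LinearSVR solution (same objective, solved numerically).
import numpy as np
from numpy.testing import assert_allclose
from sklearn.datasets import make_regression
from sklearn.svm import LinearSVR

X, y = make_regression(n_samples=50, n_features=4, noise=0.1, random_state=0)
rng = np.random.RandomState(0)
weights = rng.randint(1, 4, size=X.shape[0])  # integer weights in {1, 2, 3}

# Fit with per-sample weights.
est_weighted = LinearSVR(C=1.0, tol=1e-5, max_iter=10000, random_state=0)
est_weighted.fit(X, y, sample_weight=weights)

# Fit on the equivalent dataset where each row is repeated weights[i] times.
X_rep = np.repeat(X, weights, axis=0)
y_rep = np.repeat(y, weights)
est_repeated = LinearSVR(C=1.0, tol=1e-5, max_iter=10000, random_state=0)
est_repeated.fit(X_rep, y_rep)

# The two fits should agree up to solver tolerance.
assert_allclose(est_weighted.coef_, est_repeated.coef_, rtol=1e-2, atol=1e-3)
```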
TomDLT pushed a commit to TomDLT/scikit-learn that referenced this issue on Oct 3, 2016 (same squashed commit message as above).
Is there any reason why there's no sample_weight param for LinearSVR.fit?
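For reference, a minimal usage sketch of the API the issue asks for, assuming a scikit-learn release that includes the fix referenced above (0.18 or later): per-sample weights passed to `LinearSVR.fit` scale each sample's contribution to the epsilon-insensitive loss. The data and weight values below are made up for illustration.

```python
# Weight the last 20 samples more heavily when fitting a linear SVR.
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.RandomState(42)
X = rng.randn(100, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.randn(100)

# Give the last 20 samples five times the influence of the others.
sample_weight = np.ones(100)
sample_weight[-20:] = 5.0

reg = LinearSVR(C=1.0, epsilon=0.0, max_iter=10000, random_state=0)
reg.fit(X, y, sample_weight=sample_weight)
print(reg.coef_, reg.intercept_)
```

Passing `sample_weight=None` (the default) keeps the previous unweighted behaviour, so existing code is unaffected.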