[MRG+1] DOC Emphasis on "higher return values are better..." #6909

Merged · 1 commit merged into scikit-learn:master on Jun 19, 2016

Conversation


@yoavram commented Jun 19, 2016

Emphasize "higher return values are better than lower return values" so that it stands out during a casual browse of the docs.
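The sentence being emphasized is scikit-learn's scorer convention: every scorer must return a value where higher means better, so loss-style metrics get their sign flipped. A minimal Python sketch of the idea (the helper `make_scorer_like` is hypothetical, illustrating the convention rather than scikit-learn's actual `make_scorer`):

```python
def make_scorer_like(metric, greater_is_better=True):
    """Wrap a metric so that a higher returned score always means better.

    If the metric is a loss (lower is better), negate it so that model
    selection can uniformly maximize the returned value.
    """
    sign = 1 if greater_is_better else -1

    def scorer(y_true, y_pred):
        return sign * metric(y_true, y_pred)

    return scorer


def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)


# MAE is a loss, so the wrapped scorer negates it.
mae_scorer = make_scorer_like(mean_absolute_error, greater_is_better=False)

good = mae_scorer([1.0, 2.0], [1.1, 2.1])  # close fit, score near -0.1
bad = mae_scorer([1.0, 2.0], [2.0, 3.0])   # poor fit, score of -1.0
```

Under this convention a grid search can always pick the candidate with the maximum score, regardless of whether the underlying metric was an accuracy or an error.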

@yoavram yoavram changed the title Emphasis on "higher return values are better..." DOC Emphasis on "higher return values are better..." Jun 19, 2016
@jnothman (Member)

Seems like a good idea to me. +1

@jnothman jnothman changed the title DOC Emphasis on "higher return values are better..." [MRG+1] DOC Emphasis on "higher return values are better..." Jun 19, 2016
@agramfort agramfort merged commit 6301b1f into scikit-learn:master Jun 19, 2016
@agramfort (Member)

thx @yoavram

@yoavram yoavram deleted the patch-1 branch June 19, 2016 13:29
imaculate pushed a commit to imaculate/scikit-learn that referenced this pull request Jun 23, 2016
agramfort pushed a commit that referenced this pull request Jun 23, 2016
… and documentation. Fixes #6862 (#6907)

* Make KernelCenterer a _pairwise operation

Replicate solution to 9a52077 except that `_pairwise` should always be `True` for `KernelCenterer` because it's supposed to receive a Gram matrix. This should make `KernelCenterer` usable in `Pipeline`s.

Happy to add tests, just tell me what should be covered.

* Adding test for PR #6900

* Simplifying imports and test

* updating changelog links on homepage (#6901)

* first commit

* changed binary average back to macro

* changed binomialNB to multinomialNB

* emphasis on "higher return values are better..." (#6909)

* fix typo in comment of hierarchical clustering (#6912)

* [MRG] Allows KMeans/MiniBatchKMeans to use float32 internally by using cython fused types (#6846)

* Fix sklearn.base.clone for all scipy.sparse formats (#6910)

* DOC If git is not installed, need to catch OSError

Fixes #6860

* DOC add what's new for clone fix

* fix a typo in ridge.py (#6917)

* pep8

* TST: Speed up: cv=2

This is a smoke test, hence there is no point in having cv=4.

* Added support for sample_weight in linearSVR, including tests and documentation

* Changed assert to assert_allclose and assert_almost_equal, reduced the test tolerance

* Fixed pep8 violations and sampleweight format

* rebased with upstream
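For context on the KernelCenterer commit above: centering a Gram matrix K = X Xᵀ is algebraically equivalent to computing the Gram matrix of the mean-centered features, which is why the transformer must receive a full square kernel matrix (hence `_pairwise` being `True`). A minimal NumPy sketch of the centering formula, not scikit-learn's implementation:

```python
import numpy as np

def center_kernel(K):
    """Center a Gram matrix K = X @ X.T as if X had zero-mean columns.

    Uses the standard formula K_c = K - 1K - K1 + 1K1, where `1` is the
    n x n matrix with every entry equal to 1/n.
    """
    n = K.shape[0]
    ones = np.full((n, n), 1.0 / n)
    return K - ones @ K - K @ ones + ones @ K @ ones

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K = X @ X.T

# Centering the kernel matches centering the features explicitly.
Xc = X - X.mean(axis=0)
assert np.allclose(center_kernel(K), Xc @ Xc.T)
```

Because the operation consumes an n x n kernel rather than an n x d data matrix, cross-validation utilities must slice both rows and columns when splitting, which is exactly what the `_pairwise` flag signals.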
olologin pushed a commit to olologin/scikit-learn that referenced this pull request Aug 24, 2016
TomDLT pushed a commit to TomDLT/scikit-learn that referenced this pull request Oct 3, 2016
3 participants