Commit 6972d6c

amueller authored and jnothman committed
DOC minor doc fixes for sphinx. (scikit-learn#7357)
1 parent 680ab51 commit 6972d6c

11 files changed: +23 -21 lines changed

doc/modules/linear_model.rst

Lines changed: 2 additions & 2 deletions
@@ -754,7 +754,7 @@ For large dataset, you may also consider using :class:`SGDClassifier` with 'log'
 
   * :ref:`sphx_glr_auto_examples_linear_model_plot_logistic_path.py`
 
-  * :ref:`example_linear_model_plot_logistic_multinomial.py`
+  * :ref:`sphx_glr_auto_examples_linear_model_plot_logistic_multinomial.py`
 
 .. _liblinear_differences:

@@ -1118,7 +1118,7 @@ in the following ways.
 
 .. topic:: Examples:
 
-  * :ref:`example_linear_model_plot_huber_vs_ridge.py`
+  * :ref:`sphx_glr_auto_examples_linear_model_plot_huber_vs_ridge.py`
 
 .. topic:: References:
 

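The first hunk header above mentions using :class:`SGDClassifier` with the 'log' loss as a scalable alternative for logistic regression on large datasets. A minimal sketch of that suggestion follows; note the loss string is 'log' in the scikit-learn version this commit targets, while recent releases rename it to 'log_loss', so adjust accordingly.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier

    # Synthetic stand-in for a large dataset.
    X, y = make_classification(n_samples=10000, n_features=20, random_state=0)

    # SGD-fitted logistic regression; use loss="log_loss" on recent releases.
    clf = SGDClassifier(loss="log", random_state=0).fit(X, y)
    print(clf.predict_proba(X[:3]))
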
doc/modules/mixture.rst

Lines changed: 5 additions & 5 deletions
@@ -175,7 +175,7 @@ points.
 
 .. topic:: Examples:
 
-   * See :ref:`plot_bayesian_gaussian_mixture.py` for a comparaison of
+   * See :ref:`sphx_glr_auto_examples_plot_bayesian_gaussian_mixture.py` for a comparaison of
     the results of the ``BayesianGaussianMixture`` for different values
     of the parameter ``dirichlet_concentration_prior``.
 

@@ -190,10 +190,10 @@ Pros
   expectation-maximization solutions.
 
 :Automatic selection: when `dirichlet_concentration_prior` is small enough and
-  `n_components` is larger than what is found necessary by the model, the
-  Variational Bayesian mixture model has a natural tendency to set some mixture
-  weights values close to zero. This makes it possible to let the model choose a
-  suitable number of effective components automatically.
+  `n_components` is larger than what is found necessary by the model, the
+  Variational Bayesian mixture model has a natural tendency to set some mixture
+  weights values close to zero. This makes it possible to let the model choose a
+  suitable number of effective components automatically.
 
 Cons
 .....
doc/modules/model_evaluation.rst

Lines changed: 1 addition & 1 deletion
@@ -1083,7 +1083,7 @@ Here is a small example of usage of this function:::
 
 .. topic:: Example:
 
-  * See :ref:`example_calibration_plot_calibration.py`
+  * See :ref:`sphx_glr_calibration_plot_calibration.py`
    for an example of Brier score loss usage to perform probability
    calibration of classifiers.

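The example reference above concerns using the Brier score loss to assess probability calibration. A short, hedged illustration of the underlying metric call (the numbers here are made up for demonstration):

    import numpy as np
    from sklearn.metrics import brier_score_loss

    y_true = np.array([0, 1, 1, 0])
    y_prob = np.array([0.1, 0.9, 0.8, 0.3])  # predicted probability of the positive class

    # Mean squared difference between predicted probabilities and outcomes;
    # lower is better, 0.0 is a perfect score.
    print(brier_score_loss(y_true, y_prob))
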
doc/testimonials/testimonials.rst

Lines changed: 3 additions & 1 deletion
@@ -292,7 +292,9 @@ Greg Lamp, Co-founder Yhat
 .. raw:: html
 
    </span>
-------------------------------------------
+
+`Rangespan <https://www.rangespan.com>`_
+----------------------------------------
 
 .. raw:: html

doc/tutorial/statistical_inference/finding_help.rst

Lines changed: 0 additions & 3 deletions
@@ -19,9 +19,6 @@ Q&A communities with Machine Learning practitioners
   also features some interesting discussions:
   https://www.quora.com/topic/Machine-Learning
 
-  Have a look at the best questions section, eg: `What are some
-  good resources for learning about machine learning`_.
-
 :Stack Exchange:
 
   The Stack Exchange family of sites hosts `multiple subdomains for Machine Learning questions`_.

doc/whats_new.rst

Lines changed: 4 additions & 2 deletions
@@ -290,7 +290,7 @@ Enhancements
 - Added support for substituting or disabling :class:`pipeline.Pipeline`
   and :class:`pipeline.FeatureUnion` components using the ``set_params``
   interface that powers :mod:`sklearn.grid_search`.
-  See :ref:`example_plot_compare_reduction.py`. By `Joel Nothman`_ and
+  See :ref:`sphx_glr_plot_compare_reduction.py`. By `Joel Nothman`_ and
   `Robert McGibbon`_.
 
 - Simplification of the ``clone`` function, deprecate support for estimators

@@ -395,7 +395,7 @@ Bug fixes
   Oliveira <https://github.com/caioaao>`_.
 
 - Fix :class:`linear_model.ElasticNet` sparse decision function to match
-  output with dense in the multioutput case.
+  output with dense in the multioutput case.
 
 API changes summary
 -------------------

@@ -4468,3 +4468,5 @@ David Huard, Dave Morrill, Ed Schofield, Travis Oliphant, Pearu Peterson.
 .. _Mads Jensen: https://github.com/indianajensen
 
 .. _Sebastián Vanrell: https://github.com/srvanrell
+
+.. _Robert McGibbon: https://github.com/rmcgibbo

sklearn/datasets/descr/breast_cancer.rst

Lines changed: 1 addition & 0 deletions
@@ -30,6 +30,7 @@ Data Set Characteristics:
         - WDBC-Benign
 
     :Summary Statistics:
+
     ===================================== ====== ======
                                           Min    Max
     ===================================== ====== ======

sklearn/decomposition/kernel_pca.py

Lines changed: 1 addition & 1 deletion
@@ -100,7 +100,7 @@ class KernelPCA(BaseEstimator, TransformerMixin):
 
     dual_coef_ : array, (n_samples, n_features)
         Inverse transform matrix. If `fit_inverse_transform=False`,
-        dual_coef_ is not present.
+        ``dual_coef_`` is not present.
 
     X_transformed_fit_ : array, (n_samples, n_components)
         Projection of the fitted data on the kernel principal components.

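The docstring hunk above says ``dual_coef_`` only exists when ``fit_inverse_transform=True``. A minimal sketch of that behaviour, with random data and parameters chosen purely for illustration:

    import numpy as np
    from sklearn.decomposition import KernelPCA

    X = np.random.RandomState(0).rand(100, 5)

    # Without fit_inverse_transform, the inverse-transform matrix is never learned.
    kpca = KernelPCA(n_components=2, kernel="rbf").fit(X)
    print(hasattr(kpca, "dual_coef_"))  # False

    # With fit_inverse_transform=True, dual_coef_ is fitted and
    # inverse_transform becomes available.
    kpca_inv = KernelPCA(n_components=2, kernel="rbf",
                         fit_inverse_transform=True).fit(X)
    X_back = kpca_inv.inverse_transform(kpca_inv.transform(X))
    print(kpca_inv.dual_coef_.shape, X_back.shape)
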
sklearn/decomposition/pca.py

Lines changed: 2 additions & 2 deletions
@@ -183,7 +183,7 @@ class PCA(_BasePCA):
     components_ : array, [n_components, n_features]
         Principal axes in feature space, representing the directions of
         maximum variance in the data. The components are sorted by
-        explained_variance_.
+        ``explained_variance_``.
 
     explained_variance_ : array, [n_components]
         The amount of variance explained by each of the selected components.

@@ -514,7 +514,7 @@ def score(self, X, y=None):
 
 @deprecated("RandomizedPCA was deprecated in 0.18 and will be removed in 0.20. "
             "Use PCA(svd_solver='randomized') instead. The new implementation "
-            "DOES NOT store whiten components_. Apply transform to get them.")
+            "DOES NOT store whiten ``components_``. Apply transform to get them.")
 class RandomizedPCA(BaseEstimator, TransformerMixin):
     """Principal component analysis (PCA) using randomized SVD

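The two hunks above touch the ``PCA`` docstring (``components_`` are sorted by ``explained_variance_``) and the ``RandomizedPCA`` deprecation message pointing to ``PCA(svd_solver='randomized')``. A small sketch combining both points, on arbitrary random data:

    import numpy as np
    from sklearn.decomposition import PCA

    X = np.random.RandomState(0).rand(200, 10)

    # The randomized solver is the replacement for the deprecated RandomizedPCA class.
    pca = PCA(n_components=3, svd_solver="randomized", random_state=0).fit(X)

    # Rows of components_ are ordered by decreasing explained_variance_.
    print(pca.explained_variance_)   # monotonically decreasing
    print(pca.components_.shape)     # (3, 10)
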
sklearn/multioutput.py

Lines changed: 2 additions & 2 deletions
@@ -147,8 +147,8 @@ def score(self, X, y, sample_weight=None):
         predicts the expected value of y, disregarding the input features,
         would get a R^2 score of 0.0.
 
-        Note
-        ----
+        Notes
+        -----
         R^2 is calculated by weighting all the targets equally using
         `multioutput='uniform_average'`.

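The Notes section edited above states that the multi-output ``score`` computes R^2 with ``multioutput='uniform_average'``, i.e. the per-target scores are averaged with equal weight. A hedged sketch of the same computation via r2_score (illustrative numbers only):

    import numpy as np
    from sklearn.metrics import r2_score

    y_true = np.array([[0.5, 1.0], [1.0, 2.0], [7.0, 6.0]])
    y_pred = np.array([[0.0, 2.0], [1.0, 2.0], [8.0, 5.0]])

    # Per-target R^2 scores, then their unweighted mean.
    print(r2_score(y_true, y_pred, multioutput="raw_values"))
    print(r2_score(y_true, y_pred, multioutput="uniform_average"))
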
0 commit comments