DOC Add dropdowns to module 1.4 SVM #26641


Merged 4 commits on Jun 21, 2023

33 changes: 24 additions & 9 deletions doc/modules/svm.rst
@@ -149,6 +149,10 @@ multi-class strategy, thus training `n_classes` models.
See :ref:`svm_mathematical_formulation` for a complete description of
the decision function.

|details-start|
**Details on multi-class strategies**
|details-split|

Note that the :class:`LinearSVC` also implements an alternative multi-class
strategy, the so-called multi-class SVM formulated by Crammer and Singer
[#8]_, by using the option ``multi_class='crammer_singer'``. In practice,
@@ -199,6 +203,8 @@ Then ``dual_coef_`` looks like this:
|for SVs of class 0 |for SVs of class 1 |for SVs of class 2 |
+--------------------------------------------------------------------------+-------------------------------------------------+-------------------------------------------------+

|details-end|
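
A small sketch of the Crammer-Singer option mentioned in the dropdown above
(the toy data is illustrative): since one weight vector is trained per class,
``coef_`` has shape ``(n_classes, n_features)``::

    >>> import numpy as np
    >>> from sklearn.svm import LinearSVC
    >>> X = np.array([[0., 0.], [1., 1.], [2., 2.]])
    >>> y = np.array([0, 1, 2])
    >>> clf = LinearSVC(multi_class='crammer_singer').fit(X, y)
    >>> clf.coef_.shape
    (3, 2)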

.. topic:: Examples:

* :ref:`sphx_glr_auto_examples_svm_plot_iris_svc.py`,
@@ -505,9 +511,9 @@ is advised to use :class:`~sklearn.model_selection.GridSearchCV` with
* :ref:`sphx_glr_auto_examples_svm_plot_rbf_parameters.py`
* :ref:`sphx_glr_auto_examples_svm_plot_svm_nonlinear.py`


Custom Kernels
--------------
|details-start|
**Custom Kernels**
|details-split|

You can define your own kernels by either giving the kernel as a
python function or by precomputing the Gram matrix.
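
For instance, a minimal sketch of the first option (the kernel function below
is a stand-in for illustration; it must return a Gram matrix of shape
``(n_samples_1, n_samples_2)``)::

    >>> import numpy as np
    >>> from sklearn import svm
    >>> def my_kernel(X, Y):
    ...     return np.dot(X, Y.T)  # a plain linear kernel
    ...
    >>> clf = svm.SVC(kernel=my_kernel)
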
@@ -571,6 +577,7 @@ test vectors must be provided:
>>> clf.predict(gram_test)
array([0, 1, 0])
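
A minimal end-to-end sketch of the precomputed variant (toy data for
illustration): ``fit`` receives the Gram matrix of the training vectors, and
``predict`` the kernel values between test and training vectors as shown
above::

    >>> import numpy as np
    >>> from sklearn import svm
    >>> X = np.array([[0, 0], [1, 1]])
    >>> y = np.array([0, 1])
    >>> clf = svm.SVC(kernel='precomputed')
    >>> gram_train = np.dot(X, X.T)  # linear Gram matrix of the training vectors
    >>> clf.fit(gram_train, y)
    SVC(kernel='precomputed')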

|details-end|

.. _svm_mathematical_formulation:

@@ -667,8 +674,9 @@ term :math:`b`
estimator used is :class:`~sklearn.linear_model.Ridge` regression,
the relation between them is given as :math:`C = \frac{1}{\alpha}`.

LinearSVC
---------
|details-start|
**LinearSVC**
|details-split|

The primal problem can be equivalently formulated as

@@ -683,10 +691,13 @@ does not involve inner products between samples, so the famous kernel trick
cannot be applied. This is why only the linear kernel is supported by
:class:`LinearSVC` (:math:`\phi` is the identity function).
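
One practical consequence worth noting: :class:`LinearSVC` therefore exposes
no ``kernel`` parameter at all; the closest :class:`SVC` configuration is an
explicit linear kernel::

    >>> from sklearn.svm import LinearSVC, SVC
    >>> linear_svc = LinearSVC()           # liblinear-based, linear kernel only
    >>> kernel_svc = SVC(kernel='linear')  # libsvm-based, other kernels possible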

|details-end|

.. _nu_svc:

NuSVC
-----
|details-start|
**NuSVC**
|details-split|

The :math:`\nu`-SVC formulation [#7]_ is a reparameterization of the
:math:`C`-SVC and therefore mathematically equivalent.
@@ -699,6 +710,7 @@ to a sample that lies on the wrong side of its margin boundary: it is either
misclassified, or it is correctly classified but does not lie beyond the
margin.
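
A small sketch of these bounds on random, illustrative data: with ``nu=0.5``,
at least half of the training samples should end up as support vectors::

    >>> import numpy as np
    >>> from sklearn.svm import NuSVC
    >>> rng = np.random.RandomState(0)
    >>> X = rng.randn(100, 2)
    >>> y = (X[:, 0] > 0).astype(int)
    >>> clf = NuSVC(nu=0.5).fit(X, y)
    >>> clf.support_vectors_.shape[0] >= 0.5 * len(X)
    True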

|details-end|

SVR
---
@@ -747,8 +759,9 @@ which holds the difference :math:`\alpha_i - \alpha_i^*`, ``support_vectors_`` w
holds the support vectors, and ``intercept_`` which holds the independent
term :math:`b`.
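
A minimal sketch inspecting these attributes after a fit (toy data for
illustration)::

    >>> import numpy as np
    >>> from sklearn import svm
    >>> X = np.array([[0.0], [1.0], [2.0]])
    >>> y = np.array([0.0, 1.2, 1.8])
    >>> regr = svm.SVR(kernel='linear').fit(X, y)
    >>> regr.dual_coef_.shape[0]  # one row holding alpha_i - alpha_i^*
    1
    >>> regr.intercept_.shape     # the independent term b
    (1,)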

LinearSVR
---------
|details-start|
**LinearSVR**
|details-split|

The primal problem can be equivalently formulated as

@@ -760,6 +773,8 @@ where we make use of the epsilon-insensitive loss, i.e. errors of less than
:math:`\varepsilon` are ignored. This is the form that is directly optimized
by :class:`LinearSVR`.
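
A minimal sketch (the values are illustrative); training residuals smaller
than ``epsilon`` contribute no loss::

    >>> import numpy as np
    >>> from sklearn.svm import LinearSVR
    >>> X = np.array([[0.0], [1.0], [2.0], [3.0]])
    >>> y = np.array([0.0, 1.1, 1.9, 3.0])
    >>> regr = LinearSVR(epsilon=0.2).fit(X, y)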

|details-end|

.. _svm_implementation_details:

Implementation details