
DOC improvements in plot_lasso_lasso_lars_elasticnet_path.py #30032


Conversation

@Rachit23110261 (Contributor) commented Oct 8, 2024

Reference Issues/PRs

Fixes #29963

What does this implement/fix?

Added explanations to the example and its plots, describing how the solvers converge and which differences between the methods lead to the differences in their regularization paths.
Refactored the code into a notebook-like tutorial structure (a minimal sketch of this structure follows below).
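For context, scikit-learn examples are rendered with sphinx-gallery: a module docstring supplies the title and introduction, and `# %%` markers split the script into notebook-like cells whose leading comments become prose. A minimal sketch of that structure, with illustrative section titles rather than the PR's actual text:

"""
===========================================================
Regularization paths for Lasso, Lasso-LARS, and ElasticNet
===========================================================

Introductory prose rendered before the first cell.
"""

# %%
# Lasso vs. ElasticNet
# --------------------
# Comment lines after a `# %%` marker become the cell's narrative text.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import lasso_path

X, y = load_diabetes(return_X_y=True)
X /= X.std(axis=0)  # standardize features so coefficients are comparable
alphas, coefs, _ = lasso_path(X, y, eps=5e-3)

# %%
# Lasso vs. Lasso-LARS
# --------------------
# Each further comparison would live in its own `# %%` cell.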
cc: @glemaitre


github-actions bot commented Oct 8, 2024

❌ Linting issues

This PR is introducing linting issues. Here's a summary of the issues. Note that you can avoid having linting issues by enabling pre-commit hooks. Instructions to enable them can be found here.

You can see the details of the linting issues under the lint job here


black

black detected issues. Please run black . locally and push the changes. Here you can see the detected issues. Note that running black might also fix some of the issues which might be detected by ruff. Note that the installed black version is black=24.3.0.


--- /home/runner/work/scikit-learn/scikit-learn/examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py	2024-10-08 15:16:16.229319+00:00
+++ /home/runner/work/scikit-learn/scikit-learn/examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py	2024-10-08 15:16:24.785804+00:00
@@ -1,35 +1,37 @@
-'''
+"""
 # Regularization Paths for Lasso, Lasso-LARS, and ElasticNet
 
-In this example, we will explore and compare the regularization paths of three important 
-linear models used for regularization: 
+In this example, we will explore and compare the regularization paths of three important
+linear models used for regularization:
 - :func:`~sklearn.linear_model.Lasso`
 - :func:`~sklearn.linear_model.LassoLars`
 - :func:`~sklearn.linear_model.ElasticNet`
 
 ## What is a Regularization Path?
 
 Regularization path is a plot between model coefficients  and the regularization parameter (alpha).
-For models like Lasso and ElasticNet, the path shows how coefficients 
-are shrunk towards zero as regularization becomes stronger. This helps in feature selection 
+For models like Lasso and ElasticNet, the path shows how coefficients
+are shrunk towards zero as regularization becomes stronger. This helps in feature selection
 and model interpretability.
 
-We will dive into comparing Lasso vs ElasticNet, 
+We will dive into comparing Lasso vs ElasticNet,
 and Lasso vs Lasso-LARS, focusing on their regularization paths.
-'''
+"""
 
 import matplotlib.pyplot as plt
 from itertools import cycle
 from sklearn.datasets import load_diabetes
 from sklearn.linear_model import enet_path, lasso_path, lars_path
 
 # Load the dataset
 X, y = load_diabetes(return_X_y=True)
-X /= X.std(axis=0)  # Standardize data (this ensures the features have mean 0 and variance 1)
+X /= X.std(
+    axis=0
+)  # Standardize data (this ensures the features have mean 0 and variance 1)
 
-'''
+"""
 ### 1. Lasso and ElasticNet: A Comparison of Regularization path
 
 Lasso (Least Absolute Shrinkage and Selection Operator) uses L1 regularization, meaning it 
 penalizes the absolute value of the coefficients. As a result, Lasso tends to produce sparse 
 models, where some coefficients are exactly zero.
@@ -38,11 +40,11 @@
 helps overcome some limitations of Lasso, particularly when features are highly correlated.
 
 $$ \text{ElasticNet Loss} = \frac{1}{2n_{\text{samples}}} \|y - Xw\|^2_2 + \alpha \rho \|w\|_1 + \alpha (1 - \rho) \|w\|_2^2 $$
 
 where $\rho$ is the mix ratio between Lasso (L1) and Ridge (L2) penalties.
-'''
+"""
 
 eps = 5e-3  # A smaller value of eps leads to a longer regularization path
 
 # Compute the regularization path for Lasso
 alphas_lasso, coefs_lasso, _ = lasso_path(X, y, eps=eps)
@@ -64,18 +66,18 @@
 plt.title("Lasso vs ElasticNet Regularization Path")
 plt.legend(["Lasso", "ElasticNet (L1 ratio = 0.8)"], loc="upper left")
 plt.axis("tight")
 plt.show()
 
-'''
+"""
 We can see in the plot that as alpha increases (more regularization), both Lasso and ElasticNet drive coefficients towards 
 zero. However, ElasticNet's combination of L1 and L2 regularization causes coefficients to 
 shrink more smoothly as compared to Lasso. This allows ElasticNet to handle correlated features 
 better, whereas Lasso might arbitrarily select one of the correlated features and set the rest to zero.
-'''
+"""
 
-'''
+"""
 ### 2. Lasso vs Lasso-LARS: Regularization Path
 
 The main difference between Lasso and Lasso-LARS is the method it uses
 to minimize loss.
 Lasso uses cordinate descent to minimize the loss function which is an computationally
@@ -83,11 +85,11 @@
 algorithm. It finds the minimum solution by moving in a path of most correlated 
 features. The regularization path of lasso and lasso-lars would similar, but 
 lasso-lars would be must faster when there are many correlated features.
 
 Let compute and compare their regularization paths.
-'''
+"""
 
 # Compute the regularization path for Lasso-LARS
 alphas_lars, _, coefs_lars = lars_path(X, y, method="lasso")
 
 # Plot the paths for Lasso and Lasso-LARS
@@ -103,37 +105,45 @@
 plt.title("Lasso vs Lasso-LARS Regularization Path")
 plt.legend(["Lasso", "Lasso-LARS"], loc="upper right")
 plt.axis("tight")
 plt.show()
 
-'''
+"""
 As We can see the paths for Lasso and Lasso-LARS are close to each other. But lasso-LARS has a more direct 
 path instead of a curve smooth path, that is because of its method of implementation.
 Both methods set some coefficients to exactly zero, but the LARS algorithm moves in the 
 direction of the strongest feature correlation, making it particularly suited for sparse models.
-'''
+"""
 
-'''
+"""
 ### 3. Positive Constraints
 
 Both Lasso and ElasticNet can also enforce positive constraints on the coefficients by 
 specifying `positive=True`.
 
 Lets see how positive constraints impact the regularization paths for Lasso.
-'''
+"""
 
-alphas_positive_lasso, coefs_positive_lasso, _ = lasso_path(X, y, eps=eps, positive=True)
-alphas_positive_enet, coefs_positive_enet, _ = enet_path(X, y, eps=eps, l1_ratio=l1_ratio, positive=True)
-alphas_positive_lars, _, coefs_positive_lars = lars_path(X, y, method="lasso", positive=True)
+alphas_positive_lasso, coefs_positive_lasso, _ = lasso_path(
+    X, y, eps=eps, positive=True
+)
+alphas_positive_enet, coefs_positive_enet, _ = enet_path(
+    X, y, eps=eps, l1_ratio=l1_ratio, positive=True
+)
+alphas_positive_lars, _, coefs_positive_lars = lars_path(
+    X, y, method="lasso", positive=True
+)
 
 # Plot all three subplots in one row
 fig, axes = plt.subplots(1, 3, figsize=(18, 6))
 
 colors = cycle(["b", "r", "g", "c", "k"])
 
 # First plot: Lasso vs Positive Lasso
-for coef_lasso, coef_positive_lasso, c in zip(coefs_lasso, coefs_positive_lasso, colors):
+for coef_lasso, coef_positive_lasso, c in zip(
+    coefs_lasso, coefs_positive_lasso, colors
+):
     axes[0].semilogx(alphas_lasso, coef_lasso, c=c)
     axes[0].semilogx(alphas_positive_lasso, coef_positive_lasso, linestyle="--", c=c)
 
 axes[0].set_xlabel("alpha")
 axes[0].set_ylabel("coefficients")
@@ -165,20 +175,20 @@
 
 # Display the plots
 plt.tight_layout()
 plt.show()
 
-'''
+"""
 When we enforce positive constraints on Lasso, the regularization path differs, as coefficients 
 are restricted to positive values only. This constraint leads to a different path, particularly 
 for coefficients that would have otherwise become negative.
-'''
+"""
 
-'''
+"""
 ## Conclusion:
 
 This example illustrates how the choice of regularization method and solver impacts the 
 regularization path. Lasso and ElasticNet differ in their penalties (L1 vs a mix of L1 and L2), 
 while Lasso and Lasso-LARS differ in their solvers, with LARS being more efficient for 
 high-dimensional problems. Additionally, positive constraints can lead to different paths, 
 forcing non-negative coefficients in models like Lasso and ElasticNet.
-'''
+"""
would reformat /home/runner/work/scikit-learn/scikit-learn/examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py

Oh no! 💥 💔 💥
1 file would be reformatted, 924 files would be left unchanged.

ruff

ruff detected issues. Please run ruff check --fix --output-format=full . locally, fix the remaining issues, and push the changes. Here you can see the detected issues. Note that the installed ruff version is ruff=0.5.1.


examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:1:1: CPY001 Missing copyright notice at top of file
  |
1 | '''
  |  CPY001
2 | # Regularization Paths for Lasso, Lasso-LARS, and ElasticNet
  |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:4:89: E501 Line too long (89 > 88)
  |
2 | # Regularization Paths for Lasso, Lasso-LARS, and ElasticNet
3 | 
4 | In this example, we will explore and compare the regularization paths of three important 
  |                                                                                         ^ E501
5 | linear models used for regularization: 
6 | - :func:`~sklearn.linear_model.Lasso`
  |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:4:89: W291 Trailing whitespace
  |
2 | # Regularization Paths for Lasso, Lasso-LARS, and ElasticNet
3 | 
4 | In this example, we will explore and compare the regularization paths of three important 
  |                                                                                         ^ W291
5 | linear models used for regularization: 
6 | - :func:`~sklearn.linear_model.Lasso`
  |
  = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:5:39: W291 Trailing whitespace
  |
4 | In this example, we will explore and compare the regularization paths of three important 
5 | linear models used for regularization: 
  |                                       ^ W291
6 | - :func:`~sklearn.linear_model.Lasso`
7 | - :func:`~sklearn.linear_model.LassoLars`
  |
  = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:12:89: E501 Line too long (99 > 88)
   |
10 | ## What is a Regularization Path?
11 | 
12 | Regularization path is a plot between model coefficients  and the regularization parameter (alpha).
   |                                                                                         ^^^^^^^^^^^ E501
13 | For models like Lasso and ElasticNet, the path shows how coefficients 
14 | are shrunk towards zero as regularization becomes stronger. This helps in feature selection 
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:13:70: W291 Trailing whitespace
   |
12 | Regularization path is a plot between model coefficients  and the regularization parameter (alpha).
13 | For models like Lasso and ElasticNet, the path shows how coefficients 
   |                                                                      ^ W291
14 | are shrunk towards zero as regularization becomes stronger. This helps in feature selection 
15 | and model interpretability.
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:14:89: E501 Line too long (92 > 88)
   |
12 | Regularization path is a plot between model coefficients  and the regularization parameter (alpha).
13 | For models like Lasso and ElasticNet, the path shows how coefficients 
14 | are shrunk towards zero as regularization becomes stronger. This helps in feature selection 
   |                                                                                         ^^^^ E501
15 | and model interpretability.
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:14:92: W291 Trailing whitespace
   |
12 | Regularization path is a plot between model coefficients  and the regularization parameter (alpha).
13 | For models like Lasso and ElasticNet, the path shows how coefficients 
14 | are shrunk towards zero as regularization becomes stronger. This helps in feature selection 
   |                                                                                            ^ W291
15 | and model interpretability.
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:17:49: W291 Trailing whitespace
   |
15 | and model interpretability.
16 | 
17 | We will dive into comparing Lasso vs ElasticNet, 
   |                                                 ^ W291
18 | and Lasso vs Lasso-LARS, focusing on their regularization paths.
19 | '''
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:21:1: I001 [*] Import block is un-sorted or un-formatted
   |
19 |   '''
20 |   
21 | / import matplotlib.pyplot as plt
22 | | from itertools import cycle
23 | | from sklearn.datasets import load_diabetes
24 | | from sklearn.linear_model import enet_path, lasso_path, lars_path
25 | | 
26 | | # Load the dataset
   | |_^ I001
27 |   X, y = load_diabetes(return_X_y=True)
28 |   X /= X.std(axis=0)  # Standardize data (this ensures the features have mean 0 and variance 1)
   |
   = help: Organize imports

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:28:89: E501 Line too long (93 > 88)
   |
26 | # Load the dataset
27 | X, y = load_diabetes(return_X_y=True)
28 | X /= X.std(axis=0)  # Standardize data (this ensures the features have mean 0 and variance 1)
   |                                                                                         ^^^^^ E501
29 | 
30 | '''
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:33:89: E501 Line too long (91 > 88)
   |
31 | ### 1. Lasso and ElasticNet: A Comparison of Regularization path
32 | 
33 | Lasso (Least Absolute Shrinkage and Selection Operator) uses L1 regularization, meaning it 
   |                                                                                         ^^^ E501
34 | penalizes the absolute value of the coefficients. As a result, Lasso tends to produce sparse 
35 | models, where some coefficients are exactly zero.
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:33:91: W291 Trailing whitespace
   |
31 | ### 1. Lasso and ElasticNet: A Comparison of Regularization path
32 | 
33 | Lasso (Least Absolute Shrinkage and Selection Operator) uses L1 regularization, meaning it 
   |                                                                                           ^ W291
34 | penalizes the absolute value of the coefficients. As a result, Lasso tends to produce sparse 
35 | models, where some coefficients are exactly zero.
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:34:89: E501 Line too long (93 > 88)
   |
33 | Lasso (Least Absolute Shrinkage and Selection Operator) uses L1 regularization, meaning it 
34 | penalizes the absolute value of the coefficients. As a result, Lasso tends to produce sparse 
   |                                                                                         ^^^^^ E501
35 | models, where some coefficients are exactly zero.
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:34:93: W291 Trailing whitespace
   |
33 | Lasso (Least Absolute Shrinkage and Selection Operator) uses L1 regularization, meaning it 
34 | penalizes the absolute value of the coefficients. As a result, Lasso tends to produce sparse 
   |                                                                                             ^ W291
35 | models, where some coefficients are exactly zero.
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:37:89: E501 Line too long (92 > 88)
   |
35 | models, where some coefficients are exactly zero.
36 | 
37 | ElasticNet, on the other hand, is a combination of L1 and L2 regularization. The L2 penalty 
   |                                                                                         ^^^^ E501
38 | helps overcome some limitations of Lasso, particularly when features are highly correlated.
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:37:92: W291 Trailing whitespace
   |
35 | models, where some coefficients are exactly zero.
36 | 
37 | ElasticNet, on the other hand, is a combination of L1 and L2 regularization. The L2 penalty 
   |                                                                                            ^ W291
38 | helps overcome some limitations of Lasso, particularly when features are highly correlated.
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:38:89: E501 Line too long (91 > 88)
   |
37 | ElasticNet, on the other hand, is a combination of L1 and L2 regularization. The L2 penalty 
38 | helps overcome some limitations of Lasso, particularly when features are highly correlated.
   |                                                                                         ^^^ E501
39 | 
40 | $$ \text{ElasticNet Loss} = \frac{1}{2n_{\text{samples}}} \|y - Xw\|^2_2 + \alpha \rho \|w\|_1 + \alpha (1 - \rho) \|w\|_2^2 $$
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:40:59: W605 [*] Invalid escape sequence: `\|`
   |
38 | helps overcome some limitations of Lasso, particularly when features are highly correlated.
39 | 
40 | $$ \text{ElasticNet Loss} = \frac{1}{2n_{\text{samples}}} \|y - Xw\|^2_2 + \alpha \rho \|w\|_1 + \alpha (1 - \rho) \|w\|_2^2 $$
   |                                                           ^^ W605
41 | 
42 | where $\rho$ is the mix ratio between Lasso (L1) and Ridge (L2) penalties.
   |
   = help: Add backslash to escape sequence

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:40:67: W605 [*] Invalid escape sequence: `\|`
   |
38 | helps overcome some limitations of Lasso, particularly when features are highly correlated.
39 | 
40 | $$ \text{ElasticNet Loss} = \frac{1}{2n_{\text{samples}}} \|y - Xw\|^2_2 + \alpha \rho \|w\|_1 + \alpha (1 - \rho) \|w\|_2^2 $$
   |                                                                   ^^ W605
41 | 
42 | where $\rho$ is the mix ratio between Lasso (L1) and Ridge (L2) penalties.
   |
   = help: Add backslash to escape sequence

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:40:88: W605 [*] Invalid escape sequence: `\|`
   |
38 | helps overcome some limitations of Lasso, particularly when features are highly correlated.
39 | 
40 | $$ \text{ElasticNet Loss} = \frac{1}{2n_{\text{samples}}} \|y - Xw\|^2_2 + \alpha \rho \|w\|_1 + \alpha (1 - \rho) \|w\|_2^2 $$
   |                                                                                        ^^ W605
41 | 
42 | where $\rho$ is the mix ratio between Lasso (L1) and Ridge (L2) penalties.
   |
   = help: Add backslash to escape sequence

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:40:89: E501 Line too long (127 > 88)
   |
38 | helps overcome some limitations of Lasso, particularly when features are highly correlated.
39 | 
40 | $$ \text{ElasticNet Loss} = \frac{1}{2n_{\text{samples}}} \|y - Xw\|^2_2 + \alpha \rho \|w\|_1 + \alpha (1 - \rho) \|w\|_2^2 $$
   |                                                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E501
41 | 
42 | where $\rho$ is the mix ratio between Lasso (L1) and Ridge (L2) penalties.
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:40:91: W605 [*] Invalid escape sequence: `\|`
   |
38 | helps overcome some limitations of Lasso, particularly when features are highly correlated.
39 | 
40 | $$ \text{ElasticNet Loss} = \frac{1}{2n_{\text{samples}}} \|y - Xw\|^2_2 + \alpha \rho \|w\|_1 + \alpha (1 - \rho) \|w\|_2^2 $$
   |                                                                                           ^^ W605
41 | 
42 | where $\rho$ is the mix ratio between Lasso (L1) and Ridge (L2) penalties.
   |
   = help: Add backslash to escape sequence

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:40:116: W605 [*] Invalid escape sequence: `\|`
   |
38 | helps overcome some limitations of Lasso, particularly when features are highly correlated.
39 | 
40 | $$ \text{ElasticNet Loss} = \frac{1}{2n_{\text{samples}}} \|y - Xw\|^2_2 + \alpha \rho \|w\|_1 + \alpha (1 - \rho) \|w\|_2^2 $$
   |                                                                                                                    ^^ W605
41 | 
42 | where $\rho$ is the mix ratio between Lasso (L1) and Ridge (L2) penalties.
   |
   = help: Add backslash to escape sequence

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:40:119: W605 [*] Invalid escape sequence: `\|`
   |
38 | helps overcome some limitations of Lasso, particularly when features are highly correlated.
39 | 
40 | $$ \text{ElasticNet Loss} = \frac{1}{2n_{\text{samples}}} \|y - Xw\|^2_2 + \alpha \rho \|w\|_1 + \alpha (1 - \rho) \|w\|_2^2 $$
   |                                                                                                                       ^^ W605
41 | 
42 | where $\rho$ is the mix ratio between Lasso (L1) and Ridge (L2) penalties.
   |
   = help: Add backslash to escape sequence

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:70:89: E501 Line too long (123 > 88)
   |
69 | '''
70 | We can see in the plot that as alpha increases (more regularization), both Lasso and ElasticNet drive coefficients towards 
   |                                                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E501
71 | zero. However, ElasticNet's combination of L1 and L2 regularization causes coefficients to 
72 | shrink more smoothly as compared to Lasso. This allows ElasticNet to handle correlated features 
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:70:123: W291 Trailing whitespace
   |
69 | '''
70 | We can see in the plot that as alpha increases (more regularization), both Lasso and ElasticNet drive coefficients towards 
   |                                                                                                                           ^ W291
71 | zero. However, ElasticNet's combination of L1 and L2 regularization causes coefficients to 
72 | shrink more smoothly as compared to Lasso. This allows ElasticNet to handle correlated features 
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:71:89: E501 Line too long (91 > 88)
   |
69 | '''
70 | We can see in the plot that as alpha increases (more regularization), both Lasso and ElasticNet drive coefficients towards 
71 | zero. However, ElasticNet's combination of L1 and L2 regularization causes coefficients to 
   |                                                                                         ^^^ E501
72 | shrink more smoothly as compared to Lasso. This allows ElasticNet to handle correlated features 
73 | better, whereas Lasso might arbitrarily select one of the correlated features and set the rest to zero.
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:71:91: W291 Trailing whitespace
   |
69 | '''
70 | We can see in the plot that as alpha increases (more regularization), both Lasso and ElasticNet drive coefficients towards 
71 | zero. However, ElasticNet's combination of L1 and L2 regularization causes coefficients to 
   |                                                                                           ^ W291
72 | shrink more smoothly as compared to Lasso. This allows ElasticNet to handle correlated features 
73 | better, whereas Lasso might arbitrarily select one of the correlated features and set the rest to zero.
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:72:89: E501 Line too long (96 > 88)
   |
70 | We can see in the plot that as alpha increases (more regularization), both Lasso and ElasticNet drive coefficients towards 
71 | zero. However, ElasticNet's combination of L1 and L2 regularization causes coefficients to 
72 | shrink more smoothly as compared to Lasso. This allows ElasticNet to handle correlated features 
   |                                                                                         ^^^^^^^^ E501
73 | better, whereas Lasso might arbitrarily select one of the correlated features and set the rest to zero.
74 | '''
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:72:96: W291 Trailing whitespace
   |
70 | We can see in the plot that as alpha increases (more regularization), both Lasso and ElasticNet drive coefficients towards 
71 | zero. However, ElasticNet's combination of L1 and L2 regularization causes coefficients to 
72 | shrink more smoothly as compared to Lasso. This allows ElasticNet to handle correlated features 
   |                                                                                                ^ W291
73 | better, whereas Lasso might arbitrarily select one of the correlated features and set the rest to zero.
74 | '''
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:73:89: E501 Line too long (103 > 88)
   |
71 | zero. However, ElasticNet's combination of L1 and L2 regularization causes coefficients to 
72 | shrink more smoothly as compared to Lasso. This allows ElasticNet to handle correlated features 
73 | better, whereas Lasso might arbitrarily select one of the correlated features and set the rest to zero.
   |                                                                                         ^^^^^^^^^^^^^^^ E501
74 | '''
   |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:82:77: W291 Trailing whitespace
   |
80 | to minimize loss.
81 | Lasso uses cordinate descent to minimize the loss function which is an computationally
82 | expensive method but Lasso-LARS (Least Angle Regression) is a more efficient 
   |                                                                             ^ W291
83 | algorithm. It finds the minimum solution by moving in a path of most correlated 
84 | features. The regularization path of lasso and lasso-lars would similar, but 
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:83:80: W291 Trailing whitespace
   |
81 | Lasso uses cordinate descent to minimize the loss function which is an computationally
82 | expensive method but Lasso-LARS (Least Angle Regression) is a more efficient 
83 | algorithm. It finds the minimum solution by moving in a path of most correlated 
   |                                                                                ^ W291
84 | features. The regularization path of lasso and lasso-lars would similar, but 
85 | lasso-lars would be must faster when there are many correlated features.
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:84:77: W291 Trailing whitespace
   |
82 | expensive method but Lasso-LARS (Least Angle Regression) is a more efficient 
83 | algorithm. It finds the minimum solution by moving in a path of most correlated 
84 | features. The regularization path of lasso and lasso-lars would similar, but 
   |                                                                             ^ W291
85 | lasso-lars would be must faster when there are many correlated features.
   |
   = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:109:89: E501 Line too long (107 > 88)
    |
108 | '''
109 | As We can see the paths for Lasso and Lasso-LARS are close to each other. But lasso-LARS has a more direct 
    |                                                                                         ^^^^^^^^^^^^^^^^^^^ E501
110 | path instead of a curve smooth path, that is because of its method of implementation.
111 | Both methods set some coefficients to exactly zero, but the LARS algorithm moves in the 
    |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:109:107: W291 Trailing whitespace
    |
108 | '''
109 | As We can see the paths for Lasso and Lasso-LARS are close to each other. But lasso-LARS has a more direct 
    |                                                                                                           ^ W291
110 | path instead of a curve smooth path, that is because of its method of implementation.
111 | Both methods set some coefficients to exactly zero, but the LARS algorithm moves in the 
    |
    = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:111:88: W291 Trailing whitespace
    |
109 | As We can see the paths for Lasso and Lasso-LARS are close to each other. But lasso-LARS has a more direct 
110 | path instead of a curve smooth path, that is because of its method of implementation.
111 | Both methods set some coefficients to exactly zero, but the LARS algorithm moves in the 
    |                                                                                        ^ W291
112 | direction of the strongest feature correlation, making it particularly suited for sparse models.
113 | '''
    |
    = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:112:89: E501 Line too long (96 > 88)
    |
110 | path instead of a curve smooth path, that is because of its method of implementation.
111 | Both methods set some coefficients to exactly zero, but the LARS algorithm moves in the 
112 | direction of the strongest feature correlation, making it particularly suited for sparse models.
    |                                                                                         ^^^^^^^^ E501
113 | '''
    |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:118:87: W291 Trailing whitespace
    |
116 | ### 3. Positive Constraints
117 | 
118 | Both Lasso and ElasticNet can also enforce positive constraints on the coefficients by 
    |                                                                                       ^ W291
119 | specifying `positive=True`.
    |
    = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:124:89: E501 Line too long (89 > 88)
    |
122 | '''
123 | 
124 | alphas_positive_lasso, coefs_positive_lasso, _ = lasso_path(X, y, eps=eps, positive=True)
    |                                                                                         ^ E501
125 | alphas_positive_enet, coefs_positive_enet, _ = enet_path(X, y, eps=eps, l1_ratio=l1_ratio, positive=True)
126 | alphas_positive_lars, _, coefs_positive_lars = lars_path(X, y, method="lasso", positive=True)
    |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:125:89: E501 Line too long (105 > 88)
    |
124 | alphas_positive_lasso, coefs_positive_lasso, _ = lasso_path(X, y, eps=eps, positive=True)
125 | alphas_positive_enet, coefs_positive_enet, _ = enet_path(X, y, eps=eps, l1_ratio=l1_ratio, positive=True)
    |                                                                                         ^^^^^^^^^^^^^^^^^ E501
126 | alphas_positive_lars, _, coefs_positive_lars = lars_path(X, y, method="lasso", positive=True)
    |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:126:89: E501 Line too long (93 > 88)
    |
124 | alphas_positive_lasso, coefs_positive_lasso, _ = lasso_path(X, y, eps=eps, positive=True)
125 | alphas_positive_enet, coefs_positive_enet, _ = enet_path(X, y, eps=eps, l1_ratio=l1_ratio, positive=True)
126 | alphas_positive_lars, _, coefs_positive_lars = lars_path(X, y, method="lasso", positive=True)
    |                                                                                         ^^^^^ E501
127 | 
128 | # Plot all three subplots in one row
    |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:134:89: E501 Line too long (89 > 88)
    |
133 | # First plot: Lasso vs Positive Lasso
134 | for coef_lasso, coef_positive_lasso, c in zip(coefs_lasso, coefs_positive_lasso, colors):
    |                                                                                         ^ E501
135 |     axes[0].semilogx(alphas_lasso, coef_lasso, c=c)
136 |     axes[0].semilogx(alphas_positive_lasso, coef_positive_lasso, linestyle="--", c=c)
    |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:171:89: E501 Line too long (96 > 88)
    |
170 | '''
171 | When we enforce positive constraints on Lasso, the regularization path differs, as coefficients 
    |                                                                                         ^^^^^^^^ E501
172 | are restricted to positive values only. This constraint leads to a different path, particularly 
173 | for coefficients that would have otherwise become negative.
    |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:171:96: W291 Trailing whitespace
    |
170 | '''
171 | When we enforce positive constraints on Lasso, the regularization path differs, as coefficients 
    |                                                                                                ^ W291
172 | are restricted to positive values only. This constraint leads to a different path, particularly 
173 | for coefficients that would have otherwise become negative.
    |
    = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:172:89: E501 Line too long (96 > 88)
    |
170 | '''
171 | When we enforce positive constraints on Lasso, the regularization path differs, as coefficients 
172 | are restricted to positive values only. This constraint leads to a different path, particularly 
    |                                                                                         ^^^^^^^^ E501
173 | for coefficients that would have otherwise become negative.
174 | '''
    |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:172:96: W291 Trailing whitespace
    |
170 | '''
171 | When we enforce positive constraints on Lasso, the regularization path differs, as coefficients 
172 | are restricted to positive values only. This constraint leads to a different path, particularly 
    |                                                                                                ^ W291
173 | for coefficients that would have otherwise become negative.
174 | '''
    |
    = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:179:88: W291 Trailing whitespace
    |
177 | ## Conclusion:
178 | 
179 | This example illustrates how the choice of regularization method and solver impacts the 
    |                                                                                        ^ W291
180 | regularization path. Lasso and ElasticNet differ in their penalties (L1 vs a mix of L1 and L2), 
181 | while Lasso and Lasso-LARS differ in their solvers, with LARS being more efficient for 
    |
    = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:180:89: E501 Line too long (96 > 88)
    |
179 | This example illustrates how the choice of regularization method and solver impacts the 
180 | regularization path. Lasso and ElasticNet differ in their penalties (L1 vs a mix of L1 and L2), 
    |                                                                                         ^^^^^^^^ E501
181 | while Lasso and Lasso-LARS differ in their solvers, with LARS being more efficient for 
182 | high-dimensional problems. Additionally, positive constraints can lead to different paths, 
    |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:180:96: W291 Trailing whitespace
    |
179 | This example illustrates how the choice of regularization method and solver impacts the 
180 | regularization path. Lasso and ElasticNet differ in their penalties (L1 vs a mix of L1 and L2), 
    |                                                                                                ^ W291
181 | while Lasso and Lasso-LARS differ in their solvers, with LARS being more efficient for 
182 | high-dimensional problems. Additionally, positive constraints can lead to different paths, 
    |
    = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:181:87: W291 Trailing whitespace
    |
179 | This example illustrates how the choice of regularization method and solver impacts the 
180 | regularization path. Lasso and ElasticNet differ in their penalties (L1 vs a mix of L1 and L2), 
181 | while Lasso and Lasso-LARS differ in their solvers, with LARS being more efficient for 
    |                                                                                       ^ W291
182 | high-dimensional problems. Additionally, positive constraints can lead to different paths, 
183 | forcing non-negative coefficients in models like Lasso and ElasticNet.
    |
    = help: Remove trailing whitespace

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:182:89: E501 Line too long (91 > 88)
    |
180 | regularization path. Lasso and ElasticNet differ in their penalties (L1 vs a mix of L1 and L2), 
181 | while Lasso and Lasso-LARS differ in their solvers, with LARS being more efficient for 
182 | high-dimensional problems. Additionally, positive constraints can lead to different paths, 
    |                                                                                         ^^^ E501
183 | forcing non-negative coefficients in models like Lasso and ElasticNet.
184 | '''
    |

examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py:182:91: W291 Trailing whitespace
    |
180 | regularization path. Lasso and ElasticNet differ in their penalties (L1 vs a mix of L1 and L2), 
181 | while Lasso and Lasso-LARS differ in their solvers, with LARS being more efficient for 
182 | high-dimensional problems. Additionally, positive constraints can lead to different paths, 
    |                                                                                           ^ W291
183 | forcing non-negative coefficients in models like Lasso and ElasticNet.
184 | '''
    |
    = help: Remove trailing whitespace

Found 54 errors.
[*] 7 fixable with the `--fix` option (23 hidden fixes can be enabled with the `--unsafe-fixes` option).
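
A note on fixes (editorial sketch, not part of the bot's output): most of these findings would disappear after the black reformat above plus re-wrapping the long prose lines to 88 characters, and the CPY001 notice would be addressed by copying the standard header from a neighbouring example file. For the W605 warnings specifically, one common fix, assuming the LaTeX stays in the module docstring, is to make that docstring a raw string so backslash sequences are kept literal:

# A raw docstring keeps backslashes literal, so LaTeX markup such as
# \|w\|_1 or \text{} no longer triggers W605 invalid-escape warnings.
r"""
# Regularization Paths for Lasso, Lasso-LARS, and ElasticNet

$$ \text{ElasticNet Loss} = \frac{1}{2n_{\text{samples}}} \|y - Xw\|^2_2
   + \alpha \rho \|w\|_1 + \alpha (1 - \rho) \|w\|_2^2 $$
"""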

Generated for commit: e40b45d. Link to the linter CI: here

@glemaitre (Member) commented:

@virchan already claimed this issue and opened a pull request: #30028

@glemaitre closed this Oct 8, 2024

Successfully merging this pull request may close these issues.

DOC rework the example presenting the regularization path of Lasso, Lasso-LARS, and Elastic Net