How to Develop LASSO Regression Models in Python

Regression is a modeling task that involves predicting a numeric value given an input.

Linear regression is the standard algorithm for regression that assumes a linear relationship between inputs and the target variable. An extension to linear regression involves adding penalties to the loss function during training that encourage simpler models with smaller coefficient values. These extensions are referred to as regularized linear regression or penalized linear regression.

Lasso Regression is a popular type of regularized linear regression that includes an L1 penalty. This has the effect of shrinking the coefficients for those input variables that do not contribute much to the prediction task. This penalty allows some coefficient values to go to the value of zero, allowing input variables to be effectively removed from the model, providing a type of automatic feature selection.

In this tutorial, you will discover how to develop and evaluate Lasso Regression models in Python.

After completing this tutorial, you will know:

  • Lasso Regression is an extension of linear regression that adds a regularization penalty to the loss function during training.
  • How to evaluate a Lasso Regression model and use a final model to make predictions for new data.
  • How to configure the Lasso Regression model for a new dataset via grid search and automatically.

Let’s get started.

How to Develop LASSO Regression Models in Python
Photo by Phil Dolby, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  1. Lasso Regression
  2. Example of Lasso Regression
  3. Tuning Lasso Hyperparameters

Lasso Regression

Linear regression refers to a model that assumes a linear relationship between input variables and the target variable.

With a single input variable, this relationship is a line, and with higher dimensions, this relationship can be thought of as a hyperplane that connects the input variables to the target variable. The coefficients of the model are found via an optimization process that seeks to minimize the sum of squared errors between the predictions (yhat) and the expected target values (y).

  • loss = sum i=0 to n (y_i – yhat_i)^2

A problem with linear regression is that estimated coefficients of the model can become large, making the model sensitive to inputs and possibly unstable. This is particularly true for problems with few observations (samples) or with fewer samples (n) than input predictors (p) or variables (so-called p >> n problems).

One approach to address the stability of regression models is to change the loss function to include additional costs for a model that has large coefficients. Linear regression models that use these modified loss functions during training are referred to collectively as penalized linear regression.

A popular penalty is to penalize a model based on the sum of the absolute coefficient values. This is called the L1 penalty. An L1 penalty minimizes the size of all coefficients and allows some coefficients to be minimized to the value zero, which removes the predictor from the model.

  • l1_penalty = sum j=0 to p abs(beta_j)

Because the penalty can drive some coefficients all the way to the value of zero, effectively removing input features from the model, this acts as a type of automatic feature selection.

… a consequence of penalizing the absolute values is that some parameters are actually set to 0 for some value of lambda. Thus the lasso yields models that simultaneously use regularization to improve the model and to conduct feature selection.

— Page 125, Applied Predictive Modeling, 2013.

This penalty can be added to the cost function for linear regression and is referred to as Least Absolute Shrinkage And Selection Operator regularization (LASSO), or more commonly, “Lasso” (with title case) for short.

A popular alternative to ridge regression is the least absolute shrinkage and selection operator model, frequently called the lasso.

— Page 124, Applied Predictive Modeling, 2013.

A hyperparameter called “lambda” controls the weighting of the penalty in the loss function. A default value of 1.0 will give full weighting to the penalty; a value of 0 excludes the penalty. Very small values of lambda, such as 1e-3 or smaller, are common.

  • lasso_loss = loss + (lambda * l1_penalty)
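To make the penalized loss concrete, below is a minimal sketch of how it could be computed for a candidate coefficient vector. The function and variable names (lasso_loss, X, y, beta, lam) are illustrative only; they are not part of scikit-learn or the tutorial code.

# sketch: compute the lasso loss for a candidate coefficient vector
import numpy as np

def lasso_loss(X, y, beta, lam):
    # model predictions for the inputs
    yhat = X.dot(beta)
    # sum of squared errors between expected and predicted values
    loss = np.sum((y - yhat) ** 2)
    # L1 penalty: sum of the absolute coefficient values
    l1_penalty = np.sum(np.abs(beta))
    # weighted combination of loss and penalty
    return loss + lam * l1_penalty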

Now that we are familiar with Lasso penalized regression, let’s look at a worked example.

Example of Lasso Regression

In this section, we will demonstrate how to use the Lasso Regression algorithm.

First, let’s introduce a standard regression dataset. We will use the housing dataset.

The housing dataset is a standard machine learning dataset comprising 506 rows of data with 13 numerical input variables and a numerical target variable.

The dataset involves predicting the house price given details of the house’s suburb in the American city of Boston.

Using a test harness of repeated 10-fold cross-validation with three repeats, a naive model can achieve a mean absolute error (MAE) of about 6.6. A top-performing model can achieve a MAE on this same test harness of about 1.9. This provides the bounds of expected performance on this dataset.

No need to download the dataset; we will download it automatically as part of our worked examples.

The example below downloads and loads the dataset as a Pandas DataFrame and summarizes the shape of the dataset and the first five rows of data.
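A minimal sketch of this loading step is given below. It assumes the housing dataset is available at the URL used throughout these tutorials; adjust the location if you have a local copy.

# load and summarize the housing dataset
from pandas import read_csv
# assumed location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
# the file has no header row
dataframe = read_csv(url, header=None)
# summarize shape
print(dataframe.shape)
# summarize first five rows
print(dataframe.head())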

Running the example confirms the 506 rows of data, 13 input variables, and a single numeric target variable (14 columns in total). We can also see that all input variables are numeric.

The scikit-learn Python machine learning library provides an implementation of the Lasso penalized regression algorithm via the Lasso class.

Confusingly, the lambda term can be configured via the “alpha” argument when defining the class. The default value is 1.0 or a full penalty.

We can evaluate the Lasso Regression model on the housing dataset using repeated 10-fold cross-validation and report the average mean absolute error (MAE) on the dataset.
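A sketch of this evaluation is given below, under the same assumption about the dataset location. It uses scikit-learn’s RepeatedKFold and cross_val_score with the MAE scoring metric.

# evaluate a lasso regression model on the housing dataset
from numpy import mean, std, absolute
from pandas import read_csv
from sklearn.model_selection import cross_val_score, RepeatedKFold
from sklearn.linear_model import Lasso
# load the dataset (assumed location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], data[:, -1]
# define the model with the default (full) penalty
model = Lasso(alpha=1.0)
# define the evaluation procedure: 10-fold cross-validation, three repeats
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate the model; scikit-learn reports MAE as a negative score
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)
# force the scores positive and summarize
scores = absolute(scores)
print('Mean MAE: %.3f (%.3f)' % (mean(scores), std(scores)))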

Running the example evaluates the Lasso Regression algorithm on the housing dataset and reports the average MAE across the three repeats of 10-fold cross-validation.

Your specific results may vary given the stochastic nature of the evaluation procedure. Consider running the example a few times.

In this case, we can see that the model achieved a MAE of about 3.711.

We may decide to use the Lasso Regression as our final model and make predictions on new data.

This can be achieved by fitting the model on all available data and calling the predict() function, passing in a new row of data.

We can demonstrate this with a complete example, listed below.
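A sketch of the complete example is given below, under the same assumption about the dataset location. The new row of values is purely illustrative (it copies the first row of the dataset); in practice you would supply measurements for an unseen suburb.

# fit a final lasso model and make a prediction for one new row of data
from pandas import read_csv
from sklearn.linear_model import Lasso
# load the dataset (assumed location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], data[:, -1]
# define the model and fit it on all available data
model = Lasso(alpha=1.0)
model.fit(X, y)
# define a new row of data (illustrative values, one per input variable)
row = [0.00632, 18.00, 2.310, 0, 0.5380, 6.5750, 65.20, 4.0900, 1, 296.0, 15.30, 396.90, 4.98]
# make a prediction
yhat = model.predict([row])
print('Predicted: %.3f' % yhat[0])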

Running the example fits the model and makes a prediction for the new row of data.

Your specific results may vary given the stochastic nature of the evaluation procedure. Try running the example a few times.

Next, we can look at configuring the model hyperparameters.

Tuning Lasso Hyperparameters

How do we know that the default hyperparameter of alpha=1.0 is appropriate for our dataset?

We don’t.

Instead, it is good practice to test a suite of different configurations and discover what works best for our dataset.

One approach would be to grid search alpha values from perhaps 1e-5 to 100 on a log-10 scale and discover what works best for a dataset. Another approach would be to test values between 0.0 and 1.0 with a grid separation of 0.01. We will try the latter in this case.

The example below demonstrates this using the GridSearchCV class with a grid of values we have defined.
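A sketch of the search is given below, under the same assumption about the dataset location. The grid covers alpha values from 0.0 to 0.99 in steps of 0.01, as described above.

# grid search alpha for lasso regression on the housing dataset
from numpy import arange
from pandas import read_csv
from sklearn.model_selection import GridSearchCV, RepeatedKFold
from sklearn.linear_model import Lasso
# load the dataset (assumed location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], data[:, -1]
# define the model and the evaluation procedure
model = Lasso()
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the grid of alpha values to search
grid = dict(alpha=arange(0, 1, 0.01))
# define and run the grid search
search = GridSearchCV(model, grid, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)
results = search.fit(X, y)
# summarize the best score (negative MAE) and configuration
print('MAE: %.3f' % results.best_score_)
print('Config: %s' % results.best_params_)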

Running the example will evaluate each combination of configurations using repeated cross-validation.

Your specific results may vary given the stochastic nature of the evaluation procedure. Try running the example a few times.

You might see some warnings during the search; these can be safely ignored.

In this case, we can see that we achieved slightly better results than the default: 3.379 vs. 3.711. Ignore the sign; the library makes the MAE negative for optimization purposes.

We can see that the model assigned an alpha weight of 0.01 to the penalty.

The scikit-learn library also provides a built-in version of the algorithm that automatically finds good hyperparameters via the LassoCV class.

To use the class, the model is fit on the training dataset as per normal and the hyperparameters are tuned automatically during the training process. The fit model can then be used to make a prediction.

By default, the model will test 100 alpha values. We can change this to a grid of values between 0 and 1 with a separation of 0.01, as we did in the previous example, by setting the “alphas” argument.

The example below demonstrates this.
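A sketch using the LassoCV class is given below, under the same assumption about the dataset location and with the same grid of alpha values as the manual search.

# automatically tune alpha for lasso regression using LassoCV
from numpy import arange
from pandas import read_csv
from sklearn.model_selection import RepeatedKFold
from sklearn.linear_model import LassoCV
# load the dataset (assumed location)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], data[:, -1]
# define the evaluation procedure
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the model; alpha is tuned internally during fit
model = LassoCV(alphas=arange(0, 1, 0.01), cv=cv, n_jobs=-1)
# fit the model on the whole dataset
model.fit(X, y)
# summarize the chosen alpha
print('alpha: %f' % model.alpha_)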

Running the example fits the model and discovers the hyperparameters that give the best results using cross-validation.

Your specific results may vary given the stochastic nature of the evaluation procedure. Try running the example a few times.

In this case, we can see that the model chose the hyperparameter of alpha=0.0. This is different from what we found via our manual grid search, perhaps due to the systematic way in which configurations were searched or selected.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

  • Applied Predictive Modeling, 2013.

APIs

  • sklearn.linear_model.Lasso API.
    https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html
  • sklearn.linear_model.LassoCV API.
    https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LassoCV.html
  • sklearn.model_selection.GridSearchCV API.
    https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html

Summary

In this tutorial, you discovered how to develop and evaluate Lasso Regression models in Python.

Specifically, you learned:

  • Lasso Regression is an extension of linear regression that adds a regularization penalty to the loss function during training.
  • How to evaluate a Lasso Regression model and use a final model to make predictions for new data.
  • How to configure the Lasso Regression model for a new dataset via grid search and automatically.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


35 Responses to How to Develop LASSO Regression Models in Python

  1. Bappa Das October 12, 2020 at 6:02 pm #

    How can I export the vector of predicted values in .csv instead of only MAE?

  2. Divyosmi Goswami October 16, 2020 at 5:32 am #

    Wow, great blog! I loved it and I learned a new algorithm. Waiting for an article on an implementation of LASSO in pure Python 3.

  3. shaheen mohammed saleh October 16, 2020 at 3:16 pm #

    Hi Jason, may God bless you. We want nonlinear regression algorithms.

  4. George November 13, 2020 at 4:41 pm #

    Hi Jason,

    how do I implement hyperparameter tuning when I want to do classification with Lasso?

  5. Bahar December 9, 2020 at 12:42 am #

    Thank you very much for such a useful tutorial.
    Can we use Standard Scaler and PCA when we use Lasso?
    Thanks in advance

  6. Priya March 19, 2021 at 8:45 pm #

    Hello,
    Does scaling (normalization/standardization) negatively affect LASSO regression? In my project after scaling the variables the RMSE has increased from 135 to 220. Please clear my doubt.

    • Priya March 19, 2021 at 8:46 pm #

      The code is correct because for other regression models I am getting the required results with normalization.

    • Jason Brownlee March 20, 2021 at 5:19 am #

      No, scaling will help most linear models – but it may not help in all cases.

  7. Mansoor May 20, 2021 at 1:03 pm #

    How can Likert-scale data be used in LASSO regression? Please suggest.

  8. Ayah Mamdouh May 29, 2021 at 11:04 pm #

    Hi Jason,

    Great blog. I want to know how to implement lasso for classification in python.

    • Jason Brownlee May 30, 2021 at 5:50 am #

      Thanks!

      Sorry, I don’t have an example of coding lasso from scratch at this stage.

  9. R. Oberoi July 4, 2021 at 12:05 pm #

    Hi Jason,

    Does LassoCV take into account standardizing for each fold when it produces the optimal alpha? If not, how can we standardize the different folds separately so that there is no data leakage?

    Thanks,
    Roi

  10. Ali Zohair October 5, 2021 at 4:24 pm #

    “This is particularly true for problems with few observations (samples) or more samples (n) than input predictors (p) or variables (so-called p >> n problems).”

    – I believe you might have meant it the other way? Unless I misunderstand, you’re referring to when there are more features than samples right?

    • Adrian Tam October 6, 2021 at 10:31 am #

      Correct. Thanks for pointing that out.

  11. Craig Y December 1, 2021 at 12:40 pm #

    Hi Jason,

    Do you know how to output the prediction interval of LASSO? How to get the confidence interval of LASSO?

    • Adrian Tam December 2, 2021 at 2:37 am #

      You’re just doing regression with a different loss function, so whatever you did for linear regression, for example, is how you would do it here.

  12. D.K April 5, 2022 at 10:53 pm #

    What about panel data? Simple and dynamic ones, with instrumental variables categorical and dummies?

    • James Carmichael April 6, 2022 at 8:42 am #

      Hi D.K…please provide a more specific question so that we may better assist you.

  13. sara November 10, 2022 at 5:39 pm #

    Hi, how can I use LASSO or elastic net for feature selection in a classification problem? I would be grateful if someone could guide me on this matter.

  14. Reclusive November 11, 2022 at 4:14 am #

    So, how is the ML Lasso model different than the non-ML Lasso regression models in Sparse Representation?

  15. Endre August 22, 2024 at 11:02 pm #

    This is an example where Lasso is practically the same as normal regression. Would it not be better to create an example where alpha is found > 0? It probably requires feature standardization.

    • James Carmichael August 23, 2024 at 8:10 am #

      Hi Endre…You’re correct that in some cases, when the alpha parameter in Lasso regression (also known as the regularization parameter) is set too low, Lasso may behave similarly to ordinary least squares (OLS) regression, with minimal shrinkage of coefficients. To demonstrate the effect of Lasso more effectively, it would be better to choose an example where the alpha parameter has a noticeable impact (i.e., where alpha > 0).

      ### Steps to Enhance the Example with Lasso Regression:
      1. **Feature Standardization:**
      – Lasso regression is sensitive to the scale of the features. It’s important to standardize or normalize the features so that they all contribute equally to the model.
      – You can use Scikit-learn’s StandardScaler to standardize the dataset before applying Lasso regression.

      2. **Choosing a Suitable Alpha:**
      – Start by testing a range of alpha values using cross-validation to find the optimal value where the regularization effect is significant.
      – Use Scikit-learn’s LassoCV or GridSearchCV to automatically find the best alpha that minimizes cross-validation error.

      3. **Demonstrate Coefficient Shrinkage:**
      – After applying Lasso with a suitable alpha, show how some coefficients are reduced to zero, demonstrating Lasso’s ability to perform feature selection.
      – Compare the results with OLS to highlight the differences in model complexity and coefficient values.

      ### Implementation Example:

      from sklearn.linear_model import Lasso, LassoCV
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error
      import numpy as np

      # Example dataset (assuming X, y are defined)
      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

      # Standardize the features
      scaler = StandardScaler()
      X_train_scaled = scaler.fit_transform(X_train)
      X_test_scaled = scaler.transform(X_test)

      # Perform LassoCV to find the optimal alpha
      lasso_cv = LassoCV(cv=5, random_state=42)
      lasso_cv.fit(X_train_scaled, y_train)

      # The optimal alpha
      best_alpha = lasso_cv.alpha_
      print(f'Optimal alpha: {best_alpha}')

      # Apply Lasso with the best alpha
      lasso = Lasso(alpha=best_alpha)
      lasso.fit(X_train_scaled, y_train)

      # Predictions and performance
      y_pred = lasso.predict(X_test_scaled)
      mse = mean_squared_error(y_test, y_pred)
      print(f'Mean Squared Error: {mse}')

      # Display the coefficients
      print("Coefficients after Lasso:")
      print(lasso.coef_)

      ### Key Points:
      – **Feature Standardization:** Ensures that the features contribute equally and Lasso can effectively penalize the coefficients.
      – **Optimal Alpha:** Choosing an appropriate alpha helps demonstrate the strength of regularization, leading to meaningful shrinkage of coefficients.
      – **Comparison with OLS:** Highlighting the difference in the number of non-zero coefficients between Lasso and OLS emphasizes Lasso’s feature selection capability.

      This approach will make the example more robust and informative, showing the practical differences between Lasso and regular regression.
