Sparse learning via Boolean relaxations

Session date: 10 March 2025

Session host: Konstantin Sidorov

Summary:

We continue exploring applications of optimization in machine learning; our next stop is the sparse learning task, that is, learning an accurate linear model that uses only a few features. A textbook way of doing this is to add the L1-norm of the coefficients to the loss (a technique known as Lasso) and rely on its properties to steer the learning toward sparsity. In this session, I will present a way to enforce sparsity directly, expose the challenges of constructing a good relaxation, and discuss how semidefinite programming techniques can be helpful here.
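
For a concrete picture of what "relaxation" means here, below is a minimal sketch (assuming the cvxpy and numpy packages; all variable names and problem sizes are illustrative, and this is not the paper's formulation). It encodes the constraint "at most k nonzero coefficients" with Boolean indicators and a big-M bound, then relaxes the indicators from {0, 1} to [0, 1].

```python
# Sketch only: a big-M Boolean formulation of k-sparse least squares and
# its naive interval relaxation. The paper by Pilanci et al. develops a
# tighter relaxation (via duality and semidefinite programming) that
# avoids big-M constants; this baseline just motivates why that matters.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d, k, M = 100, 20, 3, 10.0  # samples, features, sparsity budget, big-M

# Synthetic data whose ground-truth model uses only k features.
w_true = np.zeros(d)
w_true[:k] = rng.normal(size=k)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

w = cp.Variable(d)  # model coefficients
u = cp.Variable(d)  # relaxed Boolean indicators: u_i "selects" feature i

constraints = [
    cp.abs(w) <= M * u,  # u_i = 0 forces w_i = 0
    cp.sum(u) <= k,      # at most k selected features
    u >= 0,
    u <= 1,              # interval relaxation of u in {0, 1}
]
problem = cp.Problem(cp.Minimize(cp.sum_squares(y - X @ w)), constraints)
problem.solve()

print("relaxed indicators:", np.round(u.value, 3))
print("recovered support:", np.nonzero(np.abs(w.value) > 1e-4)[0])
```

Eliminating u from the relaxed constraints (the tightest feasible choice is u_i = |w_i| / M) leaves essentially the L1-ball constraint ||w||_1 <= M k, so the naive interval relaxation collapses back to a Lasso-style problem; this is one reason to look for the tighter relaxations discussed in the session.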

Relevant papers

  1. Sparse learning via Boolean relaxations
    Mert Pilanci, Martin J. Wainwright, and Laurent El Ghaoui
    Mathematical Programming, March 2015