Bertsimas, D., Pauphilet, J. and Van Parys, B. (2021) Sparse Classification: A Scalable Discrete Optimization Perspective. Machine Learning, 110, pp. 3177-3209. ISSN 0885-6125
Abstract
We formulate the sparse classification problem of n samples with p features as a binary convex optimization problem and propose an outer-approximation algorithm to solve it exactly. For sparse logistic regression and sparse SVM, our algorithm finds optimal solutions for n and p in the 10,000s within minutes. On synthetic data our algorithm achieves perfect support recovery in the large sample regime. Namely, there exists an n0 such that for n < n0 the algorithm takes a long time to find an optimal solution and does not recover the correct support, while for n > n0 the algorithm quickly detects all the true features and does not return any false features. In contrast, while Lasso accurately detects all the true features, it persistently returns incorrect features, even as the number of observations increases. Consequently, in numerous real-world experiments, our outer-approximation algorithm returns sparser classifiers while achieving similar predictive accuracy as Lasso. To support our observations, we analyze conditions on the sample size needed to ensure full support recovery in classification. For k-sparse classification, and under some assumptions on the data-generating process, we prove that information-theoretic limitations impose n0 < C(2 + σ²) k log(p − k), for some constant C > 0.
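The exact k-sparse classification problem the abstract refers to is: among all supports of exactly k features, find the one minimizing the classification loss. The paper solves this at scale with an outer-approximation algorithm; as a small illustrative sketch only (not the authors' method), the toy code below solves the same problem by brute-force enumeration of supports for sparse logistic regression on synthetic data, recovering the true support in the large-sample regime. All names and parameters here are hypothetical.

```python
import itertools
import numpy as np

def logistic_loss_fit(X, y, iters=500, lr=0.5):
    # Fit logistic regression on the given columns by gradient descent
    # and return the final in-sample log-loss.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / n
    p = np.clip(1.0 / (1.0 + np.exp(-X @ w)), 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def best_subset_classification(X, y, k):
    # Exact k-sparse classification by enumerating all size-k supports;
    # feasible only for tiny p (the paper's algorithm avoids enumeration).
    p = X.shape[1]
    best = min(itertools.combinations(range(p), k),
               key=lambda s: logistic_loss_fit(X[:, list(s)], y))
    return set(best)

# Synthetic data: k = 2 true features out of p = 6, strong signal.
rng = np.random.default_rng(0)
n, p, k = 400, 6, 2
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:k] = 3.0                       # true support = {0, 1}
y = (X @ w_true + 0.1 * rng.standard_normal(n) > 0).astype(float)

support = best_subset_classification(X, y, k)
print(support)
```

With n well above the recovery threshold, the minimum-loss support coincides with the true one, matching the perfect-support-recovery behavior described above; the exponential cost of enumeration is exactly what the outer-approximation approach is designed to avoid.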
More Details
| Item Type: | Article |
| --- | --- |
| Subject Areas: | Management Science and Operations |
| Date Deposited: | 27 Sep 2021 09:49 |
| Date of first compliant deposit: | 27 Sep 2021 |
| Subjects: | Statistics; Programming languages; Classification |
| Last Modified: | 05 Nov 2024 02:51 |
| URI: | https://lbsresearch.london.edu/id/eprint/1978 |