Hennessy, C and Goodhart, C (2023) Goodhart’s Law and Machine Learning: A Structural Perspective. International Economic Review, 64 (3). pp. 1075-1086. ISSN 0020-6598
Abstract
We develop a structural framework illustrating how penalized regression algorithms affect Goodhart bias when training data is clean but covariates are manipulated at cost by future agents facing prediction models. With quadratic manipulation costs, bias is proportional to sum-of-squared slopes, micro-founding Ridge. Lasso is micro-founded in the limit under increasingly steep cost functions. However, standard penalization is inappropriate if costs depend upon percentage rather than absolute manipulation. Nevertheless, with known costs of either form, the following algorithm is proven manipulation-proof: Within training data, evaluate candidate coefficient vectors at their respective incentive-compatible manipulation configuration. Moreover, we obtain analytical expressions for the resulting coefficient adjustments: slopes (intercept) shift downward if costs depend upon percentage (absolute) manipulation. Statisticians ignoring agent-borne manipulation costs select socially suboptimal penalization, resulting in socially excessive, and futile, manipulation. Model averaging, especially over Lasso or ensemble estimators, reduces manipulation costs significantly. Standard cross-validation fails to detect Goodhart bias.
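To make the quadratic-cost case concrete, the following is a minimal numerical sketch of the manipulation-aware evaluation rule described in the abstract: candidate coefficient vectors are scored within the training data at their incentive-compatible manipulation configuration rather than at the clean covariates. It assumes separable quadratic absolute manipulation costs c·δ² per covariate and a linear score, so the agent's optimal shift is δ_j = β_j/(2c); the function names, simulated data, and use of scipy.optimize.minimize are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)


def incentive_compatible_shift(beta, c):
    """Agent's optimal per-covariate manipulation under quadratic cost c * delta**2.

    Facing the linear score x @ beta, the agent gains beta_j * delta_j from
    shifting covariate j, so the optimal shift is delta_j = beta_j / (2 * c).
    (Illustrative assumption: costs are separable across covariates.)
    """
    return beta / (2.0 * c)


def manipulation_aware_loss(theta, X, y, c):
    """Squared-error loss evaluated at the incentive-compatible manipulation
    configuration X + delta*(beta), rather than at the clean covariates X."""
    intercept, beta = theta[0], theta[1:]
    X_manip = X + incentive_compatible_shift(beta, c)  # same shift applied to every row
    return np.mean((y - intercept - X_manip @ beta) ** 2)


# Tiny illustration on simulated clean training data (hypothetical numbers).
n, p, c = 200, 3, 5.0
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 0.25])
y = X @ beta_true + 0.1 * rng.normal(size=n)

fit = minimize(manipulation_aware_loss, x0=np.zeros(p + 1), args=(X, y, c))
print("intercept, slopes:", fit.x)
# The agents' induced score inflation, sum(beta**2) / (2 * c), is proportional
# to the sum of squared slopes (the quantity a Ridge penalty targets), and here
# it is absorbed by a downward shift in the fitted intercept.
```

Under absolute quadratic costs the adjustment shows up as a downward intercept shift, consistent with the abstract's analytical result; under percentage-based costs the abstract indicates the slopes themselves would instead shift downward, which this sketch does not cover.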
More Details
| Item Type: | Article |
|---|---|
| Subject Areas: | Finance |
| Additional Information: | © 2023 The Authors. International Economic Review published by Wiley Periodicals LLC on behalf of the Economics Department of the University of Pennsylvania and the Osaka University Institute of Social and Economic Research Association. |
| Date Deposited: | 03 Apr 2023 10:34 |
| Date of first compliant deposit: | 03 Apr 2023 |
| Subjects: | Mathematical models; Mathematical programming |
| Last Modified: | 21 Nov 2024 03:02 |
| URI: | https://lbsresearch.london.edu/id/eprint/2823 |