Abstract

This article examines the potential of, and the limits to, the use of machine learning in financial regulation. Ideally, if we could fully understand the financial system and agree on long- and short-term regulatory goals, we could write code that extracts proper representations from the data and makes correct regulatory decisions. We cannot yet do this because of limited sources of data, the bias introduced by human beings and by algorithmic models, and the difficulty of improving uninterpretable models. Furthermore, since law combines merits and facts, there are difficulties in establishing ground truth, modeling complex financial systems, and attaining fair outcomes on the basis of statistics alone. Statistics, as a method of inductive learning, can only recognize patterns in existing data. From a methodological perspective, this represents a paradigm shift from observational study (deduction) to data analytics (induction). In the financial field, however, there is a fundamental difference between measurable risk and unknowable future uncertainty, which significantly affects the reliability of models that base regulation on estimated risks. Algorithmic models therefore cannot make reliable suggestions about unusual situations, nor handle complex problems that lack sufficient training data. In cases of algorithmic regulation, even with predetermined regulatory goals, specific standards should remain adaptive to new data collected from the regulated environment, so as to mitigate bias arising from historical data and the initial model settings.
