Singular Learning Theory and Deep Neural Networks
Miki Aoyagi
Abstract
Singular Learning Theory was developed by Sumio Watanabe (Watanabe, 2009). Using the resolution of singularities, he analyzed the asymptotic expansions of the generalization error, training error, free energy, and other quantities with respect to the number of data samples. To characterize the leading terms, he introduced the concepts of learning coefficients and their orders, which play a crucial role in learning theory. In this paper, we first examine the learning coefficients, which correspond to log canonical thresholds in algebraic geometry, for singular learning models, and then give another proof of Watanabe's theory, which is useful for a deeper understanding of his results.
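To fix notation for the quantities mentioned above, the following display sketches the standard asymptotic form of the free energy in Watanabe's theory (stated here as background, with symbols chosen for illustration):

$$
F_n = n S_n + \lambda \log n - (m-1) \log \log n + O_p(1),
$$

where $F_n$ is the free energy (stochastic complexity) for $n$ data samples, $S_n$ is the empirical entropy, $\lambda$ is the learning coefficient, and $m$ is its order. The learning coefficient $\lambda$ equals the log canonical threshold of the Kullback-Leibler divergence between the true distribution and the model, which is why tools from algebraic geometry, such as the resolution of singularities, enter the analysis.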