How machine learning exploits the Ising universality
- Abstract
- One of the interesting observations in supervised learning of the Ising model is that a trained network can still predict the transition point with good accuracy even if the underlying lattice geometry differs from the one on which it was originally trained. We address the question of why this is possible by analytically solving minimally down-sized neural network models composed of just a few neurons in the hidden layer. We consider the two-unit network with sigmoid neurons [1] and the three-unit network with Heaviside neurons [2]. In both model networks, we find that the essential information encoded in the network parameters is the scaling exponent of the criticality, not the precise location of the transition points. This explains why networks trained on one specific lattice geometry allow the same finite-size-scaling analysis to locate the critical point when applied to any other lattice of the same Ising universality class.
[1] D. Kim and D.-H. Kim, arXiv:1804.02171, to appear in PRE.
[2] J. Carrasquilla and R. G. Melko, Nat. Phys. 13, 431 (2017).
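The two-unit picture can be illustrated with a minimal sketch. Here we assume, hypothetically, that the network takes the magnetization per spin m as input and that its two sigmoid hidden units share symmetric weights (the parameters w and b below are illustrative, not taken from the paper); the "ordered-phase" score then crosses 1/2 at a fixed threshold on |m|, rather than at a lattice-specific temperature:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_unit_output(m, w=10.0, b=5.0):
    """Score from two sigmoid hidden units acting on magnetization m.

    The weights are mirror-symmetric in m, so the score depends only on |m|.
    Parameters w and b are hypothetical illustrative values.
    """
    h1 = sigmoid(w * m - b)   # fires for large positive m
    h2 = sigmoid(-w * m - b)  # fires for large negative m
    return h1 + h2

# The score crosses 1/2 near |m| = b/w: the trained parameters encode a
# threshold on the order parameter, not a geometry-specific T_c.
ms = np.linspace(-1.0, 1.0, 201)
scores = two_unit_output(ms)
```

Because the decision boundary is set in terms of the order parameter, the same finite-size-scaling collapse can be applied regardless of which lattice generated the training data.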
- Author(s)
- Kim, Dong-Hee
- Issued Date
- 2018-10-24
- Type
- Conference Paper
- URI
- https://scholar.gist.ac.kr/handle/local/8364
- Access & License
-
- File List
-
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.