
How machine learning exploits the Ising universality

Abstract
One of the interesting observations in the supervised learning of the Ising model is that it can still predict the transition point with good accuracy even if the underlying lattice geometry differs from the one on which it was initially trained. We address the question of why this is possible by analytically solving minimally downsized neural-network models composed of just a few neurons in the hidden layer. We consider the two-unit network with sigmoid neurons [1] and the three-unit network with Heaviside neurons [2]. In both model networks, we find that the essential information encoded in the network parameters is the scaling exponent of the criticality, not the precise location of the transition points. This explains why networks trained on one specific lattice geometry allow the same finite-size-scaling analysis to locate the critical point when applied to any other lattice of the same Ising universality class.
[1] D. Kim and D.-H. Kim, arXiv:1804.02171, to appear in Phys. Rev. E.
[2] J. Carrasquilla and R. G. Melko, Nat. Phys. 13, 431 (2017).
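The abstract's central claim can be illustrated with a minimal sketch. The code below is a toy example, not the paper's actual setup: it uses a single sigmoid neuron (rather than the two- or three-unit networks of [1, 2]) and synthetic magnetization samples drawn from a mean-field-like curve (rather than Monte Carlo Ising configurations). The point it demonstrates is that what the trained parameters encode is a threshold on the order parameter, not the transition temperature itself, which is why the same network can be reused across lattices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for Ising data (assumption: synthetic, not the paper's data):
# for each temperature T, draw |magnetization| samples whose mean follows a
# schematic curve (1 - T/Tc)^beta below Tc and vanishes above it, with
# finite-size noise of order 1/L.
Tc, beta, L = 2.269, 0.125, 16

def sample_m(T, n=200):
    mean = (1.0 - T / Tc) ** beta if T < Tc else 0.0
    return np.clip(rng.normal(mean, 1.0 / L, size=n), 0.0, 1.0)

Ts = np.linspace(1.0, 3.5, 26)
X, y = [], []
for T in Ts:
    m = sample_m(T)
    X.append(m)
    y.append(np.full_like(m, 1.0 if T < Tc else 0.0))  # 1 = ordered phase
X = np.concatenate(X)
y = np.concatenate(y)

# Single sigmoid neuron p(ordered | m) = sigmoid(w*m + b), trained by
# plain gradient descent on the cross-entropy loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= lr * np.mean((p - y) * X)
    b -= lr * np.mean(p - y)

# The decision boundary sits at m* = -b/w: the network has learned a
# threshold on the magnetization, a quantity governed by the universal
# scaling, rather than memorizing the lattice-specific Tc.
m_star = -b / w
print(f"learned magnetization threshold m* = {m_star:.3f}")
```

Because the learned quantity is a cut on the order parameter, feeding the same trained neuron data from a different lattice in the same universality class and performing the usual finite-size-scaling crossing analysis on its output still locates that lattice's critical point, mirroring the observation the abstract explains.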
Author(s)
Kim, Dong-Hee
Issued Date
2018-10-24
Type
Conference Paper
URI
https://scholar.gist.ac.kr/handle/local/8364
Publisher
Korean Physical Society
Citation
2018 KPS Fall Meeting (Korean Physical Society 2018 Fall Conference and Extraordinary General Meeting)
Conference Place
KO
Appears in Collections:
Department of Physics and Photon Science > 2. Conference Papers
Access and License
  • Access type: Open
File List
  • No associated files.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.