Time-sensitive adaptation of regularization strength of recurrent neural networks for accurate learning

Metadata
Author(s)
Kim, Kangil
Type
Conference Paper
Citation
2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 194-198
Issued Date
2017-12
Abstract
Regularization is an important issue for neural networks because their strong expressive power causes overfitting to data. One regularization method penalizes the cost function with an activation-based penalty. When applied to recurrent neural networks, this method usually distributes the penalty uniformly over time steps. However, the required strength differs across time steps. In this paper, we propose a new activation-based penalty function whose strength varies over time steps in recurrent neural networks. To verify its impact, we conducted practical experiments on predicting the power consumption of home appliances. In the results, the proposed method reduced training errors while maintaining validation and test errors, which implies improved forecasting ability. In sensitivity analysis, the method restricted the sudden decrease of the impact of early time steps on the cost. © 2017 IEEE.
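The idea in the abstract can be sketched as follows: an activation-based L2 penalty whose strength varies per time step, added to the training cost. The geometric schedule, function name, and parameters below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def time_varying_activation_penalty(hidden_states, base_strength=1e-4, decay=0.9):
    """Activation-based L2 penalty with per-time-step strength.

    hidden_states: array of shape (T, H), one RNN hidden vector per time step.
    The geometric schedule (weaker penalty on earlier steps) is a hypothetical
    choice for illustration; the paper's schedule may differ.
    """
    T = hidden_states.shape[0]
    # Per-step strengths: base_strength at the last step, decayed toward the start.
    strengths = base_strength * decay ** np.arange(T - 1, -1, -1)
    # Weighted sum of squared activations, to be added to the training cost.
    return float(sum(s * np.sum(h ** 2) for s, h in zip(strengths, hidden_states)))
```

Setting `decay=1.0` recovers the conventional uniform penalty, so the same function covers both the baseline and the time-varying variant.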
Publisher
IEEE
Conference Place
Mexico (MX)
URI
https://scholar.gist.ac.kr/handle/local/20087
Access and License
  • Access type: Open
File List
  • No related files available.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.