
Fixed Non-negative Orthogonal Classifier: Inducing Zero-mean Neural Collapse with Feature Dimension Separation

Metadata
Author(s)
Kim, Hoyong; Kim, Kangil
Type
Conference Paper
Citation
12th International Conference on Learning Representations, ICLR 2024
Issued Date
2024-05-11
Abstract
Fixed classifiers in neural networks for classification problems have demonstrated cost efficiency and have even outperformed learnable classifiers on some popular benchmarks when incorporating orthogonality (Pernici et al., 2021a). Despite these advantages, prior research has yet to investigate the training dynamics of fixed classifiers with respect to neural collapse. Ensuring this phenomenon is critical for obtaining global optimality in a layer-peeled model, potentially leading to enhanced performance in practice. However, neural collapse cannot explain the collapse phenomenon in a fixed classifier whose shape is not a simplex ETF. To overcome these limits, we add two constraints to the layer-peeled model: non-negativity and orthogonality. We then propose a fixed non-negative orthogonal classifier, which lets a layer-peeled model with the fixed classifier attain global optimality and a max-margin decision rule by inducing zero-mean neural collapse. Building on this foundation, we exploit the feature dimension separation inherent in our classifier for two further purposes: (1) enhancing softmax masking by mitigating feature interference in continual learning and (2) tackling the limitations of mixup on the hypersphere in imbalanced learning. We conducted comprehensive experiments on various datasets and demonstrated significant performance improvements.
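
The abstract names the two constraints (non-negativity and orthogonality) and a resulting feature dimension separation, but this record does not spell out a construction. Below is a minimal PyTorch sketch of one fixed classifier that satisfies both constraints: each class is assigned a disjoint block of feature dimensions, which makes the non-negative weight rows orthogonal by construction. The class name `FixedNonNegOrthogonalClassifier`, the equal-sized block assignment, and the unit-norm scaling are all illustrative assumptions, not necessarily the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedNonNegOrthogonalClassifier(nn.Module):
    """Sketch of a fixed (non-learnable) classifier whose class weight
    vectors are non-negative and mutually orthogonal.

    Construction (an assumption, not taken from the paper): partition the
    d feature dimensions into K disjoint blocks, one per class. Because
    the supports are disjoint and the entries non-negative, the rows are
    orthogonal by construction, and each class "owns" its own block of
    dimensions (feature dimension separation).
    """

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        assert feat_dim % num_classes == 0, "assume d divisible by K for simplicity"
        block = feat_dim // num_classes
        weight = torch.zeros(num_classes, feat_dim)
        for k in range(num_classes):
            # unit-norm, non-negative indicator over this class's block
            weight[k, k * block:(k + 1) * block] = 1.0 / block ** 0.5
        # register as a buffer: the classifier is fixed and never trained
        self.register_buffer("weight", weight)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # logit for class k = <feature, w_k>; only the k-th block contributes
        return F.linear(features, self.weight)

# quick check of the claimed properties
clf = FixedNonNegOrthogonalClassifier(feat_dim=512, num_classes=8)
W = clf.weight
print(torch.allclose(W @ W.T, torch.eye(8)))  # rows orthonormal -> True
print(bool((W >= 0).all()))                   # non-negative -> True
```

Note that disjoint supports are the only way a set of non-negative vectors can be mutually orthogonal (their dot product is zero only if no dimension is shared), which is presumably the source of the feature dimension separation the abstract refers to.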
Publisher
International Conference on Learning Representations, ICLR
Conference Place
Hybrid, Vienna, Austria
URI
https://scholar.gist.ac.kr/handle/local/20926
Disclosure & License
  • Access type: Open
File List
