
Sequential Normalization: Embracing Smaller Sample Sizes for Normalization

Neofytos Dimitriou and Ognjen Arandjelovic    

Abstract

Normalization as a layer within neural networks has over the years demonstrated its effectiveness in neural network optimization across a wide range of different tasks, with one of the most successful approaches being that of batch normalization. The consensus is that better estimates of the BatchNorm normalization statistics (μ and σ²) in each mini-batch result in better optimization. In this work, we challenge this belief and experiment with a new variant of BatchNorm known as GhostNorm that, despite independently normalizing batches within the mini-batches, i.e., μ and σ² are independently computed and applied to groups of samples in each mini-batch, outperforms BatchNorm consistently. Next, we introduce sequential normalization (SeqNorm), the sequential application of the above type of normalization across two dimensions of the input, and find that models trained with SeqNorm consistently outperform models trained with BatchNorm or GhostNorm on multiple image classification data sets. Our contributions are as follows: (i) we uncover a source of regularization that is unique to GhostNorm, rather than simply an extension of BatchNorm, and illustrate its effects on the loss landscape; (ii) we introduce sequential normalization (SeqNorm), a new normalization layer that improves the regularization effects of GhostNorm; (iii) we compare both GhostNorm and SeqNorm against BatchNorm alone as well as with other regularization techniques; (iv) for both GhostNorm and SeqNorm, we train models whose performance is consistently better than our baselines, including ones with BatchNorm, on the standard image classification data sets of CIFAR-10, CIFAR-100, and ImageNet ((+0.2%, +0.7%, +0.4%) and (+0.3%, +1.7%, +1.1%) for GhostNorm and SeqNorm, respectively).
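To make the mechanism described above concrete, below is a minimal PyTorch-style sketch of grouped ("ghost") batch normalization: the mini-batch is split into smaller groups whose μ and σ² are computed and applied independently. The class name GhostNorm2d, the num_groups parameter, and the closing SeqNorm composition are illustrative assumptions made here, not the authors' implementation.

import torch
import torch.nn as nn

class GhostNorm2d(nn.Module):
    # Illustrative sketch (not the paper's code): split each mini-batch into
    # smaller groups ("ghost batches") and normalize every group independently,
    # i.e., each group gets its own per-channel mean and variance.
    def __init__(self, num_features, num_groups=4, eps=1e-5):
        super().__init__()
        self.num_groups = num_groups
        self.eps = eps
        # One shared affine transform (gamma, beta), as in BatchNorm.
        self.weight = nn.Parameter(torch.ones(1, num_features, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, num_features, 1, 1))

    def forward(self, x):
        n, c, h, w = x.shape
        g = self.num_groups
        # (N, C, H, W) -> (G, N/G, C, H, W): one leading axis per ghost batch.
        grouped = x.view(g, n // g, c, h, w)
        # Per-group, per-channel statistics over samples and spatial positions.
        mean = grouped.mean(dim=(1, 3, 4), keepdim=True)
        var = grouped.var(dim=(1, 3, 4), unbiased=False, keepdim=True)
        out = (grouped - mean) / torch.sqrt(var + self.eps)
        return out.view(n, c, h, w) * self.weight + self.bias

# A mini-batch of 32 images normalized as 4 independent groups of 8 samples.
x = torch.randn(32, 64, 8, 8)
y = GhostNorm2d(num_features=64, num_groups=4)(x)

# One plausible reading of SeqNorm from the abstract: the same style of grouped
# normalization applied sequentially over two dimensions of the input, e.g.,
# channel groups (GroupNorm) followed by ghost batches along the batch axis.
seq_norm = nn.Sequential(nn.GroupNorm(num_groups=32, num_channels=64),
                         GhostNorm2d(num_features=64, num_groups=4))

Only training-time behavior is sketched; running statistics for inference, mini-batches not divisible by the number of groups, and the exact dimensions over which SeqNorm normalizes are detailed in the paper itself.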