Frozen Weights, Hot Results: Batch Norm Gets 83% on CIFAR-10
It turns out that if you freeze all of a neural network's layers at their randomly initialized weights and train only the batch norm parameters, you can still reach 83% accuracy on CIFAR-10!
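
Here is a minimal PyTorch sketch of the idea, not the original experiment's code: every parameter stays frozen at its random initialization except the BatchNorm affine parameters (gamma and beta). The ResNet-18 backbone and the hyperparameters below are illustrative assumptions, not details from the source.

```python
import torch
import torch.nn as nn
import torchvision

# Any CIFAR-10-sized model with BatchNorm layers would do; ResNet-18 is one choice.
model = torchvision.models.resnet18(num_classes=10)

for module in model.modules():
    is_bn = isinstance(module, nn.BatchNorm2d)
    for param in module.parameters(recurse=False):
        # Only BatchNorm weight (gamma) and bias (beta) remain trainable;
        # all conv and linear weights keep their random initialization.
        param.requires_grad = is_bn

# The optimizer only ever sees the BatchNorm parameters.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.1, momentum=0.9)  # assumed hyperparameters
```

From here, a standard CIFAR-10 training loop applies; since only gamma and beta receive gradients, the network is effectively learning to rescale and shift its random features.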