Friday, March 31, 2017

Scaling the Scattering Transform: Deep Hybrid Networks - implementation - / The Shattered Gradients Problem: If resnets are the answer, then what is the question?

Two items this morning: one about wavelets helping to build faster Machine Learning models (a little bit like the spatial transformers idea, see gvnn: Neural Network Library for Geometric Computer Vision - implementation -), and the other examining why ResNets do so well. Enjoy !

David makes a reference to the wavelet comeback: Scaling the Scattering Transform: Deep Hybrid Networks by Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko

We use the scattering network as a generic and fixed initialization of the first layers of a supervised hybrid deep network. We show that early layers do not necessarily need to be learned, providing the best results to date with pre-defined representations while being competitive with deep CNNs. Using a shallow cascade of 1x1 convolutions, which encodes scattering coefficients that correspond to spatial windows of very small sizes, makes it possible to obtain AlexNet accuracy on ImageNet ILSVRC2012. We demonstrate that this local encoding explicitly learns invariance w.r.t. rotations. Combining scattering networks with a modern ResNet, we achieve a single-crop top-5 error of 11.4% on ImageNet ILSVRC2012, comparable to the ResNet-18 architecture, while utilizing only 10 layers. We also find that hybrid architectures can yield excellent performance in the small-sample regime, exceeding their end-to-end counterparts, through their ability to incorporate geometrical priors. We demonstrate this on subsets of the CIFAR-10 dataset and by setting a new state of the art on the STL-10 dataset.

The implementation of the Fast Scattering Transform with CuPy/PyTorch: https://github.com/edouardoyallon/pyscatwave

and the experiments run in the paper are here: https://github.com/edouardoyallon/scalingscattering 
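
For readers who want to see how such a hybrid architecture fits together, here is a minimal PyTorch sketch of the idea from the abstract: a fixed, non-learned scattering front end followed by a shallow cascade of learned 1x1 convolutions and a linear classifier. The Scattering(M, N, J) call mimics the pyscatwave interface, but the exact signature, the scattering output shape, the channel count K and the small head below are illustrative assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn
    from scatwave.scattering import Scattering  # from pyscatwave; interface assumed here

    class HybridScatteringNet(nn.Module):
        """Fixed scattering front end + shallow cascade of learned 1x1 convolutions (sketch)."""
        def __init__(self, J=2, in_shape=(3, 32, 32), n_classes=10):
            super(HybridScatteringNet, self).__init__()
            C, M, N = in_shape
            # Fixed transform, no learned parameters; pyscatwave typically runs on the GPU,
            # so a .cuda() call on the module and inputs may be required in practice.
            self.scat = Scattering(M=M, N=N, J=J)
            # Rough number of scattering channels per input channel for J=2, L=8 orientations
            # (1 + J*L + L^2*J*(J-1)/2 = 81); in practice read it off the actual output shape.
            K = 81
            self.encode = nn.Sequential(  # shallow cascade of 1x1 convolutions
                nn.Conv2d(C * K, 256, kernel_size=1), nn.BatchNorm2d(256), nn.ReLU(),
                nn.Conv2d(256, 256, kernel_size=1), nn.BatchNorm2d(256), nn.ReLU())
            self.classifier = nn.Linear(256, n_classes)

        def forward(self, x):
            # Output shape assumed to be (B, C, K, M/2^J, N/2^J); check against the library.
            s = self.scat(x)
            s = s.view(s.size(0), -1, s.size(-2), s.size(-1))
            h = self.encode(s)
            h = h.mean(-1).mean(-1)  # global average pooling over the spatial dimensions
            return self.classifier(h)

The point the paper makes is that only the 1x1 cascade and the classifier are trained; the scattering coefficients themselves are fixed geometric features supplied to the learned part of the network.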

Olivier mentioned it this morning on his Twitter feed: The Shattered Gradients Problem: If resnets are the answer, then what is the question? by David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, Brian McWilliams

A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients. The problem has largely been overcome through the introduction of carefully constructed initializations and batch normalization. Nevertheless, architectures incorporating skip-connections such as resnets perform much better than standard feedforward architectures despite well-chosen initialization and batch normalization. In this paper, we identify the shattered gradients problem. Specifically, we show that the correlation between gradients in standard feedforward networks decays exponentially with depth, resulting in gradients that resemble white noise. In contrast, the gradients in architectures with skip-connections are far more resistant to shattering, decaying sublinearly. Detailed empirical evidence is presented in support of the analysis, on both fully-connected networks and convnets. Finally, we present a new "looks linear" (LL) initialization that prevents shattering. Preliminary experiments show the new initialization allows very deep networks to be trained without the addition of skip-connections.
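
As a rough illustration of the "shattering" diagnostic, and not the paper's actual experiments, the sketch below compares how correlated the input gradients of a plain deep ReLU network and of a skip-connected variant remain across neighbouring inputs; the widths, depths and correlation measure are arbitrary choices made here for the example.

    import torch
    import torch.nn as nn

    def make_blocks(width, depth):
        return nn.ModuleList([nn.Sequential(nn.Linear(width, width), nn.ReLU())
                              for _ in range(depth)])

    class PlainNet(nn.Module):
        def __init__(self, width=128, depth=20):
            super(PlainNet, self).__init__()
            self.blocks, self.out = make_blocks(width, depth), nn.Linear(width, 1)
        def forward(self, x):
            for b in self.blocks:
                x = b(x)                 # plain feedforward stack
            return self.out(x)

    class SkipNet(PlainNet):
        def forward(self, x):
            for b in self.blocks:
                x = x + b(x)             # same blocks, but with skip-connections
            return self.out(x)

    def input_gradients(net, n=64, width=128):
        # Gradient of the scalar output w.r.t. inputs taken along a 1-D path of inputs.
        t = torch.linspace(-1, 1, n).unsqueeze(1)
        x = (t * torch.ones(1, width)).requires_grad_(True)
        net(x).sum().backward()
        return x.grad

    def neighbour_correlation(g):
        # Mean cosine similarity between gradients at neighbouring inputs along the path.
        g = g / g.norm(dim=1, keepdim=True).clamp_min(1e-12)
        return (g[:-1] * g[1:]).sum(1).mean().item()

    torch.manual_seed(0)
    for net in (PlainNet(), SkipNet()):
        print(type(net).__name__, neighbour_correlation(input_gradients(net)))

With skip-connections the gradient is dominated by the identity path, so gradients at nearby inputs stay similar; in the plain network the changing ReLU activation patterns make them decorrelate much faster with depth, which is the white-noise behaviour the abstract describes.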


 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
