Neural Networks Optimization

Convolutional neural networks have been an active research topic in Computer Vision because they have achieved state-of-the-art results in numerous tasks, such as object detection and face verification. Recent works have explored the design of architectures, which is key to improving the performance of convolutional neural networks. It has been demonstrated that deeper architectures achieve better results. However, they are computationally expensive, have a large number of parameters, and consume considerable memory. To handle this problem, recent approaches have proposed pruning methods, which consist of finding and removing unimportant filters in these networks.

The main idea behind pruning neural networks is that there may be a large number of unimportant and redundant neurons that could be eliminated (i.e., the network exceeds the required capacity). Therefore, it is possible to reduce the network size while maintaining its original performance. Inspired by this idea, we propose a new approach to efficiently remove filters from deep convolutional neural networks, based on Partial Least Squares (PLS) and the Variable Importance in Projection (VIP) score. In this approach, the importance of each filter is estimated, and the least important ones are removed, as illustrated in the sketch below.
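To make the idea concrete, the following is a minimal sketch of how a PLS + VIP importance ranking could be computed with scikit-learn. It relies on simplifying assumptions rather than the exact pipeline of the paper: each filter is summarized by a single feature column of X (for example, a pooled activation statistic over the training set), the data and the 25% pruning ratio are synthetic placeholders, and the VIP score follows its standard definition.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """Variable Importance in Projection (VIP) for a fitted PLS model."""
    t = pls.transform(X)   # latent scores, shape (n_samples, n_components)
    w = pls.x_weights_     # X weights, shape (n_features, n_components)
    q = pls.y_loadings_    # Y loadings, shape (n_targets, n_components)
    p = w.shape[0]
    # Variance of the response explained by each latent component.
    ssy = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)
    w_norm = w / np.linalg.norm(w, axis=0, keepdims=True)
    return np.sqrt(p * (w_norm ** 2 @ ssy) / ssy.sum())

# Hypothetical setup: one column of X per filter (e.g., a pooled activation
# statistic per training sample); y holds the class labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))     # 500 samples, 64 filters
y = rng.integers(0, 2, size=(500, 1))  # binary labels

pls = PLSRegression(n_components=2).fit(X, y)
importance = vip_scores(pls, X)

# Remove the 25% least important filters (the ratio is an arbitrary example).
num_prune = int(0.25 * importance.size)
to_remove = np.argsort(importance)[:num_prune]
print("filter indices to prune:", to_remove)
```

In this sketch, filters with the lowest VIP scores contribute least to the PLS projection that relates filter responses to the labels, so they are the candidates for removal; the pruned network would then be fine-tuned to recover any lost accuracy.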

Related Publications

Artur Jordao, Ricardo Kloss, Fernando Yamada, William Robson Schwartz. Pruning Deep Neural Networks using Partial Least Squares. ArXiv e-prints, 2018.