
Deep Network Compression based on Partial Least Squares



Elsevier’s journal Neurocomputing publishes articles describing key recent contributions to the field of neurocomputing. The next issue will feature the work of our member Artur Jordão as one of the relevant applications in this area of knowledge.

In the context of artificial intelligence and machine learning, convolutional networks have driven a series of advances in computer vision and natural language processing tasks. However, these models require large amounts of processing power and memory, which limits their use on systems with constrained resources.

This work proposes a new strategy that removes neurons from convolutional networks to reduce their computational cost while preserving their predictive ability. The approach identifies low-importance neurons in a discriminative and efficient way, so that removing them affects the network’s accuracy little or not at all. These neurons are identified with the Partial Least Squares (PLS) feature projection technique, which models the relationship between neurons and the network’s prediction ability.
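As an illustration of the idea, the sketch below ranks filters by a PLS-based importance score. It is a minimal example, not the authors’ implementation: it assumes filter responses have already been summarized into one value per filter (for instance, by global average pooling), uses scikit-learn’s PLSRegression, and scores filters with the standard Variable Importance in Projection (VIP) measure. The function names and the 30% pruning ratio are illustrative.

```python
# Minimal sketch of PLS-based filter ranking for pruning.
# Assumption: `features` holds pooled activations, one column per filter.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rank_filters_with_pls(features, labels, n_components=2):
    """Rank filters by a PLS importance score (VIP).

    features: (n_samples, n_filters) pooled filter activations.
    labels:   (n_samples,) integer class labels.
    Returns filter indices sorted from least to most important.
    """
    Y = np.eye(labels.max() + 1)[labels]           # one-hot class targets
    pls = PLSRegression(n_components=n_components)
    pls.fit(features, Y)

    # Variable Importance in Projection (VIP), a common PLS importance score.
    T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = features.shape[1]
    ss = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)  # variance per component
    w_norm = (W / np.linalg.norm(W, axis=0)) ** 2
    vip = np.sqrt(p * (w_norm @ ss) / ss.sum())
    return np.argsort(vip)                         # least important first

# Usage: mark the 30% of filters with the lowest VIP scores for removal.
rng = np.random.default_rng(0)
feats = rng.normal(size=(512, 64))                 # 512 samples, 64 filters
labs = rng.integers(0, 10, size=512)               # 10 classes
order = rank_filters_with_pls(feats, labs)
to_prune = order[: int(0.3 * len(order))]
```

In this sketch, pruning the filters in `to_prune` and fine-tuning the network briefly would correspond to one iteration of the remove-and-recover loop that such compression strategies typically follow.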

The results show that computationally prohibitive networks can be made more efficient without degrading their accuracy. Compared with other techniques for compressing and accelerating convolutional networks, the strategy achieves state-of-the-art results and, in most cases, is significantly more efficient.
