Improved Knowledge Distillation with Dynamic Network Pruning


Thesis Type: Postgraduate

Institution Of The Thesis: Middle East Technical University, Graduate School of Natural and Applied Sciences, Turkey

Approval Date: 2019

Thesis Language: English

Student: Eren Şener

Supervisor: Emre Akbaş

Abstract:

Deploying convolutional neural networks on mobile or embedded devices is often infeasible due to limited memory and computational resources. This is particularly problematic for the most successful networks, which tend to be very large and require long inference times. Many approaches have been developed for compressing neural networks, based on pruning, regularization, quantization, or distillation. In this thesis, we propose Knowledge Distillation with Dynamic Pruning (KDDP), which trains a dynamically pruned compact student network under the guidance of a large teacher network. In KDDP, we train the student network with supervision from the teacher network while applying L1 regularization to the neuron activations of a fully-connected layer, and subsequently prune the inactive neurons. Our method determines the final size of the student model automatically. We evaluate the compression rate and accuracy of the resulting networks on image classification datasets and compare them to results obtained with standard Knowledge Distillation (KD). Compared to KD, our method achieves higher accuracy and produces more compact models.
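
The sketch below illustrates the kind of training objective the abstract describes: a standard distillation loss combined with an L1 penalty on the activations of one fully-connected student layer, followed by a check for inactive neurons to prune. It is a minimal sketch in PyTorch, not the thesis implementation; the temperature, loss weights, activation threshold, and the assumption that the student model returns its fully-connected activations alongside its logits are all illustrative choices.

```python
# Minimal sketch (PyTorch), assuming a softened-logit distillation loss
# plus an L1 sparsity penalty on one fully-connected layer's activations.
# Hyperparameters (T, alpha, l1_weight, threshold) are illustrative only.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Distillation objective: softened KL term from the teacher
    plus hard-label cross-entropy on the ground-truth targets."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

def train_step(student, teacher, x, targets, optimizer, l1_weight=1e-4):
    """One training step: teacher supervision plus L1 on the student's
    fully-connected activations (assumes student(x) returns both)."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits, fc_act = student(x)
    loss = kd_loss(student_logits, teacher_logits, targets)
    loss = loss + l1_weight * fc_act.abs().mean()  # activation sparsity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def inactive_neurons(mean_abs_activation, threshold=1e-3):
    """Indices of neurons whose average absolute activation is near
    zero; these are candidates for pruning, which shrinks the layer."""
    return (mean_abs_activation < threshold).nonzero().flatten()
```

In this reading, the L1 term drives many fully-connected activations toward zero during distillation, and removing the corresponding neurons afterwards is what lets the final student size be determined automatically rather than fixed in advance.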