Abstract

Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves educated student models by a significant margin. In particular for metric learning, it allows students to outperform their teachers, achieving state-of-the-art results on standard benchmark datasets.
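
As a concrete illustration of the distance-wise (RKD-D) and angle-wise (RKD-A) losses mentioned above, below is a minimal PyTorch sketch, not the released code: the function names (rkd_distance_loss, rkd_angle_loss) and the loss weights in the usage example are illustrative. Following the paper, pairwise distances are normalized by their mini-batch mean, and the Huber (smooth L1) loss penalizes differences between the teacher's and student's relational structures.

```python
import torch
import torch.nn.functional as F


def rkd_distance_loss(teacher, student):
    """Distance-wise loss (RKD-D): match normalized pairwise distances."""
    with torch.no_grad():
        t_d = torch.pdist(teacher, p=2)   # teacher pairwise distances
        t_d = t_d / t_d.mean()            # normalize by mean distance in the batch
    s_d = torch.pdist(student, p=2)
    s_d = s_d / s_d.mean()
    return F.smooth_l1_loss(s_d, t_d)     # Huber loss on the distance structure


def rkd_angle_loss(teacher, student):
    """Angle-wise loss (RKD-A): match angles formed by triplets of embeddings."""
    with torch.no_grad():
        td = teacher.unsqueeze(0) - teacher.unsqueeze(1)   # pairwise difference vectors
        td = F.normalize(td, p=2, dim=2)
        t_angle = torch.bmm(td, td.transpose(1, 2)).view(-1)
    sd = student.unsqueeze(0) - student.unsqueeze(1)
    sd = F.normalize(sd, p=2, dim=2)
    s_angle = torch.bmm(sd, sd.transpose(1, 2)).view(-1)
    return F.smooth_l1_loss(s_angle, t_angle)


# Usage sketch on a mini-batch of embeddings (relative weights are illustrative).
teacher_emb = torch.randn(8, 512)                        # e.g. ResNet50-512 teacher
student_emb = torch.randn(8, 64, requires_grad=True)     # smaller student embedding
loss = rkd_distance_loss(teacher_emb, student_emb) \
     + 2.0 * rkd_angle_loss(teacher_emb, student_emb)
loss.backward()
```

Depending on the task, such RKD terms can be used on their own or added to the student's original task loss.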

Relational KD vs Individual KD (conventional KD)

Figure 1. Individual knowledge distillation (IKD) vs. relational knowledge distillation (RKD). While conventional KD (IKD) transfers individual outputs of the teacher directly to the student, RKD extracts relational information using a relational potential function ψ(·), and transfers the information from the teacher to the student.
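
In the notation of the figure, the general RKD objective can be sketched as follows (a paraphrase of the paper's formulation; ℓ is a loss that penalizes differences between relations, such as the Huber loss, and f_T, f_S denote the teacher and student mappings):

```latex
\mathcal{L}_{\mathrm{RKD}}
  = \sum_{(x_1,\ldots,x_n)}
    \ell\!\left(\psi(t_1,\ldots,t_n),\; \psi(s_1,\ldots,s_n)\right),
\qquad t_i = f_T(x_i),\quad s_i = f_S(x_i).
```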

Results on Metric Learning

1. Distillation to smaller networks

Figure 2. Recall@1 on CUB-200-2011 and Cars 196. The teacher is based on ResNet50-512. Model-d refers to a network with a d-dimensional embedding. ‘O’ indicates models trained with ℓ2 normalization, while ‘X’ represents ones without it.

2. Comparison with state-of-the-art methods

Figure 3. Recall@K comparison with the state of the art on CUB-200-2011, Cars 196, and Stanford Online Products. We divide methods into two groups according to the backbone networks used. Model-d refers to a model with a d-dimensional embedding. Boldface marks the best-performing model for each backbone, while underline marks the best among all models.

Qualitative Results on Metric Learning

Figure 4. Retrieval results on the CUB-200-2011 and Cars 196 datasets. The top eight retrieved images are shown from left to right. Green and red bounding boxes indicate positive and negative images, respectively. T denotes the teacher trained with the triplet loss, while S denotes the student trained with RKD-DA. For these examples, the student gives better results than the teacher.

Paper

Relational Knowledge Distillation
Wonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho
CVPR, 2019
[arXiv] [Bibtex]

Code

Check out our GitHub repository: [github]