We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking ground truths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify the target in its domain. We train the network with respect to each domain iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers of the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating candidate windows randomly sampled around the previous target state. The proposed algorithm demonstrates outstanding performance compared with state-of-the-art methods on existing tracking benchmarks.
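The online tracking step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `score_fn` stands in for the pretrained shared layers plus the online-updated binary classification branch, and the sampling parameters (candidate count, translation and scale sigmas) are assumed for illustration rather than taken from the paper.

```python
import numpy as np

def sample_candidates(prev_state, n=256, trans_sigma=0.1, scale_sigma=0.05):
    """Draw n candidate boxes (x, y, w, h) around the previous target state.

    Translation is perturbed relative to the target size and scale is
    perturbed multiplicatively; the sigma values here are illustrative.
    """
    x, y, w, h = prev_state
    dx = np.random.randn(n) * trans_sigma * w
    dy = np.random.randn(n) * trans_sigma * h
    sc = np.exp(np.random.randn(n) * scale_sigma)  # multiplicative scale noise
    return np.stack([x + dx, y + dy, w * sc, h * sc], axis=1)

def track_frame(prev_state, score_fn, n=256):
    """One tracking step: score every sampled candidate with the (hypothetical)
    classifier `score_fn` and return the highest-scoring box as the new state."""
    candidates = sample_candidates(prev_state, n)
    scores = np.array([score_fn(c) for c in candidates])
    best_idx = int(np.argmax(scores))
    return candidates[best_idx], float(scores[best_idx])
```

In the full method the selected samples would also feed the online update of the classification layer; here only the candidate-evaluation step is shown.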

MDNet (Multi-Domain Network)

Figure 1. The architecture of our Multi-Domain Network (MDNet), which consists of shared layers and multiple branches of domain-specific layers. Yellow and blue bounding boxes denote the positive and negative training samples in each domain, respectively.


1. Results on OTB50 [1]

Figure 2. Precision and success plots on OTB50.

2. Results on OTB100 [2]

Figure 3. Precision and success plots on OTB100.

3. Results on VOT2014 [3]

Figure 4. AR-rank plots by baseline and region-noise experiments on VOT2014.

4. Results on VOT2015 Challenge [4]

Figure 5. Expected average overlap with trackers submitted to VOT2015 challenge (ranked from right to left) [4].


Learning Multi-Domain Convolutional Neural Networks for Visual Tracking
Hyeonseob Nam, Bohyung Han
arXiv, 2015
[arXiv Link] [Bibtex]


The code is available in our GitHub repository: MDNet GitHub Repository

Raw Results

  • Results on OTB100 (ZIP, 242KB)

  • Results on VOT2014 (ZIP, 3.8MB)

  • Results on VOT2015 (ZIP, 4.0MB)

References