# k-means clustering for anchor boxes

In this blog post, I will explain how k-means clustering can be used to determine anchor boxes for object detection. Anchor boxes are used in object detection algorithms like YOLO [1][2] or SSD [3]. However, only YOLOv2 and YOLOv3 [1][2] mention the use of k-means clustering to generate the boxes.

SSD calls them default boxes and applies them to several feature maps. They are configured via a minimum scale, a maximum scale, and a set of aspect ratios.

YOLO, on the other hand, uses only one feature map, and the anchor boxes are configured directly via their width and height. In this post, I will focus on YOLO’s approach, because it is not clear how much SSD would really benefit from clustering.

## Data preparation and metric

In general, bounding boxes for objects are given by tuples of the form $(x_0, y_0, x_1, y_1)$, where $(x_0, y_0)$ are the coordinates of the lower left corner and $(x_1, y_1)$ are the coordinates of the upper right corner. For clustering, however, only the normalized width and height of each box are needed. The input data is then given by $(\frac{x_1 - x_0}{w}, \frac{y_1 - y_0}{h})$, where $w$ is the image width and $h$ the image height. Intuitively, this transformation places all bounding boxes on top of each other, anchored at a common corner.
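This preparation step can be sketched as follows (a minimal sketch with NumPy; `normalize_boxes` is an illustrative name, not the repository code):

```python
import numpy as np

def normalize_boxes(boxes, img_w, img_h):
    """Convert (x0, y0, x1, y1) boxes to normalized (width, height) pairs."""
    boxes = np.asarray(boxes, dtype=float)
    w = (boxes[:, 2] - boxes[:, 0]) / img_w  # normalized widths
    h = (boxes[:, 3] - boxes[:, 1]) / img_h  # normalized heights
    return np.stack([w, h], axis=1)
```

For example, a $50 \times 100$ box in a $100 \times 200$ image maps to $(0.5, 0.5)$.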

In order to apply k-means clustering, we have to use the right metric. According to [1], the standard Euclidean distance causes larger boxes to generate more error than smaller boxes. This problem can be avoided by using the Intersection over Union (IoU) metric, also known as the Jaccard index.

The Jaccard index of two boxes $\mathbf{b_1} = (w_1, h_1)$ and $\mathbf{b_2} = (w_2, h_2)$ can be defined as follows:

$$J(\mathbf{b_1}, \mathbf{b_2}) = \frac{\min(w_1, w_2) \cdot \min(h_1, h_2)}{w_1 h_1 + w_2 h_2 - \min(w_1, w_2) \cdot \min(h_1, h_2)}$$

When both boxes are equal, the numerator equals the denominator and $J(\mathbf{b_1}, \mathbf{b_2}) = 1$. Assuming all the components of $\mathbf{b_i}$ are non-negative real numbers, the Jaccard index always lies between $0$ and $1$.
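In code, this metric could look like the following (a sketch assuming NumPy and boxes with positive width and height; `iou` is an illustrative name):

```python
import numpy as np

def iou(box, clusters):
    """Jaccard index between one (w, h) box and k (w, h) cluster boxes."""
    w = np.minimum(box[0], clusters[:, 0])  # overlap width
    h = np.minimum(box[1], clusters[:, 1])  # overlap height
    intersection = w * h
    union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - intersection
    return intersection / union
```

An identical box yields $1$, while a box with a quarter of the area (and full overlap) yields $0.25$.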

## Clustering

The k-means clustering algorithm does not change much when applied to anchor boxes. At initialization, we can choose $k$ random boxes as our initial means $\mathbf{a_i}$. Then each bounding box $\mathbf{b_p}$ is assigned to the cluster $C_i$ with the nearest mean:

$$C_i = \{\mathbf{b_p} : d_J(\mathbf{b_p}, \mathbf{a_i}) \le d_J(\mathbf{b_p}, \mathbf{a_j}) \text{ for all } j \in [1, k]\}$$

where $d_J(\mathbf{b_p}, \mathbf{a_i}) = 1 - J(\mathbf{b_p}, \mathbf{a_i})$ and $i \in [1, k]$.

The last step is then to calculate the mean of all the boxes that belong to $C_i$ and set this as the new mean $\mathbf{a_i}$. This process can be repeated until convergence.

Alternatively, we can also describe the algorithm in terms of matrices. Let $\mathbf{B}$ be an $n \times 2$ matrix that contains the bounding boxes and $\mathbf{C}$ a $k \times 2$ cluster matrix. Then the $n \times k$ distance matrix can be defined as $\mathbf{D}_{i} = d_J(\mathbf{B}_i, \mathbf{C})$. The smallest value in each row of $\mathbf{D}$ gives the cluster to which the box belongs. After each iteration, the new mean of cluster $i$ is the mean of all bounding boxes in $\mathbf{B}$ whose distance to cluster $i$ is smallest, and $\mathbf{D}$ and $\mathbf{C}$ are recalculated. When $\mathbf{D}$ no longer changes, the algorithm terminates and the anchor boxes are given by the rows of $\mathbf{C}$.
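The distance matrix $\mathbf{D}$ from the matrix formulation can be computed with NumPy broadcasting (a sketch, not the repository code; `distance_matrix` is an illustrative name):

```python
import numpy as np

def distance_matrix(B, C):
    """n x k matrix of 1 - IoU distances between boxes B (n x 2) and clusters C (k x 2)."""
    # Broadcast (n, 1) against (1, k) to get all pairwise intersections
    inter = (np.minimum(B[:, None, 0], C[None, :, 0])
             * np.minimum(B[:, None, 1], C[None, :, 1]))
    union = B[:, 0:1] * B[:, 1:2] + C[None, :, 0] * C[None, :, 1] - inter
    return 1.0 - inter / union

# Nearest cluster per box: np.argmin(distance_matrix(B, C), axis=1)
```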

## Implementation

I also put the code on GitHub with some more comments, tests and graphs [4].
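The repository code is not reproduced here, but a minimal self-contained sketch of the whole procedure, under the definitions above, might look like this (the function name, seeding, and the empty-cluster handling are my own choices):

```python
import numpy as np

def kmeans_iou(boxes, k, seed=0):
    """Cluster normalized (w, h) boxes into k anchors using d = 1 - IoU."""
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    # k random boxes as the initial means
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    prev = None
    while True:
        # n x k intersection/union, assuming all boxes share one corner
        inter = (np.minimum(boxes[:, None, 0], clusters[None, :, 0])
                 * np.minimum(boxes[:, None, 1], clusters[None, :, 1]))
        union = (boxes[:, 0:1] * boxes[:, 1:2]
                 + clusters[None, :, 0] * clusters[None, :, 1] - inter)
        nearest = np.argmin(1.0 - inter / union, axis=1)
        if prev is not None and np.array_equal(nearest, prev):
            return clusters  # assignments are stable: converged
        for i in range(k):
            members = boxes[nearest == i]
            if len(members):  # keep the old mean if a cluster is empty
                clusters[i] = members.mean(axis=0)
        prev = nearest
```

On a toy set of two small and two large boxes with $k = 2$, this recovers one small and one large anchor.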

## References

[1] J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger”. http://arxiv.org/abs/1612.08242

[2] J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement”. https://pjreddie.com/media/files/papers/YOLOv3.pdf

[3] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single Shot MultiBox Detector”. https://arxiv.org/abs/1512.02325
