Ismail Mebsout

Photo by Skitterphoto downloaded from Pexels

Object detection & Face recognition algorithms

Convolutional Neural Networks-Part 2: Detailed convolutional architectures enabling object-detection and face-recognition algorithms

Convolutional neural networks are widely used in addressing image-based problems such as object/character detection and face recognition. In this article, we will focus on the most famous architectures, from LeNet to Siamese networks, most of which share the following overall structure:

Image by Author

If you don’t have any knowledge about convolutional neural networks, I advise you to read the first part of this article, discussing the fundamentals of CNNs.

NB: Since Medium does not support LaTeX, the mathematical expressions are inserted as images. Hence, I advise you to turn the dark mode off for a better reading experience.

Table of contents

1. Cross-Entropy
2. Image classification
3. Object detection — YOLOv3
4. Face Recognition — Siamese Networks

1- Cross-Entropy

When classifying an image, we often use a softmax layer of size (C,1) as the last layer, where C is the number of classes in question. The i-th entry of this vector is the probability that the input image belongs to class i. The predicted class is the one with the highest probability.

The network learns through backpropagation and minimizes the cross-entropy defined as follows:

Where

  • p(x,class) is the reference probability: it equals 1 if the object x really belongs to the class in question and 0 otherwise
  • q(x,class) is the probability, learned by the network through the softmax, that the object x belongs to that class

For an input x belonging to class j:

Thus, we set the loss function as:

We average the loss, where m is the size of the training set.
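Since the original formulas are embedded as images, here is a LaTeX reconstruction based on the definitions above (the notation is mine):

$$H(p, q) = -\sum_{\text{class}} p(x, \text{class}) \, \log q(x, \text{class})$$

For an input x of class j, p is one-hot, so the cross-entropy reduces to $-\log q(x, \text{class}_j)$, and the loss averaged over the m training examples is

$$\mathcal{L} = -\frac{1}{m} \sum_{i=1}^{m} \log q\big(x^{(i)}, \text{class}_{j^{(i)}}\big)$$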

2- Image classification

LeNet — Digits Recognition

LeNet is an architecture developed by Yann LeCun which aims at recognizing the digit present in the input. Given gray-scale images of hand-written digits from 0 to 9, the convolutional neural network predicts the digit in the image. The training set is MNIST, a dataset containing more than 70k images of 28x28x1 pixels. The neural network has the following architecture, counting more than 60k parameters:

Official paper of LeNet

For more details, I advise you to read the official paper.
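As an illustration, here is a minimal PyTorch sketch of a LeNet-style network; the layer sizes follow the classic LeNet-5 design and land near the 60k parameters mentioned above, but the padding and activation choices are assumptions, not the author's exact configuration:

```python
import torch
import torch.nn as nn

class LeNet(nn.Module):
    """LeNet-style CNN for 28x28x1 MNIST digits (~60k parameters)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28x1 -> 28x28x6
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 14x14x6
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10x16
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 5x5x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),                 # logits; softmax lives in the loss
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Training pairs naturally with nn.CrossEntropyLoss, i.e. the cross-entropy of section 1.
```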

AlexNet

AlexNet is a famous architecture which won the ImageNet competition in 2012. It is similar to LeNet but has more layers, uses dropout, and relies mostly on the ReLU activation function.

Official paper of AlexNet

The training set is a subset of the ImageNet database, a collection of about 15 million high-resolution labeled images representing more than 22k categories. AlexNet used more than 1.2 million images for training, 50k for validation and 150k for testing, all resized to 227x227x3. The architecture has more than 60 million parameters and was therefore trained on 2 GPUs; it outputs a softmax vector of size (1000,1).

For more information, I advise you to read the official paper.

VGG-16

VGG-16 is a convolutional neural network for image classification, trained on the same ImageNet dataset, with more than 138 million parameters trained on GPUs. The architecture is the following:

Official paper of VGG-16

It is more accurate and deeper than AlexNet since it replaced the large 11x11 and 5x5 kernels with stacks of successive 3x3 kernels. For more details, check the official paper of the VGG project.
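As a rough back-of-the-envelope check of why stacked 3x3 kernels are preferred (this arithmetic is mine, not the article's): two stacked 3x3 convolutions cover the same 5x5 receptive field, but for C input and output channels they cost about $2 \cdot (3 \cdot 3 \cdot C^2) = 18C^2$ weights instead of $5 \cdot 5 \cdot C^2 = 25C^2$, while inserting an extra non-linearity between them.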

3- Object detection — YOLO

Object detection is the task of detecting multiple objects in an image, which comprises both object localization and object classification. A first rough approach would be to slide a window of customizable dimensions over the image and predict, each time, the class of its content using a network trained on cropped images. This process has a high computational cost and can fortunately be automated with convolutions. YOLO stands for You Only Look Once, and the basic idea consists of placing a grid on the image (usually 19x19) where:

Only one cell, the one containing the center/midpoint of an object, is responsible for detecting this object

Each cell of the grid (i,j) is labeled as follows:

Image by Author

Hence, for each image the target output is of size:
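The label and output-size formulas are images in the original; a standard reconstruction consistent with the description above is

$$y_{i,j} = \big[\, p_c,\; b_x,\; b_y,\; b_h,\; b_w,\; c_1,\; \dots,\; c_N \,\big]^{T}$$

where p_c indicates whether an object's midpoint falls in the cell, (b_x, b_y, b_h, b_w) describe its bounding box and c_1, …, c_N are the class indicators, so that with a 19x19 grid the target output for a whole image has size

$$19 \times 19 \times (5 + N)$$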

IOU & NMS

In order to evaluate the object localization, we use the Intersection Over Union (IoU), which measures the overlap between two bounding boxes:

Image by Author
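In formula form (shown as a figure in the original):

$$\text{IoU}(B_1, B_2) = \frac{\operatorname{area}(B_1 \cap B_2)}{\operatorname{area}(B_1 \cup B_2)}$$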

When predicting the bounding box of a given object, several candidate boxes might be produced; Non-Max Suppression (NMS) ensures that the object is detected only once. It keeps the box with the highest probability and suppresses the other boxes that have a high overlap (IoU) with it. For each cell of the grid, the algorithm is the following:

Image by Author
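The algorithm itself is given as a figure in the original; below is a minimal NumPy sketch of non-max suppression consistent with the description above (the (x1, y1, x2, y2) box format and the 0.5 threshold are assumptions):

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring boxes and drop boxes that overlap them too much."""
    order = np.argsort(scores)[::-1]      # indices sorted by descending probability
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        # Suppress the remaining boxes whose IoU with the best box is too high
        order = np.array([i for i in order[1:]
                          if iou(boxes[best], boxes[i]) <= iou_threshold])
    return keep
```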

Anchor Boxes

A grid cell might contain the midpoints of several objects; anchor boxes allow the detection of all of them. In the case of 2 anchor boxes, each cell of the grid is labeled as follows:

Image by Author

More generally, the output target is of size:

Where N is the number of classes and M the number of anchor boxes.
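Reconstructed from this description (the original formula is an image): with M anchor boxes, each grid cell stacks the (5 + N)-dimensional block [p_c, b_x, b_y, b_h, b_w, c_1, …, c_N] once per anchor, so the full target for an image has size

$$19 \times 19 \times M \times (5 + N)$$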

YOLOv3 Algorithm

YOLO was trained on the COCO dataset, a large-scale object detection, segmentation and captioning database with 80 object categories. YOLOv3 uses the Darknet-53 architecture as a feature extractor, also called a backbone.

The training is carried out by minimizing a loss function with gradient-based methods as well. This loss combines:

  • Logistic regression loss on p_c
  • Squared error loss for b_i
  • Softmax loss (cross-entropy) for the class probabilities c_i

At each epoch, in each cell, we generate the output y_(i,j) and evaluate the loss function. When making predictions, we check that p_c is high enough; for each grid cell, we get rid of low-probability predictions and apply non-max suppression for each class to generate the final output, as sketched below. For more information, I advise you to read the official paper.
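As a rough sketch of this prediction step (the tensor layout and the 0.6 threshold are assumptions, and non_max_suppression is the helper sketched in the IOU & NMS section):

```python
import numpy as np

def filter_predictions(boxes, class_probs, objectness, p_c_threshold=0.6):
    """Keep confident boxes, then run non-max suppression class by class.

    boxes:        list of (x1, y1, x2, y2), one per grid cell / anchor box
    class_probs:  array of shape (num_boxes, num_classes) from the softmax
    objectness:   array of p_c values, one per box
    """
    scores = objectness[:, None] * class_probs        # class-specific confidence
    detections = []
    for c in range(class_probs.shape[1]):
        # Get rid of low-probability predictions for this class
        ids = [i for i in range(len(boxes)) if scores[i, c] > p_c_threshold]
        if not ids:
            continue
        cand_boxes = [boxes[i] for i in ids]
        cand_scores = np.array([scores[i, c] for i in ids])
        # Non-max suppression for each class, as described above
        for k in non_max_suppression(cand_boxes, cand_scores):
            detections.append((cand_boxes[k], c, float(cand_scores[k])))
    return detections
```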

4- Face Recognition — Siamese Networks

Siamese networks are neural networks, often convolutional, which make it possible to compute the degree of similarity between two inputs, images in our case, as follows:

Image by Author

The purpose of the CNN module is to represent the information of the image in another space, called the embedding space, through a function f. The two embeddings are then compared using a certain distance. Learning in Siamese networks is done by minimizing an objective function built on a loss function called the triplet loss.

The triplet loss takes 3 vector variables as input: an anchor A, a positive P (similar to A) and a negative N (different from A). Thus, we are looking to have:

Where ∥x∥²=<x,x> for a given scalar product.

To prevent the learned function f from collapsing to the zero function, we introduce a margin 0<α≤1 so that:

Thus, we define the loss function as follows:

Starting from a learning database of size n, the objective function to be minimized is:
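Written out in LaTeX (the original equations are images), the chain of constraints and the resulting loss read:

$$\|f(A) - f(P)\|^{2} \leq \|f(A) - f(N)\|^{2}$$

$$\|f(A) - f(P)\|^{2} + \alpha \leq \|f(A) - f(N)\|^{2}$$

$$\mathcal{L}(A, P, N) = \max\big(\|f(A) - f(P)\|^{2} - \|f(A) - f(N)\|^{2} + \alpha,\; 0\big)$$

$$J = \sum_{i=1}^{n} \mathcal{L}\big(A^{(i)}, P^{(i)}, N^{(i)}\big)$$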

When training the architecture, for each epoch we fix the number of triplets, and for each one:

  • We randomly choose two images of the same class (Anchor & Positive)
  • We randomly pick an image from another class (Negative)

A triplet (A, P, N) can be:

  • Easy negative, when ∥f(A)−f(P)∥²+α−∥f(A)−f(N)∥²≤0
  • Semi-hard negative, when ∥f(A)−f(P)∥²+α>∥f(A)−f(N)∥²>∥f(A)−f(P)∥²
  • Hard negative, when ∥f(A)−f(N)∥²<∥f(A)−f(P)∥²

We usually choose to focus on the semi-hard negatives to train the neural network.
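A minimal PyTorch sketch of this loss (the margin value of 0.2 and the batch layout are assumptions; PyTorch also ships a built-in nn.TripletMarginLoss, which the article does not mention):

```python
import torch

def triplet_loss(f_a: torch.Tensor, f_p: torch.Tensor, f_n: torch.Tensor,
                 alpha: float = 0.2) -> torch.Tensor:
    """Triplet loss over a batch of embeddings f(A), f(P), f(N) of shape (batch, dim)."""
    pos_dist = (f_a - f_p).pow(2).sum(dim=1)   # ||f(A) - f(P)||^2
    neg_dist = (f_a - f_n).pow(2).sum(dim=1)   # ||f(A) - f(N)||^2
    return torch.clamp(pos_dist - neg_dist + alpha, min=0.0).mean()

# Semi-hard triplets are those with pos_dist < neg_dist < pos_dist + alpha:
# the loss is positive even though the negative is farther away than the positive.
```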

Application: Face Recognition

Siamese networks can be used to develop a system capable of identifying faces. Given an image taken by a camera, the architecture compares it to all the images in the database. Since we cannot have multiple images of the same person in our database, we usually train the Siamese network on an open-source image set rich enough to create the triplets.

Image by Author

The convolutional neural network learns the embedding function f of the image. Given a camera picture, we compare it to each image_j of the database such that:

  • If d(f(image), f(image_j)) ≤ τ, both images represent the same person
  • If d(f(image), f(image_j)) > τ, the images are of two different persons

We then choose the face image_j which is the closest to the input image in terms of the distance d. The threshold τ is chosen, for example, such that the F1-score is the highest.
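A minimal sketch of this identification step (the embedding model is assumed to exist already; the Euclidean distance and the value of τ are assumptions):

```python
import numpy as np

def identify(query_embedding: np.ndarray,
             database: dict,
             tau: float = 0.7):
    """Return the name of the closest face in the database, or None if none is within tau.

    database maps a person's name to the embedding f(image_j) of their reference photo.
    """
    best_name, best_dist = None, float("inf")
    for name, embedding in database.items():
        dist = np.linalg.norm(query_embedding - embedding)   # d(f(image), f(image_j))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= tau else None
```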

Conclusion

CNNs are widely used architectures in image processing; they enable better and faster results. Recently, they have also been used in text processing, where the input of the network is the embedding of the tokens instead of the pixels of an image.

Do not hesitate to check the first part of this article, which deals with the fundamentals of CNNs.


Originally published at https://www.ismailmebsout.com on February 15, 2019.
