Advanced Machine Learning Techniques for Coders: Image classification with transfer learning


An image classifier is the workhorse of many AI and ML applications. It takes an image, analyzes its content, and assigns it to one of a set of classes (e.g. dog, cat, banana).

In this article I’ll show you how to create your own image classifier using a pre-trained model and Python deep learning libraries (Keras and fastai).

If you want to build your own image classifier, but don’t have thousands of training images for each category you want to classify, transfer learning is the way to go!


This is part 2 of Advanced Machine Learning Techniques for Coders.

In this post, you’ll learn about transfer learning: a technique for training models with limited data. You’ll also learn how to build your own image classifier and apply it to facial recognition using a dataset of celebrity images.

Transfer learning

If you don’t have a lot of data, transfer learning can be an excellent choice for training your model.

Transfer learning is a technique that allows us to use models that have already been trained on large datasets, and adapt them for our own problems. This means we don’t have to start from scratch, which is great if we’re short on time or if our dataset is small relative to what others have used in the past. If your dataset is large enough, it’s usually better to train your own model (more on this below).

It’s also worth noting that transfer learning isn’t just limited to images. It can be used in many other applications such as natural language processing (NLP) and speech recognition.

How to use transfer learning for image classification and build a facial recognition model.

The dataset we are going to use contains images of celebrities: https://github.com/yashk2810/Facial-Recognition/tree/master/Dataset

We will use the VGG16 model, pre-trained on ImageNet, for this classification task (more on this later). I’m assuming you have basic knowledge of Python and machine learning techniques; if not, you can check out my previous blog posts.
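
To give you a feel for it, here’s roughly what pulling down the pre-trained VGG16 base will look like in Keras once everything is installed. This is a minimal sketch, assuming TensorFlow 2 with Keras bundled; we drop the original ImageNet classification head because we’ll attach our own later:

from tensorflow.keras.applications import VGG16

# Download VGG16 with the weights learned on ImageNet, without the
# original 1000-class classification head (include_top=False).
base_model = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the convolutional base so its weights stay fixed
# while we train our own classification head on top.
base_model.trainable = False

base_model.summary()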

First, we need to install all the dependencies. If you are using Python 3, run:

pip3 install -r requirements.txt

Now that we have our dependencies installed, let’s move on to the next step: loading the data.
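
The exact folder layout in the linked repository may differ, so treat the snippet below as a sketch: it assumes the images are arranged in one sub-folder per person (the layout tf.keras.utils.image_dataset_from_directory expects) and that "Dataset" is the local path you cloned them to.

import tensorflow as tf

# Assumed local path; adjust to wherever you downloaded the dataset.
DATA_DIR = "Dataset"

# Build training and validation sets from the folder structure,
# resizing everything to VGG16's expected 224x224 input size.
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=(224, 224),
    batch_size=32,
)

val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=(224, 224),
    batch_size=32,
)

print(train_ds.class_names)  # the folder names become the class labels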

In the rest of this article, I’ll go through the steps you need to get started with building your own image classification model. The best part is that we’ll be using a technique called transfer learning, which allows us to take a pre-trained neural network and adapt it for our own purposes.

Transfer learning shortcuts a lot of the trial and error of training a model from scratch by taking a fully-trained model for a set of categories, such as the ImageNet classes, and retraining from the existing weights for new classes. This is especially useful when you have limited data and/or computing power.

In this post we’ll show how to do that in Keras. We’ll use VGG16, but the same technique works with other image classification models as well.
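
Here’s a minimal sketch of what that looks like in Keras. To be clear about the assumptions: the frozen base is the same VGG16-on-ImageNet model loaded earlier, the new head is a single softmax layer, NUM_CLASSES is a placeholder you’d set to the number of people in your dataset, and train_ds/val_ds are the datasets from the loading step above.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

NUM_CLASSES = 10  # placeholder: set this to the number of classes in your dataset

# Frozen VGG16 base with ImageNet weights (same as the earlier snippet).
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# New classification head, trained from scratch on our small dataset.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = preprocess_input(inputs)              # VGG16's expected input preprocessing
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",  # integer labels from image_dataset_from_directory
    metrics=["accuracy"],
)

# train_ds / val_ds come from the data-loading step above.
model.fit(train_ds, validation_data=val_ds, epochs=5)

Because only the small head is being trained, this runs quickly even on modest hardware, and you can optionally unfreeze the top few VGG16 layers afterwards for a round of fine-tuning at a lower learning rate.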

We’ll also show how to implement this in fastai, which makes it easy to switch between different models and to use a GPU to speed up training.
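
For comparison, here’s a rough fastai equivalent. It’s a sketch, assuming fastai v2 and the same one-folder-per-person layout; it uses resnet34 simply to show how easy switching architectures is (vgg16_bn would be a one-word change):

from fastai.vision.all import *

# DataLoaders built straight from the folder structure,
# with a random 20% of images held out for validation.
path = Path("Dataset")
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42, item_tfms=Resize(224))

# vision_learner downloads pre-trained weights and swaps in a new head for us.
learn = vision_learner(dls, resnet34, metrics=accuracy)

# fine_tune trains the new head first, then unfreezes and trains the whole model.
learn.fine_tune(3)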

To recap: transfer learning is a machine learning technique where a model developed for one task is reused as the starting point for a model on a second task.

