Template Matching — Image Processing

Matt Maulion
4 min read · Feb 1, 2021


Airplanes | Towards Data Science

So far, through image processing techniques, we have learned how to clean and preprocess images, segment objects of interest, transform perspectives, and extract essential features. Just previously, we even applied machine learning to a classification problem. Does the realm of image processing end there? Negative! In this blog, we will tackle another important topic in the field: template matching.

Template Matching

Basically, template matching is the process of locating a template (a.k.a. reference) image within a source/input image. How does the algorithm work? Intuitively, it can be likened to a convolution: the template slides over the source image, one pixel at a time. The result is another image (i.e., a matrix) in which each pixel value measures how similar the template is to the source image when placed at that location. We then look for peak locations in this matrix to find where the template appears in the source image.
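To make the sliding-window intuition concrete, here is a rough, unoptimized sketch of the idea in plain NumPy. The function name naive_match is just illustrative; skimage's match_template computes the same kind of normalized cross-correlation, only much faster:

import numpy as np

def naive_match(source, template):
    """Slide the template over the source and record a normalized
    cross-correlation score at every valid (row, col) offset."""
    h, w = template.shape
    H, W = source.shape
    scores = np.zeros((H - h + 1, W - w + 1))
    t = template - template.mean()
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            patch = source[i:i + h, j:j + w]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            scores[i, j] = (p * t).sum() / denom if denom else 0.0
    return scores  # high values mark likely template locations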

Why is there a need to do template matching in the first place? What are its use cases?

Template matching is primarily used in image recognition and object detection. Remember that advanced approaches in computer vision often rely on pretrained neural networks to perform difficult tasks. What if I told you that you do not need to train a machine learning model to perform simple object detection? All you need is a template and skimage!

Let us see its implementation in Python! But first things first, import the following libraries:

from skimage.io import imread, imshow
from skimage.color import rgb2gray
from skimage.feature import match_template
from skimage.feature import peak_local_max
import matplotlib.pyplot as plt
import numpy as np

We will be using this cute image as we go along our analysis. You can download this image through the link embedded in the caption.

Pikachus | NintendoSoup
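Before anything else, load the image into memory. A minimal sketch, assuming you saved the downloaded file locally as pikachu.jpg (adjust the filename/path to wherever you stored it):

pikachu = imread('pikachu.jpg')  # assumed local filename
imshow(pikachu);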

We shall perform the implementation step by step. Let us see template matching in action!

Step 1: Turn your RGB image into its grayscale equivalent (its binarized form would also work)

Template matching works for color images but is simpler with grayscale or binarized images.

# convert the RGB image to a single-channel grayscale image
pikachu_gray = rgb2gray(pikachu)
imshow(pikachu_gray);
Grayscale Pikachu Image

Step 2: Select a template from the source image (choose a patch)

Refer to the source image and locate the patch of the object you want to use as the template (reference) image.

# crop a patch containing the object of interest (rows 150:430, columns 450:650)
template = pikachu_gray[150:430, 450:650]
imshow(template);

Step 3: Perform template matching using skimage's match_template

# compute the normalized cross-correlation of the template with the source
result = match_template(pikachu_gray, template)
imshow(result, cmap='viridis');

As observed, there are several brightly colored areas in the output image above. If we assume that the template appears only once in the source image, then we can find it by looking for the pixel with the highest value. To do this, implement the code below:

Step 4: Obtain the pixel with the highest value

# row (x) and column (y) of the strongest match
x, y = np.unravel_index(np.argmax(result), result.shape)
print((x, y))

Output: (150, 450)

Notice that this is exactly the top-left (row, column) corner of the patch we cropped in Step 2, which confirms the match.

Step 5: Draw the detected object on the source image using the peak location from Step 4 and the template dimensions

imshow(pikachu_gray)
# template.shape is (rows, columns), i.e. (height, width)
template_height, template_width = template.shape
rect = plt.Rectangle((y, x), template_width, template_height,
                     edgecolor='b', facecolor='none')
plt.gca().add_patch(rect);

Brilliant! There you have it: you captured the cute Pikachu in the middle. Great job.

Want to catch them all?

Fret not, let us see if we can. Implement the code below:

imshow(pikachu_gray)
template_height, template_width = template.shape
# draw a box at every peak whose correlation exceeds the threshold
for x, y in peak_local_max(result, threshold_abs=0.4):
    rect = plt.Rectangle((y, x), template_width, template_height,
                         edgecolor='b', facecolor='none')
    plt.gca().add_patch(rect)

Voila! We have captured two (2) Pikachus now! What did you notice about our implementation before and now? Yes! You are correct: we passed a threshold parameter to peak_local_max in the current implementation, unlike before. Depending on the threshold passed, it returns a bounding box for every location whose resemblance to the template exceeds that value.
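If you want to see how the threshold affects the number of detections before plotting anything, a quick sketch like the one below prints the match count for a few example settings (the specific threshold values here are just illustrative):

# count how many peaks survive at a few example thresholds
for t in (0.3, 0.4, 0.5, 0.6):
    peaks = peak_local_max(result, threshold_abs=t)
    print(f"threshold_abs={t}: {len(peaks)} match(es)")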

That’s it for this article! As an exercise, play around with the threshold value to catch them all! Happy coding and catching.
