Seeing Images Through the Eyes of Decision Trees



In this article, you’ll learn to:

  • Turn unstructured, raw image data into structured, informative features.
  • Train a decision tree classifier for image classification based on extracted image features.
  • Apply the above concepts to the CIFAR-10 dataset for image classification.

Introduction

It’s no secret that decision tree-based models excel at a wide range of classification and regression tasks, typically on structured, tabular data. However, when combined with the right tools, decision trees can also be a powerful predictive tool for unstructured data such as text or images, and even for time series data.

This article demonstrates how decision trees can make sense of image data that has been converted into structured, meaningful features. More specifically, we will show how to turn raw, pixel-level image data into higher-level features that describe image properties like color histograms and edge counts. We’ll then leverage this information to perform predictive tasks, like classification, by training decision trees — all with the aid of Python’s scikit-learn library.

Think about it: we are essentially making a decision tree’s behavior a bit more like how our human eyes work.

Building Decision Trees for Image Classification on Extracted Image Features

The CIFAR-10 dataset we will use for the tutorial is a collection of low-resolution, 32×32 pixel color images, with each pixel being described by three RGB values that define its color.

An excerpt of the CIFAR-10 image dataset

Although other commonly used models for image classification, like neural networks, can process images as grids of pixels, decision trees are designed to work with structured data; hence, our primary goal is to convert our raw image data into this structured format.

We start by loading the dataset, freely available in the TensorFlow library:
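A minimal sketch of this loading step could look like the following (the variable names are illustrative):

```python
# Load the CIFAR-10 dataset, already split into training and test sets
from tensorflow.keras.datasets import cifar10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Human-readable names for the 10 classes, in label order
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

print(x_train.shape, x_test.shape)  # (50000, 32, 32, 3) (10000, 32, 32, 3)
```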

Notice that the loaded dataset is already partitioned into training and test sets, and the output labels (10 different classes) are also separated from the input image data. We just need to allocate these elements correctly using Python tuples, as shown above. For clarity, we also store the class names in a Python list.

Next, we define the core function in our code. This function, called extract_features(), takes an image as input and extracts the desired image features. In our example, we will extract features associated with two main image properties: color histograms for each of the three RGB channels (red, green, and blue), and a measure of edge strength.

The number of bins for each computed color histogram is set to 8, so that the density of information describing the image’s color properties remains at a reasonable level. For edge detection, we use two functions from skimage, rgb2gray and sobel, which together help detect edges on a grayscale version of each original image.

Both subsets of features are put together, and the process repeats for every image in the dataset.
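A sketch of this function, under the assumptions described above (8-bin histograms per RGB channel and the mean Sobel response as the single edge-strength measure), might look as follows:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import sobel

def extract_features(image):
    """Turn one 32x32 RGB image into a small vector of structured features."""
    features = []

    # Color histograms: 8 bins per RGB channel (3 x 8 = 24 features)
    for channel in range(3):
        hist, _ = np.histogram(image[:, :, channel], bins=8, range=(0, 256))
        features.extend(hist)  # raw counts are fine: trees are insensitive to feature scaling

    # Edge strength: mean Sobel response on the grayscale image (1 feature)
    edges = sobel(rgb2gray(image))
    features.append(edges.mean())

    return np.array(features, dtype=np.float32)
```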

We now call the function twice: once for the training set, and once for the test set. 
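Something along these lines (looping over all 60,000 small images can take a couple of minutes):

```python
# Extract structured features for every image in each set
X_train_feat = np.array([extract_features(img) for img in x_train])
X_test_feat = np.array([extract_features(img) for img in x_test])

print(X_train_feat.shape)  # (50000, 25)
```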

The resulting feature vector contains 25 values per image: 8 histogram bins for each of the three RGB channels (24 features), plus a single edge-strength measure.

That was the hard part! Now we are largely ready to train a decision tree-based classifier that takes extracted features instead of raw image data as inputs. If you are already familiar with training scikit-learn models, the whole process is self-explanatory: we just need to make sure we pass the extracted features, rather than the raw images, as the training and evaluation inputs.
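Here is a minimal sketch of the training and evaluation step; the hyperparameters (a default, unpruned tree with a fixed random seed) are illustrative choices:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report

# Train a single decision tree on the extracted features, not the raw pixels
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train_feat, y_train.ravel())

# Evaluate on the test-set features
y_pred = tree_clf.predict(X_test_feat)
print("Accuracy:", accuracy_score(y_test.ravel(), y_pred))
print(classification_report(y_test.ravel(), y_pred, target_names=class_names))
```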

Results: unfortunately, the decision tree performs rather poorly on the extracted image features. And guess what? This is entirely normal and expected.

Reducing a 32×32 color image to just 25 explanatory features is an over-simplification that misses the fine-grained cues and deeper details that help discriminate, for instance, a bird from an airplane, or a dog from a cat. Keep in mind that images belonging to the same class (e.g. ‘airplane’) also show great intra-class variation in properties like color distribution. But the important take-home message here is to learn the mechanics and limitations of image feature extraction for decision tree classifiers; achieving high accuracy is not our main goal in this tutorial!

Nonetheless, would things be any better if we trained a more advanced tree-based model, like a random forest classifier? Let’s find out:
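We only need to swap the estimator; the forest’s hyperparameters below are illustrative:

```python
from sklearn.ensemble import RandomForestClassifier

# An ensemble of trees trained on the same 25 features
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
forest_clf.fit(X_train_feat, y_train.ravel())

y_pred_rf = forest_clf.predict(X_test_feat)
print(classification_report(y_test.ravel(), y_pred_rf, target_names=class_names))
```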

Slight improvement here, but still far from perfect: the forest only earns a pass mark for classifying airplanes and keeps failing on the other nine classes. Eager for some homework? Try applying what we learned in this article to an even simpler dataset, like MNIST or Fashion-MNIST, and see how it performs.

One Last Try: Adding Deeper Features with HOG

If the features extracted so far were arguably too shallow, how about adding features that capture more nuanced aspects of the image? One option is HOG (Histogram of Oriented Gradients), which captures properties like shape and texture, adding a significant number of extra features.

The following code expands the feature extraction process and applies it to train another random forest classifier (fingers crossed).
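One possible sketch reuses extract_features() and appends HOG descriptors computed on the grayscale image. The HOG parameters shown here (orientations, pixels_per_cell, cells_per_block) are illustrative assumptions, and the number of extra features depends directly on them, so your total may differ from the 193 mentioned below:

```python
from skimage.feature import hog

def extract_features_hog(image):
    """Extend the original feature vector with HOG shape/texture descriptors."""
    base = extract_features(image)  # the original 25 color + edge features

    # HOG on the grayscale image; these parameters control the extra feature count
    hog_feat = hog(rgb2gray(image), orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    return np.concatenate([base, hog_feat])

X_train_hog = np.array([extract_features_hog(img) for img in x_train])
X_test_hog = np.array([extract_features_hog(img) for img in x_test])
```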

Training a new classifier (we now have 193 features instead of 25!):
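The training step mirrors the previous one, just on the expanded feature matrices:

```python
# Train and evaluate a random forest on the color + edge + HOG features
forest_hog = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
forest_hog.fit(X_train_hog, y_train.ravel())

y_pred_hog = forest_hog.predict(X_test_hog)
print(classification_report(y_test.ravel(), y_pred_hog, target_names=class_names))
```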

Results: slow but steady, we managed a modest improvement. At least several classes now earn a pass mark on some of the evaluation metrics, not just ‘airplane’. There is still a long way to go, but lesson learned.

Wrapping Up

This article showed how to train decision tree models capable of dealing with visual features extracted from image data, like color channel distributions and detected edges, highlighting both the capabilities and limitations of this approach.
