Foundational Knowledge

Feature Learning vs. Feature Engineering

Image Analysis Gets a Deep Learning Upgrade

11 March 2025 · 5 min read
  • Artificial Intelligence
  • Life Sciences
  • Automation
  • Foundational Knowledge
  • Materials Sciences
  • Industrial R&D
Portrait image of Dr. Sreenivas Bhattiprolu
Author Sreenivas Bhattiprolu Ph.D. Head of Digital Solutions (ZEISS arivis)

Abstract

This article examines the shift from traditional machine learning to deep learning in AI-driven image analysis. It emphasizes the limitations of manual feature engineering in traditional methods, where relevant features must be extracted from raw data, often leading to inefficiencies. In contrast, deep learning automates feature extraction through hierarchical neural networks, such as the VGG16 model, which learns complex features from large datasets without manual input. This advancement enhances the accuracy and efficiency of image analysis applications across various fields.


Key Learnings:

  • Feature Engineering vs. Feature Learning: Traditional machine learning requires manual feature extraction, while deep learning automates this process, improving efficiency and accuracy.
  • Gabor Filters in Image Analysis: While Gabor filters are useful for detecting textures, they require complex parameter adjustments in traditional methods, which can hinder performance.
  • Hierarchical Learning in Deep Learning: Models like VGG16 learn features in layers, from basic edges to complex objects. This eliminates the need for manual feature engineering and enhances performance on diverse datasets.
INTRODUCTION

A Shift in AI-driven Image Analysis Technologies

Have you ever wondered why deep learning is often preferred over traditional machine learning?
Do you want to learn the key difference between these techniques that gives deep learning an edge over machine learning?
Do you want to train your own deep learning models without the need to code?
If you answered "yes" to any of these questions, you are reading the right article.
 

Diagram showing an input image (Gabor features) extracted from a synthetic image of horizontally and vertically aligned objects, transformed through Gabor kernels at angles π/2 and π, resulting in two Gabor-filtered images with distinct features.

Figure 1: Gabor features extracted from a synthetic image showing horizontal and vertical aligned objects. As part of feature engineering, Gabor parameters were adjusted to extract the relevant information from the image.

What Is Feature Engineering for Machine Learning?

Traditional machine learning techniques often rely on feature engineering, which is the process of manually extracting relevant features from raw data to be used as inputs for a model. This can include techniques such as using Gabor filters to detect texture in images.

Gabor filters can generate an infinite number of features, but the key is finding the correct parameters for the kernel to extract the appropriate features. These parameters include wavelength (lambda), orientation (theta), phase offset (phi), the standard deviation of the Gaussian envelope (sigma), and the spatial aspect ratio (gamma).

As shown in Figure 1, horizontal bars can be extracted by using a kernel with a theta value of pi/2, and vertical bars can be extracted using a kernel with a theta value of pi.
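To make the parameters above concrete, the real part of a Gabor kernel can be sketched in a few lines of NumPy. This is a minimal illustration (kernel size and parameter defaults are chosen arbitrarily; libraries such as OpenCV offer ready-made equivalents):

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, phi=0.0):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame by theta (the orientation parameter)
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + phi)
    return envelope * carrier

# Two orientations, mirroring Figure 1: theta = pi/2 and theta = pi
k_horizontal = gabor_kernel(theta=np.pi / 2)
k_vertical = gabor_kernel(theta=np.pi)
```

Convolving an image with each kernel yields the two filtered images in Figure 1, one responding to horizontal structures and one to vertical ones.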

However, in real-life images, there is added complexity that makes it difficult to determine which parameters will work effectively.

Even experienced engineers have difficulty determining the correct parameters for a specific problem. As a result, it is common to generate a large number of features by varying all parameters and allowing the machine learning algorithm to determine which ones are the most important.
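This brute-force approach amounts to sweeping the kernel parameters over a grid and letting the classifier rank the resulting responses. A minimal NumPy sketch, with purely illustrative parameter grids:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, phi=0.0):
    """Real part of a Gabor kernel (Gaussian envelope times cosine carrier)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * x_t / lambd + phi)

# Vary theta, sigma, and lambda to build a bank of candidate kernels;
# filtering the image with every kernel produces one feature per pixel per kernel,
# and a downstream classifier decides which responses actually matter.
bank = [
    gabor_kernel(ksize=15, sigma=s, theta=t, lambd=l)
    for t in np.arange(0, np.pi, np.pi / 4)   # 4 orientations
    for s in (2.0, 4.0)                        # 2 envelope widths
    for l in (5.0, 10.0)                       # 2 wavelengths
]
print(len(bank))  # 4 x 2 x 2 = 16 kernels
```

Even this tiny grid already produces 16 feature images per input; realistic grids quickly grow to hundreds, which is exactly the inefficiency described above.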

Although this method is effective, it is not an efficient way to tackle challenges using machine learning, especially when there is a large amount of training data available.

Illustration of a neural network architecture showing layers including convolutional blocks, max pooling, fully connected, and softmax, with dimensions and activation functions labeled.

Figure 2: The VGG16 model is made up of multiple convolutional blocks. The network can learn and identify features in a hierarchical manner through these different convolutional layers.

Feature Learning in Convolutional Neural Networks

Deep learning, on the other hand, is a form of machine learning that uses convolutional neural networks (CNNs) to automatically learn features from raw data. The layers of a neural network can be thought of as a hierarchy of features, where each layer learns increasingly complex features. See Figure 2. 

For example, in the VGG16 model that has been trained on the ImageNet dataset, the early layers learn basic features such as edges and textures, while the later layers learn more complex features such as object parts and entire objects.

Figure 3 displays a variety of features extracted from the same image shown in Figure 1, using the second convolutional block of the VGG16 network pre-trained on the ImageNet dataset.

No feature engineering was needed as the model had already been trained on a vast number of images.
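Extracting such intermediate activations is straightforward in Keras. The sketch below builds a feature extractor that outputs the second convolutional block of VGG16; `weights=None` is used only so the snippet runs without downloading the pre-trained weights, whereas `weights="imagenet"` would reproduce the learned features shown in Figure 3:

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

# weights="imagenet" loads the pre-trained filters behind Figure 3;
# weights=None keeps this sketch runnable offline (random filters).
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# Truncate the network at the last layer of the second convolutional block
feature_extractor = Model(inputs=base.input,
                          outputs=base.get_layer("block2_conv2").output)

features = feature_extractor.predict(np.zeros((1, 224, 224, 3), dtype=np.float32))
print(features.shape)  # (1, 112, 112, 128): 128 feature maps at half resolution
```

Each of the 128 channels is one learned feature map, analogous to the grid of responses in Figure 3.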

Deep learning allows for both learning the features and using them. These features can be utilized as input for traditional machine learning techniques such as Random Forest or further fine-tuned for this specific application through additional deep learning training.
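For instance, per-pixel deep features can be handed to scikit-learn's Random Forest. The sketch below uses random stand-in features in place of real VGG16 activations, purely to show the plumbing:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for deep features: in practice each row would be the stack of
# CNN feature-map values at one pixel, and y that pixel's ground-truth label.
X = rng.normal(size=(200, 64))            # 200 pixels, 64 feature maps
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels tied to two features

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy; the forest also ranks feature importance
```

The forest's `feature_importances_` attribute then reveals which learned feature maps carry the signal, replacing the manual parameter hunt of classical feature engineering.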
 

Grid of 64 square patterns with varying geometric designs in purple, green, and blue hues, representing features obtained and learned from the input image in Figure 1 using the second convolutional block of the VGG16 network pre-trained on the ImageNet dataset.

Figure 3: Features obtained and learned from the input image in Figure 1 using the second convolutional block of the VGG16 network pre-trained on the ImageNet dataset.

Say Goodbye to Feature Engineering, Hello to Automated Features

To put it simply, deep learning is favored over traditional machine learning due to its superior ability to perform feature learning, eliminating the need for human-designed feature engineering. Furthermore, deep learning can capture complex relationships between features and the target variable, making it especially effective when a large amount of training data is available.

Due to its advantages in feature learning and handling complex relationships, deep learning is rapidly gaining popularity in the field of scientific image analysis. With the development of products like arivis Cloud, even those with no coding skills can train custom deep-learning models with a relatively small number of training images. arivis Cloud makes the process of deep learning training simple and accessible, enabling more people to leverage the powerful capabilities of deep learning for scientific image analysis.

FAQ

  • What distinguishes deep learning from traditional machine learning? Traditional machine learning relies on manual feature engineering, where relevant features are extracted from raw data. In contrast, deep learning automates feature extraction through neural networks, leading to improved efficiency and accuracy.

  • How are Gabor filters used in image analysis? Gabor filters detect textures in images by adjusting parameters such as wavelength and orientation. These adjustments are crucial for effective feature extraction, although they can be complex and inefficient.

  • How does the VGG16 model learn features? The VGG16 model employs a hierarchical learning process, extracting features from basic edges to complex objects. This eliminates the need for manual feature engineering and enhances performance in image analysis tasks.


Boost your Microscopy Skills

Join the ranks of world-leading microscopists with our expert training courses. Whether you're in academia or industry, a biologist, materials scientist, or somewhere in between, our training courses will help you unlock the full potential of your microscopy skills.



Contact for Insights Hub

Further Questions?

Please feel free to contact our experts.