21CS743 | DEEP LEARNING
Module-04
Convolutional Networks
The Convolution Operation
1. Definition of Convolution
• Convolution: A mathematical operation that combines two functions (input signal/image
and filter/kernel) to produce a third function.
• Purpose: Captures important patterns and structures in the input data, crucial for tasks like
image recognition.
2. Mathematical Formulation
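In discrete form, the convolution of a two-dimensional input image $I$ with a kernel $K$ is (the standard textbook formulation):

\[
S(i, j) = (I * K)(i, j) = \sum_{m} \sum_{n} I(m, n)\, K(i - m,\, j - n)
\]

In practice, most deep learning libraries implement the closely related cross-correlation, which slides the kernel over the input without flipping it:

\[
S(i, j) = \sum_{m} \sum_{n} I(i + m,\, j + n)\, K(m, n)
\]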
3. Parameters of Convolution
a. Stride
• Definition: The number of pixels the filter shifts at each step as it slides over the input.
• Types:
o Stride of 1: Filter moves one pixel at a time, resulting in a detailed output.
o Stride of 2: Filter moves two pixels at a time, reducing output size (downsampling).
b. Padding
• Definition: Adding extra pixels around the input image.
• Types:
o Valid Padding: No padding applied; results in a smaller output feature map.
o Same Padding: Padding applied to maintain the same output dimensions as the
input.
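The output spatial size under these parameters follows the standard relation, where $W$ is the input size, $K$ the kernel size, $P$ the padding, and $S$ the stride:

\[
O = \left\lfloor \frac{W - K + 2P}{S} \right\rfloor + 1
\]

For example, a 32x32 input with a 5x5 kernel, no padding, and stride 1 gives $(32 - 5)/1 + 1 = 28$.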
4. Significance in Neural Networks
• Application: Used in convolutional layers of CNNs to extract features from images.
• Learning Hierarchical Representations: Stacked convolutional layers enable learning of complex patterns, essential for image classification and other tasks.
Pooling
1. Purpose of Pooling
• Spatial Size Reduction: Decreases the dimensions of the feature maps.
• Parameter and Computation Reduction: Reduces the number of parameters and
computations in the network.
• Overfitting Control: Helps to control overfitting by providing a form of translational invariance.
2. Types of Pooling
a. Max Pooling
• Definition: Selects the maximum value from each patch (sub-region) of the feature map.
• Purpose: Captures the most prominent features while reducing spatial dimensions.
b. Average Pooling
• Definition: Takes the average value from each patch of the feature map.
• Purpose: Provides a smooth representation of features, reducing sensitivity to noise.
3. Operation of Pooling
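To make the operation concrete, here is a minimal NumPy sketch of 2x2 max pooling with stride 2 (the example values are hypothetical):

```python
import numpy as np

def max_pool2d(x, size=2, stride=2):
    """Max pooling over (size x size) patches of a 2D feature map."""
    h_out = (x.shape[0] - size) // stride + 1
    w_out = (x.shape[1] - size) // stride + 1
    out = np.empty((h_out, w_out), dtype=x.dtype)
    for i in range(h_out):
        for j in range(w_out):
            patch = x[i*stride:i*stride+size, j*stride:j*stride+size]
            out[i, j] = patch.max()  # average pooling would use patch.mean()
    return out

feature_map = np.array([[1, 3, 2, 4],
                        [5, 6, 1, 2],
                        [7, 2, 9, 0],
                        [4, 8, 3, 1]])
print(max_pool2d(feature_map))
# [[6 4]
#  [8 9]]
```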
4. Significance in Neural Networks
• Feature Extraction: Reduces the size of the feature maps while retaining the most relevant features.
• Efficiency: Decreases computational load, allowing deeper networks to train faster.
• Robustness: Provides a degree of invariance to small translations in the input, making the
model more robust.
Convolution and Pooling as an Infinitely Strong Prior
1. Convolution as an Infinitely Strong Prior
• Focus on Local Patterns: Emphasizes the importance of local patterns in the data (e.g.,
edges and textures) over global patterns.
• Effectiveness in CNNs: This locality assumption enhances the effectiveness of
Convolutional Neural Networks (CNNs) for image and video analysis.
2. Pooling as an Infinitely Strong Prior
• Enhances Translational Invariance: Allows the network to recognize objects regardless
of their position within the image.
• Reduces Sensitivity to Position: By downsampling, pooling reduces sensitivity to the
exact location of features, improving generalization.
3. Significance in Neural Networks
• Feature Learning: Both operations prioritize local features, enabling efficient learning of essential characteristics from input data.
• Improved Generalization: The combination of convolution and pooling enhances the
model's ability to generalize across various input variations.
Variants of the Basic Convolution Function
1. Dilated Convolutions
• Definition: Introduces spacing (dilation) between kernel elements.
• Wider Context: Allows the model to incorporate a wider context of the input data without
significantly increasing the number of parameters.
• Applications: Useful in tasks where understanding broader spatial relationships is
important, such as in semantic segmentation.
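As a minimal PyTorch sketch (assuming the standard torch API; shapes are illustrative), a 3x3 kernel with dilation=2 keeps nine weights but covers a 5x5 receptive field:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 64, 64)           # (batch, channels, height, width)

# Standard 3x3 convolution: receptive field 3x3.
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)

# Dilated 3x3 convolution: the same 9 weights, but taps spaced 2 pixels
# apart, giving an effective 5x5 receptive field.
dilated = nn.Conv2d(16, 32, kernel_size=3, padding=2, dilation=2)

print(conv(x).shape, dilated(x).shape)   # both: torch.Size([1, 32, 64, 64])
```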
2. Depthwise Separable Convolutions
• Two-Stage Process:
o Depthwise Convolution: Applies a separate convolution for each input channel, reducing computational complexity.
o Pointwise Convolution: Uses 1x1 convolutions to combine the outputs from the depthwise convolution.
• Parameter Efficiency: Reduces the number of parameters and computations compared to
standard convolutions while maintaining performance.
• Applications: Commonly used in lightweight models, such as MobileNets, for mobile and
edge devices.
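A minimal sketch of the two-stage factorization in PyTorch, where the groups argument makes the first convolution depthwise:

```python
import torch.nn as nn

def depthwise_separable(in_ch, out_ch):
    return nn.Sequential(
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),
        # Pointwise: 1x1 convolution mixes information across channels.
        nn.Conv2d(in_ch, out_ch, kernel_size=1),
    )

# Parameter count: 3*3*in_ch + in_ch*out_ch, versus 3*3*in_ch*out_ch for a
# standard convolution -- e.g. 32 -> 64 channels: 2,336 vs 18,432 weights
# (ignoring biases).
```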
Structured Outputs
1. Definition of Structured Outputs
• Structured Outputs: Refers to tasks where the output has a specific structure or spatial
arrangement, such as pixel-wise predictions in image segmentation or keypoint localization
in object detection.
2. Importance in Semantic Segmentation
• Maintaining Spatial Structure: For tasks like semantic segmentation, it’s crucial to maintain the spatial relationships between pixels in predictions to ensure that the output accurately represents the original input image.
3. Specialized Networks
• Network Design: Specialized neural network architectures, such as Fully Convolutional Networks (FCNs), are designed to handle structured outputs by replacing fully connected layers with convolutional layers, allowing for spatially consistent predictions.
• Skip Connections: Techniques like skip connections (used in U-Net and ResNet) help preserve high-resolution features from earlier layers, improving the accuracy of the output.
4. Adjusted Loss Functions
• Loss Function Modification: Loss functions may be adjusted to enforce structural consistency in the predictions. Common approaches include:
o Pixel-wise Loss: Evaluating the loss on a per-pixel basis (e.g., Cross-Entropy Loss
for segmentation).
o Structural Loss: Incorporating penalties for structural deviations, such as Dice Loss or Intersection over Union (IoU) metrics, which consider the overlap between predicted and true regions.
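As a sketch, a soft Dice loss for binary segmentation in PyTorch (assuming predicted probabilities and 0/1 masks; the smoothing constant eps is a common stabilizer):

```python
import torch

def dice_loss(pred, target, eps=1.0):
    """Soft Dice loss: 1 - 2*|intersection| / (|pred| + |target|)."""
    pred = pred.flatten(1)      # (batch, pixels), probabilities in [0, 1]
    target = target.flatten(1)  # (batch, pixels), binary ground truth
    intersection = (pred * target).sum(dim=1)
    dice = (2 * intersection + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
    return 1 - dice.mean()
```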
5. Applications
• Use Cases: Structured output networks are widely used in various applications, including:
o Semantic Segmentation: Assigning class labels to each pixel in an image.
o Instance Segmentation: Identifying and segmenting individual object instances
within an image.
o Object Detection: Predicting bounding boxes and class labels for objects in an
image while maintaining spatial relations.
Data Types
1. 2D Images
• Standard Input: The most common input type for CNNs, typically used in image
classification, object detection, and segmentation tasks.
• Format: Represented as height × width × channels (e.g., RGB images have three channels).
2. 3D Data
• Definition: Includes video processing and volumetric data, such as those found in medical
imaging (e.g., MRI or CT scans).
• Format: Represented as depth × height × width × channels, allowing the network to
capture spatial and temporal information.
• Applications: Useful in tasks like action recognition in videos or analyzing 3D medical
images for diagnosis.
3. 1D Data
• Definition: Consists of sequential data, such as time-series data or audio signals.
• Format: Represented as sequences of data points, often one-dimensional.
• Applications: Used in tasks like speech recognition, audio classification, and analyzing sensor data from IoT devices.
Efficient Convolution Algorithms
1. Fast Fourier Transform (FFT)
• Definition: A mathematical algorithm that computes the discrete Fourier transform (DFT)
and its inverse, converting signals between time (or spatial) domain and frequency domain.
• Convolution in Frequency Domain:
o Convolution in the time or spatial domain can be transformed into multiplication in
the frequency domain, which is often more computationally efficient for large
kernels.
• Applications: Commonly used in applications requiring large kernel convolutions, such as
in image processing and signal analysis.
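A minimal NumPy sketch of the convolution theorem on a hypothetical 1D signal (scipy.signal.fftconvolve provides a production implementation of the same idea):

```python
import numpy as np

x = np.random.randn(256)   # signal
k = np.random.randn(32)    # kernel

# Direct (full) convolution: O(N*K) multiply-adds.
direct = np.convolve(x, k)

# FFT route: pad both to the full output length, multiply spectra,
# transform back -- O(N log N), a win for large kernels.
n = len(x) + len(k) - 1
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)

print(np.allclose(direct, via_fft))  # True
```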
2. Winograd's Algorithms
• Definition: A set of algorithms designed to optimize convolution operations by reducing
the number of multiplications needed.
• Efficiency Improvement:
o Winograd's algorithms work by rearranging the computation of convolution to minimize redundant calculations.
o They can reduce the complexity of convolution operations, particularly for small kernels, making them more efficient in terms of computational resources.
• Key Concepts:
o The algorithms break down the convolution operation into smaller components, allowing for fewer multiplicative operations and leveraging addition and subtraction instead.
o They are particularly effective in scenarios where computational efficiency is
critical, such as mobile devices or real-time applications.
• Applications: Frequently used in lightweight models and resource-constrained
environments where computational power and memory usage are limited.
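To illustrate, a sketch of the classic F(2,3) case, which produces two outputs of a 3-tap filter with 4 multiplications instead of 6 (the formulas follow the standard minimal-filtering derivation):

```python
def winograd_f23(d, g):
    """Two outputs of a 3-tap correlation over d[0..3] using 4 multiplies."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

d, g = [1.0, 2.0, 3.0, 4.0], [0.5, 1.0, -1.0]
print(winograd_f23(d, g))                                          # [-0.5, 0.0]
print([sum(d[i + j] * g[j] for j in range(3)) for i in range(2)])  # same result
```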
Random or Unsupervised Features
1. Random Feature Maps
• Definition: A technique that uses random projections to map input data into a higher-
dimensional space, facilitating the extraction of features without the need for labels.
• Purpose: Helps to approximate kernel methods, enabling linear models to learn complex
functions.
• Advantages:
o Efficiency: Reduces the computational burden of traditional kernel methods while
retaining useful information.
o Scalability: Suitable for large datasets as it allows for faster training times.
• Applications: Commonly used in tasks where labeled data is scarce, such as clustering and
anomaly detection.
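A minimal sketch of one well-known instance, random Fourier features approximating an RBF (Gaussian) kernel, following the Rahimi-Recht construction (dimensions and bandwidth are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 64, 2048                # input dim, number of random features
sigma = 1.0                    # RBF kernel bandwidth

# Fixed random projection: rows of W ~ N(0, 1/sigma^2), b ~ U(0, 2*pi).
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def z(x):
    """Label-free feature map: z(x) @ z(y) ~ exp(-||x-y||^2 / (2 sigma^2))."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x = rng.normal(size=d)
y = x + 0.1 * rng.normal(size=d)                        # a nearby point
print(z(x) @ z(y))                                      # approximate kernel value
print(np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2)))   # exact kernel value
```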
2. Autoencoders
• Definition: A type of neural network designed to learn efficient representations of data through unsupervised learning by encoding the input into a lower-dimensional space and then reconstructing it back.
• Structure:
o Encoder: Compresses the input data into a latent representation.
o Decoder: Reconstructs the original input from the latent representation.
• Purpose: Learns to capture important features and structures in the data without supervision, making it effective for dimensionality reduction and feature extraction.
• Advantages:
o Robustness: Can learn from noisy data and still produce meaningful
representations.
o Flexibility: Can be adapted for various tasks, including denoising, anomaly
detection, and generative modeling.
• Applications: Used in scenarios such as image compression, data denoising, and
generating new data samples.
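A minimal sketch of a fully connected autoencoder in PyTorch (layer sizes are illustrative; training minimizes a reconstruction error such as MSE, with no labels involved):

```python
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compress the input to a low-dimensional code.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstruct the input from the code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# A typical training step would minimize nn.MSELoss()(model(x), x).
```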
3. Facilitation of Unsupervised Learning
• Role in Unsupervised Learning: Both methods enable the extraction of meaningful
features from unlabelled data, facilitating learning in scenarios where obtaining labeled
data is challenging or expensive.
• Enhancing Model Performance: By leveraging these techniques, models can improve
their performance on downstream tasks, such as clustering, classification, or regression,
even in the absence of labels.
Notable Architectures
1. LeNet-5
• Introduction:
o Developed by Yann LeCun and colleagues in 1998.
o One of the first convolutional networks designed specifically for image recognition
tasks.
• Architecture Details:
o Input Layer: Takes in grayscale images of size 32x32 pixels.
o Convolutional Layer 1:
▪ 6 filters (5x5) with a stride of 1.
▪ Output size: 28x28x6.
o Activation Function: Sigmoid or hyperbolic tangent (tanh).
o Pooling Layer 1:
▪ Average pooling (subsampling) with a 2x2 filter and a stride of 2.
▪ Output size: 14x14x6.
o Convolutional Layer 2:
▪ 16 filters (5x5).
▪ Output size: 10x10x16.
o Pooling Layer 2:
▪ Average pooling (2x2).
▪ Output size: 5x5x16.
o Fully Connected Layers:
▪ 120 neurons in the first layer.
▪ 84 neurons in the second layer.
▪ Output layer with 10 neurons (for digit classes 0-9).
• Significance:
o Introduced the concept of using convolutional layers for feature extraction followed
by pooling layers for dimensionality reduction.
o Paved the way for modern CNNs, influencing later architectures.
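The layer stack above translates almost line for line into code; a minimal PyTorch sketch (using tanh, one of the activations mentioned above):

```python
import torch.nn as nn

lenet5 = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),    # 32x32x1 -> 28x28x6
    nn.AvgPool2d(2, stride=2),                    # -> 14x14x6
    nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),   # -> 10x10x16
    nn.AvgPool2d(2, stride=2),                    # -> 5x5x16
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),                            # digit classes 0-9
)
```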
2. AlexNet
• Introduction:
o Developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton in 2012.
o Marked a breakthrough in deep learning by achieving top performance in the
ImageNet competition.
• Architecture Details:
o Input Layer: Accepts images of size 224x224 pixels (RGB).
o Convolutional Layer 1:
▪ 96 filters (11x11) with a stride of 4.
▪ Output size: 55x55x96.
o Activation Function: ReLU, introduced to improve training speed.
o Pooling Layer 1:
▪ Max pooling (3x3) with a stride of 2.
▪ Output size: 27x27x96.
o Convolutional Layer 2:
▪ 256 filters (5x5).
▪ Output size: 27x27x256.
o Pooling Layer 2:
▪ Max pooling (3x3).
▪ Output size: 13x13x256.
o Convolutional Layer 3:
▪ 384 filters (3x3).
▪ Output size: 13x13x384.
o Convolutional Layer 4:
▪ 384 filters (3x3).
▪ Output size: 13x13x384.
o Convolutional Layer 5:
▪ 256 filters (3x3).
▪ Output size: 13x13x256.
o Pooling Layer 3:
▪ Max pooling (3x3).
▪ Output size: 6x6x256.
o Fully Connected Layers:
▪ First layer with 4096 neurons.
▪ Second layer with 4096 neurons.
▪ Output layer with 1000 neurons (for 1000 classes).
• Innovative Techniques Introduced:
o ReLU Activation:
▪ Enabled faster convergence during training compared to traditional
activation functions like sigmoid or tanh.
o Dropout:
▪ Regularization method that randomly drops neurons during training to
prevent overfitting, significantly improving generalization.
o Data Augmentation:
▪ Used techniques like image rotation, translation, and flipping to artificially
expand the training dataset and improve robustness.
o GPU Utilization:
▪ Leveraged parallel processing power of GPUs, enabling training on large
datasets in a reasonable timeframe.
• Significance:
o Established deep learning as a powerful approach for image classification and sparked widespread research and development in CNN architectures.
o Highlighted the importance of large labeled datasets and robust training techniques
in achieving state-of-the-art performance.
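A sketch of two of the techniques listed above, dropout and data augmentation, as they appear in a typical PyTorch training pipeline (the exact transform parameters are illustrative, not AlexNet's original recipe):

```python
import torch.nn as nn
from torchvision import transforms

# Data augmentation: random flips and crops expand the effective dataset.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(224, padding=4),
    transforms.ToTensor(),
])

# Dropout in the fully connected head, as in AlexNet-style classifiers:
# each neuron is randomly zeroed with probability 0.5 during training.
classifier = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),
)
```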