Signature Verification Using Deep Learning

K. Thilakaraj, UG Student/IT, Bannari Amman Institute of Technology, Sathyamangalam, India
S. Uvaprasanth, UG Student/IT, Bannari Amman Institute of Technology, Sathyamangalam, India
T. Santha Perumal, UG Student/IT, Bannari Amman Institute of Technology, Sathyamangalam, India

Abstract -- Even though people are moving to digital documents with digital signatures for authentication, most areas such as land records, agreements between parties, legal certificates, and identification cards still use only handwritten signatures. Verifying signatures is important because a fraudulent signature can seriously harm the real owner. Hence, recognizing genuine signatures becomes essential to avoid such frauds. To recognize signatures, a deep learning technique is used in this work, since it produces high accuracy and does not require much preprocessing. Convolutional neural network (CNN) based deep learning models are widely used for image processing, classification, and segmentation. Since a CNN learns more discriminative features than KNN, SVM, etc., a CNN is used in this work for better classification. The CNN based models VGG16, Inception V3, and CNNs with three and four convolution layers are trained for this classification. The dataset is created by collecting signatures from 10 different users with 50 signatures each. Out of the 500 signatures, 400 are taken for training and 100 are used for testing. Among these four models, Inception V3 produced the highest accuracy of 95% with preprocessed images, whereas the same model produced only 88% when unprocessed images were given as input.

Key Words: Signature verification, Classification, CNN, VGG16, Inception V3.

I. INTRODUCTION

Machine learning is the result of humans inventing a brilliant approach to simplify complex problems by training a computer to act like a human brain. The capacity of CNNs to build an internal representation of a two-dimensional image is one of their benefits. This allows the model to learn position- and scale-invariant features, which is crucial when working with images. Deep learning is a type of machine learning that is modeled on how people learn specific types of information. Because it involves statistics and predictive modeling, it is extremely useful for data scientists. Two families of deep learning approaches are artificial neural networks (ANNs) and simulated neural networks (SNNs). Their names and structures are inspired by the human brain, and they function similarly to biological neurons. Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN) are the three most important kinds of neural networks.

CNNs are a kind of deep, feed-forward artificial neural network used to analyze visual data. As demonstrated in figure 1, convolution, max pooling, dropout, and dense layers are applied. The CNN takes a multilayer perceptron form and requires relatively little preprocessing. These biologically inspired computational models surpass prior types of artificial intelligence by a factor of 10 on standard machine learning tasks. The Large Scale Visual Recognition Challenge (LSVRC) is one such large-scale problem, and CNN-based algorithms that learn features from scratch achieve state-of-the-art accuracy on the ImageNet task. The purpose of this project is to utilize a typical CNN to classify the genuine signatures of 10 users. A total of 50 valid signatures are available for each user, and our learning set contains 400 training images and 100 test images. After the images have been converted to binary, the remaining noise becomes apparent; that noise is reduced using OpenCV masking.
II. LITERATURE SURVEY

In the framework of Handwritten Signature Verification (HSV) using Binary Particle Swarm Optimization (BPSO), Rafael M. O. Cruz [1] investigated the presence of overfitting when performing feature selection. In the HSV context, SigNet is a 2048-dimensional state-of-the-art deep CNN model for feature representation. Some of these dimensions may contain duplicate information in the dissimilarity representation space generated by the writer-independent (WI) approach's dichotomy transformation (DT). The GPDS-960 dataset was used in this study. Experiments in this work indicate that, while looking for the most discriminant representation, this technique avoids overfitting.

Maergner et al. [2] described two new graph-based offline signature verification algorithms: keypoint graphs with approximated graph edit distance and inkball models. They described the methods, suggested improvements in terms of processing time and accuracy, and presented experimental findings on four benchmark datasets. The proposed methods outperform the competition in a variety of benchmarks, demonstrating the power of graph-based signature verification.

Tsang [3] noted that signature verification is the most widely used way of verifying a person's identity in the field of behavioral biometrics. Convolutional Neural Networks were used to extract information from preprocessed real and fake signatures in that article. CEDAR, the BHSig260 signature corpus, and UTSig are among the publicly available datasets used to test the proposed approach.

Based on explainable deep learning (DCNN) and a novel local feature extraction technique, Hsin Hsiung Kao [4] suggested an off-line handwritten signature verification system. To train their algorithm and determine whether a questioned signature is authentic or false, they used the open-source International Conference on Document Analysis and Recognition (ICDAR) 2011 SigComp dataset. They achieved precision of 94.37% to 99.96% on their testing dataset, with a false rejection rate (FRR) of 5.88% to 0% and a false acceptance rate (FAR) of 0.22% to 5.34%.

Soleimani [5] proposed a deep multitask learning-based metric for offline signature verification. Deep Multitask Metric Learning (DMML), a unique classification method for offline signature verification, is presented in that work. DMML used multitasking and transfer learning techniques to train a distance measure for each class while also learning from other classes. Unlike prior algorithms that only examined the training samples of a class for confirming questioned signatures, DMML combines information from the similarities and differences between the real and forged examples of different classes. They compared the proposed method to SVM, writer-dependent, and writer-independent Discriminative Deep Metric Learning methods using Histogram of Oriented Gradients (HOG) and Discrete Radon Transform (DRT) features on the UTSig, MCYT-75, GPDSsynthetic, and GPDS960GraySignatures datasets. According to the findings of their testing, DMML beats other systems in authenticating real signatures, competent forgeries, and random forgeries.

The main goal of Bouamra et al. [6] is to increase the capabilities of automatic signature verification systems so that they can work in a real-world setting by training them with only positive specimens and no forged samples. The classification is done with a One-Class Support Vector Machine (OC-SVM), and the evaluations are done with the GPDS960 database, one of the largest offline signature corpuses ever generated. Experiments show that the suggested technique can detect competent forgeries even when the training set contains only a single reference signature.

Zhang et al. [7] achieved new state-of-the-art accuracy for both online and offline handwritten Chinese character recognition (HCCR) on the ICDAR-2013 competition data by combining the traditional normalization-cooperated direction feature map (directMap) with a deep convolutional neural network (convNet). They additionally show that, although directMap + convNet can achieve the best results and outperform humans, writer adaptation is still helpful in this situation. To lessen the discrepancy between the training data and the test data of a specific writer, a novel adaptation layer is proposed. Unsupervised adaptation is a viable option: by introducing an adaptation layer into the pre-trained convNet, the system can adapt to the unique handwriting styles of specific writers, significantly enhancing recognition accuracy.

Moises Diaz [8] proposed a method based on a sequence of nonlinear and linear transformations that mimic the spatial cognitive map and intrapersonal variability of the human motor system while signing. By artificially augmenting a training sequence, the duplicator is put to the test, showing that the performance of four state-of-the-art off-line signature classifiers on two publicly available databases increased on average as if three additional real signatures had been acquired.

Oliveira et al. [9] investigated a signature verification system with a set of algorithmically produced feature descriptors for a subset of graphometric characteristics. The static properties considered were the height-to-width ratio of an image, the symmetry of the signature, baseline alignment, and spacing.
To overcome the absence of dynamic data in static signature images, Shih Yin Ooi [10] proposed a verification framework based on a combination of the Discrete Radon Transform, Principal Component Analysis, and probabilistic neural networks (PNN). At the image level, the proposed approach seeks to distinguish forgeries from authentic signatures. Both their private signature database and MCYT, a public signature database, are subjected to stringent verification. Random, casual, and skilled forgeries on their own dataset had equal error rates (EER) of 1.51%, 3.23%, and 13.07% respectively. With 10 training samples from the MCYT signature database, the suggested technique achieved an EER of 9.87%.

Using Deep Convolutional Neural Networks to learn representations from signature pixels for offline handwritten signature verification, Hafemann et al. [11] suggested a verification system and identified a significant constraint: the neural network inputs must be of a fixed size, although signature sizes vary widely between individuals. The system is trained on the GPDS dataset while eliminating the limitation of a maximum size for the signatures to be examined. When skilled forgeries from a subset of users are available for feature learning, higher resolutions (300 or 600 dpi) can further improve performance, while lower resolutions (around 100 dpi) can be used if only genuine signatures are available.

The research proposed by M. Diaz et al. [12] treated the handwritten signature as a biometric trait. The authors reviewed the literature on handwritten signatures over the preceding ten years, focusing on the most intriguing study domains and aiming to elicit prospective future research directions in this field.

V. Nguyen and M. Blumenstein [13] proposed the chain code histogram, which is derived from the directional information retrieved from the signature contour, to construct a grid-based feature extraction technique. By applying a 2D Gaussian filter to the grids containing the chain code histograms, the system was able to achieve an average error rate (AER) of 13.90% while limiting the False Acceptance Rate (FAR) for random forgeries to 0.02%.

The research proposed by J. F. Vargas et al. [14] was an off-line approach for verifying handwritten signatures. At the global image level, it uses statistical texture features to measure the image's gray-level changes. The local binary pattern and the co-occurrence matrix are analyzed and used as features. A histogram can also be used to reduce the impact of different signers' ink pens on the final result. An SVM model was trained using genuine samples and random forgeries, and then evaluated using random and competent forgeries on two datasets: the MCYT-75 and GPDS-100 corpuses.

III. MATERIALS AND METHODOLOGY

Artificial neural networks are algorithms inspired by the structure and function of the human brain, and Deep Learning is the subfield of AI that deals with them. Artificial intelligence is a broad term for techniques that allow computers to mimic human reasoning, and all of this is made possible through machine learning, which is a collection of algorithms trained on data. Convolutional Neural Networks (CNNs) are a kind of deep learning neural network used to recognize visual information. Every convolutional neural network has layers such as the convolution layer, pooling layer, flattening layer, dropout layer, and dense layer; CNNs are also known as shift invariant or space invariant artificial neural networks (SIANN). ANN (Artificial Neural Network) and RNN (Recurrent Neural Network) are other examples of deep learning algorithms.

IV. PROPOSED ARCHITECTURES

For signature verification, two CNN models (with 3 layers and 4 layers) and two pre-trained models (VGG16 and Inception V3) with transfer learning are implemented.

A. CNN MODEL WITH THREE CONVOLUTION LAYERS:
Fig 1. CNN with 3 layers architecture
This model consists of three convolution layers, as shown in figure 1. In the first convolution layer, a filter of size 3x3 is applied to the input image. To incorporate non-linearity into the Convolutional Neural Network, ReLU activation is employed, followed by a max pooling and a dropout layer; the same pattern is repeated for the next two levels, i.e., a convolution layer followed by max pooling and dropout. After passing through the three convolution layers, the feature maps are flattened into a one-dimensional array and passed to the fully connected layer.
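As a rough illustration, the following Keras sketch builds such a three-convolution-layer network. The filter counts, dropout rates, and dense-layer width are assumptions made for the example; the paper itself only specifies 3x3 filters, ReLU activation, max pooling, dropout, flattening, and a 10-class output.

# Illustrative sketch of the 3-convolution-layer CNN (Keras/TensorFlow).
# Filter counts (32/64/128), dropout rates, and the 128-unit dense layer
# are assumed values; the paper specifies 3x3 filters, ReLU, max pooling,
# dropout, flattening, and a 10-way softmax output.
from tensorflow.keras import layers, models

def build_cnn3(input_shape=(128, 128, 1), num_classes=10):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])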
B. CNN MODEL WITH FOUR CONVOLUTION LAYERS:

This CNN model consists of four convolutional layers. To stabilize the training process, batch normalisation is applied so that each mini-batch of images is normalised as it passes through the convolution layers. Each convolution layer uses a 3x3 filter with a ReLU activation, followed by a max pooling and a dropout layer, and the same pattern is repeated before moving to the fully connected layers. The first fully connected layer has 512 units, which are reduced to 10 outputs after adding batch normalisation and dropout layers.
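The sketch below shows how batch normalisation can be placed after each convolution in such a four-layer model; the filter counts and dropout rates are again assumptions for illustration.

# Illustrative sketch of the 4-convolution-layer variant with batch normalisation.
# Filter counts and dropout rates are assumed values.
from tensorflow.keras import layers, models

def build_cnn4(input_shape=(128, 128, 1), num_classes=10):
    model = models.Sequential([layers.Input(shape=input_shape)])
    for filters in (32, 64, 128, 256):            # four convolution blocks
        model.add(layers.Conv2D(filters, (3, 3), activation="relu"))
        model.add(layers.BatchNormalization())    # normalise each mini-batch
        model.add(layers.MaxPooling2D((2, 2)))
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation="relu"))   # 512-unit fully connected layer
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model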
C. VGG16:

VGG16 is a convolutional neural network architecture used in many deep learning image classification tasks; its architecture is shown in Figure 2. VGG16 is trained on the ImageNet dataset, which has 1000 classes with 10 million images. By applying transfer learning to the pre-trained VGG16 model, verification of the signatures of 10 different users is achieved. The model consists of 16 layers with 3x3 filters arranged as a sequential model, which means that all the layers are connected in sequence. At the end it has two fully connected layers followed by an output layer with softmax activation having 10 outputs, where each output activation represents one user's signature. All the hidden layers use the ReLU activation function. When preprocessed images are given as input, this model produced 94% accuracy, whereas it produced only 76% for unprocessed input images.

Fig 2. VGG16 architecture

D. INCEPTION V3:

This is a CNN-based pre-trained model used for classification and segmentation. It is also trained on the ImageNet dataset for classifying 1000 different classes. Using transfer learning, this model is modified to classify the user signatures. It is a deep neural network consisting of 48 layers. By applying label smoothing, factorization of convolutions, and auxiliary classifiers, it produces a very low error rate. This model uses 5x5 convolution kernels and the RMSProp optimizer. The output layer of this model is removed and a new output layer is added to produce 10 different signature classifications. This model produced 94% training accuracy and 88% validation accuracy when the images were given without preprocessing, whereas it produced 97% training accuracy and 95% validation accuracy when preprocessed images were given as input.
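A minimal sketch of the transfer-learning setup described for VGG16 and Inception V3 is shown below: the ImageNet-trained base is loaded without its original classifier, frozen, and a new 10-way softmax output layer is attached. The global-average-pooling layer used to connect the two is an assumption.

# Sketch of transfer learning from an ImageNet pre-trained base (Keras).
# Swapping InceptionV3 for VGG16 follows the same pattern; the
# GlobalAveragePooling2D layer is an assumed detail.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3  # or VGG16

def build_transfer_model(num_classes=10, input_shape=(128, 128, 3)):
    base = InceptionV3(weights="imagenet", include_top=False,
                       input_shape=input_shape)
    base.trainable = False                                # keep ImageNet features frozen
    x = layers.GlobalAveragePooling2D()(base.output)
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # new 10-way output layer
    return models.Model(inputs=base.input, outputs=outputs)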
V. DATA SET DESCRIPTION

The dataset (Figure 3) for this technique is made up of signature images, roughly 400 training and 100 test images, captured using a 48 megapixel camera. The images have a resolution of 128x128 pixels. All of these signature images were collected from ten individuals and organised into ten groups, each with 50 signature images.

Fig 3. Unprocessed dataset
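One way such a dataset could be loaded and split is sketched below; the signatures/ directory with one sub-folder per user is a hypothetical layout, and the 80/20 split mirrors the 400/100 division described above.

# Sketch of loading the signature dataset with Keras utilities.
# The "signatures/" directory (one sub-folder per user) is a hypothetical layout;
# the 80/20 split mirrors the 400 train / 100 test division described above.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "signatures/", image_size=(128, 128), color_mode="grayscale",
    validation_split=0.2, subset="training", seed=42, batch_size=32)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "signatures/", image_size=(128, 128), color_mode="grayscale",
    validation_split=0.2, subset="validation", seed=42, batch_size=32)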
VI. DATA PREPROCESSING

In data preprocessing, feature extraction is a technique for reducing noise in the data and cleaning it up for further processing by extracting only the essential features of an image that the model needs for training. For the signature dataset, the following steps for feature detection and extraction are used. The images are fed into OpenCV functions, which produce original, sharpened, binary, and invert-masked versions of each image.

Fig 4. Preprocessed images

As shown in figure 4, the edges of each image in the signature dataset are sharpened to make the binary conversion more precise. The images are converted to binary images after sharpening. After converting an image to binary, masking is used to keep only the noise-free region of the image, and the final result is obtained by inverting the masked image.
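A sketch of this sharpen, binarise, mask, and invert pipeline with OpenCV is given below. The specific sharpening kernel and the use of Otsu thresholding are assumptions; the paper only names the four stages.

# Sketch of the sharpen -> binarise -> mask -> invert preprocessing pipeline (OpenCV).
# The sharpening kernel and Otsu thresholding are assumed details.
import cv2
import numpy as np

def preprocess_signature(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.resize(gray, (128, 128))

    # Sharpen the edges so the binary conversion is more precise.
    sharpen_kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
    sharpened = cv2.filter2D(gray, -1, sharpen_kernel)

    # Convert to a binary image (Otsu picks the threshold automatically).
    _, binary = cv2.threshold(sharpened, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Keep only the noise-free signature region with a mask, then invert.
    mask = cv2.bitwise_not(binary)                    # ink pixels become the mask
    masked = cv2.bitwise_and(sharpened, sharpened, mask=mask)
    return cv2.bitwise_not(masked)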
VII. RESULTS AND DISCUSSION

Accuracy is the parameter used to evaluate the models. Accuracy is the percentage of predictions that are correct; it is calculated by dividing the number of correct predictions by the total number of predictions. The results are shown in Table 1.

Figure 5 illustrates the loss of the CNN with 3 layers during training and testing. The loss decreases as the number of epochs increases, for training as well as testing.

Fig 5. Loss of CNN with 3 layer model

Figure 6 illustrates the accuracy of the CNN with 3 layers. The accuracy increases as the number of epochs increases, for training as well as testing. At 100 epochs it produced 95% accuracy.

Fig 6. Accuracy of CNN with 3 layer model

Table 1. Training and validation accuracy comparison

MODELS              | Without Preprocessing        | With Preprocessing
                    | Training      Validation     | Training      Validation
                    | Accuracy      Accuracy       | Accuracy      Accuracy
CNN with 3 layers   | 93%           85%            | 96.25%        95%
CNN with 4 layers   | 91%           77%            | 96.50%        93%
VGG16               | 92%           76%            | 96.50%        94%
INCEPTION V3        | 94%           88%            | 97%           95%
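The training and evaluation settings reported in this section could be reproduced with a sketch like the one below, using the 0.001 learning rate and 100 epochs mentioned for Figure 8; the Adam optimizer and loss choice are assumptions.

# Sketch of compiling, training, and evaluating one of the models above.
# The 0.001 learning rate and 100 epochs follow the paper's description;
# Adam and sparse categorical cross-entropy are assumed choices.
import tensorflow as tf

model = build_cnn3()   # or build_cnn4(); the transfer models expect 3-channel input
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(train_ds, validation_data=test_ds, epochs=100)

loss, accuracy = model.evaluate(test_ds)   # accuracy = correct predictions / total predictions
print(f"Validation accuracy: {accuracy:.2%}")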
Figure 7 represents the performance of the different models without and with the pre-processed dataset.

Fig 7. Accuracy of the Models without and with Pre-processed dataset

Figure 8 represents the training and validation accuracy of the models with the preprocessed dataset, trained with a 0.001 learning rate for 100 epochs: the CNN with 3 convolution layers gives an accuracy of 95%, the CNN with 4 convolution layers gives an accuracy of 93%, VGG16 gives an accuracy of 94%, and 95% is obtained with Inception V3.

Fig 8. Train and validation accuracy of models with Preprocessed dataset

VIII. CONCLUSION

The purpose of this research is to use the models VGG16, Inception V3, and CNNs with three and four convolution layers to validate signatures. Among these models, Inception V3 and the CNN with three layers achieved 95% accuracy on the preprocessed dataset, whereas VGG16 achieved 94% and the CNN with four layers achieved 93%.
REFERENCES
[1] Maergner P, Howe NR, Riesen K, Ingold R, Fischer A (2019) Graph-
based offline signature verification. arXiv preprint arXiv:1906.10401.
[2] A. Soleimani, K. Fouladi, and B. N. Araabi, “Utsig: A persian offline
signature dataset,” IET Biometrics, vol. 6, no. 1, pp. 1–8, 2017.
[3] Zhang, X.-Y.; Bengio, Y.; Liu, C.-L. Online and offline handwritten
Chinese character recognition: A comprehensive study and new
benchmark. Pattern Recognit. 2017, 61, 348–360.
[4] Hafemann L.G., R. Sabourin, and L.S. Oliveira. (2017) “Learning
Features for Offline Handwritten Signature Verification using Deep
Convolutional Neural Networks”.
[5] M. Diaz, M. A. Ferrer, G. S. Eskander, and R. Sabourin. Generation of Duplicated Off-Line Signature Images for Verification Systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(5):951–964, May 2017.
[6] Alceu S. Britto, Robert Sabourin, and Luiz E. S. Oliveira. Dynamic selection of classifiers - a comprehensive review. Pattern Recognition, 47(11):3665–3680, November 2017.
[7] Bouamra, W.; Djeddi, C.; Nini, B.; Diaz, M.; Siddiqi, I. Towards the design of an offline signature verifier based on a small number of genuine samples for training. Expert Syst. Appl. 2018, 107, 182–195.
[8] Shih Yin Ooi, Andrew Beng Jin Teoh, Ying Han Pang, and Bee Yan Hiew. Image-based handwritten signature verification using hybrid methods of discrete Radon transform, principal component analysis and probabilistic neural network. Applied Soft Computing, 40:274–282, 2016.
[9] Rafael M. O. Cruz, Robert Sabourin, and George D. C. Cavalcanti. Dynamic classifier selection: Recent advances and perspectives. Information Fusion, 41:195–216, May 2018.
[10] Tsang, S. (2018) "Review: GoogLeNet (Inception-v1) — Winner of ILSVRC 2014 (Image Classification)". [Accessed: 24-Sep-2018]
[11] Hafemann, L.G., Oliveira, L.S., Sabourin, R.: Fixed-sized representation learning from offline handwritten signatures of different sizes. Int. J. Doc. Anal. Recogn. (IJDAR) 21(3), 219–232 (2018)
[12] M. Diaz, M. A. Ferrer, D. Impedovo, M. I. Malik, G. Pirlo, R. Plamondon, A prospective analysis of handwritten signature technology, ACM Comput. Surv. 51 (6) (2019) 117:1–117:39.
[13] V. Nguyen, M. Blumenstein, An Application of the 2D Gaussian Filter for Enhancing Feature Extraction in Off-line Signature Verification, in: 2011 International Conference on Document Analysis and Recognition, IEEE, 2011, pp. 339–343.
[14] J. F. Vargas, M. A. Ferrer, C. M. Travieso, J. B. Alonso, Off-line signature verification based on grey level information using texture features, Pattern Recognition 44 (2) (2011) 375–385.