Ear Biometrics System
Automated system that uses features of the ear to identify or verify an individual's identity
The comparison is based on variations in the complex structure of the ear
Ear growth after the first four months of life is proportional; gravity can cause the ear to undergo stretching
Humans lack the ability to recognize one another from their ears
1
Why Use Ear Biometrics?
Ears have features that are both reliable and robust and can be extracted from a distance
Other biometrics such as fingerprints, hand geometry, and retinal scans require close contact and may be considered invasive by users
A study by Iannarelli showed the feasibility of ear biometrics
2
Applications
Possible applications for ear biometrics:
ATMs
Evidence in surveillance and recognition systems
Access to restricted areas
Any face recognition system would already contain all of the hardware necessary to collect ear biometrics (supplementary source of evidence)
3
Ear Anatomy
External anatomy of the ear: 1-Helix Rim, 2-Lobule, 3-Antihelix, 4-Concha, 5-Tragus, 6-Antitragus, 7-Crus of Helix, 8-Triangular Fossa, 9-Incisure Intertragica
4
Iannarelli's System
An anthropometric technique based upon 12 ear measurements
Requires exact alignment and normalization of the ear photo; this allows comparable measurements
The distance between each of the numbered areas is measured in units of 3 mm and assigned an integer distance value
Features are stored along with sex and race
A. Iannarelli, Ear Identification, Paramont Publishing, 1989
5
Limitations of the Iannarelli System
Assuming an average standard deviation in the population of four units (i.e., 12 mm), the 12 features provide a space with fewer than 17 million (4^12) distinct points (see the worked count below)
All measurements are relative to an origin (point no. 10), so if the origin is not exactly localized, all subsequent measurements will be incorrect
The main drawback of ear biometrics is that they are not usable when the ear of the subject is covered (occlusion due to hair, a scarf, or a hat)
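The count quoted above follows from treating each of the 12 quantized measurements as able to take roughly four distinguishable values:

\[
4^{12} = 16{,}777{,}216 \approx 1.7 \times 10^{7}
\]

so the feature space contains fewer than 17 million distinct points, which is too small to reliably separate individuals in a large population.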
6
Thermogram image of the ear
Surface heat (i.e., an infrared photo) of the subject's ear is used to form an image.
The figure shows a thermogram of the external ear at room temperature.
7
Burge and Burger Method
Acquisition
Localization
Edge Extraction
Curve Extraction
Graph model: a generalized Voronoi diagram is built and a neighborhood graph is extracted
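As a rough illustration of the neighborhood-graph idea only (not Burge and Burger's actual curve-segment construction), the minimal sketch below builds a Voronoi diagram over sampled points and treats cells that share a ridge as graph neighbors; the point set and function name are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Voronoi

def neighborhood_graph(points):
    """Return the set of index pairs whose Voronoi cells are adjacent.

    Each ridge of the Voronoi diagram separates exactly two generating
    points; those pairs form the edges of a neighborhood graph.
    """
    vor = Voronoi(points)
    edges = set()
    for p, q in vor.ridge_points:
        edges.add((min(p, q), max(p, q)))
    return edges

# Toy usage: random 2D points standing in for sampled ear edge curves.
pts = np.random.rand(60, 2)
print(len(neighborhood_graph(pts)), "neighborhood edges")
```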
8
Graph Model
300 x 500 image of subject’s head in profile; ear localization
9
Eigenear
An ear image and the first 8 eigenear images
Approach is similar to the eigenface (PCA) approach
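A minimal sketch of the eigenear computation, assuming the ear images are already cropped and aligned; the function and variable names are illustrative, not from the slides.

```python
import numpy as np

def eigenears(images, k=8):
    """Compute the mean ear and the top-k eigenears from aligned ear images.

    images: array of shape (n, h, w) of cropped, normalized ear images.
    The principal components of the flattened, mean-centered images are
    the eigenears, in direct analogy to eigenfaces.
    """
    n, h, w = images.shape
    X = images.reshape(n, -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data matrix; rows of Vt are principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean.reshape(h, w), Vt[:k].reshape(k, h, w)

def project(image, mean, components):
    """Feature vector of an ear image in eigenear space, used for matching."""
    return components.reshape(len(components), -1) @ (image.ravel() - mean.ravel())
```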
10
Examples where Eigenear Approach Fails
11
Bad Image Examples
12
Multi-Modal Face and Ear Systems
The combination of face and ear in a multi-modal system improves system performance (a fusion sketch follows)
768 x 1024 face images and 400 x 500 ear images are normalized to 130 x 150
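The slides do not state the fusion rule; a common score-level scheme, shown here purely as an assumption, is min-max normalization of each modality's match scores followed by a weighted sum (scores treated as distances, so lower is better).

```python
import numpy as np

def min_max(scores):
    """Scale a set of match scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-9)

def fuse(face_scores, ear_scores, w_face=0.5, w_ear=0.5):
    """Sum-rule fusion of normalized face and ear scores against a gallery."""
    return w_face * min_max(face_scores) + w_ear * min_max(ear_scores)

# Example: the gallery entry with the lowest fused distance is the match.
face = [0.42, 0.31, 0.55]
ear = [1.8, 0.9, 2.4]
print(int(np.argmin(fuse(face, ear))))   # -> 1
```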
13
Intra-class Variations
14
Results for Day Variation
15
Results for Lighting Variation
16
Results for Pose Variation
17
Combination of 2D-3D Ear Images
Time lapse: Oct. 7 – Nov. 13, 2003 and Feb. 11 – Mar. 19, 2004
Minolta 910: 2D and 3D images
In total: 2343 2D/3D image pairs from 365 subjects; 302 subjects show up at least twice
Image quality is reasonably good in both 2D and 3D data
18
Preprocessing of 2D Data
Scale, rotation, and translation alignment
Histogram equalization to eliminate lighting variation (a sketch follows)
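A minimal sketch of the histogram-equalization step for 8-bit grayscale ear images; the landmark-based scale, rotation, and translation alignment is not shown, and the function name is illustrative.

```python
import numpy as np

def equalize(img):
    """Histogram-equalize an 8-bit grayscale image to reduce lighting variation."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()                      # first nonzero cumulative count
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-9) * 255), 0, 255)
    return lut.astype(np.uint8)[img]                  # map each pixel through the LUT
```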
19
Image Preprocessing
Landmark Selection
Preprocessing of 2D Data
Preprocessing of 3D Data
Ear Extraction
20
Landmark Selection
Two-point landmark (Triangular Fossa and Antitragus): has ambiguity
Two-point landmark (Triangular Fossa and Incisure Intertragica): used in the PCA-based and edge-based approaches
Two-line landmark: used in the ICP-based approach
21
Preprocessing of 3D Data
for the PCA-Based and Edge-Based Approaches
3D Orientation Normalization
2D Front-view XY Calibration
Hole Filling
22
Orientation Normalization
z-plane rotation
xy-plane rotation
Plane fitting (a least-squares sketch follows)
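One standard way to do the plane-fitting step (an assumption here, not necessarily the slides' exact method) is a least-squares fit via SVD; the resulting normal can then be rotated onto the z-axis to normalize the ear's 3D orientation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of ear surface points.

    Returns the centroid and unit normal of the best-fit plane; the
    direction of least variance in the centered data is the normal.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)
```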
23
2D Front-View XY Calibration
Original images: different ear sizes
After XY calibration: same ear size
24
Hole Filling
Major issue with 3D data
• Oily skin and sensor error – corrected by median filter
• Rotation – corrected by mean filter
Depth images before and after hole filling
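A rough sketch of the median-filter variant of hole filling only (the slide also uses a mean filter for rotation-induced holes); the hole marker value and window size are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def fill_holes(depth, hole_value=0.0, size=5):
    """Fill small holes in a depth image with a local median.

    Pixels equal to hole_value are treated as missing and replaced by the
    median of their size x size neighborhood; valid pixels are left unchanged.
    """
    filtered = median_filter(depth, size=size)
    holes = depth == hole_value
    out = depth.copy()
    out[holes] = filtered[holes]
    return out
```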
25
PCA-Based Approach
302 subjects
302 ears in gallery
302 ears in probe
Gallery images used as training data
Experiments
2D: different scale ratio
3D: hole filling
26
PCA-Based (Cont.)
2D data
Best performance is 63.6%
3D data
Best performance is 55.3%
27
Edge-Based Approach
Holes in the range data dramatically degrade the performance of 3D PCA
3D depth images look clearer than 2D intensity images
2D intensity image vs. 3D depth image
28
Edge Detection on the 3D Ear
Edge detection on depth data is much clearer than on 2D intensity data
Canny parameters: sigma = 1.0, T_low = 0.5, T_high = 0.5
Edge maps shown for 2D data and 3D depth data (gallery and probe)
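A minimal sketch of running a Canny detector on an ear depth image with the parameter values quoted above; whether those thresholds are absolute or relative depends on how the depth data is scaled, so the normalization here is an assumption.

```python
import numpy as np
from skimage import feature

def ear_edges(depth_image, sigma=1.0, t_low=0.5, t_high=0.5):
    """Canny edge map of an ear depth image.

    The depth values are scaled to [0, 1] before detection; the threshold
    values follow the slide but their interpretation depends on scaling.
    """
    img = depth_image.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)
    return feature.canny(img, sigma=sigma,
                         low_threshold=t_low, high_threshold=t_high)
```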
29
Edge-Based Approach (Cont.)
Tune the ratio of the forward and backward match rates
Best performance is 67.5%
Forward match rate ratio is 0.6, backward match rate ratio is 0.4
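How the forward and backward match rates are combined is not spelled out on the slide; one plausible reading, shown purely as an assumption, is a weighted sum using the quoted 0.6/0.4 ratio.

```python
def combined_match_score(forward_rate, backward_rate, w_forward=0.6, w_backward=0.4):
    """Weighted combination of forward and backward edge match rates.

    The 0.6/0.4 weights are the best-performing ratio quoted on the slide;
    how each individual rate is computed is not shown here.
    """
    return w_forward * forward_rate + w_backward * backward_rate
```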
30
ICP-Based Approach
Ear Extraction
ICP Algorithm
Area Overlapping
Template Size
Time Limitation
Sub-sample
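A minimal sketch of the core ICP matching step listed above (nearest-neighbor correspondences plus a Kabsch rigid-transform update); it ignores the template cropping, area-overlap criterion, and outlier handling covered on the following slides, and the function names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(probe, gallery, iterations=30):
    """Minimal point-to-point ICP: rigidly align probe points to gallery points.

    probe, gallery: (N, 3) and (M, 3) arrays of ear surface points.
    Returns the aligned probe and the mean closest-point distance, which
    serves as the match score (smaller = better match).
    """
    tree = cKDTree(gallery)
    src = probe.astype(float).copy()
    for _ in range(iterations):
        _, idx = tree.query(src)            # closest gallery point per probe point
        dst = gallery[idx]
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        H = (src - mu_s).T @ (dst - mu_d)   # cross-covariance of matched sets
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                      # best rotation (Kabsch)
        if np.linalg.det(R) < 0:            # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t                 # apply the rigid transform
    dist, _ = tree.query(src)
    return src, dist.mean()
```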
31
Ear Extraction (ICP-Based)
Create a template, then crop the ear (2D views shown)
32
Examples
Example matches: one correct match and one incorrect match, with ICP distances of 2.8 and 0.72
33
Area Overlap - Template Size
Use large template on both gallery and probe
Use small template on both gallery and probe
Use large template on gallery, small template on probe
Large-template cropped ear vs. small-template cropped ear
34
ICP Performance vs. Different Template Sizes
CMC curves (recognition rate vs. rank) for three template configurations; rank-one rates:
Gallery uses large and probe uses small ear template (LGSP): 85.1%
Both gallery and probe use small ear template (SGSP): 79.7%
Both gallery and probe use large ear template (LGLP): 74.8%
35
Time Limitation: Sub-sampling
G = gallery ear, P = probe ear

Configuration              Avg. number of points   Running time (mins)   Rank-one recognition rate (1 vs. 202)
G and P use all points     5272                    29.9                  85.1%
Sub-sample both G and P    1186                    4.68                  83.7%
Only sub-sample P          3508                    6.72                  84.7%
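The table above motivates sub-sampling the point clouds before ICP; whether the slides use random or uniform sub-sampling is not stated, so this sketch shows a simple random variant with an assumed point budget.

```python
import numpy as np

def subsample(points, keep=1200, seed=0):
    """Randomly keep a fixed number of 3D points to cut ICP running time."""
    rng = np.random.default_rng(seed)
    if len(points) <= keep:
        return points
    idx = rng.choice(len(points), size=keep, replace=False)
    return points[idx]
```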
36
Performance of Ear Recognition
Ear recognition performance using 302 subjects (CMC curves, recognition rate vs. rank); rank-one rates:
3D ICP-based: 84.1%
3D edge-based: 67.5%
2D PCA-based: 63.6%
3D PCA-based: 55.3%
37
Dataset Size vs. Recognition Rate
ICP has much better scalability than 2D PCA
Plot of rank-one recognition rate vs. gallery size (up to 350 subjects) for the ICP and PCA approaches
38
Multi-Modal
Best performance: 2D PCA + 3D ICP = 91.7%
CMC curves (recognition rate vs. rank) for the multi-modal combinations (3D ICP + 2D PCA, 3D edge + 2D PCA, 2D PCA + 3D PCA) and the individual 3D ICP-based, 3D edge-based, 2D PCA-based, and 3D PCA-based matchers
39
Conclusion
3D ear data allows better performance than 2D data
Holes and missing data complicate the 3D matching process
ICP outperforms the other methods and has good scalability
Multi-biometrics improves performance
Multi-modal 2D PCA + 3D ICP gives the highest performance
40
Force Field Extraction
Force field transformation treats the image as an array of mutually attracting particles that act as the source of a Gaussian force field.
Peaks correspond to potential energy wells and ridges correspond to potential energy channels.
The algorithms are complex to implement.
Gaussian noise in the image has little effect on the resulting force field.
41
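The slides describe the transform but not its exact kernel; the sketch below is purely illustrative and assumes an inverse-square attraction (one common reading of the force field described above), computing the force components and the potential energy surface whose wells and channels serve as features.

```python
import numpy as np

def force_field(image):
    """Brute-force force-field transform of a small grayscale ear crop.

    Every pixel is treated as a particle that attracts every other pixel
    with a force proportional to its intensity and (as an assumption here)
    the inverse square of the distance. O(N^2): use only on small crops.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    val = image.ravel().astype(float)
    n = h * w
    fx, fy, energy = np.zeros(n), np.zeros(n), np.zeros(n)
    for i in range(n):
        d = pos - pos[i]                 # vectors from pixel i to every source pixel
        r2 = (d ** 2).sum(axis=1)
        r2[i] = np.inf                   # a pixel exerts no force on itself
        energy[i] = (val / np.sqrt(r2)).sum()
        fx[i] = (val * d[:, 0] / r2 ** 1.5).sum()
        fy[i] = (val * d[:, 1] / r2 ** 1.5).sum()
    return fx.reshape(h, w), fy.reshape(h, w), energy.reshape(h, w)
```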
Force Field Extraction
42