AUTOMATIC ACNE DETECTION AND
QUANTIFICATION FOR MEDICAL TREATMENT
THROUGH IMAGE PROCESSING

BY

NATCHAPOL KITTIGUL

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE
REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE
(ENGINEERING AND TECHNOLOGY)
SIRINDHORN INTERNATIONAL INSTITUTE OF TECHNOLOGY
THAMMASAT UNIVERSITY
ACADEMIC YEAR 2017



Abstract

AUTOMATIC ACNE DETECTION FOR MEDICAL TREATMENT
THROUGH IMAGE PROCESSING

by

NATCHAPOL KITTIGUL

Bachelor of Science (Computer Science), Sirindhorn International Institute of
Technology, 2013

Master of Science (Engineering and Technology), Sirindhorn International Institute of
Technology, 2017

About 85% of people aged between 12 and 24 experience acne, and acne
treatment costs exceed $3 billion in the U.S.A. Currently, dermatologists use manual
skin assessment methods such as visual inspection and photography, then manually
mark and count acne lesions on the patient's face, which is time-consuming and
subjective. This thesis proposes an automatic acne segmentation method using
adaptive thresholding, with detection and quantification using Speeded Up Robust
Features and novel feature extraction. The system applies supervised learning with a
train-and-test procedure. After the training process, six correlated designed features
were found that should be used for acne classification: Standard Deviation (SD) of
Red, SD of Green, SD of Blue, Circularity, Entropy, and Saturation Average. The
system classifies and quantifies acne using the K-Nearest Neighbors algorithm. The
results show that the proposed method can efficiently detect acne with, on average,
73% accuracy, 78% sensitivity, and 90% precision.

Keywords: Acne detection, Acne quantification, Speeded Up Robust Features
(SURF), K-Nearest Neighbor (KNN), Feature Extraction.



Acknowledgements

My journey in pursuit of a Master's degree could not have been
accomplished without the support of my advisor, Dr. Bunyarit Uyyanonvara. I would
like to express my sincere gratitude and appreciation for his valuable guidance, time,
and immense knowledge.

Besides my advisor, I would like to thank the rest of my thesis
committee, Dr. Pakinee Aimmanee and Dr. Chanjira Sinthanayothin, for their
insightful comments and for encouraging me to widen my knowledge from various
perspectives.

My Master's degree study could never have happened without the
Excellent Thai Student Scholarship from the Sirindhorn International Institute of
Technology, and I would like to express my special gratitude to the institute for
giving me this opportunity.

Lastly, I am grateful to my parents for their support and encouragement
in my studies and daily life, as always.

Thanks for all your encouragement and guidance.



Table of Contents

Signature Page
Acknowledgements
Abstract
Table of Contents
List of Figures
List of Tables

1 Introduction
1.1 Acne
1.2 Image Processing
1.3 Motivation
1.4 Objectives and Scope of the Study
1.5 Thesis Organization

2 Literature Review
2.1 Existing Image Processing Techniques for Acne Detection
2.2 Existing Performance Evaluation Techniques

3 Acne Detection Using Adaptive Thresholding
3.1 Thresholding
3.2 Adaptive Thresholding Implementation

4 Acne Detection Using Speeded Up Robust Features and Quantification
Using K-Nearest Neighbors Algorithm
4.1 Pre-Processing
4.2 Speeded Up Robust Features (SURF)
4.3 Feature Extraction
4.4 Training and Testing
4.5 K-Nearest Neighbor Classification
4.6 System Implementation on Whole Face Image

5 Results and Discussion
5.1 Features Evaluation
5.2 Detection and Quantification Algorithm Evaluation

6 Conclusions and Recommendations

References
Appendices
Appendix A


List of Figures

1.1 Image processing processes
3.1 Thresholding value comparison
3.2 Comparison of Binary Thresholding and Adaptive Thresholding
3.3 Adaptive Thresholding Flow Chart
3.4 Result 1 of Adaptive Thresholding Segmentation
3.5 Result 2 of Adaptive Thresholding Segmentation
3.6 Result 3 of Adaptive Thresholding Segmentation
3.7 Result 4 of Adaptive Thresholding Segmentation
3.8 Result 5 of Adaptive Thresholding Segmentation
3.9 Result 6 of Adaptive Thresholding Segmentation
3.10 BLOB detection Result 1 of Adaptive Thresholding Segmentation
3.11 BLOB detection Result 2 of Adaptive Thresholding Segmentation
3.12 BLOB detection Result 3 of Adaptive Thresholding Segmentation
4.1 Discard Overlapped Key points
4.2 Automatic Training System
4.3 Cross Validation
4.4 Acne Detection using Speeded Up Robust Feature and Quantification using K-Nearest Neighbors Algorithm Flow Chart
4.5 Result 1 of Acne Detection using Speeded Up Robust Feature and Quantification using K-Nearest Neighbors Algorithm
4.6 Result 2 of Acne Detection using Speeded Up Robust Feature and Quantification using K-Nearest Neighbors Algorithm
4.7 Result 3 of Acne Detection using Speeded Up Robust Feature and Quantification using K-Nearest Neighbors Algorithm
4.8 Result 4 of Acne Detection using Speeded Up Robust Feature and Quantification using K-Nearest Neighbors Algorithm
4.9 Result 1 of Acne Detection on whole face image
4.10 Result 2 of Acne Detection on whole face image
4.11 Result 3 of Acne Detection on whole face image
4.12 Result 4 of Acne Detection on whole face image


List of Tables

1.1 Acne types
2.1 Comparison of grading and lesion counting
2.2 The global acne grading system
2.3 Confusion matrix
2.4 Condition and interpretation of confusion matrix
3.1 Parameters of simple BLOB detector
4.1 Supervised learning steps, detail and implementation
5.1 Correlation of 9 features
5.2 Training and testing data setting for the experiment
5.3 Condition and interpretation of confusion matrix
5.4 Acne detection using Speeded Up Robust Feature and quantification using K-Nearest Neighbors Algorithm


Chapter 1
Introduction

Acne is a chronic skin disease arising from inflammation of the pilosebaceous
units, which are hair follicles under the skin together with their surrounding sebaceous
(fatty) glands, when they clog up [1]. Currently, a dermatologist has to manually mark
the locations of acne lesions on a sheet, then count them to quantify severity and
measure treatment progress. This is an unreliable and inaccurate method that also
requires excessive effort from the dermatologist [2]. In the present study, a novel
automatic acne detection and quantification method using image processing
techniques is proposed. This chapter describes the basic knowledge underlying this
research and introduces the motivation and objectives. Lastly, the thesis structure is
presented for an overview of the whole document.

1.1 Acne
Acne is a chronic inflammatory skin disease arising from disorder of the
pilosebaceous units, follicular epidermal hyper-proliferation, and propionibacteria
(P. acnes) activity [3], characterized by blackhead or whitehead pimples, oily skin,
and scarring. Acne primarily affects the face, upper chest, and back, which have high
numbers of oil glands.
Acne causes significant physical and psychological problems for patients, such
as permanent scarring, depression, and anxiety from poor self-image [4]. Seeing a
dermatologist early is the safest way to heal acne and prevent future permanent scars.
Acne can be caused by many factors; for example, overactive oil glands produce
too much oil, which combines with dead skin cells so that pores in the skin become
plugged, and P. acnes bacteria then cause skin lesions.
Foods with high-glycemic ingredients such as rice, sweets, bread, and pasta have
been linked to acne [5], so a balanced, healthy diet can help prevent acne and also
supports overall good health.


Acne is categorized into 6 different types, which are shown in Table 1.1:

Table 1.1 Acne types (illustration and photographic images omitted)

1. Whitehead: This acne occurs when sebum and dead skin cells get plugged in
follicles; small blemishes with a white head.

2. Blackhead: A blackhead, or open comedone, also occurs when follicles get clogged
by sebum; the only difference from a whitehead is that the debris inside the clogged
follicle becomes oxidized and turns dark.

3. Papules: Papules form when whiteheads advance to a further stage: P. acnes
bacteria, sebum, and dead skin cells cause inflammation. These lesions are
characteristically red and swollen.

4. Pustules: Pustules are similar to papules, but with the presence of white or yellow
sebum on the surface.

5. Nodules: Nodules are one of the severe types of acne, characterized by red and
inflamed blemishes. They can cause permanent damage to the skin if left untreated.

6. Cysts: Cysts are also among the severe types of acne, characterized by red and
inflamed blemishes. The difference from nodules is that cysts are more severe and
tend to be larger. They can cause permanent damage to the skin if left untreated.


1.2 Image Processing
Image processing is a form of signal processing applied to image input [6]. It
has two main types: digital and analog. The stages of image processing are shown in
Figure 1.1. This study focuses only on digital image processing. Digital image
processing is superior to analog image processing in many ways; for example, it
allows more algorithms to be applied to the input image, and problems of noise and
signal distortion can be avoided.
Computer vision is an interdisciplinary field whose objective is to make
computers understand high-level detail from digital images or videos, thereby
automating tasks of the human visual system [7–9]. The main tasks include acquiring,
processing, analyzing, and understanding digital images. The discipline focuses on the
theory for extracting information from images and on constructing models for
computer vision systems.
The first step of image processing is digital image acquisition using sensors in
optical or thermal wavelengths. With range sensors, ultrasonic cameras, tomography
devices, radar, etc. [10], the image can be captured in two dimensions, in three
dimensions, or as an image sequence. The pixel values obtained from quantization
represent light intensity in one or more spectral bands; other physical measures, such
as depth, nuclear magnetic resonance, or wave absorption, can also be involved.
Pre-processing is the step that enhances and refines the acquired image to obtain
better visual quality. The key objective is to improve image quality to increase the
chances of success for the subsequent processes [6]. The pre-processing steps include:
- Re-sampling
- Noise reduction
- Contrast enhancement
- Scale space

Feature extraction is the process of extracting meaningful features: the pieces
of information relevant to solving a specific task. Features represent quantitative
information of interest that is used to differentiate one object class from another; for
example, character recognition uses descriptors such as lakes (holes) and bays to
differentiate one letter from another [6]. Features come at various levels of complexity,
such as lines, edges, ridges, corners, BLOBs, points, color values, texture, roundness,
etc. Moreover, each feature can be represented in different forms, such as scalars,
functions, or Booleans. For example, color can be represented as three scalars (the
average values of R, G, and B), and an edge can be represented as a Boolean or as a
certainty measure of the edge's existence.
Feature vectors and feature spaces: features are extracted from the image, and
each feature has a feature descriptor at each image point. A feature vector contains this
organized information, and the set of all feature vectors is called a feature space.
Classification is the process of determining classes for image points or regions.
The Region of Interest (ROI) is assessed against the model from the feature extraction
process. If the condition is satisfied, the detected object is classified into one of the
categories; this is also called "image recognition".
Post-processing is the last step in image processing: the application draws the
results on the original image, and the form of the results depends on the application. If
the application is automatic inspection, the result is pass or fail. For a recognition
application, the result is match or no-match. Lastly, for medical, military, security, or
some recognition applications, the results are flagged for further human inspection.

Figure 1.1 Image processing processes



1.3 Motivation
Currently, there are more than 3,000 beauty clinics in Thailand with more than
2 billion baht in revenue. About 85% of people aged between 12 and 24 experience
acne, and acne treatment costs exceed $3 billion in the U.S.A. [11]
In acne treatment, the dermatologist diagnoses acne quantity and severity by
manually counting lesions and classifying them into the following types: comedo,
papule, pustule, nodule, and cyst. The dermatologist has to mark each acne spot on a
sheet to record its location and count the lesions manually. This method has a high
degree of unreliability and inaccuracy and requires excessive effort from the doctor
[12]. Therefore, computer-assisted image processing systems for acne detection have
been proposed in recent years to overcome manual counting [2].
The motivation of this thesis was to create and implement an effective image
processing system for acne detection and classification that diagnoses acne quantity
and severity, relieving the dermatologist of the excessive effort of manual counting.

1.4 Objectives and Scope of the Study

1. To develop and implement a more efficient and more accurate approach to acne
segmentation using adaptive thresholding.
2. To develop and implement a more efficient and more accurate approach to acne
detection, including acne feature extraction and classification using the
K-Nearest Neighbors (KNN) algorithm.
3. To design and implement a practical end-to-end acne detection system with a
user-friendly front-end user interface and user experience.

1.5 Thesis Organization

The thesis is organized into 6 chapters as follows:
- Chapter 1, Introduction, introduces the definition of acne, the image processing
steps, the existing problems, and the motivation and necessity of this thesis.
- Chapter 2, Literature Review, provides a review of the existing literature and of
notable performance evaluation techniques.
- Chapter 3 describes acne detection using the adaptive thresholding approach.
- Chapter 4 describes acne detection using Speeded Up Robust Features and
quantification using the K-Nearest Neighbors algorithm.
- Chapter 5, Results and Discussion, presents the experimental design and
performance results for acne detection using Speeded Up Robust Features and
quantification using the K-Nearest Neighbors algorithm.
- Chapter 6, Conclusions and Recommendations, summarizes the important points
and discusses recommendations for future work.



Chapter 2
Literature Review
In this chapter, existing image processing techniques for acne detection and
existing performance evaluation techniques are reviewed to provide background
knowledge of prior work and of how to evaluate the performance of the system.

2.1 Existing Image Processing Techniques for Acne Detection


Among previous image processing approaches to acne detection, Ramli et al.
[13] used the CIELAB color space for skin lesion segmentation. Firstly, sample
images were converted from the RGB color space to the CIELAB color space by
calculating the Euclidean distance. They applied Otsu's thresholding method to extract
the foreground (acne) from the background (skin). The system achieved 80%
sensitivity and specificity.

Khan et al. [3] applied the Fuzzy C-Means (FCM) clustering technique, which
groups associated pixels into one or more clusters. They applied FCM in 4 color
spaces: RGB, OHTA, YIQ, and I1I2I3. They noted that RGB was very sensitive to
illumination variations, while the YIQ color space solved the illumination problem by
separating luminance (Y) from chrominance information (I, Q). The results showed
that the optimum number of clusters was 3. Specificity, sensitivity, and accuracy
varied from 45% to 95% with different numbers of clusters.
Alamdari et al. [14] used k-means clustering (2 levels) with the Hue Saturation
Value (HSV) color space, which has more perceptually meaningful color components
than RGB, and achieved 70% accuracy. They also used the Fuzzy C-Means (FCM)
clustering technique and a Support Vector Machine (SVM) to differentiate acne scars
from inflammatory lesions, with 80% and 66.6% accuracy, respectively. They found
that the accuracy of separating detected acne from normal skin was 100% using FCM.
In addition, they applied watershed segmentation and multi-thresholding, but these
failed to detect acne properly.
Liu and Zerubia [15] used Markov Random Fields (MRFs) with chromophore
descriptors, applying iterated conditional modes (ICM). The algorithm was robust to
large-dynamic-range intensities and so would work on images captured in an
uncontrolled environment. The results agreed closely with human visual inspection
(estimated).
Chen et al. [16] developed an imaging system on an Android device as an
alternative to an expensive skin probe. They used normal and ultraviolet (UV) lighting
with the YCbCr color space, and the system used simple thresholding to extract acne
features. Their system had the drawback of requiring the user to manually mark the
Region of Interest (ROI). They achieved 82% accuracy by simply counting positive
and negative samples.
Malik et al. [17] used k-means clustering and an SVM classifier to detect acne
and classify it into 4 categories with severity levels: comedo, papule, pustule, and
nodule, achieving 93% accuracy on average (with post-processing).
Humayun and Malik [18] used multilevel thresholding on RGB images. The
results needed more improvement, and they suggested using multispectral and thermal
images with more color bands, which would improve the detection results.
Lucut and Smith [19] proposed their own k-means clustering algorithm, which
applied the Hough transform and first-order derivatives to find the thresholding point.
This approach automatically found the actual number of clusters, resulting in 59–99%
accuracy.
Chantharaphaichit et al. [1] detected acne using image processing techniques in
a MATLAB module. They converted the image from the RGB color space to gray
scale, applied normalization with the maximum intensity, converted RGB to the HSV
color space, and then applied a brightness extraction process. The system marked the
Region of Interest (ROI) by image subtraction and then applied binary thresholding
with a user-defined value to obtain spots and regions. Lastly, the acne detection results
were marked on the original image. The system had fair accuracy.
Chantharaphaichit et al. [2] applied feature extraction with supervised learning,
using a training-and-testing procedure on 10 acne images. The system performed
BLOB detection and feature extraction for each candidate, and then applied Bayesian
classification with supervised training and unsupervised testing. The system's
accuracy was 70.65%.
Chang and Liao [20] extracted the facial region from the captured image using
skin color filtering and a region-filling method to detect the largest connected region,
and removed unrelated facial features (eyes, eyebrows, nostrils, and mouth) through
Fourier descriptors. They then performed feature extraction with a co-occurrence
matrix and used sequential floating forward selection (SFFS) to select features. They
used Support Vector Machines (SVMs) to classify normal patterns, acne, and spots,
applying a decision tree structure that consisted of two SVMs. Chang's method
worked effectively, with 98% accuracy; the system's sensitivity was a moderate 64%
because of the differing features of the various types of acne.
Chandra et al. [21] used color-based segmentation with a Mahalanobis distance
(MD) minimum-distance classifier, and compared it with a Bayesian classifier. The
experimental results showed that the Mahalanobis distance was superior to the
Bayesian classifier, though with limitations such as sensitivity to the ambient light of
the photographic session.
Khongsuwan et al. [22] applied ultraviolet fluorescence lighting in the image
capturing process to detect P. acnes, the gram-positive anaerobic microorganism that
causes acne. They converted the UV image to RGB and then to gray scale, applied
adaptive histogram equalization with bilinear interpolation to eliminate artificially
induced boundaries of the acne, and then applied the extended maxima transform to
separate close objects from each other. The experiment was done on cropped parts of
the skin area only. The system had 83.75% accuracy.
Singh and Kanwal [23] carried out a comparison survey of facial mark detection
techniques. They found that, for extracting facial features, the Active Appearance
Model (AAM) was the most efficient technique. For acne detection, various
techniques were reviewed, such as Laplacian-of-Gaussian (LoG), Speeded Up Robust
Features (SURF), BLOB detection, and morphological operators, which could identify
various types of acne [8].
Fujii et al. [12] applied image processing techniques to multispectral images
captured with a 16-band, 12-bit-depth multispectral camera. Two tungsten lamps were
used to illuminate the patient's face, and the spectral energy distribution was measured
with a spectroradiometer. Classification was done with Fisher linear discriminant
functions (LDFs) and thresholding; the three Fisher LDFs and the thresholding values
were determined experimentally. The system showed good results.



2.2 Existing Performance Evaluation Techniques
Although acne vulgaris is easy to diagnose, its polymorphic appearance and the
variation in its properties do not permit a simple severity evaluation [24]. There are
two main types of diagnostic approaches: grading and lesion counting; a comparison
of the two is shown in Table 2.1. Grading of acne is a subjective method: the
dermatologist determines acne severity by observing the dominant lesions and
evaluates them by estimating the inflammation and the extent of involvement. Lesion
counting requires the dermatologist's effort in counting and recording the number of
acne lesions of each type and the overall severity.
Table 2.1 Comparison of grading and lesion counting

Grading: involves observing the dominant lesions and estimating the extent of
involvement. A subjective method. Quick and simple, but less accurate. Does not
distinguish small differences in therapeutic response; the effect of treatment on
individual lesions cannot be estimated. Used in offices and clinical settings.

Lesion counting: involves recording the number of each type of acne lesion and
determining the overall severity. An objective method. Time-consuming, but more
accurate. Distinguishes small differences in therapeutic response; the effect of
treatment on individual lesions can be estimated. Used in clinical trials.

2.2.1 The Global Acne Grading System

As an example of a grading system, "The Global Acne Grading System"
developed by Doshi et al. [25] is one of the most comprehensive acne grading criteria
[24], as shown in Table 2.2.



Table 2.2 The global acne grading system

Location and factor (F): Forehead 2, Right cheek 2, Left cheek 2, Nose 1, Chin 1,
Chest and upper back 3.
Severity (S): 0 = no lesion, 1 = comedone, 2 = papule, 3 = pustule, 4 = nodule.
Local score = F x S; total severity: Mild 1–18, Moderate 19–30, Severe 31–38,
Very severe >= 39.

Assessment of acne vulgaris severity continues to be a challenge for
dermatologists, because no universal grading system has been accepted. To evaluate
the performance of an image processing system for acne quantification, however, the
most accepted method is the confusion matrix together with sensitivity, precision, and
accuracy analysis.

2.2.2 Confusion Matrix


A confusion matrix (or error matrix) is a table designed to visualize the
performance of a supervised learning algorithm; accuracy alone cannot determine the
real performance of a classifier when the data set is unbalanced. It is a contingency
table with actual and predicted dimensions [26]. The conditions of the confusion
matrix used to test a classification function are shown in Table 2.3, and the
interpretation of each condition is shown in Table 2.4.



Table 2.3 Confusion matrix

                        Prediction positive                  Prediction negative
Condition positive      True Positive (TP)                   False Negative (FN) (type II error)
Condition negative      False Positive (FP) (type I error)   True Negative (TN)

Table 2.4 Condition and interpretation of confusion matrix

True positive (TP): acne that is correctly detected as acne.
False positive (FP): scar or normal skin that is incorrectly detected as acne.
True negative (TN): scar or normal skin that is correctly detected as scar or normal skin.
False negative (FN): acne that is incorrectly detected as scar or normal skin.

From these conditions, the evaluation measures are calculated as follows:

Sensitivity = TP / (TP + FN),
Precision = TP / (TP + FP),
Accuracy = (TP + TN) / (TP + TN + FP + FN).
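As an illustration of these formulas, the following Python sketch (the function name
is illustrative) computes the three measures from the four raw counts; the example
values are the counts of test image 1 in Table 5.4.

def confusion_metrics(tp, tn, fp, fn):
    """Compute sensitivity, precision and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # fraction of actual acne that is found
    precision = tp / (tp + fp)                  # fraction of detections that are acne
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # fraction of all candidates judged correctly
    return sensitivity, precision, accuracy

# Example: confusion_metrics(25, 1, 1, 3) returns approximately (0.893, 0.962, 0.867).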



Chapter 3
Acne Detection Using Adaptive Thresholding

3.1 Thresholding
Thresholding is one of the simplest image segmentation methods. The
thresholding process separates the object from the background by selecting a threshold
T that separates their intensity modes: any point (x, y) for which f(x, y) > T is an object
point; otherwise, it is a background point [6]. Single-level (or binary) thresholding has
only one threshold T, whereas in multilevel thresholding the system has two or more
thresholds, for example T1 and T2: a point belongs to the object class if f(x, y) > T2
and to the background class if f(x, y) <= T1. Multilevel thresholding is less reliable
than single thresholding because of the difficulty of finding multiple threshold points
that effectively separate the regions of interest from the background [6].
Binary thresholding, also called "simple global thresholding", is the simplest
thresholding technique. It partitions the image histogram, separating the object from
the background with a single threshold T. The segmentation process scans through
each pixel and labels it as object or background depending on whether the gray level
of that pixel (in the range 0 to 255) is greater or less than T. In practice, binary
thresholding can be expected to succeed in highly controlled environments.
Chantharaphaichit et al. [1] achieved fair accuracy using binary thresholding to detect
acne vulgaris. The main drawback of binary thresholding is that a single threshold
value, if not optimal, reduces the accuracy of the system, as shown in Figure 3.1.



Figure 3.1 Thresholding value comparison

The thresholding approach has the main drawback that the result is sensitive to
image illumination. Generally, the histogram of an image with no illumination noise
shows two modes separated by a valley that divides foreground from background.
When the image has illumination noise, however, the noise blurs the valley between
the modes, making segmentation by a single threshold impossible.
In adaptive thresholding, unlike binary thresholding, the threshold value at each
pixel location depends on the neighboring pixel intensities [27]. This approach
assumes that smaller image regions tend to have uniform illumination and are thus
more suitable for thresholding. Adaptive thresholding gives better results for an image
with varying illumination. A comparison of binary thresholding and adaptive
thresholding is shown in Figure 3.2.
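For illustration, a minimal sketch of the two approaches with OpenCV in Python
follows (the thesis implementation itself is in C# with EMGUCV; the file name and
the parameter values 127, 11, and 2 here are assumptions):

import cv2

gray = cv2.imread("skin_patch.png", cv2.IMREAD_GRAYSCALE)

# Binary (global) thresholding: one threshold T (here 127) for the whole image.
_, global_bw = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Adaptive thresholding: T is computed per pixel from its neighborhood mean.
adaptive_bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                    cv2.THRESH_BINARY, 11, 2)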



Figure 3.2 Comparison of Binary Thresholding and Adaptive Thresholding

3.2 Adaptive Thresholding Implementation


The adaptive thresholding flow chart is shown in Figure 3.3. The system
starts with pre-processing, applying a Gaussian smoothing filter with smooth value = 3.
The system then extracts only the green channel from the RGB color space, because
acne exhibits more contrast in the green channel, while the red and blue channels tend
to contain more noise [28].



Figure 3.3 Adaptive Thresholding Flow Chart

In adaptive thresholding, there are two adaptive methods:

1. Adaptive Thresholding Mean C: the threshold value is the mean of the
neighborhood area.
2. Adaptive Thresholding Gaussian C: the threshold value is calculated from the
weighted sum (Gaussian window) of the neighborhood values.

In this implementation, method 1, Adaptive Thresholding Mean C, is used
because it gives more consistent results on the given data set. With block size
parameter = 699 and C = 1, the thresholding segmentation results are presented in
Figure 3.4 to Figure 3.9, and a code sketch of the pipeline follows below.
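As a concrete sketch of the pipeline above, the following Python/OpenCV fragment
mirrors the same steps (the thesis implementation is in C# with EMGUCV; the file
name and the inverted binary polarity are assumptions):

import cv2

img = cv2.imread("acne_patch.png")

green = img[:, :, 1]                            # green channel (OpenCV uses BGR order)
smoothed = cv2.GaussianBlur(green, (3, 3), 0)   # Gaussian smoothing, smooth value 3

# Adaptive Thresholding Mean C with block size 699 and C = 1, as in the text.
# THRESH_BINARY_INV (an assumption) turns dark acne regions into white foreground.
binary = cv2.adaptiveThreshold(smoothed, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 699, 1)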



Figure 3.4 Result 1 of Adaptive Thresholding Segmentation

Figure 3.5 Result 2 of Adaptive Thresholding Segmentation



Figure 3.6 Result 3 of Adaptive Thresholding Segmentation

Figure 3.7 Result 4 of Adaptive Thresholding Segmentation



Figure 3.8 Result 5 of Adaptive Thresholding Segmentation

Figure 3.9 Result 6 of Adaptive Thresholding Segmentation

After the segmentation process, the system performs BLOB detection with the
Simple BLOB Detector, which implements the following algorithm for extracting
blobs [29]:

1. Binary images are converted from the source image by applying several
threshold values from minThreshold to maxThreshold, gradually increasing by
thresholdStep.
2. FindContours is used to extract connected components from all the binary
images, and their centers are calculated.
3. The centers from the several binary images are grouped: centers that are close
to one another form one group, which becomes one blob. The parameter that
controls the grouping process is minDistBetweenBlobs.
4. The estimated centers and radii of the detected blobs are calculated from the
groups, and the locations and sizes of the key points are returned.

The Simple BLOB Detector has the filtration parameters shown in Table 3.1;
the user can adjust these filters to improve BLOB detection accuracy.

Table 3.1 Parameters of simple BLOB detector

Color: the color of the BLOB, from dark to light.
Area: the size (width x height) of the blob, limited between min and max values.
Circularity: calculated as 4*pi*A / P^2, where A = area and P = perimeter; limited
between the min and max circularity values.
Inertia: extracted blobs must have an inertia ratio between the min and max inertia
ratios.
Convexity: calculated as area / area of the blob's convex hull; limited between min
and max convexity.
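A configuration sketch of these filters in Python/OpenCV follows; every numeric
filter value below is an illustrative assumption, and binary refers to the segmented
image from the earlier sketch.

import cv2

params = cv2.SimpleBlobDetector_Params()
params.minThreshold, params.maxThreshold = 10, 200  # range swept by thresholdStep
params.filterByColor = True
params.blobColor = 255                  # keep light blobs (0 would keep dark blobs)
params.filterByArea = True
params.minArea, params.maxArea = 30, 5000
params.filterByCircularity = True
params.minCircularity = 0.5             # circularity = 4*pi*A / P^2
params.filterByInertia = True
params.minInertiaRatio = 0.3
params.filterByConvexity = True
params.minConvexity = 0.8

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(binary)     # the segmented image from the pipeline above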



The Simple BLOB detector demonstrates only average acne candidate detection
results, as shown in Figure 3.10 to Figure 3.12.

Figure 3.10 BLOB detection Result 1 of Adaptive Thresholding Segmentation

Figure 3.11 BLOB detection Result 2 of Adaptive Thresholding Segmentation



Figure 3.12 BLOB detection Result 3 of Adaptive Thresholding Segmentation

In conclusion, the adaptive thresholding method is good for acne segmentation,
but the BLOB detection process, implemented with the Simple BLOB detector, needs
a more advanced method to detect acne, which is explained in the next chapter.



Chapter 4
Acne Detection Using Speeded Up Robust Features and
Quantification Using K-Nearest Neighbors Algorithm
In the previous chapter, an algorithm for acne detection was proposed using the
adaptive thresholding segmentation method and the Simple BLOB detector. However,
the adaptive thresholding technique still has flaws due to illumination and the constant
block size parameter; furthermore, the Simple BLOB detector requires the user to
specify filtration parameters to improve its performance, and the filter values can
differ between images. In this chapter, therefore, a more robust acne detection method
is proposed, applying Speeded Up Robust Features (SURF) with more advanced
feature extraction and classification.

4.1 Pre-Processing
The system starts by pre-processing the image: it extracts only the green channel
and then removes noise with a Gaussian blur with smooth value = 3. In this
implementation, the color spaces used are the RGB color space and the Hue Saturation
Value (HSV) color space.
The HSV color space represents points of the RGB color model in a cylindrical-
coordinate representation. HSV rearranges the geometry of RGB in a more intuitive
and perceptually relevant way, which gives it benefits over the Cartesian
representation. HSV decouples the intensity value from color, with hue and saturation
corresponding to human perception, so image processing algorithms can be developed
easily from this representation [31].

4.2 Speeded Up Robust Features (SURF)


SURF is a scale- and rotation-invariant interest point (key point) detector and
descriptor. It is partly inspired by the scale-invariant feature transform (SIFT)
descriptor and is faster than SIFT [30]. Han and Uyyanonvara [31] conducted a survey
of biomedical image BLOB detection and found that SURF, which implements a
Determinant of Hessian (DoH) blob detector, was suitable for real-time key point
detection; moreover, its detection speed is independent of the blob size parameter, and
it can detect several common geometrical structures.
The SURF algorithm has three main processes: first, interest point detection;
second, local neighborhood description; and third, matching. For the first process,
interest point detection, SURF applies square-shaped filters as a Gaussian smoothing
approximation on the integral image of eq. 4.1:

S(x, y) = sum_{i=0}^{x} sum_{j=0}^{y} I(i, j)    (4.1)
Then, the Hessian matrix blob detector is used to find key points in eq. 4.2, as done by
Lindeberg [32]. Given a point p = (x, y) in an image, the Hessian matrix H(p, σ) can be
calculated at point p and scale σ, where L_xx(p, σ), L_xy(p, σ), and L_yy(p, σ) are the
second-order Gaussian derivatives of the grayscale image. The determinant of the
Hessian matrix measures the local change around the point, and a candidate point is
taken as a key point if the determinant is maximal.

H(p, σ) = | L_xx(p, σ)  L_xy(p, σ) |
          | L_xy(p, σ)  L_yy(p, σ) |    (4.2)

Key points can be found at various scales by up-scaling the filter size, and the
scale space is analyzed. The output of the initial 9x9 filter is considered the first scale
layer, at scale s = 1.2, with Gaussian derivatives at σ = 1.2. Gradually bigger mask
filters are applied to the image to obtain the following layers. Brown's method is used
to find the maxima of the determinant of the Hessian matrix, which are interpolated in
scale and image space [33].
For local neighborhood description, the descriptor's objective is to provide a
unique and robust description of an image feature. The SURF descriptor achieves
rotational invariance by computing Haar wavelet responses in the x and y directions
within a circular neighborhood of radius 6s around the interest point, where s is the
scale at which the point was detected. The responses are weighted with a Gaussian
function centered at the key point, and each response is plotted as a point with the
horizontal response on the abscissa and the vertical response on the ordinate (two
dimensions). A sliding orientation window of size π/3 is used to calculate the
dominant orientation: the horizontal and vertical responses within the window are
summed, the two sums yield a local orientation vector, and the longest such vector
determines the orientation of the key point. To achieve a good balance between
robustness and angular resolution, the sliding window size parameter needs to be
optimal.
A square window of size 20s around the point is then extracted along the
selected orientation; this process is applied to describe the region of interest. The
region is divided into smaller 4x4 square sub-regions, and Haar wavelet responses are
extracted at 5x5 regularly spaced sample points for each one. A Gaussian is used to
weight the responses to give the system more robustness.
Lastly, matching: SURF's main objective is to match the descriptors obtained
from different images to find matching pairs and perform image recognition. For this
thesis, however, the SURF algorithm is used only up to the BLOB detection part.
The system stores all detected key points in a vector of key points. In the
experiment, SURF demonstrated high performance, with a calculation time of less
than 10 seconds per image.
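A minimal detection sketch with OpenCV's SURF follows for illustration (SURF
lives in the opencv-contrib xfeatures2d module and, being patented, is available only
in builds with the nonfree modules enabled; the Hessian threshold and file name are
assumptions):

import cv2

gray = cv2.imread("acne_patch.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.detectAndCompute(gray, None)

# Each key point carries a location and a size (the detected blob scale),
# which serve as the acne candidates in this thesis.
for kp in keypoints:
    print(kp.pt, kp.size)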

4.3 Feature Extraction


The key points detected by the SURF algorithm may overlap each other. There
are two types of overlap: 1. whole overlap, where the whole smaller BLOB is inside
the larger BLOB; and 2. part overlap, where two BLOBs intersect. The process of
discarding overlapped key points is shown in Figure 4.1. The overlapping problem can
be solved in two steps, as sketched in the code below:
1. Sort the key points from large to small; in this implementation, the author
used a bubble sort.
2. Iterate over each key point; if a smaller key point overlaps a larger one by
more than a quarter of the smaller one's area, discard the smaller one.
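A Python sketch of this step follows; it assumes OpenCV key points with a center
(kp.pt) and diameter (kp.size), and its center-distance test is a crude stand-in for the
exact quarter-area rule.

import math

def discard_overlaps(keypoints):
    kept = []
    # Step 1: sort key points from large to small (the thesis used a bubble sort).
    for kp in sorted(keypoints, key=lambda k: k.size, reverse=True):
        # Step 2: keep a key point only if it stays clear of every larger kept
        # one; this approximates discarding on more than 1/4 area overlap.
        if all(math.dist(kp.pt, big.pt) > (big.size + kp.size) / 2 for big in kept):
            kept.append(kp)
    return kept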

Figure 4.1 Discard Overlapped Key points

After that, the system iterates over the key points whose size is greater than
30 x 30 px, the smallest size of acne. The system then applies Otsu thresholding to
extract the foreground object (acne) from the background (skin).
Otsu thresholding is a thresholding technique that automatically selects the
threshold that best separates the two classes, choosing the threshold most likely to
split the image between foreground and background [34].
Otsu thresholding may produce unwanted results because some detected skin
BLOBs contain no foreground object; the system therefore filters out these skin
BLOBs with the following algorithm:

1. Assuming that the foreground object will be in the center, the system
samples the center pixel to check whether it has a nonzero value.
2. The system scales up to a 5 x 5 square and checks whether any pixel has a
value. If no pixel has value > 0, the BLOB is discarded.

The features of each acne candidate are then calculated. The initially designed
features are:

4.3.1 Redness: calculated from the number of pixels whose hue falls in
EMGUCV's red ranges (0-8 and 172-255).
4.3.2 Hue Mean: the average hue over all pixels.
4.3.3 Standard Deviation (SD) of Red.
4.3.4 Standard Deviation (SD) of Green.
4.3.5 Standard Deviation (SD) of Blue.
4.3.6 Circularity: the system finds the contour of the detected candidate
and calculates its area (A) and perimeter (P) to obtain the
circularity from eq. 4.3; if the circularity is close to 1, the detected
candidate has a circular shape.

Circularity = 4*pi*A / P^2    (4.3)

4.3.7 Entropy: the Shannon entropy of the detected image.
4.3.8 Saturation Mean.
4.3.9 Standard Deviation (SD) of Saturation.
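A sketch of computing several of these features for one candidate patch in
Python/OpenCV follows (the function name is illustrative; it covers the six features
that Chapter 5 later selects):

import cv2
import numpy as np

def candidate_features(patch_bgr, contour):
    b, g, r = cv2.split(patch_bgr)
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)

    # Circularity from eq. 4.3: 4*pi*A / P^2.
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    circularity = 4 * np.pi * area / perimeter ** 2 if perimeter > 0 else 0.0

    # Shannon entropy of the grayscale patch.
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
    hist = np.bincount(gray.ravel(), minlength=256) / gray.size
    entropy = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))

    return {
        "sd_red": float(r.std()), "sd_green": float(g.std()), "sd_blue": float(b.std()),
        "circularity": float(circularity), "entropy": float(entropy),
        "saturation_mean": float(hsv[:, :, 1].mean()),
    }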

4.4 Training and Testing


For the training process, the system was trained with a total of 9 images, yielding
865 training samples: 129 positive and 736 negative. The user has to decide whether
each detected candidate is acne or not, based on a ground truth image obtained from a
dermatologist. The training data are stored in a Microsoft SQL (MSSQL) database,
which provides important benefits over storing them in a text file or in Excel, such as
querying, reporting, and better scalability; with C#, the database interface is
implemented natively.
This training process requires excessive effort from the trainer, who must
manually observe the detection result and compare it with the ground truth image,
which can result in inaccuracy and wasted time; consequently, an automatic training
system was developed. The user chooses a training image and the corresponding
ground truth image, in which all acne lesions are marked with black circles (pixel
value = 0). The system iterates over each detected key point (candidate) of the training
image and masks it with the ground truth image to identify whether the detected key
point is acne: if the masked result contains more black pixels than a threshold, the key
point is considered acne and a green square is drawn; otherwise it is skin and a red
square is drawn. The system continues this process until all key points are classified.
The automatic training system is presented in Figure 4.2.



Figure 4.2 Automatic Training System
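A minimal Python sketch of the masking-based labeling step follows (the function
name and the black-pixel threshold are assumptions; the ground truth image is
grayscale with acne marked by black circles):

import numpy as np

def label_candidate(ground_truth_gray, kp, black_thresh=50):
    x, y = int(kp.pt[0]), int(kp.pt[1])
    r = int(kp.size / 2)
    region = ground_truth_gray[max(y - r, 0):y + r, max(x - r, 0):x + r]
    n_black = int(np.count_nonzero(region == 0))         # pixels marked by the dermatologist
    return "acne" if n_black > black_thresh else "skin"  # green vs. red square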
The six features with the highest correlation values were chosen: SD of Green,
SD of Red, Entropy, Saturation Average, SD of Blue, and Circularity. More
information on the correlation values can be found in Chapter 5.
Testing. Cross-validation (rotation estimation) [35–37] is a model validation
technique for assessing how the results of a statistical analysis generalize to an
independent data set. 3-fold cross-validation was used: all images were divided into
70% training data and 30% testing data (the validation set). For each fold, the selection
of training data was changed, and the data tested in one fold were not included in that
fold's training data. The cross-validation scheme is visualized in Figure 4.3.

Figure 4.3 Cross Validation



4.5 K-Nearest Neighbor Classification
Supervised learning, or supervised classification, is the machine learning task of
inferring a function from labeled training data. A set of training examples is provided
in the training data, and each training example is a pair of an input object and a correct
output value. There are two types of this approach: distribution-free and statistical.
Distribution-free methods do not require any knowledge of probability distribution
functions; they are based on reasoning and heuristics. Statistical methods, on the other
hand, are based on a probability distribution model, which requires prior knowledge
and can be parametric or nonparametric.
The K-Nearest Neighbors algorithm is a distribution-free method [38] used for
classification and regression. In both cases, the input consists of the k closest training
examples in the feature space. An object is classified by a majority vote of its
neighbors: the object is assigned to the most common class among its k nearest
neighbors.
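For illustration, the following sketch shows this classification stage with
scikit-learn's KNN rather than the thesis's C# implementation (the file names are
placeholders; X holds one row of the six selected features per candidate, y the labels,
1 = acne and 0 = skin):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.load("train_features.npy")
y_train = np.load("train_labels.npy")
X_test = np.load("test_features.npy")

knn = KNeighborsClassifier(n_neighbors=4)   # K = 4, as used in Chapter 5
knn.fit(X_train, y_train)
predictions = knn.predict(X_test)           # majority vote of the 4 nearest neighbors
acne_count = int(predictions.sum())         # quantification: number of detected acne lesions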
The implementation of acne detection using Speeded Up Robust Features and
quantification using the K-Nearest Neighbors algorithm is shown in Figure 4.4, and
the results are shown in Figure 4.5 to Figure 4.8.
The experimental designs for each step of supervised learning used to test this
approach are shown in Table 4.1.



Figure 4.4 Acne Detection using Speeded Up Robust Feature and Quantification using
K-Nearest Neighbors Algorithm Flow Chart


Figure 4.5 Result 1 of Acne Detection using Speeded Up Robust Feature and
Quantification using K-Nearest Neighbors Algorithm

Figure 4.6 Result 2 of Acne Detection using Speeded Up Robust Feature and
Quantification using K-Nearest Neighbors Algorithm



Figure 4.7 Result 3 of Acne Detection using Speeded Up Robust Feature and
Quantification using K-Nearest Neighbors Algorithm

Figure 4.8 Result 4 of Acne Detection using Speeded Up Robust Feature and
Quantification using K-Nearest Neighbors Algorithm



Table 4.1 Supervised learning steps, detail and implementation

1. Determine the type of training example.
   Detail: choose what kind of data is to be used as a training set.
   Implementation: acne images of part of the face, captured at a dermatologist's clinic.

2. Gather a training set.
   Detail: gather real-world data from an expert as a set of input objects and a set of
   output objects.
   Implementation: a dermatologist creates the ground truth images.

3. Determine the input feature representation.
   Detail: the input is represented as a feature vector that contains the corresponding
   features. The number of features must be limited because of the curse of
   dimensionality, but it should contain enough information to give the system good
   prediction accuracy.
   Implementation: a feature vector with a total of 9 features.

4. Determine the structure of the learned function.
   Detail: determine the learning function or algorithm, such as a Support Vector
   Machine, K-Nearest Neighbors, or a decision tree.
   Implementation: K-Nearest Neighbors classification.

5. Complete the design.
   Detail: run the learning algorithm on the training set; test it and determine certain
   control parameters.
   Implementation: run the system, observe, and fine-tune.

6. Evaluate.
   Detail: evaluate the accuracy of the learned function. Adjust the parameters to
   obtain the optimal control parameters, and then measure the performance of the
   learned function on a testing data set that is separate from the training data set.
   Implementation: evaluate with a confusion matrix and 3-fold cross-validation.

4.6 System Implementation on Whole Face Image


The acne detection and quantification device consists of a web camera and the
acne detection and quantification software. The software starts with the image
acquisition process, capturing a whole face image of the patient at a full-high-definition
resolution of 1080 x 1920 pixels; the system then draws an ellipse and masks the face
area. A heat-mapping technique is applied to the green channel of the image, in which
acne pixels have the highest contrast [28], converted to gray scale. The system creates
the result image in the HSV color space: the heat-map image (image C in Figure 4.9 to
Figure 4.12) is created by setting the saturation and value of the image to 255 while
keeping only the original hue. The heat-map image marks acne explicitly, so the
patient can easily investigate and understand the acne on the heat-mapped image. The
system then performs the pre-processing step with Gaussian smoothing, and the SURF
parameters are adjusted to match the size of the image.
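A sketch of the heat-map step in Python/OpenCV follows (file names are
placeholders):

import cv2

img = cv2.imread("whole_face.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hsv[:, :, 1] = 255                          # saturation forced to 255
hsv[:, :, 2] = 255                          # value forced to 255; hue kept as-is
heat_map = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imwrite("heat_map.png", heat_map)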
The first challenge is the SURF algorithm which detects the border between
ellipse’s mask and face to be candidates. So, the problem is solved by drawing smaller
ellipse and checking whether detected candidate’s area is overlapping with this area or
not, if the detected candidate is in this area, then discard this candidate.



The next challenge is that the system detects candidates that lie in the eyebrow,
eye, and mouth areas. These candidates are discarded by using a Haar cascade
classifier to detect the eye and mouth areas and remove them from the Region of
Interest (ROI).
Viola and Jones [39] proposed an effective object detection method using Haar
feature-based cascades. It is a machine learning approach in which a cascade function
is trained with many positive and negative images; the trained function is then used to
detect objects in other testing images.

In the detection phase of the Viola–Jones object detection framework, a window
of the target size is moved over the input image, and a Haar-like feature is calculated
for each subsection of the image. The resulting difference is compared with a learned
threshold that separates non-objects from objects. Because a single Haar-like feature
is only a weak classifier, a large number of Haar-like features are needed to describe
an object accurately; organizing them in a cascade forms a stronger classifier.

The Haar-like feature has the key advantage of fast computation: the calculation
time for a Haar-like feature of any object size is constant.
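A sketch of this filtering step in Python/OpenCV follows (haarcascade_eye.xml
ships with OpenCV; a mouth cascade such as the contributed
haarcascade_mcs_mouth.xml would be used the same way, and surf refers to the
detector from the earlier sketch):

import cv2

gray = cv2.imread("whole_face.png", cv2.IMREAD_GRAYSCALE)
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

candidates = surf.detect(gray)              # SURF candidates from Section 4.2
for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.3, 5):
    # Discard candidates whose centers fall inside a detected eye box.
    candidates = [kp for kp in candidates
                  if not (x <= kp.pt[0] <= x + w and y <= kp.pt[1] <= y + h)]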

The results of the system implementation on whole face images are shown in
Figure 4.9 to Figure 4.12. Note that these show only the detected candidates, which
have not yet been classified into acne or skin.

Figure 4.9 Result 1 of Acne Detection on whole face image



Figure 4.10 Result 2 of Acne Detection on whole face image

Figure 4.11 Result 3 of Acne Detection on whole face image

Figure 4.12 Result 4 of Acne Detection on whole face image



Chapter 5
Results and Discussion
The performance of acne detection using Speeded Up Robust Features and
quantification using the K-Nearest Neighbors algorithm is demonstrated in this
chapter. The results were analyzed by comparing them with ground truth images
obtained from a dermatologist. There were 9 images containing a total of 129 acne
lesions (of mixed types), giving 865 training samples: 129 positive and 736 negative.
The lighting conditions were not controlled, and the image size was 800 x 600 px.

5.1 Features Evaluation


To evaluate each feature's correlation with acne, the system was trained with the
training data; the correlations of the 9 features are shown in Table 5.1.

Table 5.1 Correlation of 9 features


No. Feature Correlation
1 Redness -0.074
2 Hue Mean 0.111
3 SD of Red 0.513
4 SD of Green 0.562
5 SD of Blue 0.158
6 Circularity 0.143
7 Entropy 0.425
8 Saturation Average 0.314
9 SD of Saturation 0.106

5.2 Detection and Quantification Algorithm Evaluation

3-fold cross-validation was performed: all images were divided into 70% training
data and 30% testing data (the validation set). For each fold, the selection of training
data was changed, and the data tested in one fold were not included in that fold's
training data. The numbers of training and testing data in each fold are shown in
Table 5.2.



Table 5.2 Training and testing data setting for the experiments

No.  K (Fold)  Total Training Data  Total Testing Data  Training Images  Testing Images
1    1         579                  286                 4,5,6,7,8,9      1,2,3
2    1         579                  286                 4,5,6,7,8,9      1,2,3
3    1         579                  286                 4,5,6,7,8,9      1,2,3
4    2         630                  235                 1,2,3,7,8,9      2,3,4
5    2         630                  235                 1,2,3,7,8,9      2,3,4
6    2         630                  235                 1,2,3,7,8,9      2,3,4
7    3         521                  344                 1,2,3,4,5,6      5,6,7
8    3         521                  344                 1,2,3,4,5,6      5,6,7
9    3         521                  344                 1,2,3,4,5,6      5,6,7

The confusion matrix was used to evaluate the performance of the algorithm.
Four conditions were counted: True Positive (TP), False Positive (FP), True Negative
(TN), and False Negative (FN); from these, the sensitivity, precision, and accuracy
were measured. The interpretation of each condition is shown in Table 5.3.

Table 5.3 Conditions and interpretation of confusion matrix

True positive (TP): acne that is correctly detected as acne.
False positive (FP): scar or normal skin that is incorrectly detected as acne.
True negative (TN): scar or normal skin that is correctly detected as scar or normal skin.
False negative (FN): acne that is incorrectly detected as scar or normal skin.

From these conditions, the measures were calculated as follows:

Sensitivity = TP / (TP + FN),
Precision = TP / (TP + FP),
Accuracy = (TP + TN) / (TP + TN + FP + FN).



Table 5.4 Acne Detection using Speeded Up Robust Feature and Quantification using
K-Nearest Neighbors Algorithm
No. TP TN FP FN Sensitivity Precision Accuracy
1 25 1 1 3 0.893 0.962 0.867
2 14 1 3 6 0.700 0.824 0.625
3 11 1 0 7 0.611 1.000 0.632
4 12 1 1 6 0.667 0.923 0.650
5 21 1 2 4 0.840 0.913 0.786
6 12 1 3 2 0.857 0.800 0.722
7 20 1 1 2 0.909 0.952 0.875
8 7 1 0 2 0.778 1.000 0.800
9 25 1 10 8 0.758 0.714 0.591
Average 0.779 0.899 0.727

By applying K-Nearest Neighbors classification with K = 4, the performance was
high, with a calculation time of less than 10 seconds per image. The average sensitivity
was 78%, the precision was 90%, and the accuracy was 73%.



Chapter 6
Conclusions and Recommendations
The main purpose of this study was to create a novel acne detection and
quantification method to relieve doctors of the excessive effort of manually
quantifying acne on a patient's face. An acne segmentation method using adaptive
thresholding was introduced, along with detection and quantification using Speeded
Up Robust Features with feature extraction.
Adaptive thresholding segmentation is a good approach for acne image
segmentation: it produces accurate segmentation of acne images and does not require
any manual adjustment.
A novel acne feature extraction and classification method was proposed. The
method first pre-processes the image with Gaussian smoothing. The system then
performs BLOB detection with the Speeded Up Robust Features (SURF) technique,
iterates over the detected key points, applies Otsu thresholding to extract the lesions,
and saves the feature vectors and training results in the database. From the statistical
analysis of the correlations of the 9 features introduced for acne quantification, the top
6 features that should be used for recognition were: SD of Green, SD of Red, Entropy,
Saturation Average, SD of Blue, and Circularity. Supervised learning with training and
testing was applied to the system: it was trained with 865 samples and tested on 129
acne lesions in 9 images using K-Nearest Neighbors classification (K = 4); 3-fold
cross-validation was performed, and the results were analyzed with a confusion
matrix. The performance was high, with a calculation time of less than 10 seconds per
image. The average sensitivity was 78%, the precision was 90%, and the accuracy was
73%.
For future study, the proposed algorithm should be tested with more acne
images; with more training data and system fine-tuning, the accuracy would improve.
Acne images should be made available online as an open database for access by other
researchers. Implementing the system with more advanced pre-processing techniques,
such as advanced brightness and contrast adjustment, histogram normalization, and
multi-scale Gaussian smoothing, can improve the accuracy. A controlled environment,
including controlled lighting and flash or non-flash photography, can make the system
more accurate, and applying SURF in a different color space, such as YCbCr or
CIELAB, can tolerate different lighting conditions.
Furthermore, to implement the acne detection system on whole face images,
ground truth images for acne should be created and fed into the automatic training
system. For the system implementation, a patient database with treatment records is an
important software feature for recording treatment progress and the locations of acne
lesions. Lastly, implementing the system with a professional camera can improve its
accuracy.



References

1. Chantharaphaichit, T., Uyyanonvara, B., Sinthanayothin, C., & Nishihara, A.
(2015). Automatic acne detection for medical treatment. Proceedings of The 6th
International Conference of Information and Communication Technology for
Embedded System 2015 (ICICTES 2015) held in Hua-Hin, 22–24 March 2015 (pp. 1–
6). Thailand: Novotel Hua Hin Cha Am Beach Resort and Spa.

2. Chantharaphaichit, T., Uyyanonvara, B., Sinthanayothin, C., & Nishihara A.


(2015). Automatic acne detection with featured Bayesian classifier for medical
treatment. Proceedings of The 3rd International Conference on Robotics, Informatics,
and Intelligence Control Technology (RIIT2015) held in Bangkok, 27–30 April 2015
(pp. 10–16). Thailand: Asia Hotel.

3. Khan, J., Malik, A., Kamel, N., Dass, S., & Affandi A. (2015). Segmentation of
acne lesion using fuzzy C-means technique with intelligent selection of the desired
cluster. Proceedings of an Annual International Conference of the IEEE Engineering
in Medicine and Biology Society held in Milan, 25–29 August 2015 (pp. 3077–3080).
Italy: The Milano Congressi Center.

4. Strauss, J. S., Krowchuk, D. P., Leyden, J. J., Lucky, A. W., Shalita, A. R.,
Siegfried, E. C., et al. (2007). Guidelines of care for acne vulgaris management. Journal
of the American Academy of Dermatology, 56, 651–663.

5. Bowe, W. P., Joshi, S. S., & Shalita, A. R. (2010). Diet and acne. Journal of the
American Academy of Dermatology, 63, 124–141.

6. Gonzalez, R. C., & Woods, R. E. (2008). Digital image processing. London:


Prentice Hall. (pp. 1–3).

42

Ref. code: 25605822040035JYO


7. Ballard, D. H., & Brown, C. M. (1982). Computer vision. London: Prentice
Hall.

8. Huang, T. (1996). Computer vision: evolution and promise (PDF). In: Vandoni
Carlo, E. (Ed.) 19th CERN School of Computing. Geneva: CERN. (pp. 21–25).
9. Sonka, M., Hlavac, V., & Boyle, R. (2008). Image processing, analysis, and
machine vision. United Kingdom: Thomson.

10. Roy Davies, E. (2005). Machine vision: Theory, algorithms, practicalities. San
Mateo, CA: Morgan Kaufmann.

11. Bickers, D. R., Lim, H. W., Margolis, D., Weinstock, M. A., Goodman, C.,
Faulkner, E., et al. (2006). The burden of skin diseases: 2004 a joint project of the
American Academy of Dermatology Association and the Society for Investigative
Dermatology. Journal of the American Academy of Dermatology, 55, 490–500.

12. Fujii, H., Yanagisawa, T., Murakami, Y., & Yamaguchi M. (2008). Extraction
of acne lesion in acne patients from multispectral images. Proceedings of an Annual
International IEEE EMBS Conference held in Vancouver, 21–24 August 2008 (pp.
4078–4081). British Columbia, Canada: The Vancouver Convention and Exhibition
Center.

13. Ramli, R., Malik, A. S., Yap, F. B. (2011). Identification of acne lesions, scars
and normal skin for acne vulgaris cases. Proceedings of The 2011 National
Postgraduate Conference held in Perak, 19–20 September 2011 (pp.1–4). Malaysia:
Universiti Teknologi PETRONAS (UTP) - Tronoh.

14. Alamdari, N., Alhashim, M., & Fazel-Rezai, R. (2016). Detection and
classification of acne lesions in acne patients: A mobile application. Proceedings of The
2016 IEEE International Conference on Electro Information Technology (EIT) held in
Grand Forks, 19-21 May 2016 (pp.739–743). North Dakota, USA: Alerus Conference
Center.

43

Ref. code: 25605822040035JYO


15. Liu, Z., & Zerubia, J. (2013). Towards automatic acne detection using a MRF
Model with chromophore descriptors. Proceedings of The 21st European Signal
Processing Conference (EUSIPCO 2013) held in Marrakech, 9–13 September 2013
(pp. 1–5). Morocco: Palais des Congres.

16. Chen, D., Chang, T., & Cao, R. (2012). The development of a skin inspection
imaging system on an Android device. Proceedings of The 7th International
Conference on Communications and Networking in China held in Kun Ming, 8–10

August 2012 (pp. 653–658). China: Dianchi Garden Hotel & Spa.

17. Malik, A. S., Ramli, R., Hani, A. F. M., Salih, Y., Yap, F. B. B., &
Nisar, H. (2014). Digital assessment of facial acne vulgaris. Proceedings of The 2014
IEEE International Instrumentation and Measurement Technology Conference
(I2MTC) held in Montevideo, 12–15 May 2014 (pp. 546–550). Uruguay: Radisson
Victoria Plaza Hotel.

18. Huamyun, J., & Malik A. S. (2011). Multispectral and thermal images for acne
vulgaris classification. Proceedings of The 2011 National Postgraduate Conference
held in Perak, 19–20 September 2011 (pp. 1–4). Malaysia: Universiti Teknologi

PETRONAS (UTP) – Tronoh.

19. Lucut, S., & Smith, M. R. (2016). Dermatological tracking of chronic acne
treatment effectiveness. Proceedings of The 2016 38th Annual International
Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) held in
Orlando, 16–20 August 2016 (pp. 5421–5426). Florida, USA: Disney’s Contemporary
Resort at Walt Disney World® Resort.

20. Chang, C., & Liao, H. (2013). Automatic facial spots and acnes detection
system. Journal of Cosmetics, Dermatological Sciences and Applications, 3, 28–35.

44

Ref. code: 25605822040035JYO


21. Chandra, D. B., Nirmal, B., & Ramesh R. (2013). Automatic detection of acne
scars: Preliminary results. Proceedings of The 2013 IEEE Point-of-Care Healthcare
Technologies held in Bangalore, 16–18 January 2013 (pp. 224–227). India: Sheraton
Bangalore.

22. Khomgsuwan, M., Kiattisin, S., Wongseree, W., & Leelasantitham, A. (2012).
Counting number of points for acne vulgaris using UV fluorescence and image
processing. Proceedings of The 2011 IEEE BMEICON Conference held in Chiang Mai,

29–31 January 2012 (pp. 142–146). Thailand: Kantary Resort.

23. Singh, J., & Kanwal, N. (2013). Survey of facial marks detection techniques.
International Journal of Engineering Research & Technology, 2(5), 219–225.

24. Adityan, B., Kumari, R., & Thappa, D. M. (2009). Scoring systems in acne
vulgaris. Indian Journal of Dermatology, Venereology and Leprology, 75(3), 323–326.

25. Doshi, A., Zaheer, A., & Stiller, M.J. (1997). A comparison of current acne
grading systems and proposal of a novel system. International Journal of Dermatology,
36(6), 416–418.

26. Stehman, S. V. (1997). Selecting and interpreting measures of thematic


classification accuracy. Remote Sensing of Environment, 62(1), 77–89.

27. Hanzra, B. S. Adaptive thresholding. Retrieved July 7, 2017,


from http://hanzratech.in/2015/01/21/adaptive-thresholding.html

28. SujithKumar, S. B., Sing, V. (2012). Automatic detection of diabetic


retinopathy in non-dilated RGB retinal fundus images. International Journal of
Computer Applications, 47(19), 26–32.

45

Ref. code: 25605822040035JYO


29. Hanzra B. S. Blob detection using OpenCV (Python, C++). Retrieved
July 7, 2017, from https://www.learnopencv.com/blob-detection-using-opencv-
python-c/

30. Bay, H., Ess, A., Tuytelaars, T., Van Gool, L. (2008). SURF: Speeded Up
Robust Features. Computer Vision and Image Understanding, 110(3), 346–359.

31. Han, K. T. M., Uyyanonvara, B. (2016). A survey of Blob Detection Algorithms


for biomedical images. Proceedings of the 2016 7th International Conference of
Information and Communication Technology for Embedded Systems (IC-ICTES) held
in Bangkok, 20–22 March 2016 (pp. 57–60). Thailand: Pullman Hotel.

32. Lindeberg, T. (1998). Feature detection with automatic scale selection.


International Journal of Computer Vision, 30(2), 79–116.

33. Brown, M., & Lowe, D. (2002). Invariant features from interest point groups.
In: Marshall, D & Rosin P. L. (Eds). Proceedings of the British Machine Conference
held in Cardiff, 2–5 September 2002 (pp. 23.1–23.10). UK: BMVA Press.

34. Mark, N., & Aguado, A. S. (2008). Feature Extraction & Image Processing.
London: Academic Press.

35. Seymour, G. (1993). Predictive Inference. New York: Chapman and Hall.

36. Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy


estimation and model selection. Proceedings of the Fourteenth International Joint
Conference on Artificial Intelligence, 2(12), 1137–1143.

37. Devijver, P. A., Kittler, J. (1982). Pattern recognition: A statistical approach.


London: Prentice Hall.

46

Ref. code: 25605822040035JYO


38. Jain, A. K. (1989). Fundamentals of digital image processing. London: Prentice
Hall.

39. Viola, P., Jones, M. (2001). Rapid object detection using a boosted cascade of
simple features. Proceeding of the 2001 IEEE Computer Society Conference on
Computer Vision and Pattern Recognition, CVPR 2001 held in Kauai, 8–14 December
2001 (pp. I-511 – I-518). Hawaii, USA: Kauai Marriott.

47

Ref. code: 25605822040035JYO


Appendices



Appendix A
List of Publications

1. Kittigul, N., & Uyyanonvara, B. "Automatic acne detection system for medical treatment progress report", 2016 7th International Conference of Information and Communication Technology for Embedded Systems (IC-ICTES), Bangkok, 2016, pp. 41–44.

2. Kittigul, N., & Uyyanonvara, B. "Acne detection using Speeded Up Robust Features and quantification using K-Nearest Neighbor Algorithm", 2017 6th International Conference on Bioinformatics and Biomedical Science (ICBBS 2017), Singapore, 2017, pp. 43–46.
