
Available online at www.sciencedirect.com

ScienceDirect

Procedia Computer Science 159 (2019) 1439–1448
www.elsevier.com/locate/procedia

23rd International Conference on Knowledge-Based and Intelligent Information & Engineering Systems
Lung boundary detection for chest X-ray images classification based on GLCM and probabilistic neural networks

Aleksandr Zotin^a, Yousif Hamad^b, Konstantin Simonov^c, Mikhail Kurako^b,*

^a Reshetnev Siberian State University of Science and Technology, 31 Krasnoyarsky rabochy av., Krasnoyarsk 660037, Russian Federation
^b Siberian Federal University, 79 Svobodny st., Krasnoyarsk 660041, Russian Federation
^c Institute of Computational Modeling SB RAS, 50/44 Akademgorodok, Krasnoyarsk 660036, Russian Federation

Abstract

Extraction of various structures from chest X-ray (CXR) images and classification of abnormalities are often performed as an initial step in computer-aided diagnosis/detection (CAD) systems. The shape and size of the lungs may hold clues to serious diseases such as pneumothorax, pneumoconiosis and even emphysema. With the growing number of patients, doctors are overworked and cannot counsel and take care of all their patients. Thus, radiologists need a CAD system supporting lung boundary detection in CXR images and image classification. This paper presents our automated approach for lung boundary detection and CXR classification in conventional posteroanterior chest radiographs. We extract the lung regions, the sizes of the regions, and shape irregularities with segmentation techniques that are used in image processing of chest radiographs. From the CXR image we extract 18 features using the gray level co-occurrence matrix (GLCM). This allows us to classify the CXR image as normal or abnormal using a probabilistic neural network (PNN) classifier. The proposed method has competitive results with comparatively shorter training time and better accuracy.

© 2019 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of KES International.

Keywords: Chest X-ray imaging; balance contrast enhancement technique; BCET; lung boundary detection; gray level co-occurrence matrix; probabilistic neural network.

* Corresponding author.
E-mail address: [email protected]

1877-0509 © 2019 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of KES International.
10.1016/j.procs.2019.09.314

1. Introduction

Currently, diseases of the respiratory organs are among the most common, which creates a growing demand for radiological studies. Chest radiography, or chest X-ray, is still the most commonly used imaging modality for diagnosing various pulmonary diseases and the most widely used diagnostic imaging in the world due to its low radiation dose, absence of side effects, economic feasibility and moderate sensitivity. The formulation of high-precision X-ray findings is based on the experience of the radiologist and the quality of the X-ray image. For many diseases, cures are effective only in the early, symptomless stage of the disease. Screening can help in early diagnosis, and standard chest radiography is the most popular imaging method for the reasons mentioned above [1].
The important steps in the automatic analysis of CXR images are accurate lung boundary detection and classification of the images as normal or abnormal (with pathologies). Chest radiography is an early diagnostic tool commonly used in clinical settings to monitor defects in the chest area, including lung and heart disease, haemorrhage, consolidation, pneumothorax, pleural effusion, swelling, heart enlargement and inflation [2]. In some diagnostic cases, information directly related to the lung boundaries can be extracted from the images without further analysis. For example, an irregular shape, volume measurements or the total lung volume [3] can be evidence of serious diseases such as heart enlargement, pneumonia or emphysema [4, 5].
There have been many studies on the detection of lung abnormalities in the past. Computer-aided diagnosis has become part of routine clinical work in many countries. The detection of lung areas in chest X-rays is an important component, especially in determining whether the lung is normal or abnormal [6-9]. Lung area detection is usually the first step in the computerized analysis of chest radiography. Information about the lung extent enables further assessment of the lung condition.
Over the past decade, a number of methods for CXR image segmentation and lung boundary extraction have been proposed. These methods are based on several different approaches. The first type is rule-based methods, which mostly utilize heuristic assumptions and approximate solutions. Such solutions can be used as initial values for more robust segmentation algorithms [10]. Saad et al. proposed a method for segmenting lung regions in CXR images using the Canny edge filter and morphological operators [8]. A more general approach relies on pixel classification by modeling the interior and exterior of the lung regions and marking (classifying) pixels as object or background pixels.
There are also methods based on shape or appearance models of the lungs [11]. Apart from these general approaches, researchers use combinations of methods. Thus, Li et al. [6] proposed the usage of graph-based segmentation with saliency information based on a global contrast function. Jaeger et al. [7] used a graph cut optimization method in combination with a lung model. Hybrid methods aim to produce better results by fusing several techniques. In [12], Jaeger et al. used a combination of an intensity mask, a lung model mask derived from a training set, and a Log-Gabor mask for lung region extraction.
The advent of digital chest radiography and the possibility of digital image processing have given new impetus to computer-aided screening and diagnosis. Still, despite its omnipresence in medical practice, the standard CXR is a very complex imaging tool. In order to determine the presence of pathologies (possible diseases), an approach based on artificial neural network (ANN) classification is widely used [13-17]. Programs that use neural networks (probabilistic neural networks or deep learning neural networks) are able to learn and can cope with changing circumstances.
This paper describes a solution to the problem of obtaining clearer lung boundaries, which leads to better CXR image classification. Thus, we propose a set of computational procedures for image preparation for further analysis by medical specialists. In this setting, two main components can be distinguished: improvement of image quality and formation of lung boundaries with the generation of an edge map. At the last step, CXR image classification is conducted by an ANN binary classifier.
The rest of the paper is organized as follows. Section 2 provides details of the proposed method for lung boundary detection and classification of CXR images as normal or abnormal. Section 3 presents experimental studies and describes the main results. The conclusions are given in Section 4.

2. Proposed methodology

The objective of this work is to design CXR image segmentation and classification methods for lung boundary and abnormality detection. The presence of pathologies is determined by the proposed technique in three steps (stages). In most cases, CXR images (or image sets) have noticeable noise and different contrast levels due to the technical characteristics of the device. Thus, noise suppression with a median filter and contrast enhancement are conducted at the first step. At the second step, lung boundary detection is performed. During this step, thresholding using Otsu's method and formation of a convex hull of the outer ring points are performed. The third step is devoted to image classification (pathology detection). Feature extraction is based on the GLCM, and CXR classification is performed by a PNN classifier. Fig. 1 depicts the principal scheme of the proposed approach.

Fig. 1. The proposed algorithmic scheme for processing and analyzing the CXR images.

The preprocessing step is discussed in Section 2.1. Section 2.2 gives information about lung boundary formation. Feature extraction is described in Section 2.3. Section 2.4 presents the description of the probabilistic neural network used for CXR image classification.

2.1. Preprocessing

The primary task of preprocessing is CXR image quality improvement. This step helps to improve certain parameters of CXR images (e.g. the signal-to-noise ratio) and to enhance the visual appearance by removing irrelevant noise and undesired parts of the background. The most commonly encountered types of noise are Salt and Pepper, Speckle, Gaussian, and Poisson noise. Taking into account the possible noise variants in CXR images, the median filter was selected as the main filter for noise suppression [18].
Apart from noise suppression, during medical image processing, contrast enhancement is required for the area of interest. In the proposed methodology we use an approach based on BCET. The choice was made because the contrast
of the image can be stretched or compressed without changing the histogram pattern of the input image [19]. Another option is contrast limited adaptive histogram equalization (CLAHE) [20]. Results of contrast enhancement for CXR images with and without pathologies are shown in Fig. 2 and Fig. 3.

Fig. 2. (a) original image without pathology; (b) result of BCET enhancement; (c) result of CLAHE enhancement.

Fig. 3. (a) original image with pathology; (b) result of BCET enhancement; (c) result of CLAHE enhancement.

As can be seen, the BCET results are better suited for finding the lung area by thresholding in comparison with the CLAHE results. This is due to the fact that the lung area is darker than in the original image and the contrast of the vessels, tissues and ribs is low. However, the CLAHE results can be used as supplementary data for feature calculation.
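
To make this preprocessing step concrete, the sketch below assumes the standard parabolic BCET formulation (the output y = a(x - b)^2 + c fitted to a prescribed output minimum, maximum and mean) together with an OpenCV median filter and CLAHE; the kernel size, CLAHE settings and target output mean are illustrative and not taken from the paper.

import numpy as np
import cv2


def bcet(img, L=0.0, H=255.0, E=110.0):
    # Balance Contrast Enhancement Technique: fit y = a*(x - b)^2 + c so that the
    # output has minimum L, maximum H and mean E (parabolic formulation assumed).
    x = img.astype(np.float64)
    l, h, e = x.min(), x.max(), x.mean()          # input minimum, maximum, mean
    s = np.mean(x ** 2)                           # input mean square
    b = (h * h * (E - L) - s * (H - L) + l * l * (H - E)) / \
        (2.0 * (h * (E - L) - e * (H - L) + l * (H - E)))
    a = (H - L) / ((h - l) * (h + l - 2.0 * b))
    c = L - a * (l - b) ** 2
    return np.clip(a * (x - b) ** 2 + c, 0, 255).astype(np.uint8)


def preprocess(cxr_gray):
    # cxr_gray: 8-bit grayscale CXR image. The median filter suppresses impulse-like
    # noise; BCET is the primary enhancement, CLAHE is kept as supplementary data.
    denoised = cv2.medianBlur(cxr_gray, 5)        # kernel size is illustrative
    enhanced_bcet = bcet(denoised)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced_clahe = clahe.apply(denoised)
    return enhanced_bcet, enhanced_clahe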

2.2. Segmentation

Higher segmentation accuracy in medical imaging allows the disease to be identified more precisely. There are two approaches to image segmentation: the discontinuity-based approach and the similarity-based approach. The discontinuity-based approach identifies isolated points, lines or edges in an image (e.g. identifying lung boundaries in CXR). The similarity-based approach groups similar image intensity values. The lung segmentation is based on geometric features such as edges. Segmenting the lung with edge detection is a fundamental and essential pre-processing step because edges represent important contour features within the corresponding image [8].
Thresholding is one of the similarity-based approaches. In the case of CXR image thresholding, the dark objects (except the image borders) correspond to the lungs and everything else is background. For threshold segmentation, Otsu's method was chosen. It allows us to adaptively define a global threshold.
To determine an outline of the lung boundaries, the image is conventionally divided into two halves. Then a convex hull is built for each half in order to obtain a set of points in the plane [21]. After that, both halves are combined into one image. Lastly, a contour representation is formed by the Canny edge detector. An example of lung segmentation with boundary detection is shown in Fig. 4.

Fig. 4. (a) pre-processed image; (b) Otsu segmentation; (c) convex hull of outer ring points; (d) detected lung boundary.

The obtained segmented CXR image will be helpful for medical diagnosis by medical specialists. Thus, information about the lung segments allows us to evaluate the size and other lung features, which can be useful for medical assessment.
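
A minimal sketch of this segmentation step is given below, under stated assumptions: an OpenCV-based implementation, the convex hull built over all foreground pixels of each image half, and illustrative Canny thresholds; the handling of small spurious components from the Otsu mask is simplified relative to the paper.

import numpy as np
import cv2


def detect_lung_boundary(enhanced):
    # Otsu's method adaptively selects a global threshold; the lungs are dark,
    # so the inverse binary mask marks the candidate lung region.
    _, mask = cv2.threshold(enhanced, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    h, w = mask.shape
    hull_mask = np.zeros_like(mask)
    # Build a convex hull of the foreground points in each image half, then
    # combine both halves in one image.
    for x0, x1 in ((0, w // 2), (w // 2, w)):
        pts = cv2.findNonZero(mask[:, x0:x1])
        if pts is None:
            continue
        pts[:, 0, 0] += x0                         # back to full-image coordinates
        cv2.fillConvexPoly(hull_mask, cv2.convexHull(pts), 255)
    # Contour representation of the combined mask via the Canny edge detector.
    edges = cv2.Canny(hull_mask, 50, 150)          # thresholds are illustrative
    return mask, hull_mask, edges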

2.3. Feature Extraction

The accuracy of the classification depends on the quality of the extracted features. The gray level co-occurrence matrix is a robust tool in statistical image analysis. It is used for the evaluation of image features in terms of second-order statistics. The GLCM is defined as a two-dimensional matrix of joint probabilities between pairs of pixels over an image I, i.e. the distribution of co-occurring values at a given offset (dx, dy) for an image of size N×M:

C_{dx,dy}(i,j) = \sum_{p=1}^{N} \sum_{q=1}^{M} \begin{cases} 1, & \text{if } I(p,q) = i \text{ and } I(p+dx, q+dy) = j \\ 0, & \text{otherwise} \end{cases}    (1)
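
As an illustration, a direct (unoptimized) transcription of Eq. (1) might look as follows, assuming I is a 2D array of integer gray levels:

import numpy as np


def glcm_single_offset(I, dx, dy, levels=256):
    # Count co-occurring gray-level pairs (i, j) at offset (dx, dy) over the
    # N x M image I, exactly as in Eq. (1).
    N, M = I.shape
    C = np.zeros((levels, levels), dtype=np.int64)
    for p in range(N):
        for q in range(M):
            if 0 <= p + dx < N and 0 <= q + dy < M:
                C[I[p, q], I[p + dx, q + dy]] += 1
    return C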

Features were chosen after Gómez et al. [22]: Energy (F8), Contrast (F2), Correlation (F3), Autocorrelation (F1), Entropy (F9), Homogeneity (F10), Dissimilarity (F7), Cluster Shade (F6), Cluster Prominence (F5), Maximum Probability (F12), Sum of Squares (F13), Inverse Difference Moment (F21), Sum Average (F14), Sum Variance (F16), Sum Entropy (F15), Difference Variance (F17), Difference Entropy (F18) and Information Measure of Correlation (F19). An example of the GLCM graphical representation for lung regions is shown in Fig. 5; for better perception, the GLCM is depicted with inverted values.

Fig. 5. (a) lung regions; (b) GLCM representation of the left lung; (c) GLCM representation of the right lung.

In order to obtain better classification results, GLCM matrices were generated for different offsets (from 2 to 4 pixels) and angles (0°, 45°, 90°, and 135°).
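
In practice, the co-occurrence matrices and several of the listed features can be obtained with scikit-image, as in the sketch below; only a subset of the 18 features is shown, the remaining ones are computed analogously from the normalized matrix, and the symmetric matrix and the averaging over offsets and angles are assumptions of this sketch.

import numpy as np
from skimage.feature import graycomatrix, graycoprops


def glcm_features(lung_region):
    # lung_region: 2D uint8 array of a segmented lung.
    distances = [2, 3, 4]                                      # offsets of 2-4 pixels
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]          # 0, 45, 90, 135 degrees
    glcm = graycomatrix(lung_region, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)

    feats = {}
    # Features available directly from scikit-image, averaged over offsets/angles.
    for prop in ("energy", "contrast", "correlation", "homogeneity", "dissimilarity"):
        feats[prop] = graycoprops(glcm, prop).mean()

    # Examples of features computed manually from the normalized GLCM.
    p = glcm.mean(axis=(2, 3))                                 # average matrix over offsets/angles
    feats["entropy"] = -np.sum(p * np.log2(p + 1e-12))
    feats["maximum_probability"] = p.max()
    return feats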

2.4. CXR Image Classification

A classifier is used to detect abnormal CXR images (images with pathology). In order to conduct image classification, we use a probabilistic neural network. PNNs are relatively insensitive to outliers and generate accurately predicted target probability scores.
A PNN is a feed-forward neural network based on Bayesian rules and kernel Fisher discriminant analysis. The Bayes rule for class A is defined as follows:

P_A C_A f_A(X) > P_B C_B f_B(X)    (2)

where P_A is the a priori probability of occurrence of a pattern in class A, C_A is the cost function, and f_A(X) is the probability density function (PDF) for class A.
The PNN consists of 4 layers: input layer, pattern layer, summation layer, and decision layer. In the software implementation, in order to distinguish abnormal from normal CXR images, the following generalized form of the layers was used:

• Input: Read input units x(p), p=1, 2, …, P and connect them to all pattern units.
• Pattern: Generate pattern unit Z_P with weight vector W_P = x(p) (Z_P is either a Z_A or a Z_B unit).
• Summation: If x(p) belongs to class A, connect pattern unit Z_P to summation unit S_A, otherwise to unit S_B. The weight V_B used by the summation unit for class B is calculated according to Eq. (3), where m_A and m_B are the numbers of training patterns in classes A and B.

V_B = -\frac{P_B C_B}{P_A C_A} \cdot \frac{m_A}{m_B}    (3)

• Decision: The input vector is classified as class A if the total input to the decision unit is positive.

The most important advantage of the PNN is its training speed. Weights are not “trained” but assigned. Existing weights are never altered; only new vectors are inserted into the weight matrices during training. Since training can be implemented by matrix manipulation, PNN performance is very high [23].
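
The following sketch shows a minimal two-class PNN of this kind, assuming a Gaussian kernel in the pattern layer and equal priors and costs; the smoothing parameter sigma and all variable names are illustrative and not taken from the paper.

import numpy as np


class PNN:
    def __init__(self, sigma=0.1):
        self.sigma = sigma                       # kernel smoothing parameter (illustrative)
        self.patterns = {}                       # class label -> stored pattern-unit weights

    def fit(self, X, y):
        # Pattern layer: each training vector is assigned, not trained, as the
        # weight vector of one pattern unit.
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        for label in np.unique(y):
            self.patterns[label] = X[y == label]
        return self

    def _class_score(self, x, label):
        # Summation layer: Parzen-window estimate of the class-conditional density.
        Z = self.patterns[label]
        d2 = np.sum((Z - x) ** 2, axis=1)
        return np.mean(np.exp(-d2 / (2.0 * self.sigma ** 2)))

    def predict(self, X):
        # Decision layer: choose the class with the larger summed activation
        # (equal priors and costs assumed).
        labels = list(self.patterns)
        return np.array([max(labels, key=lambda c: self._class_score(x, c))
                         for x in np.asarray(X, dtype=float)])


# Usage sketch (hypothetical names): features is an (n_samples, 18) array of GLCM
# features and labels is 0 for normal, 1 for abnormal CXR images.
# clf = PNN(sigma=0.1).fit(features, labels)
# predictions = clf.predict(test_features)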

3. Experimental research

The experimental research was based on two CXR datasets: Montgomery County (MC) set and Shenzhen
Hospital (SH) set [24]. The MC set has been collected in collaboration with the Department of Health and Human
Services, Montgomery County, Maryland, USA. The set contains 138 frontal chest X-rays from Montgomery
County’s Tuberculosis screening program, and there are 80 normal cases and 58 cases with manifestations of
tuberculosis. The SH set was collected in collaboration with Shenzhen No.3 People’s Hospital and Guangdong
Medical College, China. The set contains 662 frontal chest X-rays. There are 326 normal cases and 336 cases with
manifestations of TB. The X-rays are provided in PNG and DICOM formats. The sizes of the X-rays are either 4020×4892 or 4892×4020 pixels for the MC set and from 1250×1136 to 3000×2989 pixels for the SH set.
The reliability and correctness of lung boundary detection have been evaluated by the following metrics: figure of merit (FOM), Jaccard similarity coefficient (Ω), Dice similarity coefficient (DSC), sensitivity, average contour distance (ACD) and accuracy.
These metrics mostly depend on the values of TP, TN, FP, FN and RE_Cnt, where TP is the number of pixels correctly identified as lung region, TN is the number of pixels correctly detected as background, FP is the number of pixels falsely identified as lung region, and FN is the number of pixels falsely detected as background. The reference edge count (RE_Cnt) represents the number of edge pixels in the reference map created by an expert.

Pratt's Figure of Merit (FOM) represents the level of similarity of two contours [17] and is defined as:

FOM = \frac{1}{\max(RE_{Cnt}, AE_{Cnt})} \sum_{i=1}^{AE_{Cnt}} \frac{1}{1 + \alpha \, d_i^2}    (4)

where RE_Cnt is the number of ideal (reference) edge points, AE_Cnt is the number of actual (computed) edge points, d_i is the distance between the computed edge pixel and the nearest reference edge pixel, and α is an empirical calibration constant (α = 1/9 in our case, the optimal value established by Pratt).
The Jaccard similarity coefficient is an overlap metric. It quantifies the overlapping area between the algorithm's segmentation and the reference boundaries. The Jaccard similarity coefficient is defined as:

\Omega = \frac{TP}{TP + FP + FN}    (5)

The other overlapping measure is the Dice similarity coefficient, formulated as follows:

DSC = \frac{2 \cdot TP}{2 \cdot TP + FP + FN}    (6)

Both measures have a value between 0 and 1; 1 indicates fully overlapped segmentation. The classification value of each pixel has the same impact on the computation regardless of its distance to the reference border. Therefore, overlapping metrics alone are not sufficient to evaluate the performance of a region detection algorithm. Researchers use distance-based metrics such as the average contour distance (ACD) to quantify the distance between the reference lung boundary and the estimated boundary. ACD measures the minimum distance of each point on the boundary S to the contour R. Let s_i, i=1,…,n_S and r_j, j=1,…,n_R be the points on the algorithm's detected boundary S and the reference boundary R, respectively. The minimum distance of point s_i on S to R is defined as d(s_i, R) = min_j ||r_j - s_i||. Then ACD is computed as follows:

ACD(S, R) = \frac{1}{2} \left( \frac{\sum_i d(s_i, R)}{n_S} + \frac{\sum_j d(r_j, S)}{n_R} \right)    (7)
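The sketch below shows how the metrics of Eqs. (4)-(7) can be computed from binary lung masks and boundary point sets; the brute-force pairwise-distance computation is for illustration only and is not an optimized implementation.

import numpy as np


def overlap_metrics(pred_mask, ref_mask):
    # Jaccard (Eq. 5) and Dice (Eq. 6) from boolean masks of the lung region.
    tp = np.logical_and(pred_mask, ref_mask).sum()
    fp = np.logical_and(pred_mask, ~ref_mask).sum()
    fn = np.logical_and(~pred_mask, ref_mask).sum()
    jaccard = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return jaccard, dice


def acd(S, R):
    # Average contour distance (Eq. 7); S, R are (n, 2) arrays of boundary points.
    d = np.linalg.norm(S[:, None, :] - R[None, :, :], axis=2)   # all pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())


def pratt_fom(actual_edges, ref_edges, alpha=1.0 / 9.0):
    # Pratt's figure of merit (Eq. 4); inputs are (n, 2) arrays of edge pixel coordinates.
    d = np.linalg.norm(actual_edges[:, None, :] - ref_edges[None, :, :], axis=2)
    di = d.min(axis=1)                                           # distance to nearest reference edge pixel
    return np.sum(1.0 / (1.0 + alpha * di ** 2)) / max(len(actual_edges), len(ref_edges))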

Table 1 demonstrates statistical data (minimum, maximum, median and average values with standard deviation) of the quantitative estimates of the detected lung boundaries for all described metrics. The estimates are calculated for the joint dataset (MC and SH datasets).

Table 1. Statistical data of quantitative estimates of lungs boundaries detection for proposed method.
Estimate                      min      avg ± SD         med      max
Jaccard similarity            0.882    0.915 ± 0.016    0.917    0.945
Sensitivity                   0.940    0.964 ± 0.012    0.967    0.988
Accuracy                      0.950    0.965 ± 0.007    0.964    0.977
Dice similarity coefficient   0.937    0.955 ± 0.015    0.957    0.971
Average contour distance      1.247    1.642 ± 0.323    1.523    2.551
Figure of merit               0.925    0.955 ± 0.011    0.957    0.976

A visual representation of the detected lung boundaries is presented in Fig. 6 for a series of images with and without pathologies, where the red line shows the lung boundaries obtained by the proposed method and the green line is the ground truth.

Fig. 6. (a) detected lung regions in CXR images without pathology; (b) detected lung regions in CXR images with pathology.

A comparative analysis of the proposed method and known methods is based on the Jaccard similarity coefficient, Dice similarity coefficient and average contour distance. The results are presented in Table 2.

Table 2. Quantitative comparison of lung boundary detection methods.
Authors, citation          Dataset     Jaccard similarity   Dice similarity coefficient   Average contour distance
Candemir, S., Jaeger [2]   JSRT        0.954 ± 0.015        0.967 ± 0.008                 1.321 ± 0.316
                           MC          0.941 ± 0.034        0.960 ± 0.018                 1.599 ± 0.742
                           India       0.917 ± 0.048        0.947 ± 0.025                 2.567 ± 1.454
Li, X. [6]                 JSRT        0.916 ± 0.024        -                             -
Jaeger, S. [7]             JSRT        0.901 ± 0.054        -                             -
Saad, M. N. [8]            Local CXR   0.805 ± 0.004        -                             -
Yang et al. [9]            JSRT        0.952 ± 0.018        0.975 ± 0.010                 1.37 ± 0.670
Proposed method            SH          0.921 ± 0.014        0.958 ± 0.009                 1.515 ± 0.310
                           MC          0.910 ± 0.014        0.952 ± 0.008                 1.635 ± 0.323

In order to verify the accuracy of the PNN classifier, training was performed on a joint set of CXR images from the MC and SH datasets. The training set contains 40 CXR images with pathologies and 40 CXR images without pathologies. The PNN classification accuracy was verified separately on each set of CXR images. The results and description are shown in Table 3.

Table 3. PNN classifier accuracy.
                            MC Dataset                                                    SH Dataset
Images                      Count   Correctly classified, %   Incorrectly classified, %   Count   Correctly classified, %   Incorrectly classified, %
Images without pathology    75      95.94                     4.06                        99      96.89                     3.11
Images with pathology       50      94.98                     5.02                        99      95.77                     4.23
Total                       125     95.93                     4.11                        198     96.73                     3.15

Experimental studies have shown that the average classification accuracy is 96%. As can be noted from Table 3, the accuracy for the SH set is higher than for the MC set. The misclassification rate for abnormal CXR images is about 4-5%.

4. Conclusions

In this research, we have developed a methodology which allows building an automated system for lung boundary detection and classification of CXR images. The proposed methodology consists of three key steps. Firstly, CXR image enhancement by noise reduction and contrast adjustment is conducted. Secondly, the lung regions are detected. Lastly, we compute a set of features of the enhanced CXR image and use them as input to the PNN binary classifier, which classifies the given image as normal or abnormal. The proposed method of lung region detection gives accuracy that is on par with other methods according to the Jaccard and Dice similarity coefficients. The mean values of these metrics are 0.915 and 0.955, respectively. The average classification accuracy is from 94.98% to 95.77% and depends on the dataset. The misclassification rate for abnormal CXR images (4-5%) is still too high for full automation. However, a system based on the proposed methods can be used as a decision support system for medical specialists.

References

[1] Wan Ahmad, Wan Siti Halimatul Munirah, Wan Mimi Diyana Wan Zaki, Mohammad Faizal Ahmad Fauzi, and Wooi Haw Tan (2016)
“Classification of Infection and Fluid Regions in Chest X-Ray Images.” IEEE International Conference on Digital Image Computing:
Techniques and Applications (DICTA): 1–5.
[2] Candemir, Sema, Stefan Jaeger, Kannappan Palaniappan, Jonathan P. Musco, Rahul K. Singh, Zhiyun Xue, Alexandros Karargyris, Sameer
Antani, George Thoma, and Clement J. McDonald (2014) “Lung segmentation in chest radiographs using anatomical atlases with nonrigid
registration.” IEEE transactions on medical imaging 33 (2): 577–590.
[3] Carrascal, Francisco M., José M. Carreira, Miguel Souto, Pablo G. Tahoces, Lorenzo Gómez, and Juan J. Vidal. (1998) “Automatic
calculation of total lung capacity from automatically traced lung boundaries in postero‐anterior and lateral digital chest radiographs.” Medical
physics 25 (7): 1118–1131.
[4] Qin, Chunli, Demin Yao, Yonghong Shi, and Zhijian Song. (2018) “Computer-aided detection in chest radiography based on artificial
intelligence: a survey.” Biomed Eng Online. 17 (1):113. doi:10.1186/s12938-018-0544-y.
[5] Coppini, Giuseppe, Massimo Miniati, Simonetta Monti, Marco Paterni, Riccardo Favilla, and Ezio Maria Ferdeghini. (2013) “A computer-
aided diagnosis approach for emphysema recognition in chest radiography.” Medical engineering & physics 35 (1): 63–73.
[6] Li, Xin, Leiting Chen, and Junyu Chen. (2017). “A visual saliency-based method for automatic lung regions extraction in chest radiographs.”
IEEE 14th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP): 162-165.
[7] Jaeger, Stefan, Alexandros Karargyris, Sema Candemir, Les Folio, Jenifer Siegelman, Fiona Callaghan, Zhiyun Xue, Kannappan Palaniappan,
Rahul K. Singh, Sameer Antani, George Thoma, Yi-Xiang Wang, Pu-Xuan Lu, and Clement J. McDonald. (2014) “Automatic tuberculosis
screening using chest radiographs.” IEEE transactions on medical imaging 33 (2): 233–245.
[8] Saad, Mohd Nizam, Zurina Muda, Noraidah Sahari Ashaari, and Hamzaini Abdul Hamid. (2014) “Image segmentation for lung region in
chest X-ray images using edge detection and morphology.” IEEE International Conference on Control System, Computing and Engineering
(ICCSCE 2014): 46–51.
[9] Yang, Wei, Yunbi Liu, Liyan Lin, Zhaoqiang Yun, Zhentai Lu, Qianjin Feng, and Wufan Chen. (2018) “Lung field segmentation in chest
radiographs from boundary maps by a structured edge detector”. IEEE J Biomed Health Inform 22 (3): 842–851.
[10] Annangi, Pavan, Sheshadri Thiruvenkadam, A. Raja, Hao Xu, Xiwen Sun, and Ling Mao. (2010) “A region based active contour method for
X-ray lung segmentation using prior shape and low level features.” IEEE International Symposium on Biomedical Imaging: From Nano to
Macro : 892–895.
[11] Xu, Tao, Mrinal Mandal, Richard Long, Irene Cheng, and Anup Basu. (2012) “An edge-region force guided active shape approach for
automatic lung field detection in chest radiographs.” Computerized Medical Imaging and Graphics 36 (6) : 452–463.
[12] Jaeger, Stefan, Alexandros Karargyris, Sameer Antani, and George Thoma. (2012) “Detecting tuberculosis in radiographs using combined
lung masks.” Proc. 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society: 4978–4981.
[13] Suzuki, Kenji (ed.). (2011) Artificial neural networks-Methodological advances and biomedical applications, InTech, Croatia
[14] Azar, Ahmad Taher, and Shaimaa Ahmed El-Said. (2013) “Probabilistic neural network for breast cancer classification.” Neural Computing
and Applications 23 (6): 1737–1751.
[15] Wang, Chunliang. (2017) “Segmentation of multiple structures in chest radiographs using multi-task fully convolutional networks.”
In Scandinavian Conference on Image Analysis: 282–289. Springer, Cham.
[16] Kieu, Phat Nguyen, Hai Tran, Thai Hoang Le, Tuan Le, and Thuy Thanh Nguyen. (2018). “Applying Multi-CNNs model for detecting
abnormal problem on chest x-ray images.” IEEE 10th International Conference on Knowledge and Systems Engineering (KSE): 300–305.
[17] Akram, Sheeraz, Muhammad Younus Javed, Usman Qamar, Aasia Khanum, and Ali Hassan. “Artificial Neural Network based Classification
of Lungs Nodule using Hybrid Features from Computerized Tomographic Images.” Appl. Math. Inf. Sci. 9 (1): 183–195.
[18] Cadena, Luis, Alexander Zotin, and Franklin Cadena. (2018) “Enhancement of Medical Image using Spatial Optimized Filters and OpenMP
Technology.” Lecture Notes in Engineering and Computer Science: Proceedings of The International MultiConference of Engineers and
Computer Scientists 2018, 324–329.


[19] Zotin, Alexander, Konstantin Simonov, Mikhail Kurako, Yousif Hamad, Svetlana Kirillova. (2018) “Edge detection in MRI brain tumor
images based on fuzzy C-means clustering.” Procedia Computer Science 126: 1261–1270.
[20] Kumbhar, Uday, Vishal Patil, and Shekhar Rudrakshi. (2013) “Enhancement of Medical Images Using Image Processing In Matlab.”
International Journal of Engineering Research and Technology 2 (4): 2359–2364.
[21] Mark de Berg, Otfried Cheong, Marc van Kreveld, and Mark Overmars. (2008) “Computational Geometry: Algorithms and Applications.”
Springer Berlin Heidelberg.
[22] Gómez, Wilfrido, Wagner Coelho Albuquerque Pereira, and Antonio Fernando Catelli Infantosi. (2012) “Analysis of co-occurrence texture
statistics as a function of gray-level quantization for classifying breast ultrasound.” IEEE transactions on medical imaging 31 (10): 1889–
1899.
[23] Nandhagopal, Narayanasamy, K. Rajiv Gandhi, and R. Sivasubramanian. (2015) “Probabilistic Neural Network Based Brain Tumor
Detection and Classification System.” Research Journal of Applied Sciences, Engineering and Technology 10 (12): 1347–1357.
[24] Jaeger, Stefan, Sema Candemir, Sameer Antani, Yì-Xiáng J. Wáng, Pu-Xuan Lu, and George Thoma. (2014) “Two public chest X-ray
datasets for computer-aided screening of pulmonary diseases.” Quantitative imaging in medicine and surgery 4 (6): 475–477.
