Multi-scale Ternary and Septenary Patterns for Texture Classification

Rachdi Elmokhtar (Department of Physics, Faculty of Sciences, Ibn Tofail University, Kenitra 14000, Morocco)
Issam El Khadiri (Department of Algorithms and Their Applications, Eötvös Loránd University, Budapest, Hungary)
Youssef El Merabet (Department of Physics, Ibn Tofail University, Kenitra, Morocco)
Cyril Meurie (Univ Gustave Eiffel, COSYS-LEOST, F-59650 Villeneuve d'Ascq, France)
Abstract—This paper proposes, inspired by local binary patterns (LBP) and its variants, a novel local texture operator for texture modeling and classification, referred to as the Multi-scale Ternary and Septenary Pattern (MTSP). MTSP is a histogram-based feature representation built from two single-scale encoders, STP and SSP (single-scale ternary and septenary patterns, respectively). STP and SSP rely on a new set-theory-based pattern encoding technique that combines the concepts of the LTP and LQP texture descriptors. The main idea behind STP and SSP is to compute several virtual pixels from different local and global image statistics and to progressively encode both local and non-local pixel interactions by analyzing the differential excitation and direction information derived from the relationships between pixels sampled at different locations. The histograms obtained by the STP and SSP methods are then concatenated to form the final MTSP feature vector. Experiments show that MTSP offers better performance stability across nine texture datasets than many recent state-of-the-art texture approaches.

Index Terms—Texture recognition, texture descriptors, LTP, LQP, LBP, directional topologies

I. INTRODUCTION

In the field of texture analysis, texture classification has long been regarded as a challenging problem. It plays a very important role in many applications, such as image classification, facial classification, object recognition, gender classification, etc. However, textures in the real world vary in rotation, illumination, scale, and affine conditions when imaging conditions change, so extracting robust characteristics for texture modeling remains a difficulty for texture analysis. Many improved methodologies for texture analysis have been developed in the literature over the years, with thorough surveys available in [Liu et al., 2019]. Local feature extraction approaches, in particular, have been extensively designed and applied in texture analysis over the last decades. The main benefits of these local feature descriptors are that they are easy to implement and do not need a lot of training data [Bhattacharjee and Roy, 2019].

Among local feature extraction techniques, local binary patterns (LBP), established by [Ojala et al., 1996], is one of the most prominent texture descriptors. Researchers regard LBP as a useful texture descriptor owing to its simplicity, good invariance to monotonic gray-level changes, and applicability to real-time applications because of its low computational cost. Despite its origins in texture modeling and classification, the LBP approach has proven useful in a wide range of applications, including medical and biomedical image analysis, motion detection, image retrieval, face and facial description and identification, background removal, and more. Yet, the basic LBP descriptor has several drawbacks [El Merabet and Ruichek, 2018]. Therefore, in the past few years, many LBP-like methods have been proposed to get around these problems and improve texture classification performance. Indeed, many dense LBP-based feature extraction methods continue to be designed today, such as the Global refined local binary pattern (GRLBP) [Shu et al., 2022], Locally encoded transform feature histogram for rotation-invariant classification (LETRIST) [Song et al., 2017], Petersen graph multi-orientation based multi-scale ternary pattern (PGMO-MSTP) [El Khadiri et al., 2021], Circumferential local ternary pattern (CLTP) [Zheng et al., 2022], Local ternary pattern based multi-directional guided mixed mask (MDGMM-LTP) [El Khadiri et al., 2022], Oriented star sampling structure based multi-scale ternary pattern (O3S-MTP) [El Khadiri et al., 2020], and Directional neighborhood topologies based multi-scale quinary pattern (DNT-MQP) [Rachdi et al., 2020].

Even though LBP and its modifications achieve excellent performance, different ways of improving the discriminative strength of an image representation are still needed so that texture can be modeled more efficiently. Therefore, in this paper we develop a conceptually and computationally simple yet powerful texture operator, named multi-scale ternary and septenary patterns (MTSP), for image texture understanding and analysis, to better address the limitations of local feature descriptors. The MTSP technique computes a feature representation by utilizing distinct neighborhood topologies to gather complete spatial information from neighboring pixels in various directions and blocks, and describes the spatial connection and appearance of a particular pixel intensity. The MTSP operator is made up of two single-scale descriptors, the STP (Single-scale Ternary Pattern) and SSP (Single-scale Septenary Pattern) operators. A compact encoding scheme based on set theory, which combines LQP- and LTP-like texture methods to capture more useful texture information, is used to obtain the feature maps.

The rest of this paper is organized as follows. In Section II, the basic local binary patterns (LBP) method is briefly reviewed. In Section III, the proposed MTSP texture descriptor is explained in detail. Section IV provides comprehensive experimentation and a comparative evaluation. Section V provides a summary of the findings and some suggestions for further research.

II. BRIEF REVIEW OF BASIC LOCAL BINARY PATTERNS

The well-known texture operator LBP was first developed by Ojala et al. [Ojala et al., 1996], and it has since been shown to be a very efficient and computationally straightforward texture descriptor for monochromatic images, as illustrated in Figure 1. The LBP code is calculated for each pixel in the input image by comparing its intensity value to the intensities of its neighboring pixels in each 3×3 gray-scale image patch. In formal terms, the LBP label of a pixel a_c in the center of a 3×3 grid is formed using the kernel function LBP(.) (cf. Eq. 1):

    LBP(a_c) = Σ_{p=0}^{P−1} ψ(a_p − a_c) 2^p                                   (1)

where a_c is the central pixel, a_{p; p∈{0,1,...,P−1}} are its neighboring pixels and P corresponds to the number of neighboring pixels. ψ(.) is the Heaviside step function defined as follows:

    ψ(x) = 1 if x ≥ 0, and 0 otherwise                                          (2)

Fig. 1. Standard procedures for extracting LBP-like features (an example 3×3 patch, its thresholded binary code, and the resulting LBP histogram).
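To make the operator concrete, the following minimal NumPy sketch (our illustration, not the authors' code) computes Eq. (1) for every interior pixel of a grayscale image; the particular neighbor ordering a_0, ..., a_7 is an assumption, since any fixed ordering yields a valid 8-bit LBP code.

```python
import numpy as np

def lbp_3x3(img):
    """Basic LBP of Eq. (1): threshold the 8 neighbors of each interior
    pixel against the center and pack the results into an 8-bit code."""
    img = img.astype(np.int32)
    # Offsets (dy, dx) giving one fixed ordering a_0..a_7 of the 3x3 ring
    # (an assumption; the paper only requires a consistent ordering).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center)
    for p, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx]
        # psi(a_p - a_c) * 2^p, accumulated over the 8 neighbors
        code += ((neighbor - center) >= 0).astype(np.int32) << p
    return code

# Usage: hist = np.bincount(lbp_3x3(image).ravel(), minlength=256)
```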
III. MULTI-SCALE TERNARY AND SEPTENARY
PATTERNS (MTSP)
By combining and integrating the principles of LTP-like and LQP-like texture operators into the same compact encoding method, MTSP gains greater accuracy in texture modeling, leading to more promising results. The essence of MTSP is to locally sample and encode patterns along the most relevant directions of texture images. The MTSP descriptor is built by the following procedures.

A. Pattern sampling

The neighborhood topologies employed in MTSP take advantage of several intriguing aspects that may aid performance enhancement, based on the following steps:
• take more pixels in the neighborhood to capture multi-scale objects;
• use blocks of pixels in several directions;
• use the average values and the median values of the surrounding blocks.

Several mean and median values (see the formulae presented below) are integrated as virtual pixels in the modeling of the proposed texture patterns in order to increase the tolerance of the threshold ranges and obtain a code that is insensitive to noise and more robust to illumination fluctuations. To do this, we examine the effects of different neighborhood layouts by direction and by block, as shown in Figures 2 and 3. From these figures, we obtain the following equations.

Fig. 2. Semantic representation of the different template-directional neighborhood topologies.

Fig. 3. Semantic representation of the different template-block neighborhood topologies.

• Directions kπ/4:

    D_k = (a_k + a_c + a_{k+4}) / 3                                             (3)

    D̃_k = (a_k + 2a_c + a_{k+4}) / 4                                            (4)

where D_k and D̃_k represent, respectively, the average of (a_k, a_c, a_{k+4}) and the average of the pairs (a_k, a_c), (a_c, a_{k+4}), with k ∈ {0, 1, 2, 3}, as presented in Figure 2. Accordingly,

    U = (a_0 + a_4 + 2a_c + a_2 + a_6) / 6                                      (5)

    Ũ = (a_1 + a_5 + 2a_c + a_3 + a_7) / 6                                      (6)

where U and Ũ represent, respectively, the average value of the intensities of the pixels in the directions π/2 and 0 including the central pixel, and the average value of the intensities of the pixels in the directions 3π/4 and π/4 including the central pixel, as presented in Figure 3.

• Blocks delimited by the angles (kπ/2, a_c, (2k+1)π/2):

    B_k = (a_{2k} + 2a_{2k+1} + a_{2k+2}) / 4                                   (7)

    B̃_k = (a_{2k} + 2a_c + a_{2k+2}) / 4                                        (8)

where B_k and B̃_k represent, respectively, the mean of the pairs (a_{2k}, a_{2k+1}), (a_{2k+1}, a_{2k+2}) and the mean of the pairs (a_{2k}, a_c), (a_c, a_{2k+2}), with k ∈ {0, 1, 2, 3}.

Then, we consider (D_k, D̃_k) and (B_k, B̃_k) as virtual neighboring pixels of the central pixel a_c, and compute their local averages and medians as given by the following equations:

    m_D = (1/9) (a_c + Σ_{k=0}^{3} (D_k + D̃_k))                                 (9)

    m_B = (1/9) (a_c + Σ_{k=0}^{3} (B_k + B̃_k))                                 (10)

    m_I = (1/(M×N)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} a_{i,j}                         (11)

    m̃_D = median(D)                                                             (12)

    m̃_B = median(B)                                                             (13)

    m̃_I = median(I_{M×N})                                                       (14)

where M×N is the size of the input image I. Then, by considering the virtual pixels and the neighboring pixels, as well as their median and average values, we construct six sampling sets denoted as F_{i; i∈{1,...,6}} (cf. Eqs. (15) to (20)):

    F_1 = {D_0, D_1, a_4, a_5, a_6, a_7}                                        (15)

    F_2 = {D_2, D_3, a_0, a_1, a_2, a_3}                                        (16)

    F_3 = {m_I, U, a_1, a_3, a_5, a_7}                                          (17)

    F_4 = {m̃_I, Ũ, B̃_{k; k∈{0,...,3}}}                                          (18)

    F_5 = {min(m_D, m_B)/2, min(m̃_D, m̃_B)/2, min(D̃_k, B̃_k)_{k∈{0,...,3}}}      (19)

    F_6 = {max(m_D, m_B)/2, max(m̃_D, m̃_B)/2, max(D̃_k, B̃_k)_{k∈{0,...,3}}}      (20)
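To make the sampling step concrete, the sketch below (ours, not the authors' implementation) evaluates Eqs. (3)-(20) for a single 3×3 patch. The counter-clockwise ordering of the neighbors a_0, ..., a_7 around a_c and the wrap-around convention a_8 = a_0 are assumptions read off Figures 2 and 3; the global statistics m_I and m̃_I (Eqs. (11) and (14)) are computed once per image and passed in.

```python
import numpy as np

def sampling_sets(patch, m_I, med_I):
    """Virtual pixels and sampling sets F1..F6 (Eqs. 3-20) for one 3x3 patch.

    `patch` is a 3x3 array; a_0..a_7 are taken counter-clockwise around the
    center a_c starting at its right-hand neighbor (assumed ordering)."""
    a_c = float(patch[1, 1])
    ring = [(1, 2), (0, 2), (0, 1), (0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
    a = [float(patch[i, j]) for i, j in ring]                       # a_0 ... a_7

    D  = [(a[k] + a_c + a[k + 4]) / 3 for k in range(4)]                  # Eq. (3)
    Dt = [(a[k] + 2 * a_c + a[k + 4]) / 4 for k in range(4)]              # Eq. (4)
    U  = (a[0] + a[4] + 2 * a_c + a[2] + a[6]) / 6                        # Eq. (5)
    Ut = (a[1] + a[5] + 2 * a_c + a[3] + a[7]) / 6                        # Eq. (6)
    B  = [(a[2 * k] + 2 * a[2 * k + 1] + a[(2 * k + 2) % 8]) / 4
          for k in range(4)]                                              # Eq. (7)
    Bt = [(a[2 * k] + 2 * a_c + a[(2 * k + 2) % 8]) / 4 for k in range(4)]  # Eq. (8)

    m_D = (a_c + sum(D) + sum(Dt)) / 9                                    # Eq. (9)
    m_B = (a_c + sum(B) + sum(Bt)) / 9                                    # Eq. (10)
    med_D, med_B = float(np.median(D)), float(np.median(B))               # Eqs. (12)-(13)

    F1 = [D[0], D[1], a[4], a[5], a[6], a[7]]                             # Eq. (15)
    F2 = [D[2], D[3], a[0], a[1], a[2], a[3]]                             # Eq. (16)
    F3 = [m_I, U, a[1], a[3], a[5], a[7]]                                 # Eq. (17)
    F4 = [med_I, Ut] + Bt                                                 # Eq. (18)
    F5 = [min(m_D, m_B) / 2, min(med_D, med_B) / 2] + \
         [min(Dt[k], Bt[k]) for k in range(4)]                            # Eq. (19)
    F6 = [max(m_D, m_B) / 2, max(med_D, med_B) / 2] + \
         [max(Dt[k], Bt[k]) for k in range(4)]                            # Eq. (20)
    return a_c, F1, F2, F3, F4, F5, F6
```

In practice, m_I = image.mean() and med_I = np.median(image) would be computed once before scanning the patches.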
First of all, in order to establish the relationships between the different sets of pixels, we use set theory and, in particular, Venn diagrams, which provide a diagrammatic representation of all possible relationships between different sets of a finite number of elements. Let us consider the following definitions.

Definition 1. A set is an unordered collection of objects, called members or elements of the set. A real interval is a nonempty set of real numbers A = [a, b] = {x | a ≤ x ≤ b}, where a is called the infimum and b is called the supremum.

Definition 2. Let A and B be two sets. The set containing the elements that belong to both A and B is the intersection of the sets A and B, denoted by A ∩ B.

Definition 3. Let E be a set and A a subset of E. The complement of A in E is the set {x | x ∈ E and x ∉ A}. We denote it C_E A, A^C, E \ A, or Ā.

To visualize set operations, we define two kinds of Venn diagrams, i.e., Venn diagrams in upper and lower modes, as schematized in Figures 4 and 5, respectively.

Fig. 4. A schematic image of the upper Venn diagram mode.

Fig. 5. A schematic image of the lower Venn diagram mode.

Given the six sampling sets F_k, k ∈ {1, ..., 6}, defined above (cf. Eqs. (15) to (20)), an ensemble of sets of pixel relations is constructed based on three dynamic threshold values τ_1, τ_2 and τ_3, according to the upper and lower Venn diagram modes and the three definitions given above (Definitions 1 to 3), as indicated in the following equations (cf. Eqs. (21) to (33)):

    A^U_1(x) = {x ∈ F_1 | x ≥ a_c − τ_1}                                        (21)

    A^U_2(y) = {y ∈ F_2 | y ≥ a_c + τ_1}                                        (22)

    A^U(x, y) = A^U_1(x) ∩ A^U_2(y)                                             (23)

    A^L_1(x) = {x ∈ F_1 | x ≤ a_c + τ_1}                                        (24)

    A^L_2(y) = {y ∈ F_2 | y ≤ a_c − τ_1}                                        (25)

with, analogously to Eq. (23), A^L(x, y) = A^L_1(x) ∩ A^L_2(y). Furthermore,

    A^U_3(x) = {x ∈ F_3 | x ≥ a_c + τ_2}                                        (26)

    A^U_4(y) = {y ∈ F_4 | y ≥ a_c − τ_3}                                        (27)

    A^U_5(z) = {z ∈ F_5 | z ≥ a_c + τ_1}                                        (28)

    A^U(x, y, z) = A^U_3(x) ∩ A^U_4(y) ∩ A^U_5(z)                               (29)

    A^L_3(x) = {x ∈ F_3 | x ≤ a_c − τ_2}                                        (30)

    A^L_4(y) = {y ∈ F_4 | y ≤ a_c + τ_3}                                        (31)

    A^L_5(z) = {z ∈ F_6 | z ≤ a_c − τ_1}                                        (32)

    A^L(x, y, z) = A^L_3(x) ∩ A^L_4(y) ∩ A^L_5(z)                               (33)
The local texture relationship between the central pixel and each point within the considered six sampling sets F_k, k ∈ {1, ..., 6}, is encoded using three- and seven-valued coding schemes (hence the names ternary and septenary, respectively), according to the ensemble of sets of pixel relations based on the three dynamic threshold values τ_1, τ_2 and τ_3. To extract more precisely the micro-structural features contained in these interconnections, we developed a texture operator named Multi-scale Ternary and Septenary Pattern (MTSP), which is conceptually comprised of the Single-scale Ternary Pattern (STP) and the Single-scale Septenary Pattern (SSP), as specified below.
1) Single-scale Ternary Pattern (STP): Using a three-valued coding technique (i.e., the notion of LTP-like approaches), we created the Single-scale Ternary Pattern (STP) to encode the connection between the central pixel and the points of the two sampling sets F_1 and F_2. The indicator function φ(.) that transforms each pair relation into ternary form is defined as follows (cf. Eq. (34)):

    φ(α) = +1 if α ∈ A^U(x, y),
            0 if α ∈ A^U(x, y) ∩ A^L(x, y),                                     (34)
           −1 if α ∈ A^L(x, y).

The local information of the pixel a_c is then encoded with the STP encoder, expressed by Eqs. (35) and (36):

    STP_pattern(a_c) = Σ_{p=0}^{5} φ_pattern(a_p) × 2^p                         (35)

where

    φ_pattern(x) = 1 if φ(x) equals the value associated with pattern ∈ {1, 2},
                   0 otherwise,                                                 (36)

with pattern 1 and pattern 2 selecting the positive (φ = +1) and negative (φ = −1) components, respectively.
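As an illustration, the minimal sketch below (ours, not the authors' implementation) encodes one pixel with STP under one plausible reading of Eqs. (21)-(25) and (34)-(36): the p-th elements of F_1 and F_2 are taken as the pair (x, y) scored by φ, and each of the two non-zero ternary states yields one 6-bit code.

```python
def stp_codes(F1, F2, a_c, tau1):
    """Sketch of the STP encoder: F1 and F2 are the 6-element sampling sets,
    a_c the central pixel, tau1 the first dynamic threshold."""
    code_pos, code_neg = 0, 0
    for p, (x, y) in enumerate(zip(F1, F2)):
        in_upper = (x >= a_c - tau1) and (y >= a_c + tau1)   # A^U(x, y), Eqs. (21)-(23)
        in_lower = (x <= a_c + tau1) and (y <= a_c - tau1)   # A^L(x, y), Eqs. (24)-(25)
        if in_upper and not in_lower:
            phi = +1
        elif in_lower and not in_upper:
            phi = -1
        else:
            phi = 0                                          # Eq. (34)
        code_pos += (phi == +1) << p                         # pattern 1, Eqs. (35)-(36)
        code_neg += (phi == -1) << p                         # pattern 2
    return code_pos, code_neg
```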
2) Single-scale Septenary Pattern (SSP): In the same way that LQP extended the LTP algorithm to a five-valued encoding, we use a seven-valued encoding scheme based on the three dynamic threshold values designed in [Rachdi et al., 2020] in order to represent the relationships between the points of the remaining sampling sets F_k, k ∈ {3, ..., 6}, and the central pixel. Furthermore, this allows discriminant micro-structure information to be captured from the perspective of the established ensemble of sets of pixel relations (i.e., the Single-scale Septenary Pattern (SSP)). SSP employs the following indicator, denoted ψ(.) (cf. Eq. (37)):

    ψ(α) = +3 if α ∈ A^U_3(x) ∩ A^U_4(y) ∩ Ā^U(x, y, z),
           +2 if α ∈ A^U_4(x) ∩ A^U_5(y) ∩ Ā^U(x, y, z),
           +1 if α ∈ A^U_3(x) ∩ A^U_5(y) ∩ Ā^U(x, y, z),
           −1 if α ∈ A^L_3(x) ∩ A^L_5(y) ∩ Ā^L(x, y, z),                        (37)
           −2 if α ∈ A^L_4(x) ∩ A^L_5(y) ∩ Ā^L(x, y, z),
           −3 if α ∈ A^L_3(x) ∩ A^L_4(y) ∩ Ā^L(x, y, z),
            0 otherwise,

where Ā^U (resp. Ā^L) designates the complement of A^U (resp. A^L).

Using the SSP encoder below, the local information of the pixel a_c is encoded as follows:

    SSP_pattern(a_c) = Σ_{p=0}^{5} ψ_pattern(a_p) × 2^p                         (38)

where pattern ∈ {1, 2, 3, 4, 5, 6} indexes six binary patterns obtained by considering the upper-positive, middle-positive, lower-positive, upper-negative, middle-negative and lower-negative components, denoted ψ(+3), ψ(+2), ψ(+1), ψ(−1), ψ(−2) and ψ(−3), respectively. Note that SSP encodes an image into seven channels but yields six bit patterns. The image encoded by SSP is divided into six bit patterns under the following condition:

    ψ_pattern(x) = 1 if ψ(x) equals the value associated with pattern ∈ {1, ..., 6},
                   0 otherwise.                                                 (39)
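The sketch below (ours, not the authors' implementation) illustrates one plausible reading of Eqs. (26)-(33) and (37)-(39): position p pairs the p-th elements of F_3 and F_4 with the p-th element of F_5 (upper mode) or F_6 (lower mode), and each of the six non-zero states of ψ produces one 6-bit binary code.

```python
def ssp_codes(F3, F4, F5, F6, a_c, tau1, tau2, tau3):
    """Sketch of the SSP encoder: six 6-element sampling sets feed a
    seven-valued indicator psi, split into six binary pattern codes."""
    values = (+3, +2, +1, -1, -2, -3)
    codes = {v: 0 for v in values}
    for p in range(6):
        x, y = F3[p], F4[p]
        u3, u4, u5 = x >= a_c + tau2, y >= a_c - tau3, F5[p] >= a_c + tau1
        l3, l4, l5 = x <= a_c - tau2, y <= a_c + tau3, F6[p] <= a_c - tau1
        not_all_up = not (u3 and u4 and u5)      # complement of A^U(x, y, z)
        not_all_low = not (l3 and l4 and l5)     # complement of A^L(x, y, z)
        if   u3 and u4 and not_all_up:  psi = +3
        elif u4 and u5 and not_all_up:  psi = +2
        elif u3 and u5 and not_all_up:  psi = +1
        elif l3 and l5 and not_all_low: psi = -1
        elif l4 and l5 and not_all_low: psi = -2
        elif l3 and l4 and not_all_low: psi = -3
        else:                           psi = 0  # Eq. (37)
        if psi in codes:
            codes[psi] += 1 << p                 # Eqs. (38)-(39)
    return [codes[v] for v in values]            # six 6-bit codes, one per channel
```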
B. Features extraction

After encoding each pixel in the input texture image using the STP and SSP encoders, two feature maps are created. The following equations are used to convert the feature maps into the texture-representing histograms:

    h_STP(k) = Σ_{p=0}^{P−1} δ(STP(a_p), k)                                     (40)

    h_SSP(k) = Σ_{p=0}^{P−1} δ(SSP(a_p), k)                                     (41)

where the sums run over all P pixels a_p of the encoded image, k ∈ [0, 2^6] indexes the possible STP and SSP patterns, and δ(., .) denotes the Kronecker delta function defined as:

    δ(x, y) = 1 if x = y, and 0 otherwise                                       (42)

Since different texture features provide various levels of descriptive power, combining them into a single feature vector appears to be the most effective strategy for exploiting their complementary strengths. A new hybrid texture description model is therefore created via a multi-scale fusion operation in order to capture more prominent texture properties. Combining the STP and SSP operators yields the multi-scale ternary and septenary pattern (MTSP) (cf. Eq. (43)), which is considered more effective because it improves the discriminatory and expressive capabilities of both original operators:
    h(MTSP) = ⟨h_STP, h_SSP⟩                                                    (43)

where ⟨.⟩ is the concatenation operator applied to the two histograms h_STP and h_SSP.
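As an illustration of Eqs. (40)-(43), the short sketch below (ours) accumulates one histogram per code map produced by the STP and SSP encoders and concatenates them into the final MTSP vector; handling each pattern map separately and normalizing per map are our assumptions.

```python
import numpy as np

def mtsp_histogram(stp_maps, ssp_maps, n_bins=64):
    """Histograms of the STP and SSP code maps (Eqs. 40-42) concatenated
    into the MTSP feature vector (Eq. 43). Each code is a 6-bit integer,
    hence 2^6 = 64 bins per map."""
    feats = []
    for code_map in list(stp_maps) + list(ssp_maps):
        hist = np.bincount(np.asarray(code_map).ravel(), minlength=n_bins)
        feats.append(hist / max(hist.sum(), 1))   # per-map normalization (our choice)
    return np.concatenate(feats)                  # <h_STP, h_SSP>
```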
IV. EXPERIMENTAL RESULTS AND DISCUSSION
In this section, the proposed MTSP descriptor's efficiency and performance were tested on several publicly available texture and material datasets (cf. Fig. 6) and compared to 15 recent state-of-the-art feature extraction methods through a set of experiments. The experiments use the split-sample validation protocol, in which half of the images are selected randomly for training and the other half are used for testing. In addition, the classification task is carried out with the parameter-free nearest-neighbor classifier (1-NN) using the L1 city-block distance. Note that the classification tasks are repeated over 100 random splits to avoid any bias associated with the data partitioning, and the average results are used to measure the final accuracy. Table I lists the feature extraction methods tested and compared with the proposed descriptor.
N  | Texture descriptor                                                | Reference
1  | Local binary patterns (LBP)                                       | [Ojala et al., 1996]
2  | Local ternary patterns (LTP)                                      | [Tan and Triggs, 2010]
3  | Local quinary patterns (LQP)                                      | [Nanni et al., 2010]
4  | Improved local texture patterns (ILTP)                            | [Yang and Sun, 2011]
5  | Median robust extended local binary pattern (MRELBP)              | [Liu et al., 2016]
6  | Locally encoded transform feature histogram (LETRIST)             | [Song et al., 2017]
7  | Repulsive-and-attractive local binary gradient contours (RALBGC)  | [El Khadiri et al., 2018b]
8  | Local concave-and-convex micro-structure patterns (LCCMSP)        | [El Merabet and Ruichek, 2018]
9  | Local directional ternary pattern (LDTP)                          | [El Khadiri et al., 2018a]
10 | Mixed neighborhood topology cross decoded patterns (MNTCDP)       | [Kas et al., 2018]
11 | Improved local quinary patterns (ILQP)                            | [Armi and Fekri-Ershad, 2019]
12 | Attractive-and-repulsive center-symmetric LBP (ARCSLBP)           | [El Merabet and Ruichek, 2019]
13 | Directional neighborhood topologies based MQP (DNT-MQP)           | [Rachdi et al., 2020]
14 | Local triangular coded pattern (LTCP)                             | [Arya and Vimina, 2021]
15 | LTP based multi-directional guided mixed mask (MDGMM-LTP)         | [El Khadiri et al., 2022]

TABLE I. FEATURE EXTRACTION METHODS USED IN THIS EXPERIMENT.
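For reproducibility, the evaluation protocol described above can be sketched as follows with scikit-learn (our illustration; the function and variable names are ours, and per-class stratification of the random half/half splits is an assumption).

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.neighbors import KNeighborsClassifier

def average_accuracy(features, labels, n_splits=100, seed=0):
    """Split-sample protocol: 100 random half/half train/test splits,
    1-NN classification with the L1 (city-block) distance, accuracy
    averaged over the splits. `features` is an (n_samples, n_dims) array."""
    splitter = StratifiedShuffleSplit(n_splits=n_splits, train_size=0.5,
                                      random_state=seed)
    scores = []
    for train_idx, test_idx in splitter.split(features, labels):
        clf = KNeighborsClassifier(n_neighbors=1, metric="manhattan")
        clf.fit(features[train_idx], labels[train_idx])
        scores.append(clf.score(features[test_idx], labels[test_idx]))
    return 100.0 * np.mean(scores)
```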
A. Texture datasets

Extensive tests are conducted on nine popular datasets (BonnBTF, JerryWu, Brodatz, KTH-TIPS, KTH-TIPS2b, USPtex1.0, VisTex, MBT, and NewBarkTex) to validate MTSP's capabilities and performance stability. These datasets were chosen because they cover a representative range of factors, including the number of images and classes and the unique problems inherent to each dataset, as illustrated in Fig. 6.

Fig. 6. Image datasets used in this study.

B. Comparative Assessment of Performance

1) Experiment #1: Investigation of performance stability: The following conclusions may be drawn from Tables II and III, which summarize the average classification accuracy achieved by each evaluated technique and the corresponding ranking on each dataset:

1) The Single-scale Ternary Pattern (STP), the Single-scale Septenary Pattern (SSP), and their combination (MTSP) tend to achieve the highest and most stable discriminative accuracy of all the methods tested.

2) The proposed MTSP operator achieved a maximum classification accuracy of 100% across four datasets (i.e., BonnBTF, JerryWu, Brodatz, and KTH-TIPS), indicating that our technique was able to distinguish between all classes flawlessly and leaves no room for improvement there. Note that our method was the only one to reach 100% on four different datasets.

3) A further finding from Table II is that none of the evaluated feature extraction methods performs well over all the tested datasets. Our proposed method achieves the highest average accuracy on six of the nine datasets, indicating that MTSP is more stable and robust than all the tested methods. For the three remaining datasets, the proposed method keeps its strength by achieving an average accuracy that is competitive with the score of the top-ranked method on each dataset, as illustrated in Columns 6, 8, and 9 of Table II. Using the USPtex1.0 dataset as an example (Column 6 in Table II), the proposed method is ranked second with an average accuracy of 91.13%, which is a very good classification rate, close to the 92.46% achieved by the top-ranked feature extraction method, DNT-MQP. The same observation holds for the other two datasets.

Based on the observations outlined above, it is clear that MTSP consistently outperforms the state-of-the-art feature extraction approaches examined in our study across the vast majority of the tested texture and material datasets.
Descriptors | BonnBTF | JerryWu | Brodatz | KTH-TIPS | KTH-TIPS2b | USPtex1.0 | VisTex | MBT   | NewBarkTex
MTSP        | 100     | 100     | 100     | 100      | 95.61      | 91.13     | 80.93  | 87.96 | 85.04
STP         | 99.51   | 99.33   | 100     | 99.98    | 92.28      | 86.28     | 79.35  | 88.60 | 77.53
SSP         | 99.96   | 99.83   | 100     | 99.95    | 93.86      | 89.93     | 77.68  | 83.57 | 81.76
DNT-MQP     | 99.25   | 99.88   | 100     | 100      | 95.12      | 92.46     | 78.22  | 86.91 | 86.98
LETRIST     | 100     | 100     | 100     | 99.99    | 90.08      | 89.34     | 68.68  | 81.39 | 63.12
LCCMSP      | 97.64   | 98.18   | 100     | 100      | 93.32      | 88.42     | 78.27  | 85.41 | 84.72
ARCSLBP     | 99.17   | 99.53   | 100     | 99.88    | 93.23      | 86.89     | 75.76  | 83.23 | 78.66
LDTP        | 99.88   | 98.23   | 100     | 100      | 90.47      | 83.00     | 76.76  | 82.97 | 72.66
ILQP        | 98.72   | 98.03   | 100     | 100      | 93.39      | 87.54     | 75.39  | 85.80 | 83.85
RALBGC      | 98.80   | 97.51   | 100     | 100      | 93.39      | 87.16     | 77.89  | 86.22 | 84.56
MDGMM-LTP   | 100     | 98.01   | 100     | 100      | 93.57      | 89.37     | 79.21  | 87.19 | 85.17
LBP         | 95.86   | 97.26   | 100     | 100      | 89.67      | 81.43     | 74.19  | 85.61 | 79.00
LTCP        | 98.59   | 97.74   | 100     | 100      | 88.14      | 80.44     | 72.78  | 84.42 | 76.56
ILTP        | 99.17   | 98.32   | 100     | 100      | 93.91      | 88.83     | 77.44  | 84.74 | 84.44
LTP         | 98.64   | 98.06   | 100     | 100      | 92.92      | 86.42     | 75.38  | 88.56 | 82.81
LQP         | 97.06   | 97.69   | 99.97   | 99.90    | 93.17      | 85.42     | 73.83  | 89.31 | 78.54
MRELBP      | 98.97   | 99.53   | 100     | 100      | 89.00      | 84.38     | 64.74  | 75.66 | 61.81
MNTCDP      | 100     | 100     | 100     | 100      | 90.93      | 85.73     | 79.53  | 78.95 | 71.03

TABLE II. THE ACHIEVED CLASSIFICATION PERFORMANCE (%) OF THE STATE-OF-THE-ART FEATURE EXTRACTION TECHNIQUES.
Rank | BonnBTF | JerryWu | Brodatz | KTH-TIPS | KTH-TIPS2b | USPtex1.0 | VisTex | MBT | NewBarkTex
1  | MTSP (100%)   | MTSP (100%)   | MTSP (100%)   | MTSP (100%)    | MTSP (95.61%)  | DNT-MQP (92.46%) | MTSP (80.93%) | LQP (89.31%)  | DNT-MQP (86.98%)
2  | LETRIST       | LETRIST       | DNT-MQP       | STP (100.00%)  | DNT-MQP        | MTSP             | MNTCDP        | STP (88.61%)  | MDGMM-LTP
3  | MNTCDP        | MNTCDP        | RALBGC        | SSP (100.00%)  | ILTP           | SSP (89.93%)     | STP (79.35%)  | LTP           | MTSP
4  | MDGMM-LTP     | DNT-MQP       | LDTP          | DNT-MQP        | SSP (93.86%)   | MDGMM-LTP        | MDGMM-LTP     | MTSP          | RALBGC
5  | SSP (99.51%)  | SSP (99.96%)  | ILQP          | LETRIST        | MDGMM-LTP      | LETRIST          | RALBGC        | MDGMM-LTP     | RALBGC
6  | LDTP          | ARCSLBP       | RALBGC        | RALBGC         | ILQP           | ILTP             | DNT-MQP       | DNT-MQP       | ILTP
7  | STP (99.51%)  | MRELBP        | MDGMM-LTP     | ARCSLBP        | RALBGC         | RALBGC           | RALBGC        | RALBGC        | ILQP
8  | DNT-MQP       | STP (99.33%)  | LBP           | LDTP           | RALBGC         | ILQP             | SSP (77.68%)  | ILQP          | LTP
9  | RALBGC        | ILTP          | LTCP          | ILQP           | ARCSLBP        | RALBGC           | ILTP          | LBP           | SSP (81.76%)
10 | ILTP          | LDTP          | ILTP          | RALBGC         | LQP            | ARCSLBP          | LDTP          | RALBGC        | LBP
11 | MRELBP        | RALBGC        | LTP           | MDGMM-LTP      | LTP            | LTP              | ARCSLBP       | ILTP          | ARCSLBP
12 | RALBGC        | LTP           | MRELBP        | LBP            | STP (92.28%)   | STP (86.28%)     | ILQP          | LTCP          | LQP
13 | ILQP          | ILQP          | MNTCDP        | LTCP           | MNTCDP         | MNTCDP           | LTP           | SSP (83.57%)  | STP (77.53%)
14 | LTP           | MDGMM-LTP     | LETRIST       | ILTP           | LDTP           | LQP              | LBP           | ARCSLBP       | LTCP
15 | LTCP          | LTCP          | STP (99.98%)  | LTP            | LETRIST        | MRELBP           | LQP           | LDTP          | LDTP
16 | RALBGC        | LQP           | LQP           | MRELBP         | LBP            | LDTP             | LTCP          | LETRIST       | MNTCDP
17 | LQP           | RALBGC        | SSP (99.95%)  | MNTCDP         | MRELBP         | LBP              | LETRIST       | MNTCDP        | LETRIST
18 | LBP           | LBP           | ARCSLBP       | LQP            | LTCP           | LTCP             | MRELBP        | MRELBP        | MRELBP

TABLE III. RANKING OF THE FEATURE EXTRACTION TECHNIQUES ON EACH OF THE EVALUATED DATASETS (THE PROPOSED APPROACH AND ITS COMPONENTS APPEAR WITH THEIR ACCURACIES IN PARENTHESES).
2) Experiment #2: Statistical significance of the achieved results in terms of accuracy improvement: The objective of this section is to further establish, statistically, the performance achieved by MTSP compared with the existing evaluated methods by employing the Wilcoxon signed-rank test-based ranking technique [El Merabet and Ruichek, 2018]. The algorithm is applied to all pairwise combinations of the 18 evaluated methods, including STP, SSP and their combination MTSP = STP + SSP, on the nine tested databases. Figure 7 depicts the achieved ranking results based on the normalized number of victories obtained by each method across all databases considered in our experiment. It is evident from the findings shown in Figure 7 that MTSP is the most effective operator among the most recent feature extraction techniques, validating the general conclusion drawn from Tables II and III.

Fig. 7. Ranking results and the number of victories obtained for the 18 evaluated methods.
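For illustration, a pairwise comparison of this kind can be sketched with SciPy as follows (our sketch; the per-method accuracy vectors, the victory-counting rule and the normalization shown here are assumptions, since the paper only cites [El Merabet and Ruichek, 2018] for the exact procedure).

```python
import numpy as np
from scipy.stats import wilcoxon

def pairwise_victories(acc, alpha=0.05):
    """`acc[m]` holds paired accuracies of method m (e.g., one value per
    split or per dataset). A method scores a victory over another when the
    Wilcoxon signed-rank test rejects equality and its median accuracy is
    higher (assumed rule)."""
    methods = list(acc)
    wins = {m: 0 for m in methods}
    for i, a in enumerate(methods):
        for b in methods[i + 1:]:
            stat, p = wilcoxon(acc[a], acc[b], zero_method="zsplit")
            if p < alpha:
                winner = a if np.median(acc[a]) > np.median(acc[b]) else b
                wins[winner] += 1
    # Normalized number of victories, in the spirit of Figure 7
    n_pairs = len(methods) - 1
    return {m: wins[m] / n_pairs for m in methods}
```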
V. CONCLUSION

In this paper, we introduced the Multi-scale Ternary and Septenary Pattern (MTSP), an innovative feature extraction method that makes use of set theory, neighborhood topology, and multiple oriented blocks. MTSP encodes the linkages and interactions between pixels in a 3×3 grayscale image patch using a compact coding technique that combines the ideas of LTP- and LQP-like techniques. The capabilities and performance stability of MTSP were evaluated on nine complex texture databases using the 1-NN classifier against 15 recent and advanced state-of-the-art texture operators. The MTSP descriptor performed well across all databases, both in terms of accuracy and feature dimensionality, indicating that it provides a more accurate representation of texture images. In future work, we plan to try out more advanced classifiers in order to further improve classification accuracy.

REFERENCES

[Armi and Fekri-Ershad, 2019] Armi, L. and Fekri-Ershad, S. (2019). Texture image classification based on improved local quinary patterns. Multimedia Tools and Applications, 78(14):18995–19018.
[Arya and Vimina, 2021] Arya, R. and Vimina, E. (2021). Local triangular
coded pattern: a texture descriptor for image classification. IETE Journal
of Research, pages 1–12.
[Bhattacharjee and Roy, 2019] Bhattacharjee, D. and Roy, H. (2019). Pattern
of local gravitational force (plgf): A novel local image descriptor. IEEE
transactions on pattern analysis and machine intelligence, 43(2):595–607.
[El Khadiri et al., 2018a] El Khadiri, I., El Merabet, Y., Chahi, A., Ruichek,
Y., Touahni, R., et al. (2018a). Local directional ternary pattern: A new
texture descriptor for texture classification. Computer vision and image
understanding, 169:14–27.
[El Khadiri et al., 2022] El Khadiri, I., El Merabet, Y., Ruichek, Y.,
Chetverikov, D., El Mokhtar, R., and Tarawneh, A. S. (2022). Local
ternary pattern based multi-directional guided mixed mask (mdgmm-ltp)
for texture and material classification. Expert Systems with Applications,
page 117646.
[El Khadiri et al., 2020] El Khadiri, I., El Merabet, Y., Ruichek, Y.,
Chetverikov, D., Touahni, R., et al. (2020). O3s-mtp: Oriented star sam-
pling structure based multi-scale ternary pattern for texture classification.
Signal Processing: Image Communication, 84:115830.
[El Khadiri et al., 2021] El Khadiri, I., El Merabet, Y., Tarawneh, A. S.,
Ruichek, Y., Chetverikov, D., Touahni, R., and Hassanat, A. B. (2021).
Petersen graph multi-orientation based multi-scale ternary pattern (pgmo-
mstp): An efficient descriptor for texture and material recognition. IEEE
Transactions on Image Processing, 30:4571–4586.
[El Khadiri et al., 2018b] El Khadiri, I., Kas, M., El Merabet, Y., Ruichek,
Y., and Touahni, R. (2018b). Repulsive-and-attractive local binary gradient
contours: New and efficient feature descriptors for texture classification.
Information Sciences, 467:634–653.
[El Merabet and Ruichek, 2018] El Merabet, Y. and Ruichek, Y. (2018). Lo-
cal concave-and-convex micro-structure patterns for texture classification.
Pattern Recognition, 76:303–322.
[El Merabet and Ruichek, 2019] El Merabet, Y. and Ruichek, Y. (2019).
Attractive-and-repulsive center-symmetric local binary patterns for texture
classification. Engineering Applications of Artificial Intelligence, 78:158–
172.
[Kas et al., 2018] Kas, M., Ruichek, Y., Messoussi, R., et al. (2018). Mixed
neighborhood topology cross decoded patterns for image-based face recog-
nition. Expert Systems with Applications, 114:119–142.
[Liu et al., 2019] Liu, L., Chen, J., Fieguth, P., Zhao, G., Chellappa, R.,
and Pietikäinen, M. (2019). From bow to cnn: Two decades of texture
representation for texture classification. International Journal of Computer
Vision, 127(1):74–109.
[Liu et al., 2016] Liu, L., Lao, S., Fieguth, P. W., Guo, Y., Wang, X., and
Pietikäinen, M. (2016). Median robust extended local binary pattern
for texture classification. IEEE Transactions on Image Processing,
25(3):1368–1381.
[Nanni et al., 2010] Nanni, L., Lumini, A., and Brahnam, S. (2010). Local
binary patterns variants as texture descriptors for medical image analysis.
Artificial intelligence in medicine, 49(2):117–125.
[Ojala et al., 1996] Ojala, T., Pietikäinen, M., and Harwood, D. (1996). A
comparative study of texture measures with classification based on featured
distributions. Pattern recognition, 29(1):51–59.
[Rachdi et al., 2020] Rachdi, E., El Merabet, Y., Akhtar, Z., and Messoussi, R. (2020). Directional neighborhood topologies based multi-scale quinary pattern for texture classification. IEEE Access, 8:212233–212246.
[Shu et al., 2022] Shu, X., Pan, H., Shi, J., Song, X., and Wu, X.-J. (2022). Using global information to refine local patterns for texture representation and classification. Pattern Recognition, 131:108843.
[Song et al., 2017] Song, T., Li, H., Meng, F., Wu, Q., and Cai, J. (2017). Letrist: Locally encoded transform feature histogram for rotation-invariant texture classification. IEEE Transactions on circuits and systems for video technology, 28(7):1565–1579.
[Tan and Triggs, 2010] Tan, X. and Triggs, B. (2010). Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE transactions on image processing, 19(6):1635–1650.
[Yang and Sun, 2011] Yang, W. and Sun, C. (2011). Face recognition using improved local texture patterns. In 2011 9th World Congress on Intelligent Control and Automation, pages 48–51. IEEE.
[Zheng et al., 2022] Zheng, Z., Xu, B., Ju, J., Guo, Z., You, C., Lei, Q., and Zhang, Q. (2022). Circumferential local ternary pattern: New and efficient feature descriptors for anti-counterfeiting pattern identification. IEEE Transactions on Information Forensics and Security, 17:970–981.