
Optimization of Bayesian Algorithms for Multi-threshold Image Segmentation

Abstract: The Bayesian optimization algorithm is one of the distribution estimation algorithms formed
by combining statistical theory with evolutionary algorithms. The algorithm uses Bayesian networks as
the probabilistic model of its solution space, which is a new form of evolution with vitality and
research prospects. The research on Bayesian algorithms has been steadily developing but there are
some problems with the algorithm, such as being too computationally intensive. Therefore, the research
chooses to improve the Bayesian algorithm in view of the immune algorithm, by reducing the number
of Bayesian network construction to solve the problem of computation. The immune algorithm is
combined with the Bayesian algorithm to improve the individual fitness of the population. Simulation
experiments show that the improved Bayesian algorithm in view of the immune algorithm can
effectively reduce the computational effort, shorten the computation time and improve the optimization
capability. The average quantity of times the improved Bayesian algorithm achieves the optimal value
is 30 times higher than that of the traditional algorithm, which is 20 times higher. The study applies the
optimised Bayesian algorithm to multi-threshold image segmentation, and uses the excellent
merit-seeking ability to search for the optimal threshold value to complete the image segmentation. The
simulation showcases that the improved Bayesian optimization algorithm can obtain better image
segmentation results compared with the genetic algorithm. The application of Bayesian algorithm to
image segmentation broadens the application area of the algorithm and provides a new direction of
exploration for image segmentation.
Keywords: Bayesian algorithm; image segmentation; immune algorithm; merit-seeking capability

1. INTRODUCTION
Bayesian algorithms (BA) are an important part of the field of evolutionary studies and are now
widely used in many areas as a statistical method. It assumes that the probability distribution is a priori,
i.e. the prior distribution of the parameters is assumed to be known [1]. An exact estimate of an unknown parameter (usually treated as a random variable) cannot be obtained directly [2]. However, by Bayes' theorem, the posterior distribution of the parameter can be inferred from its prior distribution and the observed data, and the posterior probability of the parameter can then be computed [3]. Multi-threshold (MT) image segmentation (IS) is the separation of an image into
regions, which do not overlap, thus facilitating subsequent processing. In IS, there are significant
differences between the target and the background, between the target and other objects, etc. In order to
achieve effective segmentation, it is necessary to choose between multiple thresholds [4-6]. Bayesian
theorem-based IS methods can achieve good segmentation results as they can separate the target from
the background in a statistical sense and can effectively remove noise [7]. The BA in MT IS has the
problem of excessive computation, so the study chooses to optimize the BA by using the immune
algorithm (IA), using the guided variation of the IA to improve the individual fitness of the population,
reduce the computation, shorten the computation time and speed up the convergence.
This research is divided into four parts. The first offers the research background and summarises research
in related fields. The second part describes the specific approach of introducing IA in BA. The third
part investigates the convergence of the IA optimised BA and its effectiveness for MT image
classification. The final section summarises and outlines the whole research.

2. RELATED WORKS
BA and IS are both hot academic research directions, and over the years, many scholars have
made various improvements and studies on them. Liu combined the naive Bayesian classification algorithm (BCA) with fuzzy models to construct a teaching model. In teaching, the naive BCA produces only a small number of features and uses only probabilities when handling items. By combining the relevant models, the materials required for teaching could be produced rapidly, and relevant strategies for architectural art could be derived [8].
Zhang et al. proposed a method in view of the Bayesian Optimization Algorithm (BOA) for exploring the best operating scheme under special conditions. BOA is an exploration strategy used to adjust operational plans based on observation results. To test the method, process simulation was used to simulate the power failure of all main pumps and to find the optimal operating plan for such failures [9]. Sultana et al. provided machine learning techniques based on BOA. Comparing the
developed model with the existing model for validation results in better performance [10]. Due to the inability to directly determine the f-CaO (FC) content in cement clinker, accurate and rapid forecasting of its content is of great significance. However, due to the complexity of the cement calcination process, it is difficult to establish an accurate FC prediction model. Therefore, Hao et al. proposed a Bayesian-optimised light gradient boosting machine model for cement clinker FC prediction. A time series (TS) input window was designed to form a special TS data matrix, and time-varying delay features were extracted from the special TS matrices using the Bayesian-optimised light gradient boosting histogram algorithm. Compared with support vector regression, backpropagation, gradient boosting decision trees, and extreme gradient boosting, the Bayesian-optimised light gradient boosting machine has better prediction accuracy, robustness, and generalization ability [11]. Ma et al. developed a framework for calibrating complex models. A loosely coupled network architecture is adopted to facilitate bidirectional transmission of variables, and model evaluation is implemented on a cluster, which markedly separates computation from execution. This offers an excellent environment for evaluating the effectiveness of Bayesian optimization in quantifying SWAT parameters [12].
As the fuzzy C-means clustering algorithm (CA) has difficulty dealing with clustering boundaries and outliers, Ji et al. presented a colour IS method in view of neutrosophic C-means clustering. First, a simple linear CA was improved to obtain a well-adapted local spatial neighbourhood. Then, the study adds the local neighbourhood information to the objective function of neutrosophic C-means clustering to obtain more accurate membership, classifies deterministic clusters based on maximum membership, and classifies superpixels of uncertain clusters in view of structural similarity. The outcomes indicate that the method performs well [13]. To achieve fast segmentation of
images, Yan and Weng proposed a relevant model driven by optimised local pre-fitted image energy.
The model integrates a local pre-fitted image function and an optimised edge indication function, so
that it can exactly segment images with intensity inhomogeneities. It also can handle images with large
noise disturbances inside the target [14]. A single threshold can enhance processing during fine
segmentation of medical images, but there are problems of feature blurring. Li et al. presented a
method in view of MT optimization. The influence of the contour wave (CW) transform on the
grey-scale correlation was analyzed, and Bayesian thresholding was enhanced. The correlation
properties of the CW coefficients were used to improve the mid-threshold function and MT constraints
were applied to the contour features of medical images [15]. Xing proposed an MT IS method. The computational complexity of MT segmentation grows as the quantity of thresholds increases, so the emperor penguin optimization algorithm is utilized to find the best thresholds for colour images, and Gaussian variation is utilized to enhance its search power. Kapur's MT method was optimized and experiments were conducted on three kinds of images. The experiments indicate that the improved emperor penguin optimization is an effective colour IS method with high segmentation accuracy and less CPU time [16].
In summary, many scholars around the world have researched and explored the optimization of BA and IS. The study chooses to introduce the IA into the BA to alleviate the problem of excessive computation, and combines the result with the MT IS problem, with a view to improving the effect of MT IS.

3. BOA BASED ON IA AND ITS APPLICATION IN MT IS


BA update populations using Bayesian networks. An important step in the construction of a Bayesian network is learning its structure, which is complex and time-consuming [17]. The study chooses to reduce the number of Bayesian network constructions by increasing the average fitness of the population.
3.1 Implementation Method and Steps of BOA in View of IA
The BOA in view of the IA first establishes the order of the nodes, by building an undirected graph and then searching it. After the node order is determined, the Bayesian network structure is found. Because the maximum likelihood method is simple and convenient to operate, it is chosen in this study to calculate the Bayesian network parameters. The improved Bayesian optimisation algorithm is shown in Figure 1.

[Figure 1: flowchart of the improved BOA — encode according to the specific problem and generate the initial population; calculate the fitness of each individual and select individuals with higher fitness to form a sample set; construct a suitable Bayesian network using the sample set; sample the Bayesian network to generate a set of individuals; extract a vaccine from the best individual, vaccinate the population, and update it through immune selection; repeat (T = T + 1) until the judgment conditions are met, then stop and output the results.]

Figure 1 Improved BOA algorithm process


As shown in Figure 1, the population is coded according to the needs of the particular problem. The coding is similar to that of a genetic algorithm, usually binary, after which an initial population is randomly generated. Based on the particular problem, a fitness function is defined and the fitness of each individual in the population is calculated. Individuals with greater fitness are selected from the population, forming a new population. Using this population as a sample set, an undirected graph is constructed from the analysis of these samples, and a depth-first search is performed on the undirected graph to determine an order of nodes, from which an algorithm obtains the structure of the Bayesian network. Using maximum likelihood estimation, each conditional probability is found and the Bayesian network parameters are calculated as shown in Equation (1).
$$p(X_i = x_i \mid fa(X_i) = y_i) = \frac{num(X_i = x_i,\ fa(X_i) = y_i)}{num(fa(X_i) = y_i)} \qquad (1)$$
In Equation (1), X_i is the variable, x_i is the value taken by X_i, fa(X_i) is the parent node of X_i, y_i is the value taken by the parent node, num(X_i = x_i, fa(X_i) = y_i) is the number of samples in the sample set (SS) for which both conditions hold simultaneously, and num(fa(X_i) = y_i) is the number of samples in the SS with fa(X_i) = y_i. The Bayesian network is sampled from the conditional probability distribution of each node, thus generating a new set of individuals. The set of individuals and the SS are combined to
form a new population; the fitness value (FV) of each individual in this population is then calculated and the individual with the highest FV is found. Its information is made into a vaccine, the population is vaccinated with the vaccination probability, the vaccinated population is updated by immune selection, and the individuals with greater fitness are selected from the updated population, thus forming a new population. If the termination condition is met, the result is output; if not, the operation continues. The study uses the less computationally intensive K2 algorithm to learn the
structure of Bayesian networks. The first step is to determine the order of the nodes, which is divided
into two steps: building the undirected graph and searching the undirected graph. The data in the SS
can be used to estimate the conditional probabilities, which are calculated as shown in Equation (2).
$$p(X_i = x_i \mid X_1 = x_1) = \frac{num(X_i = x_i,\ X_1 = x_1)}{num(X_1 = x_1)} \qquad (2)$$
In Equation (2), num(X_i = x_i, X_1 = x_1) is the number of samples in the SS for which X_i = x_i and X_1 = x_1 hold simultaneously, and num(X_1 = x_1) is the number of samples in the SS with X_1 = x_1. The undirected graph created is shown in Figure 2.

[Figure 2: an undirected graph of eight numbered nodes, laid out as 1 2 3 / 4 5 6 / 7 8, with edges between connected nodes.]

Figure 2 Schematic diagram of undirected graph
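The count-based estimate of Equation (2) can be sketched as a simple counting routine; the tiny sample set below is hypothetical, chosen only to illustrate the ratio of counts:

```python
def conditional_prob(samples, child, child_val, parent, parent_val):
    """Estimate p(child = child_val | parent = parent_val) by counting,
    as in Equation (2): num(both conditions hold) / num(parent condition holds)."""
    joint = sum(1 for s in samples
                if s[child] == child_val and s[parent] == parent_val)
    marginal = sum(1 for s in samples if s[parent] == parent_val)
    return joint / marginal if marginal else 0.0

# Hypothetical binary sample set over two variables "X1" and "Xi".
samples = [
    {"X1": 1, "Xi": 1},
    {"X1": 1, "Xi": 0},
    {"X1": 1, "Xi": 1},
    {"X1": 0, "Xi": 0},
]
print(conditional_prob(samples, "Xi", 1, "X1", 1))  # 2 of the 3 samples with X1=1
```

The same counting scheme, with the parent set fa(X_i) in place of the single variable X_1, gives the maximum-likelihood parameters of Equation (1).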
The undirected graph in Figure 2 is represented by an adjacency matrix, and a node is randomly chosen as the starting point (SP). Suppose the SP is node 2; the next nodes connected to it are 1 and 3. As 1 is smaller, the next node visited is 1, and the remaining nodes are handled in the same way. When node 5 is reached and all the nodes connected to it have already been visited, the search returns towards node 7, at which point only the node connected to node 8 is available. Likewise, whenever a node has no unvisited neighbours, the search keeps returning; by the time it returns to node 2, node 2 again has an unvisited neighbour. The nodes are therefore visited in the order 2, 1, 4, 7, 5, 8, 3, 6. For the undirected graph search step, the study selects the commonly used depth-first search method. Figure 3 indicates the relevant details of the traditional BA.
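The node-ordering step can be sketched as a depth-first search that always visits the smallest unvisited neighbour first. The adjacency list below is hypothetical — one list consistent with the traversal described for Figure 2 — since the figure's exact edge set is not fully recoverable:

```python
def dfs_order(adj, start):
    """Iterative depth-first traversal, visiting the smallest unvisited
    neighbour first; returns the node order used to learn the network."""
    order, stack, seen = [start], [start], {start}
    while stack:
        node = stack[-1]
        nxt = next((n for n in sorted(adj[node]) if n not in seen), None)
        if nxt is None:
            stack.pop()            # no unvisited neighbours: backtrack
        else:
            seen.add(nxt)
            order.append(nxt)
            stack.append(nxt)
    return order

# Hypothetical adjacency list reproducing the traversal described above.
adj = {1: [2, 4], 2: [1, 3], 3: [2, 6], 4: [1, 7],
       5: [7, 8], 6: [3], 7: [4, 5, 8], 8: [5, 7]}
print(dfs_order(adj, 2))  # [2, 1, 4, 7, 5, 8, 3, 6]
```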
[Figure 3: flowchart of the traditional BOA — begin; initialization; calculation of the fitness function; building the Bayesian network; Bayesian network sampling; check whether the judgment conditions are met; if so, stop running and output the results, otherwise loop.]

Figure 3 Schematic flowchart of traditional BOA algorithm


As shown in Figure 3, the traditional BA constructs a Bayesian network over the set of solutions to a known problem and then samples the network to obtain new solutions; the best solution to a given problem is evolved step by step from an initial population [18]. The specific operations of BA include initialisation, fitness function calculation, construction of the Bayesian network, Bayesian network sampling and conditional judgement. There are many options for each step of Bayesian optimization, each with its own advantages and shortcomings, so the various options are analysed in detail and the appropriate one is chosen for each problem to maximise effectiveness. An example of a Bayesian network is shown in Figure 4.
[Figure 4: an example Bayesian network with binary nodes such as Snow (A1), Traffic jam (A2), Fall (A3), Be late / Fracture (A4) and Deduct money, each annotated with its conditional probability table given its parents.]

Figure 4 Examples of Bayesian networks


There are two main types of scoring functions (SF) for Bayesian networks, of which the more commonly used is the Bayesian SF. Its learning objective is to find the maximum-scoring structure: taking the fixed SS as a premise and using the knowledge in the network topology, it seeks the topology that maximises the posterior probability, as shown in Equation (3).
$$p(M \mid D) = \frac{p(D \mid M)\,p(M)}{p(D)} = \frac{p(D, M)}{p(D)} \qquad (3)$$

In Equation (3), M is the known topology, p(D | M) is the conditional probability of the SS given the topology, p(M) is the prior probability (PP) of the topology, and p(D) is the PP of the SS D. Usually p(D) is constant and independent of the topology, so Equation (3) simplifies as indicated in Equation (4).

$$\log p(M, D) = \log p(D \mid M) + \log p(M) \qquad (4)$$

The K2 SF, a special case of the BD (Bayesian Dirichlet) SF, is shown in Equation (5).

$$p(M, D) = p(M) \prod_{i=1}^{n} \prod_{j=1}^{q_i} \frac{\Gamma(r_i)}{\Gamma(r_i + N_{ij})} \prod_{k=1}^{r_i} N_{ijk}! \qquad (5)$$

In Equation (5), n is the number of variables in the training SS, i indexes the variables, j indexes the configurations of a node's parents, q_i is the number of possible parent configurations of variable i in the SS, k indexes the values of the current processing node (CPN), r_i is the number of possible values of the CPN in the SS, N_{ijk} is the number of samples in which variable i takes its k-th value while its parents take their j-th configuration, and N_{ij} = \sum_k N_{ijk}.
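The per-variable factor of Equation (5) can be sketched in log form (using log-gamma to avoid factorial overflow). The count table below is hypothetical; a uniform structure prior p(M) is assumed, so it drops out of the per-variable score:

```python
import math

def k2_family_score(counts, r):
    """Log K2 score for one variable (the i-th factor of Equation (5)).
    counts[j][k] = N_ijk: number of samples where the parents take their
    j-th configuration and the variable its k-th value; r = r_i."""
    score = 0.0
    for row in counts:                                   # parent configuration j
        n_ij = sum(row)
        score += math.lgamma(r) - math.lgamma(r + n_ij)  # Gamma(r_i)/Gamma(r_i+N_ij)
        score += sum(math.lgamma(n + 1) for n in row)    # log prod_k N_ijk!
    return score

# Hypothetical counts for a binary variable (r = 2) with one binary parent.
counts = [[3, 1],   # parent = 0: N_ij0 = 3, N_ij1 = 1
          [0, 4]]   # parent = 1: N_ij0 = 0, N_ij1 = 4
print(k2_family_score(counts, 2))
```

In a structure search such as K2, this score is computed per variable for each candidate parent set, and parents are added greedily while the score improves.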
3.2 Application of BA based on IA optimization in MT IS
IS is an essential element of image processing, both in theory and in real-life applications; the aim is to extract meaningful content from the image for subsequent processing [19]. Regions of the image that have special significance are segmented according to different features of the image, such as grey scale, colour or texture; the regions do not intersect, and each region differs significantly from the others. The three common methods for IS are thresholding, region segmentation and edge detection, and the study chooses the thresholding method to segment the image. The process of applying the IA-based improved BA to IS is shown in Figure 5.

[Figure 5: flowchart of the improved BOA applied to IS — begin; code; determine the fitness function; initialization; calculate the fitness function values; evolve the population with the improved BOA; sample; decode; check whether the judgment conditions are met; if so, output the optimal segmentation threshold and use it for image segmentation.]

Figure 5 Implementation process of improved BOA algorithm based on IA in IS


As shown in Figure 5, the specific steps of the algorithm are: encoding, determining the fitness function, initialisation, calculating the fitness function values, selecting a new population consisting of the individuals with greater fitness, decoding, and judging whether the termination condition is met. The best segmentation threshold obtained is then used to separate the subject and background of the image by dividing the image pixels into two classes according to whether they are greater than the threshold. The algorithm is simple in principle and fast in operation, and it is particularly suitable when the contrast between target and background is relatively strong. One thresholding criterion, based on the mean grey value of the image's pixels, is given in Equation (6).
$$\mu = n_1 \mu_1 + n_0 \mu_0 \qquad (6)$$
In Equation (6), μ represents the mean grey value of the image; the threshold divides the pixels into two categories, denoted C_0 and C_1; n_0 and n_1 denote the proportions of pixels in C_0 and C_1 respectively, and μ_0 and μ_1 denote the mean grey values of the pixels in C_0 and C_1. The pixel variances are shown in Equation (7).
$$\sigma_0^2 = \sum_{i=0}^{t} (i - \mu_0)^2 / n_0 \qquad (7)$$

In Equation (7), σ_0² is the pixel variance of C_0. The pixel variance of C_1 is shown in Equation (8).
$$\sigma_1^2 = \sum_{i=t+1}^{255} (i - \mu_1)^2 / n_1 \qquad (8)$$

In Equation (8), σ_1² is the pixel variance of C_1. The between-class variance of the two pixel classes is shown in Equation (9).

 C2  n0 (  0 )2  n1 (  1 )2  no n1 (0  1 )2 (9)

In Equation (9), σ_C² is the between-class variance. The within-class variance of the two pixel classes is shown in Equation (10).

 l2  no 02  n112 (10)

In Equation (10), σ_l² is the within-class variance. The optimal segmentation threshold is given by Equation (11).

T  max  C2 (t ) /  t2 (t )  t  0,1, 2,..., 255 (11)

The maximum between-class variance method is a common eigenvalue-based IS algorithm. Derived from the principle of least squares, it is simple and convenient, and it has an extensive range of applications, making it one of the most common methods for threshold selection. Figure 6 indicates the relevant details.
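The criterion of Equations (6)–(11) can be sketched in pure Python over a 256-bin grey-level histogram. This is a minimal illustrative sketch, not the paper's implementation; the bimodal toy histogram and variable names are assumptions:

```python
def best_threshold(hist):
    """Search t = 0..255 for the threshold maximising sigma_C^2 / sigma_l^2
    (Equations (9)-(11)); hist[i] is the count of pixels with grey level i."""
    total = sum(hist)
    best_t, best_ratio = 0, -1.0
    for t in range(256):
        n0 = sum(hist[:t + 1]) / total            # proportion of class C0
        n1 = sum(hist[t + 1:]) / total            # proportion of class C1
        if n0 == 0 or n1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t + 1)) / (n0 * total)
        mu1 = sum(i * hist[i] for i in range(t + 1, 256)) / (n1 * total)
        var0 = sum(hist[i] * (i - mu0) ** 2 for i in range(t + 1)) / (n0 * total)
        var1 = sum(hist[i] * (i - mu1) ** 2 for i in range(t + 1, 256)) / (n1 * total)
        between = n0 * n1 * (mu0 - mu1) ** 2      # Equation (9)
        within = n0 * var0 + n1 * var1            # Equation (10)
        if within > 0 and between / within > best_ratio:
            best_ratio, best_t = between / within, t
    return best_t

# Toy bimodal histogram: peaks around grey levels 50 and 200.
hist = [0] * 256
for g in (48, 49, 50, 51, 52):
    hist[g] = 100
for g in (198, 199, 200, 201, 202):
    hist[g] = 100
print(best_threshold(hist))  # → 52, a threshold between the two peaks
```

The improved BOA replaces this exhaustive scan with a population-based search over candidate thresholds, which matters once several thresholds must be chosen jointly.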

[Figure 6: basic flowchart of the artificial IA — generate the initial antibody population and encode it; calculate the affinity between antigens and antibodies; update memory cells; antibody evolution; antibody selection; if the optimal solution or the end condition is not reached, produce new antibodies and repeat.]

Figure 6 Basic flowchart of artificial IA


As shown in Figure 6, four artificial immune algorithms that have been successfully applied are the negative selection algorithm, the clonal selection algorithm, the immunogenetic algorithm and the immune planning algorithm. The immune planning algorithm is an optimisation algorithm that combines a vaccination mechanism with the genetic algorithm; it theoretically explores the possibility of selectively and purposefully using prior knowledge or feature information during problem solving to suppress degradation in the optimisation process when dealing with difficult problems, so that the algorithm achieves better performance. The study therefore chooses the immune planning algorithm as the artificial IA to combine with BA for improvement.
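The vaccination and immune-selection steps might be sketched as follows. This is an illustrative sketch, not the paper's exact operators: the gene-wise vaccination rate and the toy fitness function are assumptions.

```python
import random

def vaccinate(individual, vaccine, p_vaccinate=0.08):
    """With probability p_vaccinate per gene, copy the vaccine's gene
    (taken from the best individual found so far) into the individual."""
    return [v if random.random() < p_vaccinate else g
            for g, v in zip(individual, vaccine)]

def immune_select(old, new, fitness):
    """Immune selection: keep the vaccinated individual only if it is at
    least as fit as the original, so vaccination cannot cause degradation."""
    return new if fitness(new) >= fitness(old) else old

# Toy example: binary individuals, fitness = number of 1s,
# vaccine = the best individual found so far.
fitness = sum
vaccine = [1, 1, 1, 1, 1, 1]
population = [[0, 1, 0, 0, 1, 0], [1, 0, 0, 1, 0, 0]]
population = [immune_select(ind, vaccinate(ind, vaccine), fitness)
              for ind in population]
print([fitness(ind) for ind in population])
```

Because immune selection never accepts a worse individual, the population's average fitness is non-decreasing across vaccination steps, which is what reduces the number of Bayesian network constructions needed.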
To compare the improved and traditional BA and verify the superiority of the improved algorithm in computational complexity, the study chose a commonly used test, the deceptive function, to evaluate the algorithm's effectiveness. When the genetic algorithm is used to solve a deceptive function, it may fall into a local optimum point because of the linkage between different gene loci in the chromosome of the solution; therefore, the following deceptive functions are chosen to test the improved algorithm in this paper. The third-order deception function is shown in Equation (12).

$$f(u) = \begin{cases} 0.9, & u = 0 \\ 0, & u = 1, 2 \\ 1, & u = 3 \end{cases} \qquad (12)$$

In Equation (12), u represents the number of 1s in the input string. The fifth-order deception function is shown in Equation (13).

 4  u , u  5

f (u )    (13)
5, u  5 

For a non-overlapping deception function of order k, the formula is shown in Equation (14).
$$F(u) = \sum_{i=1}^{n} f(u_i) \qquad (14)$$
i 1

In Equation (14), u_i is the number of 1s in the i-th group of k binary variables and n is the function scale. The formula for the overlapping third-order deception function is shown in Equation (15).
F (u )  f (u1 )  f (u2 )  f (u3 ) (15)
In Equation (15), u_1 is the number of 1s in x_1, x_2, x_3; u_2 is the number of 1s in x_3, x_4, x_5; and u_3 is the number of 1s in x_5, x_6, x_7.
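The deception functions of Equations (12), (14) and (15) can be sketched directly; the sample inputs are illustrative:

```python
def f3(u):
    """Third-order deception function (Equation (12));
    u = number of 1s in a 3-bit group."""
    return {0: 0.9, 1: 0.0, 2: 0.0, 3: 1.0}[u]

def F_non_overlapping(bits):
    """Non-overlapping third-order deception function (Equation (14)):
    the bit string is split into consecutive 3-bit groups."""
    return sum(f3(sum(bits[i:i + 3])) for i in range(0, len(bits), 3))

def F_overlapping(bits):
    """Overlapping third-order deception function (Equation (15)) for a
    7-bit string: groups (x1,x2,x3), (x3,x4,x5), (x5,x6,x7) share bits."""
    x = bits
    return f3(x[0] + x[1] + x[2]) + f3(x[2] + x[3] + x[4]) + f3(x[4] + x[5] + x[6])

print(F_non_overlapping([1, 1, 1, 0, 0, 0]))  # 1.0 + 0.9 = 1.9
print(F_overlapping([1, 1, 1, 1, 1, 1, 1]))   # 1 + 1 + 1 = 3.0
```

The function is deceptive because the all-zeros group scores 0.9, so local search is pulled away from the true optimum of all ones.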
4. PERFORMANCE VERIFICATION OF IMPROVED BA BASED ON IA AND ITS
EFFECTIVENESS IN IS
4.1 Performance and Superiority Verification of Improved BA Based on Immunity Algorithm
To verify the BA before and after the improvement, the study ran both algorithms with the same parameters, choosing a third-order non-overlapping deception function of scale 30, a third-order overlapping deception function of scale 31, and a fifth-order non-overlapping deception function of scale 30, and carried out simulation experiments under the three conditions. The relevant experimental outcomes are indicated in Figure 7.
[Figure 7: convergence curves (astringency vs. convergence time, 0–100) of the traditional and improved Bayesian algorithms — (a) third-order non-overlapping deception functions with a function scale of 30; (b) third-order overlapping deception functions with a function scale of 31; (c) fifth-order non-overlapping deception functions with a function scale of 30.]

Figure 7 Convergence curves under three simulation conditions


As shown in Figure 7, the improved Bayesian optimisation algorithm converges faster than the traditional BA under all conditions and reaches the optimal value sooner. The fastest convergence occurs for the third-order non-overlapping deception function with a function scale of 30. To investigate the remaining performance improvements, i.e. the optimality-finding ability and the computing time, the traditional BA and the IA-based BOA were run under the same conditions for different function scales. The function scales were 15 and 30; a population size of 60 was chosen at function scale 15 and a population size of 10,000 at function scale 30, with a vaccination probability of 0.08 and a variation locus of 2. The experimental results on the third-order deception function are presented in Table 1.
Table 1 Threshold results and running time on third order deception functions
Number of times to reach Average
Optimization algorithm Function Optimal value
the optimal value run time (s)
Traditional Bayesian 15 5 20 18.0
algorithm 30 10 18 140.3
Improved Bayesian 15 5 30 15.2
algorithm 30 10 30 80.4
Traditional Bayesian 15 5 20 21.7
algorithm 30 10 30 166.2
Improved Bayesian 15 5 30 16.8
algorithm 30 10 30 92.1
As shown in Table 1, at the smaller function scale the IA-based improved algorithm does not differ much from the conventional algorithm in computation time, but its optimality-finding capability is significantly better, reaching the optimal value on every run. At the larger function scale, compared with the conventional algorithm, the IA-based improved algorithm not only improves the optimality-seeking capability significantly, ensuring that the optimal solution is found every time, but also markedly reduces the amount of computation and the computation time required to find the optimal solution. The experimental results on the fifth-order deception function are indicated in Table 2.
Table 2 Threshold results and running time on fifth-order deception functions

Optimization algorithm          Function scale   Optimal value   Times optimum reached   Average run time (s)
Traditional Bayesian algorithm  15               5               21                      24.2
Traditional Bayesian algorithm  30               10              30                      175.2
Improved Bayesian algorithm     15               5               30                      14.7
Improved Bayesian algorithm     30               10              30                      83.4
A comparison of Table 1 and Table 2 shows that the search efficiency of the improved BOA is essentially the same in the third-order and fifth-order cases, which indicates that the improved BOA remains advantageous for problems of high complexity and strong linkage. To further test the effectiveness of the algorithm, the study selected a standard image as the segmentation target and ran the conventional genetic algorithm, the conventional BOA and the improved BA repeatedly in five groups of experiments, each containing 50 segmentation runs; the segmentation results are shown in Figure 8.

[Figure 8: for each of the five experiment groups, the number of times the optimal threshold was reached (out of 50, left axis) and the run time in seconds (right axis) for the traditional and improved algorithms.]

Figure 8 Comparative results of traditional GA and improved GA in segmentation experiments


Figure 8 indicates that although the operation time of the BOA is slightly longer than that of the traditional algorithm, its optimisation-seeking capability is higher, and the optimisation-seeking ability of the improved BOA is further strengthened. Its average threshold value also differs little from the theoretical value and its threshold results are better, which demonstrates the effectiveness of the BOA in IS.
4.2 Experimental Results of Improved BA in IS Simulation
To further demonstrate the performance of the IA-improved BA (I-BA) for MT IS, the study compared it with six other algorithms, namely DE, HHO, PSA, SCA, MFO, and PSO, on the dataset. The proposed algorithm and the comparison algorithms were each tested on the benchmark functions for 30 runs, and their convergence results are shown in Figure 9.

[Figure 9: convergence curves (optimal value vs. FEs, 0–3.0×10^5) of I-BA, DE, HHO, SCA, PSO and MFO on benchmark functions (a) F5 and (b) F8.]

Figure 9 Convergence curves obtained from testing I-BA algorithm and other similar traditional algorithms on benchmark functions
As shown in Figure 9, on the two benchmark functions F5 and F8 the I-BA algorithm outperformed similar conventional algorithms: it achieved better convergence accuracy (CA) and a stronger capability for avoiding local optima. This indicates that the I-BA algorithm performs well on different tasks. To further illustrate the superiority of the I-BA algorithm relative to other similar improved algorithms, an experimental comparison was performed on the benchmark functions; the convergence curves are shown in Figure 10.

[Figure 10: convergence curves (optimal value vs. FEs, 0–3.0×10^5) of I-BA, CBA, RDWOA, MSCA, HGWO and BEOA on benchmark functions (a) F1 and (b) F5.]

Figure 10 Convergence curves obtained from testing I-BA algorithm and other similar improved algorithms on benchmark functions
Figure 10 gives the convergence curves of the I-BA on the test functions alongside those of
other improved algorithms. The results indicate that the method possesses excellent CA and a
strong ability to jump out of local extremes. Therefore, from the analysis of the comparative
experimental results, in which I-BA performs significantly better than similar algorithms, it can
be concluded that I-BA achieves high CA, is capable of escaping local optima, and obtains
high-quality solutions.
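Comparisons like the one above are typically quantified with per-run statistics. A minimal sketch, assuming each algorithm yields 30 final best values and the benchmark's optimum is known (both the values and the tolerance below are hypothetical): the success count records how often a run lands within tolerance of the optimum, which is the kind of figure cited in the conclusion.

```python
def summarize_runs(final_values, optimum, tol=1e-8):
    """Summarize the final best values of repeated runs: mean, standard
    deviation, and the number of runs within `tol` of the known optimum."""
    n = len(final_values)
    mean = sum(final_values) / n
    var = sum((v - mean) ** 2 for v in final_values) / n
    successes = sum(1 for v in final_values if abs(v - optimum) <= tol)
    return {"mean": mean, "std": var ** 0.5, "successes": successes}

# Hypothetical outcome: 25 of 30 runs reach the optimum (value 0.0).
stats = summarize_runs([0.0] * 25 + [1.5] * 5, optimum=0.0)
```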

5. Conclusion
The BA is a new evolutionary approach that uses Bayesian networks as the probabilistic model of
its solution space, and it is a dynamic and promising research direction. Research on the BA is
constantly intensifying, but the approach has its own drawbacks, chief among them its computational
intensity. For this reason, a BA based on the IA was chosen to address the problem of high
computational complexity. The IA is combined with the BA to improve the fitness of individuals in
the population through the guided variation of the IA. Simulation experiments show that, compared
with the traditional BA, the improved BA based on the IA can effectively reduce the computational
effort, shorten the computing time, and improve the optimisation capability. This study applies the
optimised BA to MT IS, using its strong merit-seeking ability to search for the best thresholds and
complete the IS. The simulation outcomes illustrate that the improved BA achieves better IS results
than the genetic algorithm: the improved BA reached the optimal value an average of 30 times,
compared with 20 times for the traditional algorithm. However, the reduction in the number of times
the Bayesian network is constructed does not address the root cause, i.e. the Bayesian network
construction method itself, which leaves some shortcomings. In future research, we will strive to
further reduce the computational effort by starting from the construction algorithm of Bayesian
networks. The application of the BA to IS broadens the application area of the algorithm and opens
a new direction of exploration for IS.
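The threshold-search step mentioned above can be made concrete. In MT IS, the optimizer typically maximizes a criterion such as the multi-level between-class (Otsu) variance over candidate threshold vectors, and the best thresholds then partition the gray levels into classes. The sketch below assumes that Otsu-style criterion, which is common in MT thresholding; the paper does not specify its exact fitness function, so this is an illustrative assumption.

```python
def between_class_variance(hist, thresholds):
    """Multi-level Otsu criterion: between-class variance of a gray-level
    histogram split at the given thresholds. The optimizer would maximize
    this over candidate threshold vectors."""
    total = sum(hist)
    bounds = [0] + sorted(thresholds) + [len(hist)]
    mu_total = sum(i * h for i, h in enumerate(hist)) / total
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(hist[lo:hi]) / total          # class probability
        if w == 0:
            continue
        mu = sum(i * hist[i] for i in range(lo, hi)) / (w * total)
        var += w * (mu - mu_total) ** 2
    return var

def apply_thresholds(pixels, thresholds):
    """Segment: map each pixel to its class index under sorted thresholds."""
    ts = sorted(thresholds)
    return [sum(p >= t for t in ts) for p in pixels]
```

For a bimodal 4-bin histogram such as `[4, 1, 1, 4]`, the criterion correctly scores the split at bin 2 higher than the split at bin 1, so a merit-seeking search over thresholds converges to the valley between the modes.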

