Quantum Machine Learning: A Systematic Categorization Based On Learning Paradigms, NISQ Suitability, and Fault Tolerance
https://doi.org/10.1007/s42484-025-00266-4
REVIEW ARTICLE
Received: 22 July 2024 / Accepted: 22 February 2025 / Published online: 11 March 2025
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2025
Abstract
Quantum computing has become popular as a result of the rise in computational demands for solving issues in a variety
of sectors that are computationally expensive to tackle with conventional computers. Quantum computers are based on
ideas taken from quantum mechanics including superposition, entanglement, tunnelling, and annealing. Numerous industries,
including artificial intelligence, cryptography, and communication, have found use for quantum computing. A novel family of
quantum algorithms has emerged to challenge the conventional wisdom on the limited applications of a quantum computer. In
addition to providing exponential speedups over current classical techniques, these new algorithms address a number of eminently practical problems, including machine learning tasks like clustering of data, classification of data into different classes, and finding patterns in massive amounts of available data. At the nexus of two of the most intriguing current fields
of research, conventional machine learning and quantum computing, lies quantum machine learning. Quantum machine
learning (QML) is an emerging field that merges the principles of quantum computing with machine learning, aiming to
leverage quantum computational power for complex data processing tasks. This paper provides a systematic categorization
of QML, structured around three fundamental dimensions: learning paradigms, noisy intermediate-scale quantum (NISQ)
devices, and fault-tolerant design. Initially, we classify QML techniques into supervised, unsupervised, and reinforcement
learning paradigms, elucidating how quantum approaches can enhance traditional machine learning tasks. Subsequently,
we examine the capabilities and constraints of NISQ devices, which represent the current generation of quantum hardware,
and their impact on the development and performance of QML algorithms. Finally, we delve into fault-tolerant quantum
computing, discussing the importance of error correction and robust algorithmic frameworks necessary for achieving reliable
and scalable quantum machine learning. By categorizing QML along these lines, we aim to provide a clear and comprehensive
overview of the field, highlighting the current state of research, practical implementations, and future challenges in advancing
quantum-enhanced machine learning.
Keywords Quantum machine learning (QML) · Classical machine learning (CML) · Machine learning (ML) ·
Quantum reinforcement learning (QRL) · Quantum support vector machines (QSVM) ·
Quantum principal component analysis (QPCA) · Quantum neural network (QNN) ·
Noisy intermediate-scale quantum (NISQ)
1 Introduction

The connection between the domains of AI and machine learning, as well as quantum information processing, poses substantial challenges and opportunities (Nguyen et al. 2024; Krenn et al. 2023; O’Quinn and Mao 2020). Although such fields of study are only now being widely recognized, their first principles were identified in the early phases of quantum computing. The domain of artificial intelligence employs several techniques that are mostly geared towards resolving issues that are challenging for computers but appear simple to humans. The area of machine learning, which emerged from the examination of patterns in view of artificial intelligence, deals with many algorithmic aspects of learning problems.

B Bisma Majid
[email protected]
1 Department of Information Technology, NIT Srinagar, J&K, India
2 Department of Mathematics, NIT Srinagar, J&K, India

The field of creating and applying quantum software for machine learning that can be faster in terms of speedup than
39 Page 2 of 55 Quantum Machine Intelligence (2025) 7:39
that of conventional computers is known as quantum machine learning (Biamonte et al. 2017). Some recent research has developed quantum algorithms which may potentially serve as the basis of machine learning programs, but there remain substantial hardware and software challenges in quantum machine learning (Cerezo et al. 2022). Many machine learning tasks, such as large-scale data classification, optimization of neural networks, and solving high-dimensional problems, stand to benefit from the exponential speedups promised by quantum algorithms (Farhi et al. 2014; Rebentrost et al. 2014).

The aim of this paper is to provide a systematic categorization of QML models using three pivotal dimensions: learning paradigms, NISQ suitability (Bharti et al. 2022; Wang and Liu 2024), and fault tolerance (Wang and Liu 2024). This classification is essential for mapping quantum advantages to specific ML tasks, such as classification, clustering, and decision-making. By doing so, we provide a clear understanding of how quantum algorithms can enhance or reimagine these traditional tasks. Given the constraints of NISQ devices (e.g., limited qubit counts, high error rates, and decoherence), the paper evaluates the feasibility of implementing QML models on existing quantum hardware. It highlights the types of algorithms that are resilient to noise and resource limitations, focusing on near-term applicability while balancing theoretical quantum advantages. For quantum systems to fully realize their potential, fault tolerance, achieved through error-corrected qubits, will be critical. The paper examines which QML models will benefit most from such systems, providing a roadmap for their long-term viability and performance. This perspective is essential as researchers prepare for the transition from NISQ to fault-tolerant quantum computers. A taxonomy of QML models is proposed, offering a unified structure to evaluate their feasibility and potential based on learning paradigms, hardware readiness, and fault-tolerance requirements. The paper offers practical insights into which QML approaches can be effectively implemented with current NISQ hardware and which should be reserved for fault-tolerant quantum systems. It analyzes the theoretical and practical advantages QML may offer over classical ML techniques, considering both computational complexity and data representation capabilities. This work serves as a foundational resource for researchers, developers, and practitioners in quantum computing and machine learning. By categorizing QML models along these three dimensions, the study enables the community to strategically focus on models that balance theoretical potential with practical feasibility, ultimately accelerating the development and adoption of QML in real-world applications.

The rest of the paper is organized as follows. In Section 2, we provide the background, offering a foundational understanding of classical machine learning and quantum machine learning and their key concepts. Section 3 outlines the methodology for literature identification, detailing the approach taken to select and analyze relevant works. In Section 4, we review the related work, examining prior research and its contributions to the field. Section 5 presents the motivation and contribution of our study, highlighting the need for a comprehensive categorization of quantum machine learning algorithms. Section 6 introduces the systematic categorization of quantum machine learning algorithms based on learning paradigms, NISQ suitability, and fault tolerance. In Section 7, we explore the applications of quantum machine learning in various domains. Section 8 discusses recent developments in quantum machine learning, covering advancements in algorithms and hardware. Section 9 offers a discussion on the implications of our findings, and finally, Section 10 concludes the paper with key takeaways and future directions.

2 Background

In this section, we provide an overview of classical machine learning paradigms, their computational challenges, and how quantum machine learning (QML) offers a potential pathway to address these limitations while introducing new computational paradigms.

2.1 Classical machine learning

The idea of classical machine learning is to improve the performance of classical computers with experience. The three basic kinds of learning in classical machine learning are supervised learning, unsupervised learning, and reinforcement learning. Data analysis and data mining tasks are the fundamental building blocks of supervised and unsupervised learning, while interaction-based learning, or reinforcement learning, improves step by step. Contemporary machine learning thus deals with learning from interaction, as in reinforcement learning, or from data, as in supervised learning (mainly used for data classification) and unsupervised learning (used for data clustering), as shown in Fig. 1. In the subsections that follow, we discuss each type of learning.

2.1.1 Supervised learning

A set of training data D with various input–output pairs (x, t) is given to us in supervised learning. Generally speaking, the input x is an n-dimensional vector which represents n input features. The main aim of supervised learning is to forecast the output. In other words, we need to infer the
mapping between the inputs and outputs of the given dataset D. For any input x in D, the goal is to anticipate the output t̂(x). Twitter sentiment analysis is a good illustration of supervised machine learning: the classifier labels tweets as positive or negative based on a training set of tweets that were labeled as positive or negative.

The loss function is used to measure the precision of the mappings learned by the supervised algorithm. The distance between the predicted value t̂(x) and the actual output t can be measured using a variety of loss functions, depending on the situation. The goal is to make this loss function, f(t̂(x), t), as small as possible. The probability density function p(x, t) can be estimated in three steps:

a. Selection of model: The distribution function, assuming some parameter vector θ, can be considered to belong to a family of functions. This is known as inductive bias. To specify a certain parametric family of density functions, there are primarily two options: the predictive distribution p(t | x, θ) is directly parametrized for discriminative models, whereas we define the joint distribution family p(x, t | θ) for generative models.

b. Learning: In the second stage of supervised learning, the aim is to minimize the loss function f(t̂(x), t) over the training set D. In this way, we learn the parameter θ and by extension determine the model’s density from the family of distributions parameterized by θ.

c. Testing or inference: In the last phase, the learnt model is applied to forecast the output t̂(x) so as to keep the loss small. The discriminative model provides the predictive distribution immediately, whereas if a generative model was chosen in step a, marginalization must be used.

2.1.2 Unsupervised learning

Contrary to supervised learning, this class of learning does not have labeled data points. There are a number of unlabeled
inputs x in the training set D. The purpose of this procedure is to draw out meaningful properties from this data. The following tasks are of interest:

a. Density estimation: Here, we attempt to directly estimate the probability distribution p based on the training set.

b. Clustering: Based on the data’s commonalities and differences, we might choose to group the information into different clusters. The specific instance at hand and the application you are working on determine what similarity and difference mean in this situation. Consequently, even though the input data collection lacked any labels or classifications, by using clustering we are able to split the data points into groups.

c. Dimensionality reduction: In order to better visualize the correlations between multiple components, this method entails representing the data point xn in another space. This representation is typically made in a space with fewer dimensions, since that makes it easier to extract the connections between the different components.

d. Generation of new samples: We may want to create fresh samples from the data set D that roughly imitate the probability density of the unlabeled training data. An appropriate illustration is how sports analysts and scientists use such samples to forecast the behavior of sportsmen.

Similar to supervised learning, this also involves three steps: choosing a model, learning it, and applying it to cluster data or create new examples.

2.1.3 Reinforcement learning

The goal of reinforcement learning is to optimize a parameter while making decisions sequentially. Reinforcement learning differs from supervised learning in that the latter has a training set D with input/output pairings, where the output is an accurate description of the input with a minimum value for the loss function. In reinforcement learning, there is no need to instantly correct suboptimal paths taken by the algorithm. As there is no immediate correct response to an input, reward-based supervision evaluates whether the steps taken move towards the required target or not. Reinforcement learning can thus be considered as learning that lies between supervised and unsupervised learning.

Since there is no established result or outcome for each input in reinforcement learning, the algorithm receives feedback from the surrounding environment based on its actions. This happens only once the algorithm has determined an output for the specified input. The algorithm receives feedback on how successfully the selected steps or actions have aided or hampered the accomplishment of the objectives, in the form of positive and negative feedback. A Markov decision process is used to describe the algorithm’s environment.

2.1.4 Issues in classical machine learning that can be addressed with quantum machine learning

QML aims to leverage quantum computing’s unique properties to tackle some challenges in classical machine learning (CML). Here are key issues in CML that QML can address:

1. Scalability and high-dimensional data: Classical algorithms struggle with the exponential growth of data dimensions, leading to computational bottlenecks in tasks like feature selection, clustering, and optimization. Quantum algorithms, like quantum-enhanced support vector machines, exploit high-dimensional Hilbert spaces to encode and process data efficiently, offering potential exponential speedups (Rebentrost et al. 2014).

2. Model training complexity: Training large-scale models like deep neural networks can require immense computational power and time. Variational quantum circuits and hybrid quantum-classical models could reduce the complexity of training certain models by optimizing parameters more efficiently using quantum gradients or quantum-enhanced optimization techniques (Chen et al. 2020; McClean et al. 2016; Slabbert and Petruccione 2024).

3. Optimization problems: Many ML tasks rely on solving complex optimization problems, often encountering issues like getting stuck in local minima or requiring prohibitive computation time for large datasets. Quantum optimization algorithms, such as the quantum approximate optimization algorithm (QAOA), can provide better solutions to non-convex optimization problems faster than classical counterparts in specific scenarios (Farhi et al. 2014).

4. Kernel-based methods: Computing kernel matrices for non-linear feature mappings can be computationally expensive, especially for large datasets (Rebentrost et al. 2014). Quantum kernel estimation can compute complex similarity measures exponentially faster, enabling better performance for tasks like classification and regression (Wang et al. 2021).

5. Sampling from complex distributions: Generating samples from high-dimensional or complex probability distributions is computationally expensive, limiting methods like generative adversarial networks (GANs) or Bayesian inference. Quantum generative models (e.g., quantum GANs and quantum Boltzmann machines) naturally sample from quantum states, potentially providing an advantage in generating complex distributions (Gao et al. 2018; Amin et al. 2018; Crawford et al. 2019).
6. Data encoding and representation: Efficiently representing certain data types (e.g., molecular structures, quantum systems) in classical systems is challenging. Quantum systems can natively represent certain datasets more compactly and intuitively, such as in quantum chemistry or material simulations (Peruzzo et al. 2025).

7. Overfitting and generalization: Overfitting is a persistent problem in classical models, especially with small datasets. Quantum models, like variational circuits, may provide better generalization by leveraging the unique structure of quantum state spaces (Chen et al. 2020; McClean et al. 2016).

2.2 Quantum machine learning

In this section, we aim to comprehend how machine learning algorithms can be altered by utilizing quantum computing and information processing. We list a few recent advancements in this area at the conclusion of the section.

Machine learning is a specialized area of computer science whose focus is on finding patterns in data to interpret previously unintelligible inputs. In order to perform tasks that the human brain is naturally good at, like identifying patterns, recognizing speech, and optimizing picture-recognition strategies, the machine learning algorithms that are a part of both artificial intelligence and statistics tend to process data in large quantities (Fig. 2).

In our digital age, these tasks have become significantly more important. Google’s PageRank, an example of a machine learning algorithm for search engines invented by Larry Page et al. (1999), contributed to the growth of one of the largest information technology firms in the world and serves as an excellent illustration. Other significant uses of machine learning include spam filters, consumer behavior analysis, risk assessment in the financial industry, iris recognition, and creating strategies for video games. Machine learning, in summary, is used whenever computers are required to understand data in light of prior knowledge. With the aim of handling and processing big data, machine learning algorithms can be extremely effective when our chosen model is fed enormous numbers of previously gathered input–output pairs.

With the increase in popularity of quantum computing, numerous research and academic contributions investigate the value of utilizing the benefits of quantum computing for machine learning with the aim of enhancing machine learning algorithms (O’Quinn and Mao 2020). There are several efforts being made to create quantum versions of artificial neural networks that can be extensively employed in machine learning; however, the majority of these methods have a more biological foundation. In some approaches, the aim is to develop complete quantum algorithms that can be used to solve problems like pattern recognition. Other methods propose locating classical machine learning algorithm subroutines and running them on a quantum computer to increase the speedup. One more approach is adiabatic quantum machine learning, which can be used for some classes of optimization problems (Farhi et al. 2014). Models like Bayesian decision theory and hidden Markov models, which are stochastic, translate smoothly into the language of open quantum systems. A full quantum learning theory, that is, how quantum information might theoretically be applied to intelligent sorts of computers, is still in its very early stages, despite the topic’s growing popularity and the claims made.

To overcome the problems and hindrances faced by machine learning, we can take advantage of the efficiency of quantum computing. This can be accomplished by identifying the classical subroutines that are more expensive in terms of computation and making them compatible to execute and process on a quantum computer to achieve the speedup. With the advancement of quantum technologies, it is anticipated that quantum devices will become available for such applications and can be used to process the expanding amounts of data available globally. There are also methodologies that work the other way around, consisting of machine learning techniques which can help in improving and extending quantum information theory.

Fig. 2 Generalized block diagram for quantum machine learning
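The hybrid variational training loop referred to in Section 2.1.4 (items 2 and 7) can be illustrated classically. The following is a minimal NumPy sketch of our own, not code from any cited work: a one-qubit model f(x, θ) = ⟨Z⟩ for the state Ry(θ)Rx(x)|0⟩ is simulated as a statevector, and θ is trained by gradient descent using the parameter-shift rule, the same rule used to obtain exact circuit gradients on hardware. All function names and the toy dataset are illustrative assumptions.

```python
import numpy as np

# One-qubit rotation gates (statevector simulation).
def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def expval_z(x, theta):
    """Model output f(x, theta) = <psi|Z|psi> for |psi> = Ry(theta) Rx(x) |0>.

    Analytically this equals cos(theta) * cos(x)."""
    psi = ry(theta) @ rx(x) @ np.array([1.0, 0.0])
    return float(np.real(np.conj(psi) @ Z @ psi))

def parameter_shift_grad(x, theta):
    """Exact d<Z>/dtheta via the parameter-shift rule (shifts of +/- pi/2)."""
    return 0.5 * (expval_z(x, theta + np.pi / 2)
                  - expval_z(x, theta - np.pi / 2))

# Toy regression: targets y = cos(x), so the squared loss is minimized at theta = 0.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, 20)
ys = np.cos(xs)

theta = 1.5  # deliberately poor starting point
for _ in range(100):
    # d/dtheta of the mean squared error, with the circuit gradient from parameter shift.
    grad = np.mean([2.0 * (expval_z(x, theta) - y) * parameter_shift_grad(x, theta)
                    for x, y in zip(xs, ys)])
    theta -= 0.5 * grad  # classical outer loop updates the quantum parameter
```

On actual NISQ hardware each expectation value is estimated from repeated measurement shots, so both the model output and its gradient become noisy estimates; the structure of this hybrid quantum-classical loop, however, is unchanged.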
A quantum learning theory defines methods for processing information in quantum computing that learn the relation of input to output from the training data in order to optimize system parameters (Dunjko et al. 2016), or to find a “decision function” or “quantum strategy.” There are a number of unanswered questions regarding what an effective quantum learning process might entail. For instance, how can we effectively execute an optimization problem on a coherent, and hence reversible, quantum computer, when such problems are often resolved by gradient descent, an iterative and dissipative method? How can we use quantum states to convert and process crucial structural data, such as distance metrics, to its true value? How do we create a quantum physics-based decision-making plan? And last, is there a general method for quantum physics to theoretically accelerate some machine learning problems?

The representation of classical data by quantum machines is a relevant problem. In quantum computing, the most popular strategy is to represent conventional information as binary strings (x1, ..., xn) with xi ∈ {0, 1} for i = 1, ..., n, which can be straight away interpreted as n-qubit quantum states |x1, ..., xn⟩ from a 2^n-dimensional Hilbert space with basis |0...00⟩, |0...01⟩, ..., |1...11⟩; measurements can then be used to read out information. Existing machine learning algorithms, however, frequently rely on the internal organization of this data, such as the Euclidean distance as a comparison of two feature vectors. Seth Lloyd and his colleagues have suggested an alternative data representation that encodes classical information into the amplitudes of a normalized quantum state, leading to the definition

|ψ(x)⟩ = x⃗ / ‖x⃗‖.

The future of quantum machine learning may depend on the ability to represent and extract information in “truly quantum” ways in order to make use of the advantages of quantum physics without being constrained by traditional notions of data encoding. A relatively small number of contributions, despite emphasizing quantum machine learning algorithms, genuinely address the topic of how the learning process, the key strength and distinguishing characteristic of machine learning, can be effectively mimicked in quantum systems. From a quantum standpoint, learning techniques for parameter optimization in particular have not yet been accessed. For this, various quantum computing strategies can be researched. The challenging aspect would be to parameterize and gradually adjust the unitary transformations that constitute an algorithm in quantum computing based on unitary quantum gates. Numerous theories in that area have previously been researched; crucial tools might include quantum feedback control or quantum Hamiltonian learning. Adiabatic quantum computing could also make for a good learning paradigm. In conclusion, despite the fact that there is still much work to be done, quantum machine learning is a very promising new area of study with a wide range of prospective practical applications.

To develop and design quantum machine learning algorithms, two different approaches can be used, and most researchers fall somewhere in the middle of the two. The first approach seeks to convert classical models into the language of quantum mechanics in order to get algorithmic speedups. Reproducing the output of a certain model, such as a neural network, is the only objective, but only after “outsourcing” all or parts of the computation or subroutines of the machine learning algorithm to a quantum device. Significant knowledge of quantum algorithmic design is needed for this translational technique. The challenge is to put together quantum subroutines that replicate the results of the classical process while using few computational resources. While occasionally introducing some new techniques to the arsenal of quantum processes, learning does not here present a particularly novel issue. Instead, the computational difficulties that must be solved reflect more common mathematical issues like computing a non-linear function, inverting a matrix, or finding the optimum of a non-convex objective function. As a result, the limits of the speedups are essentially the same as in quantum computing generally.

The second approach departs from the limitations of well-known conventional machine learning models and explores other possible avenues that are still mostly uncharted. One starts with a quantum computer instead of a classical algorithm and considers what kind of machine learning model may match its physical limits, formal language, and claimed benefits. This might result in a completely new model, optimization goal, or even branch of machine learning that is based on the quantum computing paradigm. This strategy will be referred to as exploratory. The exploratory approach may use any system abiding by the laws of quantum mechanics to derive (and subsequently train) a model that is suited to learn from data, instead of necessarily relying on a digital, all-purpose quantum computer to implement quantum algorithms. Runtime speedups are a goal, but so is introducing novel approaches to the machine learning community. A thorough understanding of the nuances of machine learning is required for this, especially since the new model needs to be evaluated and benchmarked in order to reach its full potential (Fig. 3).
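The amplitude encoding |ψ(x)⟩ = x⃗/‖x⃗‖ discussed earlier in this section is easy to sketch at the statevector level. The short NumPy example below is our own illustration (the function name is hypothetical, and a classical simulation of course forfeits any quantum advantage): the entries of a length-2^n vector become the amplitudes of an n-qubit state, and the squared overlap of two encoded vectors is the similarity measure that quantum kernel methods estimate on hardware.

```python
import numpy as np

def amplitude_encode(x):
    """Return the amplitude vector of |psi(x)> = x / ||x||.

    A real vector of length 2**n becomes an n-qubit statevector; measuring it
    in the computational basis yields index i with probability (x_i/||x||)**2."""
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x)
    if norm == 0.0:
        raise ValueError("cannot encode the zero vector")
    return x / norm

# Example: a 4-dimensional data point needs only 2 qubits.
x = np.array([3.0, 0.0, 0.0, 4.0])
psi = amplitude_encode(x)   # amplitudes (0.6, 0.0, 0.0, 0.8)
probs = psi ** 2            # measurement probabilities, summing to 1

# Squared overlap |<psi(x)|psi(y)>|^2: the fidelity-based similarity that
# quantum kernel estimation evaluates on a quantum device.
y = np.array([1.0, 1.0, 1.0, 1.0])
kernel_xy = float(abs(amplitude_encode(x) @ amplitude_encode(y)) ** 2)
```

Note that n qubits hold 2^n amplitudes, which is the compactness argument behind amplitude encoding; efficiently loading arbitrary classical data into such a state is, however, a nontrivial problem in its own right.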
Algorithms” OR “Quantum Computing Algorithms”). We focused on the same query to retrieve the documents across the different databases.

In addition, the study narrowed the publication years to capture the most relevant research conducted from 2019 to 2024. This refinement ensured the inclusion of studies aligned with current advancements and ongoing trends in QML.

3.3 Growth in publications from the past 5 years

A critical analysis of the growth in publications over the past 5 years (2019–2024) was conducted to evaluate the increasing focus on quantum machine learning. After refining the query, the total number of publications and the number of publications from 2019 to 2024 were extracted for each database. The results indicated a marked increase in the volume of QML-related publications in the past 5 years, as shown in Fig. 6. This was particularly evident in databases like SCOPUS and Google Scholar, which showed substantial contributions in recent years, reflecting the growing interest in the field. The analysis shows a significant contribution of the past 5 years (2019–2024) to the overall publications across the databases: SCOPUS with 96.4% of its total publications falling in this period, Google Scholar with 94.9%, IEEE Xplore with 49.6%, and ACM with 31.3%. This highlights a strong recent focus in SCOPUS and Google Scholar, while IEEE Xplore and ACM show a more distributed contribution over the years. The data emphasizes the growing research activity in the past 5 years across these platforms. We also analyzed the Google Scholar database, which shows consistent growth from 6 publications in 2018 to 166 in 2024, as shown in Fig. 7. The numbers rose sharply during 2020–2023, with significant increases each year, peaking at 158 in 2023. This highlights growing academic interest in QML, driven by advancements in quantum computing.

3.4 Selection criteria for study inclusion

To ensure a rigorous and focused review, a set of inclusion and exclusion criteria was established to select studies relevant to QML. These criteria were applied consistently across all retrieved publications to filter and analyze the most pertinent studies. The details are summarized in Table 1 below.

To ensure a comprehensive and systematic review of the QML literature, we conducted an extensive literature search across multiple databases and platforms, including IEEE Xplore, arXiv, Scopus, and Google Scholar. The search queries combined terms related to the key dimensions of the study, including quantum machine learning, classification of quantum machine learning, quantum machine learning algorithms suitable for NISQ, error-corrected quantum algorithms, and fault-tolerant quantum machine learning algorithms. To ensure relevance and quality, we included peer-reviewed
journal articles, conference papers, and preprints focusing on QML models or applications, particularly those discussing learning paradigms, NISQ devices, or fault-tolerant quantum systems. Foundational papers introducing key quantum algorithms or frameworks relevant to machine learning were also prioritized. Conversely, we excluded papers unrelated to QML or lacking direct relevance to the dimensions of categorization (learning paradigms, NISQ suitability, fault tolerance), non-English publications without accessible translations, and articles focused solely on classical machine learning or unrelated quantum computing topics. The screening and selection process began with an initial review of titles and abstracts to filter out irrelevant works, followed by a detailed full-text review to assess the relevance of the remaining articles to the study’s dimensions. Selected papers were then categorized based on their primary focus into three key areas: learning paradigms (e.g., supervised, unsupervised, reinforcement learning), NISQ suitability (e.g., noise-resilient algorithms, hardware feasibility), and fault tolerance (e.g., scalable and error-corrected models).
Table 2 Summary of related work

Rebentrost et al. (2014). Title: Quantum support vector machine for big data classification. Methodology: evaluation of inner products using a quantum algorithm; matrix inversion algorithm. Key focus: design of a support vector machine exploiting quantum advantage.

Landman (2021). Title: Quantum algorithms for unsupervised machine learning and neural networks. Methodology: quantum computing and quantum algorithms. Key focus: design of unsupervised machine learning algorithms and neural networks in quantum computing.

Cerezo et al. (2022). Title: Challenges and opportunities in quantum machine learning. Methodology: quantum computing and quantum algorithms. Key focus: design of unsupervised machine learning algorithms and neural networks in quantum computing.

Parthasarathy and Bhowmik (2021). Title: Quantum optical convolutional neural network: a novel image recognition framework for quantum computing. Methodology: quantum optical neural networks (QONNs). Key focus: a QOCNN model that achieves accuracies similar to the benchmark models and outperforms them in robustness.

Duan et al. (2020). Title: A survey on HHL algorithm: from theory to application in quantum machine learning. Methodology: quantum SVM, quantum PCA. Key focus: review of the standard HHL algorithm and its uses in the context of quantum machine learning.

Lloyd et al. (2013). Title: Quantum algorithms for supervised and unsupervised machine learning. Methodology: quantum k-means. Key focus: the best known classical algorithm takes O(poly(M N)) to assign an N-dimensional vector to one of M clusters; the same problem takes O(log(M N)) on a quantum computer.

Dong et al. (2008). Title: Quantum reinforcement learning. Methodology: quantum theory and reinforcement learning. Key focus: in QRL, a state or action is represented as an eigenstate or eigenaction. Following the collapse postulate of quantum measurement, a superposition state is used to represent the state (action), and the eigenstate (eigenaction) is obtained by measuring the simulated quantum state. The probability amplitude, updated in accordance with the reward function and value function, determines the likelihood of each eigenstate (eigenaction). As a result, QRL offers a worthwhile compromise between exploration and exploitation and can accelerate learning.

Trugenberger (2002). Title: Quantum pattern recognition. Methodology: quantum associative memory. Key focus: by tuning a parameter that acts as an effective temperature, one can fine-tune the accuracy of pattern recall. The well-known capacity shortage of traditional associative memories is resolved by this architecture, offering an exponential increase in capacity; the price to be paid is the probabilistic nature of information retrieval, which this model and our own brains have in common.

Kordzanganeh et al. (2023). Title: Benchmarking simulated and physical quantum processing units using quantum and hybrid algorithms. Methodology: platforms such as QMware and AWS SV1 were tested for their ability to handle quantum circuits of varying sizes; devices such as Rigetti's Aspen-M2 and IonQ's Harmony were analyzed for runtime performance and fidelity; performance was studied across a range of circuit sizes (up to 40 qubits for simulators and 34+ qubits for physical devices); both gate-based quantum circuits and hybrid workflows combining classical and quantum computations were tested. Key focus: establishing a clear understanding of runtime and accuracy differences between simulated and physical QPUs; identifying scenarios where simulators or physical devices excel, particularly in terms of qubit scalability and hybrid algorithm implementation; exploring how current platforms can address industry challenges despite hardware constraints such as noise and limited fidelity; analyzing how the number of qubits impacts computational performance across different platforms.
component analysis (PCA) (Wang et al. 2024). This could lead to faster data analysis and better extraction of meaningful features from complex datasets.
3. Optimization problems: Algorithms like the QAOA and the variational quantum eigensolver (VQE) are designed to solve optimization problems more efficiently than classical algorithms in certain cases (Farhi et al. 2014; Tilly et al. 2022; Mesman et al. 2024). This capability is particularly useful in fields such as materials science, chemistry, and logistics, where optimization problems are prevalent.
4. Quantum interference and entanglement: Quantum neural networks (QNNs) and quantum support vector machines (QSVMs) may leverage quantum interference and entanglement to process and recognize patterns in data more effectively than their classical counterparts (Dhaulakhandi 2023; Farhi and Neven 2018; Sagingalieva et al. 2022; Akpinar et al. 2024). These algorithms can potentially provide higher accuracy and robustness in classification tasks.
5. Simulation of quantum systems: Quantum computers are naturally suited for simulating quantum systems, which are computationally intensive tasks for classical computers. Applications in quantum chemistry and materials science benefit from quantum simulations that can predict molecular properties or simulate complex materials with higher fidelity and speed (Arute et al. 2019) (Tables 3 and 4).

In summary, QML is expected to outperform classical ML in scenarios involving tasks such as large-scale optimization, complex data analysis in high-dimensional spaces, and simulation of quantum systems. As quantum computing technology advances, these advantages may become more pronounced, leading to transformative impacts across various fields where computational efficiency and accuracy are critical.

However, it is important to note that quantum computers are still in their early stages of development, operating with limited qubits and high error rates compared to classical computers. At present there is no way to execute quantum algorithms on noise-free systems, and the tools available for demonstrating a quantum advantage remain very limited. Therefore, the practical realization of these advantages is currently constrained by the capabilities of current quantum hardware and the complexity of developing quantum algorithms.
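To make the variational idea behind QAOA and VQE concrete, the following is a minimal single-qubit VQE sketch in plain NumPy. The Hamiltonian H = (X + Z)/sqrt(2) and the Ry ansatz are illustrative choices, not taken from any of the cited works, and the classical outer loop is a simple grid search rather than a production optimizer.

```python
import numpy as np

# one-qubit VQE sketch: ansatz Ry(theta)|0>, toy Hamiltonian H = (X + Z)/sqrt(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # exact eigenvalues: -1 and +1

def energy(theta):
    """Variational energy <psi(theta)|H|psi(theta)> for the Ry ansatz."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

# classical outer loop: a simple grid search over the single parameter
thetas = np.linspace(0, 2 * np.pi, 1001)
energies = [energy(t) for t in thetas]
print(round(min(energies), 3))    # -> -1.0, matching the exact ground energy
```

On real hardware the energy would be estimated from repeated measurements and the outer loop would use a gradient-based or gradient-free optimizer; the variational principle is the same.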
Table 4 Components where QML outperforms classical ML vs. components where classical ML outperforms QML (columns: Ref, Problem, Components where QML outperforms classical ML, Components where classical ML outperforms QML)
ing QML, ensuring that the development of algorithms aligns with the capabilities of both current and future quantum hardware. This will guide efforts to refine and implement QML algorithms for practical use.

7 Systematic categorization of quantum machine learning

With the aim of simplifying the study of the various quantum machine learning algorithms, we categorize QML algorithms based on three criteria:

1. Quantum machine learning algorithms based on learning paradigms (Martín-Guerrero and Lamata 2022; Khan and Robles-Kelly 2020; Simeone et al. 2022).
2. Quantum machine learning algorithms based on NISQ suitability (Torlai and Melko 2020; Dhaulakhandi 2023; Cemin et al. 2024).
3. Quantum machine learning algorithms based on fault-tolerant design (Fig. 8).

7.1 Categorization of quantum machine learning based on learning paradigms

7.1.1 Quantum supervised learning

Quantum supervised learning involves using quantum computing principles to enhance or perform tasks related to supervised learning, where the algorithm learns from labeled training data to make predictions or classifications on new, unseen data points. The principles of quantum supervised learning are given below:

1. Quantum state representation: In quantum supervised learning, data points (features and labels) can be encoded as quantum states using qubits. Quantum states allow for representation in higher-dimensional spaces, potentially capturing more complex relationships in the data.
2. Quantum feature mapping: Quantum algorithms can perform feature mapping by transforming classical data into a quantum state or embedding classical features into a quantum space. This mapping can exploit quantum properties like superposition and entanglement to enhance pattern recognition capabilities.
3. Quantum measurement and prediction: Quantum algorithms can measure quantum states to extract information about the underlying data distribution, facilitating tasks such as regression or classification. Quantum measurements provide probabilistic outcomes that can be interpreted to make predictions.

Quantum support vector machine The support vector machine (SVM) is one of the best-known machine learning algorithms, used for solving both classification and regression tasks. An SVM can be expressed as a quadratic programming problem; in this formulation it can be solved in time proportional to O(log(1/ε) poly(N, M)), where N is the dimension of the feature space, M is the number of data points in the dataset, and ε is the accuracy. The complexity of the classical SVM algorithm thus depends on both the number of features and the number of data points.

Rebentrost et al. (2014) proposed a quantum support vector machine algorithm in which both the training and classification stages can be implemented in O(log(N M)) time. The key stages involved in QSVM are as follows:

1. Input data: Classical data points x_i are provided together with their corresponding labels y_i.
2. Feature mapping: Quantum gates and circuits transform the classical data into quantum states |φ_i⟩, often using quantum feature maps tailored to the problem at hand.
3. Quantum kernel computation: Quantum circuits evaluate the kernel function K(x_i, x_j) using quantum operations, which can include techniques such as quantum phase estimation or amplitude estimation.
4. Training: The quantum kernel evaluations are used to optimize parameters through classical optimization algorithms, aiming to define the support vectors and decision boundaries that maximize classification accuracy.
5. Measurement and classification: Quantum measurements on the final quantum states provide probabilistic outcomes, which are interpreted to assign class labels to new data points.

The improvement in performance in terms of N is explained by the fast quantum evaluation of inner products, while the improvement in terms of M is obtained by reformulating the SVM as an approximate least-squares problem; the SVM can then be solved with a matrix inversion algorithm (Figs. 9 and 10).

The oracle for the training data in the quantum setting returns quantum vectors |x_j⟩ = (1/|x_j|) Σ_{k=1}^{N} (x_j)_k |k⟩, where the norms |x_j| and the labels y_j are given. The performance of a quantum machine learning algorithm is measured relative to this oracle, which can be considered a lower bound for the true complexity (Aaronson 2009). Quantum RAM can be used to construct these states efficiently: the hardware resources used by this approach are O(M N), yet only O(log(N M)) operations are required to access the states (Lloyd et al. 2013). The
Fig. 8 Categorization of quantum machine learning algorithms based on learning paradigms, NISQ suitability, and fault-tolerant design
kernel matrix can be evaluated by performing inner products, with the help of which we obtain a complexity of O(log(1/ε)(M^3 + M^2 log N)) for the SVM.

In the dual formulation of the SVM and its least-squares reformulation, the kernel plays a crucial role. An efficient quantum method for the direct preparation and exponentiation of the normalized kernel matrix K̂ = K/tr(K) is as follows:

1. The state (1/√M) Σ_{i=1}^{M} |i⟩ is used to invoke the training data oracles.
2. The state |χ⟩ = (1/√N_χ) Σ_{i=1}^{M} |x_i| |i⟩|x_i⟩, where N_χ = Σ_{i=1}^{M} |x_i|^2, can be prepared in quantum parallel in O(log(N M)) time.
3. We may obtain the required kernel matrix as a quantum density matrix by discarding the training set register.
Quantum decision trees The quantum counterpart of the classical decision tree is a rooted, directed tree R whose root node has no incoming edges; every other node has exactly one incoming edge. A node with outgoing edges, representing an attribute a_i ∈ A, is known as an internal node (also called a test node). Any remaining nodes belong to C, the set of class states, and are known as leaves or decision nodes. The role of an internal (test) node in a quantum decision tree (Khadiev et al. 2019) is to apply a particular discrete function to the input attribute states and divide the training dataset into two or more subgroups. There are different approaches for constructing decision trees, such as CART, ID3, C4.5, and C5.0; in Khadiev et al. (2019), the C5.0 algorithm is used as the classifier decision tree. The complexity of this algorithm is O(h d (N M + N log M)), where h is the height of the tree, N is the number of data points in the dataset, d is the dimension of the feature space, and M is the number of classes. The quantum counterpart, the quantum decision tree, can be implemented in O(h log(d) √d log N) time using Grover's search algorithm (Kwiat et al. 2000). The steps involved in quantum decision trees are as follows:

1. Initial quantum state preparation: This step involves preparing an initial quantum state, often initialized to a superposition of all possible inputs or states.
2. Quantum oracle (decision node): This represents a decision point where a quantum oracle performs a transformation on the quantum state based on the input data or conditions. It may use quantum algorithms to encode decision rules probabilistically.
3. Measurement and outcome: After interacting with the quantum oracle, a measurement is made on the quantum state to obtain a probabilistic outcome. This outcome guides the branching in the decision tree.
4. Branching and decision making: Based on the outcome of the measurement, the algorithm branches into different paths corresponding to possible decisions or outcomes.
5. Recursive branch processing: The process may continue recursively, with each branch potentially encountering further decision nodes or oracle operations until a final decision is reached.
6. Final decision output: Ultimately, the algorithm outputs a decision or result based on the combined probabilistic outcomes from the quantum states and the decisions made throughout the tree structure.

The two main approaches used for quantum decision trees, which have resulted in improvements over classical decision trees, are Durr and Hoyer's (1996) algorithm for maximum search and the amplitude amplification algorithm (Kwiat et al. 2000). When the two algorithms are combined, they have the following properties:
1. Consider a function f : {1, ..., K} → R where the running time of computing f(x) is T(K). There is a quantum algorithm that finds the x_0 maximizing f(x_0); the expected running time of the algorithm is O(√K · T(K)), and the probability of success is 1/2.
2. The quantum counterpart for decision trees, the quantum decision tree, can be implemented in O(h log(d) √d log N) time. The probability of success of quantum C5.0 is O((1 − 1/d)^k), where k is the number of internal nodes in the decision tree. This technique exhibits an approximately quadratic speedup with respect to the number of dataset attributes.

Quantum states are used in quantum decision trees (Lu and Braunstein 2014) to build classifiers in machine learning. The root of the decision tree, which connects to a number of leaves, is a starting node with outgoing edges but no incoming edges. As we progress down the structure, the response to each question is classified. Each node has a decision function that determines how the input vector proceeds along the branches and leaves. The quantum decision tree gains knowledge from a collection of training data: each node divides the training data set into subsets based on discrete functions, and each leaf is assigned to a class based on the target attribute state. The data is therefore categorized by the quantum decision tree from the root down to the final leaf.

Algorithm 1 Quantum decision tree.
function ChooseSplit(χ)
  max_attr ← −1, max_split ← (∅, ∅), max_val ← −1
  for each r ∈ (1, ..., log2 d) do
    val, split ← QuantumMax((1, ..., d), ProcessAttribute)
    if val > max_val then
      max_val ← val, max_attr ← r, max_split ← split
    end if
  end for
  return max_attr, max_split
end function

Quantum linear regression Quantum linear regression is an application of quantum computing to perform linear regression, a fundamental method in statistical analysis and machine learning. By leveraging quantum algorithms, certain computational tasks related to linear regression can be made more efficient than with classical methods. Quantum linear regression solves the same problem as its classical counterpart but aims to do so faster, particularly for large datasets or high-dimensional data. Some key quantum approaches include the following:

Quantum matrix inversion Many quantum linear regression algorithms leverage the Harrow-Hassidim-Lloyd (HHL) algorithm (Lloyd 2010), which provides an exponential speedup for solving linear systems of equations under certain conditions. This is particularly useful for inverting matrices, a key step in linear regression.

Quantum state preparation Efficient preparation of the quantum states representing the data matrix X and target vector y is crucial. Techniques vary based on the structure and properties of the data (Tables 5 and 6).

Quantum measurement After performing the quantum operations, measurements are required to extract the desired coefficients β. This step often involves sampling from the quantum state to estimate the values (Fig. 11 and Table 7).

Algorithm 2 Quantum linear regression.
1: Input:
2: Data matrix X (m × n)
3: Target vector y (m × 1)
4: Output:
5: Regression coefficients β (n × 1)
6: Step 1: Data preparation
7: Normalize the data matrix X and target vector y
8: Compute X^T X and X^T y (classically)
9: Encode X^T X into a quantum state |A⟩
10: Encode X^T y into a quantum state |b⟩
11: Step 2: Quantum matrix inversion using HHL
12: Apply the HHL algorithm to solve A|β⟩ = |b⟩
13: Perform quantum phase estimation on |A⟩
14: Apply controlled rotations based on the eigenvalues
15: Perform inverse quantum phase estimation
16: Extract the state |β⟩ representing the solution
17: Step 3: Measurement
18: Measure the quantum state |β⟩ to obtain the classical regression coefficients β
19: Step 4: Post-processing
20: Normalize the measured coefficients β
21: Return β as the final regression coefficients

In quantum linear regression using the least-squares approach (Wang 2017), a quantum algorithm is provided to fit a linear regression equation to a given dataset. The approach focuses on the machine learning task of predicting the output corresponding to a new input given example data points, unlike comparable previous contributions, which suffer from the problem of reading out the optimal parameters of the fit. The approach is further extended to handle non-sparse data matrices that admit low-rank approximations, greatly improving the dependence on the condition number. The prediction result can be used in further quantum information processing procedures or obtained by a single-qubit measurement. If the data is supplied to the routine as quantum information, the algorithm's runtime is logarithmic in the dimension of the input space.
Quantum neural network Quantum neural network models use deep supervised learning to teach computers to spot patterns in data or images with a feed-forward network. The fundamental idea is to create circuits using rotation gates and qubits that run the network similarly to the neurons and weights used in a classical neural network. A collection of training examples is used to train the network, with a label value included for each input string. The network's job is to determine the dataset's label value and minimize the difference between the obtained label and the actual label. Finding the training parameters that produce the least error is the main goal, and the training parameters are updated on each iteration. The backpropagation approach, which is based on the gradient descent principle, minimizes the errors.

The structure of a QNN differs significantly from classical neural networks due to its reliance on quantum computing
Table 6 Summary of recent research on quantum support vector machines (QSVMs), highlighting datasets, quantum modules, contributions, and limitations

Akpinar et al. (2024). Datasets: Wisconsin Breast Cancer Dataset, The Cancer Genome Atlas (TCGA) Glioma. Quantum module: quantum kernel-based SVM module. Key contributions: assessed quantum kernel performance for SVMs in medical datasets, showing the influence of quantum kernels on classification accuracy. Weaknesses: limited scalability; results depend on a simulator rather than real quantum hardware.

Sharma et al. (2024). Dataset: synthetic quantum state data. Quantum module: quantum state preparation and measurement. Key contributions: introduced cross-domain classification for quantum state data using QSVMs, demonstrating its potential for various quantum computing scenarios. Weaknesses: reliance on synthetic data; lacks real-world applications.

Chen et al. (2024). Dataset: MNIST. Quantum modules: cuTensorNet library, quantum kernel computation, GPU-accelerated QSVMs. Key contributions: demonstrated an exponential-to-quadratic reduction in computational cost for QSVM simulations using cuTensorNet, achieving 95% classification accuracy on the MNIST dataset. Weaknesses: simulations become infeasible for qubit counts over 50 without hardware; relies heavily on GPUs and optimized libraries for performance.
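The quantum kernel modules surveyed in Table 6 ultimately reduce to estimating overlaps between encoded states. A standard primitive for this is the swap test, simulated below with explicit state vectors in NumPy; this is a sketch of the textbook circuit, not of any specific paper's implementation.

```python
import numpy as np

def swap_test_p0(a, b):
    """Probability of measuring the ancilla in |0> in a swap test between
    (normalized copies of) |a> and |b>: p0 = 1/2 + |<a|b>|^2 / 2."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    n = len(a)
    state = np.kron(np.array([1.0, 0.0]), np.kron(a, b))   # |0>|a>|b>
    # Hadamard on the ancilla
    state = state.reshape(2, n * n)
    state = np.array([state[0] + state[1], state[0] - state[1]]) / np.sqrt(2)
    # controlled-SWAP of the two data registers when the ancilla is |1>
    branch1 = state[1].reshape(n, n).T.reshape(-1)
    state = np.array([state[0], branch1])
    # second Hadamard on the ancilla, then measure it
    state = np.array([state[0] + state[1], state[0] - state[1]]) / np.sqrt(2)
    return float(np.sum(state[0] ** 2))

a, b = np.array([1.0, 0.0]), np.array([1.0, 1.0])
p0 = swap_test_p0(a, b)
overlap_sq = 2 * p0 - 1            # recover |<a|b>|^2 from the measured statistic
print(round(overlap_sq, 3))        # -> 0.5, since |<a|b>|^2 = 1/2 here
```

Repeating the test and averaging the ancilla outcomes estimates a single kernel entry; on hardware, sampling noise in this estimate is one source of the accuracy limitations noted in the table.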
principles. The typical components and structure of a quantum neural network are given below:

Qubits as neurons

• Qubit encoding: Qubits serve as the fundamental units, analogous to neurons in classical neural networks. These qubits can exist in superpositions of states, allowing for parallel processing and the potential to encode richer information than classical bits.
• Quantum gates: Similar to classical neural networks, where activation functions process inputs and produce outputs, quantum gates perform operations on qubits. These include Hadamard gates, CNOT gates, phase gates, etc. The gates manipulate the quantum states of qubits to perform transformations essential for computation and learning (Table 8 and Fig. 12).

Quantum circuit layers

• Layered structure: QNNs are typically organized into layers of quantum gates.
• Depth: The depth of a QNN refers to the number of layers or the sequential arrangement of gates within a layer.
• Parameterization: Parameters associated with quantum gates are adjusted during training to optimize the network's performance.
• Training: Optimization methods, often adapted from quantum optimization or classical optimization tech-
Table 7 Comparison between classical decision trees and quantum decision trees

Basic operation - Classical: sequential decision-making based on if-else rules. Quantum: utilizes quantum states and superposition.
Representation - Classical: tree structure with nodes and branches. Quantum: similar tree structure, but operates on qubits.
Decision making - Classical: deterministic. Quantum: probabilistic, leveraging interference and superposition.
Complexity - Classical: computationally manageable. Quantum: potentially faster for specific types of problems.
Parallelism - Classical: limited by sequential processing. Quantum: exploits parallelism through quantum superposition.
Speed - Classical: depends on the depth and complexity of the tree. Quantum: could potentially solve certain problems faster.
Training requirements - Classical: requires labeled data and iterative learning. Quantum: training involves quantum algorithms and states.
Applications - Classical: widely used in machine learning applications. Quantum: emerging field with applications in optimization and machine learning.
Practical implementations - Classical: well-established with various algorithms. Quantum: limited practical implementations due to quantum hardware constraints.
Error correction - Classical: error-prone due to noise and data variance. Quantum: quantum error correction required due to quantum decoherence.
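The speedup claimed for quantum decision trees rests on Grover-style amplitude amplification, which also underlies the Durr-Hoyer maximum search used to choose splits. Below is a minimal state-vector simulation of one Grover search in NumPy; the problem size and marked index are illustrative choices.

```python
import numpy as np

def grover_search(n_items, marked, n_iters):
    """Simulate Grover's algorithm on an explicit state vector: phase-flip the
    marked index (oracle), then invert all amplitudes about their mean
    (diffusion operator 2|s><s| - I), repeated n_iters times."""
    psi = np.full(n_items, 1 / np.sqrt(n_items))   # uniform superposition |s>
    for _ in range(n_iters):
        psi[marked] *= -1                          # oracle call
        psi = 2 * psi.mean() - psi                 # inversion about the mean
    return psi

n = 64
k = int(np.round(np.pi / 4 * np.sqrt(n)))   # near-optimal iteration count (6 here)
psi = grover_search(n, marked=42, n_iters=k)
probs = psi ** 2
print(int(np.argmax(probs)), round(float(probs[42]), 3))   # -> 42 0.997
```

About sqrt(n) oracle calls suffice instead of the ~n a classical scan needs; Durr-Hoyer wraps this search in a threshold-raising loop to return a maximum, which is where the quadratic speedup over attribute scans comes from.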
Table 8 Summary of recent research on quantum decision trees, highlighting datasets, quantum modules, contributions, and limitations

Sharma et al. (2023). Datasets: Haberman's Cancer Survival Dataset, Wheat Seed Dataset, Wisconsin Breast Cancer Dataset. Key contributions: proposes a quantum-inspired decision tree that uses fidelity as the splitting criterion, achieving balanced trees compared to classical approaches. Weaknesses: limited exploration of scalability to larger datasets and of computational efficiency for high-dimensional data.

Kumar et al. (2024). Datasets: PIMA, Spambase, Blood, Boston Housing. Key contributions: 1. a quantum algorithm (Des-q) for efficient retraining of decision trees in regression and binary classification tasks; 2. achieves logarithmic complexity for retraining with small, periodic data increments; 3. uses quantum supervised clustering based on the q-means algorithm; 4. demonstrates significant speedup over classical methods while maintaining competitive performance. Weaknesses: the quantum advantage is demonstrated in simulations, but practical implementation on quantum hardware may be limited by current hardware constraints.
niques, are used to train QNNs by adjusting these parameters to minimize a defined loss function.

Measurement and output

• Quantum measurement: At the end of the computation, measurements are performed on specific qubits to extract classical information.
• Output encoding: The outcome of the measurements represents the network's prediction or classification, analogous to the output layer in classical neural networks (Tables 9 and 10).

Algorithm 3 Quantum neural network.
1: Input: Classical data X = {x_1, x_2, ..., x_n}, labels Y = {y_1, y_2, ..., y_n}, quantum circuit parameters θ, learning rate α, number of iterations T
2: Output: Trained quantum circuit parameters θ*
3: for iteration = 1 to T do
4:   Initialize cost C = 0
5:   for each training sample (x_i, y_i) in (X, Y) do
6:     Encode classical data x_i into quantum state |ψ(x_i)⟩
7:     Apply quantum circuit U(θ) to |ψ(x_i)⟩ to get |φ⟩ = U(θ)|ψ(x_i)⟩
8:     Measure the quantum state |φ⟩ to get prediction ŷ_i
9:     Compute cost C_i = Loss(y_i, ŷ_i)
10:    Update total cost C = C + C_i
11:  end for
12:  Compute gradient ∇_θ C using the parameter-shift rule
13:  Update parameters θ = θ − α ∇_θ C
14: end for
15: return θ* = θ

Quantum perceptron Perceptrons are the fundamental building blocks of artificial neural networks. They are used to model the activation mechanism of an output neuron, which is
Table 9 Comparison between classical linear regression and quantum linear regression

Data representation - Classical: classical data (real-valued vectors). Quantum: quantum states (encoded into qubits).
Computation basis - Classical: deterministic, classical bits. Quantum: probabilistic, qubits (superposition and entanglement).
Solution method - Classical: ordinary least squares, gradient descent. Quantum: quantum algorithms (e.g., the Harrow-Hassidim-Lloyd algorithm).
Speedup potential - Classical: linear time complexity O(n). Quantum: potential exponential speedup for large systems.
Hardware requirements - Classical: classical computers, CPUs/GPUs. Quantum: quantum computers, quantum processors.
Scalability - Classical: limited by classical computation resources. Quantum: potentially more scalable, with quantum advantage for large datasets.
Noise and errors - Classical: relatively stable, susceptible to overfitting or underfitting. Quantum: sensitive to quantum noise and decoherence.
Applications - Classical: finance, biology, economics, etc. Quantum: quantum machine learning, optimization problems, large-scale data analysis.
Maturity - Classical: well-established, widely used across industries. Quantum: emerging field, ongoing research and development.
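Line 12 of Algorithm 3 relies on the parameter-shift rule, which for a single Ry rotation measured in Z is exact. The minimal NumPy sketch below shows this on a one-parameter toy circuit; the target label and learning rate are illustrative, and this is not a full implementation of Algorithm 3.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expval_z(theta):
    """<Z> after applying Ry(theta) to |0>; this plays the role of the measured
    QNN output (analytically it equals cos(theta))."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return psi[0] ** 2 - psi[1] ** 2

def parameter_shift_grad(theta):
    """Gradient d<Z>/dtheta from two shifted circuit evaluations."""
    return 0.5 * (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2))

# the rule reproduces the analytic derivative -sin(theta) exactly
assert abs(parameter_shift_grad(0.3) + np.sin(0.3)) < 1e-12

# gradient descent: steer <Z> toward the target label -1 (i.e., the state |1>)
theta, target, lr = 0.1, -1.0, 0.5
for _ in range(2000):
    grad = 2 * (expval_z(theta) - target) * parameter_shift_grad(theta)
    theta -= lr * grad
print(round(expval_z(theta), 2))   # -> -1.0
```

The shifted evaluations are ordinary circuit runs, which is why this rule is preferred over finite differences on noisy hardware: no small step size is needed.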
caused by input signals from its neighbors. Schuld et al. (2015) used the quantum phase estimation algorithm to implement a quantum perceptron that imitates the activation function of a classical perceptron. The concept behind the quantum perceptron algorithm is to represent the normalized input h(w, x) = φ ∈ [0, 1] in the phase of a quantum state |x_1, ..., x_n⟩; the phase estimation algorithm is then applied to this state with precision τ. The result of applying phase estimation to this state is a quantum state that can be represented as |J_1, ..., J_τ⟩.

The computational complexity of the quantum perceptron algorithm can be compared in terms of the resources required. A classical perceptron can be implemented with a single operation, while the complexity of the quantum perceptron algorithm is O(n log_2(√n)), which is not a significant advantage. The major improvement of the quantum perceptron algorithm is that multiple inputs can be processed in parallel when the inputs are presented as a superposition Σ_i |x_i⟩.

Improving the processing time required for the training phase of artificial neural networks by skillfully utilizing quantum effects is one of the main objectives of quantum neural network research. Several training techniques have been examined, such as employing a Grover search to determine the ideal weight vector, or using the classical perceptron training approach to modify the weight parameters of a quantum perceptron (Fig. 13 and Table 11).

Quantum algorithm for orthogonal neural network A new class of neural networks called orthogonal neural networks (Kerenidis et al. 2021) has recently been developed, in which the weight matrices are forced to be orthogonal. For deep architectures, they can attain more precision and avoid vanishing or exploding gradients. Although approaches to
Table 10 Summary of recent research on quantum linear regression, highlighting datasets, quantum modules, contributions, and limitations

Koura et al. (2024). Datasets: Iris, Palmer penguin. Key contributions: uses quantum annealing for regression tasks with continuous variables; demonstrates quantum advantages in solving large regression problems efficiently; maintains accuracy without requiring an increase in the number of qubits, provided the adiabatic condition is met. Weakness: classical benchmarks not discussed. Approach: quantum.

Carugno et al. (2024). Dataset: simulated datasets. Key contributions: introduces a quantum linear regression approach with adaptive learning techniques; provides speedup in high-dimensional data tasks. Weakness: limited real-world applications. Approach: quantum.

Doriguello et al. (2024). Dataset: synthetic data. Key contributions: proposes a quantum version of the LARS algorithm for Lasso regression, with quadratic speedup in high-dimensional problems. Weakness: approximate solutions may impact accuracy. Approach: quantum.
maintain orthogonality while updating the weight matrices have been put forth, they are slow and only give approximate orthogonality. In this study, the authors present the pyramidal circuit (Kerenidis et al. 2021), a new class of neural network layer that employs orthogonal matrix multiplication. With the same asymptotic running time as a standard layer, it enables gradient descent with perfect orthogonality.

7.2 Quantum unsupervised learning

Due to the inherent ability of quantum algorithms to perform tasks simultaneously, quantum computing can be effectively used to speed up unsupervised machine learning algorithms. Using simultaneous processing, the quantum method proposed in Kerenidis et al. (2018), known as q-means, significantly accelerates the well-known k-means algorithm. In the q-means algorithm, all of the vectors in the dataset begin in a quantum superposition, and all of the centroid distances are calculated at once. Finally, by selecting each vector's nearest centroid, all of the vectors are concurrently labeled. The vector labels are also in a state of superposition. After measuring the labeled qubits, the quantum mechanical principles cause one classical label to be randomly returned. Only the vectors that match the measured label are superimposed to form the quantum state that remains.

Quantum principal component analysis Quantum principal component analysis (QPCA) (Lloyd et al. 2014) aims to find the principal components of a set of quantum states. It leverages quantum algorithms to perform PCA efficiently on quantum data.

• Quantum state preparation: prepare the density matrix ρ representing the quantum data.
• Quantum phase estimation (QPE): use QPE to find the eigenvalues and eigenvectors of ρ.
• Measurement: measure the eigenvalues to determine their magnitudes.
• Selection: choose the top k eigenvalues and their corresponding eigenvectors.
• Output: the principal components are the quantum states corresponding to the selected eigenvectors (Tables 12, 13, and Fig. 14).
Table 11 Comparison between classical neural networks and quantum neural networks
Feature | Classical neural networks (CNNs) | Quantum neural networks (QNNs)
Data representation | Classical data (real-valued vectors) | Quantum states (complex-valued vectors)
Computation basis | Deterministic, classical bits | Probabilistic, qubits (superposition and entanglement)
Training algorithm | Gradient descent, backpropagation | Variational quantum algorithms, parameter-shift rule
Hardware requirements | Classical computers, GPUs/TPUs | Quantum computers, quantum processors
Scalability | Scales with increasing number of classical nodes | Potential exponential speedup for certain problems
Noise and errors | Relatively stable, errors due to overfitting or underfitting | Sensitive to quantum noise and decoherence
Applications | Image recognition, natural language processing, etc. | Quantum chemistry, optimization problems, potentially any classically hard problem
Maturity | Well-established and widely used in various industries | Emerging field with ongoing research and development
QPCA can offer exponential speedup over classical PCA for certain types of data, particularly when the data is naturally represented as a quantum state. The key steps involve preparation of the quantum state, performing quantum phase estimation, and selecting the principal components based on the measured eigenvalues. The quantum cluster assignment example demonstrates how quantum self-analysis can be helpful in accelerating machine learning tasks like pattern recognition and clustering.

Quantum clustering technique The k-means clustering problem is solved using the quantum Lloyd's algorithm in quantum clustering (Li et al. 2020). A repeated process is used to obtain the distance to each cluster centroid. The fundamental technique entails selecting an initial centroid at random and allocating each vector to the cluster with the closest mean; the cluster centroids are then updated and the calculations repeated until a stationary value is attained. The process moves more quickly with the quantum algorithm.
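For contrast, the classical Lloyd iteration that the quantum version accelerates can be sketched in a few lines. The data and initialization below are illustrative, not from the paper:

```python
import math, random

# A classical Lloyd / k-means iteration, the loop that q-means and
# quantum Lloyd's algorithm accelerate by evaluating all
# point-to-centroid distances at once (data here are illustrative).
random.seed(0)
X = ([(random.gauss(0, 0.2), random.gauss(0, 0.2)) for _ in range(30)] +
     [(random.gauss(3, 0.2), random.gauss(3, 0.2)) for _ in range(30)])

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

centroids = random.sample(X, 2)            # random initial centroids
for _ in range(20):
    # assignment step: label each vector with its nearest centroid
    labels = [min(range(2), key=lambda j: dist(p, centroids[j])) for p in X]
    # update step: move each centroid to the mean of its cluster
    new = []
    for j in range(2):
        members = [p for p, l in zip(X, labels) if l == j]
        new.append((sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members))
                   if members else centroids[j])
    if new == centroids:                   # stationary value attained
        break
    centroids = new

print(dist(centroids[0], centroids[1]) > 2)  # the two clusters separate
```

Each classical iteration costs O(N·k·d) distance evaluations; the quantum variant's claimed advantage comes from computing those distances in superposition.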
Ref | Year | Dataset used | Key contributions | Weakness | Approach used
Basilewitsch et al. (2024) | 2024 | An artificial hypercube dataset, MNIST handwritten digits, real-world industrial images from laser cutting machines | Compares quantum neural networks (QNN) with classical models in various tasks, showcasing performance across different data types | Quantum models show less consistency in performance compared to classical ones | Hybrid
Al-Zafar Khan et al. (2024) | 2024 | Water quality data from Umgeni Catchment, Durban | Applied quantum support vector classifier (QSVC) and quantum neural networks (QNN) to predict water quality; QSVC performed better | QNN faced dead neuron issues and lower performance compared to QSVC | Quantum
Ai and Liu (2024) | 2024 | Synthetic datasets generated | Uses graph neural networks for optimizing parameters in quantum circuits to reduce cross-talk errors | Method is specific to circuit design, limiting generalizability | Hybrid
Mesman et al. (2024) | 2024 | Synthetic dataset | Introduces a hybrid NN-AE-VQE model to improve parameter predictions for variational quantum eigensolvers | Potential for overfitting when using NN-AE components | Hybrid
Khoo et al. (2024) | 2023 | Generated labeled datasets of ground states | Analyzes quantum convolutional neural networks (QCNNs) for classification and data compression tasks, showing potential improvements over classical methods | QCNNs still face scalability challenges and high noise sensitivity | Quantum
Pan et al. (2023) | 2023 | Synthetic dataset generated | High fidelity (96.0%) and accuracy (93.3%), qubit count independent of depth, works for quantum channels and chemistry problems, runs on a six-qubit processor | Backward process accuracy is harder to achieve, training scales exponentially with hidden/output qubits, and hardware issues like decoherence and residual interactions degrade performance | Hybrid
Li et al. (2022) | 2022 | No specific dataset (simulated benchmarks used) | Overview of QNN classifiers, encoding strategies, and benchmarking using quantum simulation | The document presents detailed benchmarks and promising results for the proposed QNN models; it does not explicitly acknowledge or discuss the potential weaknesses or limitations of the approaches | Hybrid
Algorithm | Runtime complexity | Key idea | Reference
Quantum SVM | O(log(1/ε) M^3 + M^2 log(N)) | Phase estimation and the quantum matrix inversion algorithm used with the least-squares formulation of the support vector machine | Rebentrost et al. (2014)
Quantum decision trees | O(√(D·N)) | Unstructured search using Grover's algorithm | Beigi et al. (2022)
Quantum linear regression | poly(log2 N, d, k, 1/ε) | Operates with datasets that have non-sparse design matrices and is compatible with the standard oracle model | Lloyd (2010)
Table 14 Comparison between classical principal component analysis and quantum principal component analysis
Feature | Classical principal component analysis (PCA) | Quantum principal component analysis (QPCA)
Data representation | Classical data (real-valued vectors) | Quantum states (density matrices)
Computation basis | Deterministic, classical bits | Probabilistic, qubits (superposition and entanglement)
Algorithm | Singular value decomposition (SVD), eigenvalue decomposition | Quantum phase estimation (QPE), measurement
Speedup potential | Polynomial time complexity O(n^3) for large datasets | Potential exponential speedup for large systems
Hardware requirements | Classical computers, CPUs/GPUs | Quantum computers, quantum processors
Scalability | Limited by classical computation resources | Potentially more scalable with quantum advantage for large datasets
Noise and errors | Relatively stable, susceptible to overfitting or underfitting | Sensitive to quantum noise and decoherence
Applications | Dimensionality reduction, data visualization, feature extraction | Quantum machine learning, large-scale data analysis
Maturity | Well-established, widely used in various industries | Emerging field, ongoing research and development
Quantum reinforcement learning The combination of reinforcement learning protocols with quantum systems is now being investigated in the field of quantum machine learning, giving rise to quantum reinforcement learning (Wang and Lin 2023). The field has two main components: using quantum characteristics to aid reinforcement learning, and using reinforcement learning to aid quantum circuit design. The effectiveness and viability of simulation trials were assessed after agents were trained on a number of well-known games using quantum reinforcement learning techniques. Numerous fields, including finance, industrial simulation, mechanical control, quantum communication, and quantum circuit optimization, can benefit from the application of the QRL algorithm (Fig. 15).

In Dong et al. (2008), the authors provide a novel method known as quantum reinforcement learning, which combines quantum theory and reinforcement learning. The equivalent of a state in classical reinforcement learning is an eigenstate, and the set of eigenstates is represented by a quantum superposition state. An eigenstate is obtained by observing and then measuring the quantum states according to the collapse postulate of quantum theory. The procedure given for quantum reinforcement learning (Dong et al. 2008) is presented in Algorithm 6.
In the paper (Dong et al. 2008), four major results are provided, which can be summarized as follows:

1. QRL algorithms show asymptotic convergence. The difference between traditional RL and quantum RL lies in
Ref | Year | Dataset used | Key contributions | Weakness | Approach used
Wang et al. (2024) | 2024 | Chemiresistive sensor array data for IoT applications | Introduces quantum kernel PCA for data compression, achieving better retention of critical information compared to classical PCA. Particularly effective in non-linear pattern retention and dimensionality reduction | Limited scalability due to current quantum hardware constraints | Quantum
He et al. (2021) | 2021 | Synthetic dataset | Proposes a low-complexity quantum PCA algorithm to reduce computational overhead while maintaining effectiveness in dimensionality reduction tasks | Still theoretical, with challenges related to hardware scalability and implementation | Quantum
the fact that the exploration policy depends upon the collapse postulate of quantum theory. All states are updated in parallel, which explains the synchronous behavior of this learning algorithm (Tables 16, 17, and Fig. 16).
2. Although quantum RL algorithms cannot ensure that every tactic is optimal, they can provide the best choice with probability close to one by repeating the computation. The optimal value functions and optimal policies in QRL are defined similarly to those in conventional RL; only the representation and computing modes differ. This makes the policy more efficient and safe, because it is probabilistic rather than definite, utilizing probability amplitudes. Nonetheless, it is still clear that QRL simulation on a conventional computer cannot exponentially speed up learning, because quantum parallelism is not actually implemented through genuine physical systems. Also, the agent will learn much more effectively when more potent computation is available. Then, we might once more rely on the physical implementation of quantum computation.
3. A significant balance between exploration and exploitation is encountered in the case of quantum RL.
4. For calculating the superposition state that is weighted uniformly to initialize the quantum system, and to per-
Ref | Year | Dataset used | Key contributions | Weakness | Approach used
Poggiali et al. (2024) | 2022 | 4 synthetic datasets, 2 real datasets (Iris, Wine) | Proposes a hybrid approach combining quantum computing and classical k-means clustering, leveraging quantum resources to enhance the classical clustering process. Demonstrates improved efficiency on large datasets | Assumes the input data is quantum, which restricts its direct use for classical data | Hybrid (quantum-classical)
Gopalakrishnan et al. (2024) | 2023 | Moons and Circles datasets | Introduces qLUE, a quantum clustering algorithm designed to handle high-dimensional datasets efficiently, providing better performance than classical methods in clustering tasks | The algorithm becomes increasingly complex as the number of dimensions grows, which may hinder scalability | Quantum
Fang et al. (2024) | 2022 | Quantum states | Proposes a quantum state clustering algorithm using variational quantum circuits. Demonstrates effectiveness in classifying quantum states and handling quantum data more efficiently than classical methods | Limited to quantum data, thus not directly applicable to classical datasets without modifications | Quantum
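As a companion to Algorithm 6 below, a toy classical sketch of its two update steps may help: a Grover-style amplitude amplification of a rewarded action, and the temporal-difference value update of step 2a. This is an illustration only, not Dong et al.'s implementation:

```python
import math

# Toy sketch of the QRL update loop (illustrative): action amplitudes
# for one state are stored classically, and a "Grover iteration"
# amplifies the amplitude of a rewarded action before the next
# measurement-like sampling.
def grover_amplify(amps, target, L=1):
    # One Grover step = flip the target's sign, then invert about the mean.
    for _ in range(L):
        amps[target] = -amps[target]
        mean = sum(amps) / len(amps)
        amps = [2 * mean - a for a in amps]
    return amps

n_actions = 4
amps = [1 / math.sqrt(n_actions)] * n_actions   # uniform superposition
probs_before = amps[1] ** 2                     # P(measure action 1)

amps = grover_amplify(amps, target=1, L=1)
probs_after = amps[1] ** 2
print(probs_after > probs_before)   # amplified action is now more likely

# TD-style value update used in step 2a of the algorithm:
V, alpha, gamma, r, V_next = 0.0, 0.1, 0.9, 1.0, 0.0
V = V + alpha * (r + gamma * V_next - V)
print(round(V, 2))  # 0.1
```

With four actions, a single Grover step takes the target's measurement probability from 0.25 to 1.0, which is why the number of iterations L must be chosen in accordance with the reward and value functions.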
Algorithm 6 Quantum reinforcement learning
procedure QuantumReinforcementLearning
    Randomly initialize |s^(m)⟩ = Σ_{s=00...0}^{11...1} C_s |s⟩, f(s) = |a^(n)⟩ = Σ_{a=00...0}^{11...1} C_a |a⟩, and V(s) arbitrarily
    repeat (for every episode)
        For all states |s⟩ in |s^(m)⟩ = Σ_{s=00...0}^{11...1} C_s |s⟩:
            1. Observe f(s) = |a^(n)⟩ and obtain |a⟩
            2. Take action |a⟩; the next state |s′⟩ is observed with reward r, then
                a. Update the state value: V(s) ← V(s) + α(r + γV(s′) − V(s))
                b. Update the probability amplitudes: repeat U_Grov L times, where U_Grov |a^(n)⟩ = U_a0^(n) U_a^(n) |a^(n)⟩
    until |ΔV(s)| ≤ ε for each state
end procedure

form an already defined number of Grover iterations for updating and calculating the probability amplitudes in accordance with the reward and value functions, QRL is physically realizable as a quantum algorithm. The Grover algorithm also requires these procedures; they can be completed by utilizing various Hadamard and phase gate combinations. So, in theory, there is no problem with the physical manifestation of QRL. Additionally, the experimental Grover algorithm implementations show that the QRL technique could be physically realized (Tables 18 and 19).

7.3 Categorization of quantum machine learning algorithms based on NISQ suitability

Categorizing QML algorithms based on their suitability for NISQ devices involves evaluating their robustness to noise and their ability to run effectively on currently available quantum hardware, which has limited qubits and is prone to errors. In addition, limitations including decoherence, gate errors, measurement errors, and cross-talk need to be addressed to implement quantum machine learning algorithms on NISQ computers.
Table 18 Comparison between classical reinforcement learning and quantum reinforcement learning
Feature | Classical reinforcement learning | Quantum reinforcement learning
Data representation | Classical states and actions | Quantum states (qubits) and superposition of actions
Initialization | Initialize classical agent and environment | Initialize quantum agent and quantum environment
State observation | Observe classical state from the environment | Measure quantum state using quantum measurement techniques
Policy | Classical policy using value function or policy gradients | Quantum policy leveraging quantum superposition and entanglement
Action selection | Select action based on classical policy | Select action based on quantum policy, potentially using quantum circuits
Reward mechanism | Receive scalar reward from the environment | Receive quantum reward, which may be a quantum state or measured value
Update mechanism | Update policy and value functions using algorithms like Q-learning or policy gradient | Update quantum policy and value functions using quantum algorithms and quantum gates
Algorithmic complexity | Typically polynomial in state and action space | Potential for exponential speedup using quantum parallelism, but depends on the specific implementation and hardware
Scalability | Limited by classical computational resources | Potentially more scalable with quantum advantage for large state-action spaces
Noise and errors | Relatively stable, with issues of local minima | Sensitive to quantum noise and decoherence, requiring error correction
Convergence | Convergence to optimal or suboptimal policies, depends on exploration-exploitation tradeoff | Convergence may be faster with quantum speedup, but requires error correction
Applications | Widely used in robotics, game playing, autonomous systems | Emerging applications in quantum machine learning, quantum control, and quantum optimization
Maturity | Well-established and extensively studied | Emerging field with ongoing research and development
Variational quantum algorithms (Cerezo et al. 2021) Serious limitations of current quantum devices include low qubit counts and noise processes that reduce attainable circuit depth. One of the most effective methods to overcome these limitations is to train a parameterized quantum circuit using a classical optimizer; this technique is known as variational quantum algorithms (VQAs). VQAs seem to be the best chance of gaining a quantum edge and have already been proposed for almost every application that researchers have imagined for quantum computers, making them the most promising method for attaining quantum advantage with near-term quantum computers. VQAs have been developed for a variety of uses, including solving linear equation problems, simulating the dynamics of quantum systems, and determining the ground states of molecules. VQAs share a similar structure: a task is encoded into a parameterized cost function that is evaluated by a quantum computer, after which a classical optimizer trains the parameters. This adaptive design makes VQAs ideally suited to the limitations of near-term quantum computing.

Three issues come up when using VQAs in large-scale applications: trainability, accuracy, and efficiency, and techniques to overcome these issues are presently being developed. VQAs are considered suitable for NISQ devices due to their ability to mitigate errors through variational principles: a quantum computer estimates the cost function C(θ) (or its gradient), and the strength of classical optimizers is used to train the parameters θ. Algorithms like VQE, QAOA, and the variational quantum classifier (VQC) are examples of VQAs (Figs. 17 and 18).

The barren plateau problem (Wang et al. 2021; Larocca et al. 2024) is a significant challenge in training variational quantum algorithms, especially on near-term quantum devices. The following points show how this problem affects variational quantum algorithms.

• Exponential vanishing gradient: As the number of qubits or layers in the quantum circuit increases, the number of parameters to optimize also increases exponentially. Due to the nature of quantum circuits and the
Table 19 Summary of recent work done on quantum reinforcement learning
Paper title | Year | Dataset used | Key contributions | Weakness | Approach used
Meyer et al. (2024) | 2024 | CartPole environment | Introduces the RegQPG algorithm with Lipschitz regularization for improved robustness and generalization in QRL. Demonstrates improvements on noisy environments | Regularization too high can harm performance; sensitive to parameter tuning with regularization | Quantum approach using quantum policy gradients with regularization
Nagy et al. (2024) | 2024 | GridWorld, Taxi-v3 | Combines quantum and classical reinforcement learning in latent observation spaces, improving performance in tasks with high-dimensional observations | The quantum part is still limited by current quantum hardware; needs more testing on larger problems | Hybrid quantum-classical approach
Wang et al. (2024) | 2024 | Drug design datasets | Proposes a quantum-inspired RL approach to improve drug design by selecting suitable molecular candidates faster than classical methods | Computational complexity for large molecules; needs practical demonstration on real-world drug candidates | Quantum-inspired classical approach
Chen (2024) | 2024 | Custom RL environments | Integrates differentiable quantum architecture search (QAS) in QRL, improving architecture efficiency through continuous quantum architecture adjustments | Training requires high computational resources and efficient implementation of differentiable quantum circuits | Quantum approach with differentiable quantum architecture search
Saggio et al. (2021) | 2021 | Synthetic dataset | Introduced a quantum communication channel to accelerate reinforcement learning, developed a hybrid quantum-classical communication protocol for optimized learning, and implemented the approach on a tunable nanophotonic processor with active feedback | Focuses on a specific experimental setup, limiting general applicability | Hybrid agents that switch between quantum and classical communication rounds
Fig. 17 Issues that need to be addressed for machine learning algorithms to run on NISQ computers
structure of their parameter landscapes, the gradients (derivatives of the cost function with respect to the parameters) can become exceedingly small.
• Implications for optimization: When gradients are very small, classical optimization algorithms (such as gradient descent) struggle to find meaningful updates for the circuit parameters. This can result in very slow convergence or convergence to poor local minima, making it challenging to find optimal solutions efficiently.
• Origins in quantum circuit design: The barren plateau problem is often associated with certain types of random or shallow parameterized quantum circuits. These circuits may lack the necessary structure or entanglement depth to propagate gradients effectively through the entire circuit, leading to the barren plateau phenomenon.
• Impact on quantum machine learning: In the context of variational quantum algorithms used for machine learning tasks (such as VQE or QAOA), the barren plateau problem limits the scalability and effectiveness of these algorithms on current and near-term quantum hardware. It imposes constraints on the size and complexity of problems that can be feasibly tackled with parameterized quantum circuits (Table 20, Figs. 19 and 20).
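The gradients discussed above are typically obtained with the parameter-shift rule. A minimal single-parameter VQA loop can be sketched with a cost C(θ) = cos θ that is computable in closed form, an illustrative stand-in for an expectation value that real hardware would estimate from measurements:

```python
import math

# Minimal VQA sketch (illustrative, not any paper's implementation):
# a one-parameter circuit |psi(theta)> = RY(theta)|0> has cost
# C(theta) = <Z> = cos(theta).
def cost(theta: float) -> float:
    # For RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>,
    # the Pauli-Z expectation value is cos(theta).
    return math.cos(theta)

def parameter_shift_grad(theta: float) -> float:
    # Exact gradient via the parameter-shift rule:
    # dC/dtheta = [C(theta + pi/2) - C(theta - pi/2)] / 2
    return 0.5 * (cost(theta + math.pi / 2) - cost(theta - math.pi / 2))

def train(theta: float = 0.1, lr: float = 0.4, steps: int = 100) -> float:
    # Classical optimizer (gradient descent) driving the "quantum" evaluation.
    for _ in range(steps):
        theta -= lr * parameter_shift_grad(theta)
    return theta

theta_opt = train()
# The minimum of C is -1, reached at theta = pi (the "ground state" of Z).
print(round(cost(theta_opt), 4))  # -1.0
```

In a barren plateau, the quantity returned by `parameter_shift_grad` would itself be exponentially small in the qubit count, which is exactly what stalls the `train` loop at scale.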
Table 20 Variational quantum machine learning algorithms: limitations, advantages, applications, and noise behavior
Algorithm | Advantages | Limitations | Applications | Behavior with noise
Variational quantum eigensolver (VQE) (Tilly et al. 2022) | Suitable for near-term quantum devices (NISQ); leverages classical optimization techniques; efficient for finding ground state energies | Sensitive to noise and decoherence; limited scalability with current quantum hardware; requires significant classical resources | Quantum chemistry: molecular simulation; optimization problems with quantum objectives | Benefits from error mitigation techniques; performance degrades with noise levels; impacted by gate and measurement errors
Quantum approximate optimization algorithm (QAOA) (Farhi et al. 2014; Blekos et al. 2024; Zhou et al. 2020) | Suitable for combinatorial optimization; finds approximate solutions efficiently; utilizes variational techniques | Performance tied to circuit depth; requires parameter tuning; limited by hardware connectivity | Optimization problems: minimum vertex cover (Zhang et al. 2022), traveling salesman; machine learning: clustering, pattern recognition | Moderate noise resilience; sensitivity increases with circuit complexity; affected by gate and readout errors
Variational quantum classifier (VQC) (Maheshwari et al. 2021; Li and Deng 2022; Chen et al. 2020) | Hybrid model: combines quantum and classical; potential quantum advantage for classification; scalable for certain datasets | Limited to specific classification tasks; requires substantial quantum resources; performance affected by quantum noise | Binary and multiclass classification problems; pattern recognition in quantum datasets | Vulnerable to measurement and gate errors; noise impacts decision boundary accuracy; improved with error mitigation strategies
overcome barren plateaus by providing necessary perturbations. These approaches collectively enhance quantum machine learning performance (Kulshrestha and Safro 2022).

Quantum kernel methods Quantum kernel methods represent a class of algorithms that leverage quantum computing principles to enhance traditional machine learning tasks, particularly in computing inner products efficiently in high-dimensional feature spaces. Compared to optimal classical kernel approaches, quantum kernels can learn specific datasets with a smaller generalization error (Wang et al. 2021). The majority of these findings, however, are based on ideal conditions and do not take into account the limitations of near-term quantum devices (Table 21 and Fig. 21).

In Wang et al. (2021), the authors try to answer the question of the power of quantum kernel methods in the NISQ era by considering quantum kernels under sample error and system noise. In QSVM, noise can affect the computation of the kernel matrix, leading to suboptimal decision boundaries and reduced classification accuracy. In QPCA, noise can distort the principal components extracted from the data, affecting the quality of dimensionality reduction and data visualization. In quantum kernel estimation, inaccurate estimation of quantum kernels due to noise can lead to unreliable similarity measures, impacting tasks like clustering and anomaly detection (Gujju et al. 2024). The impact of noise on quantum kernel methods is a critical consideration in the current era of NISQ devices. Noise can degrade the performance and reliability of these methods, affecting their practical applicability. However, ongoing research and development in error mitigation techniques, hybrid approaches, and optimized quantum circuits are paving the way for more robust and effective quantum kernel methods. As quantum hardware continues to improve, the resilience of quantum kernel methods to noise is expected to increase, enhancing their utility in practical machine learning applications (Table 22 and Fig. 22).

Quantum annealing Quantum annealing (Yulianti and Surendro 2022) is based on the principles of quantum mechanics, specifically the adiabatic theorem, and is used to find the ground state (minimum energy state) of a given problem Hamiltonian. It is particularly effective for combinatorial optimization problems and is particularly relevant for NISQ devices due to its potential to find approximate solutions to complex optimization problems. Quantum annealing can be more resilient to certain types of noise compared to other
Algorithm | Advantages | Limitations | Applications | Behavior with noise
Quantum support vector machine (QSVM) | Efficient computation of complex kernel functions; potentially improved classification accuracy in high-dimensional spaces | Sensitivity to quantum hardware noise; requires robust error mitigation techniques | Image and speech recognition (Golchha and Verma 2023); fraud detection (Grossi et al. 2022) | Performance degrades with increasing noise; error mitigation necessary for reliable results
Quantum kernel estimation | Ability to handle large feature spaces; efficient computation of similarity measures | Limited by current qubit coherence times; computational overhead of quantum feature map preparation | Clustering (Li et al. 2016); anomaly detection (Liu and Rebentrost 2018) | Sensitive to noise in quantum circuits; accuracy depends on fidelity of quantum state preparation
Quantum principal component analysis (QPCA) | Potential for faster dimensionality reduction; better handling of high-dimensional data | Noise affects the quality of extracted principal components; quantum error correction overhead | Data visualization (Ouedrhiri et al. 2023); feature extraction for complex datasets (Salih Hasan et al. 2021) | Noise can significantly impact results; requires error mitigation for practical use
Hybrid quantum-classical kernel methods | Combines strengths of quantum and classical computing; potential for near-term quantum advantage | Complex implementation; dependent on both quantum and classical resources | Hybrid AI models (Stein et al. 2021); enhanced predictive modeling (Houssein et al. 2022) | Robustness varies with the level of noise; hybrid approaches can help mitigate some noise effects
quantum computing paradigms. This is because it relies on the gradual evolution of the quantum state, which can be less susceptible to transient noise. Decoherence and operational errors can still affect quantum annealing, but the annealing process can sometimes tolerate these to a certain extent, allowing for practical implementations on NISQ devices (Domino et al. 2023). Quantum annealing does not require fully error-corrected qubits, which aligns well with the capabilities of NISQ devices. Current quantum annealers, such as those developed by D-Wave, operate with thousands
Algorithm | Advantages | Limitations | Applications | Behavior with noise
Quantum annealing | Can handle complex optimization problems; does not require fully error-corrected qubits; relatively resilient to some types of noise | Sensitive to decoherence and operational errors; limited by hardware constraints and scalability issues; problem mapping can be complex | Combinatorial optimization (Djidjev et al. 2018); feature selection (Otgonbaatar and Datcu 2021); clustering (Kumar et al. 2018); machine learning model training (Mott et al. 2017) | Resilient to some noise, but excessive noise can degrade performance; noise impacts the accuracy of the final solution
Reverse quantum annealing | Can escape local minima more effectively; useful for refining solutions obtained from forward annealing | Requires careful tuning of reverse and forward phases; increased complexity in implementation | Refinement of machine learning models (Chancellor 2023); improved training of deep learning networks (Wilson et al. 2021) | Sensitive to noise, especially during reverse annealing phases; needs noise-robust strategies
Hybrid quantum-classical annealing | Leverages the strengths of both quantum and classical computing; can handle larger problems with classical pre- and postprocessing | Complex integration of quantum and classical components; dependency on classical optimization heuristics | Large-scale optimization in AI (Farshi 2023); hyperparameter tuning (Sagingalieva et al. 2022) | Classical components help buffer noise impact; overall performance still depends on quantum annealing quality
Quantum Boltzmann machines | Suitable for probabilistic modeling; can represent complex probability distributions | Training is computationally intensive; sensitive to thermal noise and quantum noise | Generative models in AI (Gao et al. 2018); deep learning (Amin et al. 2018); reinforcement learning (Crawford et al. 2019) | Noise affects the accuracy of the probability distributions; requires noise mitigation techniques for reliable training
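The kind of problem an annealer minimizes can be sketched with a classical analogue: simulated annealing on a small Ising Hamiltonian, searching for its minimum-energy spin configuration. The couplings and cooling schedule below are illustrative choices, not D-Wave parameters:

```python
import math, random

# Classical simulated-annealing sketch of the ground-state search that
# quantum annealing performs adiabatically (illustrative only).
def energy(spins, J, h):
    # Ising energy H(s) = sum_ij J[i,j] s_i s_j + sum_i h[i] s_i, s_i in {-1, +1}
    e = sum(h[i] * s for i, s in enumerate(spins))
    for (i, j), Jij in J.items():
        e += Jij * spins[i] * spins[j]
    return e

def anneal(J, h, n, steps=5000, seed=0):
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for t in range(steps):
        T = max(0.01, 2.0 * (1 - t / steps))   # linear cooling schedule
        i = rng.randrange(n)
        cand = spins[:]
        cand[i] = -cand[i]                     # propose a single spin flip
        dE = energy(cand, J, h) - energy(spins, J, h)
        if dE < 0 or rng.random() < math.exp(-dE / T):
            spins = cand                       # accept downhill / thermal moves
    return spins

# Small antiferromagnetic chain with biases; the ground state
# s = (+1, -1, +1) has energy -3.
J = {(0, 1): 1.0, (1, 2): 1.0}
h = [-0.5, 0.0, -0.5]
s = anneal(J, h, n=3)
print(energy(s, J, h))
```

Mapping a machine learning objective (e.g., feature selection) onto such a Hamiltonian is the "problem mapping" step the table flags as potentially complex.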
of qubits but still face challenges related to noise and decoherence. The hardware design for quantum annealing is often more specialized and can be optimized for specific types of problems, making it more feasible with the existing NISQ technology.

Quantum generative models Quantum generative models aim to learn and generate complex probability distributions of data, which can be useful for tasks such as sampling, data generation, and enhancing classical machine learning models. These models leverage quantum properties like superposition and entanglement to potentially outperform classical counterparts in certain tasks. Quantum generative models can represent and manipulate complex probability distributions more efficiently than classical counterparts. In theory, quantum computers could provide an exponential speedup over classical methods for certain generative modeling tasks. NISQ devices, however, are prone to errors and noise, which can degrade the accuracy of quantum generative models (Table 23 and Fig. 23).

VQCs are parameterized quantum circuits (Benedetti et al. 2019) that can be optimized to learn and generate complex distributions. VQCs consist of quantum circuits whose gates and parameters are tunable, known as parameterized quantum circuits. By adjusting the parameters of the quantum circuit, VQCs can be trained using classical optimization algorithms to fit given data distributions. Once trained, VQCs can generate new data points that approximate the learned distribution. VQCs can be implemented on current NISQ devices due to their flexibility in circuit design and optimization, and they can potentially mitigate noise effects through error correction techniques and noise-resilient algorithms. They can be adapted to various generative modeling tasks, including sampling from complex distributions and enhancing classical machine learning models. VQCs face certain challenges: optimizing them requires classical-quantum feedback loops, which can be computationally intensive, and their performance heavily depends on the qubit quality and coherence times of NISQ devices (Tables 24, 25, 26 and 27).
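As a minimal illustration of this train-then-sample loop, the sketch below simulates a one-parameter "circuit" (a single RY rotation on one qubit) and fits its measurement distribution to a target Bernoulli distribution using the exact parameter-shift rule. This is a toy classical simulation under assumed names and constants, not a hardware implementation or any specific method from the cited works.

```python
import math

def born_prob_one(theta):
    """p(measure 1) for RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return math.sin(theta / 2) ** 2

def train_vqc(target_p, lr=0.5, iters=200):
    """Fit the circuit parameter by gradient descent on a squared loss.
    For this gate the parameter-shift rule is exact:
    dp/dtheta = (p(theta + pi/2) - p(theta - pi/2)) / 2."""
    theta = 0.1
    for _ in range(iters):
        p = born_prob_one(theta)
        grad_p = (born_prob_one(theta + math.pi / 2)
                  - born_prob_one(theta - math.pi / 2)) / 2
        grad_loss = 2 * (p - target_p) * grad_p   # d/dtheta of (p - target)^2
        theta -= lr * grad_loss
    return theta

theta = train_vqc(0.7)
print(round(born_prob_one(theta), 3))  # 0.7
```

Once trained, "generation" is just repeated measurement of the circuit: sampling 1 with probability `born_prob_one(theta)` reproduces the learned distribution.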
Quantum autoencoders (Khoshaman et al. 2018). Challenges: encoding efficiency; coherence constraints. Advantages: data compression; potential for representation. Applications: data compression (Romero et al. 2017); denoising quantum channels (Achache et al. 2020). Noise considerations: sensitive to noise in encoding and decoding phases.

Quantum generative adversarial networks (QGANs) (Lloyd and Weedbrook 2018). Challenges: complexity in training; high resource requirements. Advantages: capability to generate realistic data distributions. Applications: image and data generation (Tsang et al. 2023); enhancing training datasets. Noise considerations: vulnerable to noise affecting discriminator and generator convergence.
7.4 Categorization of quantum machine learning based on fault-tolerant design

QML can be categorized based on fault-tolerant design (Moreno Casares 2023), taking into account the noise resilience and error correction capabilities of different approaches. Fault-tolerant design envisions a time when quantum error correction is fully realized. Fault-tolerant quantum systems are quantum computational frameworks that employ error correction codes and fault-tolerant protocols to mitigate the effects of noise, decoherence, and operational errors inherent in quantum hardware. These systems enable the execution of quantum algorithms with high fidelity over extended periods, overcoming the challenges posed by the fragile nature of quantum states. The key characteristics of fault-tolerant quantum systems are quantum error correction codes (QECC) (Knill and Laflamme 1997; Roffe 2019; Devitt et al. 2013), syndrome measurement, and fault-tolerant gates and operations. A fault-tolerant quantum system requires many physical qubits to encode
Shor code: The first quantum error correction code; encodes one logical qubit into nine physical qubits. Corrects arbitrary single-qubit errors; combines bit-flip and phase-flip codes.

Steane code: A seven-qubit code based on the classical Hamming code, designed for easier fault-tolerant operations. Corrects single-qubit errors; enables transversal gates for fault tolerance.

Surface code: Topological code encoding logical qubits in a 2D lattice of physical qubits. High threshold error rates; localized error detection and correction; scalable to large systems.

Bacon-Shor code: Combines aspects of the Shor and surface codes for improved error correction. Corrects arbitrary single-qubit errors; suitable for 2D lattice architectures.

Concatenated codes: Combine multiple levels of error correction codes for enhanced protection. Multilevel error correction; scalable with increased qubit resources.

Quantum low-density parity-check (LDPC) codes: Generalization of classical LDPC codes to quantum systems. Efficient decoding algorithms; potential for high error thresholds.

Subsystem codes: Encode information into subsystems to simplify error correction. Allow flexible error correction strategies; can simplify fault-tolerant operations.
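As a toy illustration of syndrome-based correction, the sketch below simulates the three-qubit bit-flip repetition code, one of the two building blocks the Shor code combines. The "syndrome measurement" here simply inspects the simulated statevector, which a real device cannot do; hardware would instead measure the stabilizers Z0Z1 and Z1Z2 via ancilla qubits. All names are illustrative.

```python
import numpy as np

def encode(alpha, beta):
    """Encode a|0> + b|1> into the repetition code a|000> + b|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha
    state[0b111] = beta
    return state

def bit_flip(state, qubit):
    """Apply X to one qubit (qubit 0 is the most significant bit here)."""
    out = np.zeros_like(state)
    for idx, amp in enumerate(state):
        out[idx ^ (1 << (2 - qubit))] = amp
    return out

def syndrome(state):
    """Parities Z0Z1 and Z1Z2: for a single bit-flip error, both nonzero
    basis states share the same parities, identifying the flipped qubit
    without revealing the encoded superposition."""
    idx = next(i for i, a in enumerate(state) if abs(a) > 0)
    bits = [(idx >> 2) & 1, (idx >> 1) & 1, idx & 1]
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(state):
    q = LOOKUP[syndrome(state)]
    return state if q is None else bit_flip(state, q)

psi = encode(0.6, 0.8)        # an arbitrary logical qubit
corrupted = bit_flip(psi, 1)  # bit-flip error on the middle physical qubit
recovered = correct(corrupted)
print(np.allclose(recovered, psi))  # True
```

The Shor code applies the same idea twice: a bit-flip repetition code nested inside a phase-flip repetition code, which is why it needs nine physical qubits per logical qubit.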
123
Table 25 Categorization of quantum machine learning based on fault-tolerant design

Fully fault-tolerant QML. Characteristics: leverages error-corrected quantum computers; resistant to errors due to quantum noise; implements complex quantum algorithms with high accuracy. Examples: fault-tolerant quantum generative adversarial networks (QGANs); quantum principal component analysis (QPCA). Advantages: high accuracy and reliability; suitable for large-scale, complex quantum algorithms. Limitations: requires advanced quantum error correction codes; high resource requirements (qubits, quantum gates). Applications: large-scale optimization problems; advanced quantum simulations.

Near-fault-tolerant QML. Characteristics: utilizes partially error-corrected quantum systems; implements some error mitigation techniques; balances fault tolerance against resource requirements. Examples: variational quantum algorithms (VQAs) with error mitigation; hybrid quantum-classical algorithms with error correction layers. Advantages: improved accuracy compared to non-fault-tolerant systems; reduced resource requirements compared to fully fault-tolerant systems. Limitations: still susceptible to some degree of quantum noise; complexity in implementing error mitigation techniques. Applications: medium-scale optimization problems; quantum-enhanced machine learning tasks.

NISQ QML. Characteristics: operates on NISQ devices with limited qubits and coherence times; no explicit error correction, relies on noise resilience; focuses on near-term achievable quantum machine learning tasks. Examples: quantum variational autoencoders (QVAEs); quantum Boltzmann machines (QBMs); NISQ-compatible quantum neural networks (QNNs). Advantages: feasible with current quantum hardware; can demonstrate quantum advantage in specific tasks. Limitations: susceptible to quantum noise and decoherence; limited by hardware constraints (qubit count, gate fidelity). Applications: proof-of-concept QML algorithms; small-scale quantum machine learning tasks; quantum-enhanced feature spaces for classical ML models.

Hybrid quantum-classical QML. Characteristics: combines quantum and classical computation; the quantum part handles computationally intensive tasks while the classical part manages optimization and error mitigation. Examples: hybrid quantum-classical neural networks; quantum-assisted machine learning (QAML). Advantages: utilizes the strengths of both quantum and classical systems; reduces the burden on quantum hardware. Limitations: still faces noise and decoherence in the quantum part; complexity in integrating quantum and classical systems. Applications: enhancing classical machine learning models; implementing quantum kernels for classical ML algorithms.
Speed and efficiency. QML: has the potential to solve certain problems exponentially faster than CML, particularly for large datasets and complex computations; for example, quantum algorithms can search unsorted databases significantly faster than classical algorithms (Grover's algorithm). CML: algorithms are highly optimized and efficient for a wide range of tasks and excel in applications with well-understood algorithms and models, but their performance can degrade significantly on large-scale problems.

Data handling. QML: can handle high-dimensional data more efficiently due to quantum parallelism and superposition, making it particularly useful for tasks such as pattern recognition and feature extraction in large datasets. CML: is currently more practical for most real-world applications due to its mature ecosystem of tools and libraries; it effectively handles large datasets but requires significant computational resources and time.

Accuracy and precision. QML: promises higher accuracy for certain types of problems, particularly those involving complex optimization and probabilistic tasks; quantum algorithms can explore a vast solution space more effectively than classical algorithms. CML: models can achieve high accuracy, especially with well-tuned hyperparameters and large datasets, but may struggle with problems that have vast or complex solution spaces.

Scalability. QML: scales more efficiently with problem size due to quantum entanglement and parallelism; as quantum hardware improves, the scalability of QML will surpass classical methods for many applications. CML: scales linearly or polynomially with the size of the input data; while it can handle moderately large datasets, it faces significant challenges with extremely large or complex datasets.

Hardware requirements. QML: requires specialized quantum hardware, which is currently in the early stages of development; this hardware is expensive and not widely available, limiting the practical application of QML in the near term. CML: can run on classical computers, which are widely available and have well-established infrastructures, making CML more accessible and practical for most applications.

Current applications. QML: is primarily in the research and experimental stages; successful applications include small-scale quantum chemistry simulations and optimization problems, while large-scale practical applications are still in development. CML: is widely used in various fields, including image and speech recognition, natural language processing, finance, and healthcare, and has a mature ecosystem with numerous successful real-world applications.

Noise and error handling. QML: is highly sensitive to noise and decoherence, which can significantly impact performance; error correction techniques are still being developed to mitigate these issues, making current QML implementations less robust. CML: is relatively robust against noise and errors, with well-established methods for regularization, overfitting prevention, and data preprocessing; it can handle noisy data more effectively than current QML approaches.

Algorithm complexity. QML: algorithms can be more complex to design and implement due to the nuances of quantum mechanics, requiring a deep understanding of both quantum computing and machine learning principles. CML: algorithms, while complex, are well documented and supported by extensive research and development, with a vast array of tools and libraries that simplify their implementation and optimization.

Future prospects. QML: has the potential to revolutionize computational tasks by providing exponential speedups for certain problems; as quantum hardware and error correction techniques improve, QML will likely become more practical and impactful. CML: will continue to be a dominant force in technology, with ongoing advances in algorithm efficiency, hardware capabilities, and application breadth, and will remain highly relevant even as QML matures.
a single logical qubit, depending on the error correction code used. Increased computational resources (qubits, gates, and measurements) are necessary to implement fault-tolerant protocols. QPE (Svore et al. 2013) is an algorithm that estimates the eigenvalues of a unitary operator, fundamental for many quantum algorithms like Shor's algorithm. Quantum amplitude amplification (Brassard et al. 2002) generalizes Grover's algorithm for searching unsorted databases. Fault-tolerant quantum simulation simulates quantum systems on fault-tolerant quantum devices.

7.5 Performance comparison between quantum and classical machine learning

Machine learning technologies have developed into potent tools for a variety of application sectors, ranging from metrology to physics and natural language processing, encouraged by increasing computing power and algorithmic advances. A new area of study called "quantum machine learning" has originated from exploiting quantum systems to process data available in the classical domain. Despite having its roots in the processing of classical data, quantum machine learning studies the use of quantum phenomena for learning systems, the use of quantum computers for learning on quantum data, and the development and use of machine learning software and algorithms on quantum computers. Computer science may undergo a radical change as a result of quantum machine learning: information processing could be accelerated far beyond current classical speeds. Although Moore's law, which states that integrated-circuit capacity should double every 2 years, has proven remarkably durable since 1965, it is expected to come to an end because transistors in conventional computers can soon no longer be made any smaller. This makes quantum computing interesting.

Quantum processes allow the effective solution of mathematical problems that are challenging for conventional computers, which struggle to find solutions and occasionally fall short. In addition, it is envisaged that a new family of algorithms will be created using quantum-enhanced machine learning. This is because subatomic particles can exist concurrently in several states, which forms the basis of quantum computation. Unlike conventional computers, which are binary in nature (every state is represented as 0 or 1), quantum bits (qubits) can be superposed and entangled. Because qubits work in a far richer state space than ordinary bits, a quantum computer has greater inherent parallel computing capability than any available conventional computer.

Conventional computers may not be entirely replaced by quantum computers, but quantum computers will enable us to broaden the types of computer-tractable problems. Quantum computers can carry out calculations and resolve problems that traditional computers are unable to. For instance, standard hardware is unable to produce truly random numbers; the random number generators found in traditional computers are for this reason referred to as "pseudorandom generators." For a quantum computer, however, this is a simple task. Similarly, it is both time-consuming and expensive to try to replicate natural processes (such as subatomic particle behavior) using conventional bits. Because the underlying physical objects that constitute qubits match such natural processes, these computations are exceedingly simple to perform on a quantum computer. Table 4 shows a comparison of the computational complexity of classical and quantum machine learning algorithms.

7.6 Quantum hardware challenges

QML leverages quantum systems to tackle computationally intensive machine learning tasks. However, the practical deployment of QML algorithms is heavily constrained by the current state of quantum hardware. This section explores these limitations, their implications for QML, and potential mitigation strategies (Tables 28 and 29).

1. Noise and decoherence: Qubits are highly prone to environmental interactions, leading to errors and loss of quantum coherence over time. Variational quantum circuits and quantum neural networks suffer reduced accuracy and convergence due to decoherence during iterative processes. Advanced error mitigation techniques, such as zero-noise extrapolation and quantum error correction, are required to overcome noise and decoherence in quantum systems (Cerezo et al. 2022; Hu et al. 2024; Wang et al. 2024).
2. Limited qubit connectivity: NISQ devices typically have limited qubit-to-qubit connectivity, requiring additional operations to simulate ideal circuit behavior. Increased circuit depth and execution time lead to higher susceptibility to errors. Innovations in hardware architecture, such as lattice-based superconducting qubits and all-to-all connectivity in trapped ions, aim to address this limitation (Murali et al. 2020).
3. Qubit count: Current quantum processors have a restricted number of qubits, constraining algorithm scalability. Algorithms requiring large datasets or high-dimensional feature spaces are infeasible on NISQ devices. Hybrid quantum-classical frameworks partition workloads to optimize qubit usage (Placidi et al. 2023).
4. Gate fidelity: Quantum gates operate with finite precision, and errors accumulate over the execution of deep circuits. Algorithms with large parameter spaces require high-fidelity gates to ensure model accuracy. Development of error-robust gates and reduced-depth circuits is critical (Baum et al. 2021).
5. Benchmarking and validation: Lack of standardized benchmarks complicates the evaluation of quantum algorithm performance across devices. Variability in hardware performance impedes consistent assessment of quantum advantage (Kordzanganeh et al. 2023).
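Zero-noise extrapolation, mentioned under noise and decoherence above, can be illustrated with a toy model: assume the measured expectation value decays exponentially as the noise is artificially amplified (e.g., by gate folding), then fit the measured points and extrapolate back to zero noise. The decay model and constants below are assumptions for the demo, not measured hardware behavior.

```python
import numpy as np

TRUE_VALUE = 1.0   # ideal expectation value <Z> (assumed for the demo)
DECAY = 0.15       # per-unit noise strength (assumed)

def noisy_expectation(scale):
    """Toy noise model: depolarizing-like decay of the ideal expectation.
    On hardware this would be the measured value with gates folded to
    amplify the effective noise by `scale`."""
    return TRUE_VALUE * np.exp(-DECAY * scale)

scales = np.array([1.0, 2.0, 3.0])
values = np.array([noisy_expectation(s) for s in scales])

# Richardson-style extrapolation: fit a quadratic in the noise scale
# through the three measured points and evaluate it at scale 0.
coeffs = np.polyfit(scales, values, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)

print(round(float(values[0]), 4), round(float(zne_estimate), 4))  # 0.8607 0.9973
```

The extrapolated estimate recovers the ideal value to within about 0.3% here, while the raw noisy measurement is off by about 14%; in practice the improvement depends on how well the assumed decay model matches the device.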
Table 28 Practical implementation challenges of quantum machine learning: analysis of noise, error proneness, decoherence, and qubit limitations in recent work

Ganeshamurthy et al. (2024). Noise: noise leads to reduced prediction accuracy in electrical grid parameter estimation. Error proneness: quantum circuits may suffer from errors in multi-output Gaussian processes. Decoherence: decoherence can impact the long-running simulations required for grid parameter estimation. Qubits: qubit limitations restrict the complexity of the quantum circuits in practical applications.

Chinzei et al. (2024). Noise: quantum noise can impair the learning process in convolutional models. Error proneness: error-prone quantum components cause instability in model training. Decoherence: decoherence hampers the efficiency and accuracy of quantum convolution operations. Qubits: the number of qubits available is a bottleneck for implementing larger, more complex models.

Nguyen et al. (2024). Noise: noise in quantum circuits hinders the reliability of machine learning models. Error proneness: the overall quantum hardware quality impacts the error rates during computations. Decoherence: decoherence interferes with the precise quantum state manipulations required for ML models. Qubits: the limited qubits available constrain the potential for large-scale machine learning.

Hong et al. (2024). Noise: noise significantly affects the performance of equivalence checking in parameterized circuits. Error proneness: high error rates during quantum gate operations lead to incorrect equivalence results. Decoherence: decoherence and loss of quantum state fidelity impede the accurate checking of quantum circuits. Qubits: the limited number of qubits available hampers the simulation of large parameterized circuits.

Mandadapu (2024). Noise: noisy intermediate-scale quantum (NISQ) devices cause variability in image classification accuracy. Error proneness: quantum error-prone systems make it challenging to achieve reliable classification. Decoherence: decoherence significantly impacts the performance of image recognition tasks on NISQ devices. Qubits: qubit limitations in NISQ devices restrict the scalability and robustness of computer vision models.

Dutta et al. (2024). Noise: noise in quantum components affects the optimization and trainability of hybrid quantum-classical models. Error proneness: error rates in quantum hardware hinder convergence during training. Decoherence: decoherence makes it difficult to maintain quantum states for optimization over extended periods. Qubits: the qubit count available in NISQ devices limits the depth and complexity of hybrid training procedures.
Table 29 Recent work on mitigation strategies for practical challenges in quantum machine learning

Somogyi et al. (2024). Mitigation strategies: noise-induced regularization via training optimization; use of classical preprocessing to reduce noise impact. Challenges addressed: noise, decoherence. Disadvantages: regularization methods may introduce additional computational overhead; classical preprocessing might not fully eliminate quantum noise.

Slabbert and Petruccione (2024). Mitigation strategies: hybrid quantum-classical approach. Challenges addressed: noise, qubit limitations. Disadvantages: increased complexity in system design.
Community-driven benchmarks like QASMBench are establishing frameworks for standardized validation.

To address the practical implementation challenges in QML highlighted in the table, researchers have proposed several mitigation strategies. These strategies aim to reduce noise, error rates, decoherence, and the limitations of qubits. Key approaches include:

1. Noise mitigation:
• Error correction codes: Techniques like quantum error correction (QEC) and surface codes help to correct errors caused by noise in quantum circuits (Roffe 2019). These approaches have been shown to improve the reliability of quantum computations by encoding quantum information across multiple physical qubits, thus protecting it from noise and errors.
• Noise-resilient algorithms: Some quantum machine learning algorithms are specifically designed to be more resilient to noise. For instance, variational quantum algorithms (VQAs) can work within the constraints of noisy quantum devices by iteratively adjusting the quantum circuit to minimize the effect of noise (Benedetti et al. 2021; Wang et al. 2024).

2. Error-prone systems:
• Fault-tolerant quantum computing: This involves designing quantum algorithms that can function even in the presence of errors without leading to catastrophic failures (Moreno Casares 2023). For instance, quantum fault tolerance can help improve system robustness by using redundant encoding of quantum information.
• Hybrid quantum-classical approaches: Combining classical and quantum computing allows the error correction capabilities of classical systems to be leveraged while taking advantage of quantum enhancements for certain computations (Slabbert and Petruccione 2024). This hybrid approach can reduce the reliance on error-prone quantum hardware.

3. Decoherence:
• Quantum coherence time extension: Techniques such as dynamical decoupling can help mitigate the effects of decoherence by rapidly switching between states to avoid the influence of environmental noise. This method has been used in quantum computation to extend the coherence time of qubits (Somogyi et al. 2024).
• Improved quantum hardware: Advances in quantum hardware, such as trapped-ion and superconducting qubits, aim to increase the coherence time, making it possible to perform more computations before decoherence becomes an issue (Nikolaeva et al. 2024).

4. Qubit limitations:
• Quantum circuit optimization: To deal with the limitations of qubit numbers and connectivity, quantum circuits can be optimized to minimize the use of qubits. Techniques like quantum circuit simplification and qubit routing help to reduce the qubit overhead and make the algorithms more scalable (Slabbert and Petruccione 2024).
• Quantum hardware development: Scaling up qubit numbers is crucial for practical quantum machine learning. Advances in quantum computing hardware, such as quantum annealers and quantum processors with more qubits (e.g., those being developed by companies like IBM, Google, and Rigetti), are crucial in overcoming qubit limitations.

By employing these mitigation strategies, the practical implementation challenges of quantum machine learning can be addressed, helping to bring quantum computing closer to real-world applicability (Tables 30 and 31).

8 Applications of quantum machine learning

This section outlines various applications of quantum algorithms, the datasets referenced in the literature, and specific use cases in quantum machine learning.

In quantum chemistry, the VQE and QPE algorithms are employed. Datasets typically include molecular structures such as H2, LiH, and BeH2, as well as databases specific to quantum chemistry. Use cases in quantum machine learning involve predicting molecular properties and simulating chemical reactions, which are crucial for understanding and designing new molecules.

For optimization problems, QAOA and Grover's search are the primary algorithms used. Common datasets include instances of the MAX-CUT problem and the traveling salesman problem (TSP). These algorithms and datasets are applied to optimize supply chains and solve scheduling problems, leveraging quantum algorithms' potential to find solutions more efficiently than classical methods.

In machine learning, QSVM, QNN, and QPCA are prominent algorithms. The datasets used are familiar to classical machine learning practitioners, such as MNIST, the Iris dataset, and Fashion-MNIST. These algorithms are utilized for tasks like image classification, data clustering, and feature extraction, aiming to enhance machine learning models' performance with quantum computing's unique capabilities.
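The quantum kernel idea behind QSVMs can be sketched classically: encode each feature vector into a product of single-qubit rotations and use the state fidelity as a kernel for a classical SVM. The angle-encoding feature map below is one simple illustrative choice (a product state with no entangling layers), not the specific feature maps used in the cited works.

```python
import numpy as np

def feature_state(x):
    """Angle-encode a feature vector: feature x_i prepares one qubit in
    RY(x_i)|0> = cos(x_i/2)|0> + sin(x_i/2)|1>; the register is the
    tensor product of these single-qubit states."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])
        state = np.kron(state, qubit)
    return state

def quantum_kernel(x, y):
    """Fidelity kernel k(x, y) = |<phi(x)|phi(y)>|^2."""
    return float(np.dot(feature_state(x), feature_state(y)) ** 2)

a, b = np.array([0.2, 1.1]), np.array([0.3, 0.9])
print(round(quantum_kernel(a, a), 3))  # 1.0 -- identical points have fidelity 1
print(quantum_kernel(a, b) < 1.0)      # True

# The resulting Gram matrix can then be passed to a classical SVM,
# e.g., sklearn.svm.SVC(kernel="precomputed").
```

On hardware, the same fidelity would be estimated by running the feature-map circuit for x followed by the inverse circuit for y and measuring the probability of the all-zeros outcome; the quantum advantage, if any, comes from feature maps that are hard to simulate classically, unlike this product-state toy.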
Table 30 Summary of potential applications of quantum machine learning

Quantum chemistry (Arute et al. 2019; McClean et al. 2016; Peruzzo et al. 2014). Quantum algorithms: variational quantum eigensolver (VQE); quantum phase estimation (QPE). Datasets: molecular structures (e.g., H2, LiH, BeH2); quantum chemistry databases. Use cases: predicting molecular properties; simulating chemical reactions.

Optimization problems (Farhi et al. 2014; Kwiat et al. 2000). Quantum algorithms: quantum approximate optimization algorithm (QAOA); Grover's search. Datasets: MAX-CUT problem instances; traveling salesman problem (TSP) datasets. Use cases: optimizing supply chains; scheduling problems.

Machine learning (Rebentrost et al. 2014; Golchha and Verma 2023; Akpinar et al. 2024; Wang et al. 2024; Landman 2021). Quantum algorithms: quantum support vector machines (QSVM); quantum neural networks (QNN); quantum principal component analysis (QPCA). Datasets: MNIST; Iris dataset; Fashion-MNIST. Use cases: image classification; data clustering; feature extraction.

Cryptography (Lanyon et al. 2007; Gisin et al. 2002). Quantum algorithms: Shor's algorithm; quantum key distribution (QKD). Datasets: RSA encryption keys; quantum cryptographic protocols. Use cases: secure communications; breaking classical encryption schemes.

Finance (Huggins et al. 2022). Quantum algorithms: quantum Monte Carlo (QMC); quantum amplitude estimation (QAE). Datasets: financial time series data; portfolio optimization datasets. Use cases: risk analysis; pricing of financial derivatives.

Material science (Daley et al. 2022; Verstraete et al. 2023). Quantum algorithms: quantum simulation; density matrix renormalization group (DMRG). Datasets: crystalline structures; material property datasets. Use cases: discovering new materials; simulating material properties.

Drug discovery (Salo-Ahen et al. 2020; Wang et al. 2023). Quantum algorithms: quantum molecular dynamics; VQE. Datasets: protein-ligand binding datasets; drug compound databases. Use cases: drug efficacy predictions; molecular docking simulations.
Table 31 Recent developments in quantum machine learning

Senokosov et al. (2024). Algorithm: hybrid quantum neural networks (HQNN) with parallel quantum circuits and quanvolutional layers. Key points: achieved 99.21% accuracy on MNIST with 8x fewer parameters than a classical CNN; generalized performance on Medical MNIST (>99%) and CIFAR-10 (>82%); quanvolutional layers reduce image resolution effectively with fewer parameters. Methodology: two hybrid quantum models: HQNN-parallel, incorporating parallel quantum circuits for encoding and classification, and a hybrid model using quanvolutional layers for dimensionality reduction and feature extraction; loss function optimization (cross-entropy) employed for training on datasets like MNIST and CIFAR-10. Limitations: quantum circuits are constrained by current hardware limits; real-world deployment requires addressing noise and scalability challenges.

Lohani et al. (2023). Algorithm: quantum Bayesian networks. Key points: enhanced algorithms for probabilistic graphical models in quantum contexts. Methodology: use quantum circuits to represent and infer Bayesian networks. Limitations: demanding on quantum resources and error rates.

Cerezo et al. (2021). Algorithm: variational quantum algorithms (VQAs). Key points: development of noise-resilient optimization techniques. Methodology: use hybrid quantum-classical algorithms for parameter optimization. Limitations: susceptibility to barren plateaus.

Zoufal et al. (2021). Algorithm: quantum Boltzmann machines (QBMs). Key points: enhanced training techniques for better approximation of distributions. Methodology: combine quantum annealing with classical postprocessing. Limitations: training can be slow and resource-intensive.

Chen et al. (2020). Algorithm: quantum reinforcement learning (QRL). Key points: novel approaches for quantum agents in reinforcement learning. Methodology: leverage quantum states and operations to accelerate learning. Limitations: complexity in implementation and high noise sensitivity.

Schuld et al. (2020). Algorithm: quantum neural networks (QNNs). Key points: development of QNNs for specific applications like image recognition. Methodology: combine quantum layers with classical neural network architectures. Limitations: require significant quantum resources and hybrid training complexity.

Stokes et al. (2020). Algorithm: quantum natural gradient descent. Key points: optimized techniques for training variational quantum circuits. Methodology: leverage natural gradients for efficient optimization. Limitations: challenges with noise and hardware requirements.

Mari et al. (2020). Algorithm: quantum transfer learning. Key points: application of transfer learning principles to quantum models. Methodology: transfer knowledge from pre-trained quantum models to new tasks. Limitations: hybrid implementation complexity and resource demands.

Bondarenko and Feldmann (2020). Algorithm: quantum autoencoders. Key points: advanced methods for compressing and reconstructing quantum data. Methodology: use quantum circuits for encoding and decoding processes. Limitations: require high coherence times and efficient error correction.

Cong et al. (2019). Algorithm: quantum convolutional neural networks (QCNNs). Key points: enhanced methods for image recognition and quantum phase detection. Methodology: apply quantum circuits to perform convolution operations on data. Limitations: hardware noise and coherence time limitations.

Zoufal et al. (2019). Algorithm: quantum generative adversarial networks (QGANs). Key points: advances in training algorithms to improve convergence. Methodology: quantum circuits are used for both the generator and discriminator. Limitations: high resource demands and training instability.

Havlíček et al. (2019). Algorithm: quantum support vector machines (QSVMs). Key points: implementation of QSVMs with improved kernel estimation methods. Methodology: utilize quantum kernel estimation to enhance SVM performance. Limitations: requires large quantum datasets and error correction.

Kerenidis et al. (2019). Algorithm: quantum k-means clustering. Key points: refined quantum algorithms for efficient data clustering. Methodology: utilize quantum distance measures for clustering. Limitations: scalability and error correction remain challenging.

Schuld and Killoran (2019). Algorithm: quantum kernel estimation. Key points: improved algorithms for estimating quantum kernels for ML tasks. Methodology: leverage quantum circuits to compute effective kernels. Limitations: high resource demands and noise sensitivity.

Lloyd et al. (2014). Algorithm: quantum principal component analysis (QPCA). Key points: improved algorithms for speeding up high-dimensional data analysis. Methodology: perform PCA using quantum circuits. Limitations: high demands on quantum resources and error correction.

Farhi et al. (2014). Algorithm: quantum approximate optimization algorithm (QAOA). Key points: improvements for solving optimization problems with quantum techniques. Methodology: implement QAOA for combinatorial optimization tasks. Limitations: sensitive to noise and hardware limitations.
Cryptography benefits from Shor’s algorithm and quantum key distribution (QKD). Datasets include RSA encryption keys and various quantum cryptographic protocols. These quantum algorithms are pivotal for secure communications and have the potential to break classical encryption schemes, thus revolutionizing data security.

In the finance sector, quantum Monte Carlo (QMC) and quantum amplitude estimation (QAE) are key algorithms. Financial time series data and portfolio optimization datasets are typically used. These applications include risk analysis and the pricing of financial derivatives, where quantum algorithms can provide more accurate and faster computations compared to classical methods.

Material science utilizes quantum simulation and density matrix renormalization group (DMRG) algorithms. The datasets involve crystalline structures and material property databases. Quantum machine learning applications in this field include discovering new materials and simulating their properties, which is essential for developing advanced materials with desirable characteristics.

Finally, in drug discovery, algorithms like quantum molecular dynamics and VQE are used. Relevant datasets include protein-ligand binding datasets and drug compound databases. These algorithms assist in predicting drug efficacy and performing molecular docking simulations, which are critical steps in the drug discovery process.

9 Recent developments in quantum machine learning

Attempts are being made to connect quantum information processing to AI through simulation. Large quantum datasets can be simulated using quantum computing (QC), which also accelerates search and optimization. This is particularly advantageous for AI (Tables 32 and 33).

For instance, Grover’s method can be applied in several ways to solve search problems more quickly, and some contemporary developments in quantum machine learning have produced exponential improvements in some machine learning techniques. Recent developments in QML have shown significant advancements across various algorithms. Quantum convolutional neural networks (QCNNs) have demonstrated enhanced capabilities in classifying quantum phases and detecting errors (Cong et al. 2019). VQAs have seen improvements in optimization frameworks, making them more robust to noise and efficient in resource usage (Cerezo et al. 2021). Quantum generative adversarial networks (QGANs) have benefited from better training algorithms, enabling them to handle high-dimensional data more effectively (Zoufal et al. 2019). QSVMs have achieved scalability improvements and better integration with error correction techniques (Havlíček et al. 2019). Quantum reinforcement learning (QRL) has introduced novel algorithms that accelerate learning and improve decision-making in complex environments (Chen et al. 2020). Lastly, quantum Boltzmann machines (QBMs) have advanced with more efficient training methods, allowing for more accurate modeling of complex data distributions (Zoufal et al. 2021). These developments collectively enhance the potential of QML to address computational challenges beyond the reach of classical machine learning methods. The tables below highlight some of the recent developments in the field of quantum machine learning.

One area that has garnered attention in quantum machine learning is hybrid quantum-classical machine learning (HQCML), which has emerged as a promising approach in recent QML developments, aiming to combine the strengths of classical and quantum computing to solve complex problems. These algorithms leverage quantum computing’s ability to process large, high-dimensional data spaces and perform optimizations that would be intractable for classical systems, while still relying on classical computing for data preprocessing, postprocessing, and other supportive tasks.

One key area of HQCML research focuses on integrating quantum circuits within classical machine learning models, such as quantum-enhanced neural networks, support vector machines (SVMs), and generative adversarial networks (GANs). In these models, quantum processing is typically applied to tasks like feature space mapping, dimensionality reduction, or kernel computation, while classical computing handles the heavier lifting in terms of model training, optimization, and generalization across large datasets. For example, quantum circuits can be used for feature encoding or for creating quantum kernels for classification tasks, which classical systems then use for training machine learning models.

Recent studies have also explored hybrid quantum models for specific applications, such as drug discovery, power plant optimization, and satellite mission planning. These applications benefit from the computational power of quantum processors, particularly in solving problems involving combinatorial optimization or simulating quantum systems, tasks where classical systems may struggle due to the computational cost.

However, hybrid quantum-classical systems are not without their challenges. Current quantum hardware is still in the NISQ regime, meaning that the quantum processors are prone to noise and error, limiting their utility in large-scale applications. Moreover, finding efficient hybrid algorithms that can balance quantum and classical workloads is an ongoing area of research. Many proposed algorithms still rely heavily on classical systems for critical components, such as opti-
Table 32 Recent work in hybrid classical-quantum machine learning: advancements, challenges, application domains, and distinct roles of classical and quantum components

| Ref | Year | Key contributions | Weaknesses | Area of application | Classical | Quantum |
|---|---|---|---|---|---|---|
| Senokosov et al. (2024) | 2024 | Introduced QCNNs for scalable image classification, outperforming classical CNNs on certain datasets | High qubit fidelity required; limited testing on large datasets | Image classification | Classical CNN layers for feature extraction | Quantum convolution layers for complex features |
| Lusnig et al. (2024) | 2024 | Developed a hybrid model integrating federated learning and quantum image classification | Limited to specific medical imaging datasets; scalability issues with real-world data | Medical image analysis | Federated learning and local dataset preprocessing | Quantum image processing layers |
| Haboury et al. (2023) | 2023 | Proposed a supervised hybrid quantum model for optimizing escape routes during emergencies | Limited real-world testing; depends on the noise level of quantum hardware | Emergency response planning | Training on historical routing data | Real-time escape route optimization |
| Kordzanganeh et al. (2023) | 2023 | Developed parallel hybrid networks combining quantum and classical layers for enhanced performance in neural network tasks | Requires advanced optimization techniques for balancing quantum and classical workloads | Neural network optimization | Training and classical layer computations | Intermediate feature embedding and quantum depth tuning |
| Sagingalieva et al. (2023) | 2023 | Integrated quantum layers into ResNet for car classification, achieving better hyperparameter tuning with hybrid methods | Focused on a single domain; limited cross-domain applicability | Image recognition and automotive applications | Data augmentation and hyperparameter tuning | Intermediate feature processing |
| Sagingalieva et al. (2023) | 2023 | Improved photovoltaic power forecasting using hybrid models combining classical ML and quantum optimization | Dependence on high-quality data preprocessing | Renewable energy forecasting | Feature selection and data preprocessing | State optimization for prediction |
| Anoshin et al. (2024) | 2023 | Proposed a hybrid quantum GAN for generating molecular structures, improving small molecule screening | Limited scalability to larger molecular libraries | Drug discovery and computational chemistry | GAN architecture design and adversarial training | Quantum-enhanced molecule generation |
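To illustrate the division of labor in the hybrid models above — a quantum circuit supplies a kernel, classical code consumes it — here is a minimal statevector sketch in which a scalar feature is angle-encoded into a single qubit and the kernel value is the fidelity between the two encoded states. This is illustrative only; the function names and the one-qubit RY encoding are our own, not taken from any cited work.

```python
import math

def feature_state(x):
    """Angle-encode a scalar feature as the single-qubit state RY(x)|0>."""
    return (math.cos(x / 2), math.sin(x / 2))

def quantum_kernel(x1, x2):
    """Kernel value |<phi(x1)|phi(x2)>|^2: the fidelity of the two encodings."""
    a, b = feature_state(x1), feature_state(x2)
    overlap = a[0] * b[0] + a[1] * b[1]
    return overlap ** 2

xs = [0.1, 0.5, 2.8, 3.0]   # toy 1-D dataset
gram = [[quantum_kernel(xi, xj) for xj in xs] for xi in xs]

# a state's kernel with itself is 1; nearby inputs score close to 1
print(round(gram[0][0], 3), round(gram[0][1], 3), round(gram[0][2], 3))
```

The resulting Gram matrix is exactly what a classical kernel method (e.g. an SVM solver) would ingest; on hardware the overlaps would be estimated from measurements rather than computed in closed form.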
Table 33 Key advances and advantages in quantum machine learning: a summary of recent research

| Ref | Year | Key contribution | Focus area | Weakness | Quantum/hybrid |
|---|---|---|---|---|---|
| Li and Deng (2025) | 2025 | Introduces quantum homomorphic encryption for secure delegated and federated learning | Privacy protection | Complexity in encryption protocols could impact efficiency | Quantum |
| Zhang et al. (2024) | 2024 | Explores quantum-classical separations in shallow circuits, even in noisy environments | Computational speedup | Focuses on shallow circuits, may not scale to deeper networks | Quantum |
| Li and Deng (2024) | 2024 | Focuses on mitigating noise in quantum devices to ensure reliable learning outputs | Quantum robustness | Dependent on the quality of quantum devices, which may limit scalability | Quantum |
| Gong et al. (2024) | 2024 | Investigates improving quantum adversarial robustness using randomized encodings | Quantum robustness | May not be fully optimized for large-scale applications | Quantum |
| Ren et al. (2022) | 2022 | Implements quantum adversarial learning on superconducting qubits | Quantum robustness | Focused on a specific experimental setup, may not generalize to all systems | Quantum |
| Pei-Xin et al. (2021) | 2021 | Discusses adversarial learning methods in quantum AI | Quantum robustness | May require advanced quantum hardware for practical implementation | Quantum |
| Liu et al. (2021) | 2021 | Demonstrates quantum speedup in supervised learning tasks | Computational speedup | Limited to specific learning scenarios | Quantum |
| Li et al. (2021) | 2021 | Proposes a quantum federated learning approach using blind quantum computing for privacy | Privacy protection | Relies on blind quantum computing, which may have scalability issues | Quantum |
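The "computational speedup" entries above trace back to primitives such as Grover search, mentioned in Sect. 9. For N = 4 items a single oracle-plus-diffusion iteration already finds the marked item with certainty, which a short pure-Python sketch makes explicit (illustrative only: exact amplitudes are tracked instead of sampled measurements).

```python
# Grover search over N = 4 items with one marked item (index 2).
n_states = 4
marked = 2
amps = [0.5] * n_states          # uniform superposition from H on both qubits

# oracle: flip the sign of the marked amplitude
amps[marked] *= -1
# diffusion operator: reflect every amplitude about the mean amplitude
mean = sum(amps) / n_states
amps = [2 * mean - a for a in amps]

probs = [round(a * a, 6) for a in amps]
print(probs)   # [0.0, 0.0, 1.0, 0.0] -- all probability on the marked item
```

A classical unstructured search needs about N/2 queries on average; Grover's algorithm needs about sqrt(N) oracle calls, which for N = 4 is the single iteration shown here.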
mization and error correction, and further advancements in quantum error correction and fault-tolerant quantum computing are needed to realize the full potential of HQCML.

Overall, hybrid quantum-classical machine learning algorithms represent an exciting frontier, with potential applications across various fields like healthcare, finance, energy, and optimization. However, challenges related to hardware limitations and algorithm design remain a key focus for researchers, and achieving practical, large-scale implementations will require continued collaboration between the quantum computing and classical machine learning communities.

9.1 Key contributions of quantum learning

Quantum learning offers significant advantages over classical machine learning techniques, particularly in the areas of computational speedup, privacy protection, and robustness in adversarial settings.

In terms of computational speedup, quantum approaches have shown the potential to outperform classical methods in supervised machine learning tasks. The study by Liu et al. (2021) demonstrates a quantum speedup in the training of machine learning models, achieving improved performance in certain learning problems. Zhang et al. (2024) explore the advantages of shallow quantum circuits, which offer quantum-classical separations in learning tasks even in noisy environments. These results highlight how quantum computing can provide substantial computational advantages, particularly in complex and noisy settings, where classical algorithms might struggle.

Another key advantage of quantum learning is privacy protection. Traditional machine learning methods, particularly in federated learning, face challenges regarding data privacy, as sensitive information is exchanged between parties. Quantum techniques offer promising solutions in this area, with works such as Li et al. (2021) and Li and Deng (2025) proposing frameworks for secure machine learning. These approaches use quantum cryptographic protocols to ensure that data remains private during training and learning processes, providing an additional layer of security and trust that is not easily achievable through classical means.

Finally, robustness is a crucial benefit of quantum learning, especially when considering adversarial settings. In traditional machine learning, models are vulnerable to adversarial attacks that manipulate input data to mislead the system. Quantum learning offers enhanced adversarial robustness
through the properties of quantum systems. The study by Ren et al. (2022) demonstrates how quantum adversarial learning can be implemented on superconducting qubits, offering a new approach to securing quantum models. Furthermore, Li and Deng (2024) address how to mitigate noise in quantum devices to ensure the reliability of learning outputs. Additional research by Pei-Xin et al. (2021) and Gong et al. (2024) explores methods to strengthen quantum systems against adversarial attacks by using quantum encodings and randomized techniques, thereby enhancing the robustness of quantum machine learning models in real-world applications.

10 Discussion

We classified QML algorithms into three primary learning paradigms: supervised, unsupervised, and reinforcement learning.

1. Supervised learning: Quantum algorithms such as QSVMs and QNNs demonstrate potential for significant speedups in training and inference. Their ability to handle large datasets with high-dimensional feature spaces highlights their suitability for complex classification tasks. However, practical implementations are currently limited by the depth of quantum circuits and error rates in NISQ devices.
2. Unsupervised learning: Quantum clustering algorithms, including quantum k-means and QPCA, offer promising avenues for data analysis and pattern recognition. These algorithms can process vast amounts of data more efficiently than classical counterparts, but they remain constrained by noise and decoherence in current quantum hardware.
3. Reinforcement learning: Quantum reinforcement learning algorithms, such as quantum Q-learning, show potential for optimizing decision-making processes in dynamic environments. Their ability to explore and exploit large state-action spaces more efficiently could revolutionize fields like robotics and autonomous systems. However, the practical application of these algorithms is hindered by the limitations of NISQ-era quantum processors.

10.1 NISQ suitability

NISQ devices, characterized by their limited qubit counts and susceptibility to noise, present unique challenges and opportunities for QML. Algorithms designed for NISQ devices must balance performance with resilience to noise and decoherence.

1. Quantum variational algorithms: These algorithms, including the VQE and the quantum approximate optimization algorithm (QAOA), are particularly well-suited for NISQ devices. They leverage classical-quantum hybrid approaches to optimize parameterized quantum circuits, demonstrating robustness to noise and hardware imperfections.
2. Quantum annealing: Quantum annealing methods, implemented on platforms like D-Wave, show promise for solving optimization problems under NISQ constraints. They are relatively resilient to certain types of noise, although their applicability to a broader range of QML tasks is limited compared to gate-based quantum computing.

10.2 Fault-tolerant design

Fault-tolerant quantum computing aims to mitigate the effects of noise and errors through error correction codes and robust circuit designs.

1. Topological quantum computing: Leveraging anyons and braiding operations, topological quantum computing offers a pathway to fault-tolerant QML. While still in its early stages, its potential to operate error-free could enable scalable and reliable implementation of complex QML algorithms.
2. Surface codes: Surface codes provide a practical approach to error correction, enabling fault-tolerant operations by encoding logical qubits into larger numbers of physical qubits. This approach could support the development of scalable QML algorithms, although significant technical challenges remain in achieving low enough error rates.

10.3 Effect of noise

Noise remains a critical challenge in the deployment of QML algorithms. Its impact varies across different learning paradigms and hardware implementations.

1. Mitigation techniques: Techniques such as error mitigation, noise-aware algorithm design, and quantum error correction are essential for enhancing the performance of QML algorithms on NISQ devices. Strategies like zero-noise extrapolation and probabilistic error cancellation show promise in reducing the detrimental effects of noise.
2. Adaptive algorithms: Adaptive algorithms that adjust their parameters or learning strategies in response to observed noise levels can improve robustness and reliability. These algorithms are particularly valuable for dynamic and noisy environments, where maintaining accuracy is crucial.
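The classical-quantum hybrid loop of the variational algorithms discussed in Sect. 10.1 can be sketched for the smallest possible case: a single qubit whose RY angle is trained to minimize ⟨Z⟩, with gradients obtained from the parameter-shift rule. This is a self-contained illustration; a real VQE would estimate each expectation from measurement samples on a parameterized circuit rather than evaluating it analytically.

```python
import math

def energy(theta):
    """<psi(theta)|Z|psi(theta)> for |psi> = RY(theta)|0>; equals cos(theta)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return c * c - s * s

def gradient(theta):
    """Parameter-shift rule: the exact gradient from two shifted evaluations."""
    return 0.5 * (energy(theta + math.pi / 2) - energy(theta - math.pi / 2))

theta, lr = 0.4, 0.4      # initial guess and learning rate
for _ in range(100):      # classical optimizer driving the quantum evaluations
    theta -= lr * gradient(theta)

print(round(energy(theta), 4))   # converges to the ground-state energy -1.0
```

The parameter-shift rule is what makes such loops hardware-friendly: the gradient comes from the same kind of circuit evaluation as the objective itself, with no finite-difference step size to tune.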
Data Availability No datasets were generated or analyzed during the Carugno C, Dacrema MF, Cremonesi P (2024) Adaptive learning for
current study. quantum linear regression
Cemin G, Cech M, Weiss E, Soltan S, Braun D, Lesanovsky I, Carollo
F (2024) Machine learning of quantum channels on NISQ devices.
Declarations arXiv preprint arXiv:2405.12598
Cerezo M, Arrasmith A, Babbush R, Benjamin SC, Endo S, Fujii K,
McClean JR, Mitarai K, Yuan X, Cincio L et al (2021) Variational
Conflict of interest The authors declare no competing interests. quantum algorithms. Nature Rev Phys 3(9):625–644
Cerezo M, Verdon G, Huang H-Y, Cincio L, Coles PJ (2022) Challenges
and opportunities in quantum machine learning. Nat Comput Sci
2(9):567–576
References Cerezo M, Arrasmith A, Babbush R et al (2022) Challenges and oppor-
tunities in quantum machine learning. Nature Rev Phys
Aaronson S (2009) BQP and the polynomial hierarchy Chancellor N (2023) Modernizing quantum annealing II: genetic
Abbas OA (2008) Comparisons between data clustering algorithms. Int algorithms with the inference primitive formalism. Nat Comput
Arab J Inf Technol (IAJIT) 5(3) 22(4):737–752
Achache T, Horesh L, Smolin J (2020) Denoising quantum states with Chen K-C, Li T-Y, Wang Y-Y, See S, Wang C-C, Wille R, Chen
quantum autoencoders–theory and applications. arXiv preprint N-Y, Yang A-C, Lin C-Y (2024) cuTN-QSVM: cuTensorNet-
arXiv:2012.14714 accelerated quantum support vector machine with cuQuantum
Ai H, Liu Y (2024) Graph neural networks-based parameter design SDK
towards large-scale superconducting quantum circuits for crosstalk Chen SY-C (2024) Differentiable quantum architecture search in asyn-
mitigation chronous quantum reinforcement learning
Akpinar E, Islam SMN, Oduncuoglu M (2024) Evaluating the impact Chen SY-C, Huang C-M, Hsing C-W, Kao Y-J (2020) Hybrid quantum-
of different quantum kernels on the classification performance of classical classifier based on tensor network and variational quan-
support vector machine algorithm: a medical dataset application tum circuit. arXiv preprint arXiv:2011.14651
Al-Zafar Khan M, Al-Karaki J, Omar M (2024) Predicting water quality Chen SY-C, Yang C-HH, Qi J, Chen P-Y, Ma X, Goan H-S (2020) Vari-
using quantum machine learning: the case of the Umgeni catch- ational quantum circuits for deep reinforcement learning. IEEE
ment (U20A) study region access 8:141007–141024
Amin MH, Andriyash E, Rolfe J, Kulchytskyy B, Melko R (2018) Quan- Chinzei K, Tran QH, Endo Y, Oshima H (2024) Resource-efficient
tum Boltzmann machine. Phys Rev X 8(2):021050 equivariant quantum convolutional neural networks
Anoshin M, Sagingalieva A, Mansell C, Zhiganov D, Shete V, Pflitsch Cong I, Choi S, Lukin MD (2019) Quantum convolutional neural net-
M, Melnikov A (2024) Hybrid quantum cycle generative adversar- works. Nat Phys 15(12):1273–1278
ial network for small molecule generation. IEEE Trans Quantum Crawford D, Levit A, Ghadermarzy N, Oberoi JS, Ronagh P (2019)
Eng 5:1–14 Reinforcement learning using quantum Boltzmann machines
Arute F et al (2019) Quantum supremacy using a programmable super- Daley AJ, Bloch I, Kokail C, Flannigan S, Pearson N, Troyer M, Zoller
conducting processor. Nature 574(7779):505–510 P (2021) Practical quantum advantage in quantum simulation.
Basilewitsch D, Bravo JF, Tutschku C, Struckmeier F (2024) Quantum Nature 607(7920):667–676
neural networks in practice: a comparative study with classical Devitt SJ, Munro WJ, Nemoto K (2013) Quantum error correction for
models from standard data sets to industrial images beginners. Rep Prog Phys 76(7):076001
Baum Y, Amico Y, Howell S, Hush M, Liuzzi M, Mundada P, Merkh T, Dhaulakhandi R (2023) Quantum neural network architecture to per-
Carvalho ARR, Biercuk MJ (2021) Experimental deep reinforce- form machine learning tasks on NISQ computers. PhD thesis
ment learning for error-robust gate-set design on a superconducting Djidjev HN, Chapuis G, Hahn G, Rizk G (2018) Efficient combi-
quantum computer. PRX Quantum 2(4) natorial optimization using quantum annealing. arXiv preprint
Beigi S, Taghavi L, Tajdini A (2022) Time-and query-optimal quantum arXiv:1801.08653
algorithms based on decision trees. ACM Trans Quantum Comput Domino K, Koniorczyk M, Krawiec K, Jałowiecki K, Deffner S, Gardas
3(4):1–31 B (2023) Quantum annealing in the NISQ era: railway conflict
Benedetti M, Lloyd E, Sack S, Fiorentini M (2019) Parameterized quan- management. Entropy 25(2):191
tum circuits as machine learning models. Quantum Sci Technol Dong D, Chen C, Li H, Tarn T-J (2008) Quantum reinforcement
4(4):043001 learning. IEEE Trans Syst, Man, Cybernetics, B (Cybernetics)
Benedetti M, Fiorentini M, Lubasch M (2021) The role of entangle- 38(5):1207–1220
ment and noise in variational quantum algorithms. Nat Mach Intell Doriguello JF, Lim D, Pun CS, Rebentrost P, Vaidya T (2024) Quantum
3(7):558–566 algorithms for the pathwise lasso
Bharti K, Cervera-Lierta A, Kyaw TH, Haug T, Alperin-Lea S, Anand Duan B, Yuan J, Yu C-H, Huang J, Hsieh C-Y (2020) A survey on
A, Degroote M, Heimonen H, Kottmann JS, Menke T, Mok W-K, HHL algorithm: from theory to application in quantum machine
Sim S, Kwek L-C, Aspuru-Guzik A (2022) Noisy intermediate- learning. Phys Lett A 384(24):126595
scale quantum algorithms. Rev Mod Phys 94(1) Dunjko V, Taylor JM, Briegel HJ (2016) Quantum-enhanced machine
Biamonte J, Wittek P, Pancotti N, Rebentrost P, Wiebe N, Lloyd S (2017) learning. Phys Rev Lett 117(13):130501
Quantum machine learning. Nature 549(7671):195–202 Durr C, Hoyer P (1996) A quantum algorithm for finding the minimum
Blekos K, Brand D, Ceschini A, Chou C-H, Li R-H, Pandya K, Sum- Dutta T, Jin A, Huihong CL, Latorre JI, Mukherjee M (2024) Trainabil-
mer A (2024) A review on quantum approximate optimization ity of a quantum-classical machine in the NISQ era
algorithm and its variants. Phys Rep 1068:1–66 Fang P, Zhang C, Situ H (2024) Quantum state clustering algo-
Bondarenko D, Feldmann D (2020) Quantum autoencoders to denoise rithm based on variational quantum circuit. Quantum Inf Process
quantum data. Phys Rev Lett 124(13):130502 23(4):125
Brassard G, Hoyer P, Mosca M, Tapp A (2002) Quantum amplitude Farhi E, Goldstone J, Gutmann S (2014) A quantum approximate opti-
amplification and estimation. Contemp Math 305:53–74 mization algorithm. arXiv preprint arXiv:1411.4028
123
Quantum Machine Intelligence (2025) 7:39 Page 53 of 55 39
Farhi E, Neven H (2018) Classification with quantum neural networks Khadiev K, Mannapov I, Safina L (2019) The quantum version of classi-
on near term processors. arXiv preprint arXiv:1802.06002 fication decision tree constructing algorithm C5. 0. arXiv preprint
Farshi E (2023) Hybrid quantum-classical approach: quantum-inspired arXiv:1907.06840
deep learning using classical simulation Khan TM, Robles-Kelly A (2020) Machine learning: quantum vs clas-
Ganeshamurthy PA, Ghosh K, O’Meara C, Cortiana G, Schiefelbein- sical. IEEE Access 8:219275–219294
Lach J, Monti A (2024) Quantum multi-output gaussian processes Khoo JY, Gan CK, Ding W, Carrazza S, Ye J, Kong JF (2024) Bench-
based machine learning for line parameter estimation in electrical marking quantum convolutional neural networks for classification
grids and data compression tasks
Gao X, Zhang Z-Y, Duan L-M (2018) A quantum machine learning Khoshaman A, Vinci W, Denis B, Andriyash E, Sadeghi H, Amin
algorithm based on generative models. Sci Adv 4(12):eaat9004 MH (2018) Quantum variational autoencoder. IOP Publishing Ltd.
Gisin N, Ribordy G, Tittel W, Zbinden H (2002) Quantum cryptography. Published 12 September
Rev Mod Phys 74(1):145 Knill E, Laflamme R (1997) Theory of quantum error-correcting codes.
Golchha R, Verma GK (2023) Quantum-enhanced support vector clas- Phys Rev A 55(2):900
sifier for image classification. In 2023 IEEE 8th International Kordzanganeh M, Kosichkina D, Melnikov A (2023) Parallel hybrid
Conference for Convergence in Technology (I2CT), pp 1–6 networks: an interplay between quantum and classical neural net-
Gong W, Yuan D, Li W, Deng D-L (2024) Enhancing quantum works. Intell Comput 2:0028
adversarial robustness by randomized encodings. Phys Rev Res Kordzanganeh M, Buchberger M, Kyriacou B, Povolotskii M, Fischer
6(2):023020 W, Kurkin A, Somogyi W, Sagingalieva A, Pflitsch M, Melnikov A
Gopalakrishnan D, Dellantonio L, Pilato AD, Redjeb W, Pantaleo F, (2023) Benchmarking simulated and physical quantum processing
Mosca M (2024) qLUE: a quantum clustering algorithm for multi- units using quantum and hybrid algorithms. Adv Quantum Technol
dimensional datasets 6(8)
Grossi M, Ibrahim N, Radescu V, Loredo R, Voigt K, Von Altrock Koura A, Imoto T, Ura K, Matsuzaki Y (2024) Linear regression using
C, Rudnik A (2022) Mixed quantum-classical method for fraud quantum annealing with continuous variables
detection with quantum feature selection. IEEE Trans Quantum Krenn M, Landgraf J, Foesel T, Marquardt F (2023) Artificial intelli-
Eng 3:1–12 gence and machine learning for quantum technologies. Phys Rev
Gujju Y, Matsuo A, Raymond R (2024) Quantum machine learning on A 107(1)
near-term quantum devices: current state of supervised and unsu- Kulshrestha A, Safro I (2022) Beinit: avoiding barren plateaus in varia-
pervised techniques for real-world applications. Phys Rev Appl tional quantum algorithms. In 2022 IEEE International Conference
21(6):067001 on Quantum Computing and Engineering (QCE), pp 197–203
Haboury N, Kordzanganeh M, Schmitt S, Joshi A, Tokarev I, Abdallah Kumar V, Bass G, Tomlin C, Dulny J (2018) Quantum annealing for
L, Kurkin A, Kyriacou B, Melnikov A (2023) A supervised hybrid combinatorial clustering. Quantum Inf Process 17:1–14
quantum machine learning solution to the emergency escape rout- Kumar N, Yalovetzky R, Li C, Minssen P, Pistoia M (2024) Des-q:
ing problem a quantum algorithm to provably speedup retraining of decision
Havenstein C, Thomas D, Chandrasekaran S (2018) Comparisons of trees
performance between quantum and classical machine learning. Kurkin A, Hegemann J, Kordzanganeh M, Melnikov A (2024) Fore-
SMU Data Sci Rev 1(4):11 casting steam mass flow in power plants using the parallel hybrid
Havlíček V, Córcoles AD, Temme K, Harrow AW, Kandala A, Chow network
JM, Gambetta JM (2019) Supervised learning with quantum- Kwiat PG, Mitchell JR, Schwindt PDD, White AG (2000) Grover’s
enhanced feature spaces. Nature 567(7747):209–212 search algorithm: an optical approach. J Mod Opt 47(2–3):257–
He C, Li J, Liu W, Wang ZJ (2021) A low complexity quantum principal 266
component analysis algorithm Landman J (2021) Quantum algorithms for unsupervised machine
Hong X, Huang W-J, W-C Chien, Y Feng, Hsieh M-H, Li S, Ying M learning and neural networks
(2024) Equivalence checking of parameterised quantum circuits Lanyon BP, Weinhold TJ, Langford NK, Barbieri M, James DFV,
Houssein EH, Abohashima Z, Elhoseny M, Mohamed WM (2022) Gilchrist A, White AG (2007) Experimental demonstration of a
Hybrid quantum-classical convolutional neural network model for compiled version of Shor’s algorithm with quantum entanglement.
COVID-19 prediction using chest X-ray images. J Comput Des Phys Rev Lett 99(25):250505
Eng 9(2):343–363 Larocca M, Thanasilp S, Wang S, Sharma K, Biamonte J, Coles PJ,
Houssein EH, Abohashima Z, Elhoseny M, Mohamed WM (2022) Cincio L, McClean JR, Holmes Z, Cerezo M (2024) A review of
Machine learning in the quantum realm: the state-of-the-art, chal- barren plateaus in variational quantum computing. arXiv preprint
lenges, and future vision. Expert Syst Appl 194:116512 arXiv:2405.00781
Huggins WJ, O’Gorman BA, Rubin NC, Reichman DR, Babbush R, Li W, Deng D-L (2022) Recent advances for quantum classifiers. Sci
Lee J (2022) Unbiasing fermionic quantum Monte Carlo with a China Phys Mech Astron 65(2):220301
quantum computer. Nature 603(7901):416–420 Li W, Deng D-L (2025) Quantum delegated and federated learning
Hu F, Khan SA, Bronn NT, Angelatos G, Rowlands GE, Ribeill GJ, via quantum homomorphic encryption. Quantum Technologies,
Türeci HE (2024) Overcoming the coherence time barrier in quan- Research Directions, pp 1–6
tum machine learning on temporal data. Nature Commun 15(1) Li Y, Wang Y, Wang Y, Jiao L, Liu Y (2016) Quantum clustering using
Kerenidis I, Landman J, Luongo A, Prakash A (2018) q-means: a quan- kernel entropy component analysis. Neurocomputing 202:36–48
tum algorithm for unsupervised machine learning Li Y, Tian M, Liu G, Peng C, Jiao L (2020) Quantum optimization and
Kerenidis I, Landman J, Luongo A, Prakash A (2019) q-means: a quan- quantum learning: a survey. Ieee Access 8:23568–23593
tum algorithm for unsupervised machine learning. Adv Neural Inf Li W, Deng D-L (2024) Extracting reliable quantum outputs for noisy
Process Syst 32 devices. Nature Comput Sci, pp 1–2
Kerenidis I, Landman J, Mathur N (2021) Classical and quantum algo- Li W, Lu S, Deng D-L (2021) Quantum federated learning through blind
rithms for orthogonal neural networks quantum computing. Sci China Phys Mech Astron 64(10)
123
39 Page 54 of 55 Quantum Machine Intelligence (2025) 7:39
Li W, Lu Z, Deng D-L (2022) Quantum neural network classifiers: a tutorial. SciPost Physics Lecture Notes
Liu N, Rebentrost P (2018) Quantum machine learning for quantum anomaly detection. Phys Rev A 97(4):042315
Liu Y, Arunachalam S, Temme K (2021) A rigorous and robust quantum speed-up in supervised machine learning. Nat Phys 17(9):1013–1017
Lloyd S (2010) Quantum algorithm for solving linear systems of equations. In: APS March meeting abstracts, vol 2010, pp D4–002
Lloyd S, Weedbrook C (2018) Quantum generative adversarial learning. Phys Rev Lett 121(4):040502
Lloyd S, Mohseni M, Rebentrost P (2014) Quantum principal component analysis. Nat Phys 10(9):631–633
Lloyd S, Mohseni M, Rebentrost P (2013) Quantum algorithms for supervised and unsupervised machine learning
Lohani S, Lukens JM, Davis AA, Khannejad A, Regmi S, Jones DE, Glasser RT, Searles TA, Kirby BT (2023) Demonstration of machine-learning-enhanced Bayesian quantum state estimation. New J Phys 25(8):083009
Lu S, Braunstein SL (2014) Quantum decision tree classifier. Quantum Inf Process 13(3):757–770
Lusnig L, Sagingalieva A, Surmach M, Protasevich T, Michiu O, McLoughlin J, Mansell C, de’ Petris G, Bonazza D, Zanconati F, Melnikov A, Cavalli F (2024) Hybrid quantum image classification and federated learning for hepatic steatosis diagnosis. Diagnostics 14(5):558
Maheshwari D, Sierra-Sosa D, Garcia-Zapirain B (2021) Variational quantum classifier for binary classification: real vs synthetic dataset. IEEE Access 10:3705–3715
Mandadapu P (2024) Exploring quantum-enhanced machine learning for computer vision: applications and insights on noisy intermediate-scale quantum devices
Mari A, Bromley TR, Izaac J, Schuld M, Killoran N (2020) Transfer learning in hybrid classical-quantum neural networks. Quantum 4:340
Martín-Guerrero JD, Lamata L (2022) Quantum machine learning: a tutorial. Neurocomputing 470:457–461
McClean JR, Romero J, Babbush R, Aspuru-Guzik A (2016) The theory of variational hybrid quantum-classical algorithms. New J Phys 18(2):023023
Mesman K, Tang Y, Moller M, Chen B, Feld S (2024) NN-AE-VQE: neural network parameter prediction on autoencoded variational quantum eigensolvers
Meyer N, Berberich J, Mutschler C, Scherer DD (2024) Robustness and generalization in quantum reinforcement learning via Lipschitz regularization
Moreno Casares PA (2023) Fault tolerant quantum algorithms
Mott A, Job J, Vlimant J-R, Lidar D, Spiropulu M (2017) Solving a Higgs optimization problem with quantum annealing for machine learning. Nature 550(7676):375–379
Muqeet A, Ali S, Yue T, Arcaini P (2024) A machine learning-based error mitigation approach for reliable software development on IBM’s quantum computers
Muqeet A, Sartaj H, Arreieta A, Ali S, Arcaini P, Arratibel M, Gjøby JM, Veeraragavan NR, Nygård JF (2024) Assessing quantum extreme learning machines for software testing in practice
Murali P, Debroy DM, Brown KR, Martonosi M (2020) Architecting noisy intermediate-scale trapped ion quantum computers
Nagy DTR, Czabán C, Bakó B, Hága P, Kallus Z, Zimborás Z (2024) Hybrid quantum-classical reinforcement learning in latent observation spaces
Nguyen T, Sipola T, Hautamäki J (2024) Machine learning applications of quantum computing: a review. Eur Conf Cyber Warfare Secur 23(1):322–330
Nikolaeva AS, Kiktenko EO, Fedorov AK (2024) Universal quantum computing with qubits embedded in trapped-ion qudits. Phys Rev A 109(2):022615
O’Quinn W, Mao S (2020) Quantum machine learning: recent advances and outlook. IEEE Wirel Commun 27(3):126–131
Otgonbaatar S, Datcu M (2021) A quantum annealer for subset feature selection and the classification of hyperspectral images. IEEE J Sel Top Appl Earth Obs Remote Sens 14:7057–7065
Ouedrhiri O, Banouar O, El Hadaj S, Raghay S (2023) A new approach for quantum phase estimation based algorithms for machine learning. In: The proceedings of the international conference on smart city applications. Springer, pp 145–154
Page L, Brin S, Motwani R, Winograd T (1999) The PageRank citation ranking: bringing order to the web. Technical report, Stanford InfoLab
Pan X, Lu Z, Wang W, Hua Z, Xu Y, Li W, Cai W, Li X, Wang H, Song Y-P et al (2023) Deep quantum neural networks on a superconducting processor. Nat Commun 14(1):4006
Parthasarathy R, Bhowmik RT (2021) Quantum optical convolutional neural network: a novel image recognition framework for quantum computing. IEEE Access 9:103337–103346
Pei-Xin S, Wen-Jie J, Wei-Kang L, Zhi-De L, Dong-Ling D (2021) Adversarial learning in quantum artificial intelligence. Acta Physica Sinica 70(14)
Perelshtein M, Sagingalieva A, Pinto K, Shete V, Pakhomchik A, Melnikov A, Neukart F, Gesek G, Melnikov A, Vinokur V (2022) Practical application-specific advantage through hybrid quantum computing
Peruzzo A, McClean J, Shadbolt P, Yung M-H, Zhou X-Q, Love PJ, Aspuru-Guzik A, O’Brien JL (2014) A variational eigenvalue solver on a photonic quantum processor. Nat Commun 5(1)
Pham A, Vlasic A (2024) Investigate the performance of distribution loading with conditional quantum generative adversarial network algorithm on quantum hardware with error suppression
Placidi L, Hataya R, Mori T, Aoyama K, Morisaki H, Mitarai K, Fujii K (2023) MNISQ: a large-scale quantum circuit dataset for machine learning on/for quantum computers in the NISQ era
Poggiali A, Berti A, Bernasconi A, Del Corso GM, Guidotti R (2024) Quantum clustering with k-means: a hybrid approach. Theoret Comput Sci 992:114466
Rainjonneau S, Tokarev I, Iudin S, Rayaprolu S, Pinto K, Lemtiuzhnikova D, Koblan M, Barashov E, Kordzanganeh M, Pflitsch M, Melnikov A (2023) Quantum algorithms applied to satellite mission planning for earth observation. IEEE J Sel Top Appl Earth Obs Remote Sens 16:7062–7075
Rebentrost P, Mohseni M, Lloyd S (2014) Quantum support vector machine for big data classification. Phys Rev Lett 113(13)
Ren W, Li W, Xu S, Wang K, Jiang W, Jin F, Zhu X, Chen J, Song Z, Zhang P, Dong H, Zhang X, Deng J, Gao Y, Zhang C, Wu Y, Zhang B, Guo Q, Li H, Wang Z, Biamonte J, Song C, Deng D-L, Wang H (2022) Experimental quantum adversarial learning with programmable superconducting qubits. Nat Comput Sci 2(11):711–717
Roffe J (2019) Quantum error correction: an introductory guide. Contemp Phys 60(3):226–245
Romero J, Olson JP, Aspuru-Guzik A (2017) Quantum autoencoders for efficient compression of quantum data. Quantum Sci Technol 2(4):045001
Saggio V, Asenbeck BE, Hamann A, Strömberg T, Schiansky P, Dunjko V, Friis N, Harris NC, Hochberg M, Englund D, Wölk S, Briegel HJ, Walther P (2021) Experimental quantum speed-up in reinforcement learning agents. Nature 591(7849):229–233
Sagingalieva A, Komornyik S, Senokosov A, Joshi A, Sedykh A, Mansell C, Tsurkan O, Pinto K, Pflitsch M, Melnikov A (2023) Photovoltaic power forecasting using quantum machine learning
Sagingalieva A, Kordzanganeh M, Kenbayev N, Kosichkina D, Tomashuk T, Melnikov A (2023) Hybrid quantum neural network for drug response prediction. Cancers 15(10)
Sagingalieva A, Kordzanganeh M, Kurkin A, Melnikov A, Kuhmistrov D, Perelshtein M, Melnikov A, Skolik A, Dollen DV (2023) Hybrid quantum ResNet for car classification and its hyperparameter optimization. Quant Mach Intell 5(2)
Sagingalieva A, Kurkin A, Melnikov A, Kuhmistrov D, Perelshtein M, Melnikov A, Skolik A, Dollen DV (2022) Hyperparameter optimization of hybrid quantum neural networks for car classification. arXiv preprint arXiv:2205.04878
Sahu H, Gupta HP (2024) NAC-QFL: noise aware clustered quantum federated learning
Sakib S, Ahmed N, Kabir AJ, Ahmed H (2019) An overview of convolutional neural network: its architecture and applications
Salih Hasan BM, Abdulazeez AM (2021) A review of principal component analysis algorithm for dimensionality reduction. J Soft Comput Data Mining 2(1):20–30
Salo-Ahen OMH, Alanko I, Bhadane R, Bonvin AMJJ, Vargas Honorato R, Hossain S, Juffer AH, Kabedev A, Lahtela-Kakkonen M, Larsen AS et al (2020) Molecular dynamics simulations in drug discovery and pharmaceutical development. Processes 9(1):71
Schuld M, Killoran N (2019) Quantum machine learning in feature Hilbert spaces. Phys Rev Lett 122(4):040504
Schuld M, Sinayskiy I, Petruccione F (2015) Simulating a perceptron on a quantum computer. Phys Lett A 379(7):660–663
Schuld M, Bocharov A, Svore KM, Wiebe N (2020) Circuit-centric quantum classifiers. Phys Rev A 101(3):032308
Senokosov A, Sedykh A, Sagingalieva A, Kyriacou B, Melnikov A (2024) Quantum machine learning for image classification. Mach Learn: Sci Technol 5(1):015040
Sharma D, Sabale VB, Singh P, Kumar A (2024) Harnessing quantum support vector machines for cross-domain classification of quantum states
Sharma D, Singh P, Kumar A (2023) Quantum-inspired attribute selection algorithm: a fidelity-based quantum decision tree
Simeone O et al (2022) An introduction to quantum machine learning for engineers. Found Trends® Signal Process 16(1–2):1–223
Slabbert D, Petruccione F (2024) Hybrid quantum-classical feature extraction approach for image classification using autoencoders and quantum SVMs
Somogyi W, Pankovets E, Kuzmin V, Melnikov A (2024) Method for noise-induced regularization in quantum neural networks
Stein SA, L’Abbate R, Mu W, Liu Y, Baheri B, Mao Y, Qiang G, Li A, Fang B (2021) A hybrid system for learning classical data in quantum states. In: 2021 IEEE International Performance, Computing, and Communications Conference (IPCCC). IEEE, pp 1–7
Stokes J, Izaac J, Killoran N, Carleo G (2020) Quantum natural gradient. Quantum 4:269
Svore KM, Hastings MB, Freedman M (2013) Faster phase estimation. arXiv preprint arXiv:1304.0741
Tilly J, Chen H, Cao S, Picozzi D, Setia K, Li Y, Grant E, Wossnig L, Rungger I, Booth GH, Tennyson J (2022) The variational quantum eigensolver: a review of methods and best practices. Phys Rep 986:1–128
Torlai G, Melko RG (2020) Machine-learning quantum states in the NISQ era. Annu Rev Condens Matter Phys 11(1):325–344
Trugenberger CA (2002) Quantum pattern recognition. Quantum Inf Process 1:471–493
Tsang SL, West MT, Erfani SM, Usman M (2023) Hybrid quantum-classical generative adversarial network for high resolution image generation. IEEE Trans Quantum Eng
Verstraete F, Nishino T, Schollwöck U, Bañuls MC, Chan GK, Stoudenmire ME (2023) Density matrix renormalization group, 30 years on. Nat Rev Phys 5(5):273–276
Wang G (2017) Quantum algorithm for linear regression. Phys Rev A 96(1):012335
Wang Y, Liu J (2024) A comprehensive review of quantum machine learning: from NISQ to fault tolerance. Rep Prog Phys 87(11):116402
Wang S, Fontana E, Cerezo M, Sharma K, Sone A, Cincio L, Coles PJ (2021) Noise-induced barren plateaus in variational quantum algorithms. Nat Commun 12(1):6961
Wang X, Du Y, Luo Y, Tao D (2021) Towards understanding the power of quantum kernels in the NISQ era. Quantum 5:531
Wang P-H, Chen J-H, Yang Y-Y, Lee C, Tseng YJ (2023) Recent advances in quantum computing for drug discovery and development. IEEE Nanotechnol Mag 17(2):26–30
Wang S, Czarnik P, Arrasmith A, Cerezo M, Cincio L, Coles PJ (2024) Can error mitigation improve trainability of noisy variational quantum algorithms? Quantum 8:1287
Wang D, Chen J, Liang Z, Fu T, Liu X-Y (2024) Quantum-inspired reinforcement learning for synthesizable drug design
Wang S, Lin H (2023) Quantum reinforcement learning
Wang Z, van der Laan T, Usman M (2024) Quantum kernel principal components analysis for compact readout of chemiresistive sensor arrays
Wilson M, Vandal T, Hogg T, Rieffel EG (2021) Quantum-assisted associative adversarial network: applying quantum annealing in deep learning. Quant Mach Intell 3:1–14
Yamasaki H, Subramanian S, Sonoda S, Koashi M (2020) Learning with optimized random features: exponential speedup by quantum machine learning without sparsity and low-rank assumptions
Yulianti LP, Surendro K (2022) Implementation of quantum annealing: a systematic review. IEEE Access 10:73156–73177
Zhang YJ, Mu XD, Liu X-W, Wang XY, Zhang X, Li K, Wu TY, Zhao D, Dong C (2022) Applying the quantum approximate optimization algorithm to the minimum vertex cover problem. Appl Soft Comput 118:108554
Zhang Z, Gong W, Li W, Deng D-L (2024) Quantum-classical separations in shallow-circuit-based learning with and without noises
Zhou L, Wang S-T, Choi S, Pichler H, Lukin MD (2020) Quantum approximate optimization algorithm: performance, mechanism, and implementation on near-term devices. Phys Rev X 10(2):021067
Zoufal C, Lucchi A, Woerner S (2021) Variational quantum Boltzmann machines. Quantum Mach Intell 3:1–15
Zoufal C, Lucchi A, Woerner S (2019) Quantum generative adversarial networks for learning and loading random distributions. npj Quant Inf 5(1):103

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.