Unlocking Artificial Intelligence
Christopher Mutschler • Christian Münzenmayer
Norman Uhlmann • Alexander Martin
Editors
Unlocking Artificial Intelligence
From Theory to Applications
Editors
Christopher Mutschler
Division Positioning and Networks
Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS
Nürnberg, Germany

Christian Münzenmayer
Division Smart Sensing and Electronics
Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS
Erlangen, Germany

Norman Uhlmann
Division Development Center X-Ray Technology
Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS
Fürth, Germany

Alexander Martin
Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS
Nürnberg, Germany
ISBN 978-3-031-64831-1    ISBN 978-3-031-64832-8 (eBook)
https://doi.org/10.1007/978-3-031-64832-8
This work was supported by Fraunhofer Institut für Integrierte Schaltungen IIS
© The Editor(s) (if applicable) and The Author(s) 2024. This book is an open access publication.
Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0 International
License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution
and reproduction in any medium or format, as long as you give appropriate credit to the original author(s)
and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this book are included in the book’s Creative Commons
license, unless indicated otherwise in a credit line to the material. If material is not included in the
book’s Creative Commons license and your intended use is not permitted by statutory regulation or
exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the
relevant protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for
any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
If disposing of this product, please recycle the paper.
Preface
In recent years it has become apparent that the deep integration of artificial intelligence (AI) methods into products and services is essential for companies in Germany and worldwide to stay competitive. The use of AI allows large volumes of data to be analyzed, patterns and trends to be identified, and well-founded decisions to be made on an informed basis. It also enables the optimization of workflows, the automation of processes and the development of new services, thus creating potential for new business models and significant competitive advantages.
The use of AI in industry offers new opportunities to increase productivity, improve quality, reduce costs and generate new, innovative solutions. Customer satisfaction can also be increased through improved customer interaction and personalized offerings. AI thus offers significant potential in terms of quality, efficiency and competitiveness, not only for multinational enterprises but also for the small and medium-sized enterprises (SMEs) which form the industrial backbone of the European economy. On the one hand, the quality of products and services can be increased through the use of suitable tools and methods, which minimizes the susceptibility to errors, optimizes processes and thus increases customer satisfaction. The automation of recurring tasks frees up resources and can lead to increased efficiency and productivity. On the other hand, AI enables SMEs to better meet customer requirements, offer innovative solutions, stand out from the competition and remain competitive in an increasingly globalized and digitalized economy.
However, the use of AI in SMEs and industry also brings new requirements, such as building up specialist knowledge and mastering technological complexity. The rapid pace of development and the in-depth knowledge required to implement and support suitable methods and tools currently pose major challenges for SMEs in particular. To meet these challenges and support the adoption and integration of AI in industry and SMEs, structural measures are required. One suitable measure, for example, is the financing of transfer structures such as the ADA Lovelace Center. Such targeted development of transfer structures facilitates the transfer of knowledge between research institutions and companies and provides industry and
SMEs with low-threshold access to specialist knowledge and resources in order to
exploit the full potential of these technologies.
The ADA Lovelace Center is a pioneering competence center for AI in Bavaria, the
establishment of which was funded by the Bavarian State Ministry of Economic Af-
fairs, Regional Development and Energy. A central focus of the ADA Lovelace Center
is on the development of AI-based solutions for industrial applications in sectors of
outstanding importance for Bavaria. These include transportation and traffic, produc-
tion and Industry 4.0, rail transport, financial services and insurance, logistics and
healthcare as well as sports. Concepts and solutions for specific issues are researched
and implemented in close cooperation with the application partners. A wide range of AI competencies is applied and further developed to promote the targeted and sustainable build-up of AI expertise within the partner companies. In addition to scientific research,
particular attention is paid to the promotion of young scientists, who are integrated
into industrial research at an early stage. The ADA Lovelace Center bundles and
expands the AI expertise and infrastructure of the Friedrich-Alexander-University
Erlangen-Nürnberg, Ludwig-Maximilians-University Munich, the Fraunhofer Insti-
tute for Integrated Circuits IIS, the Fraunhofer Institute for Integrated Systems and
Device Technology IISB and the Fraunhofer Institute for Cognitive Systems IKS.
Thus, the ADA Lovelace Center has significant expertise across all relevant AI methods and processes.
The center has created an internationally visible network for the Bavarian econ-
omy, which is dedicated to the fundamental issues of data collection and analysis
using AI methods, taking into account data protection and data security. The ADA
Lovelace Center supports companies in the Bavarian economy by researching, devel-
oping and implementing concrete solutions for issues in the field of AI and enables
them to transform their business processes and develop new data-driven business
models. This book presents a selection of AI application areas, methodologies and research topics and explains how those methods and processes can be used successfully in practice.
Nuremberg, Fürth, Erlangen The Editors
Acknowledgements
First of all, the editors want to express their greatest thanks to Nadine Chrobok-Pensky and her team for their excellent, professional and also human-focused project management, motivation, organization, and friendly reminders, delivered with a huge amount of commitment and patience. Your work and support of the whole team throughout the project were outstanding and highly appreciated.
We would like to express our sincere gratitude to all the authors for their valuable contributions to this book. Your expertise and dedication have filled the book with high-value content in the field of artificial intelligence and its applications, making it a valuable resource for readers in this field. Your thorough research before and within our joint research project, your insightful analysis, and your clear writing style have undoubtedly played a crucial role in the success of this book. Your commitment to delivering high-quality content is commendable and greatly appreciated. We, the editors, thank you for your outstanding work. Your contributions will undoubtedly make a significant impact on readers and researchers in the field.
On behalf of the entire team and all authors, we would like to express our profound thanks to the Bavarian Ministry of Economic Affairs, Regional Development and Energy for your generous support of the ADA Lovelace Center project. Without this financial funding, it would not have been possible to execute this project successfully. Your support has allowed us to conduct important research and gain valuable insights. Through your funding, we were able to provide the resources and materials that were essential to our work for the scientific community and for local, national and international industry. We thank you for your trust in our project and your support across all its stages. Your financial support has not only contributed to the realization of this project but will also have a lasting impact on research in this field.
It is also important to mention that such a piece of work cannot be completed in such excellent quality without the support and advice of a highly regarded advisory board of industry and scientific experts, who were always reachable and willing to give advice, support and direction to research and development. The complete ADA team says thank you for your contribution, enthusiasm and work in all phases of the project.
The editors also want to say thank you to all the people "behind the scenes" for management, calculations, administrative tasks, the organization of meetings, and for food, rooms, projectors, hot and cold drinks, good words of support, and flexible and agile management. Thank you for being there for the complete team. Your work was highly appreciated.
The entire team would like to express our sincere appreciation for the invaluable collaboration and support of our cooperation partners. Your expertise and dedication have been instrumental in the success of this endeavor. Your commitment to our shared goals and your willingness to work together have greatly contributed to the progress and achievements of the project. We are truly grateful for the opportunity to collaborate with such a dedicated group of cooperation partners.
We also want to thank Ralf Gerstner from Springer Verlag for his patience and continuous support, as well as the reviewers and proofreaders who helped us improve the book.
Last but not least, the complete team wants to say thank you to all the coffee machines around us, which were able at all times, day and night, to provide everybody in need with excellent coffee to keep the work and innovation up and running.
Nuremberg, Germany Christopher Mutschler
Erlangen, Germany Christian Münzenmayer
Fürth, Germany Norman Uhlmann
Nuremberg, Germany Alexander Martin
February 2024
Contents
Part I Theory
1 Automated Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Florian Karl, Janek Thomas, Jannes Elstner, Ralf Gross, Bernd Bischl
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Components of AutoML Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Search Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 Ensembling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.4 Feature Selection and Engineering . . . . . . . . . . . . . . . . . . . 9
1.2.5 Meta-Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2.6 A Brief Note on AutoML in the Wild . . . . . . . . . . . . . . . . . 11
1.3 Selected Topics in AutoML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.1 AutoML for Time Series Data . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.2 Unsupervised AutoML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3.3 AutoML Beyond a Single Objective . . . . . . . . . . . . . . . . . . 14
1.3.4 Human-In-The-Loop AutoML . . . . . . . . . . . . . . . . . . . . . . . 15
1.4 Neural Architecture Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4.1 A Brief Overview of the Current State of NAS . . . . . . . . . 16
1.4.2 Hardware-aware NAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2 Sequence-based Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Christoffer Loeffler, Felix Ott, Jonathan Ott, Maximilian P. Oppelt,
Tobias Feigl
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2 Time Series Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.1 Time Series Data Streams . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.2 Pre-Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2.3 Predictive Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2.4 Post-Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.3 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.3.1 Temporal Convolutional Networks . . . . . . . . . . . . . . . . . . . 33
2.3.2 Recurrent Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.3.3 Transformer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4 Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4.1 Time Series Similarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4.2 Transfer Learning & Domain Adaptation . . . . . . . . . . . . . . 40
2.4.3 Model Interpretability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.5 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3 Learning from Experience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Christopher Mutschler, Georgios Kontes, Sebastian Rietsch
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.2 Concepts of Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.2.1 Markov Decision Processes (MDPs) . . . . . . . . . . . . . . . . . . 50
3.2.2 Dynamic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.2.3 Model-free Reinforcement Learning . . . . . . . . . . . . . . . . . . 52
3.2.4 General Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.3 Learning purely through Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.3.1 Exploration-Exploitation . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.4 Learning with Data or Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.4.1 Model-based RL with continuous Actions . . . . . . . . . . . . . 57
3.4.2 MBRL with Discrete Actions: Monte Carlo Tree Search . 59
3.4.3 Offline Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . 60
3.4.4 Hierarchical RL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.5 Challenges for Agent Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.5.1 Safety through Policy Constraints . . . . . . . . . . . . . . . . . . . . 65
3.5.2 Generalizability of Policies . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.5.3 Lack of a Reward Function . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.6 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4 Learning with Limited Labelled Data . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Christoffer Loeffler, Rasmus Hvingelby, Jann Goschenhofer
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.2 Semi-Supervised Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.2.1 Classical Semi-Supervised Learning . . . . . . . . . . . . . . . . . . 80
4.2.2 Deep Semi-Supervised Learning . . . . . . . . . . . . . . . . . . . . . 80
4.2.3 Self-Training and Consistency Regularization . . . . . . . . . . 83
4.3 Active Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.3.1 Deep Active Learning (DAL) . . . . . . . . . . . . . . . . . . . . . . . . 85
4.3.2 Uncertainty Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.3.3 Diversity Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.3.4 Balanced Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.4 Active Semi-Supervised Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.4.1 How can SSL and AL Work Together? . . . . . . . . . . . . . . . . 88
4.4.2 Are SSL and AL Always Mutually Beneficial? . . . . . . . . . 89
4.5 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5 The Role of Uncertainty Quantification for Trustworthy AI . . . . . . . . 95
Jessica Deuschel, Andreas Foltyn, Karsten Roscher, Stephan Scheele†
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.2 Towards Trustworthy AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.2.1 The EU AI Act . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.2.2 From Uncertainty to Trustworthy AI . . . . . . . . . . . . . . . . . . 98
5.3 Uncertainty Quantification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.3.1 Sources of Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5.3.2 Methods for Quantification of Uncertainty and Calibration . . 103
5.3.3 Evaluation Metrics for Uncertainty Estimation . . . . . . . . . 107
5.4 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6 Process-aware Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Christian M.M. Frey, Simon Rauch, Oliver Stritzel, Moike Buck
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.2 Overview of Process Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.2.1 Process Mining - Basic Concept . . . . . . . . . . . . . . . . . . . . . 120
6.2.2 Process Mining - Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
6.2.3 Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.2.4 Four Quality Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.2.5 Types of Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.3 Process-Awareness from Theory to Practice . . . . . . . . . . . . . . . . . . . 126
6.3.1 Predictive Analysis in Process Mining . . . . . . . . . . . . . . . . 127
6.3.2 Predictive Process Mining with Bayesian Statistics . . . . . . 128
6.3.3 Process AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.4 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
7 Combinatorial Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Jan Krause, Tobias Kuen, Christopher Scholl
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
7.2 Solving Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7.2.1 Heuristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7.2.2 Exact Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.3 Modeling Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.3.1 Graph Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.3.2 Mixed Integer Programs and Connections to Machine
Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
7.3.3 Pooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
7.4 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
8 Acquisition of Semantics for Machine-Learning and Deep-Learning
based Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Thomas Wittenberg, Thomas Lang, Thomas Eixelberger, Roland Gruber
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.2 Approaches to Acquire Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
8.2.1 Manual Annotation and Labeling . . . . . . . . . . . . . . . . . . . . 156
8.2.2 Data Augmentation Techniques . . . . . . . . . . . . . . . . . . . . . . 157
8.2.3 Simulation and Generation . . . . . . . . . . . . . . . . . . . . . . . . . . 159
8.2.4 High-End Reference Sensors . . . . . . . . . . . . . . . . . . . . . . . . 163
8.2.5 Active Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
8.2.6 Knowledge Modeling Using Semantic Networks . . . . . . . . 166
8.2.7 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
8.3 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Part II Applications
9 Assured Resilience in Autonomous Systems – Machine Learning
Methods for Reliable Perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Gereon Weiss, Jens Gansloser, Adrian Schwaiger, Maximilian
Schwaiger
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
9.1.1 The Perception Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
9.2 Approaches to reliable perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
9.2.1 Choice of Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
9.2.2 Unexpected Behavior of ML Methods . . . . . . . . . . . . . . . . 181
9.2.3 Reliable Object Detection for Autonomous Driving . . . . . 182
9.2.4 Uncertainty Quantification for Image Classification . . . . . 183
9.2.5 Ensemble Distribution Distillation for 2D Object
Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
9.2.6 Robust Object Detection in Simulated Driving
Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
9.2.7 Out-of-Distribution Detection . . . . . . . . . . . . . . . . . . . . . . . 191
9.3 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
10 Data-driven Wireless Positioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Maximilian Stahlke, Tobias Feigl, Sebastian Kram, Jonathan Ott, Jochen
Seitz, Christopher Mutschler
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
10.2 AI-Assisted Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
10.3 Direct Positioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
10.3.1 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
10.3.2 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
10.3.3 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
10.3.4 Hybrid Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
10.3.5 Zone Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
10.3.6 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
10.3.7 Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
10.3.8 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
10.4 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
11 Comprehensible AI for Multimodal State Detection . . . . . . . . . . . . . . 215
Andreas Foltyn, Maximilian P. Oppelt
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
11.1.1 Cognitive Load Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . 216
11.1.2 Challenges in Affective Computing . . . . . . . . . . . . . . . . . . . 217
11.2 Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
11.2.1 Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
11.2.2 Data Preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
11.3 Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
11.3.1 In-Domain Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
11.3.2 Cross-Domain Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . 221
11.3.3 Interpretability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
11.3.4 Improving ECG Representation Learning . . . . . . . . . . . . . 223
11.3.5 Deployment and Application . . . . . . . . . . . . . . . . . . . . . . . . 225
11.4 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
12 Robust and Adaptive AI for Digital Pathology . . . . . . . . . . . . . . . . . . . 229
Michaela Benz, Petr Kuritcyn, Rosalie Kletzander, Volker Bruns
12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
12.2 Applications: Tumor Detection and Tumor-Stroma Assessment . . . 230
12.2.1 Generation of Labeled Data Sets . . . . . . . . . . . . . . . . . . . . . 232
12.2.2 Data Sets for Tumor Detection . . . . . . . . . . . . . . . . . . . . . . . 234
12.2.3 Data Set for Tumor-Stroma Assessment . . . . . . . . . . . . . . . 236
12.3 Prototypical Few-Shot Classification . . . . . . . . . . . . . . . . . . . . . . . . . 237
12.3.1 Robustness through Data Augmentation . . . . . . . . . . . . . . . 238
12.3.2 Out-of-Distribution Detection . . . . . . . . . . . . . . . . . . . . . . . 242
12.3.3 Adaptation to Urothelial Tumor Detection . . . . . . . . . . . . . 242
12.3.4 Interactive AI Authoring with MIKAIA® . . . . . . . . . . . . . . 243
12.4 Prototypical Few-Shot Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . 245
12.4.1 Tumor-Stroma Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . 246
12.5 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
13 Safe and Reliable AI for Autonomous Systems . . . . . . . . . . . . . . . . . . . 251
Axel Plinge, Georgios Kontes, Sebastian Rietsch, Christopher Mutschler
13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
13.1.1 Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
13.1.2 Reinforcement Learning for Autonomous Driving . . . . . . 253
13.2 Generating Environments with Driver Dojo . . . . . . . . . . . . . . . . . . . 255
13.2.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
13.3 Training safe Policies with SafeDQN . . . . . . . . . . . . . . . . . . . . . . . . . 258
13.3.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
13.3.2 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
13.4 Extracting tree policies with SafeVIPER . . . . . . . . . . . . . . . . . . . . . . 260
13.4.1 Training the Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
13.4.2 Verification of Decision Trees . . . . . . . . . . . . . . . . . . . . . . . 261
13.4.3 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
13.5 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
14 AI for Stability Optimization in Low Voltage Direct Current
Microgrids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Georg Roeder, Raffael Schwanninger, Peter Wienzek, Moritz Kerscher,
Bernd Wunder, Martin Schellenberger
14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
14.2 Low Voltage DC Microgrids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
14.2.1 Control of Low Voltage DC Microgrids . . . . . . . . . . . . . . . 270
14.2.2 Stability of Low Voltage DC Microgrids . . . . . . . . . . . . . . 272
14.3 AI-based Stability Optimization for Low Voltage DC Microgrids . 274
14.3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
14.3.2 Digital Network Twin and Generation of Labels to
Describe the Stability State . . . . . . . . . . . . . . . . . . . . . . . . . 275
14.3.3 LVDC Microgrid Surrogate Model Applying Random
Forests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
14.3.4 Stability Optimization Applying Decision Trees . . . . . . . . 277
14.4 Implementation and Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
14.4.1 Measurement of Grid Stability . . . . . . . . . . . . . . . . . . . . . . . 278
14.4.2 Experimental Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
14.5 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
15 Self-Optimization in Adaptive Logistics Networks . . . . . . . . . . . . . . . . 287
Julius Mehringer, Ursula Neumann, Friedrich Wagner, Christopher
Scholl
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
15.2 A Brief Overview of Relevant Literature on Predicting the
All-Time Buy Quantity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
15.3 Predicting the All-Time Buy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
15.4 A Probabilistic Hierarchical Growth Curve model . . . . . . . . . . . . . . 291
15.5 Determining the Optimal Order Policy . . . . . . . . . . . . . . . . . . . . . . . . 294
15.5.1 Modeling Non-Linear Costs . . . . . . . . . . . . . . . . . . . . . . . . . 295
15.5.2 Robust Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
15.6 Pooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
15.7 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
16 Optimization of Underground Train Systems . . . . . . . . . . . . . . . . . . . . 303
Lukas Hager, Tobias Kuen
16.1 Optimization of DC Railway Power Systems . . . . . . . . . . . . . . . . . . . 304
16.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
16.1.2 Optimal Power Flow and mathematical MIQCQP model . 304
16.1.3 Case Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
16.2 Energy-Efficient Timetabling applied to a German Underground
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
16.2.1 Industrial Challenge and Motivation . . . . . . . . . . . . . . . . . . 313
16.2.2 Mathematical Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
16.2.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
16.3 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
17 AI-assisted Condition Monitoring and Failure Analysis for
Industrial Wireless Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Ulf Wetzker, Anna Richter, Vineeta Jain, Jakob Wicht
17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
17.2 Verifying Data Source Accuracy in Protocol Analysis . . . . . . . . . . . 323
17.2.1 System Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
17.2.2 Autoencoder Architecture for Anomaly Detection . . . . . . . 326
17.2.3 Dataset and Performance Evaluation . . . . . . . . . . . . . . . . . . 327
17.3 Automated and User-friendly Spectral Analysis . . . . . . . . . . . . . . . . 328
17.3.1 ML-based Spectrum Analysis . . . . . . . . . . . . . . . . . . . . . . . 329
17.3.2 Generation of Training and Validation Data . . . . . . . . . . . . 329
17.3.3 Model Validation Using Artificial and Measurement Data 330
17.3.4 System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
17.4 Cross-layer Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
17.4.1 Variable Adaptive Dynamic Time Warping: A Novel
Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
17.4.2 Experimental Results and Discussion . . . . . . . . . . . . . . . . . 334
17.4.3 Implications for Research and Beyond . . . . . . . . . . . . . . . . 336
17.5 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
18 XXL-CT Dataset Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Roland Gruber, Steffen Rüger, Moritz Ottenweller, Norman Uhlmann,
Stefan Gerth
18.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
18.2 XXL-CT Dataset Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
18.3 Annotation Pipelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
18.3.1 3D Instance Labelling Pipeline . . . . . . . . . . . . . . . . . . . . . . 344
18.3.2 3D Semantic Labelling Pipeline . . . . . . . . . . . . . . . . . . . . . 345
18.4 Training Infrastructure and Segmentation Results . . . . . . . . . . . . . . . 347
18.4.1 Instance Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
18.4.2 Semantic Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
18.5 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
19 Energy-Efficient AI on the Edge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Nicolas Witt, Mark Deutel, Jakob Schubert, Christopher Sobel, Philipp
Woller
19.1 AI on the Edge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
19.2 Energy-Efficient Classical Machine Learning . . . . . . . . . . . . . . . . . . 362
19.2.1 Classification of Time Series Data . . . . . . . . . . . . . . . . . . . 362
19.2.2 Multi-Objective Optimization . . . . . . . . . . . . . . . . . . . . . . . 363
19.2.3 Energy Prediction for Classical Machine Learning . . . . . . 364
19.2.4 EA-AutoML Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
19.2.5 Application Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
19.3 Energy-Efficient Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
19.3.1 Deep Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
19.3.2 Efficient Design Space Exploration . . . . . . . . . . . . . . . . . . . 373
19.3.3 Benchmarking Edge AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
19.4 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378