Papers by Luca Santinelli
A Sensitivity Analysis for Mixed Criticality: Trading Criticality with Computational Resource
2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA)
Statistical Power Estimation Dataset for External Validation GoF tests on EVT distribution
Data in Brief

IEEE Embedded Systems Letters
The Worst-Case Execution Time (WCET) is a critical parameter describing the largest value of a program's execution time. Even though such a parameter is very hard to determine, it is essential for guaranteeing that a real-time system meets its timing requirements. The complexity of modern hardware has increased the challenges of statically analysing the WCET and reduced the reliability of purely measurement-based WCET estimates. This has led to the emergence of probabilistic WCET (pWCET) analysis as a viable technique. The low probability of occurrence of large execution times of a program has motivated the use of rare-event theory such as Extreme Value Theory (EVT). As pWCET estimation based on EVT has matured as a discipline, a number of open challenges have become apparent when applying the existing approaches. Our paper enumerates key challenges while establishing a state of the art of EVT-based pWCET estimation methods.
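As a rough illustration of the EVT workflow the abstract surveys, the sketch below applies the block-maxima method with a simple method-of-moments Gumbel fit to synthetic execution times. The block size, the synthetic data, and the exceedance probability are arbitrary choices for illustration, not taken from the paper.

```python
import math
import random
import statistics

def gumbel_fit(maxima):
    """Method-of-moments fit of a Gumbel distribution to block maxima."""
    mean = statistics.mean(maxima)
    std = statistics.stdev(maxima)
    beta = std * math.sqrt(6) / math.pi      # scale parameter
    mu = mean - 0.5772156649 * beta          # location (Euler-Mascheroni constant)
    return mu, beta

def pwcet(mu, beta, p):
    """Execution-time bound exceeded with probability at most p
    (the (1 - p)-quantile of the fitted Gumbel distribution)."""
    return mu - beta * math.log(-math.log(1.0 - p))

random.seed(0)
# Synthetic execution-time measurements (cycles); block size 50.
times = [1000 + random.expovariate(1 / 40) for _ in range(5000)]
maxima = [max(times[i:i + 50]) for i in range(0, len(times), 50)]

mu, beta = gumbel_fit(maxima)
bound = pwcet(mu, beta, 1e-9)   # pWCET at exceedance probability 1e-9
```

Real EVT-based analyses must additionally check the assumptions the abstract alludes to (e.g. that measurements are identically distributed and the fit passes goodness-of-fit tests) before trusting such a bound.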

Proceedings of the 27th International Symposium on Rapid System Prototyping Shortening the Path from Specification to Prototype - RSP '16, 2016
Current processors have gone through multiple internal optimizations to speed up the average execution time, e.g. pipelines and branch prediction. Besides, internal communication mechanisms and shared resources like caches or buses have a significant impact on Worst-Case Execution Times (WCETs). Having an accurate estimate of a WCET is now a challenge. Probabilistic approaches provide a viable alternative to single-value WCET estimation: they consider the WCET as a probability distribution associated with uncertainty or risk. In this paper, we present synthetic benchmarks and associated analyses for several LEON3 configurations on FPGA targets. Benchmarking exposes the key parameters behind execution time variability, allowing for accurate probabilistic modeling of system dynamics. We analyze the impact of architecture-level configurations on average and worst-case behaviors.

Probabilistic component-based analysis for networks
ACM SIGBED Review, 2016
Time-constrained networks have so far demanded deterministic modeling and analysis in order to guarantee their worst-case behavior. With this work we apply both probabilistic modeling and probabilistic analyses to investigate such networks. The probabilistic framework we propose aims at guaranteeing confidence levels, in the form of probabilities, for the network timing constraints; the deterministic case remains a particular case, the worst case, within the probabilistic framework. We focus on probabilistic bounds for defining probabilistic interfaces to network components, and we study how probabilities propagate within networks by accounting for the dependences and interactions between network components. Finally, we define and apply probabilistic performance metrics for evaluating network behavior at different degrees of confidence.
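The propagation of probabilistic bounds between network components can be illustrated by convolving per-component delay distributions. The sketch below assumes independent components (the paper itself accounts for dependences) and uses invented per-hop distributions.

```python
from collections import defaultdict

def convolve(d1, d2):
    """Convolution of two independent discrete delay distributions,
    each given as a {delay: probability} map."""
    out = defaultdict(float)
    for a, pa in d1.items():
        for b, pb in d2.items():
            out[a + b] += pa * pb
    return dict(out)

def miss_probability(dist, deadline):
    """Probability that the end-to-end delay exceeds the deadline."""
    return sum(p for d, p in dist.items() if d > deadline)

# Hypothetical per-hop delay distributions (time units).
hop1 = {2: 0.8, 5: 0.2}
hop2 = {1: 0.5, 4: 0.5}
end_to_end = convolve(hop1, hop2)
```

With these numbers, `miss_probability(end_to_end, 7)` yields the confidence level attached to a deadline of 7 time units; the deterministic worst case (delay 9) is simply the largest point of the convolved distribution.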
Static probabilistic timing analysis in presence of faults
2016 11th IEEE Symposium on Industrial Embedded Systems (SIES), 2016
EDF schedulability test for the E-TDL time-triggered framework
2016 11th IEEE Symposium on Industrial Embedded Systems (SIES), 2016
EDF Schedulability Analysis on Mixed-Criticality Systems with Permitted Failure Probability
2015 IEEE 21st International Conference on Embedded and Real-Time Computing Systems and Applications, 2015
Periodic state-machine aware real-time analysis
2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), 2015

Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with the 2009 28th Chinese Control Conference, 2009
Power dissipation has constrained the performance boosting of modern computer systems in the past decade. Dynamic power management (DPM) has been implemented in many systems to change the system (or device) state dynamically and reduce power consumption. This paper explores how to efficiently and effectively reduce energy consumption when handling event streams with hard real-time or quality-of-service (QoS) guarantees. We adopt Real-Time Calculus to describe event arrivals by arrival curves in the interval domain. To reduce the implementation overhead, we propose a periodic scheme to determine when to turn the system (or device) on and off. The paper first presents two approaches to derive periodic schemes for systems with only one event stream: one derives an optimal solution for periodic power management with higher complexity, while the other derives approximate solutions with lower complexity. Then, extensions are proposed to deal with multiple event streams. Simulation results reveal the effectiveness of our approaches.
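The intuition behind turning a device on and off periodically is the classical break-even argument: sleeping only pays off when the idle interval exceeds the time needed to amortize the transition energy. The sketch below uses invented power and transition figures; it is not the paper's Real-Time Calculus derivation.

```python
def break_even_time(p_active_idle, p_sleep, e_transition):
    """Minimum idle interval for which entering the sleep state saves
    energy, given active-idle power, sleep power (watts) and the total
    energy cost of one sleep/wake transition (joules)."""
    return e_transition / (p_active_idle - p_sleep)

def periodic_energy(period, on_time, p_active_idle, p_sleep, e_transition):
    """Energy per period of a periodic on/off scheme: stay powered for
    on_time, sleep for the rest, and pay one transition per period."""
    sleep_time = period - on_time
    return on_time * p_active_idle + sleep_time * p_sleep + e_transition

# Hypothetical figures: 2.0 W active-idle, 0.1 W asleep, 5 J per transition.
t_be = break_even_time(2.0, 0.1, 5.0)                  # break-even interval
e_sleep = periodic_energy(10.0, 0.0, 2.0, 0.1, 5.0)    # sleep a whole 10 s period
e_stay = 10.0 * 2.0                                    # stay active-idle instead
```

Here the 10-second idle period is well beyond the break-even interval, so the periodic sleep scheme wins; the paper's contribution is choosing the period and on-time so that the arrival curves' backlog and deadline constraints still hold.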
A Probabilistic Calculus for Probabilistic Real-Time Systems
ACM Transactions on Embedded Computing Systems, 2015
Proceedings of the 9th IEEE International Symposium on Industrial Embedded Systems (SIES 2014), 2014
In a time-triggered system, activities like task releases, operational mode switches, sensor readings and actuations are all initiated at predetermined time instants. This paper proposes an extension of the TDL (Timing Definition Language) time-triggered compositional framework and presents, based on widely applied methods, a condition for its schedulability. The schedulability condition developed accounts for multiple concurrently executing modules, multiple operational modes and mode switches. This way, system schedulability can be guaranteed under any execution condition.
Modeling uncertainties in safety-critical real-time systems: A probabilistic component-based analysis
7th IEEE International Symposium on Industrial Embedded Systems (SIES'12), 2012

2010 15th Asia and South Pacific Design Automation Conference (ASP-DAC), 2010
Multi-Processor Systems-on-Chip (MPSoC) are an increasingly important design paradigm not only for mobile embedded systems but also for industrial applications such as automotive and avionic systems. Such systems typically execute multiple concurrent applications with different execution modes. Modes define differences in functionality and computational resource demands and are assigned an execution probability. We propose a dynamic mapping approach to maintain low power consumption over the system lifetime. Mapping templates for the different application modes and execution probabilities are computed offline and stored on the system. At runtime, a manager monitors the system and chooses an appropriate pre-computed template. Experiments show that our approach outperforms global static mapping approaches by up to 45%.
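The offline-template / runtime-manager scheme can be sketched as a simple lookup keyed by application mode. The modes, task-to-core mappings, and power figures below are hypothetical, invented purely to show the structure.

```python
# Hypothetical pre-computed mapping templates: per application mode,
# a task-to-core mapping and its estimated power draw (watts).
templates = {
    "navigation": ({"decode": 0, "track": 1}, 1.8),
    "standby":    ({"decode": 0, "track": 0}, 0.6),
}
# Hypothetical execution probabilities assigned to the modes.
mode_probability = {"navigation": 0.3, "standby": 0.7}

def select_template(mode):
    """Runtime manager: look up the pre-computed template for the
    currently observed application mode."""
    return templates[mode]

def expected_power():
    """Expected power draw over the mode execution probabilities,
    the quantity an offline template computation would minimize."""
    return sum(p * templates[m][1] for m, p in mode_probability.items())
```

The point of pre-computing the templates offline is that the runtime manager's work reduces to this constant-time lookup, keeping the online overhead negligible.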
Adaptive mechanisms for component-based real-time systems
2015 NASA/ESA Conference on Adaptive Hardware and Systems (AHS), 2015

Proceedings of the tenth ACM international conference on Embedded software - EMSOFT '10, 2010
Many real-time applications are designed to work in different operating modes, each characterized by different functionality and resource demands. With each mode change, the resource demands of applications change, and static resource reservations may no longer be feasible. Dynamic environments where applications may be added and removed online also need to adapt their resource reservations. In such scenarios, resource reconfigurations are needed to change the resource reservations at runtime and achieve better resource allocations. The scientific literature offers many results on how to find the optimal amount of resources needed by an application in its different operating modes, or how an application can perform safe mode transitions. However, the problem of resource reconfiguration for systems with reservations has not been addressed. A resource scheduler should be reconfigured online in such a way that it still guarantees a certain amount of resources during the reconfiguration process; otherwise applications may miss deadlines. The paper proposes a framework for scheduling real-time applications through scheduling servers that provide resource reservations, together with algorithms for changing the resource reservations online while still guaranteeing the feasibility of the system and the schedulability of applications. The framework analysis is integrated into a well-known modular performance analysis paradigm based on Real-Time Calculus. The results are illustrated with examples and a case study.
2010 8th IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops), 2010
Recently, researchers and engineers began considering the use of WSNs in time-sensitive applications. For effective real-time communications, it is important to solve the problem of contention for the communication medium by providing an efficient bandwidth allocation mechanism. In this paper we tackle the problem of performing timely detection of events by a WSN. We propose a real-time bandwidth allocation mechanism for IEEE 802.15.4 that maximizes event detection efficiency and reduces statistical uncertainty under network overload conditions. Online strategies complement offline guarantees to enhance the confidence level of the measurements.
Proceedings of the 20th International Conference on Real-Time and Network Systems - RTNS '12, 2012
Guaranteeing timing constraints is the main purpose of analyses for real-time systems. The satisfaction of these constraints may be verified with probabilistic methods (relying on statistical estimations of certain task parameters), offering both hard and soft guarantees. In this paper, we address the problem of sampling applied to the distributions of worst-case execution times. The pessimism of the presented sampling techniques is then evaluated at the level of response times.
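One safe way to sample a discrete worst-case execution time distribution down to fewer support points is to push the probability mass of each dropped value onto the next larger retained value, which keeps the reduced distribution pessimistic (stochastically larger than the original). The sketch below illustrates that general idea with invented numbers; it is not the paper's specific sampling operators.

```python
def safe_downsample(dist, keep):
    """Reduce a discrete execution-time distribution, given as a list of
    (value, probability) pairs, to the support points in `keep` (a subset
    of values that must include the maximum). The mass of each dropped
    value moves to the next larger kept value, so the result
    over-approximates the original distribution."""
    result = {v: 0.0 for v in keep}
    for value, prob in dist:
        target = min(v for v in keep if v >= value)
        result[target] += prob
    return sorted(result.items())

# Hypothetical execution-time distribution (cycles, probability).
dist = [(10, 0.4), (12, 0.3), (15, 0.2), (20, 0.1)]
reduced = safe_downsample(dist, [12, 20])
```

Because mass only ever moves toward larger values, any response-time bound computed from the reduced distribution upper-bounds the one computed from the original; the cost is exactly the pessimism the abstract proposes to quantify at the response-time level.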
Measurement-based approaches with extreme-value worst-case estimations are beginning to be proficiently considered for timing analyses. In this paper, we aim to formalize the applicability of extreme value theory to safe worst-case execution time estimation. We outline the complexities and challenges behind extreme value theory assumptions and parameter tuning. Accounting for these knowledge requirements, we are able to draw conclusions about the safety of probabilistic worst-case execution time estimates obtained from extreme value theory and execution time measurements.
Probabilistic Deadline Miss Analysis of Real-Time Systems Using Regenerative Transient Analysis
Proceedings of the 22nd International Conference on Real-Time Networks and Systems - RTNS '14, 2014