29th International Conference on Real-Time Networks and Systems
As time-sensitive applications are deployed spanning multiple edge clouds, delivering consistent and scalable latency performance across different virtualized hosts becomes increasingly challenging. In contrast to traditional real-time systems, which require deadline guarantees for all jobs, the latency service-level objectives of cloud applications are usually defined in terms of tail latency, i.e., the latency of a certain percentage of the jobs should be below a given threshold. This means that neither dedicating entire physical CPU cores, nor combining virtualization with deadline-based techniques such as compositional real-time scheduling, can meet the needs of these applications in a resource-efficient manner. To address this limitation, and to simplify the management of edge clouds for latency-sensitive applications, we introduce virtualization-agnostic latency (VAL) as an essential property for maintaining consistent tail-latency assurances across different virtualized hosts. VAL requires that an application experience similar latency distributions on a shared host as on a dedicated one. Towards achieving VAL in edge clouds, this paper presents a virtualization-agnostic scheduling (VAS) framework for time-sensitive applications sharing CPUs with other applications. We show both theoretically and experimentally that VAS can effectively deliver VAL on shared hosts. For periodic and sporadic tasks, we establish theoretical guarantees that VAS can achieve the same task schedule on a shared CPU as on a full CPU dedicated to time-sensitive services. Moreover, this can be achieved by allocating the minimal CPU bandwidth to time-sensitive services, thereby avoiding wasted CPU resources.
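The tail-latency SLO described in the abstract (a given percentile of job latencies must fall below a threshold) can be checked mechanically. The sketch below is illustrative only and not part of the paper; the function name `meets_slo` and the nearest-rank percentile method are assumptions for the example.

```python
def meets_slo(latencies_ms, percentile, threshold_ms):
    """Check a tail-latency SLO: the given percentile of observed
    job latencies must be at or below the threshold (nearest-rank)."""
    ordered = sorted(latencies_ms)
    # Nearest-rank index of the percentile-th value.
    rank = max(0, int(len(ordered) * percentile / 100.0 + 0.5) - 1)
    return ordered[rank] <= threshold_ms

# Example: 1% of 1000 jobs are slow; a p99 <= 20 ms SLO still holds.
samples = [5.0] * 990 + [30.0] * 10
print(meets_slo(samples, 99, 20))   # -> True
```

Note that, unlike a hard-deadline guarantee, the check passes even though some jobs exceed the threshold, which is exactly the property that makes per-job deadline techniques overly conservative for this class of applications.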
In model-based development, executable software (e.g., C or Java code) can be generated from a high-level model using a code generator. However, the execution of the generated software on a target platform remains a challenge due to a mismatch between the communication semantics assumed by the model and those of the platform-dependent software (e.g., sampling/actuation routines). This paper proposes an input/output (I/O) interface module that bridges this semantic gap by means of buffers and interface policies, which explicitly capture the information required to adapt the model's communication semantics to that of the platform. We present a framework that can be used to systematically synthesize, directly from the model, the I/O interfaces and accompanying APIs that the generated software and the platform-dependent software need to communicate with one another. Our interface policies can also encode relaxations of a model semantics that may not be implementable, thus making derivations of the implemented systems from the model traceable. We illustrate the applicability and the benefits of our framework with a case study of an infusion pump.
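To make the idea of a buffer-plus-policy I/O interface concrete, here is a minimal sketch of what such a module might look like. It is not the paper's framework: the class name `IOInterface` and the policy names (`"freshest"`, `"fifo"`) are hypothetical, chosen only to show how a policy can adapt the platform's sampling semantics to the model's read semantics.

```python
from collections import deque

class IOInterface:
    """Hypothetical I/O interface bridging platform-side writes and
    model-side reads through a bounded buffer and a read policy."""

    def __init__(self, policy="freshest", capacity=8):
        self.policy = policy
        self.buf = deque(maxlen=capacity)  # oldest samples dropped when full

    def platform_write(self, sample):
        # Called by the platform-dependent sampling routine.
        self.buf.append(sample)

    def model_read(self):
        # Called by the generated code; the policy encodes the
        # communication semantics the model expects.
        if not self.buf:
            return None
        if self.policy == "freshest":
            return self.buf[-1]       # latest sample; buffer is retained
        return self.buf.popleft()     # FIFO: consume in arrival order

iface = IOInterface(policy="freshest")
for v in (1, 2, 3):
    iface.platform_write(v)
print(iface.model_read())  # -> 3
```

Under a `"freshest"` policy the model always sees the most recent sample (a sampling semantics), while `"fifo"` preserves every value in order (a lossless message semantics); choosing between them is precisely the kind of decision an explicit interface policy makes traceable.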
Papers by Linh Phan