eSulabSolutions

WEPPE 2026

WEPPE 2026 Program


The 6th Workshop on Education and Practice of Performance Engineering (WEPPE)
May 4th, 2026, Florence, Italy


WEPPE is Co-located with the 17th ACM/SPEC International Conference on Performance Engineering (ICPE 2026)


WEPPE 2026 Program Outline


8:50-9:00 - Opening Remarks - Alberto Avritzer, eSulabSolutions, Inc., USA


9:00-10:00 - Keynote - The role of C++ in education - and the role of education in C++. Bjarne Stroustrup, Columbia University, USA


10:00-10:25 - Performance Engineering Transformation. Alex Podelko, AWS, USA


10:25-10:45 - Coffee Break


10:45-11:10 - That’s a Very Sharp Sword You Have There – Hope You Know How to Use It. Dave Daly, Independent, USA


11:10-11:35 - Teaching performance evaluation topics in the era of LLMs. Andrea Marin, Ca' Foscari University of Venice, Italy


11:35-12:05 - Fair and reliable benchmarking of machine learning and AI models. Samuel Kounev, University of Würzburg, Germany


12:05-12:25 - Teaching Performance Engineering with Agentic AI, Lessons Learned from a Hands-On Educational Experience. Andrea Janes, Free University of Bozen/Bolzano, Italy


12:30-14:00 - Lunch Break


14:00-14:25 - A Performance Engineering Troublesome Journey from Academia to Industry. Tommaso Cucinotta, Scuola Superiore Sant'Anna, Italy


14:25-14:50 - Can we teach performance engineering using model interchange formats and AI? Cati Llado, University of the Balearic Islands, Spain


14:50-15:10 - Enabling Interactive Visualization for Near-Real-Time Performance Analysis of MPI Applications. Anna-Lena Roth, Fulda University of Applied Sciences, Germany


15:10-15:30 - Detecting Past and Future Change Points in Performance Data for Education and Practice. Arvid Trubin & Anfisa Trubina, TruTech Development, USA


15:30-15:40 - Wrap up and Conclusion - Alberto Avritzer


WEPPE 2026 Program Details


Overview

The 6th Workshop on Education and Practice of Performance Engineering (WEPPE) is co-located with ICPE 2026, the 17th ACM/SPEC International Conference on Performance Engineering: a joint meeting of WOSP/SIPEW sponsored by ACM SIGMETRICS and ACM SIGSOFT in cooperation with SPEC.


Scope and Topics

The goal of the Workshop on Education and Practice of Performance Engineering (WEPPE) is to bring together University researchers and Industry Performance Engineers to study the gap between performance engineering education and the needs of performance engineering practice. We are interested in creating opportunities to share experiences between researchers who are actively teaching performance engineering and Performance Engineers who are applying Performance Engineering techniques in industry.


Detailed Agenda


Keynote 


Bjarne Stroustrup, Columbia University, USA

Title: The role of C++ in education - and the role of education in C++

Abstract:
First, I set the context by presenting the aims of C++, basically to support development of quality software. Next, I argue for using different approaches to what to teach and how to teach it dependent on the backgrounds and likely aims of the students. In this context, I focus on people aiming to develop quality foundational software and warn against teaching programming languages rather than software development (using some programming languages). We cannot do this efficiently without tool support, so I present an approach based on coding guidelines and enforcements. For all but the most extreme performance and latency requirements, the key to good performance is to directly represent the concepts of our solution, not complex hand-optimizations. Finally, I give a list of mostly unattainable wishes: If I ran the zoo.


Bio: Bjarne Stroustrup is the designer and original implementer of C++ as well as the author of The C++ Programming Language (4th Edition) and A Tour of C++ (3rd edition), Programming: Principles and Practice using C++ (3rd Edition), and many popular and academic publications. He is a professor of Computer Science at Columbia University in New York City. He did much of his most important work at Bell Labs.

Dr. Stroustrup is a member of the US National Academy of Engineering, and an IEEE, ACM, and CHM fellow. He received the 2018 Charles Stark Draper Prize, the IEEE Computer Society's 2018 Computer Pioneer Award, and the 2017 IET Faraday Medal. His research interests include distributed systems, design, programming techniques, software development tools, and programming languages. To make C++ a stable and up-to-date base for real-world software development, he has been a leading figure in the ISO C++ standards effort for 35 years. He holds a master’s in Mathematics from Aarhus University, where he is an honorary professor in the Computer Science Department, and a PhD in Computer Science from Cambridge University, where he is an honorary fellow of Churchill College.


Alex Podelko, AWS, USA

Title: Performance Engineering Transformation

Abstract: Performance engineering (PE) is undergoing a fundamental transformation driven by major industry trends including cloud computing, agile development, DevOps, and most recently, Artificial Intelligence (AI). This evolution is occurring along multiple, sometimes conflicting trajectories, raising questions about the future of PE as a distinct discipline. As system complexity and scale continue to increase - with AI contributing to this growth - performance considerations are receiving increased attention across the software development lifecycle.

PE is experiencing further integration with development (“Shift Left”) and operations (“Shift Right”). Emerging disciplines, such as Site Reliability Engineering (SRE) and FinOps, overlap substantially with traditional PE domains. AI represents the latest and potentially most transformative trend, poised to fundamentally reshape both systems behavior and performance engineering methodologies. While early indicators of AI's impact are visible, the full extent and specific implications for the discipline remain to be determined.

Bio: Alex Podelko is a senior performance engineer at Amazon Web Services (AWS), responsible for performance testing and optimization of Amazon Aurora. He has specialized in performance since 1997, working in different performance-related roles for MongoDB, Oracle/Hyperion, Aetna, and Intel before joining AWS. Alex periodically talks and writes about performance-related topics, advocating tearing down silo walls between different groups of performance professionals. He currently serves as a member of the SPEC Research Group Steering Committee.


David Daly, Independent, USA

Title: That’s a Very Sharp Sword You Have There – Hope You Know How to Use It

Abstract: Generative AI and LLMs are great tools. Like most tools, you can do a lot of damage with them if you don’t know what you are doing. Most people don’t know what they are doing when it comes to performance engineering.

Performance engineering is built on rigorous thinking and performance engineers interact with essentially all other fields of computing. That combination makes the performance engineering community particularly suited to benefit from Generative AI, while also giving our community a front row seat to many badly misguided uses. Let’s make the most of our opportunity, while limiting the damage from others.

Bio: David has worked on computer (hardware and software) performance across his career, from analytical models, to simulations, performance tests, and performance investigation. He has worked extensively on software performance testing, at various times focused on complete end-to-end automation, control of test noise and variability, working around test noise, and building processes to make sure that issues identified by the infrastructure were properly recognized and addressed. He has also built out performance and operational monitoring for production services, and built analytical and simulation models for IBM POWER systems. David is an active participant in the performance engineering community, including facilitating collaborations between industry and academia. He co-organized the first ever ICPE Data Challenge, open sourcing a formerly internal and proprietary data set of performance test results. The work in the challenge and the work that followed continued to advance the state of the art in performance regression detection.


Andrea Marin, Ca' Foscari University of Venice, Italy

Title: Teaching performance evaluation topics in the era of LLMs

Abstract: The rapid adoption of large language models (LLMs) is reshaping how students learn, practice, and demonstrate knowledge, raising new challenges for teaching performance evaluation courses. This talk is aimed at opening a discussion on how performance evaluation can be effectively taught in the era of LLMs, not by avoiding these tools, but by integrating them as objects of critical analysis and pedagogical opportunity. We discuss concrete strategies for designing assignments, rubrics, and learning activities that help students understand what meaningful evaluation looks like when AI systems are part of the students' and teachers' workflows. The talk is based on the experience developed in teaching the course "Software Performance and Scalability" at Ca' Foscari University of Venice, Italy.

Bio: Andrea Marin received his Ph.D. in Computer Science from the University of Venice in 2009. He is Professor of Computer Science at the same University. He is the (co-)author of over 100 technical papers in refereed international journals and conference proceedings. His current research focuses on the performance and reliability evaluation of computer systems using stochastic modeling techniques, specifically queueing models and product-form models. Since 2018 he has been teaching the course of "Software Performance and Scalability" at the University Ca' Foscari of Venice, Italy.


Samuel Kounev, University of Würzburg, Germany

Title: Fair and reliable benchmarking of machine learning and AI models

Abstract: As AI becomes increasingly ubiquitous and AI models become more advanced, metrics and benchmarks to evaluate the underlying machine learning (ML) and reasoning algorithms are essential to assess and optimize predictive performance and computational efficiency, ensure reliability, and monitor fairness and bias. In this talk, we discuss the challenges and pitfalls in benchmarking and evaluation of ML and AI models. We provide an overview of common metrics for ML model evaluation, discussing the strengths and limitations of different metrics. We also look at the impact of how available datasets are broken down into data used for model training, validation, and testing. Finally, we provide an overview of ML benchmarks and discuss general approaches to the evaluation of self-awareness in computing systems.
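One of the pitfalls discussed above can be made concrete in a few lines of plain Python (an illustrative sketch, not taken from the talk): on an imbalanced test set, accuracy alone can make a useless model look excellent, while precision, recall, and F1 expose it.

```python
from collections import Counter

def confusion_counts(y_true, y_pred):
    """Count TP, FP, TN, FN for binary labels (1 = positive class)."""
    c = Counter(zip(y_true, y_pred))
    tp = c[(1, 1)]; fp = c[(0, 1)]; tn = c[(0, 0)]; fn = c[(1, 0)]
    return tp, fp, tn, fn

def metrics(y_true, y_pred):
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Imbalanced test set: 95 negatives, 5 positives.
y_true = [0] * 95 + [1] * 5
# A degenerate "classifier" that always predicts the majority class.
y_pred = [0] * 100

m = metrics(y_true, y_pred)
print(m)  # accuracy = 0.95, yet precision, recall, and F1 are all 0.0
```

The same data-dependence applies to how a dataset is split: a metric computed on a test set that leaks training data is just as misleading as a badly chosen metric.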

Bio: Samuel Kounev is a Professor and Chair of Software Engineering at the University of Würzburg. His research spans the areas of software architecture, systems benchmarking, cyber security, and applied data science in the domains of cloud computing, cyber-physical systems, and scientific workflows for Earth observation. He has extensive experience in leading interdisciplinary research projects, for example, EU FP7 Marie Curie Initial Training Network (ITN) “RELATE” in cloud computing, or more recently the bidt project “ROOT” (Real-time Earth Observation of Forest Dynamics and Biodiversity). He is the main author of the first textbook on “Systems Benchmarking”, the 2nd edition of which was published by Springer in 2025. Samuel holds a PhD (Dr.-Ing.) degree in computer science from TU Darmstadt (Germany). Samuel is Founder and Elected Chair of the SPEC Research Group within the Standard Performance Evaluation Corporation (SPEC) as well as co-founder of several conferences in the field, including the ACM/SPEC International Conference on Performance Engineering (ICPE) and the IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS), for which he has also been serving on the Steering Committees. His research has led to over 300 publications (with an h-index of 51) and multiple scientific and industrial awards including 10 Best Paper Awards, SPEC Presidential Award for "Excellence in Research”, Google Research Award, ABB Research Award, and VMware Academic Research Award.


Andrea Janes, Free University of Bozen-Bolzano, Italy

Title: Teaching Performance Engineering with Agentic AI, Lessons Learned from a Hands-On Educational Experience

Abstract: Recent advances in AI agents open new possibilities not only for automating performance engineering tasks, but also for teaching performance engineering in more interactive and experiential ways. Building on an agent-based approach to performance testing, we designed a teaching experience in which students learned core performance engineering concepts by interacting with autonomous agents rather than manually writing test scripts. This talk reflects on that experience and focuses on the lessons learned from using agents as educational companions. We discuss how agent-driven workflows helped students explore system behavior, reason about performance scenarios, and interpret results, while also revealing typical pitfalls such as misplaced trust in automated outputs and misunderstandings around test environments. These limitations often became valuable teaching moments, encouraging students to critically assess results and assumptions.

We conclude by sharing practical insights for educators interested in integrating agentic AI into performance engineering courses, and by discussing how such tools can support learning when used to complement, rather than replace, foundational performance engineering principles.

Bio: Andrea Janes is an associate professor at the Free University of Bozen/Bolzano. He was previously a senior lecturer and researcher at the FHV Vorarlberg University of Applied Sciences in Dornbirn, Austria, and a researcher at the Free University of Bozen/Bolzano, Italy. He holds a Master's degree in Business Informatics from the Vienna University of Technology and a PhD in Computer Science (with honors) from the University of Klagenfurt, Austria. He obtained habilitation as an associate professor in Computer Science and Information Processing Systems. He is particularly interested in Lean and Agile approaches to software engineering, value-based software engineering, empirical software engineering, software testing, and technology transfer, and he has published over 80 papers in journals, refereed conference proceedings, and book chapters in those areas (https://dblp.org/pid/04/2902.html). Andrea is a co-author of the book Lean Software Development in Action published by Springer. He serves as a member of the program committee of international conferences such as ECSA 2024, SCAM 2024, SSP 2024, PROFES 2024, and SAC 2024.


Tommaso Cucinotta, Sant'Anna School of Advanced Studies in Pisa, Italy

Title: A Performance Engineering Troublesome Journey from Academia to Industry.

Abstract: Nowadays, MSc programs in Computer Science & Engineering focus widely on providing students with foundations of computer architectures and programming languages, basic coding and programming abilities, enriched with software engineering and software lifecycle management concepts, and on letting students gain some practical experience in this area through projects of limited complexity, often written entirely from scratch. Performance aspects seem to be primarily focused on fundamental concepts about common data structures and the computational complexity of their main operations. This seems quite at odds with what the job market, especially at big-tech companies, expects to find in new grads: the ability to familiarize themselves with and modify software projects with millions of lines of code of unimaginable complexity, with hundreds of dependencies and poor internal documentation, if any, where key design decisions are buried within git commit logs if one is lucky. In such projects, optimizing code for performance is among the most requested skills, demanding an exceptionally detailed understanding of concepts such as computer architectures, CPU instruction sets, memory hierarchies and cache-coherency protocols, interrupt handling, the operating system in use with its kernel-level architecture and internals, all the way down to networking protocols and their implementations. All of this often translates into hundreds of tunables that may be tweaked to squeeze more and more performance out of the software. This talk will discuss the gap we have in this area of undergraduate education, and what we as lecturers may possibly do for future generations of performance engineers.

Bio: Tommaso Cucinotta is Associate Professor at the Real-Time Systems Lab (RETIS) of Sant'Anna School of Advanced Studies in Pisa, Italy. He has a MSc in Computer Engineering from University of Pisa, and a PhD in Computer Engineering from Sant'Anna. His research interests include operating systems, predictable execution and adaptive CPU scheduling for real-time software systems, with reference to a wide variety of platforms, ranging from embedded systems to infrastructures for Cloud and Distributed Computing and Network Function Virtualization (NFV). He has been Member of Technical Staff (MTS) at Bell Labs in Dublin (Ireland), investigating security and real-time performance of Cloud and NFV services. He has been a senior software development engineer at Amazon Web Services (AWS) in Dublin (Ireland), where he worked on improving the performance and scalability of the DynamoDB real-time NoSQL data-store. Since 2016, he has been Associate Professor at Sant'Anna, and coordinator of the Cyber-Physical Research Area since 2019. Tommaso Cucinotta coauthored roughly 150 international peer-reviewed scientific publications as well as 11 granted and 20 filed patents. He is also an active reviewer and is Associate Editor for the IEEE Transactions on Services Computing and IEEE Transactions on Cloud Computing journals.


Cati Llado, University of the Balearic Islands, Spain

Title: Can we teach performance engineering using model interchange formats and AI?

Abstract: Performance engineering education traditionally relies on analytical models supported by specialized queueing solvers. While effective, this approach creates strong tool dependencies and limits opportunities for qualitative reasoning when solvers are unavailable, difficult to maintain, or inappropriate for exploratory learning. This position paper argues that Model Interchange Formats (MIFs), combined with recent advances in Artificial Intelligence (AI), enable a solver-independent approach to teaching performance engineering. Our experience has been specifically grounded in the use of PMIF+, one instance of a Model Interchange Format, which provides a structured yet solver-agnostic representation of Queuing Networks based models. Rather than replacing analytical methods, AI can act as a pedagogical assistant that supports reasoning, interpretation, and exploratory analysis of MIF-based performance models. We discuss what this approach enables, where it fails, and its implications for performance engineering education. 
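As a rough illustration of the solver-independent idea (PMIF+ itself is XML-based; the dictionary layout below is invented for this sketch and is not actual PMIF+ syntax), a declarative model description can be evaluated with textbook operational laws and M/M/1 formulas, with no dedicated solver:

```python
# Hypothetical, simplified interchange-style model description
# (NOT actual PMIF+ syntax): one open workload visiting two stations.
model = {
    "arrival_rate": 2.0,  # jobs/sec entering the network
    "stations": [
        {"name": "cpu",  "service_time": 0.20, "visits": 1.0},
        {"name": "disk", "service_time": 0.10, "visits": 2.0},
    ],
}

def solve_open_network(model):
    """Evaluate each station with M/M/1 formulas (single server, FCFS)."""
    lam = model["arrival_rate"]
    results = {}
    for st in model["stations"]:
        throughput = lam * st["visits"]               # forced-flow law
        utilization = throughput * st["service_time"]  # utilization law
        assert utilization < 1.0, f"{st['name']} is saturated"
        # M/M/1 residence time per visit: S / (1 - U)
        per_visit = st["service_time"] / (1.0 - utilization)
        results[st["name"]] = {
            "utilization": utilization,
            "residence_time": per_visit * st["visits"],  # total over all visits
        }
    return results

res = solve_open_network(model)
print(res["cpu"]["utilization"])   # 0.4
print(res["disk"]["utilization"])  # 0.4
```

The point for teaching is that the model stays a plain data structure students can read and reason about, whether it is later fed to a real solver, to hand calculation, or to an AI assistant for qualitative discussion.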

Bio: Catalina M. Lladó is a Lecturer in the Departament de Ciències Matemàtiques i Informàtica at the Universitat de les Illes Balears (Palma de Mallorca, Spain). She earned her Ph.D. in Computer Science from Imperial College London (UK) in 2002. Dr. Lladó has been actively involved in the international research community, serving on the program committees of leading conferences such as ICPE and Valuetools, and acting as a reviewer for prestigious international journals, including Performance Evaluation. Her research focuses on performance modeling and performance engineering of computer and communication systems, with particular expertise in model interchange formats. In addition, she maintains a strong interest in advancing research and innovation in the teaching of performance engineering.


Anna-Lena Roth[1], Jonas Posner[1], & Michael Kuhn[2]

[1] Fulda University of Applied Sciences, Fulda, Germany; [2] Otto von Guericke University, Magdeburg, Germany

Title: Enabling Interactive Visualization for Near-Real-Time Performance Analysis of MPI Applications

Abstract: Performance analysis is essential for understanding the behavior of parallel applications on High-Performance Computing (HPC) systems and for identifying bottlenecks and load imbalances. Traditional analysis workflows are post-mortem: applications are instrumented, performance data is collected at runtime, and insights become available only after execution. This workflow is time-consuming, especially for large-scale Message Passing Interface (MPI) applications (i.e., those running on many processes/nodes) that require repeated analysis after code changes, and it demands substantial expertise, posing a steep learning barrier for beginners. To address these challenges, in previous work [22–24] we proposed EduMPI Suite, which provides near-real-time visualization of MPI communication in an interactive GUI, making performance analysis immediately accessible in educational settings. While our previous work focused on usability and classroom integration, this paper presents the technical architecture and performance evaluation of EduMPI Suite’s measurement and data-management system.

We couple EduMPI, an extended Open MPI fork with integrated performance measurement, with EduStore, a time-series database that processes performance-relevant events in near-real-time for visualization in the EduMPI GUI. EduMPI leverages a binary-based ingestion layer for high-throughput data insertion, avoiding costly data conversions. Combined with TigerData’s optimized time-series processing, this design ensures continuous data availability with negligible delay. On-the-fly aggregation and indexing enable responsive queries and smooth, interactive visualization of communication traces. Our evaluation demonstrates that EduMPI Suite introduces less than 3.8% runtime overhead while maintaining query latencies below 0.09 s, enabling interactive performance analysis of MPI applications. Usability studies [22–24] confirm that EduMPI Suite significantly reduces entry barriers for students and improves their ability to identify performance issues over conventional tools.
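EduMPI Suite's real ingestion layer is binary and backed by a time-series database; the sketch below, with invented record shapes and names, only illustrates the general pattern of aggregating timestamped communication events into fixed windows so that a GUI can poll and redraw incrementally instead of waiting for a post-mortem trace:

```python
from collections import defaultdict

def aggregate_events(events, window=1.0):
    """Aggregate (timestamp, rank, bytes) events into per-window totals.

    Returns {window_start: {"messages": n, "bytes": total}}, the kind of
    pre-aggregated view a near-real-time GUI can query cheaply.
    """
    buckets = defaultdict(lambda: {"messages": 0, "bytes": 0})
    for ts, _rank, nbytes in events:
        start = int(ts // window) * window
        buckets[start]["messages"] += 1
        buckets[start]["bytes"] += nbytes
    return dict(buckets)

# Synthetic MPI-like send events: (timestamp_sec, rank, message_bytes)
events = [(0.1, 0, 1024), (0.4, 1, 2048), (1.2, 0, 512), (1.9, 2, 4096)]
print(aggregate_events(events))
# {0.0: {'messages': 2, 'bytes': 3072}, 1.0: {'messages': 2, 'bytes': 4608}}
```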


Arvid Trubin & Anfisa Trubina, TruTech Development, USA

Title: Detecting Past and Future Change Points in Performance Data for Education and Practice

Abstract: Typical tasks in performance engineering education and practice include anomaly detection, outlier detection, change point detection, and trend analysis, with the goal of anticipating future threshold breaches and capacity risks. This paper presents Statistical Exception and Trend Detection (SETDS), a method for detecting both past and future change points in performance time series and discusses its use as both an analytical technique and a teaching tool.

The paper provides an intuitive yet rigorous description of the SETDS method and demonstrates its practical application through Perfomalist, a free web-based tool that implements the approach. Perfomalist enables students and practitioners to visualize anomalies, trends, and change points using IT Control Charts and Exception Values, making advanced concepts in performance analysis more accessible. Perfomalist was also used as a baseline tool in a CMG.org Hackathon, where participants were tasked with identifying change points and anomalies in time-stamped performance data to reveal distinct phases and patterns. Finally, the method and tool have been successfully used in an online course on Performance Anomaly Detection offered through CMG.org, illustrating how real-world performance data and tooling can be effectively integrated into performance engineering education.
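SETDS itself is defined in the paper; as a generic stand-in, a one-sided CUSUM detector illustrates the underlying idea of change point detection: flag the moment a metric drifts persistently away from its historical baseline.

```python
import statistics

def cusum_change_point(series, baseline_n=10, threshold=5.0):
    """Return the index of the first detected upward change point, or None.

    A simple one-sided CUSUM: accumulate deviations above the baseline mean
    (in units of the baseline's standard deviation) and flag when the sum
    crosses `threshold`. A generic illustration, not the SETDS method.
    """
    base = series[:baseline_n]
    mean = statistics.mean(base)
    sd = statistics.pstdev(base) or 1.0  # guard against a flat baseline
    s = 0.0
    for i, x in enumerate(series[baseline_n:], start=baseline_n):
        s = max(0.0, s + (x - mean) / sd)
        if s > threshold:
            return i
    return None

# Stable response times (~100 ms) followed by a sustained regression (~130 ms).
data = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100,
        100, 99, 131, 130, 132, 129, 131, 130]
print(cusum_change_point(data))  # 12: the first point after the shift
```

The "future change point" idea in the paper then amounts to extrapolating a fitted trend forward and asking when it will cross a threshold, rather than only looking backward as this sketch does.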


WEPPE 2026 CFP


Call for Papers


6th Workshop on Education and Practice of Performance Engineering (WEPPE), co-located with the International Conference on Performance Engineering (ICPE) 2026

17th ACM/SPEC International Conference on Performance Engineering

A Joint Meeting of WOSP/SIPEW sponsored by
      ACM SIGMETRICS and ACM SIGSOFT in Cooperation with SPEC

Florence, Italy

May 4-8, 2026

Scope and Topics

The goal of the Workshop on Education and Practice of Performance Engineering is to bring together University researchers and Industry Performance Engineers to share education and practice experiences. We are interested in creating opportunities to share experiences between researchers who are actively teaching performance engineering and Performance Engineers who are applying Performance Engineering techniques in industry.

Topics of interest to the workshop include, but are not limited to the Education and Practice of:

  • Performance in the Data Center, Cloud, Blockchain, IoT, Sensor Networks, and ML/AI
  • Performance methods in software development
  • Model-driven performance engineering
  • Performance modeling and prediction
  • Application, validity, and futures of AI capabilities for performance engineering
  • Performance measurement and experimental analysis
  • Benchmarks (workloads, scenarios, and implementations)
  • Run-time performance and capacity management
  • Performance in cloud, virtualized, and multi-core systems
  • Performance-driven resource and power management
  • Performance of big data systems
  • Performance modeling and evaluation in other domains
  • Performance requirements specification 
  • Performance testing and validation
  • Tools and associated methods for modeling, monitoring, and analyzing performance
  • Relationship between performance engineering and architecture
  • All other topics related to performance engineering
  • Education about other quantitative attributes such as reliability, availability, power consumption, safety, security and survivability

Submission Guidelines

A variety of contribution styles are solicited, including two-page abstracts, presentations, basic and applied research papers offering novel scientific insights, industrial and experience papers reporting on the education and/or practice of performance engineering or on the use of benchmarks in practice, and work-in-progress/vision papers describing ongoing but promising work. Different acceptance criteria apply based on the expected content of the individual contribution types.

Authors will be requested to self-classify their papers according to topic and contribution style when submitting. We specifically encourage position papers of at most 6 pages and experience reports of at most 10 pages. Submissions need to be uploaded to the WEPPE 2026 HotCRP website at:


weppe26.hotcrp.com


At least one author of each accepted paper is required to register for the workshop at the full rate, attend the workshop, and present the paper. Presented papers will be published in the ICPE 2026 conference proceedings, published by ACM and included in the ACM Digital Library. After the conference, there will be a call for a special issue of a journal.

Important Dates

  • Abstract submission deadline: Jan 16, 2026
  • Paper submission deadline: Jan 23, 2026
  • Author Notification: Feb 04, 2026
  • Camera-ready version deadline: March 4, 2026 (firm)


General Chairs

Alberto Avritzer, eSulabSolutions

Matteo Camilli, Politecnico di Milano

James Cusick, Ritsumeikan University, Shiga, Japan

Andrea Janes, Free University of Bozen-Bolzano, Italy


Web Site Chair

Alberto Avritzer, eSulabSolutions

Proposed Program Committee 

Cristina Abad, Escuela Superior Politécnica del Litoral

Alberto Avritzer, eSulabSolutions

Steffen Becker, Stuttgart University

Andre Bondi, Software Performance and Scalability Consulting 

David Daly, MongoDB

Vittoria de Nitto Personè, Tor Vergata University

Alex Iosup, Vrije Universiteit Amsterdam

Raffaela Mirandola, Politecnico di Milano

Manoj Nambiar, Tata Consultancy Services 

Dorina Petriu, Carleton Univ. 

Evgenia Smirni, William and Mary, Williamsburg, VA

Connie Smith, Performance Engineering Services (PES)

Kishor Trivedi, Duke Univ. 

Catia Trubiani, GSSI L’Aquila

Ana-Lucia Varbanescu, University of Amsterdam

Murray Woodside, Carleton University




WEPPE 2025 Program

5th Workshop on Education and Practice of Performance Engineering (WEPPE), co-located with the International Conference on Performance Engineering (ICPE) 2025

Toronto, Canada

May 5, 2025


08:50-09:00: Opening - Minding the Gap between Education and Practice, Alberto Avritzer, eSulabSolutions, USA


9:00-9:25:  Guerrilla Techniques for Robust Performance Engineering, Neil Gunther, Perf Dynamics, USA


Abstract: The Guerrilla approach involves a set of techniques intended to overcome the lack of rigor in performance engineering by providing both students and professionals with a lingua franca that forces rigorous requirements to the surface. The base language comes from queueing theory because there is a 1-to-1 correspondence between the performance metrics that characterize queues and the performance metrics that characterize computer systems. Indeed, all computer systems, from your smartphone to Facebook.com, can be represented as a directed graph of queues.

Extended Abstract: https://gist.github.com/DrQz/70681b81d31828d42e7f7311304ebf7c
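The queue/computer-system correspondence can be made concrete with textbook M/M/1 formulas (a minimal sketch, not part of the Guerrilla materials themselves): utilization, queue length, and response time all follow from just the arrival rate and service time, and the same identities hold for a CPU, a disk, or a web tier.

```python
def mm1_metrics(arrival_rate, service_time):
    """Textbook M/M/1 formulas: the same math describes any single-server queue."""
    utilization = arrival_rate * service_time           # rho = lambda * S
    assert utilization < 1.0, "queue is unstable"
    response_time = service_time / (1.0 - utilization)  # R = S / (1 - rho)
    queue_length = utilization / (1.0 - utilization)    # N = rho / (1 - rho)
    # Little's Law sanity check: N = lambda * R
    assert abs(queue_length - arrival_rate * response_time) < 1e-9
    return utilization, queue_length, response_time

# e.g. 8 requests/sec against a 100 ms service demand
rho, n, r = mm1_metrics(arrival_rate=8.0, service_time=0.1)
print(rho, n, r)  # ~80% busy, ~4 requests in system, 0.5 s mean response
```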


9:25-9:50:  From the deep end to coaching performance, James Cusick, Ritsumeikan University, Japan


Abstract: This presentation traces the arc of a single software developer’s career journey within the field of performance engineering. Beginning with virtually no background in the domain, three phases of evolution are documented. First, an initial immersion into the requirements of understanding the performance of a complex distributed system. The bootstrapping approach to learning the essentials of performance analysis in the face of urgent production anomalies is presented along with its challenges. The second phase describes a broadening of understanding of performance engineering as a discipline including methodological research, conducting industry projects, publishing experience reports, and providing university level education on the subject. Finally, the third phase outlines the coaching of other developers and architects in the expansion of core skills in performance engineering and their application in recent Digital Transformation projects. This path of experience traverses performance requirements specification, modeling, testing, and analysis from the perspective of skill acquisition and application in practical project settings across a variety of technical environments. Audience members will walk away with an understanding of the key skills to be developed for success in this area as well as an appreciation for the multiplicity of ways textbook approaches can be tailored for fit-to-purpose conditions. Finally, direct experiences as both an engineer and a manager allow for commentary on the technical, organizational, and political aspects of technology introduction as it applies to performance engineering methods. Both successful and unsuccessful scenarios are shared. This talk can assist others in preparing for the deployment of such methods and improve the quality of their products as well.


9:50-10:15: Performance Engineering: New and Conflicting Trends, Alex Podelko, Amazon Web Services (AWS), USA


Abstract: Performance engineering is adjusting to major industry trends such as cloud computing, agile development, and DevOps. As systems' scale and sophistication skyrocket, performance definitely gets more attention. However, this adjustment happens in different, sometimes conflicting, ways, and the future of performance as a separate discipline is not clear. We may observe integration with development ("Shift Left") and operations ("Shift Right"), as well as the appearance of new disciplines that include parts of performance engineering (such as SRE and FinOps). While some trends are clear (such as continuous performance testing or observability), others are still being formed. It is not trivial to define the performance engineering body of knowledge at the moment.


10:15-10:45 Coffee Break 


10:45-11:10: How can we teach workload modeling in CS systems classes?, Cristina Abad, Escuela Superior Politécnica del Litoral, Ecuador


Abstract: In Computer Science curriculum guidelines, topics related to Performance Engineering have typically been listed as small, elective components, if at all. Even less has been said about how and when to teach workload modeling. In this talk, I discuss how Systems courses are a good place to include this topic, including suggestions on how to do so that are rooted in personal experience, existing literature and examples from course programs found online.


11:10-11:35: cfdSCOPE: A Fluid-Dynamics Proxy App for Teaching Performance Engineering, P. Arzt, S. Kreutzer, T. Jammer, C. Bischof, TU Darmstadt, Germany


Abstract: Teaching performance engineering in high-performance computing (HPC) requires example codes that demonstrate bottlenecks and enable hands-on optimization. However, existing HPC applications and proxy apps often lack the balance of simplicity, transparency, and optimization potential needed for effective teaching. To address this, we developed cfdSCOPE, a compact, open-source computational fluid dynamics (CFD) proxy app specifically designed for educational purposes. cfdSCOPE simulates flow in a 3D volume using sparse linear algebra, a common HPC workload, and comprises fewer than 1,100 lines of code. Its minimal dependencies and transparent design ensure students can fully control and optimize performance-critical aspects, while its naive OpenMP parallelization provides significant optimization opportunities, thus making it an ideal tool for teaching performance engineering.


11:35-12:00: Overcoming Challenges in Teaching Performance-Related Tools, Catalina M. Lladó, University of the Balearic Islands, Spain


Abstract: Teaching performance evaluation courses presents unique challenges for educators. Professors must navigate the complexities of integrating practical work with performance-related tools while dealing with constraints such as limited industry support, resource availability, and varying student skill levels. This article explores three primary approaches: using industry tools, building on tools created by other students, and having students develop tools from scratch. Each option comes with its own benefits and drawbacks. By analysing these approaches, this article aims to provide strategies to enhance student engagement, foster learning, help professors, and open up a discussion on this challenging topic.


12:00-12:25:  Cultivating Performance Awareness in a Testing Project: A Focus on Machine-Readable Travel Documents, Lu Xiao, Andre B. Bondi, Eman AlOmar, Yu Tao, Stevens Institute of Technology, USA


Abstract: This paper presents a course project to integrate performance engineering concepts into a software testing and quality assurance curriculum. It uses the real-world context of validating and testing Machine-Readable Travel Documents (MRTDs) to integrate multiple testing techniques, including unit testing, mocking, mutation testing, and performance measurement. This integration allows students to "connect the dots" between different testing methodologies, enhancing their ability to apply them holistically in software testing projects. A key goal of the project is to help students understand how performance testing naturally fits into the overall testing process, just as it would in real-world practice, alongside functional testing. Students engage in hands-on exercises that require evaluating both functional correctness (e.g., conformance to MRTD standards) and performance attributes, such as execution time and the cost of encoding and decoding large sets of input records. The preliminary results suggest that this approach not only deepens students' understanding of performance engineering but also encourages them to view testing as a multifaceted process. We share this project with other educators as a framework for incorporating performance testing into software testing curricula, ensuring that students can practice critical testing skills in a real-world context.


12:30pm - Lunch 



WEPPE 2025


Call for Papers


5th Workshop on Education and Practice of Performance Engineering (WEPPE), co-located with the International Conference on Performance Engineering (ICPE) 2025

16th ACM/SPEC International Conference on Performance Engineering

A Joint Meeting of WOSP/SIPEW sponsored by
      ACM SIGMETRICS and ACM SIGSOFT in Cooperation with SPEC

Toronto, Canada

May 5-9, 2025

Scope and Topics

The goal of the Workshop on Education and Practice of Performance Engineering is to bring together university researchers and industry performance engineers to share education and practice experiences. We are interested in creating opportunities to exchange experiences between researchers who are actively teaching performance engineering and performance engineers who are applying performance engineering techniques in industry.

Topics of interest to the workshop include, but are not limited to Education and Practice of:

  • Performance in the Data Center,  Cloud, Blockchain, IoT, Sensor Networks, and ML/AI 
  • Performance methods in software development
  • Model-driven performance engineering
  • Performance modeling and prediction
  • Performance measurement and experimental analysis
  • Benchmarks (workloads, scenarios, and implementations)
  • Run-time performance and capacity management
  • Performance in cloud, virtualized, and multi-core systems
  • Performance-driven resource and power management
  • Performance of big data systems
  • Performance modeling and evaluation in other domains
  • Performance requirements specification 
  • Performance testing and validation
  • Relationship between performance engineering and architecture
  • All other topics related to performance engineering
  • Education of other quantitative attributes such as reliability, availability, power consumption, safety, security and survivability

Submission Guidelines

A variety of contribution styles is solicited, including two-page abstracts, presentations, basic and applied research papers offering novel scientific insights, industrial and experience papers reporting on the education and/or practice of performance engineering or on benchmarks in practice, and work-in-progress/vision papers on ongoing but interesting work. Different acceptance criteria apply based on the expected content of the individual contribution types.

Authors will be asked to self-classify their papers by topic and contribution style when submitting. We specifically encourage position papers of at most 6 pages and experience reports of at most 10 pages. Submissions must be uploaded to the WEPPE 2025 HotCRP website at:


weppe25.hotcrp.com


At least one author of each accepted paper is required to register for the workshop at the full rate, attend the workshop, and present the paper. Presented papers will be published in the ICPE 2025 conference proceedings, which will be published by ACM and included in the ACM Digital Library. After the conference, there will be a call for papers for a special issue of a journal.

Important Dates

  • Abstract submission deadline: Jan 15, 2025
  • Paper submission deadline: Jan 20, 2025
  • Author Notification: Feb 04, 2025
  • Camera-ready version deadline: Feb 22, 2025


General Chairs

Alberto Avritzer, eSulabSolutions

Matteo Camilli, Politecnico di Milano

James Cusick, Ritsumeikan University, Shiga, Japan

Andrea Janes, Free University of Bozen-Bolzano, Italy


Web Site Chair

Alberto Avritzer, eSulabSolutions

Proposed Program Committee 

Cristina Abad, Escuela Superior Politécnica del Litoral

Alberto Avritzer, eSulabSolutions

Steffen Becker, Stuttgart University

Andre Bondi, Software Performance and Scalability Consulting 

David Daly, MongoDB

Vittoria de Nitto Personè, University of Rome Tor Vergata

Alex Iosup, Vrije Universiteit Amsterdam

Raffaela Mirandola, Politecnico di Milano

Manoj Nambiar, Tata Consultancy Services 

Dorina Petriu, Carleton Univ. 

Evgenia Smirni, William and Mary, Williamsburg, VA

Connie Smith, Performance Engineering Services (PES)

Kishor Trivedi, Duke Univ. 

Catia Trubiani, GSSI L’Aquila

Ana-Lucia Varbanescu, University of Amsterdam

Murray Woodside, Carleton University



Historical WEPPE 2023

Program


4th Workshop on Education and Practice of Performance Engineering (WEPPE), co-located with the International Conference on Performance Engineering (ICPE) 2023

Coimbra, Portugal

April 15, 2023


Saturday, April 15, 2023 


09:00-09:10 - Opening: Minding the Gap between Education and Practice, Alberto Avritzer, Matteo Camilli


9:10-9:30: Performance Engineering Practices for Modern Industrial Applications at ABB Research, Heiko Koziolek, ABB


Abstract: ABB is developing a vast range of software services for process automation applications used in chemical production facilities, power plants, and container ships. High responsiveness and resource efficiency are important in this domain, both for real-time embedded systems and distributed containerized systems, but performance engineering can be challenging due to system complexity and application domain heterogeneity. This talk provides experiences and lessons learned from several selected case studies on performance engineering. It illustrates performance testing of OPC UA pub/sub communication, clustered MQTT brokers for edge computing, software container online updates, and lightweight Kubernetes frameworks, while highlighting the applied practices and tools. The talk reports on challenges in workload modeling, performance testing, and performance modeling.


9:30-9:50: Quantitative Analysis of Software Designs: Teaching Design and Experiences, Mir Alireza Hakamian, University of Stuttgart


Abstract: The Software Quality and Architecture group (SQA) at the University of Stuttgart offers the Quantitative Analysis of Software Designs (QASD) course for master's students. The goal is to give students the skills needed to quantitatively evaluate architecture alternatives for software systems. The course combines the required theoretical skills, such as applying stochastic processes, with practical exercises using suitable tools.

The challenge is providing teaching materials that balance the necessary theoretical knowledge with appropriate tooling that can be used in practice.

As a solution, the course is designed so that one third covers the formalisms behind quantitative analysis, including stochastic processes and queueing theory; one third covers modeling languages, such as queueing networks, UML, and UML profiles, including MARTE; and the remaining third uses tooling to model and analyze example systems.

During the COVID-19 pandemic, we provided students with an e-learning module with pre-recorded videos, online quizzes at the end of every chapter, and a virtual machine with all required tooling for the exercise sheets pre-installed.

In the past two years, student feedback was often positive regarding the balance between theory and tooling. However, it must be emphasized that the number of students participating in the course has never exceeded ten; hence, student feedback has not been collected through the university's survey.


9:50-10:10: Early Progress on Enhancing Existing Software Engineering Courses to Cultivate Performance Awareness, Andre Bondi and Lu Xiao, Stevens Institute of Technology


Abstract: Software engineering and computer science courses are frequently focused on particular areas in a way that neglects cross-cutting quality attributes such as performance, reliability, and security. We will describe the progress we have made in developing enhancements to some of our existing software engineering courses to draw attention to, and even lay the foundations of, an awareness of performance considerations in the software development life cycle. In doing so, we wish to make performance considerations integral to the software engineering mindset while avoiding the need to remove current material from our existing courses. This work is part of an NSF-funded project for undergraduate curriculum development.


10:10-10:30 - Break


10:30-10:50 - Theory and Practice in Performance Evaluation Courses: the Challenge of Online Teaching, Andrea Marin, University Ca' Foscari of Venice


Abstract: In this talk, we report on five years of experience teaching the course "Software Performance and Scalability" at the University Ca' Foscari of Venice, Italy. The course is a reformulation of a more methodological course and is part of a master's-level software engineering curriculum. Over these years, we have made a significant effort to include practical and lab experiences (especially with benchmarking) in the course, which required setting up a small dedicated lab where students could practice without interfering with other department machines. This approach has raised interest in the course, and more students have chosen it among the eligible courses. However, the COVID-19 pandemic made using the lab impossible, and new strategies had to be explored. We will discuss the workarounds that were tried and the advantages and disadvantages we noticed in the learning process.


10:50-11:10 -   Levelling up Performance and Performance Skills at MongoDB, David Daly, MongoDB


Abstract: MongoDB has invested in developing a performance infrastructure and a corresponding performance culture. All development engineers are expected to improve MongoDB performance by adding performance tests, optimizing code, and fixing performance regressions. Investing in the infrastructure is clear: we develop and support tools that make it easy to track performance changes and improve performance. Investing in the culture includes formal and informal training. The training must ultimately both support strong developers with very limited performance backgrounds and develop our future performance experts.


11:10-11:30 - Views From the Trenches: Current Trends in Performance Engineering, Alexander Podelko, Amazon


Abstract: Performance engineering is changing before our eyes, adjusting to current industry trends such as cloud computing, agile development, and DevOps. As systems' scale and sophistication skyrocket, performance gets more attention. While some performance concepts, such as algorithm complexity, have become a must for anybody working in the industry, this still does not result in a consistent view of performance. So it remains an open question what computer professionals should learn about performance and, even more challenging, what is needed to prepare performance professionals. These questions have never been clearly answered, and they are now even less defined than before.


11:30-11:50 - Performance Analysis Tools for MPI Applications and their Use in Programming Education, Anna-Lena Roth and Tim Süß, Fulda University of Applied Sciences


Abstract: Performance analysis tools are frequently used to support the development of parallel MPI applications. They facilitate, e.g., the detection of errors, bottlenecks, or inefficiencies, but differ substantially in their instrumentation, measurement, and type of feedback. Tools that provide visual feedback are especially helpful for educational purposes. They provide a visual abstraction of program behavior, supporting learners in identifying and understanding performance issues and writing more efficient code. However, existing professional tools for performance analysis are very complex, and their use in beginner courses can be very demanding. Foremost, their instrumentation and measurement require deep knowledge and take a long time. Immediate as well as straightforward feedback is essential to motivate learners.

This paper provides an extensive overview of performance analysis tools for parallel MPI applications, which experienced developers broadly use today. It also gives an overview of existing educational tools for parallel programming with MPI and shows their shortcomings compared to professional tools. Using tools for performance analysis of MPI programs in educational scenarios can promote the understanding of program behavior in large HPC systems and support learning parallel programming. At the same time, the complexity of the programs and the lack of infrastructure in educational institutions are barriers. These aspects will be considered and discussed in detail.


11:50-12:20 - Panel discussion with the presenters


14 minutes (2 minutes x 7): each presenter provides a position for discussion on the gap between education and practice of performance engineering


16 minutes: open questions and answers among the panel and the audience


WEPPE 2023


Call for Papers


4th Workshop on Education and Practice of Performance Engineering (WEPPE), co-located with the International Conference on Performance Engineering (ICPE) 2023

14th ACM/SPEC International Conference on Performance Engineering

A Joint Meeting of WOSP/SIPEW sponsored by
      ACM SIGMETRICS and ACM SIGSOFT in Cooperation with SPEC

Coimbra, Portugal

April 15-19, 2023

Scope and Topics

The goal of the Workshop on Education and Practice of Performance Engineering is to bring together university researchers and industry performance engineers to share education and practice experiences. We are interested in creating opportunities to exchange experiences between researchers who are actively teaching performance engineering and performance engineers who are applying performance engineering techniques in industry.

Topics of interest to the workshop include, but are not limited to Education and Practice of:

  • Performance in the Data Center,  Cloud, Blockchain, IoT, Sensor Networks, and ML/AI 
  • Performance methods in software development
  • Model-driven performance engineering
  • Performance modeling and prediction
  • Performance measurement and experimental analysis
  • Benchmarks (workloads, scenarios, and implementations)
  • Run-time performance and capacity management
  • Performance in cloud, virtualized, and multi-core systems
  • Performance-driven resource and power management
  • Performance of big data systems
  • Performance modeling and evaluation in other domains
  • Performance requirements specification 
  • Performance testing and validation
  • Relationship between performance engineering and architecture
  • All other topics related to performance engineering
  • Education of other quantitative attributes such as reliability, availability, power consumption, safety, security and survivability

Submission Guidelines

A variety of contribution styles is solicited, including two-page abstracts, presentations, basic and applied research papers offering novel scientific insights, industrial and experience papers reporting on the education and/or practice of performance engineering or on benchmarks in practice, and work-in-progress/vision papers on ongoing but interesting work. Different acceptance criteria apply based on the expected content of the individual contribution types.

Authors will be asked to self-classify their papers by topic and contribution style when submitting. We specifically encourage position papers of at most 6 pages and experience reports of at most 10 pages. Submissions must be uploaded to ICPE's EasyChair installation at:

https://easychair.org/conferences/?conf=weppe2023

At least one author of each accepted paper is required to register for the workshop at the full rate, attend the workshop, and present the paper. Presented papers will be published in the ICPE 2023 conference proceedings, which will be published by ACM and included in the ACM Digital Library. After the conference, there will be a call for papers for a special issue of a journal.

Important Dates

  • Abstract submission deadline: Jan 15, 2023
  • Paper submission deadline: Jan 20, 2023
  • Author Notification: Feb 04, 2023
  • Camera-ready version deadline: Feb 22, 2023


General Chairs

Alberto Avritzer, eSulabSolutions

Andre Bondi, Software Performance and Scalability Consulting, LLC

Matteo Camilli, Politecnico di Milano


Web Site Chair

Alberto Avritzer, eSulabSolutions

Proposed Program Committee (preliminary)

Cristina Abad, Escuela Superior Politécnica del Litoral

Alberto Avritzer, eSulabSolutions

Steffen Becker, Stuttgart University

Andre Bondi, Software Performance and Scalability Consulting 

David Daly, MongoDB

Vittoria de Nitto Personè, University of Rome Tor Vergata

Andre Van Hoorn, Univ Stuttgart 

Alex Iosup, Vrije Universiteit Amsterdam

Raffaela Mirandola, Politecnico di Milano

Manoj Nambiar, Tata Consultancy Services 

Dorina Petriu, Carleton Univ. 

Evgenia Smirni, William and Mary, Williamsburg, VA

Connie Smith, Performance Engineering Services (PES)

Kishor Trivedi, Duke Univ. 

Catia Trubiani, GSSI L’Aquila

Ana-Lucia Varbanescu, University of Amsterdam

Murray Woodside, Carleton University






Abstract: Performance engineering is adjusting to major industry trends such as cloud computing, agile development, and DevOps. As the scale and sophistication of systems skyrocket, performance definitely gets more attention. However, this adjustment happens in different, sometimes conflicting, ways, and the future of performance as a separate discipline is not clear. We may observe integration with development (“Shift Left”) and operations (“Shift Right”), as well as the appearance of new disciplines that include parts of performance engineering (such as SRE and FinOps). While some trends are clear (such as continuous performance testing or observability), others are still being formed. It is not trivial to define the performance engineering body of knowledge at the moment.


10:15-10:45 Coffee Break 


10:45-11:10: How can we teach workload modeling in CS systems classes?, Cristina Abad, Escuela Superior Politécnica del Litoral, Ecuador


Abstract: In Computer Science curriculum guidelines, topics related to Performance Engineering have typically been listed as small, elective components, if at all. Even less has been said about how and when to teach workload modeling. In this talk, I discuss how Systems courses are a good place to include this topic, including suggestions on how to do so that are rooted in personal experience, existing literature and examples from course programs found online.
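A typical classroom exercise in this spirit (a hypothetical illustration, not from the talk) is to generate a synthetic trace with two staples of workload modeling: Zipf-skewed key popularity and Poisson arrivals.

```python
# Hypothetical workload-modeling exercise: a toy key-value trace with
# Zipf-distributed popularity and Poisson (exponential-gap) arrivals.
import random

def zipf_weights(n_keys, s=1.0):
    """Unnormalized Zipf weights: popularity of rank k is 1/k^s."""
    return [1.0 / (k ** s) for k in range(1, n_keys + 1)]

def synthetic_trace(n_requests, n_keys, rate, seed=42):
    """Return a list of (arrival_time, key) pairs."""
    rng = random.Random(seed)
    weights = zipf_weights(n_keys)
    t = 0.0
    trace = []
    for _ in range(n_requests):
        t += rng.expovariate(rate)  # Poisson process: exponential gaps
        key = rng.choices(range(n_keys), weights=weights, k=1)[0]
        trace.append((t, key))
    return trace

trace = synthetic_trace(n_requests=1000, n_keys=100, rate=50.0)
# Skew check: the single most popular key should dominate the trace.
top_share = sum(1 for _, k in trace if k == 0) / len(trace)
print(f"{len(trace)} requests, top key share = {top_share:.0%}")
```

Students can then fit the generated trace back (e.g., estimate the Zipf exponent) and compare against real system logs.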


11:10-11:35: cfdSCOPE: A Fluid-Dynamics Proxy App for Teaching Performance Engineering, P. Arzt, S. Kreutzer, T. Jammer, C. Bischof, TU Darmstadt, Germany


Abstract: Teaching performance engineering in high-performance computing (HPC) requires example codes that demonstrate bottlenecks and enable hands-on optimization. However, existing HPC applications and proxy apps often lack the balance of simplicity, transparency, and optimization potential needed for effective teaching. To address this, we developed cfdSCOPE, a compact, open-source computational fluid dynamics (CFD) proxy app specifically designed for educational purposes. cfdSCOPE simulates flow in a 3D volume using sparse linear algebra, a common HPC workload, and comprises fewer than 1,100 lines of code. Its minimal dependencies and transparent design ensure students can fully control and optimize performance-critical aspects, while its naive OpenMP parallelization provides significant optimization opportunities, thus making it an ideal tool for teaching performance engineering.


11:35-12:00: Overcoming Challenges in Teaching Performance-Related Tools, Catalina M. Lladó, University of the Balearic Islands, Spain


Abstract: Teaching performance evaluation courses presents unique challenges for educators. Professors must navigate the complexities of integrating practical work with performance-related tools while dealing with constraints such as limited industry support, resource availability, and varying student skill levels. This article explores three primary approaches: using industry tools, building on tools created by other students, and having students develop tools from scratch. Each option comes with its own benefits and drawbacks. By analysing these approaches, this article aims to provide strategies to enhance student engagement, foster learning, help professors, and open up a discussion on this challenging topic.


12:00-12:25:  Cultivating Performance Awareness in a Testing Project: A Focus on Machine-Readable Travel Documents, Lu Xiao, Andre B. Bondi, Eman AlOmar, Yu Tao, Stevens Institute of Technology, USA


Abstract: This paper presents a course project to integrate performance engineering concepts into a software testing and quality assurance curriculum. It uses the real-world context of validating and testing Machine-Readable Travel Documents (MRTDs) to integrate multiple testing techniques, including unit testing, mocking, mutation testing, and performance measurement. This integration allows students to “connect the dots” between different testing methodologies, enhancing their ability to apply them holistically in software testing projects. A key goal of the project is to help students understand how performance testing naturally fits into the overall testing process—just as it would in real-world practice—alongside functional testing. Students engage in hands-on exercises that require evaluating both functional correctness (e.g., conformance to MRTD standards) and performance attributes, such as execution time and the cost of encoding and decoding large sets of input records. The preliminary results suggest that this approach not only deepens students’ understanding of performance engineering but also encourages them to view testing as a multifaceted process. We share this project with other educators as a framework for incorporating performance testing into software testing curricula, ensuring that students can practice critical testing skills in a real-world context.


12:30pm - Lunch 






WEPPE 2025


Call for Papers


5th Workshop on Education and Practice of Performance Engineering (WEPPE), co-located with the International Conference on Performance Engineering (ICPE) 2025

16th ACM/SPEC International Conference on Performance Engineering

A Joint Meeting of WOSP/SIPEW sponsored by
      ACM SIGMETRICS and ACM SIGSOFT in Cooperation with SPEC

Toronto, Canada

May 5-9, 2025

Scope and Topics

The goal of the Workshop on Education and Practice of Performance Engineering is to bring together university researchers and industry performance engineers to share education and practice experiences. We are interested in creating opportunities for exchange between researchers who are actively teaching performance engineering and performance engineers who are applying performance engineering techniques in industry.

Topics of interest to the workshop include, but are not limited to Education and Practice of:

  • Performance in the Data Center, Cloud, Blockchain, IoT, Sensor Networks, and ML/AI
  • Performance methods in software development
  • Model-driven performance engineering
  • Performance modeling and prediction
  • Performance measurement and experimental analysis
  • Benchmarks (workloads, scenarios, and implementations)
  • Run-time performance and capacity management
  • Performance in cloud, virtualized, and multi-core systems
  • Performance-driven resource and power management
  • Performance of big data systems
  • Performance modeling and evaluation in other domains
  • Performance requirements specification 
  • Performance testing and validation
  • Relationship between performance engineering and architecture
  • All other topics related to performance engineering
  • Education of other quantitative attributes such as reliability, availability, power consumption, safety, security and survivability

Submission Guidelines

A variety of contribution styles are solicited, including two-page abstracts, presentations, basic and applied research papers offering novel scientific insights, industrial and experience papers reporting on the education and/or practice of performance engineering or on the use of benchmarks in practice, and work-in-progress/vision papers on ongoing but interesting work. Different acceptance criteria apply depending on the contribution type.

Authors will be requested to self-classify their papers according to topic and contribution style when submitting. We specifically encourage position papers of at most 6 pages and experience reports of at most 10 pages. Submissions must be uploaded to the WEPPE 2025 HotCRP site:


weppe25.hotcrp.com


At least one author of each accepted paper is required to register for the workshop at the full rate, attend the workshop, and present the paper. Presented papers will be published in the ICPE 2025 conference proceedings, published by ACM and included in the ACM Digital Library. After the conference, there will be a call for a special issue of a journal.

Important Dates

  • Abstract submission deadline: Jan 15, 2025
  • Paper submission deadline: Jan 20, 2025
  • Author Notification: Feb 04, 2025
  • Camera-ready version deadline: Feb 22, 2025


General Chairs

Alberto Avritzer, eSulabSolutions

Matteo Camilli, Politecnico di Milano

James Cusick, Ritsumeikan University, Shiga, Japan

Andrea Janes, Free University of Bozen-Bolzano, Italy


Web Site Chair

Alberto Avritzer, eSulabSolutions

Proposed Program Committee 

Cristina Abad, Escuela Superior Politécnica del Litoral

Alberto Avritzer, eSulabSolutions

Steffen Becker, Stuttgart University

Andre Bondi, Software Performance and Scalability Consulting 

David Daly, MongoDB

Vittoria de Nitto Personè, University of Rome Tor Vergata

Alex Iosup, Vrije Universiteit Amsterdam

Raffaela Mirandola, Politecnico Milano 

Manoj Nambiar, Tata Consultancy Services 

Dorina Petriu, Carleton Univ. 

Evgenia Smirni, William and Mary, Williamsburg, VA

Connie Smith, Performance Engineering Services (PES)

Kishor Trivedi, Duke Univ. 

Catia Trubiani, GSSI L’Aquila

Ana-Lucia Varbanescu, University of Amsterdam

Murray Woodside, Carleton University



Historical WEPPE 2023

Program


4th Workshop on Education and Practice of Performance Engineering (WEPPE), co-located with the International Conference on Performance Engineering (ICPE) 2023

Coimbra, Portugal

April 15, 2023


Saturday, April 15, 2023 


09:00-09:10 - Opening - Minding the Gap between Education and Practice - Alberto Avritzer, Matteo Camilli


9:10-9:30: Performance Engineering Practices for Modern Industrial Applications at ABB Research, Heiko Koziolek, ABB


Abstract: ABB is developing a vast range of software services for process automation applications used in chemical production facilities, power plants, and container ships. High responsiveness and resource efficiency are important in this domain, both for real-time embedded systems and distributed containerized systems, but performance engineering can be challenging due to system complexity and application domain heterogeneity. This talk provides experiences and lessons learned from several selected case studies on performance engineering. It illustrates performance testing of OPC UA pub/sub communication, clustered MQTT brokers for edge computing, online updates of software containers, and lightweight Kubernetes frameworks, while highlighting the applied practices and tools. The talk reports on challenges in workload modeling, performance testing, and performance modeling.


9:30-9:50: Quantitative Analysis of Software Designs: Teaching Design and Experiences, Mir Alireza Hakamian, University of Stuttgart


Abstract: The Software Quality and Architecture group (SQA) at the University of Stuttgart offers the Quantitative Analysis of Software Designs (QASD) course for master's students. The goal is to give students the necessary skills to evaluate architecture alternatives of software systems quantitatively. The course combines the required theoretical skills, such as applying stochastic processes, with practical exercises using suitable tools.

The Challenge is providing teaching materials that balance necessary theoretical knowledge and appropriate tooling that can be used in practice.

As a Solution, the course is designed so that one-third is about the formalisms behind quantitative analysis, including stochastic processes and queuing theory. One-third is modeling languages, such as queuing networks, UML, and UML profiles, including MARTE. The other one-third uses tooling to model and analyze example systems.

During the COVID pandemic, we provided students with an e-learning module with pre-recorded videos, online quizzes at the end of every chapter, and a virtual machine with all the required tooling for the exercise sheets pre-installed.

In the past two years, student feedback has often been positive regarding the balance between theory and tooling. However, it must be emphasized that the number of students participating in the course has never exceeded ten; hence, student feedback has not been collected through the university's survey.


9:50-10:10: Early Progress on Enhancing Existing Software Engineering Courses to Cultivate Performance Awareness, Andre Bondi and Lu Xiao, Stevens Institute of Technology


Abstract: Software engineering and computer science courses are frequently focused on particular areas in a way that neglects cross-cutting quality attributes such as performance, reliability, and security. We will present the progress we have made in developing enhancements to some of our existing software engineering courses to draw attention to, and even lay the foundations of, an awareness of performance considerations in the software development life cycle. In doing so, we wish to make performance considerations integral to the software engineering mindset while avoiding the need to remove current material from our existing courses. This work is part of an NSF-funded project for undergraduate curriculum development.


10:10-10:30 - Break


10:30-10:50 - Theory and Practice in Performance Evaluation Courses: the challenge of online teaching, Andrea Marin, University Ca' Foscari of Venice


Abstract: In this talk, we report on five years of experience teaching the course "Software Performance and Scalability" at the University Ca' Foscari of Venice, Italy. The course is a reformulation of a more methodological course and is part of a master-level software engineering curriculum. Over these years, we have made a significant effort to include practical and lab experiences (especially with benchmarking) among the topics of the course; this required setting up a small dedicated lab where students could practice without interfering with other department machines. This approach has raised interest in the course, and more students have chosen it among the eligible courses. However, the advent of the COVID pandemic made using the lab impossible, and new strategies had to be explored. We will discuss the workarounds that were tried and the advantages and disadvantages that we noticed in the learning process.


10:50-11:10 -   Levelling up Performance and Performance Skills at MongoDB, David Daly, MongoDB


Abstract: MongoDB has invested in developing a performance infrastructure and a corresponding performance culture. All development engineers are expected to improve MongoDB performance through adding performance tests, optimizing code, and fixing performance regressions. Investing in the infrastructure is clear: we develop and support tools that make it easy to track performance changes and improve performance. Investing in the culture includes formal and informal training. The training must ultimately support both strong developers with very limited performance backgrounds and the development of our future performance experts.


11:10-11:30 - Views From the Trenches: Current Trends in Performance Engineering, Alexander Podelko, Amazon


Abstract: Performance engineering is changing before our eyes, adjusting to current industry trends such as cloud computing, agile development, and DevOps. As the scale and sophistication of systems skyrocket, performance gets more attention. While some performance concepts, such as algorithm complexity, have become a must for anybody working in the industry, this still does not add up to a consistent view of performance. So it remains an open question what computer professionals should learn about performance, and, even more challenging, what is needed to prepare performance professionals: a question that has never been clearly answered and is now even less defined than before.


11:30-11:50 - Performance Analysis Tools for MPI Applications and their Use in Programming Education, Anna-Lena Roth and Tim Süß, Hochschule Fulda, University of Applied Sciences


Abstract: Performance analysis tools are frequently used to support the development of parallel MPI applications. They facilitate, e.g., the detection of errors, bottlenecks, or inefficiencies, but differ substantially in their instrumentation, measurement, and type of feedback. In particular, tools that provide visual feedback are helpful for educational purposes. They provide a visual abstraction of program behavior, supporting learners in identifying and understanding performance issues and writing more efficient code. However, existing professional tools for performance analysis are very complex, and their use in beginner courses can be very demanding. Foremost, their instrumentation and measurement require deep knowledge and take a long time. Immediate as well as straightforward feedback is essential to motivate learners.

This paper provides an extensive overview of performance analysis tools for parallel MPI applications that experienced developers broadly use today. It also gives an overview of existing educational tools for parallel programming with MPI and shows their shortcomings compared to professional tools. Using tools for performance analysis of MPI programs in educational scenarios can promote the understanding of program behavior in large HPC systems and support learning parallel programming. At the same time, the complexity of the programs and the lack of infrastructure in educational institutions are barriers. These aspects will be considered and discussed in detail.


11:50-12:20 - Panel discussion of presenters  - 


14 minutes (2 minutes × 7): each presenter provides a position for discussion on the gap between education and practice of performance engineering


16 minutes: open questions and answers among the panel and the audience

WEPPE 2023

Program


4th  Workshop on Education and Practice of Performance Engineering (WEPPE) - co-located with  International Conference on Performance Engineering (ICPE) 2023

Coimbra, Portugal

April 15, 2023


Saturday, April 15, 2023 


09:00-09:10 -Opening - Minding the Gap between Education and Practice -  Alberto Avritzer, Matteo Camilli


9:10-9:30:   Performance Engineering Practices for Modern Industrial

Applications at ABB Research, Heiko Koziolek, ABB


Abstract: ABB is developing a vast range of software services for

process automation applications used in chemical production facilities,

power plants, and container ships. High responsiveness and resource

efficiency is important in this domain, both for real-time embedded

systems and distributed containerized systems, but performance

engineering can be challenging due to system complexity and application

domain heterogeneity. This talk provides experiences and lessons learned

from several selected case studies on performance engineering. It

illustrates testing performance of OPC UA pub/sub communication,

clustered MQTT brokers for edge computing, software container online

updates, and lightweight Kubernetes frameworks while highlighting the

applied practices and tools. The talk reports on challenges in workload

modeling, performance testing, and performance modeling.


9:30-9:50: Quantitative Analysis of Software Designs: Teaching Design and Experiences, Mir Alireza, Hakamian, University of Stuttgart


Abstract: The Software Quality and Architecture group (SQA) at the University of Stuttgart offers the Quantitative Analysis of Software Designs (QASD) course for master students. The goal is to give students the necessary skill to evaluate architecture alternatives of software systems quantitatively. The course offers a combination of required theoretical skills, such as applying stochastic processes and practical exercises using suitable tools.

The Challenge is providing teaching materials that balance necessary theoretical knowledge and appropriate tooling that can be used in practice.

As a Solution, the course is designed so that one-third is about the formalisms behind quantitative analysis, including stochastic processes and queuing theory. One-third is modeling languages, such as queuing networks, UML, and UML profiles, including MARTE. The other one-third uses tooling to model and analyze example systems.

During Corona, we provided students with an e-learning module with pre-recorded videos, online quizzes at the end of every chapter, and a virtual machine that pre-installed all the required tooling for the exercise sheets.

In the past two years, students' feedback was often positive regarding the balance between theory and tooling. However, it has to be emphasized that the number of students participating in the course has always been no more than ten. Hence, the student feedback has not been collected by the universities' survey.


9:50-10:10: Early Progress on Enhancing Existing Software Engineering Courses to Cultivate Performance Awareness, Andre Bondi and Lu Xiao, Stevens Institute of Technology


Abstract: Software engineering and computer science courses are frequently focused on particular areas in a way that neglects such cross-cutting quality attributes as performance, reliability, and security. We will the progress we have made in developing enhancements to some of our existing software engineering courses to draw attention and even lay the foundations of an awareness of performance consid- erations in the software development life cycle. In doing so, we wish to make performance considerations integral to the software engineering mindset while avoiding the need to remove current ma- terial from our existing courses. This work is part of an NSF-funded project for undergraduate curriculum development. 


10:10-10:30 - Break


10:30-10:50 -  Theory and Practice in Performance Evaluation Courses:

the challenge of online teaching, Andrea Marin, University Ca' Foscari of Venice


Abstract: In this talk, we report on the experience of five years of teaching the course "Software Performance and Scalability" at the University Ca' Foscari of Venice, Italy. The course is a reformulation of a more methodological course and is part of a master's-level software engineering curriculum. Over these years, we have made a significant effort to include practical and lab experiences (especially with benchmarking) among the topics of the course; this required setting up a small dedicated lab where students could practice without interfering with other department machines. This approach has raised interest in the course, and more students have chosen it among the eligible courses. However, the COVID-19 pandemic made using the lab impossible, and new strategies had to be explored. We will discuss the workarounds that have been tried and the advantages and disadvantages that we noticed in the learning process.


10:50-11:10 - Levelling up Performance and Performance Skills at MongoDB, David Daly, MongoDB


Abstract: MongoDB has invested in developing a performance infrastructure and a corresponding performance culture. All development engineers are expected to improve MongoDB performance through adding performance tests, optimizing code, and fixing performance regressions. The investment in infrastructure is concrete: we develop and support tools that make it easy to track performance changes and improve performance. The investment in culture includes formal and informal training, which must ultimately both support strong developers with very limited performance backgrounds and develop our future performance experts.


11:10-11:30 - Views From the Trenches: Current Trends in Performance Engineering, Alexander Podelko, Amazon


Abstract: Performance engineering is changing before our eyes, adjusting to current industry trends such as cloud computing, agile development, and DevOps. As the scale and sophistication of systems skyrocket, performance gets more attention. While some performance concepts, such as algorithm complexity, have become a must for anybody working in the industry, this still does not result in a consistent view of performance. So it remains an open question what computer professionals should learn about performance – and, even more challenging, what is needed to prepare performance professionals – questions that have never been clearly answered and are now even less defined than before.


11:30-11:50 - Performance Analysis Tools for MPI Applications and their Use in Programming Education, Anna-Lena Roth and Tim Süß, Hochschule Fulda, University of Applied Sciences


Abstract: Performance analysis tools are frequently used to support the development of parallel MPI applications. They facilitate, e.g., the detection of errors, bottlenecks, or inefficiencies, but differ substantially in their instrumentation, measurement, and type of feedback. Tools that provide visual feedback are especially helpful for educational purposes: they provide a visual abstraction of program behavior, supporting learners in identifying and understanding performance issues and in writing more efficient code. However, existing professional tools for performance analysis are very complex, and their use in beginner courses can be very demanding. Foremost, their instrumentation and measurement require deep knowledge and take a long time. Immediate and straightforward feedback is essential to motivate learners.

This paper provides an extensive overview of performance analysis tools for parallel MPI applications that are broadly used by experienced developers today. It also gives an overview of existing educational tools for parallel programming with MPI and shows their shortcomings compared to professional tools. Using tools for performance analysis of MPI programs in educational scenarios can promote the understanding of program behavior in large HPC systems and support learning parallel programming. At the same time, the complexity of the programs and the lack of infrastructure in educational institutions are barriers. These aspects will be considered and discussed in detail.


11:50-12:20 - Panel discussion of presenters


14 minutes (2 minutes x 7): each presenter provides a position statement for discussion on the gap between education and practice of performance engineering


16 minutes: open questions and answers among the panel and the audience

WEPPE 2023


Call for Papers


4th Workshop on Education and Practice of Performance Engineering (WEPPE) - co-located with the International Conference on Performance Engineering (ICPE) 2023

14th ACM/SPEC International Conference on Performance Engineering

A Joint Meeting of WOSP/SIPEW sponsored by
      ACM SIGMETRICS and ACM SIGSOFT in Cooperation with SPEC

Coimbra, Portugal

April 15-19, 2023

Scope and Topics

The goal of the Workshop on Education and Practice of Performance Engineering is to bring together university researchers and industry performance engineers to share education and practice experiences. We are interested in creating opportunities to share experiences between researchers who are actively teaching performance engineering and performance engineers who are applying performance engineering techniques in industry.

Topics of interest to the workshop include, but are not limited to Education and Practice of:

  • Performance in the Data Center,  Cloud, Blockchain, IoT, Sensor Networks, and ML/AI 
  • Performance methods in software development
  • Model-driven performance engineering
  • Performance modeling and prediction
  • Performance measurement and experimental analysis
  • Benchmarks (workloads, scenarios, and implementations)
  • Run-time performance and capacity management
  • Performance in cloud, virtualized, and multi-core systems
  • Performance-driven resource and power management
  • Performance of big data systems
  • Performance modeling and evaluation in other domains
  • Performance requirements specification 
  • Performance testing and validation
  • Relationship between performance engineering and architecture
  • All other topics related to performance engineering
  • Education of other quantitative attributes such as reliability, availability, power consumption, safety, security and survivability

Submission Guidelines

A variety of contribution styles for papers are solicited, including: two-page abstracts, presentations, basic and applied research papers offering novel scientific insights, industrial and experience papers reporting on the education and/or practice of performance engineering or on benchmarks in practice, and work-in-progress/vision papers on ongoing but already interesting work. Different acceptance criteria apply based on the expected content of the individual contribution types.

Authors will be requested to self-classify their papers according to topic and contribution style when submitting their papers. We specifically encourage position papers of at most 6 pages and experience reports of at most 10 pages. Submissions need to be uploaded to ICPE's EasyChair installation at:

https://easychair.org/conferences/?conf=weppe2023

At least one author of each accepted paper is required to register for the workshop at the full rate, attend the workshop, and present the paper. Presented papers will be published in the ICPE 2023 conference proceedings, which will be published by ACM and included in the ACM Digital Library. After the conference there will be a call for a special issue of a journal.

Important Dates

  • Abstract submission deadline: Jan 15, 2023
  • Paper submission deadline: Jan 20, 2023
  • Author Notification: Feb 04, 2023
  • Camera-ready version deadline: Feb 22, 2023


General Chairs

Alberto Avritzer, eSulabSolutions

Andre Bondi, Software Performance and Scalability Consulting, LLC

Matteo Camilli, Politecnico di Milano


Web Site Chair

Alberto Avritzer, eSulabSolutions

Program Committee (preliminary)

Cristina Abad, Escuela Superior Politécnica del Litoral

Alberto Avritzer, eSulabSolutions

Steffen Becker, Stuttgart University

Andre Bondi, Software Performance and Scalability Consulting 

David Daly, MongoDB

Vittoria de Nitto Personè, University of Rome Tor Vergata

Andre van Hoorn, University of Stuttgart

Alex Iosup, Vrije Universiteit Amsterdam

Raffaela Mirandola, Politecnico Milano 

Manoj Nambiar, Tata Consultancy Services 

Dorina Petriu, Carleton Univ. 

Evgenia Smirni, William and Mary, Williamsburg, VA

Connie Smith, Performance Engineering Services (PES)

Kishor Trivedi, Duke Univ. 

Catia Trubiani, GSSI L’Aquila

Ana-Lucia Varbanescu, University of Amsterdam

Murray Woodside, Carleton University



Historical WEPPE 2021



Call for Papers

3rd Workshop on Education and Practice of Performance Engineering (WEPPE) - co-located with the International Conference on Performance Engineering (ICPE) 2021

12th ACM/SPEC International Conference on Performance Engineering

A Joint Meeting of WOSP/SIPEW sponsored by
      ACM SIGMETRICS and ACM SIGSOFT in Cooperation with SPEC

Rennes, France

April 19-23, 2021

Scope and Topics

The goal of the Workshop on Education and Practice of Performance Engineering is to bring together university researchers and industry performance engineers to share education and practice experiences. We are interested in creating opportunities to share experiences between researchers who are actively teaching performance engineering and performance engineers who are applying performance engineering techniques in industry.

Topics of interest to the workshop include, but are not limited to Education and Practice of:

  • Performance in the Data Center,  Cloud, Blockchain, IoT, Sensor Networks, and ML/AI 
  • Performance methods in software development
  • Model-driven performance engineering
  • Performance modeling and prediction
  • Performance measurement and experimental analysis
  • Benchmarks (workloads, scenarios, and implementations)
  • Run-time performance and capacity management
  • Performance in cloud, virtualized, and multi-core systems
  • Performance-driven resource and power management
  • Performance of big data systems
  • Performance modeling and evaluation in other domains
  • Performance requirements specification 
  • Performance testing and validation
  • Relationship between performance engineering and architecture
  • All other topics related to performance engineering
  • Education of other quantitative attributes such as reliability, availability, power consumption, safety, security and survivability

Submission Guidelines

A variety of contribution styles for papers are solicited, including: two-page abstracts, presentations, basic and applied research papers offering novel scientific insights, industrial and experience papers reporting on the education and/or practice of performance engineering or on benchmarks in practice, and work-in-progress/vision papers on ongoing but already interesting work. Different acceptance criteria apply based on the expected content of the individual contribution types.

Authors will be requested to self-classify their papers according to topic and contribution style when submitting their papers. We specifically encourage position papers of at most 6 pages and experience reports of at most 10 pages. Submissions need to be uploaded to ICPE's EasyChair installation at:

https://easychair.org/conferences/?conf=weppe2021

At least one author of each accepted paper is required to register for the workshop at the full rate, attend the workshop, and present the paper. Presented papers will be published in the ICPE 2021 conference proceedings, which will be published by ACM and included in the ACM Digital Library. After the conference there will be a call for a special issue of a journal.

Important Dates

  • Abstract submission deadline: Jan 15, 2021
  • Paper submission deadline: Jan 20, 2021
  • Author Notification: Feb 04, 2021
  • Camera-ready version deadline: Feb 22, 2021

General Chairs

Alberto Avritzer, eSulabSolutions

Kishor Trivedi, Duke University

Alexandru Iosup, Vrije Universiteit Amsterdam

Web Site Chair

Alberto Avritzer, eSulabSolutions

Program Committee (preliminary)

Cristina Abad, Escuela Superior Politécnica del Litoral

Alberto Avritzer, eSulabSolutions

Steffen Becker, Stuttgart University

Andre Bondi, Software Performance and Scalability Consulting 

David Daly, MongoDB

Vittoria de Nitto Personè, University of Rome Tor Vergata

Andre van Hoorn, University of Stuttgart

Alex Iosup, Vrije Universiteit Amsterdam

Raffaela Mirandola, Politecnico Milano 

Manoj Nambiar, Tata Consultancy Services 

Dorina Petriu, Carleton Univ. 

Evgenia Smirni, William and Mary, Williamsburg, VA

Connie Smith, Performance Engineering Services (PES)

Kishor Trivedi, Duke Univ. 

Catia Trubiani, GSSI L’Aquila

Ana-Lucia Varbanescu, University of Amsterdam

Murray Woodside, Carleton University



Historical WEPPE 2021 Program

Program

09:00-09:10 - Opening - Minding the Gap between Education and Practice - Alberto Avritzer, Kishor Trivedi, Alexandru Iosup

Abstract: We provide a summary report of WEPPE 2019 as a starting point for the WEPPE 2021 discussion.


09:10-09:25 - The role of analytical models in the engineering and science of computer systems - Y.C. Tay, National University of Singapore

Abstract: This talk reviews the role of analytical performance modeling in engineering computer systems and developing computer science. Specifically: (1) what can an analytical model offer? (2) the role of assumptions; (3) Average Value Approximation (AVA); (4) when bottleneck analysis suffices; (5) reducing the parameter space; (6) how a model can be decomposed into submodels, so as to decouple different forces affecting performance and thus analyze their interaction; (7) the concept of analytic validation; and (8) analysis with an analytical model.


9:25-9:40 - Performance monitoring guidelines - Maria Calzarossa, Luisa Massari, Daniele Tessera, Università di Pavia and Università Cattolica del Sacro Cuore

Abstract: Monitoring, that is, the process of collecting measurements on infrastructures and services, is an important subject of performance engineering. Although monitoring is not a new education topic, nowadays its relevance is rapidly increasing and its application is particularly demanding due to the complex distributed architectures of new and emerging technologies. As a consequence, monitoring has become a "must have" skill for students majoring in computer science and in computing-related fields. In this paper, we present a set of guidelines and recommendations to plan, design, and set up sound monitoring projects. Moreover, we investigate and discuss the main challenges to be faced to build confidence in the entire monitoring process and ensure measurement quality.


9:40-9:55 - Experience with Teaching Performance Measurement and Testing in a Course on Functional Testing - Andre Bondi, Razieh Saremi,  Software Performance and Scalability Consulting LLC and  Stevens Institute of Technology

Abstract: Stevens Institute of Technology offers a graduate course on functional software testing that addresses test planning driven by use cases, the use of software tools, and the derivation of test cases to achieve coverage with minimal effort. The course also contains material on performance testing. Teaching performance testing and measurement in a university setting is challenging because neither the students nor the university typically have access to a system that can be tested and measured. We addressed these challenges (a) by showing the students how resource usage could be measured in a controlled way with the instrumentation that comes with most modern laptops by default, and (b) by having the students use JMeter to measure the response times of existing websites. We describe how students were introduced to the concept of a controlled performance test by playing recordings of the same musical piece with and without video. We make recommendations for avoiding an emergent ethical issue in the future: one should not subject a system one does not own to anything but the most trivial loads. We also describe some successes and pitfalls in this effort.


9:55-10:10 - An Analysis of Distributed Systems Syllabi With a Focus on Performance-Related Topics - Cristina Abad, Alexandru Iosup, Edwin Boza, Eduardo Ortiz-Holguin, Escuela Superior Politécnica del Litoral and Vrije Universiteit Amsterdam

Abstract:  We analyze a dataset of 51 current (2019-2020) Distributed Systems syllabi from top Computer Science programs, focusing on finding the prevalence and context in which topics related to performance are being taught in these courses. We also study the scale of the infrastructure mentioned in DS courses, from small client-server systems to cloud-scale, peer-to-peer, global-scale systems. We make eight main findings, covering goals such as performance, and scalability and its variant elasticity; activities such as performance benchmarking and monitoring; eight selected performance-enhancing techniques (replication, caching, sharding, load balancing, scheduling, streaming, migrating, and offloading); and control issues such as trade-offs that include performance and performance variability.


10:10-10:25  - A New Course on Systems Benchmarking - For Scientists and Engineers, Samuel Kounev - University of Würzburg

Abstract: The talk will present an overview of a new course focused on systems benchmarking, based on experiences gained over the past 15 years of teaching a regular graduate course on performance engineering of computing systems. The latter has been taught at four different European universities since 2006, including the University of Cambridge, the Technical University of Catalonia, the Karlsruhe Institute of Technology, and the University of Würzburg. The conception, design, and development of benchmarks requires a thorough understanding of benchmarking fundamentals beyond understanding of the system under test, including statistics, measurement methodologies, metrics, and relevant workload characteristics. The course addresses these issues in depth; it covers how to determine relevant system characteristics to measure, how to measure these characteristics, and how to aggregate the measurement results in a metric. Further, the aggregation of metrics into scoring systems, as well as the design of workloads, including workload characterization and modeling, are additional challenging topics that are covered. Finally, modern benchmarks and their application in industry and research will be discussed. Overall, the talk will provide an overview of the relevant topics and the broad range of materials collected over the past years, as well as the extensive experience gained from teaching these topics at several leading European universities.


10:25-10:40 Performance Engineering and Database Development at MongoDB - David Daly, MongoDB

Abstract: Performance and the related properties of stability and resilience are essential to MongoDB. We have invested heavily in these areas: involving all development engineers in aspects of performance, building a team of specialized performance engineers to understand issues that do not fit neatly within the scope of individual development teams, and dedicating multiple teams to develop and support tools for performance testing and analysis. In this talk we discuss those efforts, key skills required in those efforts, and the topics we would encourage for increased coverage in CS curricula.


10:40-10:55 - Software Performance Engineering Education: What Topics Should be Covered? Connie U. Smith, Performance Engineering Service

Abstract: This presentation will consider elements of Software Performance Engineering (SPE) and how they have evolved. It will address both skills needed by practitioners and areas of research. Which topics should be covered going forward? Are they unique to SPE education? How should SPE education be integrated with other specialties in Computer Science and Engineering?

10:55-11:25 - Discussion Q/A

11:25-11:45 - Summary of Workshop

Historical WEPPE 2019 Program

Program

Monday, April 8, 2019 - co-located with ICPE’19

Keynote Speech: 

09:00-09:15 - Opening and Introduction of Kishor Trivedi - Industrial Experience, Alberto Avritzer

09:15-10:15 - Keynote: Performance Engineering Education: A Viewpoint, Kishor Trivedi - 

10:15-10:30 - Open discussion about education and practice, with Kishor Trivedi (open questions)

Break 10:30-11:00

(Morning session moderator: Alberto Avritzer)

11:00-11:30 - Performance Engineering Roles in Industry - Challenges and Knowledge/Skills/Experience Required to Meet Them, presented by Manoj Nambiar, Computing Systems, TCS Research, Mumbai, Maharashtra, India

11:30-12:00 - Practices in model component reuse for efficient dependability analysis, presented by Fumio Machida, University of Tsukuba, Japan

12:00-12:30 - “What did I learn in Performance Analysis last year?”: Teaching Queuing Theory for Long-term Retention, presented by Varsha Apte, Computer Science and Engineering Department, Indian Institute of Technology Bombay, Powai, Mumbai, India

12:30-13:00 - Lessons from Teaching Analytical Performance Modeling, presented by Y.C. Tay, National University of Singapore, Singapore

Lunch 13:00-14:30

14:30-15:30 - Panel of Presenters - Moderator - Alberto Avritzer (5-7 questions)

Topic: Mind the Gap (Between Education and Practice)

Alberto introduces the panelists (1-minute statement) and explains the rules of engagement

Alberto Avritzer, Kishor Trivedi, Y.C. Tay, Varsha Apte, Fumio Machida, Manoj Nambiar

Break 15:30-16:00

(Hackathon Moderator: Varsha Apte)

16:00-17:30: Hackathon presentations and best submissions selections.

Copyright © 2026 eSulabSolutions - All Rights Reserved.
