Systematic Testing of Model-Based Code Generators
Abstract—In contrast to conventional compilers for imperative programming languages such as C or ADA, no established methods for safeguarding the artifacts generated by model-based code generators exist, despite progress in the field of formal verification. Instead, several test approaches dominate engineering practice. This paper describes a general and tool-independent test architecture for code generators used in model-based development. We evaluate the effectiveness of our test approach by testing optimizations performed by the TargetLink code generator, a widely accepted and complex development tool used in automotive model-based development.
1 INTRODUCTION
THE way automotive embedded software is developed has changed. Executable models are nowadays used at all stages of development, from the first design down to implementation (model-based development). Models are designed with popular graphical modeling languages, such as Simulink/Stateflow from The MathWorks [1]. New approaches allow the automatic generation of efficient controller code directly from the Simulink and Stateflow models via so-called code generators, such as TargetLink by dSPACE [2] or the Real-Time Workshop Embedded Coder (RTW-EC) by The MathWorks [3]. A code generator is essentially a compiler that translates source programs represented in a graphical modeling language into an imperative programming language such as C or ADA. Code generators reduce the effort of software implementation considerably. Also, the level of quality gained by early quality assurance at the model level can lead to high-quality code, provided that the code generator works correctly. Due to these characteristics, there is a strong industrial demand for code generators.

Model-based code generators differ from traditional compilers in several respects. 1) Both the target language and the source language are executable.1 Therefore, the execution behavior of the generated code can be directly compared to the simulation behavior of the model. 2) The semantics of the modeling language often is not explicitly defined. The semantics may depend on layout information (e.g., position of states) as well as on internal model settings (e.g., block parameters, handling of data types). Consequently, the semantics is embodied in the interpretation algorithms of the simulator [4]. 3) In particular, generators for data-driven languages as defined by Simulink constitute a new kind of development tool. Code generators cannot simply perform a stepwise transformation from the hierarchical structure of the model into an abstract syntax tree of the target language. On the contrary, they must analyze data dependencies to derive an appropriate sequence of computation, which then forms the spine of the generated code.

At present, model-based code generators are not as mature as established C or ADA compilers. The technological risk of a code generator is high because such tools 1) are used by a relatively small group of developers and 2) face a high rate of technological innovation, causing new versions to appear in short cycles. Therefore, a formal proof of code generator correctness is in practice infeasible. Hence, the productivity improvements achievable through the use of model-based code generation tools cannot be fully exploited: The generated code must still be checked with the same expensive effort as manually written code, even though intense quality assurance measures have already been applied at the model level.

This paper describes a general and practical testing approach for model-based code generators. The approach makes heavy use of the fact that both the input and the output of the code generation are executable. The objectives of the approach are threefold: 1) Systematic derivation of test cases must enforce confidence in the test suite such that it can serve to validate the code generator. 2) Test cases must be generated automatically to cover the high variability of models. 3) Test suites must be executed and evaluated automatically to cope with the fast release cycles of code generators. We evaluate the effectiveness of our test approach by validating the optimizations performed by the TargetLink code generator.

1. In some special cases, this can be achieved for traditional compilers as well, e.g., in the case that an interpreter for the source language is available. In this case, both the program and the target code can be executed. However, existing testing approaches for compilers rarely use these potentials.
The remainder of this paper is structured as follows: Section 2 introduces model-based code generation. Section 3 describes code generator optimizations. The theoretical foundations that underpin a systematic code generator test are outlined in Section 4. Section 5 describes the systematic code generator test approach by means of an example. Section 6 presents the test results obtained from three case studies. Section 7 discusses the results and limitations, and it concludes the paper by summarizing its contributions and suggesting future research directions.
2 MODEL-BASED CODE GENERATION

In model-based development, the implementation of a control algorithm is developed by means of stepwise refinement of models. A so-called physical model is derived from the functional requirements specification of the software component (Fig. 1, upper left). The physical model captures the control algorithm and describes the behavior of the control function dependent on (continuous) input signals and (internal or external) events. The physical model typically uses floating-point (FLP) arithmetic and is used to validate the functional behavior of the model with regard to the requirements stated in the requirements specification.

In the field of motor vehicle engineering, embedded systems are termed electronic control units (ECUs). The limited hardware resources of the ECU require a (high-level) programming language with a small overhead (e.g., limited or no use of abstraction) and the efficient usage of system resources. Therefore, C is the language preference for embedded software development. For reasons of economy, the microprocessors used in an ECU are preferably 8, 16, or 32-bit fixed-point processors. For the reasons stated above, the physical model cannot serve directly as a basis for deriving production code for the ECU. Therefore, the physical model has to be manually refined by implementation experts; for example, function parts are allocated to different tasks and enhanced with the necessary implementation details. Furthermore, the FLP arithmetic used in the physical model is adapted to the fixed-point (FXP) arithmetic of the embedded target processor (see [5] for details). The fixed-point data types are augmented with appropriate scaling information [6] in order to keep the precision error of FXP numbers as low as possible.2 The resulting refinement is the implementation model. It contains all information that is needed for code generation and enables the creation of efficient C code by the code generator.

2. The numerical errors incurred by this imprecision are called quantization errors.
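To make the scaling mechanism concrete, the following minimal C sketch illustrates fixed-point scaling and the resulting quantization error. The 16-bit type and the resolution (LSB) of 0.001 are illustrative assumptions and do not reflect the actual representation chosen by a particular code generator.

    #include <stdio.h>
    #include <stdint.h>

    /* Assumed scaling: one least significant bit (LSB) corresponds to
       0.001, so an int16_t covers the value range [-32.768, 32.767]. */
    #define LSB 0.001f

    static int16_t flp_to_fxp(float v)
    {
        /* round to nearest so the quantization error stays below LSB/2 */
        return (int16_t)(v / LSB + (v >= 0.0f ? 0.5f : -0.5f));
    }

    static float fxp_to_flp(int16_t q)
    {
        return (float)q * LSB;
    }

    int main(void)
    {
        float v = 1.2345f;          /* physical (FLP) value       */
        int16_t q = flp_to_fxp(v);  /* stored FXP value: 1235     */
        printf("error: %g\n", v - fxp_to_flp(q)); /* <= LSB/2     */
        return 0;
    }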
Depending on the development stage and purpose, the code is generated for the development computer (host), in most cases a standard PC (Fig. 1, right). In that case, a classical compiler/linker combination is used for the translation of the generated code into an executable. For the target hardware—typically an evaluation board similar to the ECU—a so-called cross-compiler is required. Here, a linker and loader build and load the binary code onto the embedded device. The tool chain established by the modeling tool (editor and simulator), the tools for model-to-code translation (e.g., code generator, (cross-)compiler, linker, loader), and, finally, the target hardware itself comprise the code generation tool-chain (Fig. 1).

Model-based code generation is one of the main advantages of model-based development. The use of a code generator leads to significant productivity improvements in the software implementation phase. Individual studies have shown a reduction in software development time by up to 20 percent through code generation [7]. If the manual verification process at the code level can also be reduced, savings of up to 50 percent are reported. This conforms to internal information provided by other users. Summing up, productivity can increase by up to 50 percent compared to traditional manual coding.

3 CODE GENERATOR OPTIMIZATIONS

Embedded systems for which code is generated often have limited resources. Therefore, optimization techniques must be applied whenever possible to generate efficient code with
4.2 Scope of Code Generator Testing

Validating all translation functions individually is obviously not sufficient for a "complete" code generator test. A complete code generator test should take more test goals into account. For example, combinations of translation rules, robustness, treatment of arithmetical exceptions, or retest of known bugs from previous releases [52] are also important test goals. Within the general scope of our approach, each test goal for code generators is treated in a specific test module. All test modules contribute to a comprehensive code generator test suite. This paper focuses on the module for testing optimization rules since optimizations are the most error-prone translations within a code generator. But, even though we are focusing on the translation of code generation optimizations, the techniques shown are applicable to other types of translations as well.

4.3 Code Generator Test Cases

At a first level, code generator testing is successful if invalid test models are rejected and valid test models are translated into "functionally equivalent" code. Closer inspection shows that the assessment of functional equivalence requires a second level of consideration. The behavior of valid models as well as of the generated code must be compared for pass/fail determination. For that purpose, the test model and the generated code must be executed with the same set of test vectors to compare the behavior of model and code (see Fig. 3, left). Therefore, a valid test case for a model-based code generator in general consists of the test model (first-order test case) as well as a set of corresponding test vectors for this model (second-order test cases). Since the test model can possess inner states, it is not sufficient to execute it with a single test case (e.g., constant values). Rather, second-order test cases must be time-dependent test vectors (test vector i(t); see Fig. 3, left). The corresponding test results of the model and code execution (test output o(t); see Fig. 3, right) are again timed signal vectors.

4.4 Dynamic Code Generator Testing and Functional Equivalence

The validity of the translation process, i.e., whether or not the semantics of the model has been preserved, is determined by so-called back-to-back testing [13]. In back-to-back testing, the test outputs of the model and the test outputs of the code resulting from their execution with identical test vectors i(t) are compared. If the model and the code display the same test outputs, they are considered to be correct with respect to the set of test vectors executed. In the following, the simulation of the model on the host PC is termed Model-in-the-Loop simulation (MIL), the execution of the code on the host PC is termed Software-in-the-Loop simulation (SIL), and the execution of the code on the target processor is termed Processor-in-the-Loop simulation (PIL). The time-dependent test outputs of MIL, SIL, and PIL are o_MIL(t), o_SIL(t), and o_PIL(t) (see Fig. 3, right). All three test outputs are pairwise compared by means of back-to-back testing.

Note that all three development artifacts, that is, the model on the host PC, the generated code on the host PC, and the generated code on the target processor, are considered for correct model-to-code translation.

In general, a valid translation requires that the execution of the generated target code exhibits, for any given set of input data, the same observable effects as the execution of the source program (see [14]). Usually, this is interpreted as a request for identical input/output behavior. However, in connection with model-based code generators, this demand is too strong. Even in the case of a correct translation of a model into code, one cannot expect identical behavior. Hence, traditional notions of correctness as they are used for compilers do not apply. Rather, the definition of correctness has to be based on a notion of sufficiently similar behavior.

The signal comparison algorithm (Fig. 3, right) must be able to tolerate differences between the three timed series representing the test outputs o_MIL(t), o_SIL(t), and o_PIL(t). Quantization effects can lead to differences between o_MIL(t) for physical models and o_SIL(t)/o_PIL(t) in the value domain. Quantization effects occurring in the control flow can also induce differences in the value domain. Furthermore, o_SIL(t) and o_PIL(t) can differ due to the use of different compilers.
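To illustrate the value-domain part of such a comparison, the following C sketch checks two sampled output signals (e.g., o_MIL(t) against o_SIL(t)) pointwise against an absolute tolerance. It is a deliberately simplified, hypothetical routine; the evaluation method actually required must additionally handle deviations in the time domain (see Section 5).

    #include <stddef.h>
    #include <math.h>

    /* Pairwise back-to-back comparison of two sampled test outputs.
       Returns 1 if all samples agree within the absolute tolerance eps,
       0 otherwise. Simplification: only value-domain deviations (e.g.,
       quantization effects) are tolerated here. */
    static int signals_equivalent(const double *o_a, const double *o_b,
                                  size_t n, double eps)
    {
        for (size_t i = 0; i < n; i++) {
            if (fabs(o_a[i] - o_b[i]) > eps) {
                return 0;  /* deviation exceeds the tolerated difference */
            }
        }
        return 1;
    }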
4.7 State-of-the-Art and Related Work

Systematic testing approaches are largely unexplored. The few published testing procedures for model-based code generators used in practice can be divided into four categories, which are often applied consecutively or in combination with each other:

Test of Core Capability. With the testing of the core capabilities, individual Simulink and Stateflow language constructs (e.g., so-called "basic blocks" of Simulink, such as the summation block), as well as code patterns which are applied during code generation, are tested rigorously against expected behavior. These blocks and patterns are varied with respect to data types and scaling information and are executed on different target processors. Consequently, it is quite common to have several thousand test cases. The execution and result evaluation are largely automated [22], [23].

Test of Combinations of Core Capabilities. Combinations of individual blocks and frequently used modeling patterns are tested against expected behavior. Here, the main focus is often on the optimizations performed by the code generator. The determination of expected values and test result evaluation are performed manually [22].

Large-Scale Usage of Core Capabilities. Large real-world models are used to check the tools for robustness and correctness. The test results are analyzed in detail by experts [6], [23]. Test models typically comprise up to 1,000 blocks.

Test of Code Generator Configurations. A (semiautomatic) system test checks the installation, configuration, and operation of the code generator on different PC configurations and together with different software versions of the tools involved in the tool-chain (e.g., compilers) [9], [23].

4.7.1 Related Work

A variety of formal verification techniques has been applied to compilers in order to show their correctness. Two prominent groups are compiler verification and translation validation approaches. A comprehensive survey of the published work on both approaches is provided in [24]. Compiler verification focuses on techniques that prove a compiler to be correct on every input program [25]. Since, up to now, no explicit semantics for Simulink and Stateflow is available or published, one of the essential prerequisites for formal verification is missing. Translation validation shows the correct translation of individual programs, but the compiler is not verified as such. Necula [26] and Zuck et al. [27] report on translation validation of optimizing compilers.

Engineering approaches are more focused on testing. Here, much research has been done in compiler testing, i.e., on test case generation techniques for compilers. Automatic test case generation typically generates the programs to be translated as test cases from the grammar of the source language [28], [29], [30], [31], [32], [33], [34], [35] or from a formal grammar model [36]. The test programs are derived systematically by applying all possible productions of the grammar, as originally proposed in [37]. For an evaluation of the different approaches, see [38]. Within the scope of manual test case creation, test cases are manually derived from given language standards. In compiler construction, test suites are used to validate C or ADA compiler implementations to ensure language conformance. Here, test cases are created for each paragraph in the respective language standard. The most prominent among the C validation suites are [39], [40], [41], and [42]. For ADA compilers, validation using a test suite has particularly taken hold in the certification process [18]. A comparison of C and ADA test suites is presented in [43].

Significant drawbacks of both test approaches are that 1) test references cannot be obtained automatically, 2) code generator implementation details (specification, source code, etc.) are not considered for testing, and 3) attention is focused on the correct translation of individual language constructs, which are often not explicitly subject to specific translation functions such as the optimizations. As a consequence, these functions may remain untested to a large extent. Additionally, a grammar-based test approach for model-based code generators would require a visual language grammar (see [44]) which recognizes the graphical layout information of the source models.

In the context of safety-critical software generation, code generator certification is often regarded as a reliable procedure for safeguarding code generators (see [13] for details). This third-party assessment guarantees that the code generator has been developed and checked with respect to generally accepted process or safety standards, such as IEC 61508 [45] and ISO/WD 26262 [46] in the automotive industry and DO-178B [47] in the avionics sector. However, code generator optimizations are not currently subject to code generator certification since the behavior of the optimizations and their combinations is usually not completely specified.

5 A SYSTEMATIC CODE GENERATOR TEST APPROACH

In the following, we present our test approach for model-based code generators. Even though this approach is general and tool-independent, we have to use a concrete model-based code generator for the evaluation of our approach. For this purpose, we chose TargetLink by dSPACE [2]. TargetLink is a widely accepted and complex development tool frequently used in automotive model-based development. TargetLink generates C code from the graphical modeling languages Simulink and Stateflow by The MathWorks [1]. But this does not mean that our approach is restricted to model-based code generators translating Simulink and Stateflow models into C code. The approach can be adapted to all model-based code generators, provided that both the source language and the target language are executable.

It is worth noting that, for this work, dSPACE provided us with an informal and partial specification of specific optimizations performed by TargetLink. The source code of the tool was not available; however, applied optimizations are traced by the code generator.

First, we would like to identify from the discussion of the previous sections the four core objectives for model-based
code generator testing. Note that these objectives must be met in order to come up with a systematic code generator test approach:

1. Test Model Generation. A systematic strategy for generating appropriate first-order test cases is required. In order to automate this task, we developed the Test Model Generator for Simulink and Stateflow (ModeSSa, see Section 5.2). For the model-based code generator under consideration, TargetLink by dSPACE, these test cases are executable Simulink and Stateflow test models which are grouped into test modules. A test module is a sample of valid and invalid test models that serve for the systematic test of a specific optimization rule. The test models for a particular optimization rule are based on its specification and must cover the functionality and all possible application conditions of the optimization rule under test. Note that such a specification-based gray-box approach requires an (at least partial) specification of the optimization rule under test and must recognize graphical layout dependencies.

2. Test Vector Generation. The second-order test cases must be designed such that all possible simulation traces of the test model and all possible execution traces of the generated code are taken into account for testing. This can be achieved by applying a structural or white-box testing approach both on the model and on the code. The set of second-order test cases is required to determine whether or not the generated code really exhibits the dynamic behavior of the test model under all possible application conditions. Trace equivalence must be considered because the control flow of the model and the generated code may differ: Optimization rules may omit or merge branches of the model, or the code generator can produce additional code, as discussed in Section 3. In short, the objective of test vector generation is to force traces through all model parts (and nothing else) and the full code.

3. Test Result Evaluation. After back-to-back testing a model and the corresponding code, the test results need to be evaluated automatically. As discussed in Section 4.4, identical test outputs from the model and the generated code cannot be expected. As a consequence, sophisticated test result evaluation methods that can handle differences in the value as well as in the time domain are required in order to decide whether the test outputs of the generated code on the host PC, o_SIL(t), and the test outputs of the generated code on the target, o_PIL(t), are compatible with the original test outputs of the model, o_MIL(t) (see Section 4.4).

4. Test Environment. Because of the large number of tests (see Section 4.5), an automated test execution environment is indispensable. The test infrastructure should support the following tasks:

a. The generated test models must be translated into code.
b. Test vectors must be handled and applied to the model and the code.
c. Simulation traces of the model and execution traces of the code must be captured.
d. Test results must be evaluated and documented.

The whole test process requires the automated integration of a set of test tools, which must be developed or, if existent, have to be integrated into the test environment.

On the basis of the four objectives stated above, a systematic test approach for model-based code generators can be defined. The main activities and the test architecture are shown in Fig. 4. The individual steps of this approach are described in Sections 5.1 to 5.5, supplemented with a running example. We would like to emphasize again that the following description is just an instance of a general and tool-independent test approach, which meets objectives 1, 2, 3, and 4 and whose foundations were first outlined in [48].

5.1 Formalization of the Optimization Rule

The starting point for determining suitable test cases is an informal, textual specification of an optimization rule for the code generator, provided by the tool supplier. The code generator transformation rule to be tested (e.g., a switch block simplification) is then formalized as a graph transformation rule [49]. This kind of representation provides a clear specification of how patterns of the input graph (the model) are replaced, supplemented, or deleted. Furthermore, the use of graph transformation rules is not unusual as a formal specification technique for compilers and code generators [50].

Example. Fig. 5 shows a simplified variant of the formalized optimization of Simulink switch blocks.

The given graph transformation rule consists of a left-hand side (LHS, i.e., the search pattern) and three alternative right-hand sides (RHS, i.e., the replacement patterns). This is a notational simplification for three rules with identical LHS. Informally, the execution of the rule on a given host graph G (a high-level representation of a corresponding Simulink model) works as follows:

- Find a match for the LHS in G,
- remove all graph elements from G which have an image in the LHS but not in the RHS, resulting in the intermediate graph G', and
- create new graph elements as images for rule elements that appear only in the RHS but not in the LHS, and embed them into G'.

In short, the LHS graph denotes the precondition, while the RHS denotes the postcondition for rule application. Additional application conditions, such as node attributes, are stated below the LHS (e.g., node A must be of kind variable (Var) or constant (Const)). The folding operator denotes that nodes A, B, and C may be identical.
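To make these steps concrete, the following C sketch mimics the rule application for a single match of the switch-block pattern. The data structures and the concrete RHS alternatives (rewiring E directly to A or to C) are illustrative assumptions; Fig. 5 defines the actual rule.

    #include <stddef.h>

    typedef enum { VAR, CONST } Kind;

    typedef struct {
        Kind   kind;    /* application condition: Var or Const */
        double value;   /* only meaningful for CONST nodes     */
    } Port;

    typedef struct {
        Port   a, b, c;    /* data input A, control input B, data input C */
        double threshold;  /* threshold attribute of switch node D        */
    } SwitchMatch;

    /* Apply the simplification to a matched LHS occurrence: returns the
       name of the input port that output E is directly rewired to, or
       NULL if the control input is not statically decidable, in which
       case the switch node is kept. */
    static const char *apply_rule(const SwitchMatch *m)
    {
        if (m->b.kind != CONST)
            return NULL;  /* precondition of the optimizing RHSs not met */
        return (m->b.value >= m->threshold) ? "A" : "C";
    }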
In Fig. 5, the Switch block with its threshold value is represented by the node D (see also Fig. 2), its input ports by the nodes A, B, and C, and the output port by node E. If, for instance, the value of control port B is always greater than or equal to the threshold, the following single line of code will be generated for the example in Fig. 2:

E = A;
5.2 Test Model Generation

The graph transformation rule formally captures the optimization rule and serves as a "blueprint" for determining the test models. Now, the transformation rule's potential input space is considered more closely. The input space is defined as the set of graph instances for which the rule could carry out a graph transformation. This set can, in principle, be infinite. For this reason, we use the Classification-Tree Method [51] as an underlying test design technique in order to systematically partition the (possibly infinite) input space of the graph transformation rule into a finite number of equivalence classes. A method for systematically deriving equivalence classes from a graph transformation rule and for combining these classes is provided in [52]. However, suitable combinations of equivalence classes can be generated automatically to create test cases with the classification-tree editor CTE/XL [53]. The test cases defined in the CTE/XL can be regarded as abstract descriptions of first-order test cases, i.e., test models (see Fig. 6).

Example. Fig. 6 presents the input space partitioning of the graph transformation rule in the form of a classification tree. Each of the three input ports A, B, and C of the switch block (subtree Inputs) can be either a variable (var) or a constant (const) signal source. In addition, two or even all three inputs of the switch block can be supplied by one and the same signal source (subtree Folding). A complete combination of leaves results in 40 test cases. An excerpt is presented in the combination table in the lower part of Fig. 6. Test case 37 (Fig. 7), for instance, represents a switch block with three variable signals, where ports A and C are folded and, therefore, represented by the single input port input1.
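One partitioning that is consistent with the stated count is eight input combinations (each of A, B, and C being var or const) times five folding alternatives; the concrete folding classes used below are an assumption for illustration. The following C sketch enumerates the resulting 40 abstract test cases:

    #include <stdio.h>

    int main(void)
    {
        /* assumed folding alternatives of the subtree Folding */
        const char *fold[] = { "none", "A=B", "A=C", "B=C", "A=B=C" };
        int id = 0;

        for (int a = 0; a < 2; a++)
            for (int b = 0; b < 2; b++)
                for (int c = 0; c < 2; c++)
                    for (int f = 0; f < 5; f++)
                        printf("TC %2d: A=%s B=%s C=%s folding=%s\n", ++id,
                               a ? "const" : "var", b ? "const" : "var",
                               c ? "const" : "var", fold[f]);
        /* prints 2 * 2 * 2 * 5 = 40 abstract test cases */
        return 0;
    }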
It is not unusual to have a few hundred or even a thousand potential test cases. As the process of creating the models manually based on the abstract test case descriptions in CTE/XL is time-consuming and error-prone, this step has been automated. A CTE/XL test case description is transformed into a specific Simulink or Stateflow test model using the test model generator ModeSSa [54] (Model Generator for Simulink and Stateflow). ModeSSa creates a test model for each test case specified in the CTE/XL combination table. In order to do this, an XML representation of the classification tree is created and transformed into an XML representation of a corresponding Simulink or Stateflow model using the graph transformation language GReAT [55]. The transformation phase is followed by the generation of executable test models with Simulink model construction commands and Stateflow API commands. The lower half of Fig. 7 shows the test model for test case TC 37 created with ModeSSa. The entire set of test models generated in this way comprises first-order test cases, which systematically check the functionality of the code generator with regard to the optimization rule under consideration.

5.3 Test Vector Generation

Before the behavior of the code generator can be checked, appropriate second-order test cases are required as an input for the models. In order to simulate all possible simulation traces through a given test model, we apply structural test design techniques at model level. A selection of test vectors is then determined that satisfies a suitable white-box testing (model coverage) criterion. This task can to a large extent be automated by using tools such as Reactis [56].

Often, test vectors which structurally cover the model will also cover major portions of the generated code. In [57], a strong correlation between decision coverage at model level and branch coverage at code level was shown. A second set of structural test cases is, however, required to cope with the structural differences of model and code caused by optimizations and other transformations. After code generation has been carried out, structural test cases are determined at code level. Here, automation can be achieved, e.g., with the evolutionary structural test tool ET [58].

The union of the test vectors created at model level and the test vectors created on the basis of the C code comprises the set of test vectors used for checking the functional equivalence of the test model and the corresponding C code. Depending on the individual test vectors, the merge process can result in one or more test vectors per test model. Typically, the number of test vectors will increase with the size and complexity of the test model.

A single second-order test case 37.1 has been designed in order to achieve full structural decision coverage at model level. Test case 37.1 simulates test model 37 for a period of 0.3 seconds. Input1 is stimulated as follows:

i_1(t)_{37.1} = t, \quad t \in [0, 0.3].

The stimulus for Input2 is:

i_2(t)_{37.1} = \begin{cases} 0, & 0.0 \le t < 0.1 \\ 0.5, & 0.1 \le t < 0.2 \\ 1, & 0.2 \le t \le 0.3. \end{cases}
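As a sketch of how these stimuli can be realized for automated back-to-back execution, the following C fragment samples both signals over the 0.3 s horizon; the 10 ms sample time and the array layout are assumptions for illustration.

    #include <stdio.h>

    #define STEP    0.01  /* assumed sample time: 10 ms         */
    #define SAMPLES 31    /* covers t = 0.0 ... 0.3 s inclusive */

    int main(void)
    {
        double i1[SAMPLES], i2[SAMPLES];

        for (int k = 0; k < SAMPLES; k++) {
            double t = k * STEP;
            i1[k] = t;                      /* ramp stimulus i1(t) = t */
            if (t < 0.1)      i2[k] = 0.0;  /* step stimulus i2(t)     */
            else if (t < 0.2) i2[k] = 0.5;  /* per test case 37.1      */
            else              i2[k] = 1.0;
            printf("t=%.2f  i1=%.2f  i2=%.1f\n", t, i1[k], i2[k]);
        }
        return 0;
    }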
TABLE 2. Metric of the Code Generator Test Suite

TABLE 3. Problems and Errors Found by Systematic Tests
[31] F. Bazzichi and L. Spadafora, "An Automatic Generator for Compiler Testing," IEEE Trans. Software Eng., vol. 8, no. 4, pp. 343-353, 1982.
[32] W. Homer and R. Schooler, "Independent Testing of Compiler Phases Using a Test Case Generator," Software—Practice and Experience, vol. 19, no. 1, pp. 53-62, 1989.
[33] A.S. Boujarwah, K. Saleh, and J. Al-Dallal, "Testing Syntax and Semantic Coverage of Java Language Compilers," Information and Software Technology, vol. 41, pp. 15-28, 1999.
[34] E.G. Sirer and B.N. Bershad, "Using Production Grammars in Software Testing," Proc. Second Conf. Domain Specific Languages, Oct. 1999.
[35] J. Harm and R. Lämmel, "Testing Attribute Grammars," Proc. Third Workshop Attribute Grammars and Their Application (WAGA), 2000.
[36] S.V. Zelenov et al., "Test Generation for Compilers and Other Formal Text Processors," Programming and Computer Software, vol. 29, no. 2, pp. 104-111, 2003.
[37] P. Purdom, "A Sentence Generator for Testing Parsers," BIT, vol. 12, no. 3, pp. 366-375, 1972.
[38] A.S. Boujarwah and K. Saleh, "Compiler Test Case Generation Methods: A Survey and Assessment," Information and Software Technology, vol. 39, pp. 617-625, 1997.
[39] ANSI/ISO FIPS-160 C Validation Suite (ACVS), http://www.peren.com, Mar. 2005.
[40] Plum Hall Validation Suite for ANSI C, http://www.plumhall.com, Mar. 2005.
[41] Associated Compiler Experts (ACE) SuperTest C&C++ Test and Validation Suite, http://www.ace.nl, Mar. 2005.
[42] Nullstone for C, http://www.nullstone.com, Dec. 2005.
[43] M. Tonndorf, "Ada Conformity Assessments: A Model for Other Programming Languages?" ACM SIGAda Ada Letters, vol. 19, no. 3, pp. 89-99, 1999.
[44] K. Marriott and B. Meyer, "On the Classification of Visual Languages by Grammar Hierarchies," J. Visual Languages and Computing, vol. 8, no. 4, pp. 375-402, 1997.
[45] "Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems," IEC 61508, Int'l Electrotechnical Commission, 1999.
[46] "ISO/WD 26262: Road Vehicles—Functional Safety," working draft, Sept. 2005.
[47] "Software Considerations in Airborne Systems and Equipment Certification," RTCA/DO-178B, Requirements and Technical Concepts for Aviation Inc., Dec. 1992.
[48] I. Stürmer and M. Conrad, "Test Suite Design for Code Generation Tools," Proc. 18th Int'l IEEE Conf. Automated Software Eng., pp. 286-290, 2003.
[49] G. Rozenberg, Handbook of Graph Grammars and Computing by Graph Transformations, vol. 1. World Scientific, 1997.
[50] P. Baldan, B. König, and I. Stürmer, "Generating Test Cases for Code Generators by Unfolding Graph Transformation Systems," Proc. Int'l Conf. Graph Transformation (ICGT '04), pp. 194-210, 2004.
[51] M. Grochtmann and K. Grimm, "Classification Trees for Partition Testing," Software Testing, Verification, and Reliability, vol. 3, pp. 63-82, 1993.
[52] I. Stürmer, "Systematic Testing of Code Generation Tools—A Testsuite-Oriented Approach for Safeguarding Model-Based Code Generation," dissertation, Pro Business, 2006.
[53] E. Lehmann and J. Wegener, "Test Case Design by Means of the CTE/XL," Proc. Eighth European Int'l Conf. Software Testing, Analysis, and Rev. (EuroSTAR '00), 2000.
[54] I. Stürmer, P. Pepper, and S. Heck, "The Test Model Generator ModeSSa," Software & System Modeling, submitted for publication.
[55] G. Karsai et al., "On the Use of Graph Transformation in the Formal Specification of Model Interpreters," J. Universal Computer Science, vol. 9, no. 11, pp. 1296-1321, 2003.
[56] Reactis User's Guide, Reactive Systems, Aug. 2002.
[57] A. Baresel et al., "The Interplay between Model Coverage and Code Coverage," Proc. 11th European Int'l Conf. Software Testing, Analysis, and Rev., Dec. 2003.
[58] J. Wegener, H. Stahmer, and A. Baresel, "Evolutionary Test Environment for Automatic Structural Testing," Information and Software Technology, vol. 43, pp. 851-854, 2001.
[59] K. Lamberg et al., "Model-Based Testing of Embedded Automotive Software Using MTest," J. Passenger Cars—Electronic and Electrical Systems, vol. 7, pp. 132-140, July 2005.
[60] P.D. Edwards, "The Use of Automatic Code Generation Tools in the Development of Safety-Related Embedded Systems," Proc. Vehicle Electronic Systems European Conf. and Exhibition, 1999.
[61] C. Kaner, Lessons Learned in Software Testing. Wiley & Sons, 2001.

Ingo Stürmer studied computer science at the University of the Federal Armed Forces in Munich. He worked as a PhD student at DaimlerChrysler Research and Technology and as a researcher at the Fraunhofer Institute for Computer Architecture and Software Technology (FIRST). He is the founder and a principal consultant of Model Engineering Solutions, a consultancy located in Berlin that focuses on model-based development of embedded controller software. He is a member of the ACM (SIGSOFT) and GI, the German society for computer science.

Mirko Conrad received the diploma degree (MSc) in computer science in 1995 and the PhD degree in engineering in 2004 from Technical University Berlin, Germany. His PhD thesis on model-based testing of embedded automotive software was awarded the Hermann Appel Prize in Automotive Electronics. Before joining The MathWorks, where he leads the model safety package team, he worked more than 10 years as project manager and senior research scientist at DaimlerChrysler Research and Technology. His expertise includes model-based design and testing of embedded software, automotive software engineering, code generation, and safety-related applications. Since 2004, he has been a visiting lecturer at the Humboldt-University Berlin. He is a member of the special interest groups on automotive software engineering (GI ASE) and on testing, analysis, and verification of software (GI TAV) of the German Computer Society and a member of the ACM.

Heiko Dörr is with Carmeq, an engineering and consulting subsidiary of Volkswagen. He is a technical manager for an international standardization project for automotive software. Furthermore, he designs and heads an innovation program within the company. Before joining Carmeq, he was manager for specification, design, and implementation at DaimlerChrysler Research and Technology. He developed methods and techniques for tool integration, requirements management, and testing in model-based development of software-based systems. In his work, he was consulting on integrated systems development for various DaimlerChrysler business units. He was the coorganizer of the ESEC 2003 satellite workshop on tool integration in system development and of the workshop on model-based development of automotive systems that was part of Modellierung 2006. Since 2005, he has been an editor of the International Journal on Systems and Software Modeling. He is a member of INCOSE and GI.

Peter Pepper received the PhD degree from the Technical University of Munich and also spent some time as a research fellow at Stanford University. He holds a chair on compiler construction and programming languages at the Technical University of Berlin. His main interests focus on language and compiler design, especially for functional languages, and on the specification and formal development of safety-critical software, particularly in the automotive environment. He is a member of IFIP WG 2.1 and also of the ACM, the IEEE, EASST, and GI.