Hector Thesis

SOFTWARE DEVELOPMENT
METRICS ON A DASHBOARD

Manresa’s School of Engineering – EPSEM
Technical University of Catalonia
2016

VISUALIZATION OF SOFTWARE
DEVELOPMENT METRICS ON A DASHBOARD
Advisor
Sebastia Vila Marta
Klaus Zesar
Faculty Representative
Sebastia Vila Marta
Committee
Bibliographic information:
Hector Miquel Vidal i Dura, 2016, VISUALIZATION OF SOFTWARE DEVELOPMENT
METRICS ON A DASHBOARD, Degree thesis, Manresa’s School of Engineering –
EPSEM, Technical University of Catalonia.
The major results of these metrics will be monitored in an easy-to-read, single-page,
real-time user interface, showing a graphical presentation of the current status
(snapshot) and historical trends of the software production key performance indicators,
so that informed decisions can be made at a glance.
Metrics are not a glamorous topic, but they are a necessary one; they are part of the
overhead required for delivering the product. If we can detect emerging delivery risks
early, then we can deal with them. If we never discover how much they weigh, then we
are simply blindsided and projects can fail. So it is important, first, to measure the
right things and ensure they are on track (to steer work in progress). Secondly,
it is important to measure improvement efforts, because otherwise we know that
we are changing things, and we may feel good about it, but we do not know whether
real improvement has occurred and we cannot quantify it (to support continuous
improvement efforts).
Resum
It is difficult to understand, and even harder to improve, the quality of software
without knowing its development process and its products. Some kind of evaluation
process must be carried out in order to determine the quality of the (software)
products. The research described in this study is an attempt to analyse several
parameters of software development production and display them on a dashboard.
It therefore takes a brief tour through software engineering, quality and software
parameters in the setting of a real software company (Reval).
Abstrakt
It is very difficult to evaluate, let alone improve, the quality of a given piece of
software without knowledge of its development processes or products. With various
measures, parameters such as the quality and functionality of a piece of software can
be calculated well. The research results described in this work are an attempt to
analyse the various measures used in software development and to display them on a
dashboard, in order to obtain a quick overview of the quality, relevance and
interdependencies of the software's metrics.

The results of the metric calculations are to be visualized on a single page, in real
time and in an easily understandable way within an interface, so that both the status
quo and the historical trends of the software production KPIs are visible at a glance
and important decisions can be made quickly and in an informed manner.

Producing metrics about the software may not be a very exciting topic, but it is an
absolutely necessary one, since an understanding of the metrics is decisive for the
operability of the product. If future delivery problems can be detected quickly, they
can be dealt with appropriately. If the magnitude of the risks cannot be assessed, it
is impossible to act proactively and projects can suffer as a result. For this reason
it is important to measure and monitor the right parameters in order to guarantee
quality assurance. In addition, it is of course also important to document the
optimization processes in order to see in which direction they are leading the
products and to evaluate future improvement proposals.
Preface
Dear reader, the document you are reading right now is my final thesis, with which I
conclude four years of study in the ICT Engineering degree program at the Polytechnic
University of Catalonia.
I signed up for this degree out of pure curiosity about the topic and the ambition
to challenge myself and acquire another way of thinking. I came to the
conclusion that there was more to creating a good software product than just writing
its code. This made me realize that in order to increase my skills and
professionalism I had to increase my awareness of the aspects that together form
software engineering as a whole.
While writing this as the last part of my final thesis document, I truly believe
that my skills, passion and appreciation of the software engineering process have
increased. I hope that my final thesis document proves this learning process to you,
the reader.
Working on my thesis research was a true learning experience for me. It has given
me a better understanding of what scientific research actually is. It has especially
given me a lot of respect for the people who dedicate themselves to collecting facts
about a research subject and sharing them with others around the world so that
everyone can learn from them.
Contents

List of Figures
Abbreviations

1 Introduction
  1.1 Motivation
  1.2 Objective
  1.3 Manuscript organization

2 Software Engineering Basics

3 Software Configuration Management

4 Software Design
  4.1 What is Software Design?
  4.2 Goals/Characteristics of Software Design
  4.3 Software Design Terminology
    4.3.1 Component
    4.3.2 Coupling
    4.3.3 Cohesion

5 Software Testing
  5.1 IEEE - Definition of Testing
  5.2 Testing Goals
  5.3 Verification vs. Validation
  5.4 Testing methods
    5.4.1 Static testing
    5.4.2 Dynamic testing
  5.5 Testing levels
    5.5.1 Unit Testing
    5.5.2 Integration Testing
    5.5.3 System Testing
    5.5.4 Operational Acceptance Testing
  5.6 Testing types
    5.6.1 Functional testing vs. non-Functional testing
    5.6.2 Functional testing
    5.6.3 Non-Functional testing
  5.7 Automated testing

6 Software Metrics
  6.1 What is a Metric?
    6.1.1 Metric
  6.2 Purpose of the Metrics
  6.3 Types of Metrics & Common Software Measurements
  6.4 Goal-Question-Metric (GQM) Paradigm

7 Dashboarding
  7.1 Benefits of using a dashboard
  7.2 Dashing

8 Project Overview

9 Architecture
  9.1 Data Sources
  9.2 Data Layer
  9.3 Data Base
  9.4 Business Logic
  9.5 Dashboard - User Interface

10 Systems Overview
  10.1 Sonar Qube
  10.2 JIRA
  10.3 SVN
  10.4 Jenkins
  10.5 Test Report

11 Data Layer

12 Dashboard
  12.1 Architecture
  12.2 Dashboard
  12.3 Anatomy of a widget
  12.4 Job

14 Conclusions

15 Future prospect
List of Figures

3.2 Baseline
3.3 Mainline

List of Tables
Abbreviations

The following list gives the meaning of the abbreviations and acronyms used
throughout the thesis.

GQM: Goal-Question-Metric
1 Introduction
Following the statement “If you can’t measure it, you can’t manage it” – W. Edwards
Deming, companies use metrics and measurement systems to monitor and control the
status of their projects and products. A successful measurement system must be
designed and developed based on company policies and strategies in order to cope with
the overwhelming amount of information generated by software applications, so that it
can support decision making, prediction of events and product tracking.
On the one hand, we have the theory of Software Engineering, which explains the
principal topics of the software production life-cycle. The purpose of this part is
to give an overview of Software Engineering so that the development part can be
understood later on. It will cover software basics, Configuration Management, Design,
Testing and software metrics.
The second part of this dissertation covers the development of software based on the
theory discussed in the first part: a customized application implemented to analyze
the company's development performance. In short, the system fetches data from
different environments (such as JIRA, SVN, Test Results, Jenkins, SonarQube...),
analyzes the KPIs and shows the results in a simple online dashboard. The final users
will be the internal stakeholders of the company, with a focus on middle management,
so that they can follow the software production performance of the company overall or
of individual squads/teams. The implemented system that analyzes the development KPIs
has a Data Sources => Data Layer => Data Base => Business Logic => User Interface
architecture. Multiple programming languages and frameworks have been used for this,
which will be explained further below.
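As a rough illustration of that layered flow, the following Ruby sketch chains the layers together. Every class, method and field name here is hypothetical and only stands in for the real implementation described in later chapters.

```ruby
# Illustrative sketch of the Data Sources => Data Layer => Data Base =>
# Business Logic => User Interface flow; all names are invented.

# Data Layer: fetches raw records from one source system (JIRA, Jenkins, ...).
class DataLayer
  def initialize(source)
    @source = source # a callable standing in for e.g. a REST client
  end

  def fetch
    @source.call
  end
end

# Data Base: an in-memory stand-in for the real database.
class Database
  def initialize
    @rows = []
  end

  def insert(rows)
    @rows.concat(rows)
  end

  def all
    @rows
  end
end

# Business Logic: turns the stored raw records into a KPI.
class BusinessLogic
  def initialize(db)
    @db = db
  end

  # Example KPI: share of issues already resolved.
  def resolved_ratio
    rows = @db.all
    rows.count { |r| r[:resolved] }.fdiv(rows.size)
  end
end

# Wiring the layers together; the User Interface (the dashboard)
# would render the value this produces.
source = -> { [{ id: 1, resolved: true }, { id: 2, resolved: false }] }
db = Database.new
db.insert(DataLayer.new(source).fetch)
puts BusinessLogic.new(db).resolved_ratio # => 0.5
```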
1.1 Motivation
During this last year, on exchange, I took Software Engineering as a subject in which
I had to program, and at the same time it opened my eyes to everything that surrounds
software development: from stakeholders to software processes, from Scrum teams to
black-box testing, and so on. While discovering these basics of software engineering
I was looking for an internship in Austria, and so in February I joined Reval
Holdings Inc., an American SaaS (Software as a Service) company which provides
treasury and risk management on-demand software.
Inside the company, the Quality Assurance department suggested keeping track of
the life-cycle of its product (software) with KPIs and then displaying them on an
online dashboard. That way, all the internal stakeholders could see how the
product performs. Thus, it became the subject of my research thesis and my effort for
the following four months.
1.2 Objective
The goal of this thesis is divided into two parts: firstly, the requirements
that the application has to fulfil, and secondly, the repercussions of the metrics on
the company.
In the production of the application, the targets are the key software goals:
functionality, usability, maintainability, flexibility and efficiency. One of the
main requirements the company wants to focus on is the maintainability of the
application. This is important since the versions, environments and some values
change from time to time.
From the metrics side, the goal is to obtain objective, reproducible and quantifiable
measurements, which may have numerous valuable applications in schedule and
budget planning, cost estimation, quality assurance testing, software debugging,
software performance optimization, and optimal personnel task assignment.
In practice this means that all internal stakeholders have the possibility to track
various parameters in each phase of the development cycle at a glance with a
dashboard, acquiring more transparency about the progress and efforts of each team
and thus finding room for improvement.
1.3 Manuscript organization
The aim of the first part is to give a general overview of Software Engineering and
to provide the reader with a framework for understanding the practical side
of this thesis. The beginning of this paper will therefore cover the very
fundamentals of Software Engineering, Software Management, Design and Testing, and
finally discuss Metrics and Data Visualization. It should give software engineers
another perspective, allowing them to think about the wider implications of their work.
The second part of the thesis is a summary of the development of this project. The
architecture used to develop the application is presented with short descriptions,
and the data sources used to obtain the information are also explained. The thesis
then covers the two main parts of the application: the internal workings of the
application (the Data Layer) and how the user interface (the Dashboard) is built.
Finally, the implementation is explained in more detail.
2 Software Engineering Basics
First of all, a brief introduction to the software engineering area is necessary. It
will cover the definition of SWENG, its motivation and goals, its activities, why a
project fails or succeeds, and finally the four "P's" of Software Development.
So the Software Engineering goal is the creation of software systems that are:
• Reliable
• Efficient
• Maintainable
• Produced economically
2.2 Why Software Fails or Succeeds?
The systematic approach used in software engineering is called the software process.
It is a sequence of activities that leads to the production of a software product.
There are four basic activities that are shared by any software production effort.
These activities are:
• Software specification: where customers and engineers define the software that
is to be produced and the constraints on its operation.
• Over budget.
Figure 2.1: Iron Triangle for Project Management (Cost, Scope and Schedule).
the others, or impact the quality of the project. Software development projects
often fail because the organization sets unrealistic goals for the "Iron Triangle" of
software development. By breaking the Iron Triangle, you often cancel the project
(>15%), deliver late and/or over budget (50%), deliver poor-quality software, or
underdeliver. The solution, depending on the case, would be to vary the scope
(time boxing), vary the schedule or vary the resources. But remember: nine women
can't deliver a baby in one month. [Wik08]
The production of software systems can be extremely complex and involves many
challenges. Systems, particularly large ones, demand the coordination of many people,
called stakeholders, who must be organized into teams or squads and whose main
goal is to build a product that satisfies the defined requirements. Their thorough
effort has to be organized into a well-defined project in order to succeed. A
framework is needed within which the teams carry out the activities to build the
product; this is called the process.

So, effective project management is broken down into four P's: people, product,
project and process. Communication and collaboration are all about people.
The product must address the correct problem. A sound process keeps the project
moving forward efficiently, and a project plan provides a road map to success.
2.3 Software Engineering Activities
Figure 2.2: Four P’s of Software Eng. Activities (People, Product, Project and
Process).
2.3.1 PEOPLE
People are the first and most important element of any successful project, and
identifying how people impact a project is crucial. The following categories of
people are involved in the software process (stakeholders):
• Motivation
The ability to encourage technical people to produce to their
best potential.
• Organization
The ability to mold existing processes (or invent new ones) that
will enable the initial concept to be translated into a final product.
• Ideas or innovation
The ability to encourage people to create and feel creative even
when they must work within bounds established for a particular
software product or application.
2.3.2 PRODUCT
The products of a software development effort consist of much more than the source
and object code (the software to be built). They also include the so-called 'artifacts':
• Source Code
Depending on the product there are two types. Generic products: stand-alone
systems that are marketed and sold to any customer who wishes to buy them.
Customized products: software that is commissioned by a specific customer to
meet their own needs.

Products can also be distinguished by who owns the specification. For generic
products, the specification of what the software should do is owned by the software
developer, and decisions on software changes are made by the developer. For
customized products, the specification is owned by the customer, and the customer
makes the decisions on the software changes that are required.
2.3.3 PROJECT
A software project defines the activities and associated results needed to produce a
software product:
• Planning: Plan, monitor, and control the software project; estimate the cost of
human, hardware and software resources; determine the feasibility of the project.
Planning information is often revisited → Software Project Management Plan.
2.3.4 PROCESS
The waterfall model: This takes the fundamental process activities of specification,
development, validation, and evolution and represents them as separate process
phases such as requirements specification, software design, implementation, testing
and so on.
3 Software Configuration Management
This chapter will help to define Software Configuration Management in simple terms
as the mechanism used to control the evolution of software projects. An understanding
of what we mean by Software Configuration Management (SCM) is crucial, because if we
don't know what we want to do, we have no hope of converging on a good software
development environment. This section will therefore discuss some software
development best practices and introduce the main concepts of the SCM goals, systems
and processes that are used to implement those best practices.

In summary, when multiple people are working on a given project there are different
versions of documents/software for each project, as well as different releases to
customers. SCM is thus needed because it is easy to lose track of which changes and
component versions have been incorporated into each system version.
3.3 SCM Terminology
All SCM processes are important, but this section will focus mainly on the Version
Control (VC) processes. What follows is a brief definition of the most commonly used
terms in VC: Configuration Item (CI), Version, Variant and Revision, Code-line,
Baseline, Mainline and Release, and finally Workspace.
Configuration Item
• Source Code
• Project Specification
• User Documentation
Version
An instance of a configuration item that differs, in some way, from other instances
of that item. Versions always have a unique identifier, often composed of the
configuration item name plus a version number. New versions of a software system are
created as it changes, offering different functionality or being tailored to
particular user requirements.
Variant
Functionally equivalent versions designed for different settings, e.g. different
hardware and software, or different machines/OS.
Code-line
Baseline
Mainline
System Release
For mass market software, it is usually possible to identify two types of release:
• Minor releases, which repair bugs and fix customer problems that have been
reported.
Workspace
An isolated environment where a developer can work (edit, change, compile, test)
without interfering with other developers. Examples: Local directory under version
control, private workspace on the server.
Common Operations:
3. Change history recording: all of the changes made to the code of a system
or component are recorded and listed.
ment of several projects, which share components.
When implementing SCM tools and processes, you must define what practices and
policies to employ to avoid common configuration problems and maximize team
productivity. Many years of practical experience have shown that the following best
practices are essential to successful software development:
4 Software Design
This chapter will focus on another of the main steps of the software development
life-cycle: Software Design. It covers architecture design, class design, user
interface design, algorithm design and finally protocol design. In this part the
concept of Software Design and its characteristics will be explained, and some
concepts that help to better understand the point of this research will be discussed.
Now that the definition and purpose of Software Design are clear, the next step is
to define the goals of this discipline. Every software design plan should cover:
4.3 Software Design Terminology
4.3.1 Component
A component is any piece of software or hardware that has a clear role. A component
can be isolated, allowing the programmer to replace it with a different component
that has equivalent functionality.
4.3.2 Coupling
Tight coupling (generally considered bad): when a group of classes is highly
dependent on one another. This makes modifying parts of the system difficult.
Loose coupling:
• Minimal dependencies of the method on the other parts of the source code
• Minimal dependencies on class members or external classes and their members
• No side effects
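The contrast between tight and loose coupling can be illustrated with a short Ruby sketch. The class names are invented for the example, and dependency injection is just one common way to loosen coupling:

```ruby
# Sketch contrasting tight and loose coupling; all class names are
# hypothetical, chosen only for the illustration.

class FileStorage
  def write(text)
    File.write('report.txt', text)
  end
end

# Tightly coupled: the report builds its own storage, so it cannot be
# tested or reused without touching the real file system.
class TightReport
  def save
    FileStorage.new.write('report body') # hard-wired dependency
  end
end

# Loosely coupled: the storage is injected, so anything that responds
# to #write (a database adapter, an in-memory stub, ...) can be used.
class LooseReport
  def initialize(storage)
    @storage = storage
  end

  def save
    @storage.write('report body')
  end
end

# Lightweight stand-in used instead of the real file system:
class MemoryStorage
  attr_reader :last

  def write(text)
    @last = text
  end
end

storage = MemoryStorage.new
LooseReport.new(storage).save
puts storage.last # => "report body"
```

Replacing `MemoryStorage` with any other `#write`-capable object requires no change to `LooseReport`, which is exactly the "minimal dependencies" property listed above.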
4.3.3 Cohesion

• Unplanned and random cohesion
Might be the result of breaking the program into smaller modules for the sake of
modularization, e.g. Utils- or Helper-Classes.
• Logical cohesion
• Temporal cohesion
Elements of a module are organized such that they are processed at a similar
point in time.
• Procedural cohesion
• Communicational cohesion
• Sequential cohesion
Elements of a module are grouped because the output of one element serves as
input to another.
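Sequential cohesion, the last item above, can be sketched in a few lines of Ruby. The module and its methods are hypothetical, invented for the example:

```ruby
# Sequential cohesion: each method's output is the next method's input,
# which is why the three methods belong together in one module.
module LineMetrics
  module_function

  # Step 1: split raw source text into lines.
  def split_lines(source)
    source.lines.map(&:chomp)
  end

  # Step 2: drop blank and comment-only lines (input: output of step 1).
  def strip_noise(lines)
    lines.reject { |l| l.strip.empty? || l.strip.start_with?('#') }
  end

  # Step 3: count what remains (input: output of step 2).
  def count(lines)
    lines.size
  end
end

src = "# demo\n\nputs 1\nputs 2\n"
puts LineMetrics.count(LineMetrics.strip_noise(LineMetrics.split_lines(src))) # => 2
```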
5 Software Testing
This chapter covers the last phase of the software development life-cycle before
release; it is therefore really important that the software passes this phase. In
this chapter testing will be explained, as well as the differences between
verification and validation. The "defect" will be defined, and multiple testing
practices, levels, types and processes, and finally automated testing, will be
summarized and schematized.
"Defect"
5.2 Testing Goals
1. To demonstrate to the developer and the customer that the software meets its
requirements.
Validation
There are several approaches to software testing: the static vs. dynamic testing
approach and the box approach.
It involves working with the software, giving input values and checking if the output
is as expected by executing specific test cases which can be done manually or with
the use of an automated process.
Treats the software as a "black box", examining functionality without any knowledge
of internal implementation. It includes: equivalence partitioning, boundary value
analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing,
model-based testing, use case testing, exploratory testing and specification-based
testing.
The tester is only aware of what the software is supposed to do, not how it does
it. It is also known as the specification-based or input/output-driven testing
technique, because it views the software as a black box with inputs and outputs.
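Two of the black-box techniques named above, equivalence partitioning and boundary value analysis, can be sketched against a hypothetical discount function (the function and its rule are invented for the demonstration):

```ruby
# Hypothetical function under test: half price for minors and seniors.
def discount(age)
  raise ArgumentError, 'invalid age' if age.negative?
  age < 18 || age >= 65 ? 0.5 : 0.0
end

# Equivalence partitioning: one representative value per class of
# inputs that should all behave the same way.
partitions = { 10 => 0.5, 40 => 0.0, 70 => 0.5 }

# Boundary value analysis: test just around each partition edge,
# where off-by-one defects typically hide.
boundaries = { 17 => 0.5, 18 => 0.0, 64 => 0.0, 65 => 0.5 }

partitions.merge(boundaries).each do |age, expected|
  actual = discount(age)
  puts format('age %2d -> %.1f (expected %.1f)', age, actual, expected)
end
```

Note how the tester never looks inside `discount`; the cases are derived purely from the stated specification, which is the defining property of black-box testing.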
GUI testing is the process of testing a product's graphical user interface to ensure
it meets its written specifications, e.g. checking the alignment of images and
buttons on a web page.
There are generally four recognized levels of tests: unit testing, integration testing,
component interface testing, and system testing. Tests are frequently grouped by
where they are added in the software development process, or by the level of
specificity of the test. The main levels during the development process as defined by the
SWEBOK guide are unit-, integration-, and system testing that are distinguished
by the test target without implying a specific process model. Other test levels are
classified by the testing objective, as stated in [Wik01b].
Integration testing is the phase in which individual software modules are combined
and tested as a group to verify that the integrated system is ready for system
testing. It occurs after unit testing [Wik03].
design of the code or logic. System testing is most often the final test to verify
that the system to be delivered meets the specification and its purpose [Wik04b].
After system testing and the correction of all or most defects, the system will be
delivered to the user or customer for acceptance testing. Acceptance testing is a
test conducted to determine whether the requirements of a specification or contract
are met prior to delivery. Acceptance testing is basically done by the user or
customer, although other stakeholders may be involved as well [Cer13b].
Non-functional testing refers to aspects of the software that may not be related to
a specific function or user action, such as scalability or other performance, behavior
under certain constraints, or security. Testing will determine the breaking point, the
point at which extremes of scalability or performance lead to unstable execution.
Non-functional requirements tend to be those that reflect the quality of the product,
particularly in the context of the suitability perspective of its users.
Smoke Testing

Smoke testing is preliminary testing to reveal simple failures severe enough to
reject a prospective software release; e.g. a smoke test may ask basic questions like
"Does the program run?" or "Does it open a window?". The purpose is to determine the
degree of repair the application needs and to evaluate whether further testing should
be done [Wik13a].
Regression Testing
Regression testing is a type of software testing that seeks to uncover new software
bugs, or regressions, in existing areas of a system after changes have been made to
them. Common methods of regression testing include rerunning previously completed
tests and checking whether program behavior has changed and whether previously fixed
faults have reemerged [Wik02].
Destructive Testing
Recovery Testing
Recovery testing is the activity of testing how well an application is able to recover
from crashes, hardware failures and other similar problems. Example: While an
application is receiving data from a network, unplug the connecting cable, wait and
plug in again [Wik06c].
Compatibility Testing
Software Performance Testing
• Load testing checks that the system can continue to operate under a specific
load, be it large quantities of data or a large number of users.
• Volume testing is a way to test software functions even when certain com-
ponents (for example a file or database) increase radically in size.
Many programming groups are relying more and more on automated testing, especially
groups that use test-driven development. There are many frameworks to write tests in,
and continuous integration software will run tests automatically every time code is
checked into a version control system, according to [Gur16].

While automation cannot reproduce everything that a human can (or the way humans
think), it can be very useful for regression testing. However, it requires a
well-developed suite of test scripts in order to be truly useful.
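A minimal automated test of this kind, the sort a CI server such as Jenkins would rerun on every check-in, could look like the following. The `priority_of` function is a hypothetical example, not code from the thesis project:

```ruby
# An automated regression test using minitest (bundled with Ruby).
require 'minitest/autorun'

# Function under test: map an issue's age in days to a priority label.
def priority_of(age_in_days)
  return 'high'   if age_in_days > 30
  return 'medium' if age_in_days > 7
  'low'
end

class PriorityTest < Minitest::Test
  def test_fresh_issue_is_low
    assert_equal 'low', priority_of(3)
  end

  def test_week_old_boundary_stays_low
    assert_equal 'low', priority_of(7) # regression guard for an off-by-one
  end

  def test_old_issue_is_high
    assert_equal 'high', priority_of(45)
  end
end
```

Because the suite is cheap to rerun, it is re-executed on every commit; a previously fixed off-by-one on the 7-day boundary would be caught immediately if it reemerged.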
System building
The process of compiling the components or units that make up a system and linking
these with other components to create an executable program. System building is
normally automated so that recompilation is minimized. This automation may be
built into the language processing system (as in Java) or may involve software tools
to support system building, as stated by [Som09b].
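The minimized-recompilation idea behind system building can be sketched in a few lines: a step is re-run only when its input is newer than its output, which is the rule make-style tools apply. The "compile" here is just a file copy standing in for a real compiler call:

```ruby
# Toy sketch of minimized rebuilds; not a real build tool.
require 'fileutils'
require 'tmpdir'

def build(input, output)
  # Skip the step when the output already exists and is up to date.
  if File.exist?(output) && File.mtime(output) >= File.mtime(input)
    return false
  end
  FileUtils.cp(input, output) # stand-in for a real compile command
  true
end

Dir.mktmpdir do |dir|
  src = File.join(dir, 'main.src')
  obj = File.join(dir, 'main.obj')
  File.write(src, 'code v1')

  puts build(src, obj)  # first build compiles
  puts build(src, obj)  # nothing changed, step skipped

  sleep 1               # ensure a newer timestamp on the next write
  File.write(src, 'code v2')
  puts build(src, obj)  # source changed, recompiled
end
```

A real system builder applies the same timestamp comparison across a whole dependency graph, so only the components affected by a change are recompiled.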
6 Software Metrics
This section covers the difference between measurements and metrics and the purpose
of metrics, based on the book [Ken07] written by Dave Nicolette.
Before defining a metric it is necessary to first distinguish metrics from
measurements [Nic12].
6.1.1 Metric
• Inform stakeholders.
6.2 Purpose of the Metrics
"When you can measure what you are speaking about and express it in numbers,
you know something about it; but when you cannot measure it, when you cannot
express it in numbers, your knowledge is of a meagre and unsatisfactory kind: it
may be the beginnings of knowledge but you have scarcely in your thoughts
advanced to the stage of Science."
At Reval, metrics are used for two purposes: to help control the direction of work
in progress and to help monitor the effectiveness of process-improvement efforts.
Measurement is done using metrics. Three kinds of things are measured: processes
through process metrics, products through product metrics, and projects through
project metrics.

Process metrics assess the effectiveness and quality of the software processes: the
maturity of the process, the effort required in the process, the effectiveness of
defect removal during development, and so on. Product metrics are measurements of the
work products produced during the different phases of software development. Project
metrics illustrate the project characteristics and their execution.
In software engineering, there are three kinds of entities and attributes to measure
2. Products are any artifacts or documents that result from a process activity.
Products are not restricted to the items that the management is committed
to deliver to the customer. Any artifact or document produced during the
software life cycle can be measured.
6.4 Goal-Question-Metric (GQM) Paradigm
The GQM paradigm is based on the theory that all measurement should be goal-oriented. Each measurement collected is stated in terms of the major goals. Questions are then derived from the goals and help to refine, articulate, and determine if the goals can be achieved. The metrics that are collected are then used to answer the questions in a quantifiable manner.
Conceptual level (Goal) A goal is defined for an object, for a variety of reasons,
with respect to various models of quality, from various points of view and relative
to a particular environment.
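As a sketch, the GQM hierarchy can be written down as a plain data structure; the goal, questions and metric names below are invented for illustration, not taken from the thesis project.

```ruby
# Goal -> questions -> metrics, as a nested Ruby structure.
gqm = {
  goal: "Improve the reliability of the nightly build",
  questions: [
    { question: "How often does the nightly build fail?",
      metrics: ["failed builds per week", "total builds per week"] },
    { question: "How quickly are build failures fixed?",
      metrics: ["mean time to repair a broken build"] }
  ]
}

# Every collected metric can be traced back to the goal via its question.
all_metrics = gqm[:questions].flat_map { |q| q[:metrics] }
```

The point of the structure is traceability: no metric exists without a question, and no question exists without a goal.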
7 Dashboarding
This chapter will focus on the creation of a dashboard for the visualization of the metrics and key performance indicators (KPIs) that were discussed previously. It will cover the definition and purpose of a dashboard and the types of data representation. The framework used for this project will also be discussed.
• Measurement of efficiencies/inefficiencies
7.2 Dashing
• Use pre-made widgets, or custom-made widgets built with SCSS, HTML, and CoffeeScript.
• Widgets harness the power of data bindings to keep things easy and simple.
• Use the API to push data to the dashboards, or make use of a simple Ruby
DSL (Domain-Specific Language) for fetching data.
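As an illustration of the push API mentioned above, a minimal Ruby sketch could build the JSON payload and POST it to a widget endpoint; the widget id "valuation", the port and the auth token are placeholders, not values from this project.

```ruby
require 'json'
require 'net/http'

# Payload for a Number widget; auth_token must match the dashboard's token.
payload = { auth_token: "YOUR_AUTH_TOKEN", current: 65 }.to_json

# POST the payload to http://<host>:<port>/widgets/<widget-id>.
def push_to_widget(payload, host: "localhost", port: 3030, widget: "valuation")
  uri = URI("http://#{host}:#{port}/widgets/#{widget}")
  Net::HTTP.post(uri, payload, "Content-Type" => "application/json")
end

# push_to_widget(payload)  # only works against a running Dashing instance
```

Every widget listening on that id receives the new value, which is what keeps multiple dashboards in sync.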
As explained before, every Dashing project comes with sample widgets and sample
dashboards to explore [Wik12]. The directory is set up as follows:
• Dashboards: One .erb file for each dashboard that contains the layout for
the widgets.
• Jobs: Ruby jobs for fetching data (e.g. for calling third party APIs like
Twitter).
• Public: Static files that should be served. A good place for a favicon or a
custom 404 page.
It comes with multiple predefined sample widgets, such as Alert, Clock, Graph, Comments, iFrame, Image, List, Meter, Number and Text (Figure 7.1 displays some of them). Since all the widgets are fully customizable, this framework is used to show the metrics.
8 Project Overview
The second part of this dissertation focuses on the project I have been developing for four months at Reval Holdings Inc. It starts with a small introduction which gives a global view of the company's workflow. It then covers the project architecture, with short comments on its different parts; the data layer and the user interface are analyzed as well. Finally, each part of the dashboard is presented, followed by the conclusion.
Each major release (6 months) is divided into six iterations. The first four are development only. At the end of the 4th iteration the release is feature complete: the software has all the functionality intended for the final version, but it still requires some improvements and fixes before release. So, in the 5th iteration the code is frozen (usually at the beginning of the system test phase), and the last (6th) iteration is the user beta testing before the release. At the end of the 6th iteration a new major release starts, which again takes 6 months to complete, and so on.
The patch release cycle works the same way. Unlike a major release, each patch takes one month to complete and has four iterations: two for development, plus one each for feature complete and code freeze.
So, during those six months the Reval Product Development department is split into two parts. On one side there is Product Engineering, which works constantly on the major version (currently 16.1.0); on the other side there is the SWAT team, which is in charge of the patch versions (currently 16.0.2).
Besides the SWAT team, there are other teams inside the company, each dedicated to distinct aspects depending on the software modules they develop. They are TS Business, TS Technology, TS UX, Corp UI, Corp Treasury, Corp Payments, Corp Cash, Platform, Documentation, Build Automation and SWAT G.
It is essential to follow all these processes, iterations and teams/squads, and to observe how they perform.
9 Architecture
The architecture of this project is divided into five parts: Data Sources, Data Layer,
Data Base, Business Logic and User Interface. On this project a CentOS machine has
been used to host the Data Layer, Data Base and the Dashing (BL & UI) framework.
The following diagram shows the global architecture used in this project:
At the top of the diagram, the multiple systems that Reval Inc. hosts are displayed. The company stores source code, test reports, task management data, automation processes, etc. there. This is the data source of the project. It includes the following systems: SonarQube, JIRA, SVN, Jenkins and Test Report. Each of them gives useful information about the software production and life-cycle phases.
The Data Layer is one of the most important pieces of the architecture. It is a Java program built as seven modules and fourteen libraries. Every module is independent (except the shared DB connection and JSON parser); the modules are DBConnection, SVNSystem, JiraSystem, SQSystem, JenkinsSystem and TestSystem. Their purpose is to fetch the data from the company systems and store the "raw data" in the database for further calculation in the Business Logic. Finally, it is important to note that each module is executed by a scheduler (crontab) at a different rate. Most of the data comes in an XML or JSON format, so a decoder had to be implemented for this section.
For this project, a simple SQLite database is used. For each source of data a table is created, all with the same structure: ID, NAME, VALUE and TIME. The ID is incremental in all the tables. The NAME differs depending on the source, the information it contains and its type. The VALUE is always an integer that represents a measurement or a metric. Finally, the TIME is a time-stamp of when the VALUE was collected from the system.
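Under these conventions, each per-source table could be created with DDL along the following lines; the table name is illustrative, since the thesis does not reproduce the actual schema:

```sql
-- One table per data source; JIRA shown as an example.
CREATE TABLE JIRA (
  ID    INTEGER PRIMARY KEY AUTOINCREMENT, -- incremental row id
  NAME  TEXT      NOT NULL,                -- encodes version, team, issue type
  VALUE INTEGER   NOT NULL,                -- the measurement or metric
  TIME  TIMESTAMP NOT NULL                 -- when VALUE was collected
);
```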
The following figure shows a series of columns from the JIRA table. As explained before, JIRA is used to keep track of issues and tasks. The first element shown is the NAME, which encodes the information about the VALUE. In this case the value refers to version "16.0.2" of the software, the squad/team is "SWAT G" and the issue type is "New". Next comes the VALUE, which in this case is 0. Finally, the time-stamp: this row was stored in the database at 3:01:05 PM on 30/05/2016. Thanks to these values, historical data for the widgets can be obtained.
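The thesis does not show the exact separator used inside NAME, so the underscore and the field order in this Ruby sketch are assumptions made only for illustration:

```ruby
# Hypothetical decoding of a JIRA-table NAME such as "16.0.2_SWAT G_New".
# Separator and field order are assumed; the thesis only states that NAME
# encodes the version, the team and the issue type.
def decode_name(name)
  version, team, issue_type = name.split("_")
  { version: version, team: team, issue_type: issue_type }
end

record = decode_name("16.0.2_SWAT G_New")
```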
Figure 9.2: Data Base Example.
This part of the architecture is managed by the Dashing framework. As explained before, Dashing has three important elements: dashboards, widgets and jobs. In this case, the jobs are the business logic of this project: they query the necessary data from the DB, process it into one or more metrics, and send them to the dashboard file.
At this point the dashboard is a simple HTML file. It links the Business Logic with the User Interface: a grid full of widgets that show the metrics calculated in the BL.
10 Systems Overview
This last section gives an overview of the multiple Data Sources. All the environments are hosted in the internal private network at the Reval headquarters in Graz (Austria). They are: Sonar Qube, JIRA, SVN, Jenkins and Test Reports. The data from these systems is fetched and stored in the database by the Data Layer.
Each of these systems controls different aspects of the software production. On one
side, there are some that control or analyze the software production (SQ and SVN).
At the same time, there are others that focus on the software testing (Jenkins and
Test Report) and finally there is JIRA which is used for the whole product life-cycle.
10.1 Sonar Qube
It provides fully automated analysis: it integrates with Maven, Ant, Gradle and continuous integration tools (Atlassian Bamboo, Jenkins, Hudson, etc.), and with the Eclipse, Visual Studio and IntelliJ IDEA development environments through the SonarLint plugins. It also integrates with external tools like JIRA, Mantis, LDAP and Fortify, and it can be expanded with plugins. Finally, a great characteristic is that it implements the SQALE methodology to evaluate technical debt [Wik10] [Doc16].
Inside the company, Sonar is the latest of the five systems to be added. The Quality Assurance department is in charge of it, and it is updated once a day (in the morning). It could be updated more often, but this would not make sense since the source code does not change that often. Sonar Qube is a very simple framework: it measures and displays the software development metrics. The most common metric categories found in Sonar Qube are the following:
• Issues: Number of new issues, count of issues with severity (blocker, critical,
major, minor or info), Open issues, etc.
• Quality Gates.
To obtain the data from Sonar Qube, the REST API it provides will be used: it gives easy access to the metrics of SQ, and the Data Layer uses it to fetch the data.
10.2 JIRA
JIRA is an application that can be used to track all issues of a project. JIRA makes
the life cycle of issues transparent, and allows for a lot of collaboration. In JIRA,
issues can be organized, work assigned, and team activity can be followed through
a workflow. One of the benefits of JIRA is that it can be customized to reflect
the project elements, the type of issues, and the fields and screens available in each
workflow.
A JIRA workflow is a set of statuses and transitions that an issue moves through
during its life-cycle and typically represents processes within your organization.
10.3 SVN
The development teams are constantly working on this platform, which holds all versions of the software. Thus, together with JIRA and Sonar Qube, it provides substantial information to process afterwards.
10.4 Jenkins
It is used by the software testers to build and test the project/product continuously, in order to help the developers integrate their changes as quickly as possible and obtain fresh builds. Jenkins is installed on a server where the central build takes place. The following flowchart shows the basic work-flow of how Jenkins works. For this system, the relevant information is fetched from the REST API in a JSON format. Afterwards the data is parsed into the main database, as shown in [tut13].
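As a sketch of that decoding step, the JSON below mimics the shape of a Jenkins /api/json reply, with invented build numbers and results:

```ruby
require 'json'

# Canned reply standing in for what Jenkins' REST API would return.
reply = '{"builds":[{"number":101,"result":"SUCCESS"},
                    {"number":100,"result":"FAILURE"},
                    {"number":99,"result":"SUCCESS"}]}'

# Decode the reply and count the successful builds.
builds = JSON.parse(reply)["builds"]
successes = builds.count { |b| b["result"] == "SUCCESS" }
```

In the real Data Layer the decoded values would then be written to the database table for Jenkins.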
10.5 Test Report
This is the internal testing interface developed by the Quality Assurance department.
It displays all the results of the automation tests, all of which are stored in a data
base, thus the purpose is only for research. The tests are for the current major
version (current 16.1.0) and the current patch (16.0.2).
Every week around 40,000 different tests are run in many combinations and environments. The variables of these environments are the operating system (Windows, Linux or Mac OS), the browser (Chrome, Firefox or IE) and the database (MySQL or Oracle). The combinations of all these environments are run for more than 4,000 test cases.
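The combinatorial growth behind these numbers can be sketched quickly; the lists below only enumerate the dimensions named above, while the composition of the real test matrix is of course decided by the automation team:

```ruby
# Environment dimensions, as named in the text above.
os       = ["Windows", "Linux", "Mac OS"]
browsers = ["Chrome", "Firefox", "IE"]
dbs      = ["MySQL", "Oracle"]

# Cartesian product: every (OS, browser, database) triple.
environments = os.product(browsers, dbs)
combinations = environments.length   # 3 * 3 * 2 = 18 environments
```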
As explained in the theory (Software Testing chapter), this is where the multiple test types (suites) are applied. One of them is a basic suite for testing new, edited or deleted code from the development department. It is also used as the functional suite (black-box testing that bases its test cases on the specifications of the software component under test), so it checks the functionality of the application. Most of the test cases (click logs) in the test runs are executed by a robot which follows the test cases created in advance by the automation team.
1. The Execution Round: for each iteration there are four rounds. Round 0 (1st week), Round 1 (2nd week) and Round 2 (3rd week) use current builds; Round 3 (4th week) is the final round, where the code is frozen.
2. Test Run: the test ID, which represents the combination of all attributes. Every test run differs from the others because it covers a specific version, in different scenarios and host servers, for a particular build.
5. Current Status:
7. Initial F/E, JIRAs, Rerun: the amount of failed tests. Usually some tests fail at the beginning of the testing. When a test fails it is tested again (rerun); if it still fails, a JIRA task is created for revision, so someone will check it manually. If the application fails, the JIRA task is sent to RQA (to define); if the robot fails, a JIRA is created for manual revision.
10. Config / Run Name: the environment where the test run is being tested.
11 Data Layer
The term Data Layer denotes a data structure which ideally fetches all data from a data source so that it can be processed and passed from your website (or other digital context) to other linked applications, or stored in a database. The term is used by Google Tag Manager in a variety of contexts and has been adapted for this project's architecture [Aha14].
The Data Layer is the core of this application. It is in charge of collecting a large amount of data from the data sources, polishing it, and finally gathering it in a database.
The present Data Layer is divided into five individual main modules (one per system), each executed by a scheduler (crontab), plus two shared modules (DB communication and the JSON & XML parser). Each main module collects the data for a different system, as the following table shows:
As said in the Data Sources explanation (see Architecture), most of the Data Sources expose their data through a REST API. This is very useful, since REST uses HTTP to create, read, update and delete, making the information easier to access. With REST it is possible to create URL queries; the URL is sent to the server using a simple GET request. The HTTP reply then contains the raw result data, usually in an XML or JSON format. In the last step, the JSONParser module is in charge of breaking it down, as reported by [Eas10].
2. HTTP proxy authentication certificate. This ensures that all data passed remains private and integral.
4. REST API URL generation.
Crontab
A crontab is a simple text file with a list of commands meant to be run at specified times. It is edited with a command-line utility. These commands (and their run times) are then controlled by the cron daemon, which executes them in the system background, according to the Ubuntu documentation [Doc15].
In our case, the crontab file calls the scripts that execute each module of the Data Layer:
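The company's actual file is not reproduced here; a sketch with illustrative script names and scheduling rates might look like this:

```
# m  h  dom mon dow  command
*/15 *  *   *   *    /opt/datalayer/run_jira.sh       # JIRA: every 15 minutes
*/30 *  *   *   *    /opt/datalayer/run_svn.sh        # SVN: every 30 minutes
*/30 *  *   *   *    /opt/datalayer/run_testreport.sh # Test Report
0    6  *   *   *    /opt/datalayer/run_sonarqube.sh  # Sonar Qube: once a day
```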
The execution period of every file depends on the update rate of the Data Source system. For instance, Test Report, Jenkins, SVN and JIRA have more live and fresh data, which changes often; in these cases the data fetching runs every 15 or 30 minutes. However, the Sonar Qube system is updated only once a day, so crontab executes its fetching once a day, which maximizes efficiency.
XML file
Finally, one important characteristic of the Data Layer worth noting is that almost every static variable is set in an XML file. In case any change is necessary, this makes the program more maintainable and helps accomplish one of the company's requirements. For instance, it is very useful when a Major or a Patch version changes to the next one.
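A minimal sketch of such a configuration file could look like the following; the element names are invented, since the thesis does not reproduce the real file:

```xml
<!-- Illustrative only: centralizes values that change between releases -->
<configuration>
  <majorVersion>16.1.0</majorVersion>
  <patchVersion>16.0.2</patchVersion>
  <fetchRateMinutes>15</fetchRateMinutes>
</configuration>
```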
12 Dashboard
This chapter is limited to the explanation of the framework used for the metric representation. In this section, the architecture that Dashing uses is presented, and then the configuration of a single widget is shown.
12.1 Architecture
12.2 Dashboard
Each widget is represented by a div element needing data-id and data-view at-
tributes. The wrapping <li> tags are used for layout.
Listing 12.1: One .erb file for each dashboard that contains the layout for the widgets.
<% content_for(:title) { "My super sweet dashboard" } %>
<div class="gridster">
  <ul>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      <div data-id="valuation" data-view="Number"
           data-title="Current Valuation" data-prefix="$"></div>
    </li>
  </ul>
</div>
data-id: Sets the widget ID which is used when pushing data to the widget. Two widgets can have the same widget ID, allowing the same widget to appear on multiple dashboards; when data is pushed to that ID, each instance is updated. Using different data- attributes allows customizing them: any arbitrary attribute can be used, and each one will be available within the widget logic.
3. A coffeescript file which allows you to handle incoming data & functionality.
12.3 Anatomy of a widget
<h1 class="title" data-bind="title"></h1>
<h2 class="value" data-bind="current | shortenedNumber | prepend prefix"></h2>
<p class="more-info" data-bind="moreinfo | raw"></p>
<p class="updated-at" data-bind="updatedAtMessage"></p>
Widgets use Batman bindings in order to update their contents: whenever the data changes, the DOM automatically reflects the change.
Note the pipe '|' characters in some of the data-bind attributes above. These are Batman filters, which permit easily formatting the representation of the data.
Listing 12.3: Widget’s Coffeescript
class Dashing.Number extends Dashing.Widget
  ready: ->
    # This is fired when the widget is done being rendered
12.4 Job
The jobs provide the data to the widgets. To specify which widget the data is for, it is necessary to assign the widget ID; in this case, "karma".
Dashing uses rufus-scheduler to schedule jobs. This job will run every minute and will send a random number to ALL widgets that have data-id set to "karma".
Jobs are the place for tasks such as fetching metrics from a database or calling a third-party API. Since the data fetching happens in only one place, all instances of a widget stay in sync. Server-Sent Events are used to stream the data to the dashboards.
Listing 12.4: Job example
# :first_in sets how long it takes before the job is first run.
# In this case, it is run immediately.
SCHEDULER.every '1m', :first_in => 0 do |job|
  send_event('karma', { current: rand(1000) })
end
13 Final Implementation: Metrics - Widgets
Description: (process metric) This widget gets the New, In Progress and Closed tasks from JIRA and displays them, first in a global overview for all teams and then, every 15 seconds, for each team. It aims to show at a glance the amount of tasks and their status, so managers, team leaders and team members can follow the current situation of their work: the amount of tasks they have done, what they are doing and what they will have to do. This widget focuses only on the current Major version.
13.2 Code Changes
Description: In this case, this is a historical line chart that shows the code changes per team over the last 15 days. Here the internal stakeholders can analyse which teams are working more on the source code and which are focusing on other tasks. This information can also be combined with the JUnit widget to analyse whether the amount of changes influences build-test success; sometimes many changes are needed to solve a problem, and this widget makes that trackable. Like the JIRA Progress widget, it first shows the global overview, to compare teams, and then every 15 seconds the amount of changes per team plus that team's total. This widget also covers only the current Major version of the software.
Description: This widget shows the overall test result of one build. First, it gets the number of tests executed for one test run. Then, it gets the number of tests that failed for that run, given by its testrunid (the Initial F/E number on the overview page). Finally, it gets all the outstanding errors (not reviewed yet, not rerun yet, no JIRA linked yet) and calculates, by the rule of three, the testing success rate of the current build. Both the current Major and Patch builds are displayed, rotating every 15 seconds as well.
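The rule-of-three computation described above boils down to a simple proportion; the numbers in the usage line are invented:

```ruby
# Success rate of a build: the share of executed tests that are not
# outstanding failures, expressed as a percentage (rule of three).
def success_rate(total_tests, outstanding_failures)
  return 0.0 if total_tests.zero?
  (total_tests - outstanding_failures) * 100.0 / total_tests
end

rate = success_rate(4000, 120)   # e.g. 3880 of 4000 tests passing
```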
13.4 Daily Builds
Description: Here the success percentage is displayed with a donut chart, in green (success) and red (fail). For instance, if on a given day there are 4 builds of which 3 fail and one succeeds, the widget shows 75% in red and the remaining 25% in green. This widget also helps keep track of the current situation of the builds, so developers and the rest of the stakeholders can keep an eye on it.
Description: This widget is similar to the previous one, since it shows the successful and failed builds for each version, but it gives a historical perspective over the builds of the last 15 days. It allows the managers to recall how the builds performed in recent days.
13.6 NLOC
Description: This widget displays the number of passing, ignored and failing tests from a test result file in the JUnit XML format. It also shows the duration of the test run and the last update. With this widget the developers can keep track of the testing of their software.
13.8 Timeline
Description: This widget does not represent any metric; it just displays the current situation along the software life-cycle. Its purpose is to allow the people involved in the project to keep track of upcoming events and the current phase status.
14 Conclusions
The main focus during the development of the application was to obtain measurements that could be used to improve the processes. All features were streamlined for maximum usability. This process also taught me how company KPIs can shape the decision making of the stakeholders. The perfect solution for a company is not always the most powerful tool with the highest number of features: the best software solution fulfils the specifications as precisely as possible.
During the development of this project, I grew my analytical skills and gained a more proactive and dynamic attitude. At the same time, I worked with new languages and frameworks, and I acquired a better understanding of the software production life-cycle and of software companies' architecture and workflow.
15 Future prospect
The project is not finished yet; new implementations will come soon. New metrics will be used and more widgets created: a historical graph of open/closed tasks, a widget showing the mean time to fix a failure, and another displaying the effort of completing a task. The company would also like to customize the dashboard per team, so each team can access its own info-board from its own computer. Another new feature will be the use of a client/server SQL database engine, in this case an Oracle database. One reason is that Oracle DB, as one of the leaders in relational databases, is able to handle a lot of data in a well-organized way. Moreover, it delivers a very stable system for running databases, with fail-over solutions, backups and quick 'data resets' if needed.
Bibliography
[Agi13] Leading Agile. How Do You Know Your Metrics Are Any Good. 2013. url: http://www.leadingagile.com/2013/07/how-do-you-know-your-metrics-are-any-good (visited on 06/05/2016).
[Aha14] Simo Ahava. Google Tag Manager's Data Model. 2014. url: http://www.simoahava.com/analytics/google-tag-manager-data-model/#gref (visited on 06/05/2016).
[Cai14] Larry Cai. Learn Dashing Widget in 90 minutes. 2014. url: http://www.slideshare.net/larrycai/learn-dashing-widget-in-90-minutes (visited on 06/05/2016).
[Cer13a] ISTQB Exam Certification. ISTQB Association. 2013. url: http://istqbexamcertification.com/what-is-defect-or-bugs-or-faults-in-software-testing (visited on 06/05/2016).
[Cer13b] ISTQB Exam Certification. ISTQB Exam Certification. 2013. url: http://istqbexamcertification.com/what-is-acceptance-testing/ (visited on 06/05/2016).
[Doc14] Atlassian Documentation. Working with workflows. 2014. url: https://confluence.atlassian.com/adminjiracloud/working-with-workflows-776636540.html (visited on 06/05/2016).
[Doc15] Ubuntu Documentation. Cron How to. 2015. url: https://help.ubuntu.com/community/CronHowto (visited on 06/05/2016).
[Doc16] Sonar Qube Documentation. SONAR QUBE. 2016. url: http://www.sonarqube.org/ (visited on 06/05/2016).
[Eas10] Balamurugan Easwaran. Implementation advantages of REST. 2010. url: http://www.slideshare.net/BalamuruganEaswaran/implementation-advantages-of-rest (visited on 06/05/2016).
[Gei14] Matthias Geiger. SonarQube – What is it? How to get started? Why do I use it? 2014. url: https://matthiasgeiger.wordpress.com/2014/02/19/sonarqube-what-is-it-how-to-get-started-why-do-i-use-it/ (visited on 06/05/2016).
[Gur16] Guru99. All About Compatibility Testing. 2016. url: http://www.guru99.com/compatibility-testing.html (visited on 06/05/2016).
[Hut13] Project Hut. What is a SVN repository? 2013. url: https://www.projecthut.com/what-is-svn-repository/ (visited on 06/05/2016).
[Ken07] C.S. Kent. Software Metrics. 2007. url: http://www.cs.kent.edu/~jmaletic/cs63901/lectures/SoftwareMetrics.pdf (visited on 06/05/2016).
[Nic12] Dave Nicolette. Software Development Metrics. Pearson, 2012. isbn: 9780198520115.
[Pea06] Pearson. What Is Software Configuration Management? 2006. (Visited on 06/05/2016).
[Som09a] Ian Sommerville. SOFTWARE ENGINEERING. Pearson, 2009. isbn: 978-0-13-703515-1.
[Som09b] Ian Sommerville. SOFTWARE ENGINEERING. Pearson, 2009. isbn: 978-0-13-703515-1.
[tut13] tutorialspoint. Jenkins Tutorial. 2013. url: http://www.tutorialspoint.com/jenkins/ (visited on 06/05/2016).
[Wik01a] Wikipedia. Software Testing. 2001. url: https://en.wikipedia.org/wiki/Software_testing (visited on 06/05/2016).
[Wik01b] Wikipedia. Software testing. 2001. url: https://en.wikipedia.org/wiki/Software_testing#Functional_vs_non-functional_testing (visited on 06/05/2016).
[Wik01c] Wikipedia. Unit testing. 2001. url: https://en.wikipedia.org/wiki/Unit_testing (visited on 06/05/2016).
[Wik02] Wikipedia. Regression Testing. 2002. url: https://en.wikipedia.org/wiki/Regression_testing (visited on 06/05/2016).
[Wik03] Wikipedia. Integration testing. 2003. url: https://en.wikipedia.org/wiki/Integration_testing (visited on 06/05/2016).
[Wik04a] Wikipedia. Cohesion (computer science). 2004. url: https://en.wikipedia.org/wiki/Cohesion_(computer_science) (visited on 06/05/2016).
[Wik04b] Wikipedia. System Testing. 2004. url: https://en.wikipedia.org/wiki/System_testing (visited on 06/05/2016).
[Wik06a] Wikipedia. Dashboard. 2006. url: https://en.wikipedia.org/wiki/Dashboard_(management_information_systems) (visited on 06/05/2016).
[Wik06b] Wikipedia. Functional Testing. 2006. url: https://en.wikipedia.org/wiki/Functional_testing (visited on 06/05/2016).
[Wik06c] Wikipedia. Recovery Testing. 2006. url: https://en.wikipedia.org/wiki/Recovery_testing (visited on 06/05/2016).
[Wik08] Wikipedia. Project management triangle. 2008. url: https://en.wikipedia.org/wiki/Project_management_triangle (visited on 06/05/2016).