LNCS 8527
HCI in Business
First International Conference, HCIB 2014
Held as Part of HCI International 2014
Heraklion, Crete, Greece, June 22–27, 2014, Proceedings
Lecture Notes in Computer Science 8527
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Alfred Kobsa
University of California, Irvine, CA, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
Fiona Fui-Hoon Nah (Ed.)
HCI in Business
First International Conference, HCIB 2014
Held as Part of HCI International 2014
Heraklion, Crete, Greece, June 22–27, 2014
Proceedings
Volume Editor
Fiona Fui-Hoon Nah
Missouri University of Science and Technology
Department of Business and Information Technology
101 Fulton Hall, 301 West 14th Street
Rolla, MO 65409, USA
E-mail: [email protected]
Thematic areas:
• Human–Computer Interaction
• Human Interface and the Management of Information
Affiliated conferences:
• Enterprise systems
• Social media for business
• Mobile and ubiquitous commerce
• Gamification in business
• B2B, B2C, C2C e-commerce
• Supporting collaboration, business and innovation
• User experience in shopping and business
I would like to thank the Program Chairs and the members of the Program
Boards of all affiliated conferences and thematic areas, listed below, for their
contribution to the highest scientific quality and the overall success of the HCI
International 2014 Conference.
This conference could not have been possible without the continuous support
and advice of the founding chair and conference scientific advisor, Prof. Gavriel
Salvendy, as well as the dedicated work and outstanding efforts of the commu-
nications chair and editor of HCI International News, Dr. Abbas Moallem.
I would also like to thank the members of the Human–Computer Interaction
Laboratory of ICS-FORTH, and in particular George Paparoulis, Maria Pitsoulaki,
Maria Bouhli, and George Kapnas, for their contribution towards the smooth
organization of the HCI International 2014 Conference.
Human–Computer Interaction
Program Chair: Masaaki Kurosu, Japan
Cross-Cultural Design
Program Chair: P.L. Patrick Rau, P.R. China
Augmented Cognition
Program Chairs: Dylan D. Schmorrow, USA,
and Cali M. Fidopiastis, USA
HCI in Business
Program Chair: Fiona Fui-Hoon Nah, USA
External Reviewers
General Chair
Professor Constantine Stephanidis
University of Crete and ICS-FORTH
Heraklion, Crete, Greece
E-mail: [email protected]
Table of Contents
Enterprise Systems
Exploring Interaction Design for Advanced Analytics and Simulation . . . 3
Robin Brewer and Cheryl A. Kieliszewski
“There’s No Way I Would Ever Buy Any Mp3 Player with a Measly
4gb of Storage”: Mining Intention Insights about Future Actions . . . . . . . 233
Maria Pontiki and Haris Papageorgiou
Gamification in Business
A Framework for Evaluating the Effectiveness of Gamification
Techniques by Personality Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Charles Butler
1 Introduction
An expert has mastered a set of tasks and activities that are performed on a regular basis, and these
tasks often become automatic. In turn, this automation can make it difficult to elicit
detailed information from the expert about a set of tasks because she may
unintentionally leave out important details or essential steps when describing the
tasks [1,2].
The research presented in this paper was conducted within the context of a
modeling and simulation (M&S) tool called SPLASH (Smarter Planet Platform for
Analysis and Simulation of Health) [3]. Through SPLASH, end users with varying
degrees of expertise in analytics and simulation can design simulation experiments to
apply in a variety of fields including finance, urban planning, healthcare, and disaster
planning. This range of fields and end users poses challenges for how to
accommodate a wide array of expertise in M&S – that is, from people with deep
domain knowledge about the construction of models and simulations to people with
skill and expertise in running the simulation system and analyzing the output within a
particular field. In addition, the domain of modeling and simulation tends to
emphasize algorithm design and implementation rather than interface and interaction
design. Without a body of evidence of how scientists and analysts use modeling or
simulation tools, we had to work with a community of our intended end users to
identify expectations and interface design features. This paper describes the method
and results of using exploratory interviews, disruptive interviews, and participatory
ideation to elicit information from experts in the field of M&S to inform the design of
the SPLASH interaction.
2 Background
Modeling and simulation is a complex research area that typically draws from
mathematics, statistics, and business [6]. The process to create models and
An expert can be defined as “an individual that we can trust to have produced
thoughtful, consistent and reliable evaluations of items in a given domain” [9].
Because experts have, in essence, 10,000+ hours of experience [2], they are very
familiar with a particular process and pattern to perform a task or activity. Therefore,
it may be easy for the expert to recall the process for performing a particular activity
or sequence of tasks but difficult to express the process to a novice. To study expert
activities, many routine tasks are documented using some form of observation
[10,11]. However, the tacit knowledge and reasoning may not be apparent to the
observer when experts are performing a routine task [12].
There are two intended user groups of SPLASH, both of which are considered to
be experts: scientists and analysts. In our user population, scientists typically design,
build, and run models and simulation experiments. Analysts run experiments after a
model has been built and/or analyze results of the
simulation run to aid in further decision-making. Both scientists and analysts are
experts in performing analytical tasks that we needed to better understand. To design
an interface for SPLASH, it was fundamental to understand what processes, tools, and
techniques our target users employ to build and run simulations to model and
understand potential system behavior.
For this study, we decided to use a series of three interview techniques to elicit
expert knowledge in a relatively short period of time – being sensitive to work
schedules and volunteer participation of our pool of professionals. Interviewing is a
common HCI technique for eliciting information from stakeholders for rich
qualitative analysis. Interviews can take many different forms including unstructured,
semi-structured, and structured [13]. We started our investigation with semi-
structured exploratory interviews to gain an understanding of what it is to do M&S
work and to further structure the remaining two investigation phases of disruptive
interviews and participatory ideation.
Disruptive interviews are derived from semi-structured interviews and can aid
in the recall of past steps to complete a process that may have become automatic
and taken for granted [12,14]. The interview technique uses a specific scenario that is
then constrained over time by placing limitations on the options available to the
participant. The constraints of the scenario are iteratively refined so that the
participant must reflect on the processes and their reasoning. This technique borrows
from condensed ethnographic interviews [12] that transform discussion from broad
issues to detailed steps [15]. It is critical that disruptive interviews consider the
context of the interviewees’ processes. Understanding such context allows the
researcher to design interview protocols appropriate to the constraints a person
typically encounters in their work.
Participatory ideation (PI) is a mash-up of two existing techniques, participatory
design and social ideation. Participatory design is often described as ‘design-by-
doing’ [16] to assist researchers in the design process. This method is often used
when researchers and designers want to accurately design a tool for an audience they
are not familiar with [17]. Complementary to this, social ideation is the process of
developing ideas with others via a web-enabled platform and utilizes brainstorming
techniques to generate new ideas [18]. Both participatory design and social ideation
are intended for early stage design and to engage with the users of the intended tool.
We interviewed professional scientists and analysts to investigate their
expectations for the design of a technology such as SPLASH. The research questions
we aimed to address were:
• RQ1: What are people’s expectations for a complex cross-disciplinary modeling
and simulation tool?
• RQ2: How should usable modeling and simulation interfaces be designed for
non-technical audiences?
3 Methods
3.3 Participatory Ideation
The participatory ideation phase was conducted to elicit early-stage interface
prototype design ideas. Because all of our participants were remote, we used an
asynchronous, online collaboration tool called
Twiddla [21] as an aid to collect input. The participants were placed into one of two
conditions: individual ideation or group ideation. For this phase we recruited two
scientists and one analyst for the individual ideation condition, and two scientists and
two analysts for the group ideation condition.
We started with individual ideation, where the participants were given a blank
canvas and asked to sketch ideas for model and data source selection, composition,
and expected visualization(s) of simulation output based on one of the four scenarios
that was created from the exploratory phase. Key interface and interaction features
from the individual ideation output were then summarized and displayed as a starting
point on the Twiddla drawing canvas for the group ideation participants. We
hypothesized that the group ideation would produce more robust ideas because
participants wouldn’t need to create a new concept, but could simply build upon a set
of common ideas [22].
4 Results
The three phases of this work each provided insight towards answering our research
questions and built upon the findings of the previous phase(s). Here we provide the
key results for each.
The disruptive interviews provided insight into the selection and prioritization of
model and data sources – a key element to composite modeling. We were able to
explore steps taken when options are suddenly limited and how one would work
through the challenge. In doing so, there were disruption-based triggers that prompted
participants to deliberately reflect on and express how they would complete the
scenario – as illustrated in the following statement:
“When you build a simulation model you can collect everything in the world
and build the most perfect model and you ask what are my 1st order effects?
What are the ones I think are most critical? If I don't have them in there, my
simulation model would be way off. The second ones are secondary effects...
Those are the ones if I don't have enough time, I could drop those.”
This next phase resulted in sketches of interface ideas generated by the participants.
Recall that the participatory ideation phase was designed with two conditions of
participation: individual ideation and group ideation. The findings show similarities
between the user groups, but also ideas unique to scientists and to analysts. In
addition, we unexpectedly found that even though our group ideation participants
were provided a sketch to start from (based on the individual ideation results), it was
ignored by all of them and each decided to start with a blank design canvas. What
follows is a summary of the design ideas that were mutual to analysts and scientists
and then those that were specific to each participant group.
Once the results of the participatory ideation phase were aggregated, three mutual
interaction and interface design ideas stood out. The first design idea was a feature to
support browsing and exploration of model and data sources that would afford
examination of schemas and/or variables prior to selection for use in a scenario. The
second was a feature to compare the output of multiple simulation runs for a
particular scenario to better understand the trade-offs of selecting one simulation
solution compared to another (Fig. 1). The third feature was an audience-specific
dashboard for making complex decisions that would provide a summary of the model
and data sources that were used when running the simulation.
Fig. 1. Example sketch of a simulation output where it would be easy to compare scenarios
Fig. 2. Example of expected flow and interaction features for composite modeling
5 Discussion
The results of this series of interviews helped us better understand our target users and
inform subsequent interface prototype design. Specifically, the use of constraints as
disruption in the interviews served as effective triggers, prompting and focusing our
experts to provide details about how they would go about designing a composite
model. These triggers demonstrated the usefulness of disruptive interviews [12,14,15],
and although [9] suggests that experts tend to produce consistent and reliable
evaluations of the work that they perform, we found that they are not particularly
consistent in the manner that they reflect on their process of doing so. In addition, we
were able to efficiently collect interaction expectations and interface design input from
the experts we worked with through participatory ideation.
During the initial process of building a composite model, our analyst community
expected a tool that would provide recommendations. These recommendations ranged
from automated suggestions of which model and data sources to use for a particular
scenario to suggestions for how to then couple the data and models in order
to run the simulation. This ran counter to what our scientist community expected:
they were familiar with building the models and wanted to be able to
interrogate the data and model sources to investigate elements such as provenance,
robustness, and limitations prior to selection for use. A compromise that may satisfy
both participant groups would be to implement an exploratory search and browse
feature where users are not recommended models and data sources, but must
prioritize the information needed before beginning the information retrieval process.
An exploratory search and browse feature may be useful for interactive navigation
of model and data sources to identify the appropriate elements to create a composite
model. For example, take two use cases we found for creating a composite model.
The first is that users may know the specific scenario or issue that they want to
analyze using a composite model; and to facilitate the identification of appropriate
and useful source components, they want to perform a search using specific keywords
or questions. The second use case is that users are in the early stages of defining their
project scope and want to run a simplified or meta-simulation to explore what is
important in order to identify the appropriate source components for the design of the
composite model. This loose exploration would be equivalent to browsing content on
a system, or browsing a larger set of possible scenarios, and getting approximate
output based on approximate inputs. This would allow the user the luxury of having a
basic understanding of the model and data requirements to target particular source
components.
Implementing an exploratory search and browse would require the underlying
systems to have information about the source components (most likely through
metadata, e.g., [3]) along with a set of composite model templates to enable this
manner of recommendation system. Alternatively, a more manual approach could be
taken such as prompting the user to identify known factors to be explored prior to
building the simulation, or identify the important relationships between source
components. This would lead to the system displaying either a dashboard of specific
sources or a catalog of different scenarios to consider. Participants agreed this
exploration should include a high level of interaction with different tuning knobs and
a visualization recommendation interface. In addition, audience-specific dashboards
would be useful for making complex decisions, providing a summary of the
simulation models and source components used in the simulations.
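To make the search-and-browse idea concrete, the following is a minimal sketch of keyword search and faceted browsing over source-component metadata; the metadata fields, catalog entries, and function names are invented for illustration and are not part of SPLASH.

```python
from dataclasses import dataclass, field

@dataclass
class SourceComponent:
    """A model or data source described by metadata (fields are illustrative)."""
    name: str
    kind: str                      # "model" or "data"
    domain: str
    keywords: set = field(default_factory=set)

CATALOG = [
    SourceComponent("flu-transmission", "model", "healthcare", {"epidemic", "sir"}),
    SourceComponent("census-2010", "data", "urban planning", {"population"}),
    SourceComponent("ed-arrivals", "data", "healthcare", {"emergency", "arrivals"}),
]

def search(catalog, query_terms):
    """Use case 1: keyword search against component metadata."""
    terms = {t.lower() for t in query_terms}
    return [c for c in catalog
            if terms & ({c.kind, c.domain.lower()} | c.keywords)]

def browse(catalog, kind=None, domain=None):
    """Use case 2: loose, faceted exploration of the catalog."""
    return [c for c in catalog
            if (kind is None or c.kind == kind)
            and (domain is None or c.domain == domain)]

print([c.name for c in search(CATALOG, ["emergency"])])        # ['ed-arrivals']
print([c.name for c in browse(CATALOG, domain="healthcare")])  # ['flu-transmission', 'ed-arrivals']
```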
For the simulation output, our results show that both user groups want a
comparison feature that illustrates trade-offs of important scenario factors used in the
final simulation. In addition, they would prefer recommended visualizations for the
simulation to best understand and interpret the generated output. Overall, we saw a
desire to explore model and data sources before and after use in a simulation.
6 Conclusions
This paper describes the results of the first stages of a research effort to explore
interaction expectations for a modeling and simulation technology. The study was set
within the context of a composite modeling and simulation technology called
SPLASH that enables the coupling of independent models (and their respective data
sources) to examine what-if trade-offs for complex systems. Our participant pool
included scientists and analysts; both considered experts in the areas of modeling,
simulation, and analytics. Without the benefit of interaction conventions for modeling
and simulation technologies, we used three techniques (exploratory interviews,
disruptive interviews, and participatory ideation) to elicit information from experts in
the field of modeling and simulation to inform the interaction design of the SPLASH
interface.
Our results show that there are differences in interaction expectations between
scientists and analysts. Our scientists wanted considerably more explicit features and
functionality to enable deep precision for modeling and simulation tasks; whereas our
analysts wanted simplified functionality with intelligent features and recommendation
functionality. We also found some common ground between our participants, such as
both groups wanting a comparison feature to show trade-offs based on simulation
output. Our findings point towards a semi-automated interface that provides a
recommended starting point and allows for flexibility to explore component sources
of models and data prior to selection for use, along with a pre-screening capability to
quickly examine potential simulation output based on an early idea for a composite
model.
References
1. Chilana, P., Wobbrock, J., Ko, A.: Understanding Usability Practices in Complex
Domains. In: Proceedings of the 28th International Conference on Human Factors in
Computing Systems, CHI 2010, pp. 2337–2346. ACM Press (2010)
2. Ericsson, K.A., Prietula, M.J., Cokely, E.T.: The Making of an Expert. Harvard Business
Review: Managing for the Long Term (July 2007)
3. Tan, W.C., Haas, P.J., Mak, R.L., Kieliszewski, C.A., Selinger, P., Maglio, P.P., Li, Y.:
Splash: A Platform for Analysis and Simulation of Health. In: IHI 2012 – Proceedings of
the 2nd ACM SIGHIT International Health Informatics Symposium, pp. 543–552 (2012)
4. Maglio, P.P., Cefkin, M., Haas, P., Selinger, P.: Social Factors in Creating an Integrated
Capability for Health System Modeling and Simulation. In: Chai, S.-K., Salerno, J.J.,
Mabry, P.L. (eds.) SBP 2010. LNCS, vol. 6007, pp. 44–51. Springer, Heidelberg (2010)
5. Kieliszewski, C.A., Maglio, P.P., Cefkin, M.: On Modeling Value Constellations to
Understand Complex Service System Interactions. European Management Journal 30(5),
438–450 (2012)
6. Robinson, S.: Conceptual Modeling for Simulation Part I: Definition and Requirements.
Journal of the Operational Research Society 59(3), 278–290 (2007a)
7. Robinson, S.: Conceptual Modeling for Simulation Part II: A Framework for Conceptual
Modeling. Journal of the Operational Research Society 59(3), 291–304 (2007b)
8. Haas, P., Maglio, P., Selinger, P., Tan, W.: Data is Dead... Without What-If Models.
PVLDB 4(12), 11–14 (2011)
9. Amatriain, X., Lathia, N., Pujol, J.M., Kwak, H., Oliver, N.: The Wisdom of the Few. In:
Proceedings of the 32nd International ACM SIGIR Conference on Research and
Development in Information Retrieval - SIGIR 2009, pp. 532–539. ACM Press (2009)
10. Karvonen, H., Aaltonen, I., Wahlström, M., Salo, L., Savioja, P., Norros, L.: Hidden Roles
of the Train Driver: A Challenge for Metro Automation. Interacting with Computers 23(4),
289–298 (2011)
11. Lutters, W.G., Ackerman, M.S.: Beyond Boundary Objects: Collaborative Reuse in
Aircraft Technical Support. Computer Supported Cooperative Work (CSCW) 16(3), 341–
372 (2006)
12. Comber, R., Hoonhout, J., Van Halteran, A., Moynihan, P., Olivier, P.: Food Practices as
Situated Action: Exploring and Designing for Everyday Food Practices with Households.
In: Computer Human Interaction (CHI), pp. 2457–2466 (2013)
13. Merriam, S.B.: Qualitative Research and Case Study Applications in Education. Jossey-
Bass (1998)
14. Hoonhout, J.: Interfering with Routines: Disruptive Probes to Elicit Underlying Desires.
In: CHI Workshop: Methods for Studying Technology in the Home (2013)
15. Millen, D.R.: Rapid Ethnography: Time Deepening Strategies for HCI
Field Research. In: Proceedings of the 3rd Conference on Designing Interactive Systems:
Processes, Practices, Methods, and Techniques, pp. 280–286 (2000)
16. Kristensen, M., Kyng, M., Palen, L.: Participatory Design in Emergency Medical Service:
Designing for Future Practice. In: Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems, pp. 161–170. ACM Press (2006)
17. Hagen, P., Robertson, T.: Dissolving Boundaries: Social Technologies and Participation in
Design. Design, pp. 129–136 (July 2009)
18. Faste, H., Rachmel, N., Essary, R., Sheehan, E.: Brainstorm, Chainstorm, Cheatstorm,
Tweetstorm: New Ideation Strategies for Distributed HCI Design. In: Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems, pp. 1343–1352 (2013)
19. IBM, http://www.ibm.com/smarterplanet/us/en/smarter_cities/overview/index.html
20. Dedoose, http://www.dedoose.com/
21. Twiddla, http://www.twiddla.com
22. Osborn, A.F.: Applied Imagination, 3rd edn. Oxford (1963)
Decision Support System Based on Distributed
Simulation Optimization for Medical Resource Allocation
in Emergency Department
Tzu-Li Chen
1 Introduction
In recent years, Taiwan has gradually become an aging society. The continuing
growth of the senior population accelerates the year-on-year increase in
emergency department (ED) visits. According to statistics from the Department of
Health, Executive Yuan, the number of emergency visits rose from 6,184,031 in
2000 to 7,229,437 in 2010, a growth rate of approximately 16%.
Both the number of people making emergency visits and the growth rate of these
visits have risen rapidly over the past 11 years. Such an increase causes an imbalance between supply
and demand, and ultimately creates long-term overcrowding in hospital EDs. This
phenomenon is primarily caused by the sharp increase in patients (demand side), and
the insufficient or non-corresponding increase in medical staffing (supply side).
Consequently, medical staff capacity cannot accommodate excessive patient loads,
compelling patients to wait long hours for medical procedures, thus contributing to
long-term overcrowding in EDs.
The imbalance in supply and demand also prolongs patient length of stay (LOS) in
the ED. According to data from the ED at National Taiwan University Hospital, Shin
et al. (1999) found that, among 5,810 patients, approximately 3.6% (213 patients) had
stayed over 72 hours in the ED. Of these 213 patients, some had waited for physicians
or beds, whereas some had waited in the observation room until recovery or to be
cleared of problems before being discharged. These issues frequently lead to long-
term ED overcrowding. Based on data analysis of the case hospital examined in this
research, among 43,748 patients, approximately 9% (3,883 patients) had stayed in the
ED for over 12 hours, approximately 3% (1,295) had stayed over 24 hours, and
approximately 1% (317 patients) had stayed in the ED for 72 hours.
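The percentages quoted above can be verified with simple arithmetic; a quick check (all inputs taken from the text):

```python
# All inputs below are the figures quoted in the text.
visits_2000, visits_2010 = 6_184_031, 7_229_437
growth = (visits_2010 - visits_2000) / visits_2000
print(f"ED visit growth, 2000-2010: {growth:.1%}")       # ~16.9%, i.e. roughly 16%

total = 43_748
for label, count in ((">12 h", 3_883), (">24 h", 1_295), (">72 h", 317)):
    print(f"LOS {label}: {count}/{total} = {count / total:.1%}")
```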
Hoot and Aronsky (2008) postulated three solutions to address the overcrowding of
EDs: (1) Increase resources: solve supply deficiency by adding manpower, beds,
equipment, and space. (2) Effective demand management: address problems of
insufficient supply by implementing strategies such as referrals to other departments,
clinics, or hospitals. (3) Operational research: explore solutions to ED overcrowding
by exploiting management skills and models developed in operational research. For
instance, determining effective resource allocation solutions can improve existing
allocation methods and projects, ultimately enhancing ED efficiency, lowering
patient waiting times, and alleviating ED overcrowding.
Among the previously mentioned solutions, the first is not attainable in Taiwan,
because most hospital EDs have predetermined, fixed manpower, budgets, and space;
hence, resources cannot be expanded to resolve the problem. The second solution is
not legally permitted in Taiwan and is essentially not applicable. Since both of the
preceding solutions are inappropriate or inapplicable, this study adopted the third,
which entailed constructing an emergency flow simulation model grounded in
operational research. A simulation optimization algorithm was then used to identify
the optimal medical resource allocation under the constraint of limited medical
resources so as to attain minimal average patient LOS and minimal MWC,
subsequently ameliorating ED overcrowding.
The main purpose of this research was to develop a multi-objective simulation
optimization algorithm that combines a non-dominated sorting genetic algorithm II
(NSGA-II) with multi-objective optimal computing budget allocation (MOCBA).
This study took the ED flow of a case hospital as its research target. It is assumed
that patient interarrival times and the service times of each medical service obey
specific stochastic distributions, and that the allocation of each type of medical
resource (such as staff, equipment, and emergency beds) is deterministic and fixed,
not changing dynamically over time. Under these pre-established conditions, a
multi-objective emergency medical resource allocation problem was formulated
whose primary goals are minimal average LOS and minimal average MWC. Under
restricted medical resources, this study aimed to obtain the most viable solution for
emergency medical resource allocation.
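As an illustration of the kind of stochastic ED flow assumed here, the sketch below simulates a single-stage, multi-server ED with exponential interarrival and service times and a fixed resource allocation; the parameter values are placeholders, not the case hospital's data.

```python
import heapq
import random

def simulate_ed(n_patients=2000, arrival_rate=1 / 6.0, service_rate=1 / 20.0,
                n_servers=4, seed=42):
    """Single-stage, multi-server ED with exponential interarrival and
    service times (placeholder parameters, in minutes). The resource
    allocation (n_servers) is fixed, matching the assumption above that
    allocations do not change dynamically over time."""
    rng = random.Random(seed)
    free_at = [0.0] * n_servers        # next-free time of each server (staff/bed)
    heapq.heapify(free_at)
    t = total_los = 0.0
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)       # next patient arrives
        start = max(t, heapq.heappop(free_at))   # wait for the earliest free server
        finish = start + rng.expovariate(service_rate)
        heapq.heappush(free_at, finish)
        total_los += finish - t                  # waiting + service = LOS
    return total_los / n_patients

print(f"average LOS: {simulate_ed():.1f} min")
```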
Indices:
i: index for staff type (i = 1, ..., I), e.g., doctor, nurse.
j: index for working area (j = 1, ..., J), e.g., registration area, emergency and critical care area, treatment area, fever area.
Subject to
\( l_k \le Y_k \quad \forall k \)   (4)
\( \sum_{j} X_{ij} \le u_i \quad \forall i \)   (5)
\( Y_k \le u_k \quad \forall k \)   (6)
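Read this way, a candidate allocation is feasible when the total staffing of each type stays within its bound and each resource level lies between its bounds. A minimal feasibility check follows; the interpretation of X_ij (staff of type i in area j) and Y_k (units of resource k) is inferred from the excerpt, not stated in it.

```python
def feasible(X, Y, u_staff, l_res, u_res):
    """Check constraints (4)-(6) for a candidate allocation.

    X[i][j] -- staff of type i assigned to working area j (inferred reading)
    Y[k]    -- units of resource k (e.g., emergency beds)
    u_staff[i], l_res[k], u_res[k] -- the bounds u_i, l_k, u_k
    """
    staff_ok = all(sum(row) <= u for row, u in zip(X, u_staff))    # (5)
    res_ok = all(l <= y <= u for y, l, u in zip(Y, l_res, u_res))  # (4), (6)
    return staff_ok and res_ok

# 2 staff types (doctors, nurses) x 3 working areas, 2 bounded resource types
X = [[2, 1, 1],
     [3, 2, 2]]
print(feasible(X, Y=[10, 4], u_staff=[5, 8], l_res=[8, 2], u_res=[12, 6]))  # True
```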
[Figure: distributed simulation optimization architecture, comprising simulation VMs, a coordinator, and a VM dispatcher]
In this experiment, we primarily compared the simulation times for varying numbers
of VMs to identify the differences when applying the proposed distribution simulation
optimization model and the effects that the number of VMs had on the simulation
times. In addition, this experiment analyzed the differences in simulation times for
various allocation strategies with equal numbers of VMs.
We adopted the integrated NSGA II_MOCBA as the experimental algorithm, and
employed the optimal NSGA-II parameter settings determined in the previous
experiments: generations = 10, population size = 40, crossover rate C = 0.7,
mutation rate M = 0.3, and termination after 10 generations.
The initial number of simulation replications for the MOCBA was n0 = 5, with a
possible increment of Δ = 30 and P*{CS} = 0.95 for every iteration.
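For orientation, the sketch below shows how such an integrated loop can fit together: an NSGA-II-style generation step whose fitness evaluation spreads extra simulation replications over the noisiest solutions. The simulator is a stub, and the budget rule is a crude stand-in for MOCBA's allocation formulas, not the paper's algorithm.

```python
import random

rng = random.Random(0)

def simulate(sol):
    """Stub simulator: one noisy observation of (LOS, MWC) for a solution."""
    los = 30.0 / sum(sol) + rng.gauss(0, 1)     # more resources -> shorter stay
    mwc = 2.0 * sum(sol) + rng.gauss(0, 1)      # more resources -> higher cost
    return los, mwc

def dominates(a, b):
    """Pareto dominance when minimizing both objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evaluate(pop, n0=5, delta=30):
    """n0 replications per solution, then spend `delta` extra replications on
    the solutions with the noisiest LOS estimates (a crude stand-in for
    MOCBA's computing-budget allocation)."""
    obs = [[simulate(s) for _ in range(n0)] for s in pop]
    spread = lambda i: max(o[0] for o in obs[i]) - min(o[0] for o in obs[i])
    for i in sorted(range(len(pop)), key=spread, reverse=True)[:delta // n0]:
        obs[i] += [simulate(pop[i]) for _ in range(n0)]
    return [tuple(sum(col) / len(col) for col in zip(*rows)) for rows in obs]

def step(pop, crossover=0.7, mutation=0.3):
    """One NSGA-II-style generation: variation, then prefer non-dominated
    solutions among parents + children (crowding distance omitted)."""
    children = []
    for _ in range(len(pop)):
        a, b = rng.sample(pop, 2)
        cut = rng.randrange(1, len(a))
        child = a[:cut] + b[cut:] if rng.random() < crossover else a[:]
        if rng.random() < mutation:
            k = rng.randrange(len(child))
            child[k] = max(1, child[k] + rng.choice([-1, 1]))
        children.append(child)
    combined = pop + children
    scores = evaluate(combined)
    front = [s for s, sc in zip(combined, scores)
             if not any(dominates(other, sc) for other in scores)]
    return (front + combined)[:len(pop)]

pop = [[rng.randint(1, 8) for _ in range(4)] for _ in range(10)]  # 4 resource types
for _ in range(10):                                               # 10 generations
    pop = step(pop)
print(pop[:3])                                                    # sample solutions
```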
Regarding the number of VMs, we conducted experiments using 1, 6, 12, and 18
VMs. Table 1 shows the execution times for the simulation programs with varying
numbers of VMs and allocation strategies. Other than the 1-VM case, two methods
can be used for allocating work to the VMs: with and without dividing the simulation
replications among them.
Table 1. The execution times for the simulation programs with varying numbers of VMs and
allocation strategies
1. The overall execution time for 1 VM approximated a month (28 days). However,
the execution time was reduced significantly to approximately 4 days and 1.5 days
when the number of VMs was increased to 6 and 18, respectively (Table 1). In
addition, the curve exhibited a significant decline from 1 VM to 18 VMs. We can
thus confirm from these results that the proposed distributed simulation
optimization framework can effectively reduce simulation times.
2. The overall execution time was reduced from approximately 4 days to 1.5 days
when the number of VMs increased from 6 to 18 (Table 1). In addition, the curve
exhibited a decline from 6 VMs to 18 VMs. These results indicate that simulation
times can be reduced by increasing the number of VMs.
3. With a fixed number of VMs, the time required to divide and allocate simulation
iterations across numerous VMs is shorter than that for allocating the entire number
of iterations to 1 VM (Table 1). Considering 6 VMs as an example, the execution
time without dividing and allocating the simulation iterations was 112 h, whereas
the execution time with dividing and allocating was 105.5 h. These results indicate
that distributing the simulation iterations among numerous VMs can reduce the
overall execution time.
4. According to the experimental results, we infer that there is a limit to how much
increasing the number of VMs can reduce the simulation times. In other words,
when VMs are added to a small pool of available VMs, the simulation time is
significantly reduced; however, as the number of VMs grows, the reduction in
simulation time becomes less significant, eventually converging. This indicates
that beyond a certain number of VMs, the simulation time does not decline with
additional VMs.
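Taking the quoted times at face value, the implied speedups can be computed directly; values slightly above the VM count simply reflect the rounded figures in the text.

```python
# Execution times as quoted above: 1 VM ~ 28 days, 6 VMs ~ 105.5 h (with
# divided iterations), 18 VMs ~ 1.5 days.
times_h = {1: 28 * 24, 6: 105.5, 18: 1.5 * 24}
for n_vms, hours in times_h.items():
    print(f"{n_vms:>2} VMs: {hours:6.1f} h, speedup {times_h[1] / hours:4.1f}x")
```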
6 Conclusion
References
1. Ahmed, M.A., Alkhamis, T.M.: Simulation optimization for an ED healthcare unit in
Kuwait. European Journal of Operational Research 198, 936–942 (2009)
2. Chen, C.H., Lee, L.H.: Stochastic simulation optimization: An Optimal Computing Budget
Allocation. World Scientific Publishing Co. (2010)
3. Hoot, N.R., Aronsky, D.: Systematic Review of ED Crowding: Causes, Effects, and
Solutions. Health Policy and Clinical Practice 52(2), 126–136 (2008)
4. Lee, L.H., Chew, E.P., Teng, S., Goldsman, D.: Finding the non-dominated Pareto set for
multi-objective simulation models. IIE Transactions 42(9), 656–674 (2010)
5. Pitombeira Neto, A.R., Gonçalves Filho, E.V.: A simulation-based evolutionary multiobjec-
tive approach to manufacturing cell formation. Computers & Industrial Engineering 59,
64–74 (2010)
The Impact of Business-IT Alignment on Information
Security Process
Abstract. Business-IT Alignment (BITA) has the potential to link with organizational
issues that deal with business-IT relationships at the strategic, tactical, and
operational levels. In this context, the information security process (ISP) is one of
the issues that can be influenced by BITA. However, this impact has not yet
been researched. This paper investigates the impact of BITA on ISP. For this
investigation, the relationships between elements of the Strategic Alignment Model
and the components of the Security Values Chain Model are considered. The research
process is an in-depth literature survey followed by a case study in two organizations,
located in the United States and the Middle East. The results show a clear
impact of BITA on how organizations distribute their allocated security budget
and resources based on needs and risk exposure. The results should help
both practitioners and researchers gain improved insight into the relationships
between BITA and IT security components.
1 Introduction
BITA at the operational level requires a social perspective, with aspects like
interaction and shared understanding/knowledge across teams and personnel. Even
though BITA is shown to have a potential impact on ISP at different organizational
levels, little research has been done in this area (Saleh, 2011). Given that ISP focuses
on the relationships between business and IT that support BITA, the complexity of
its nature increases when considering the different views on IT in organizations and
how to utilize IT with regard to business objectives.
This paper investigates the impact of BITA on ISP. For this investigation, the
relationships between elements of the Strategic Alignment Maturity Model (SAM)
developed by Luftman (2000) and the components of the Security Values Chain Model
(SVCM) developed by Kowalski & Boden (2002) are considered. The remainder of
the paper is structured as follows: the research approach is discussed in section 2. The
implications of BITA and ISP are presented in sections 3 and 4, respectively. Potential
relationships between BITA components and the SVCM are presented in section 5.
Results and analyses are presented in section 6, followed by conclusions in section 7.
2 Research Approach
The research method comprised an in-depth literature survey followed by case
study research. The literature survey aimed to study the theories behind BITA and
ISP and to hypothesize the impact of BITA criteria on the SVCM's components.
Following that, qualitative data was collected from two organizations through
semi-structured interviews with four respondents in each organization, selected to
represent strategic and senior management in both business and IT. The results were
codified and compared to the proposed hypotheses.
The first organization (referred to as Company-A) is a midsize insurance company in
the Midwest of the United States. The second organization (referred to as Company-B)
is a governmental entity located in the Middle East that acts as the national regulator
for communication and technology business.
Results from BITA research show that organizations that successfully align their
business and IT strategies can increase their business performance (Kearns & Lederer,
2003). BITA can also support analysis of the potential role of IT in an organization,
as it helps identify emergent IT solutions in the marketplace that can be opportunities
for changing business strategy and infrastructure (Henderson & Venkatraman, 1993).
Not only researchers, but business and IT practitioners have also emphasized
the importance of BITA. In the annual survey of the Society for Information
Management, BITA ranked first among top management concerns from 2003 to 2009,
with the exception of 2007 and 2009, in which it was second (Luftman & Ben-Zvi,
2010). Therefore, practitioners should pay special attention to BITA and particularly
to how it is achieved, assessed, and maintained in organizations.
Fig. 1. Luftman's Strategic Alignment Maturity Model (SAM) (adapted from Luftman, 2000)
Different efforts have been oriented towards assessing BITA by proposing
theoretical models that can be applied as supportive tools for addressing different
BITA components. An extensive study by El-Mekawy et al. (2013) collected those
models and their components in a comparative framework. Although Henderson and
Venkatraman are seen as the founding fathers of BITA modeling (Avison et al., 2004),
Luftman's model (SAM) has gained more popularity in practice (Chan & Reich,
2007). This popularity is due to the following: a) it follows a bottom-up approach by
setting goals, understanding the linkage between business and IT, analyzing and
prioritizing gaps, evaluating success criteria, and consequently sustaining alignment;
b) it presents strategic alignment as a complete, holistic process that encompasses not
only establishing alignment but also maturing it by maximizing alignment enablers
and minimizing inhibitors (Avison et al., 2004); c) SAM covers different BITA areas
through its modular six criteria; and d) since its inception, SAM has been used by
several researchers and in a number of industries for assessing BITA and its
components. Therefore, SAM is selected for use in this study for assessing BITA
and analyzing the proposed impact on ISP. SAM classifies BITA into six criteria
(Table 1) consisting of 38 attributes (Figure 1), assessed at five maturity levels:
Ad Hoc, Committed, Established Focused, Managed, and Optimized Process. This
classification gives a clear view of alignment and helps to spot the particular areas
where an organization needs to improve to maximize the value of its IT investments.
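A toy illustration of how a SAM-style profile might be tallied follows. The criterion names follow Luftman's six criteria; the scores for Scope and Architecture (3) and Skills (1) follow the Company-A results reported later, the others are invented, and the averaging rule is an illustrative shortcut, not Luftman's full assessment procedure.

```python
# Invented example scores on Luftman's five-level scale; averaging is a
# simplification for illustration only.
MATURITY = {1: "Ad Hoc", 2: "Committed", 3: "Established Focused",
            4: "Managed", 5: "Optimized Process"}

company_a = {"Communications": 2, "Value Measurements": 3, "Governance": 3,
             "Partnership": 3, "Scope and Architecture": 3, "Skills": 1}

overall = sum(company_a.values()) / len(company_a)
print(f"overall maturity: {overall:.1f} (~{MATURITY[round(overall)]})")
```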
The chain consists of five security access controls: deterrent, protective (preventive),
detective, responsive (corrective), and recovery. These controls represent input
points to IS (Table 2) at which an action may take place to stop undesired actions on
the system. AlSabbagh & Kowalski (2012) operationalized the security value chain as
a social metric for modeling the security culture of IT workers and individuals at two
organizations. Their research showed how IT workers' and individuals' security
culture diverges for security problems at the personal, enterprise, and national levels.
The research also studied the influence of available funds on security culture.
Control: Definition
Deter: reduces the chances of an existing vulnerability being exploited without actually reducing the exposure (e.g., the consequences of violating a company security policy).
Protect: prevents a security incident from occurring (e.g., access control implementations).
Detect: identifies and characterizes a security incident (e.g., a monitoring system alarm).
Respond: remediates the damage caused by a security incident (e.g., an incident response plan).
Recover: compensates for the losses incurred due to a security incident (e.g., security incident insurance).
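One way to picture the chain is as a set of budget categories; in the toy representation below the shares are invented for illustration, the point being that BITA maturity shapes how this distribution is chosen.

```python
# Toy budget split across the five access controls (shares invented for
# illustration; examples follow the definitions in Table 2).
security_value_chain = {
    "deter":   0.15,  # e.g., sanctions in the security policy
    "protect": 0.40,  # e.g., access-control implementations
    "detect":  0.20,  # e.g., monitoring and alarms
    "respond": 0.15,  # e.g., incident-response plan
    "recover": 0.10,  # e.g., incident insurance, support licenses
}
assert abs(sum(security_value_chain.values()) - 1.0) < 1e-9
```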
Over the years, different studies have shown a clear impact of business objectives and
performance on ISP (e.g., Huang et al., 2005; Johnson & Goetz, 2007). Other studies
have focused on the impact on ISP of IT strategies and of how IT is perceived (e.g.,
von Solms and von Solms, 2004; Doherty & Fulford, 2005). As the relationship
between business and IT is represented by BITA, the impact of BITA on ISP is
apparent. However, it has been analyzed neither in studies of BITA nor in studies of
ISP (Saleh, 2011). In this section, indications of the BITA impact on ISP are
presented. For each SAM criterion, we describe how it influences the access controls
of the security value chain. Hypothetically, we expect to find at least one reflection of
each SAM criterion on an access control. With the help of SAM's attributes in each
criterion, further interesting relations may be addressed.
• Communications. Based on the findings of Herath & Herath (2007), it is indicated
that mature channels and metrics for communications between business and IT
have a strong impact on how ISP is perceived in an organization. This also
influences the way the organization reacts and responds to security attacks.
However, as found by Huang et al. (2006), it can be concluded that achieving
complete information security is virtually impossible. This is because mature
communications in an organization must be further extended to include suppliers,
partners, and customers, which potentially increases the exposure to attacks.
Therefore, mature communications in BITA are found to involve less expenditure
on detecting, responding, and recovering, but show no clear indications for
deterring and protecting.
In this section, results and analyses of the BITA assessment are presented in
subsection 6.1, followed by the analyses of BITA and ISP in subsection 6.2.
It is also seen to bring value to the organization and to co-adapt with business to
enable/drive strategic objectives. These conditions indicate a maturity level of 4.
• Scope and Architecture. In both Company-A and Company-B, IT is considered a
catalyst for changes in the business strategy, with a mature IT architecture. In
addition, IT standards are defined and enforced at the functional unit level, with
emerging coordination across functional units. Although they are integrated
across the organisation, they are not extended to include customer and supplier
perspectives, which indicates a maturity level of 3.
• Skills. In Company-A, the environment is characterized as innovative and
encouraging, especially at the functional units. However, training is initial and
technical, and rewards are few. Career crossover is limited to the strategic levels
only, and the environment is dominated by top business managers, who have a
greater locus of power than IT managers. The overall maturity level is therefore
assessed as 1. In Company-B, innovation is strongly motivated, especially at the
functional units, with cross-training and limited change readiness. Top business
management dominates and holds the locus of power over IT management. Career
crossover is extended, but only to senior management and the functional units.
The overall maturity is assessed at level 3.
• Company-A. The interviews show a potential impact of BITA maturity on ISP. For
instance, while business perceives IT as a cost to the business, senior and mid-level
business managers have a limited understanding of IT. Business seems not to care
about security spending: the budget is allocated with no questions asked and no
awareness of how effectively it is used. This is also reflected in the fact that IT
metrics are primarily technical. The BITA maturity level appears to be a focused
and managed process. There is a formal feedback process for reviewing and
improving measurement results. Both business and IT conduct formal strategic
planning across the organisation, but it is not extended to partners/alliances. What
also became clear during the interviews is that there is no awareness of the need for
having all five types of security access controls. One of the interviewees was even
given support to obtain figures on the spending distribution across the five controls.
Table 3. Ideal and Expected Security Value Chain in Company-A based on Collected Data
accountability when security is violated. More than 10% of the security budget is
allocated to such deterring controls. The same problem is observed regarding the
implementation of recovery controls. As business does not understand why IT
needs active support licenses for its applications, it decided not to renew any
licenses. It is well known in IT that having such support available is vital for
providing means of recovery from potential issues; the business, however,
considered active support licenses an extra cost that goes unused most of the time.
The limited maturity in Communications and Skills has also resulted in more
severe issues related to human resourcing: business is not allocating enough funds
to hire senior security consultants who could improve the organization's security
position. Business perceives IT as an enabler of business objectives and changes,
however with insufficient turnover. This perception has resulted in budget
constraints for IT and difficulties in getting the IT budget approved.
Table 4. Ideal and Current Security Value Chain in Company-B based on Collected Data
References
1. Adams, J.: Risk. Taylor & Francis, London (1995)
2. Al-Hamdani, W.A.: Non risk assessment information security assurance model. In:
Proceedings of the Information Security Curriculum Development Conference, pp. 84–90.
ACM, Kennesaw (2009)
3. AlSabbagh, B., Kowalski, S.: Developing Social Metrics for Security – Modeling the
Security Culture of IT Workers Individuals (Case Study). In: Proceedings of the 5th Inter-
national Conference on Communications, Computers and Applications (2012)
4. Amer, S.H., Hamilton, J.A.: Understanding security architecture. In: Proceedings of the
Spring Simulation Multi-conference, Society for Computer Simulation, Canada (2008)
5. Avison, D., Jones, J., Powell, P., Wilson, D.: Using and Validating the Strategic
Alignment Model. Journal of Strategic Information Systems 13, 223–246 (2004)
6. Barabanov, R., Kowalski, S.: Group Dynamics in a Security Risk Management Team
Context: A Teaching Case Study. In: Rannenberg, K., Varadharajan, V., Weber, C. (eds.)
SEC 2010. IFIP AICT, vol. 330, pp. 31–42. Springer, Heidelberg (2010)
7. Beautement, A., Sasse, M.A., Wonham, M.: The compliance budget: managing security
behaviour in organisations. In: NSPW 2008, pp. 47–58 (2008)
8. Benbya, H., McKelvey, B.: Using Coevolutionary and Complexity Theories to Improve IS
Alignment: A multi-level approach. Journal of Information Tech. 21(4), 284–298 (2006)
9. Chan, Y.E., Huff, S.L., Barclay, D.W., Copeland, D.G.: Business Strategic Orientation,
IS Strategic Orientation, and Strategic Alignment. ISR 8(2), 125–150 (1997)
10. Chan, Y.E.: Why haven’t we mastered alignment? The Importance of the informal organi-
zation structure. MIS Quarterly 1, 97–112 (2002)
11. Chan, Y.E., Reich, B.H.: IT alignment: what have we learned? Journal of Information
Technology 22(4), 297–315 (2007b) (advance online publication)
12. Doherty, N.F., Fulford, H.: Do information security policies reduce the incidence of
security breaches: an exploratory analysis. IRM Journal 18(4), 21–38 (2005)
13. El-Mekawy, M., Perjons, E., Rusu, L.: A Framework to Support Practitioners in Evaluat-
ing Business-IT Alignment Models. AIS Electronic Library (2013)
14. Gordon, L.A., Loeb, M.P.: The Economics of Information Security Investment. ACM
Transactions on Information and Systems Security 5(4), 438–457 (2002)
15. Gordon, L.A., Loeb, M.P., Lucyshyn, W., Richardson, R.: CSI/FBI Computer Crime and
Security Survey. Computer Security Institute (2005)
16. Henderson, J., Venkatraman, N.: Strategic alignment: leveraging information technology
for transforming organizations. IBM Systems Journal 32(1), 472–484 (1993)
17. Herath, H.S.B., Herath, T.C.: Cyber-Insurance: Copula Pricing Framework and Implica-
tions for Risk Management. In: Proceedings of the Sixth Workshop on the Economics of
Information Security, Carnegie Mellon University, June 7-8 (2007)
18. Huang, C.D., Hu, Q., Behara, R.S.: Investment in information security by a risk-averse
firm. In: Proceedings of the 2005 Softwars Conference, Las Vegas, Nevada (2005)
19. Huang, C.D., Hu, Q., Behara, R.S.: Economics of Information Security Investment in the
Case of Simultaneous Attacks. In: Proceedings of the Fifth Workshop on the Economics of
Information Security, Cambridge University, pp. 26–28 (2006)
20. Johnson, M.E., Goetz, E.: Embedding Information Security into the Organisation. IEEE
Security & Privacy, 16–24 (2007)
21. Kearns, G.S., Lederer, A.L.: The Effect of Strategic Alignment on the use of IS-Based
Resources for Competitive Advantage. Journal of Strategic IS 9(4), 265–293 (2000)
22. Kowalski, S.: The SBC Model: Modeling the System for Consensus. In: Proceedings of
the 7th IFIP TC11 Conference on Information Security, Brighton, UK (1991)
23. Kowalski, S., Boden, M.: Value Based Risk Analysis: The Key to Successful Commercial
Security Target for the Telecom Industry. In: 2nd Annual International Common Criteria
CC Conference, Ottawa (2002)
24. Kowalski, S., Edwards, N.: A security and trust framework for a Wireless World: A Cross
Issue Approach, Wireless World Research Forum no. 12, Toronto, Canada (2004)
25. Kumar, V., Telang, R., Mukhopahhyay, T.: Optimally securing interconnected information
systems and assets. In: 6th Workshop on the Economics of IS, CM University (2007)
26. Lacity, M.C., Willcocks, L., Feeny, D.: IT outsourcing: maximise flexibility and control.
Harvard Business (1995)
27. Lee, S.W., Gandhi, R.A., Ahn, G.J.: Establishing trustworthiness in services of the critical
infrastructure through certification and accreditation. SIGSOFT Softw. Eng. Notes 30(4),
1–7 (2005)
28. Leonard, J., Seddon, P.: A Meta-model of Alignment. Communications of the Association
for Information Systems 31(11), 230–259 (2012)
29. Luftman, J.: Assessing Business-IT Alignment Maturity. Communications of the Associa-
tion for Information Systems 4, Article 14 (2000)
30. Luftman, J.N.: Managing IT Resources. Prentice Hall, Upper Saddle (2004)
31. Luftman, J., Ben-Zvi, T.: Key Issues for IT Executives: Difficult Economy’s Impact on IT.
MIS Quarterly Executive 9(1), 49–59 (2010)
32. Oltedal, S., Moen, B., Klempe, H., Rundmo, T.: Explaining Risk Perception. An evalua-
tion of cultural theory. Norwegian University of Science and Technology (2004)
33. Ogut, H., Menon, N., Raghunathan, S.: Cyber Insurance and IT security investment: Im-
pact of interdependent risk. In: Workshop on the Economics of Information Security,
WEIS 2005, Kennedy School of Government, Harvard University, Cambridge, Mass.
(2005)
34. Reich, B.H., Benbasat, I.: Factors That Influence The Social Dimension of Alignment
Between Business And IT Objectives. MIS Quarterly 24(1), 81–113 (2000)
35. Sabherwal, R., Chan, Y.E.: Alignment Between Business and IS Strategies: A Study of
Prospectors, Analyzers, and Defenders. IS Research 12(1), 11–33 (2001)
36. Saleh, M.: Information Security Maturity Model. Journal of IJCSS 5(3) (2011)
37. Schwaninger, M.: From dualism to complementarity: a systemic concept for the research
process. International Journal of Applied Systemic Studies 1(1), 3–14 (2007)
38. Smaczny, T.: Is an alignment between business and information technology the appropri-
ate paradigm to manage IT in today’s organisations? Management Decision 39(10),
797–802 (2001)
39. Tarafdar, M., Qrunfleh, S.: IT-Business Alignment: A Two-Level Analysis. Information
Systems Management 26(4), 338–349 (2009)
40. Whitman, M.E., Mattord, H.J.: Principles of Information Security. Thomson Course Tech.
(2003)
41. Van Der Zee, J.T.M., De Jong, B.: Alignment is Not Enough: Integrating business and in-
formation technology management with the balanced business scoreboard. Journal of
Management Information Systems 16(2), 137–156 (1999)
42. von Solms, B., von Solms, R.: The ten deadly sins of information security management.
Computers & Security 23(5), 371–376 (2004)
43. Yee, K.P.: User Interaction Design for Secure Systems. In: Faith Cranor, L., Garfinkel, S.
(eds.) Security and Usability: Designing Secure Systems that People Can Use, pp. 13–30.
O’Reilly Books (2005)
Examining Significant Factors and Risks Affecting
the Willingness to Adopt a Cloud-Based CRM
1 Introduction
Although Cloud Computing has been undergoing rapid evolution and advancement, it
is still an emerging and complex technology [1], and our understanding of, and
regulatory guidance related to, cloud computing is still limited [2]. These limitations
raise significant concerns about the security, privacy, performance, and trustworthiness
of cloud-based applications [3, 4]. While the cloud offers a number of advantages, until
some of the risks are better understood and controlled, cloud services might not be
adopted with as much alacrity as was expected [5].
Although there are studies investigating the implementation of CRM systems
[6, 7], there is a lack of research on the adoption of cloud-based CRMs. To
successfully adopt and implement a cloud-based CRM, client organizations need to
understand cloud computing and its characteristics, and to take into account the risks
involved when deciding to migrate their applications to the cloud. Cloud service
providers likewise need to improve their understanding of client users' behavior,
such as how they act and what factors affect their choices, in order to increase the
rate of adoption.
2 Literature Review
This study explores the roles of Risks relating to Tangible Resources, Intangible
Resources, and Human Resources; perceived usefulness, perceived ease of use, sub-
jective norm and Trust in the adoption of Cloud-Based CRMs. The study is informed
by the Resource-Based View Framework, Risk and Trust Theories, and the Technolo-
gy Acceptance Model (TAM2).
We adopt the Efraim, Linda [8] view of Cloud Computing as the general term for
infrastructures that use the Internet and private networks to access, share, and deliver
computing resources with minimal management effort or service provider interaction.
In the cloud context, users pay for services as an operating expense instead of an
upfront capital investment [9].
Cloud computing provides several advantages, including cost reduction [4, 9], or-
ganizational agility and often competitive advantage [10, 11]. However, there is a lot
of uncertainty and skepticism around the cloud that stakeholders in cloud computing
(e.g. providers, consumers and regulators) should take into account, including the gap
in cloud capabilities, security, and audit and control risks. The next sections examine
these risks more thoroughly.
Efraim, Linda [8, pg. 324] define CRM as the methodologies and software tools that
automate marketing, selling, and customer service functions to manage the interaction
between an organization and its customers, and to leverage customer insights to
acquire new customers, build greater customer loyalty, and increase profit levels.
One of the biggest benefits of a cloud-based CRM is that it is easily accessible via
mobile devices from any location, at any time [8, pg. 328]. In addition, a cloud-based
CRM allows enterprises, especially Small and Medium Enterprises (SMEs), not only
to achieve cost benefits through pay-per-use, without a large upfront investment, but
also to mimic their larger rivals in effectively managing and enhancing customer
relationship processes.
The original TAM model does not incorporate the effect of the social environment
on behavioral intention. Therefore, we apply TAM2 [14], which hypothesizes per-
ceived usefulness, perceived ease of use, and subjective norm as the determinants of
Usage Intention, to our conceptual research model.
We apply TAM2 to our theoretical foundation and define the constructs as follows:
Perceived usefulness, for the purpose of this paper, is defined as the degree to
which an individual believes that using a cloud-based CRM would improve his or her
job performance. Seven capabilities of cloud computing, namely controlled interfaces,
location independence, sourcing independence, ubiquitous access, virtual business
environments, addressability and traceability, and rapid elasticity [10], enable users to
access the application, internal and external resources over the internet easily and
seamlessly. This has made cloud-based CRMs advantageous to client organizations.
Perceived ease of use of cloud-based CRMs refers to the extent to which a user
believes that using a cloud-based application would be free of effort.
Because one characteristic of cloud-based applications is the ease of switching between service providers, the more easily users can apply the application and its functions to their daily operations, without investing substantial effort in learning during the trial period, the more likely they are to be willing to adopt the application.
Subjective norm, for the purpose of this paper, is the degree to which an individual perceives that others believe he or she should use a specific cloud-based CRM. The advantage of virtual communities and social networks is that they allow users to share and exchange ideas and opinions within communities. An individual's behavior will be reinforced by the multiple neighbors in the social network who provide positive feedback and ratings [15]. In particular, when subscribing to a new application or purchasing a product, users tend to evaluate the product by examining the reviews of others [16]. This leads to the following propositions:
P1: Perceived Usefulness will positively affect the Willingness to Adopt Cloud-Based CRMs.
P2a: Perceived Ease of Use of Cloud-Based CRMs will positively affect Perceived Usefulness.
P2b: Perceived Ease of Use of Cloud-Based CRMs will positively affect the Willingness to Adopt Cloud-Based CRMs.
P3: Subjective Norm will positively affect the Willingness to Adopt Cloud-Based CRMs.
2.4 Trust
Trust has been regarded as the heart of relationships of all kinds [17] and a primary enabler of economic partnerships [18]. Building trust is particularly important when an activity involves uncertainty and risk [19]. In the context of cloud computing, uncertainty and risk are typically high because of the lack of standards and regulations and the complexity of the technology [1, 9]. This leads to a significant concern for enterprises about trust in cloud-based applications [20].
Antecedents of Trust
Prior research on trust has proposed a number of trust antecedents: knowledge-based trust, institution-based trust, calculative-based trust, cognition-based trust, and personality-based trust [for more details, see 21]. We consider that initial trust formation would directly affect the organization's willingness to adopt.
Personality-based trust – personal perception: is formed from the belief that others are reliable and well-meaning [22], resulting in a general tendency to believe in others and so trust them [23]. This disposition is especially important for new organizational relationships, in which client users are inexperienced with service providers [24].
Cognition-based trust – perception of reputation: is built on first impressions rather than experiential personal interactions [23]. In the context of cloud-based CRMs, to assess the trustworthiness of cloud service providers, client organizations tend to base their evaluation on secondhand information such as the provider's reputation. The reputation of providers is also particularly important when considering cloud adoption and implementation [25].
Institution-based trust – perception of structural assurance: is formed from safety nets such as regulations, guarantees, and legal recourse [26].
A service-level agreement (SLA) is a negotiated contract between a cloud service provider and a client organization. Cloud service providers use SLAs to boost consumers' trust by issuing guarantees on service delivery.
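For illustration only, such SLA guarantees can be sketched as structured data; the field names, values, and the credit rule below are our own hypothetical assumptions, not terms from any actual provider's SLA:

# Hypothetical sketch of SLA guarantees as structured data. Field names
# and values are illustrative assumptions, not a real provider's terms.
sla = {
    "uptime_guarantee": 0.999,    # guaranteed fraction of availability
    "support_response_hours": 4,  # maximum time to respond to a ticket
    "data_ownership": "client",   # who owns the stored CRM data
    "service_credit_rate": 0.10,  # fraction of the fee refunded per breach
}

def monthly_credit(observed_uptime, monthly_fee):
    """Service credit owed when observed uptime falls below the guarantee."""
    if observed_uptime < sla["uptime_guarantee"]:
        return monthly_fee * sla["service_credit_rate"]
    return 0.0

print(monthly_credit(0.995, 1000.0))  # 100.0: a credit is owed for the breach

Making the guarantee explicit in this way is precisely what turns an SLA into a structural assurance: the client can verify, rather than merely hope, that a breach has consequences.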
Knowledge-based trust: is formed and developed over time through interaction between participants [21, 27]. This type of trust might be absent at the first meeting between a service provider and a client organization. However, during the trial period, interaction and communication between the parties will affect their level of trust in each other, thus improving their behavioral intention to continue adopting the application.
Based on our argument above, and because we are using already-validated measures of trust, we make the following complex proposition:
P4: Personal Perception, Perception of the Reputation of a cloud-based CRM provider, Perception of Structural Assurances built into a cloud-based CRM, and Knowledge-based Trust will positively affect Trust in a cloud-based CRM provider.
Consequences of Trust
Heightened levels of trust, as a specific belief in a service provider, are associated with a heightened willingness to use services supplied by that provider. Cloud computing is still in its infancy [28] and involves a certain level of technological complexity [29] and immaturity of standards, regulations, and SLAs; thus we propose:
P5: Trust in a Cloud-based CRM Provider will positively affect the Willingness to Adopt a Cloud-based CRM.
Trust in a cloud service provider implies the belief that the service provider will deliver accurate, high-quality services, as expected. Users are unlikely to accept unexpected failures of the system or network, or substandard service performance. Therefore, a service provider's guarantees, issued through SLAs, and other elements such as the provider's reputation or customer service during the trial period, would bolster users' confidence. Such guarantees are likely to increase the likelihood that the CRM application will improve users' performance in managing customer relationships. Conversely, adopting an application from an untrustworthy service provider might result in reduced usefulness. Based on this, we propose that:
P6: Trust in a Cloud-based CRM Provider will positively affect the Perceived Usefulness of Cloud-based CRMs.
The RBV explains the role of resources in firm performance and competitive advantage [30]. Barney [30] went on to show that, to achieve sustained competitive advantage, resources must be “valuable, rare, difficult to imitate, and non-substitutable”. When the RBV is placed in the context of cloud computing, a number of organizational resources can affect the competitiveness and performance of firms. First, by accessing current infrastructures and using complementary capabilities from cloud providers, clients can focus on internal capabilities and core competencies to achieve competitive advantage [11]. Second, one characteristic of cloud-based applications is the ease of switching between service providers, and the number of options for customers has increased over time. Customers tend to seek quality products, and if service providers cannot ensure the necessary resources and capabilities, they might lose their current and potential customers to their competitors.
Therefore, the more uncertainty that affects the effectiveness of the firm's resources, the lower the probability that firms will achieve good performance and competitive advantage.
De-duplication and merging processes also provide significant challenges for service providers [34].
However, trust in a cloud service provider, resulting from the provider's reputation and structural assurance (e.g., SLAs), can to some extent lessen the fear of incidents and risks related to data security and privacy. In the cloud context, cloud users face insecure application programming interfaces (APIs), malicious insiders, data breaches, data loss, and account hijacking [4, 31]. In addition, cloud providers may be perceived to have too much power to view, and potentially abuse, sensitive customer data. Therefore, a provider with a good reputation and sufficient security mechanisms will provide confidence that customer data will be stored and protected against illegal access, and will thereby increase the likelihood of adoption of the cloud-based application.
Based on our argument above, we make the following propositions:
P7a: Data-Related Risks will negatively affect the Willingness to Adopt Cloud-Based CRMs.
P7b: Trust moderates the relationship between Data-Related Risks and the Willingness to Adopt Cloud-Based CRMs.
Economic Risks
With a cloud-based application, business risk is decreased by the lower upfront investment in IT infrastructure [3], although there remains uncertainty about hidden costs arising while customers use the application. For example, to access the full capabilities of an application, customers may have to pay more for the advanced version [35]. The more reliable and specialized the hardware, software, and services offered, the higher the price service providers would set [36].
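As a purely hypothetical numerical illustration of such hidden costs (all figures below are our own assumptions, not data from the cited studies), a pay-per-use subscription that looks cheap initially can overtake an upfront license once a mid-term upgrade to an advanced tier becomes necessary:

# Hypothetical cost comparison: subscription vs. upfront license.
# All figures are illustrative assumptions, not data from the cited studies.
USERS = 10
BASIC_FEE = 25.0           # per user per month, base tier
ADVANCED_FEE = 40.0        # per user per month, after upgrading for full features
UPFRONT_LICENSE = 12000.0  # one-off on-premise alternative

def subscription_cost(months, upgrade_after=12):
    """Total subscription spend, assuming an upgrade to the advanced tier
    after `upgrade_after` months (the 'hidden' cost)."""
    basic = min(months, upgrade_after) * USERS * BASIC_FEE
    advanced = max(0, months - upgrade_after) * USERS * ADVANCED_FEE
    return basic + advanced

for months in (12, 24, 36):
    print(months, subscription_cost(months))
# 12 3000.0  -> far cheaper than the upfront license
# 24 7800.0
# 36 12600.0 -> the subscription now exceeds the 12,000 upfront cost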
Furthermore, as medium and large enterprises migrate enterprise applications such as CRMs and ERPs to cloud-based environments, the cost of transferring organizational data is likely to increase, especially if the organization applies a hybrid cloud deployment model in which data is stored in distinct cloud infrastructures (e.g., private, community, and public) [37]. Thus:
P8: Economic Risks will negatively affect the Willingness to Adopt Cloud-Based CRMs.
IT Infrastructure risks
IT infrastructure risks concern the possibility that the service provider may not deliver the expected level of infrastructure; that is, the network infrastructure may not provide the expected speed or reliability. One positive characteristic of cloud computing is rapid elasticity, which enables the scaling up or down of service usage based on virtualization technology [11]. However, risks such as the unpredictable performance of virtual machines, frequent system outages, and connectivity problems can affect all of a provider's customers at once, with significant negative impacts on their business operations [4].
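As a minimal sketch of the elasticity mechanism described above (our own illustration; the thresholds, bounds, and function name are assumptions), a provider-side autoscaling rule can be reduced to a simple threshold policy:

# Minimal sketch of a threshold-based elasticity rule; thresholds and
# bounds are illustrative assumptions.
def scale_decision(cpu_utilization, instances):
    """Return the new virtual-machine count for a simple scaling policy."""
    if cpu_utilization > 0.80 and instances < 100:
        return instances + 1   # scale up under heavy load
    if cpu_utilization < 0.20 and instances > 1:
        return instances - 1   # scale down when mostly idle
    return instances

print(scale_decision(0.90, 4))  # 5
print(scale_decision(0.10, 4))  # 3

The risks listed above are precisely the cases where such a policy misfires, for example when virtual-machine performance does not improve as instances are added.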
IT infrastructure risks also include the risk of problems related to the integration between cloud-based applications and internal systems. The perceived IT infrastructure risks mentioned above are likely to give users the perception that the CRM might not perform as smoothly and seamlessly as expected. Thus:
P9: IT Infrastructure Risks will negatively affect the Perceived Cloud-based CRM Usefulness.
Managerial risks
From a psychosocial view, IT executives might be conscious of the negative consequences of adopting cloud-based applications [35]. The likelihood of successfully implementing a new system largely depends on good project management and leadership skills [39], and on effective coordination and interaction with stakeholders [38]. Because cloud-based CRMs involve business process changes, integration of the new system into the existing IT infrastructure, and the exploitation of new technologies, organizations need technological and organization-specific knowledge of how to implement cloud solutions to operate business transactions and achieve business objectives [39].
Managerial risk might be reduced if there is a strong belief in the cloud service provider. Trust can bolster executives' optimism about desirable consequences [21, 23]; as a result, they might be willing to adopt a cloud-based application when they trust the service provider. We propose that managerial risk will affect the willingness to adopt cloud-based CRMs, and that this relationship is moderated by Trust in a cloud-based CRM provider.
P11a: Managerial Risks will negatively affect the Willingness to Adopt Cloud-Based CRMs.
P11b: Trust moderates the relationship between Managerial Risks and the Willingness to Adopt Cloud-Based CRMs.
Strategic risk
Strategic risks include the risk that cloud-based CRM clients might become heavily dependent on the service providers and their applications. The cloud-based CRM applications may not be flexible enough to respond to changes in clients' business strategies and thus to ensure alignment between IT and business strategies [35].
A high degree of dependence on a cloud provider may also cause vendor lock-in
and business continuity issues [4, 31].
However, trust in a cloud provider, resulting from the provider's reputation and structural assurance (e.g., SLAs), can to some extent lessen this fear. When the provider issues guarantees about data ownership, disaster recovery plans, standards, and assurances that regulations are followed, the level of trust is raised. Thus, a provider with a strong reputation can give the impression that it is able to sustain superior profit outcomes [40]. Thus:
P12a: Strategic Risks will negatively affect the Willingness to Adopt Cloud-Based CRMs.
P12b: Trust moderates the relationship between Strategic Risks and the Willingness to Adopt Cloud-Based CRMs.
Audit risk
Audit risk is the probability that there will be material misstatements in the client organization's financial statements. This can result from a lack of internal control and governance, ambiguous agreements on data ownership, and/or immature regulations and standards for cloud computing.
SAS No. 107 [41] categorizes audit risk into three components: inherent risk, control risk, and detection risk. Inherent risk is the possibility that a material misstatement in the client's financial statements will occur in the absence of appropriate internal control procedures. Control risk is the risk that a material misstatement will not be detected and corrected by management's internal control procedures. Detection risk is the risk that the auditor will not detect a material misstatement. Cloud computing places an increased burden on the auditor [2], and a lack of understanding of the technical and business aspects of cloud computing, as well as of its associated risks, might lead to an increase in detection risk.
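These three components are conventionally combined in the multiplicative audit risk model associated with SAS No. 107; the formulation below is the standard textbook rendering, not an equation stated in this paper:

AR = IR \times CR \times DR \quad\Longrightarrow\quad DR = \frac{AR}{IR \times CR}

Holding the acceptable overall audit risk (AR) fixed, higher inherent risk (IR) or control risk (CR) in a cloud setting forces the auditor to plan for a lower detection risk (DR), that is, to perform more extensive substantive testing.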
These risks can affect trust in cloud service providers if the providers do not issue appropriate SLAs that specify their responsibilities for services and data ownership, and the regulations and standards they will follow. Thus:
P13: An increasing level of Audit Risk will negatively affect Trust in a cloud-based CRM provider.
3 Research Model
Following from the review presented in the previous section, we propose the research model depicted in Figure 1.
4 Research Method
We seek to gather data from individual users who have undertaken a trial and examination phase of a cloud-based CRM before deciding whether to fully adopt it. To test this model, we consider a survey-based approach to be the most appropriate [see 43].
The following steps need to be taken:
1. We adopt measures from the literature for each of the constructs in the model, and operationalize them so that they can be used to gather the required data.
2. A preliminary web analysis of the constructs was performed to validate the measures developed in the model. We collected user comments on three cloud-based CRM applications, namely Salesforce.com, Insightly, and Zoho CRM, from the Apple App Store, Google Apps Marketplace, Google Play, and BlackBerry World. In total, 1,579 comments posted by users who were considering trialling, or who were trialling, the applications were collected.
3. Based on the analysis of the preliminary data, we ensure that all comments can be categorised by our constructs in the final questionnaire (a sketch of such a categorization step follows this list).
4. A large-scale survey would then be conducted to test our model of factors and risks
involved in the adoption of a cloud-based CRM.
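The following is a minimal sketch of how the categorization step could be automated; the keyword lists, construct names, and function are our own illustrative assumptions, not the coding scheme actually used in the study:

# Illustrative keyword-based tagging of app-store comments with the
# model's constructs. Keyword lists are hypothetical, not the study's
# actual coding scheme.
CONSTRUCT_KEYWORDS = {
    "perceived_usefulness": ["productive", "saves time", "useful"],
    "perceived_ease_of_use": ["easy", "intuitive", "simple"],
    "trust": ["reliable", "secure", "trust"],
    "data_related_risk": ["privacy", "data loss", "breach"],
    "economic_risk": ["price", "expensive", "subscription"],
}

def categorize(comment):
    """Return the constructs whose keywords appear in a comment."""
    text = comment.lower()
    return [construct for construct, keywords in CONSTRUCT_KEYWORDS.items()
            if any(kw in text for kw in keywords)]

print(categorize("Very easy to set up and intuitive to use."))
# ['perceived_ease_of_use']
print(categorize("Worried about data loss if the service goes down."))
# ['data_related_risk']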
5 Conclusion
This paper presents the factors and risks involved in the adoption of a cloud-based CRM. These factors and risks were derived from the analysis of research conducted into the adoption of information technology and systems, cloud computing, trust, and audit risk. From this research foundation, a model was developed and presented.
This research will help provide more insights into client user behaviour toward the adoption of a cloud-based CRM. This study also offers several practical implications. First, perceptions of risk may together inhibit cloud-based CRM adoption. It is recommended that cloud service providers develop appropriate strategies to counter these concerns. For example, effective risk-mitigation strategies may include strong guarantees, better transparency, and more consumer control of data and processes. Client users may be more willing to overlook the perceived risks if they know what is happening with their application and data, and they are confident that the service provider is trustworthy and can perform efficiently to ensure the system runs smoothly.
Second, our study suggests that cloud-based CRM adoption depends heavily on perceived usefulness, perceived ease of use, and a trusting belief in the cloud service provider. By acting in a competent and honest manner, a cloud service provider can maintain high trust, resulting in client organizations' willingness to adopt, and continue using, its cloud-based CRM.
Future studies may include other aspects that might influence adoption, such as organizational characteristics (e.g., firm size, organizational strategies, maturity of current information systems), industry characteristics (e.g., competitive intensity), and personal characteristics (e.g., gender, age, experience).
References
1. Blaskovich, J., Mintchik, N.: Information Technology Outsourcing: A Taxonomy of Prior
Studies and Directions for Future Research. Journal of Information Systems 25(1), 1–36
(2011)
2. Alali, F.A., Chia-Lun, Y.: Cloud Computing: Overview and Risk Analysis. Journal of
Information Systems 26(2), 13–33 (2012)
3. Pearson, S.: Privacy, Security and Trust in Cloud Computing. HP Laboratories Technical Report (2012)
4. Armbrust, M., et al.: A View of Cloud Computing. Communications of the ACM 53(4),
50–58 (2010)
5. Youseff, L., Butrico, M., Da Silva, D.: Toward a unified ontology of cloud computing.
In: Grid Computing Environments Workshop, GCE 2008. IEEE (2008)
6. Kim, H.-S., Kim, Y.-G., Park, C.-W.: Integration of firm’s resource and capability to
implement enterprise CRM: A case study of a retail bank in Korea. Decision Support
Systems 48(2), 313–322 (2010)
7. Avlonitis, G.J., Panagopoulos, N.G.: Antecedents and consequences of CRM technology
acceptance in the sales force. Industrial Marketing Management 34(4), 355–368 (2005)
8. Turban, E., Volonino, L., Wood, G.R.: Information Technology for Management, 9th edn. Wiley (2013)
9. Marston, S., et al.: Cloud computing — The business perspective. Decision Support
Systems 51(1), 176–189 (2011)
10. Iyer, B., Henderson, J.C.: Preparing for the Future: Understanding the Seven Capabilities
of Cloud Computing. MIS Quarterly Executive 9(2), 117–131 (2010)
11. Iyer, B., Henderson, J.C.: Business value from Clouds: Learning from Users. MIS Quarter-
ly Executive 11(1), 51–60 (2012)
12. Fishbein, M., Ajzen, I.: Belief, attitude, intention and behavior: An introduction to theory
and research (1975)
13. Davis, F.D.: A technology acceptance model for empirically testing new end-user information systems: Theory and results. Doctoral dissertation, Massachusetts Institute of Technology (1986)
14. Venkatesh, V., Davis, F.D.: A Theoretical Extension of the Technology Acceptance Mod-
el: Four Longitudinal Field Studies. Management Science 46(2), 186–204 (2000)
15. Centola, D.: The Spread of Behavior in an Online Social Network Experiment.
Science 329(5996), 1194–1197 (2010)
16. Park, D.-H., Lee, J., Han, I.: The Effect of On-Line Consumer Reviews on Consumer
Purchasing Intention: The Moderating Role of Involvement. International Journal of Elec-
tronic Commerce 11(4), 125–148 (2007)
17. Morgan, R.M., Shelby, D.H.: The Commitment-Trust Theory of Relationship Marketing.
Journal of Marketing 58(3), 20–38 (1994)
18. Gefen, D.: What Makes an ERP Implementation Relationship Worthwhile: Linking Trust
Mechanisms and ERP Usefulness. Journal of Management Information Systems 21(1),
263–288 (2004)
19. Luhmann, N.: Familiarity, confidence, trust: Problems and alternatives. Trust: Making and
Breaking Cooperative Relations 6, 94–107 (2000)
20. Huang, J., Nicol, D.: Trust mechanisms for cloud computing. Journal of Cloud Compu-
ting 2(1), 1–14 (2013)
21. Gefen, D., Karahanna, E., Straub, D.W.: Trust and TAM in Online Shopping: An
Integrated Model. MIS Quarterly 27(1), 51–90 (2003)
22. Wrightsman, L.S.: Interpersonal trust and attitudes toward human nature. Measures of
Personality and Social Psychological Attitudes 1, 373–412 (1991)
23. McKnight, D.H., Cummings, L.L., Chervany, N.L.: Initial Trust Formation in New Orga-
nizational Relationships. The Academy of Management Review 23(3), 473–490 (1998)
24. Gefen, D.: E-commerce: the role of familiarity and trust. Omega 28(6), 725–737 (2000)
25. Koehler, P., et al.: Cloud Services from a Consumer Perspective. In: AMCIS. Citeseer
(2010)
26. Sitkin, S.B.: On the positive effects of legalization on trust. Research on Negotiation in
Organizations 5, 185–218 (1995)
27. Holmes, J.G.: Trust and the appraisal process in close relationships (1991)
28. Misra, S.C., Mondal, A.: Identification of a company’s suitability for the adoption of cloud
computing and modelling its corresponding Return on Investment. Mathematical and
Computer Modelling 53(3-4), 504–521 (2011)
29. Subashini, S., Kavitha, V.: A survey on security issues in service delivery models of cloud
computing. Journal of Network and Computer Applications 34(1), 1–11 (2011)
30. Barney, J.: Firm Resources and Sustained Competitive Advantage. Journal of Manage-
ment 17(1), 99 (1991)
31. Nicolaou, C.A., Nicolaou, A.I., Nicolaou, G.D.: Auditing in the Cloud: Challenges and
Opportunities. CPA Journal 82(1), 66–70 (2012)
32. Barwick, H.: Cloud computing still a security concern: CIOs, September 17-20 (2013),
http://www.cio.com.au/article/526676/
cloud_computing_still_security_concern_cios/?fp=16&fpid=1
33. Emison, J.M.: 9 Vital Questions on Moving Apps to the Cloud. InformationWeek Reports (2012)
34. Buttle, F.: Customer relationship management. Routledge
35. Benlian, A., Hess, T.: Opportunities and risks of software-as-a-service: Findings from a
survey of IT executives. Decision Support Systems 52(1), 232–246 (2011)
36. Durkee, D.: Why cloud computing will never be free. Commun. ACM 53(5), 62–69 (2010)
37. Dillon, T., Wu, C., Chang, E.: Cloud computing: Issues and challenges. In: 2010 24th
IEEE International Conference on Advanced Information Networking and Applications
(AINA). IEEE (2010)
38. Finnegan, D.J., Currie, W.L.: A multi-layered approach to CRM implementation: An inte-
gration perspective. European Management Journal 28(2), 153–167 (2010)
39. Garrison, G., Kim, S., Wakefield, R.L.: Success Factors for Deploying Cloud Computing.
Communications of the ACM 55(9), 62–68 (2012)
40. Roberts, P.W., Dowling, G.R.: Corporate Reputation and Sustained Superior Financial
Performance. Strategic Management Journal 23(12), 1077–1093 (2002)
41. AICPA, Audit Risk and Materiality in Conducting an Audit. Statement on Auditing Stan-
dards No.107, AICPA (2006)
42. Sun, B.: Technology Innovation and Implications for Customer Relationship Management.
Marketing Science 25(6), 594–597 (2006)
43. Yin, R.K.: Case study research: Design and methods, vol. 5. Sage (2003)
Towards Public Health Dashboard Design Guidelines
1 Introduction
Public health crises such as the recent Listeria outbreaks or the 2009 influenza pan-
demic require the immediate attention of public health directors and practitioners who
coordinate diagnosis and care for affected populations. Continual monitoring of the
public health environment allows for faster response and may reduce the impact of
such emergencies. To address this need, digital dashboards have been shown to be an
effective means to quickly assess and communicate the situation. Often these dash-
boards include computerized interactive tools that are typically used by managers to
visually ascertain the status of their organization (in this case, the public health envi-
ronment) via key performance indicators (Cheng et al., 2011). Dashboards allow users
to monitor one or more systems at a glance by integrating them and summarizing key
metrics in real time to support decision making (Kintz, 2012; Morgan et al., 2008). In
the medical field, dashboards continue to expand and have been used for purposes
such as emergency response coordination (Schooley et al., 2011), patient monitoring
(Gao et al., 2006), and influenza surveillance (Cheng et al., 2011).
The US states of Nebraska, Kansas, and Oklahoma use a public health emergency
response information system (PHERIS) to allow hospital microbiology laboratorians
to monitor and report public health episodes across their state. In the case of a
potential outbreak, the PHERIS is the tool used by the microbiologists at the clinical
laboratory to consult with epidemiology experts at the State Public Health Laboratory
through a secure connection over the Internet. This system provides functionality to
send informational text and images of specimens between laboratories and the state
public health laboratory. However, to further enhance the functionality and usability
of the PHERIS, it would be ideal if there were a single display screen (e.g., a digital dashboard) where the State Public Health Director could, at a glance, immediately assess whether any potential outbreaks are on the cusp of happening.
The first aim of our study is to analyze and apply dashboard specific design guide-
lines we identified in our literature review through a new dashboard interface opti-
mized for real-time disease outbreak and public health emergency surveillance.
Second, we will evaluate if there are any missing guidelines.
In the remainder of this paper, we begin by presenting background information on
the public health area, on the PHERIS (the system that is used in this study), and
on the various dashboard design guidelines found in the literature. Next, we present
our application of the selected medical dashboard guidelines to the new dashboard
design. Then we present our analysis of missing dashboard guidelines. We conclude
with remarks on the next phases planned for this study.
2 Background
The intent of the PHERIS (STATPack™) system used in this study was to address critical health communication and biosecurity needs in State Public Health Laboratories.
As shown in our summary table of the reviewed literature, the number of guidelines specific to public health monitoring dashboards is relatively low: only two studies, providing a total of nine guidelines, fall into this field.
When we widen the criteria to include all medical dashboard guidelines, four more
studies presenting 33 guidelines can be included. Furthermore, there are two relevant
papers discussing 11 best practices for medical/public health emergency response
systems design. Also, two studies in the field of information visualization and general dashboard design have some overlapping relevance and thus are included.
The dashboard and data visualization guidelines developed by Few (2006) and
Tufte (2001) were reviewed and considered in this study. Even though they are general in nature and not specific to medical dashboards, we included them because they provide important contributions to information visualization and dashboard user interface design.
We also included Turoff et al. (2004)’s eight design principles for emergency
response information systems (not necessarily dashboards) in our literature review.
We decided to do this because Turoff’s principles are concerned with the content
required to make emergency response information systems useful.
After identifying the most salient studies, we performed a meta-analysis of all the
guidelines for dashboard design. In total, 58 guidelines were identified in the litera-
ture. Among these there were several recurring themes as well as guidelines unique to
the medical field.
The most common themes were those of designing dashboards as customizable,
actionable “launch pads”, supporting correct data interpretation, and aggregating and
summarizing information. Also frequently mentioned were adherence to conventions,
minimalist design, in-line guidance and user training, workload reduction, and using
GIS interfaces. Of these guidelines, 33 were unique to the field of medical dashboards, while 17 were not applicable and 7 were too general.
The other 50 guidelines can be sorted into eight themes that emerged from their review. The summary table shows the number of guidelines in each thematic area and the studies represented within.
knowledge and expertise where there were gaps (Fruhling, 2006; Lechner et al., 2013;
Read et al., 2009). Figures 1 and 2 show the same STATDash, but in different states. Figure 1 shows the overview screen, while Figure 2 shows the location drill-down screen.
health laboratory experts displayed on a single screen. This was achieved by showing
the status of each location with a color code on a map. We also included two charts
below the map that show the history of alert activity at various intervals: yearly,
monthly or daily.
Activity is organized by routine/exercise and emergency/urgent alerts to allow the
user to determine if a state of urgency exists. The right side of the screen shows
details of recent activity (recent alerts received, clients becoming unavailable, and
images stored). This list can be filtered to show only activity of a certain type. In ad-
dition, users can customize the thresholds used to determine the color a location is
displayed in.
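As an illustration of this customization (a sketch under our own assumptions; the threshold values and names are not STATDash's actual configuration), the color coding can be expressed as a small threshold function:

# Illustrative mapping from a location's recent urgent-alert count to a
# traffic-light marker color; thresholds are user-customizable and the
# default values here are assumptions, not STATDash's configuration.
DEFAULT_THRESHOLDS = {"yellow": 1, "red": 3}

def location_color(urgent_alerts, thresholds=DEFAULT_THRESHOLDS):
    """Map an urgent-alert count to a map-marker color."""
    if urgent_alerts >= thresholds["red"]:
        return "red"
    if urgent_alerts >= thresholds["yellow"]:
        return "yellow"
    return "green"

print(location_color(0))  # green
print(location_color(2))  # yellow
print(location_color(5))  # red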
The dashboard is also actionable (Few, 2006; Morgan et al., 2008). Clicking on a
location marker allows the user to view details about that location, such as recent
alerts and images, contact information, and access to advanced functionality. In addi-
tion, clicking on a point in one of the charts shows the details of that data point.
These dashboard features require few touches/clicks to navigate the system
(Schooley et al., 2011). When a user wanted to send an alert to a client in the old user
interface, they had to click on the “Send Message” button, then locate the name of the
client in a long list, select it, and then type their message.
As mentioned above, aggregated information is data that has been gathered and
expressed in a summary form, often for the purposes of statistical analysis. In our
example, the STATDash shows information aggregated at different levels. At the top
of the screen, a statement informs the user about the overall level and trend of activi-
ty. The map allows a user to see activity by location at a glance by implementing a
traffic light metaphor and different colors to convey meaning (Cheng et al., 2011;
Morgan et al., 2008). The section on the right hand side shows more detailed, action-
able information about the most recent -- most urgent -- activity. Finally, the two
charts at the bottom give a summary of historical data. These four elements give a
non-redundant, condensed, complete picture of disease activity following the guide-
lines presented by Few (2006) and Gao et al. (2006).
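To make the aggregation levels concrete, the sketch below (our own illustration; record fields and place names are hypothetical) shows the same alert records summarized for two of the display levels:

# Illustrative aggregation of alert records at two dashboard levels;
# record fields and place names are hypothetical.
from collections import Counter

alerts = [
    {"location": "Omaha", "severity": "urgent"},
    {"location": "Omaha", "severity": "routine"},
    {"location": "Wichita", "severity": "urgent"},
]

# Map level: urgent alerts per location (drives the marker colors).
per_location = Counter(a["location"] for a in alerts
                       if a["severity"] == "urgent")
# Top-of-screen level: overall activity count.
overall = len(alerts)

print(per_location)  # Counter({'Omaha': 1, 'Wichita': 1})
print(overall)       # 3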
Adherence to convention can be thought of as systems adhering to the same look and
feel across the entire user interface and using familiar, established user interface ele-
ments. Convention was observed by retaining the same principles for core functionali-
ty as before, including alert meta-data and transmission. The terminology and labels
within the system have also remained the same. Familiar symbols such as the map
markers and traffic light color coding were employed. As such, it will be easy for
users to learn to use the new dashboard, as they will already be accustomed to the
functionality and terminology (Gao et al., 2006).
The map in the center of the dashboard provides situational awareness of disease ac-
tivity and trends. This graphical display is combined with the performance indicators
above and below the map for a multi-faceted view of the current status (Schooley et
al., 2011). The map allows users to pan and zoom and select clients to view detailed
information and interact with them.
3.9 Content
Every alert and image stored within the system is identified by its source and location,
time of occurrence, and status (emergency, urgent, routine, or exercise) (Fruhling,
2006; Turoff et al., 2004). This allows users to clearly determine the source and sever-
ity of an alert and respond to it accordingly in the case of an emergency.
Up-to-date information that is refreshed whenever a user loads a screen (Turoff et al., 2004) is of great importance in an emergency response medical system and is fully implemented in STATDash, to ensure all users have the most current information available to them for decision making.
3.10 Guidelines
Of the guidelines reviewed for this study, two were not as salient for PHERIS dashboards; rather, they are general best practices. “Adherence
to conventions” is certainly a useful heuristic for designing dashboards, but it is too
general to be included in a set of best practices specific to PHERIS dashboards. In a
similar vein, providing “in-line guidance and training” is also too general. This guide-
line is applicable not only to this specific kind of dashboard, but to all computer sys-
tems in general (Nielsen, 1993).
The guidelines we found in our literature search were helpful in many ways; however, we identified two gaps. Therefore, we propose the following new guidelines. The first, minimize cognitive processing, seeks to reduce users' cognitive load by including all indicators on a single screen without a need for navigation. In addition, charts and graphs should be used where sensible to show trends visually and to support quick interpretation.
5 Conclusion
In conclusion, our analysis found several of the guidelines cited in the literature to be appropriate and useful for public health surveillance dashboard design; yet we also discovered that there were missing guidelines. Therefore, we propose two new guidelines: minimize cognitive processing, and use temporal trend analysis techniques. A limitation of this study is that we have not validated the two proposed guidelines, nor have we conducted any usability evaluation of our proposed STATDash design. Therefore, the next phase of our research is to involve users in conducting various usability evaluations of STATDash.
References
1. Centers for Disease Control and Prevention: CDC responds to disease outbreaks 24/7
(2011), http://www.cdc.gov/24-7/cdcfastfacts/
diseaseresponse.html
2. Cheng, C.K.Y., Ip, D.K.M., Cowling, B.J., Ho, L.M., Leung, G.M., Lau, E.H.Y.: Digital dashboard design using multiple data streams for disease surveillance with influenza surveillance as an example. Journal of Medical Internet Research 13, e85 (2011)
3. Diaper, D.: Task Analysis for Human-Computer Interaction. Ellis Horwood, Chichester
(1989)
4. Dolan, J.G., Veazie, P.J., Russ, A.J.: Development and initial evaluation of a treatment
decision dashboard. BMC Medical Informatics and Decision Making 13, 51 (2013)
5. Few, S.: Information Dashboard Design. O’Reilly, Sebastopol (2006)
6. Fruhling, A.: Examining the critical requirements, design approaches and evaluation
methods for a public health emergency response system. Communications of the Associa-
tion for Information Systems 18, 1 (2006)
7. Gao, T., Kim, M.I., White, D., Alm, A.M.: Iterative user-centered design of a next genera-
tion patient monitoring system for emergency medical response. In: AMIA Annual
Symposium Proceedings, pp. 284–288 (2006)
8. Institute of Medicine: The Future of Public Health. National Academy Press (1988)
9. Kintz, M.: A semantic dashboard language for a process-oriented dashboard design me-
thodology. In: Proceedings of the 2nd International Workshop on Model-Based Interactive
Ubiquitous Systems, Copenhagen, Denmark (2012)
10. Lechner, B., Fruhling, A., Petter, S., Siy, H.: The chicken and the pig: User involvement in
developing usability heuristics. In: Proceedings of the Nineteenth Americas Conference on
Information Systems, Chicago, IL (2013)
11. Morgan, M.B., Brandstetter IV, B.F., Lionetti, D.M., Richardson, J.S., Chang, P.J.: The
radiology digital dashboard: effects on report turnaround time. Journal of Digital Imag-
ing 21, 50–58 (2008)
12. Nielsen, J.: Usability Engineering. Academic Press, San Diego (1993)
13. Read, A., Tarrell, A., Fruhling, A.: Exploring user preferences for dashboard menu
design. In: Proceedings of the 42nd Hawaii International Conference on System Sciences,
pp. 1–10 (2009)
14. Schmidt, K.: Functional analysis instrument. In: Schaefer, G., Hirschheim, R., Harper, M.,
Hansjee, R., Domke, M., Bjoern-Andersen, N. (eds.) Functional Analysis of Office
Requirements: A Multiperspective Approach, pp. 261–289. Wiley, Chichester (1988)
15. Schooley, B., Hilton, N., Abed, Y., Lee, Y., Horan, T.: Process improvement and consum-
er-oriented design of an inter-organizational information system for emergency medical re-
sponse. In: Proceedings of the 44th Hawaii International Conference on System Sciences,
pp. 1–10 (2011)
16. Tufte, E.R.: The Visual Display of Quantitative Information, 2nd edn. Graphics Press,
Cheshire (2001)
17. Turnock, B.J.: Public Health: What It Is and How It Works. Jones and Bartlett Publishers,
Sudbury (2009)
18. Turoff, M., Chumer, M., Van de Walle, B., Yao, X.: The design of a dynamic emergency
response management information system (DERMIS). Journal of Information Technology
Theory and Application 5, 1–35 (2004)
19. World Health Organization: Public health (2014),
http://www.who.int/trade/glossary/story076/en/
20. Zhan, B.F., Lu, Y., Giordano, A., Hanford, E.J.: Geographic information system (GIS)
as a tool for disease surveillance and environmental health research. In: Proceedings
of the 2005 International Conference on Services, Systems and Services Management,
pp. 1465–1470 (2005)
Information Technology Service Delivery
to Small Businesses
Abstract. This paper reports findings from a study conducted to evaluate Intel’s
Service Delivery Platform for small businesses. The Service Delivery Platform
adopted a Software-as-a-Service (SaaS) approach, and aimed to deliver infor-
mation technology (IT) services on a pay-as-you-go subscription model. The
majority of small business decision makers found the solution appealing. Nev-
ertheless, wide adoption of the solution will be contingent on quality and
breadth of service offerings, cost, reliability of service delivery, and respon-
siveness of support.
1 Introduction
Small businesses in all countries are an important part of the economy [2, 4]. In the USA, more than 98% of all firms are small businesses with fewer than one hundred employees; these businesses employ about 36% of the total work force (US census data, 2004).
business with the help of new information technology. From 2004 to 2008 we visited
more than 50 small businesses to understand their technology needs in various areas,
including collaboration, information management, and IT manageability. We found
that IT landscapes in small businesses were smaller than, but just as complex as, those in large organizations. Small business needs included networks, servers, personal computers, phones, printers, and much other hardware.
they needed software applications for productivity, business process automation, and
internal and external collaboration. However, they were much more constrained than
larger businesses in terms of resources, knowledge, and expertise regarding informa-
tion technology. Small business owners consistently told us that they had challenges
in understanding and keeping up with the newest developments in technology, and in
selecting the best solutions for their businesses. They also had difficulty quickly
deploying solutions, maintaining a highly managed computing environment, and
providing end-user support. Many small businesses depended on external service
providers for IT management. These service providers were looking for solutions that
could help them to build trusted relationships more effectively with customers, and to
manage IT for different businesses more efficiently.
The Service Delivery Platform is designed to address these needs and challenges for business owners and for service providers. The platform adopts a Software-as-a-Service [1] approach. It aggregates services from different vendors, and aims to deliver the services to small businesses with a “pay-as-you-go” subscription model. Services here are intended to cover the applications that businesses may need for their daily operations, including IT management, employee productivity, and business processes. The platform provides a web-based portal that is targeted at two types of users: 1) business owners and decision makers, who will use the portal to conduct research on IT solutions and review recommendations and feedback from other users; and 2) internal or external IT administrators, who manage services and provide support for end users. The portal supports key user tasks such as service subscription, device management, status monitoring, and remote troubleshooting and support. Key portal components include:
This research was conducted to evaluate an early prototype of the Service Delivery Platform with small business owners and their internal or external IT administrators. In-depth interviews were conducted with twenty businesses in several locations across the United States, including New Jersey, New York, and Oregon. The primary goal was to understand their key perceptions regarding the value of such a solution, their intention to adopt, decision factors, and potential adoption hurdles. To support further design and development of the web portal, the research also tried to understand the perceived usefulness of its key features, and the priorities of potential service offerings on the platform.
2 Method
The two-hour interviews were conducted at the businesses' sites with both the business owner/decision maker and internal or external IT staff.
After general discussions about their business background and current IT practices, the Service Delivery Platform solution was presented to the interviewees with storyboards and visual paper prototypes or mockups. Afterward, those interviewed were asked to 1) rate the usefulness of major features of the platform and describe how the features might be used in their organizations, 2) review different potential service offerings in the catalog and discuss whether they were interested in subscribing to different IT services from the platform, and 3) discuss the overall appeal of the solution, adoption hurdles, and concerns.
3 Results
Out of the twenty businesses we interviewed, fifteen rated the platform solution as
appealing or very appealing. The businesses expressed general interest in subscribing
to services in areas related to security and protection, employee productivity (e.g.,
word processing and E-mail), and external service provider support. However, the
businesses also pointed out that their adoption would be contingent on a number of
factors, including cost, the breadth and quality of service catalog offerings, reliability
of service delivery, and responsiveness of support.
The businesses identified a number of values and benefits in the Service Delivery Platform. Key values include ease of service deployment, ease of control and management, pay-as-you-go flexibility, and potential for preventive management.
challenging is trying to keep up with what’s available as far as new equipment and
what we can use”, and “it is time-consuming (to do research). I have no idea on what
is out there.”
The key features of the Service Delivery Platform appear to address this challenge.
One key benefit that business owners and IT staff identified was that the platform
potentially allowed easy research, and much quicker decision or deployment of IT
solutions.
The business owners viewed the service catalog as a place where they could con-
duct research on new technology, view opinions of other business owners and rec-
ommendations from other users. In addition, the platform provided a mechanism for
them to easily experiment with different potential solutions. For example, with minimal commitment they could easily install an application on one or several computers and experiment with it. The ability to cancel services at any time gave users more confidence to try out different services.
Ease of Control. Another key perceived benefit is ease of control and management.
IT staff liked the remote subscription service. Especially for external IT staff, the
ability to remotely install and uninstall services would allow them to more efficiently
provide support to customers in different businesses. They were most interested in
features allowing them to efficiently manage services for multiple computers. For
example:
• Creating an image or configuration with a set of various services, and then applying the image to a computer to install multiple services together.
• Copying the service configuration of one computer to another: for example, when a user's computer needed upgrading to a new service configuration (a sketch of this operation follows this list). As one IT staff member said: “The hardest thing when upgrading a computer, is to get all that information back over (to the new computer).”
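A minimal sketch of the duplication operation follows (our own illustration; the data model and names are assumptions, not the platform's actual API):

# Hypothetical sketch of "service configuration duplication": copying the
# set of subscribed services from one device to another. The data model
# and names are illustrative, not the platform's actual API.
devices = {
    "pc-old": {"services": {"office-suite", "antivirus", "backup"}},
    "pc-new": {"services": set()},
}

def duplicate_configuration(source, target):
    """Subscribe the target device to every service on the source device."""
    devices[target]["services"] |= devices[source]["services"]

duplicate_configuration("pc-old", "pc-new")
print(sorted(devices["pc-new"]["services"]))
# ['antivirus', 'backup', 'office-suite']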
In addition, the portal provided a centralized location for IT staff to track assets
and licenses, allowing businesses to view all their devices and the software installed
on each device.
A number of businesses mentioned current challenges in tracking software
licenses. As one owner said: “one of the challenges we run into is trying to keep track
of everything we have, all the software versions, all the licenses we have, the latest
downloads. That becomes extremely cumbersome.” Another IT staff said: “it is huge
being able to consolidate all your clients into one view.” The businesses pointed out
that the visibility also allowed them to more effectively plan for future technology
needs.
Flexibility. For the subscription-based payment model, the businesses identified two
main potential benefits: flexibility and cost saving. The ability to start or terminate a service subscription at any time allowed businesses to pay for only what they were actually using. It enabled businesses to easily access expensive applications that
they did not use frequently, or not all of the time, such as video and image editing
applications. The users also identified the benefits of easy decommissioning of ser-
vices from devices. As one owner said “That’s the hardest thing for a small guy (to
decommission devices); Jay leaves the company tomorrow, his laptop is sitting there,
no one’s using it, I want to be able to turn Jay’s office just in a manner that says Jay’s
not using it.” Another owner pointed out that “it is a much better approach than the yearly commitment type.”
Preventive Management. Another key perceived benefit was that the Service Deli-
very Platform would allow businesses to shift from reactive IT management models
to proactive and preventive management models. It was observed that IT management
in these businesses was mostly reactive, in the sense that IT administrators acted
when users approached them with problems. The Service Delivery Platform offered
features such as asset tracking, device status monitoring, service status monitoring,
and service activity support. With these features, businesses would be more aware of
what devices were in the environment, how they were used, and how everything was
running. As a result, those interviewed said they would be able to “address issues
before catastrophic impact,” “more effectively anticipate and plan for user needs,”
“easily create a budget for IT services,” and “do more fire prevention instead of fire-
fighting.”
The participants were asked to rate the usefulness of the main features of the platform, using a five-point scale with 5 being “very useful” and 1 being “not useful at all.” Table 1 summarizes the highest rated features. These ratings were consistent with participants' discussions of the key values of the platform. The most highly rated features were related to ease of service deployment, preventive management, and centralized tracking and control.
Both business owners and their IT administrators were interested in the ability to quickly deploy services with a “service image or profile”, or by “duplicating the service configuration from one device to another.” The value of these features was the ability to quickly provision or deploy a computer for a user. Similarly, when a computer was no longer used, for example after a user had left the company, businesses wanted to quickly “decommission” the computer so that they would not pay for its services.
The features of “real-time service status” and “device status” were found useful because they allowed internal or external IT administrators to closely monitor their computing environments and take proactive actions if needed. Finally, the business owners liked the ability to “track all their assets” via the portal, and the ability to receive and review a “service activity report” to understand what services they had received and how much they had cost; this information would be useful for creating a budget plan for the future.
Table 1. Highest rated features (1 = not useful at all, 5 = very useful)

Feature                                                      Rating
Real-time service status                                      4.4
Device asset tracking on the portal                           4.2
Service configuration duplication (allows quick
  deployment of a PC to replace an old one)                   4.2
The businesses were most interested in services related to security and protection,
and basic employee productivity including office, email and file sharing applications.
High levels of interests in security and protection services were consistent with the
participants’ discussions on their current challenges. One major challenge pointed out
by several different businesses was protection from malware or spam email from the
Internet. As one IT staff said “A big problem is people download some programs that
make their PC not working. It (PC) slows down and becomes unusable. It is very
time consuming to solve the problem.” As another business pointed out, “Biggest
thing we have to watch is e-mail spam… What the server spends most of its time
doing is rejecting spam. .. 5,000 to 8,000 collectively a day we get hit with.”
In contrast, the businesses expressed lower levels of interest (<50%) in more sophisticated applications such as voice over IP (VoIP), databases, business intelligence (BI), virtual private networks (VPNs), remote firewalls, project management, customer relationship management, and content management. The main reasons given for the lower levels of interest were a lack of need, and the existence of similar applications that they were not likely to replace in the near term.
Even though the businesses demonstrated enthusiasm about the Service Delivery Platform solution, they pointed out several potential adoption hurdles.
• Cost: The interviewees could perceive the cost-saving benefits of the subscription-based service model; nevertheless, they mentioned that they would carefully compare its cost to that of more traditional purchase models, or shop around for prices. It was critical for the platform to provide compelling pricing models so that businesses could reduce the total cost of IT operations.
• Quality and breadth of service offerings: Even though the businesses expressed varying levels of interest in different services, they expected the service catalog to offer a wide collection of high-quality services. The participants mentioned that the best adoption entry points were when businesses were purchasing new computers or when a new business was formed. At those times, they expected the service catalog to provide services for all basic computing needs.
• Reliability: Businesses expected the platform to deliver and install services in a
highly reliable fashion, and that the services would not cause any disruption to PC
performance. As one owner said: “We cannot afford any downtime -- every minute
we will be losing money.”
• Responsiveness of support: Business owners expected a very quick support re-
sponse, a response as fast as they currently received from internal staff or local
service providers. “They should be just one phone call or one email away.”
4 Discussion
Small businesses have large and complex demands for information technology, but nonetheless lack the expertise and resources to stay abreast of the newest developments. This study shows that small businesses experience numerous pain points with traditional models of software or service management, including research, purchasing, deployment, license management, maintenance contracts, and expensive upgrades. Software-as-a-service approaches appear to have the advantage of providing the simplicity and
• A service catalog with information tailored to business owners. Typically they are
not technology experts and are not interested in technical details.
• Easy communication with external service providers, for example, the ability
to receive reports on what services have been provided, proactive and tailored rec-
ommendations on what technology might be useful for the businesses.
• Quick deployment, with the ability to easily experiment with different solutions,
and then quickly deploy solutions.
• Technical details in service catalog as they need much more detailed information
on different services offered in the catalog.
• Well integrated service management tools, including asset tracking, service sub-
scription management, status monitoring, and device remote control for manage-
ment or trouble shooting purposes.
• Service bundling and packaging that provides the ability to easily create different
service bundling or packages for different business customers or end users.
• Customer management and support tools for external service providers that support
customer management, such as billing, support ticket management, and communi-
cation with customers.
References
1. Bennett, K., Layzell, P., Budgen, D., Brereton, P., Macaulay, L., Munro, M.: Service-Based
Software: The Future for Flexible Software. In: Proceedings of Seventh Asia-Pacific Soft-
ware Engineering Conference, pp. 214–221 (2000)
2. Berranger, P., Tucker, D., Jones, L.: Internet Diffusion in Creative Micro-business: Identi-
fying Change Agent Characteristics as Critical Success Factors. Journal of Organizational
Computing and Electronic Commerce 11(3), 197–214 (2001)
3. Rogers, E.M.: New Product Adoption and Diffusion. Journal of Consumer Research 2,
290–301 (1976)
4. Thong, J.Y.L.: An Integrated Model of Information Systems Adoption in Small Businesses.
Journal of Management Information Systems 15(4), 187–214 (1999)
Charting a New Course for the Workplace
with an Experience Framework
Abstract. Like many, our company had a wealth of data about business users
that included both big data by-products of operations (e.g., transactions) and
outputs of traditional User Experience (UX) methods (e.g., interviews). To fully
leverage the combined intelligence of this rich data, we had to aggregate big da-
ta and the outputs of traditional UX together. By connecting user stories to big
data, we could test the generalizability of insights of qualitative studies against
the larger world of business users and what they actually do. Similarly, big data
benefited from the rich contextual insights found in more traditional UX stu-
dies. In this paper, we present a hybrid analysis approach that allowed us to le-
verage the combined intelligence of big data and outputs of UX methods. This
approach allowed us to define an over-arching experience framework that pro-
vided actionable insights across the enterprise. We will discuss the underlying
methodology, key learnings and how the work is revolutionizing experience de-
cision making within the enterprise.
1 Introduction
In this paper, we present a hybrid analysis approach that allowed us to leverage the com-
bined intelligence of big data and outputs of UX methods to define an over-arching
experience framework that is being used to frame the One IT experience and seed
human-centric transformation within the enterprise. We will discuss the underlying
methodology, decompose the framework, and provide examples of how it is being
used by the larger IT shop. Lastly, we will map the evolution of this effort over the
last two years, share learnings and insights from our journey, and discuss the benefits
of having a data-driven, re-usable and over-arching experience vision to guide enter-
prise decision-making.
2 Background
The data that enterprises collect every day is a storehouse of information about
business users. It includes enterprise transactions, social data, support tickets, web
logs, internet searches, clickstream data, and much more. Enterprises often manage
data related to users in silos around infrastructure or application support. Similarly,
analysis efforts focus on identifying problems related to the silo. Despite the rich
information contained in this data, it is seldom used to improve the cross-enterprise
experience of business users. Similar to how consumer-facing corporations such as Amazon and Google examine customer usage and interactions to tailor the purchasing or support experience for customers [1], enterprises could utilize knowledge about
employees to enhance their business experience. However, tools to derive insights
from big data are immature, especially with respect to UX; and analysis is hampered
by the fact that most of this data is incompatible, incomprehensible, and messy to tie
together. Further, even when this data is connected, big data is a backwards look at
what has been. It cannot help enterprises fully understand what motivates the user
behavior that they track or understand the full context in which it occurred. It does not
help enterprises spot forward-looking opportunities for providing new value to their
users, design a better solution, or better engage their users; and those places are where
user experience has the most potential to add value to the enterprise. Big data lacks
the contextual insights necessary for user-centric design and innovation.
Fortunately, where big data falls short, more traditional UX methods excel. Many
UX methods rely on user narratives or observations that come from interviews, parti-
cipatory design sessions, social media, or open-ended comments on surveys. They
provide the qualitative color that yields the richer understanding of the holistic expe-
rience necessary for experience innovation or improvement. While traditional UX has
a wide variety of methods (e.g. affinity diagrams, qualitative coding) to help UX
professionals transform qualitative data into insights, they often only talk to small
numbers of users, which puts their generalizability in question in the corporate envi-
ronment. In addition, the output of these methods does not lend itself to easy mixing
with big data; nor are user narratives usually analyzed to the point where underlying
structures are visible [2]. And, much like the transactional data the enterprise collects,
data collected by UX professionals often remains siloed and is not re-used or used to
form a larger understanding of the enterprise experience.
Leveraging the combined intelligence of big data and traditional UX data can be a
daunting task as the data sets lack connections, or a way to pull together the diverse
data and connect to specific aspects of the experience. Sociotechnical systems theory
and macro ergonomics offer a way of connecting disparate data and provide a theoret-
ical model for understanding the holistic user experience. They have been used suc-
cessfully to holistically assess how well a technology fits its users and their work
environment in relationship to enterprise priorities using diverse data types [3, 4].
They are especially useful for examining the business experience, as success requires
that IT understand how its “technology” impacts other elements of the user’s world.
Enterprises collect large amounts of user data in terms of user demographics (e.g.
role, organization) and as by-products of user transactions (e.g., portal usage, support
tickets). Aggregated together, they provide a holistic picture of the enterprise expe-
rience. While some data is considered confidential (e.g., age), other data is more pub-
licly available (e.g., app use). Regardless, all data is typically protected in enterprises
which necessitates both legal and privacy negotiation before aggregating the data.
Prior to making any attempt to integrate the data sets, the raw data was anonymized
by replacing all employee identifiers with an encrypted unique identifier.
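The paper does not specify the anonymization mechanics, but a common realization of the step it describes is keyed hashing, which yields a stable pseudonym per employee so records can still be joined across datasets. A minimal Python sketch, assuming a hypothetical employee_id field and a secret key held by a data steward rather than the analysts:

import hmac
import hashlib

SECRET_KEY = b"held-by-data-steward"  # hypothetical; never shared with analysts

def pseudonymize(employee_id: str) -> str:
    # Keyed hash: deterministic per employee, infeasible to reverse without the key.
    return hmac.new(SECRET_KEY, employee_id.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"employee_id": "jdoe", "support_tickets": 7}
record["coded_id"] = pseudonymize(record.pop("employee_id"))

Because the mapping is deterministic, the same coded identifier appears in every dataset, which is what later allows narratives and transactional data to be connected.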
When we initially went to gather the user data, we naively expected an enterprise-
level data map that would help us locate relevant data. Instead, the process was a trea-
sure hunt for data that could enrich our understanding of employee usage of enterprise
products and services. The data was a mix of structured and unstructured data. Data
formats were sometimes undocumented, and often inconsistent within and between
datasets, with formatting often changing over time resulting in inconsistencies within
a single dataset. The management of structured versus unstructured data meant tra-
deoffs between what was known and what could be feasibly stored or analyzed. We
regularly exceeded the limits of our data storage and analysis capabilities and some-
times had to distill raw data into meaningful summary data. For instance, support
tickets were reduced to total number of tickets and mean time between tickets. This
mountain of data was then distilled into individual employee usage footprints using
the coded identifiers. By organizing the data in terms of individual users, we could
more easily discern individual patterns, allowing us to more easily integrate new
quantitative information as it was discovered.
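As an illustration of the distillation step described above, the sketch below reduces a hypothetical raw ticket log to the two summary measures named in the text (total tickets and mean time between tickets), one row per coded identifier; the column names are assumptions, not the authors' schema:

import pandas as pd

tickets = pd.DataFrame({
    "coded_id": ["a1", "a1", "a1", "b2", "b2"],
    "opened": pd.to_datetime(["2013-01-05", "2013-02-10", "2013-03-01",
                              "2013-01-20", "2013-04-02"]),
})

def footprint(group):
    # Gaps between consecutive tickets for one employee.
    gaps = group["opened"].sort_values().diff().dropna()
    return pd.Series({
        "total_tickets": len(group),
        "mean_days_between": gaps.dt.days.mean(),
    })

# One row per employee: the summary measures that replace the raw records.
footprints = tickets.groupby("coded_id").apply(footprint)
print(footprints)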
The final coding tree represented the users’ over-arching mental model of the expe-
rience [6] and defined the experience users wanted the enterprise to deliver. It mapped
patterns of user behavior and needs, with enough detail to derive requirements. We then
looked for meta-patterns, or schemas shared by enterprise users, again using an affini-
ty diagram type exercise [6] as a way of data sense-making. The derived meta-
patterns became the foundation of the experience framework.
We then connected the narratives with our “big” enterprise data using the coded
identifiers. Rather than merge the whole narratives as unstructured data, we defined
summary measures based on the coding framework. These summary measures
connected the user stories with the larger dataset to help us discover patterns across
datasets. For each node in the first few levels, we specified two summary measures:
(1) total number of references coded for the node, and (2) number of references coded
for the node that were negative (i.e., pain point). Using correlational methods, ma-
thematically best “fit” patterns were identified in the combined dataset based on simi-
larities in how employees used and talked about enterprise products and services. We
used non-parametric methods as the data was often non-normal. Cross-references
between the datasets allowed us to find connections and validate our findings from
other data sets [5]. This process was highly iterative with a continuous cycle of data
and user research. By making the combined dataset a living thing, we could add more data as needed, ensuring the enterprise has a constant pulse on user needs, can
strategically identify key opportunities, and can respond more quickly when new
needs arise. The final best “fit” patterns became the building blocks of the experience
framework and will be discussed more in the next section.
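The text names correlational, non-parametric methods without specifying a statistic; Spearman rank correlation is one standard choice for non-normal data. A sketch on invented stand-in variables, relating a coding node's pain-point count to a transactional usage measure:

import pandas as pd
from scipy.stats import spearmanr

combined = pd.DataFrame({
    "refs_node_total": [12, 3, 8, 1, 9, 4],    # references coded for a node
    "refs_node_pain":  [5, 0, 4, 0, 3, 1],     # of those, negative (pain points)
    "portal_visits":   [40, 9, 25, 4, 31, 12], # transactional usage measure
})

# Rank-based correlation is robust to the non-normal distributions noted above.
rho, p = spearmanr(combined["refs_node_pain"], combined["portal_visits"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")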
The experience framework is a conceptual map of the desired user experience and our
intent was for the framework to become the common language and shared framework
for designing and evaluating enterprise services for the Intel user. In order to facilitate
the ability of product teams to use the framework, we introduced large-scale, layered
storytelling to unify the supporting framework collateral. The underlying stories focus
on particular elements of the dataset and ignore the rest. Strung together they map the
desired enterprise experience but individually only tell a piece. The data set is too
large and diverse to be told by a single story. Users of the experience framework take
these stories and data to create their own stories relevant to their product; many
stories are possible from the same data.
Different framework elements provide different insights. Themes define the enter-
prise experience vision that spans the many products and services provided by Intel
IT. Segments define the user groups that must be taken into account when creating the
enterprise experience, while influencers and activities help IT understand the role it
plays in core enterprise tasks and its impact on the overall experience. Much has been
learned about how to most effectively use this information with product teams and the
collateral has iteratively evolved to better help teams make sense of the large dataset.
Social media is used extensively to socialize the framework; training and workshops
were developed to optimize its use by service and portfolio teams.
Experience qualities define the core experience components and the strategic functionality necessary to bring them to life. They were
packaged as quality “trading cards” and are used by teams while setting UX strategy
and product roadmaps. Each card details the key use scenarios for that quality and
proposed functionality. Experience qualities are further broken down into experience
elements which document key usage scenarios and requirements users expect in
products. This information was packaged in theme vision books and as 8x10 cards to
facilitate use during face-to-face design sessions. Three themes, 12 qualities, 59
experience elements and hundreds of requirements detail the desired over-arching
experience and are summarized in Table 1.
Table 1. The themes and qualities that framed the envisioned experience [5]

Theme: Feed Me (I quickly and easily find the information I need to speed my work.)
Qualities:
• Seamless – Transparent. Integrated but flexible.
• Simple – Quick and easy. Language I can understand.
• Meaningful – Points me in the right direction, aids me in sense-making of information, and helps me work smarter.
• Proactive – Push me relevant information, make me aware of changes before they happen, and help me not be surprised.

Theme: Connect Me (Connect me with the people, resources, and expertise I need to be successful.)
Qualities:
• Purposeful – Together we do work.
• Easy – Easy to work together and connect.
• Cooperative – Larger environment is supportive of me.
• Presence – Always present or at least I feel like you are near.

Theme: Know Me (My information is known, protected and used to improve provided services.)
Qualities:
• Recognized – Know who I am.
• Personalized – Implicitly know what I need.
• Customized – Give me choices.
• Private – My information is under my control. Always protected and secure.
Segment profiles describe how each user group engages with the enterprise experience and detail key pain points associated with a particular element. They also help teams identify potential partners when improving the experience and assess the potential impact of design changes.
Core activities provide product teams with specifics on how employees use and interact with enterprise products to accomplish shared tasks common to all employees, and provide teams with high-level journey maps for key activities such as “learn” or “find information.” The activity journey maps also describe key segment differences relative to the activity and provide a jumping-off point.
An early adopter of the framework within Intel was the collaboration portfolio, which
is comprised of a set of technologies that help Intel employees collaborate and
includes social media, meeting tools, meeting spaces, and shared virtual workspaces.
The impact of the framework has been wide-ranging, from setting portfolio UX strat-
egy to vendor selection to helping an agile product team move faster. They evolved
our original approach by combining use of the experience framework with elements
of presumptive design [8]. The experience themes along with what was already
known about a particular audience (e.g., field sales) formulated the starting “presump-
tions” on which designs were based. These starting presumptions were then validated
using low cost methods and prototypes. In this section, we provide an overview of
how the framework aided their team.
The framework provided significant insights about what Intel employees need from
the enterprise collaboration experience. We provided teams with experience maps of
the employee vision of the future for enterprise collaboration, along with the mindmap and the user needs defined by experience qualities and elements, to get the design process started. The team isolated the elements relevant to collaboration and
completed a heat map to identify how well today’s capabilities are meeting target
requirements for each collaboration element and how important each of those
elements is to enterprise users. Answers to these questions helped the team set their
UX roadmap and to prioritize where to focus first. For example, an element critical to
initiating collaboration is “Bump into Interesting,” which is about helping users se-
rendipitously bump into information or people that are interesting and useful to them.
In this case, the team found the portfolio didn’t have solutions that were meeting the
target requirements.
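The heat-map exercise is not specified in detail; one plausible reading is a grid scoring each collaboration element by its importance to users and by how well current capabilities meet its target requirements. A sketch with made-up scores (the element names other than “Bump into Interesting” are hypothetical):

import pandas as pd

elements = pd.DataFrame({
    "element":    ["Bump into Interesting", "Shared Workspaces", "Meeting Tools"],
    "importance": [5, 4, 3],      # importance to enterprise users, 1-5
    "met_pct":    [20, 70, 85],   # how well today's capabilities meet the target, %
})

# High-importance, poorly met elements surface first on the UX roadmap.
elements["gap"] = elements["importance"] * (100 - elements["met_pct"]) / 100
print(elements.sort_values("gap", ascending=False))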
Both the framework and deep dive research repeatedly highlighted expert or expertise
finding as a key need. The agile-based project team used the experience themes as a
starting point for their efforts to rapidly go from concept discussions to prototype.
During the initial team kickoff, the team found the strongest affinity with the Connect
Me and Feed Me themes which focus on the need to quickly find information and
connect employees with expertise. The associated element cards were a starting point
for the team’s Vision Quest activities and were a catalyst to helping the team form a
design hypothesis around core presumptions of what features and capabilities should
be included in the solution. Many of the early presumptions the team captured were
based on previously gathered user data and the experience elements.
A series of contextual scenarios were written from the design hypothesis, which
were then organized to form a high-level “narrative” or persuasive story of the prod-
uct vision. These were then documented in a storyboard. The experience themes
inspired many of the design patterns reflected in the proof-of-concept (POC) proto-
types, and the storyboard contained a swim lane the team used to map the experience
themes. To validate design presumptions, several intervals of presumptive design tests
were conducted with end-users in tandem with design activities. Features not vali-
dated as “valuable” by users were removed from the storyboard and product
vision. The vision iteratively became more defined and evolved into a ‘lightweight’
clickable prototype used to engage stakeholders and the technical team in feasibility
discussions.
6 Discussion
• Teams should use the qualities to evaluate their own product at the start of using
the framework; it is key to learning and provides a baseline for improvement.
• Experience quality cards are paramount for setting vision and strategy. They spark
conversation and provide easy functionality checklists to feed UX roadmaps.
• Product teams need experience element cards that provide user requirements,
scenarios, and key audience differences once they move from strategy to design.
• Sample designs that embody the experience themes and elements are important to
spark new ideas or conversations about how the pattern can be improved.
• Different people have different learning styles and different teams have different
ways to work together. If collateral doesn’t resonate, iterate, iterate, iterate.
• We have found that generating design ideas is often fastest when you have a hard-
copy of element cards and other experience theme collateral so participants can
“re-use” collateral elements in discussions and prototyping.
Making the Story Consumable. The size of our dataset made keeping the UX story
consumable extremely difficult. How do you turn mountains of user data into a
framework that can be digested by a diverse audience? We answered this challenge by
developing a multi-layered storytelling approach which included a variety of collater-
al forms – from vision books to quality cards, element cards, and reference sheets. We
also created job aids, including an evaluation spreadsheet that allows teams to grade
their solution according to the framework. Even with the wide range of collateral
available, teams can still find it unwieldy to work with, especially in the beginning.
Newcomers can easily lose their way in the multi-layered story so we work directly
with teams to help them understand the framework.
Exponentially Increasing Big Data. In the two years since the introduction of the
framework, the underlying data set has grown 275% and the supporting story-telling
collateral has grown by 870%. That’s a lot of information for anyone to digest and
maintain. While the challenges of use are large, the value of incorporating additional
data in the framework is immense. Increasing the variety of data allows us to identify
correlations of activities and to refine the enterprise footprint, increasing our
understanding of user behavior and needs. Lastly, although collateral growth is begin-
ning to stabilize based on active use by Intel IT project teams, the underlying data set
is expected to grow even more rapidly in coming years as analysis tools become
capable of handling even larger data sets. Only about 30% of available user transac-
tional data has been incorporated in the current framework and the amount of data
continues to increase on a daily basis, further exacerbating the challenges of re-use and
sense-making by project teams.
Enabling Social Storytelling and Knowledge Sharing. The framework and collater-
al put a face to the big data and provide an approach to defining a unified enterprise
experience, but they are merely the tip of the iceberg of potential insights that could
be derived from the underlying data set. Today, storytelling is primarily limited to the
research team that produced the experience framework or the UX professionals who
work directly with them. The rich data available on individuals, specific job roles,
different organizations, and geographic areas makes possible a great many more sto-
ries than our current collateral. The lack of “self-service” environments, however, limits broader utilization of the data.
The majority of our collateral resides in flat files or posts in social media forums.
The framework has not yet been brought to life online and no easy methods exist for
teams to share outside of forum posts. Until the structure is available online and anno-
tatable, widespread sharing and associated efficiencies are unlikely to occur. We need
to enable project teams to not only re-use existing knowledge but also to add to it with
detailed stories of use and new data.
7 Conclusion
In a world where businesses are constantly expected to move faster and workers be-
come increasingly sophisticated in their expectations of technology, an experience
framework can help speed up the business and become a force for UX transformation.
This hybrid approach is a fundamental shift in the management of the business expe-
rience from the perspective of UX and enterprise IT. By aggregating big data and the
outputs from more traditional UX together, UX teams can more quickly seed UX
within businesses. By connecting user stories to big data we can understand if our
insights from qualitative studies are generalizable to larger groups of business users.
Presenting big data in ways typically used by traditional UX (e.g., personas) can make
it more accessible. Together, big data and UX data are more powerful.
The experience framework defines interaction norms across enterprise tools and
serves as design guard rails to help developers create better interfaces. A common
framework and language understood by all results in more productive team discus-
sions that generate strategy and design ideas faster. However, transformation using
the framework is possible only when the findings are communicated in various ways
so that they resonate with the broad base of people who work together to define and
develop the workplace experience. A developer will look at the framework collateral
through a different lens than a business analyst or a service owner. Furthermore, trans-
formation is a participatory process—it is not something that can be done by merely
throwing the framework over the wall to the business. For change to happen, all levels
of the organization must participate in the conversation and take ownership of how
their own role impacts the enterprise experience. The road to transformation that is
paved by an enterprise framework is often hard, uphill, and fraught with challenge,
but for those who take this journey, an experience framework can help seed a shared
vision and light the way for the action needed to bring the vision to life and signifi-
cantly improve the business user experience.
References
1. Madden, S.: How Companies like Amazon Use Big Data to Make You Love Them,
http://www.fastcodesign.com/1669551/
how-companies-like-amazon-use-big-data-to-make-you-love-them
2. Tuch, A., Trusell, R., Hornbaek, K.: Analyzing Users’ Narratives to Understand Experience
with Interactive Products. In: Proc. CHI 2013, pp. 2079–2088. ACM Press (2013)
3. McCreary, F., Raval, K., Fallenstein, M.: A Case Study in Using Macroergonomics as a
Framework for Business Transformation. Proceedings of the Human Factors and Ergonom-
ics Society Annual Meeting 50(15), 1483–1487 (2006)
4. Kleiner, B.: Macroergonomics as a Large Work-System Transformation Technology.
Human Factors and Ergonomics in Manufacturing 14(2), 99–115 (2004)
5. McCreary, F., McEwan, A., Schloss, D., Gómez, M.: Envisioning a New Future for the
Enterprise with a Big Data Experience Framework. To appear in: Proceedings of the 2014 World Conference on Information Systems and Technologies (2014)
6. Young, I.: Mental Models: Aligning Design Strategy with Human Behavior. Rosenfeld
Media (2008)
7. Beyer, H., Holtzblatt, K.: Contextual Design. Interactions 6(1), 32–42 (1999)
8. Frishberg, L.: Presumptive Design: Cutting the Looking Glass Cake. Interactions 13, 18–20
(2006)
The Role of Human Factors in Production Networks
and Quality Management
1 Introduction
Many of today’s products are built from a large number of components that are deli-
vered by a number of different suppliers. To enable a company to profitably manufac-
ture its products, an efficient and viable production network is required. However, in
today’s globalized world these networks have reached a very high complexity [1].
Decision makers in current production networks need to have a comprehensive over-
view of the interrelationships of their company, the suppliers, and customers of many
different products and components. The arising problems are twofold: Not only do
the decision makers have to ensure that enough components are available in the pro-
duction process, but also a sufficient quality of the components has to be assured.
Modern Enterprise Resource Planning systems support people in their decision
making. However, the huge quantity of presented and retrievable information might
lead to information overload and cause users to focus on the wrong parameters,
leading to inefficiencies, low product quality, or lower profits in the production
networks. Human behavior in production networks and quality management is insuf-
ficiently explored. In order to study decision making processes in quality management
and to develop tools that can give suitable support to decision makers, we developed a
web-based simulation that puts users into the role of decision makers.
This publication serves a dual purpose: First, we present the design and implemen-
tation of a simulation game for quality management in production networks. Second,
we analyze the effect of human behavior and characteristics in the developed game as
well as the consequences for real world companies.
The Q-I game model is designed around three pivotal decisions (see Figure 1 for a
schematic representation). First, players have to invest in the inspection of incoming
goods. Second, players need to control the investments in their company’s internal
production quality. Third, similar to the Beer Distribution Game, players need to
manage the procurement of vendor parts. The players have to find an optimal trade-
off between these three dimensions in order to make the highest profit. The influences
of these dimensions on the company’s profit are explained in the following.
The first dimension covers the inspection planning and control of supplier parts, including complaint management between the manufacturer and its supplier. Inspections at goods receipt can prompt ambivalent behavior in quality and production managers. While the inspection itself is not a value-adding process, and hence a driver of variable and fixed production costs, inspections give managers the opportunity to protect their production systems from faulty parts and goods. Inspections also facilitate supplier evaluation and development, since the quality of supplied parts and goods is measured.
The production quality dimension takes the production and final product quality of the manufactured goods into account. Investments in production quality increase costs but decrease the number of customer complaints.
To assure continuous production, the player has to procure the necessary parts from the supplier. Contrary to the Beer Distribution Game, the customer demand is kept constant within the Q-I game in order to keep the focus on quality management decisions. Nevertheless, players have to account in their orders for parts scrapped due to low production quality or blocked due to poor supplier product quality.
The Q-I game gains complexity through the introduction of random events. First,
the quality of the vendor parts can change drastically. Second, the internal production
quality can change. Possible reasons are broken machines, better processes, failures in
the measurement instruments, etc. Third, the customer demand may shift.
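The game itself was implemented in Java EE 7 (as noted below); to make the three-dimensional trade-off concrete, here is a loose Python sketch of one simulated month. Every coefficient and probability is invented for illustration and is not the authors' game model:

import random

def simulate_month(inspect_invest, quality_invest, order_qty, state):
    # Random events: supplier or internal production quality may drop without warning.
    if random.random() < 0.05:
        state["supplier_q"] = max(0.5, state["supplier_q"] - 0.2)
    if random.random() < 0.05:
        state["internal_q"] = max(0.5, state["internal_q"] - 0.2)

    # Dimension 1: incoming inspection blocks a share of faulty supplier parts.
    detection = min(1.0, inspect_invest / 1000.0)
    faulty = order_qty * (1.0 - state["supplier_q"])
    escaped = faulty * (1.0 - detection)      # undetected faults reach production

    # Dimension 2: quality investment raises effective internal quality.
    internal_q = min(1.0, state["internal_q"] + quality_invest / 5000.0)
    sold = (order_qty - faulty) * internal_q  # scrap reduces sellable output

    # Dimension 3: procurement costs, plus complaint costs for escaped faults.
    revenue = sold * 10.0
    costs = inspect_invest + quality_invest + order_qty * 4.0 + escaped * 25.0
    return revenue - costs

state = {"supplier_q": 0.95, "internal_q": 0.90}
profit = sum(simulate_month(500, 800, 100, state) for _ in range(24))
print(f"Profit over 24 months: {profit:.0f}")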
After implementing the Q-I game with Java EE 7, we used it in a study to validate the game model and to investigate possible effects of human factors on players’ performances within the game. In the following sections, we present the defined variables, the experimental setup, and the sample of the study.
Detailed logs of investments, incomes, costs and profits of each simulated month
were used to analyze the players’ behaviors within the game. The achieved profit was
used as the central measure for the players’ performances. In addition, several measures of the players’ interactions with the game were recorded: duration of read-
ing the instructions, time to complete a month as well as a round, the number of help
accesses and the number of adjustments to investments and orders.
Participants were recruited online for the study. Each had to play two rounds of 24 months each. 219
people started the online pre-survey, 129 played both rounds of the game and finished
the post-survey. The obtained dataset was revised to eliminate players who did not
play seriously, i.e. who placed excessive investments or orders or did not change the
settings at all. Therefore, two cases had to be removed for not performing any adjust-
ment during both rounds. Accordingly, the final revised dataset contained 127 cases.
Although the participants had to play 24 simulated months per round, only the data of
up to and including month 20 were used in the analysis to exclude possible changes of
players’ strategies late in the game like emptying the warehouse completely.
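The cleaning rules translate directly into two filters; a small pandas sketch with hypothetical column names:

import pandas as pd

log = pd.DataFrame({  # one row per player per simulated month
    "player":      ["p1", "p1", "p1", "p2", "p2", "p2"],
    "month":       [1, 2, 24, 1, 2, 24],
    "adjustments": [2, 1, 0, 0, 0, 0],
})

# Drop players who never adjusted anything, i.e. did not play seriously.
active = log.groupby("player")["adjustments"].transform("sum") > 0
log = log[active]

# Keep months 1-20 only, excluding end-game moves such as emptying the warehouse.
log = log[log["month"] <= 20]
print(log)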
3.6 Participants
97 (76.4%) of the participants were male, 30 (23.6%) were female. They were be-
tween 17 and 53 years of age. The mean (M) age was 27.7 years (SD = 7.2 years).
58.6% (60) of the participants reported a university degree as their highest achieved
level of education. 39.7% (50) participants had a high school diploma and 6.3% (8)
had vocational training. The average level of previous experience regarding the subject matter was rather high: 67.7% (86) had previous knowledge in quality manage-
ment, 65.9% (83) in business studies and 57.5% (73) in production management.
The participants’ average personality traits regarding the five factor model were
comparable to the reference sample of Rammstedt [10] with the exception of a
slightly lower level of agreeableness. The only significant difference between men
and women regarding this model was found at the neuroticism scale (F(1, 125) =
7.498, p = .007 < .05*): men showed lower average levels (M = 1.99, SD = 0.97) than
women (M = 2.58, SD =1.22). In addition, gender related differences were found
regarding all three inventories of needs (recognition, power, security) (p < .05* for all
needs), technical self-efficacy (p = .000 < .05*), willingness to take risks (p = .002 <
.05*) and performance motivation (p = .000 < .05*). With the exception of the need
for security men showed higher average levels in all aforementioned scales. In con-
trast, there was no significant difference found regarding the attitude towards quality.
4 Results
The result section is structured as follows: First, we will present the impact of the
game mechanics and instructions on the player’s performance. Second, we will have a
closer look at the impact of user diversity. Furthermore, we will present the effects of
behavior and strategies within the game. Last, we will report the ranking task results.
The data was analyzed by using uni- and multivariate analyses of variance
(ANOVA, MANOVA) as well as bivariate correlations. Pillai’s trace values (V) were
used for significance in multivariate tests, and the Bonferroni method in pair-wise
comparisons. The criterion for significance was p < .05 in all conducted tests. Median
splits were used for groupings unless the factor offered a clear dichotomy.
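For readers who want to reproduce this style of analysis, a minimal sketch on synthetic data (not the study's data) using statsmodels; the factor and measure names are placeholders:

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "profit":        rng.normal(50, 200, 120),
    "internal_drop": np.tile([0, 1], 60),    # quality-drop condition flags
    "supplier_drop": np.repeat([0, 1], 60),
    "play_time":     rng.gamma(2.0, 10.0, 120),
})

# Two-way ANOVA of profit on the experimental quality-drop conditions.
model = ols("profit ~ C(internal_drop) * C(supplier_drop)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Median split for a continuous factor without a natural dichotomy.
df["time_group"] = np.where(df["play_time"] > df["play_time"].median(), "high", "low")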
Unless otherwise described, the effects in the following are valid for both rounds of
the game. However, for clarity reasons, only the effect values of the second round will
be reported. All profit related values like means and standard deviations will be re-
ported in thousands for similar reasons; for computations the exact values were used.
A two-way ANOVA revealed that the drop of internal production quality had a
significant effect on players’ average profits (F(1, 122) = 12.342, p = .001 < .05*); in
particular, players performed significantly worse on average under game conditions
containing the aforementioned drop. On the other hand, the spontaneous drop of sup-
plier’s quality had no significant influence on average profits.
With both possible quality drops controlled, the presence of signal lights had no
significant effect on players’ average profits (p = .537, n.s.). Also, the impact of sig-
nal light availability within any of the four possible game conditions resulting from
quality drop combinations did not reach the criterion of significance. Both the pres-
ence of signal lights and the quality drops of supplier and internal production as expe-
rimental variables will be controlled in the computations of the following sections.
There was a strong correlation between players’ average profits in the first and in the
second round (r = .730, p = .000 < .05*); accordingly, participants who achieved a high/low profit in the first round on average achieved the same level of profit in the
second round. Furthermore, players’ mean profit increased significantly between the
first (M = -19.0, SD = 258.5) and the second round (M = 76.6, SD = 218.3) with
Pillai’s trace value (V) = 0.23, F(1, 126) = 36.6, p = .000 < .05*.
Several aspects of user diversity have been studied for potential effects on players’
performances within the game. First, male participants made a higher average profit
(M = 104.9, SD = 187.1) than women (M = -14.7, SD = 282.5). However, the effect is
only significant for the second round (F(1, 124) = 7.160, p = .008 < .05*), not the first
round (F(1, 124) = 3.235, p = .074, n.s.). Second, there was no correlation between
age and the player’s profit (r = .057, p = .553, n.s.). Previous experiences did not
influence the game performance, e.g., neither knowledge in quality management (p =
.087, n.s.) nor business studies (p = .070, n.s.) had a significant effect on performance
within the game with game conditions controlled. Although participants with a high
level of domain knowledge performed better under game conditions containing the
aforementioned drop of internal production’s quality (M = 86.8, SD = 150.3) than players with low knowledge (M = -59.5, SD = 333.8), this effect was only
significant in the second round of the game (F(1, 58) = 4.928, p = .030 < .05*).
In addition to the customary demographic data, several personality traits were ana-
lyzed. First, none of the “Big Five personality traits” of Rammstedt et al. [10]
impacted the players’ performances significantly (p > .05, n.s. for all indexes).
Second, and contrary to several previous studies, there was no significant relation
between technical self-efficacy and achieved average profit (r = .163, p = .084, n.s.).
Third, there was no effect of the willingness to take risks on players’ performances.
Neither the “General Risk Aversion”-index of Mandrik & Bao [11] (r = -.174, p =
.065, n.s.) nor the “Need for Security”-index of Satow [12] (r = .054, p = .573, n.s.)
correlated with the achieved profits. Moreover, the personal attitude towards quality
did not correlate with participants’ average performances within the game (r = .109, p
= .248, n.s.).
Two main factors were analyzed regarding the players’ behaviors within the game.
First, the duration of playing correlated with players’ average profits in the first round
(r = .301, p = .001 < .05*); spending more time on the game thus led, on average, to significantly higher profits in the first round. However, the effect was no longer significant in the second round (r = .142, p = .112, n.s.).
Second, the number of adjustments correlated with players’ performances
(r = .303, p = .001 < .05*). Users who adapted their investments and orders frequently
achieved higher mean profits. A per-month analysis revealed that the average number
of adjustments made by participants who achieved a high profit exceeded the adjust-
ments of low performers in every month, as shown in Figure 2. Moreover, there was a
peak in high performers’ adjustments in month 11 as a reaction to the spontaneous
drops of the supplier’s and/or the internal production’s quality in month 10. This
change in interaction between month 10 and 11 is significant for high performers
(V = .164, F(1, 62) = 12.140, p = .001 < .05*). In contrast, there was no significant
change in the adaptation behavior of low performers at that time (V = .001, F(1, 63) = 0.088, p = .768, n.s.). Also, there is a medium correlation between the average number of adjustments performed in the first and the second round (r = .580, p = .000 < .05*). In
particular, players who frequently/rarely adapted their investments and orders in the
first round, acted similarly in the second round.
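The per-month comparison behind Figure 2 can be reproduced with a grouping of the interaction log; a sketch with hypothetical columns, splitting players into high and low performers via a median split on profit:

import pandas as pd

log = pd.DataFrame({  # round-2 interaction log; round profit repeated per row
    "player":      ["p1", "p1", "p2", "p2"],
    "month":       [10, 11, 10, 11],
    "adjustments": [1, 4, 1, 1],
    "profit":      [300.0, 300.0, -50.0, -50.0],
})

# Median split on per-player profit separates high from low performers.
median_profit = log.drop_duplicates("player")["profit"].median()
log["performer"] = (log["profit"] > median_profit).map({True: "high", False: "low"})

# Average adjustments per month for each group: the two curves in Fig. 2.
curves = log.groupby(["performer", "month"])["adjustments"].mean().unstack("month")
print(curves)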
Fig. 2. Average adjustments per month of high and low performers in the second round

Fig. 3. Means (SD) of profit regarding strategies with different levels of quality orientation
Most of all, the level of quality orientation in players’ strategies correlated signifi-
cantly with the average performances (r = .370, p = .000 < .05*); therefore, partici-
pants with a quality-oriented strategy performed better on average (M = 136.1, SD = 96.3) than participants who were inclined to ignore quality aspects (M = 21.1, SD = 280.4), as shown in Figure 3.
The participants also had to rank different requirements regarding their demands on
the provision of data. There was no significant difference in the average rankings of
any of the factors before and after playing the game (p > .05, n.s. for all pre-post fac-
tor pairs); therefore, the absolute positions were equal in both pre- and post-game rank-
ing. Participants identified the data quality as the most important aspect (M = 1.8, SD
= 0.9), followed by the visualization of data (M = 2.3, SD = 1.1), decision support (M
= 2.8, SD = 1.2) and the volume of data, as shown in Table 2. Pairwise comparison
revealed that there is no significant difference between the average rankings of “Good
data visualization” and “Decision support” (p = .059, n.s.). In contrast, for all other
comparisons of two factors the criterion of significance (p < .05 for all comparisons) was reached.
5 Discussion
Regarding the technical factors influencing game complexity, we learned that the easiest condition is the one without drops in either the supplier’s quality or the internal production quality. To our surprise, however, we found that the most difficult condition to play is the one in which only the internal production quality drops while the supplier’s quality stays constant. Counterintuitively, this condition is even more difficult to play than the condition in which both qualities drop. We suspect this is because the consequences of a supplier quality drop are easier to notice within the company dashboard, as the number of returned parts increases and the incoming quality decreases (two visible changes), while only one measure changes if only the production quality decreases.
Interestingly, the display of traffic lights indicating the supplier’s quality and the
internal production quality did not influence the decision quality of the players and
the performance within the game. Interviews with players after the game suggest that
players had difficulties understanding the correct meaning of the traffic signals.
While the investigation of the game mechanics yielded clear findings, the search
for human factors that explain performance was only partially successful in this study.
We learned that underlying factors exist that explain game performance, as players who
did well in the first round of the game also did well in the second round (i.e. high
correlation of the performances of the first and second rounds of the game). However,
none of the variables assessed prior to the interaction with the game explained game
performance with adequate accuracy. Surprisingly, the positive impact of high tech-
nical self-efficacy on performance [9] could not be replicated within this study. None-
theless, players with good performance can be differentiated from players with bad
performance when in-game metrics or the post-game survey are considered. First,
players who achieved higher profits in the game took more time than players who
achieved lower profits. Second, good players not only spent more time on the game,
they also performed more changes within the game’s decision cockpit. Both findings are
in line with previous studies [14] and suggest that intense engagement with the
subject leads to a better performance. It is unclear however, what causes this effect:
Are people who perform better in the game simply more motivated, and therefore spend more time on the game and on changes within the game, or do better players have a better overview of the company data and are therefore able to adapt more quickly to changing scenarios?
Using games as a vehicle to mediate learning processes is getting more and more
popular in various disciplines [15]. Our findings suggest that our game-based ap-
proach for teaching fundamentals of quality management also works very well. First,
we found that the game is learnable and that the player’s performance increases from
the first to the second round of the game, showing that the players gained expertise in
making complex decisions for the simulated company. Second, the intention of the
game is to raise the awareness about quality management and shift the attention to-
wards quality management techniques within the game. After the game the players’
relative weighting of quality management was significantly higher than before the
game. Hence, we can conclude that the Q-I game is a suitable tool for teaching quality management in vocational training, university courses, or advanced training.
Contrary to previous studies, we could not identify human factors that explain game
performance. We suspect that the small number of participants per experimental con-
dition, the large noise, and the huge spread within the data make the dataset difficult to
evaluate. In a follow-up study we will therefore reduce the number of experimental
factors and increase the number of participants per condition, assuming that this will
yield clearer results. Furthermore, the questions assessing the game strategy from the
post-game survey will be rephrased and used in the pre-game survey, as we then hope
to be able to predict game performance according to player strategy. In addition, we
assume that information processing ability is also influencing performance within the
game; hence we will closely investigate the effect of information processing capacity
and speed on the outcome of the game in a follow-up study.
The traffic signs were conceptualized to indicate the results from quality audits of
the supplying company and of the internal production quality, not as indicators that
represent current quality levels. However, many people misinterpreted these indica-
tors and assumed that they show exactly that. A future version of the decision cockpit
will therefore clarify this issue and provide both a clear indicator of the current supplier and production quality and clear indicators that represent the results from quality audits.
The overall rating of the game was fairly positive and we found that it increased
the awareness of the importance of quality management in supply chain management.
Acknowledgements. The authors thank Hao Ngo and Chantal Lidynia for their sup-
port. This research was funded by the German Research Foundation (DFG) as part of
the Cluster of Excellence “Integrative Production Technology for High-Wage Coun-
tries” [16].
References
1. Forrester, J.W.: Industrial dynamics. MIT Press, Cambridge (1961)
2. Bossel, H.: Systeme Dynamik Simulation – Modellbildung, Analyse und Simulation
komplexer Systeme, p. 24. Books on Demand GmbH, Norderstedt (2004)
3. Robinson, S.: Simulation: The Practice of Model Development and Use, pp. 4–11.
John Wiley & Sons, West Sussex (2004)
4. Greasley, A.: Simulation Modelling for Business, pp. 1–11. Ashgate Publishing Company,
Burlington (2004)
5. Kühl, S., Strodtholz, P., Taffertshofer, A.: Handbuch Methoden der Organisationsforschung, pp. 498–578. VS Verlag für Sozialwissenschaften, Wiesbaden (2009)
6. Hardman, D.: Judgement and Decision Making: Psychological Perspectives, pp. 120–124. John Wiley & Sons, West Sussex (2009)
7. Kamiske, G., Brauer, J.: ABC des Qualitätsmanagements, p. 24. Carl Hanser Verlag,
München (2012)
8. Beier, G.: Kontrollüberzeugungen im Umgang mit Technik [Locus of control when inte-
racting with technology]. Report Psychologie 24(9), 684–693 (1999)
9. Brauner, P., Runge, S., Groten, M., Schuh, G., Ziefle, M.: Human Factors in Supply Chain
Management – Decision making in complex logistic scenarios. In: Yamamoto, S. (ed.)
HCI 2013, Part III. LNCS, vol. 8018, pp. 423–432. Springer, Heidelberg (2013)
10. Rammstedt, B., Kemper, C.J., Klein, M.C., Beierlein, C., Kovaleva, A.: Eine kurze Skala
zur Messung der fünf Dimensionen der Persönlichkeit: Big-Five-Inventory-10 (BFI-10).
In: GESIS – Leibniz-Institut für Sozialwissenschaften (eds.) GESIS-Working Papers, vol.
22. Mannheim (2012)
11. Mandrik, C.A., Bao, Y.: Exploring the Concept and Measurement of General Risk
Aversion. In: Menon, G., Rao, A.R. (eds.) NA - Advances in Consumer Research, vol. 32,
pp. 531–539. Association for Consumer Research, Duluth (2005)
12. Satow, L.: B5T. Psychomeda Big-Five-Persönlichkeitstest. Skalendokumentation und
Normen sowie Fragebogen mit Instruktion. In: Leibniz-Zentrum für Psychol. Inf. und Do-
kumentation (ZPID) (eds.) Elektron. Testarchiv (2011), http://www.zpid.de
13. Xu, Y., Zhu, J., Huang, L., Zheng, Z., Kang, J.: Research on the influences of staff’s
psychological factors to total quality management practices: An empirical study of Chinese
manufacturing industry. In: 2012 IEEE International Conference on Management of Inno-
vation and Technology (ICMIT), pp. 303–308 (2012)
14. Dörner, D.: Die Logik des Mißlingens. Strategisches Denken in komplexen Situationen.
rororo, Reinbek (2013)
15. Schäfer, A., Holz, J., Leonhardt, T., Schroeder, U., Brauner, P., Ziefle, M.: From boring to
scoring – a collaborative serious game for learning and practicing mathematical logic for
computer science education. Computer Science Education 23(2), 87–111 (2013)
16. Brecher, C.: Integrative Production Technology for High-Wage Countries. Springer,
Heidelberg (2012)
Managing User Acceptance Testing of Business
Applications
Abstract. User acceptance testing (UAT) events gather input from actual
system users to determine where potential problems may exist in a new soft-
ware system or major upgrade. Modern business systems are more complex and
decentralized than ever before, making UAT more complicated to perform. The
collaborative nature of facilitated UAT events requires close interaction be-
tween the testers and the facilitation team, even when located in various loca-
tions worldwide. This study explores the best approaches for facilitating UAT
remotely and globally in order to effectively facilitate geographically-dispersed
actual system users in performing UAT exercises. While research suggests user
involvement is important, there is a lack of understanding about the specifics of
how to best engage users for maximizing the results, and our study addresses
this gap. This study examines the following research questions: How should
UAT facilitators (1) schedule user participation with a minimum impact to their
regular work duties and maximum ability to be present when testing and not
be distracted; (2) enable direct interactions with users including face-to-face
conversations during the UAT event and access to user computer screens for
configuration and validation; and (3) utilize quality management software that
can be used seamlessly by all involved in UAT. To examine these questions,
we utilize Social Presence Theory (SPT) to establish a conceptual lens for
addressing these research questions. SPT holds that the communication envi-
ronment must enable people to adopt the appropriate level of social presence
required for that task. This study proposes a theoretically-derived examination
based on SPT of facilitated UAT delineating when and how facilitators should
involve actual system users in the UAT activities either through local facilita-
tion or remote hosting of UAT exercises, among other options.
1 Introduction
The purpose of user acceptance testing (UAT) is to gather input from actual system
users, those who have experience with the business processes and will be using the
system to complete related tasks (Klein, 2003; Larson, 1995). Actual users bring
knowledge of process flows and work systems and are able to test how the system
meets all that is required of it, including undocumented inherent requirements, and
where potential problems may surface. UAT is a critical phase of testing that typically
occurs after the system is built and before the software is released. Modern business
systems are more complex and decentralized than ever before, making UAT more
complicated to perform. The global nature of commerce continues to push business
systems deployments well beyond traditional geographic boundaries. The global
nature of such deployments has created new challenges for the execution of UAT and
the effective participation of geographically dispersed actual system users. The colla-
borative nature of facilitated UAT events requires close interaction between the
testers and the facilitation team (Larson, 1995), even when located in various loca-
tions worldwide. However, obstacles currently exist, such as global dispersion of the user base, travel expenses, and extended time away from regular work assignments.
This study explores the best approaches for facilitating UAT remotely and globally in
order to effectively facilitate geographically-dispersed actual system users in perform-
ing UAT exercises.
Systems development theory suggests users should be involved throughout the
development lifecycle, yet involving the users is often difficult. One study of case
organizations found different approaches and strategies for the facilitation of user
involvement (Iivari, 2004; Lohmann and Rashid, 2008). An important aspect in
human computer interaction is usability evaluation that improves software quality
(Butt and Fatimah, 2012). User involvement occurs between industry experts who use
the system and the development team, suggesting it is imperative to have senior and
experienced user representation involved (Majid et al., 2010). One study of the degree
of user involvement in the process indicates that user involvement is mainly concen-
trated in the functional requirements gathering process (Axtell et al., 1997). Software
firms spend approximately 50-75% of the total software development cost on debug-
ging, testing, and verification activities, soliciting problem feedback from users to
improve product quality (Muthitacharoen and Saeed, 2009).
Today, the distinction between development and adoption is blurring, which
provides developers with opportunities for increasing user involvement (Hilbert et al.,
1997). User involvement is a widely accepted principle in the development of usable
systems, yet it is a vague concept covering many approaches. Research studies illu-
strate how users can be an effective source of requirements generation, as long as the role
of users is carefully considered along with cost-efficient practices (Kujala, 2003).
User participation is important for successful software program execution (Butt and Fatimah, 2012), and business analyst facilitation and patience in UAT events are critical
whether the system is a new installation, major upgrade, or commercial-off-the-shelf
package (Beckett, 2005; Klein, 2003; Larson, 1995). In summary, while research
suggests user involvement is important, there is a lack of understanding about the
specifics of how to best engage users for maximizing the results, and our study
addresses this gap.
This study examines the following research questions: How should UAT facilita-
tors (1) schedule user participation with a minimum impact to their regular work
duties and maximum ability to be present when testing and not be distracted; (2) ena-
ble direct interactions with users including face-to-face conversations during the UAT
event and access to user computer screens for configuration and validation; and (3)
utilize quality management software that can be used seamlessly by all involved
in UAT.
To examine these questions, we recognize the need to resolve the complexity of
communication challenges among technology facilitators and business users. We
draw on Social Presence Theory (SPT) to establish a conceptual lens for addressing
these research questions. Traditionally, SPT classifies different communication media
along a continuum of social presence. Social presence (SP) reflects the degree of
awareness one person has of another person when interacting (Sallnas et al., 2000).
People utilize many communication styles when face-to-face (impression leaving, contentiousness, openness, dramatic existence, domination, precision, relaxed flair, friendliness, attentiveness, animation, and image managing; Norton, 1986) or online (affective, interactive, and cohesive; Rourke et al., 2007). SPT holds that the
communication environment must enable people to adopt the appropriate level of
social presence required for that task. This study proposes a theoretically-derived
examination based on SPT of facilitated UAT delineating when and how facilitators
should involve actual system users in the UAT activities either through local facilita-
tion or remote hosting of UAT exercises, among other options.
2 Theoretical Background
To examine the challenges of facilitating actual system users in UAT events, SPT
incorporates a cross-section of concepts from social interdependence and media rich-
ness theories. SPT promotes that through discourse, intimacy and immediacy create a
degree of salience or being there between the parties involved (Lowenthal, 2010).
Researchers have found perception of the other party’s presence is more important
than the capabilities of the communications medium (Garrison et al., 2000). Thus,
UAT events will need to enable the appropriate level of SP for users to learn their role
in UAT and execute testing activities.
Facilitating users in remotely-hosted UAT events draws similarities to online
teaching activities. The similarities emanate from both activities comprising novice
users working with expert facilitators to learn new knowledge, tackle new skills, and
express confusion and questions in text-written print. SP has been established as a
critical component of online teaching success. Table 1 encapsulates select research in
the online teaching domain, illustrating the growing support for designing courses and
maintaining a personal presence to influence student satisfaction and learning. This
research helps us identify factors needed for user success in an online UAT event
context. SP largely reflects the trust-building relationship a facilitator or instructor
creates with users or students. SP is more easily developed in richer, face-to-face media settings; however, SP can be encouraged in leaner, computer-mediated media settings as well.
Table 1. Select research on social presence in online teaching (excerpt)
… | Instructor immediacy
Aragon, 2003 | Course design, instructor, and participant strategies | Creating a platform for SP: instructors can establish and maintain SP, encouraging student participation
Research examining UAT activities suggests that both the facilitator and the users
need face-to-face communication options when the system under test is newly developed
(Larson, 1995). A typical UAT timeline involves: a system that is almost fully
developed; user guides and training materials developed by the technology group;
business analyst review of and input on these materials, followed by drafting of the
test scripts; users performing tests based on the scripts as well as through open,
unscripted use; and users reporting issues to the business analyst, who reviews and
logs the appropriate defects for the development team to address. This is repeated
until the users sign off that the system works as needed (Larson, 1995). Research
illustrates that the UAT process can be improved when users are able to engage in
direct interactions with both the business analyst and the development teams as
questions arise (Larson, 1995).
Facilitated testing by the actual users can be implemented in three ways (Seffah and
Habieb-Mammar, 2009): (1) require remote users to travel to a local facility; (2) send
the facilitator to the remote locations; or (3) have a facilitator at the local
facility conduct computer-mediated conferencing (CMC) with users at remote locations.
Each of these approaches establishes a different communication environment. SPT
suggests that local facilitation and remote hosting of UAT exercises will require
different dimensions of where and how facilitators should involve users in UAT
activities. Table 2 summarizes researchers' views on facilitated UAT approaches and
how SPT attributes are expected to affect the three UAT approaches, based on studies
of SP in online teaching. Remote users travelling to the local facility and the
facilitator travelling to remote locations are treated as the same in Table 2, as both
resemble an instructor teaching students face to face, while remote UAT is compared
with online teaching. As Table 2 illustrates, attributes of SP tend to be low for
remote UAT events because face-to-face communication is highly advantageous for
establishing high SP. Research on online learning also shows that SP can nevertheless
be high when it is deliberately established through techniques such as incentives and
course design.
Used mostly in research examining online education, SPT informs remote communication
environments by examining the way people represent themselves online through the way
information is shared (e.g., how messages are posted and interpreted by others) and
how people relate to each other (Kehrwald, 2008). When face-to-face, people use
everyday skills to share information through multiple cues, drawing on the rich
nonverbal communication inherent in tone of voice and facial expression. Richer
communication allows individuals to provide and respond to the sight, sound, and smell
of others, which inherently provides an awareness of the presence of others
(Mehrabian, 1969). Online information sharing lacks the cues needed to create such
awareness; it offers the ability to discuss information but not to connect or bond
with others on a more personal level (Sproull and Kiesler, 1986). Research studies of
online education have found that the lack of SP impedes interactions and, as a result,
hinders student learning performance (Wei et al., 2012).
One proposed solution is to combine the use of both asynchronous (pre-produced content
accessed by users when needed) and synchronous (real-time, concurrent audio and video
connections) components, with the synchronous efforts providing a much fuller social
exchange and greatly increasing the potential for SP. Thus, SP is an important factor
in information exchange when learning and performance are required, as is the case for
user participation in UAT events.
3 Research Methodology
The research methodology follows a qualitative approach, gathering case study data on
UAT practices in order to provide descriptive and explanatory insights into the
management activities in software development work. This approach has been used
successfully in prior research (Pettigrew, 1990; Sutton, 1997) and allows us to induce
a theoretical account of the activities found in empirical observations and in the
analysis of team members' viewpoints. The approach is also known to lead to accurate
and useful results because it incorporates an understanding of the contextual
complexities of the environment into the research analysis and outcomes. Finally, it
encourages a holistic, systemic view of the issues and circumstances of the situation
being addressed, in this case the issues of managing development projects from team
members' perspectives on their testing practices (Checkland et al., 2007; Yin, 1989).
To identify the practices, we selected a large multinational Fortune 500 company known
to have successful UAT events. The focus of our study is specific to the UAT practices
of large-scale, complex, globally deployed software development projects.
4 Data Collection
The results reported in the present study are based on interviews with UAT facilita-
tors. Our data gathering began with the creation of semi-structured interview proto-
cols which comprised both closed and open-ended questions. To inform our interview
question development, we reviewed documentation about the company, and held
background discussions with company personnel. The data collection methods em-
ployed focused on interviewees’ perspectives on UAT issues, roles played by various
stakeholders involved, and the challenges of incorporating actual systems users in the
process. Face-to-face interviews of approximately 1 to 1.5 hours were conducted with
various project stakeholders. The goal of these interviews was to identify and better
understand the issues related to UAT. In total, we interviewed 8 stakeholders. Inter-
views were conducted between November 2013 and January 2014, with additional
follow-up clarification Q&A sessions conducted over e-mail. Job descriptions of
those interviewed are shown in Table 3.
5 Findings
In this research, we gathered and analyzed interview data from a large multinational
company with multiple stakeholders in UAT events, along with best practices from the
research literature. From these data sources, we next address the research questions
proposed earlier to offer insights about managing UAT events. For completely new,
complex systems and novice UAT participants, SP will be a critical factor enabling
better testing outcomes. In this case, facilitators should schedule user participation
locally at the testing location, where face-to-face interactions can occur. While
remaining cognizant of the need to minimize the impact on users' regular work duties
and to avoid imposing work requirements outside regular working hours, these events
can be concentrated into a shorter timeframe and administered more efficiently when
everyone is together. Accommodating users locally maximizes their ability to be
present and undistracted when testing. Complicated tasks and difficult questions can
be addressed and communicated more readily. Additionally, peer-to-peer face-to-face
learning can be enabled, which has been shown to improve outcomes (Tu, 2000).
Media richness theory has long held that richer media are the key to building trusting
relationships (Campbell, 2000). It suggests settings should be assessed on how well
they support the ability of communicating parties to discern multiple information cues
simultaneously, enable rapid feedback, establish a personal message, and use natural
language. Media run along a continuum from rich face-to-face settings to lean written
documents. Thus, consistent with the above, for completely new, complex systems and
novice UAT participants, richer media settings are needed to enable direct
interactions with users, including face-to-face conversations during the UAT event and
access to user computer screens for configuration and validation. Richer settings also
enable facilitators to collaborate with and train users to improve information
sharing. Furthermore, peer-to-peer learning and the immediacy of replies to requests
for help and answers enable a more productive UAT outcome. When users are located at
distant remote sites, time lags between queries and answers impede productivity and
dedication to the task.
Quality management software (QMS) enables standard procedures and processes, effective
control, maintainability, and higher product quality at a reduced cost (Ludmer, 1969).
In our interviews with facilitators and user acceptance testers, we found that QMS
plays a critical role in performing UAT. UAT testers use QMSs to read and execute test
scripts, input the results of their tests, log defects, and verify that defects are
fixed. Facilitators use QMSs to write test scripts, review the results of test runs,
and track, prioritize, and assign defects to developers. In summary, QMS serves as a
common platform for facilitators and UAT testers.
Facilitators are tasked with training non-technical business users on how to use QMS
technical tools. Globally available QMSs include HP Quality Center and IBM Rational
Quality Manager. These tools offer extensive multilingual support, with study
materials, user guides, and social networking communities. The next step in this
research is to determine how to replicate the SP created in a face-to-face UAT event
within a remote UAT experience.
References
1. Aragon, S.R.: Creating social presence in online environments. New Directions for Adult
and Continuing Education (100), 57–68 (2003)
2. Axtell, C.M., Waterson, P.E., Clegg, C.W.: Problems Integrating User Participation into
Software Development. International Journal of Human-Computer Studies, 323–345
(1997)
24. Norton, R.W.: Communicator Style in Teaching: Giving Good Form to Content. Commu-
nicating in College Classrooms (26), 33–40 (1986)
25. Pettigrew, A.M.: Longitudinal Field Research on Change: Theory and Practice. Organiza-
tion Science 1(3), 267–292 (1990)
26. Picciano, A.: Beyond student perceptions: Issues of interaction, presence, and performance
in an online course. Journal of Asynchronous Learning Networks 6(1), 21–40 (2002)
27. Richardson, J.C., Swan, K.: Examining social presence in online courses in relation to stu-
dents’ perceived learning and satisfaction. Journal of Asynchronous Learning Net-
works 7(1), 68–88 (2003)
28. Rourke, L., Anderson, T., Garrison, D.R., Archer, W.: Assessing social presence in asyn-
chronous text-based computer conferencing. The Journal of Distance Education/Revue de
l’Éducation à Distance 14(2), 50–71 (2007)
29. Russo, T., Benson, S.: Learning with invisible others: Perceptions of online presence
and their relationship to cognitive and effective learning. Educational Technology and
Society 8(1), 54–62 (2005)
30. Sallnas, E.L., Rassmus-Grohn, K., Sjostrom, C.: Supporting presence in collaborative
environments by haptic force feedback. ACM Transactions on Computer-Human Interac-
tion 7(4), 461–467 (2000)
31. Seffah, A., Habieb-Mammar, H.: Usability engineering laboratories: Limitations and
challenges toward a unifying tools/practices environment. Behaviour & Information Tech-
nology 28(3), 281–291 (2009)
32. Sproull, L., Kiesler, S.: Reducing social context cues: Electronic mail in organizational
communication. Management Science 32(11), 1492–1513 (1986)
33. Sutton, R.I.: Crossroads-The Virtues of Closet Qualitative Research. Organization
Science 8(1), 97–106 (1997)
34. Swan, K., Shih, L.F.: On the nature of development of social presence in online course
discussion. Journal of Asynchronous Learning Networks 9(3), 115–136 (2005)
35. Tu, C.H.: Online learning migration: From social learning theory to social presence theory
in a CMC environment. Journal of Network and Computer Applications 2, 27–37 (2000)
36. Tu, C.H., McIsaac, M.: The relationship of social presence and interaction in online
classes. The American Journal of Distance Education 16(3), 131–150 (2002)
37. Walther, J.B., Burgoon, J.K.: Relational communication in computer-mediated interaction.
Human Communication Research 19(1), 50–88 (1992)
38. Wei, C., Chen, N., Kinshuk: A model for social presence in online classrooms. Education-
al Technology Research and Development 60(3), 529–545 (2012)
39. Yin, R.K.: Case Study Research: Design and Methods. Sage Publications, Beverly Hills
(1984)
How to Improve Customer Relationship Management
in Air Transportation Using Case-Based Reasoning
Abstract. This paper describes research that aims to provide a new strategy for
Customer Relationship Management (CRM) in air transportation. It presents our
proposed approach, based on Knowledge Management (KM) processes, Enterprise Risk
Management (ERM), and Case-Based Reasoning (CBR), which aims to mitigate the risks
faced in the air transportation process. The principle of the method is to treat a
new risk by drawing on previous experiences (reference cases). This type of
reasoning rests on the following hypothesis: if a past risk and the new one are
sufficiently similar, then whatever can be explained about or applied to the past
risks or experiences (the case base) remains valid when applied to the new risk or
to the new situation that represents the new risk or problem to be solved. The idea
of this approach is to predict an adapted solution based on the existing risks in
the case base that share the same context.
1 Introduction
Risks facing customers in air transportation can be avoided and reduced through
preemptive action [1], [2]. Examples include death; injuries from turbulence and
baggage; dissatisfaction; poor provision of information; poor communication;
misunderstandings; noise and restricted mobility; poor cabin cleaning; poor service
quality; poor presentation of safety rules; lost or missing baggage; customer
discomfort; and lack of respect. Generally, these risks have a great impact on
achieving the organization's objectives. In this context, the aim of our approach is
to mitigate these dangers based on the interaction between Enterprise Risk Management
(ERM) and KM, using CBR. The idea is to deal with all the risks that may affect the
customer during the air transportation process, from the customer's registration
through to post-journey analytics and feedback. Furthermore, the approach also
endeavors to create new opportunities in order to enhance the organization's capacity
to build perceived value for its customers.
Based on KM processes [3], our method has four phases (Fig. 1): (1) Knowledge
creation and sharing phase, (2) Knowledge analyzing phase, (3) Knowledge storage
phase, (4) Knowledge application and transfer phase.
The purpose of this phase is to identify the risks causing customer dissatisfaction.
It includes the following two steps:
Formulate New Request. The employee faces a risk and wants to know how to solve it.
He or she formulates a request to the system specifying the risk. The system treats
the request using the CBR method and answers the employee with an appropriate
solution, adapted to his or her context using fuzzy logic.
Step 1: Selecting the similar cases. This step is based on contextual filtering. The
system uses the characteristics of the context in order to compare the new case (NC)
with the existing cases (EC) using the following formulas. Each context is
characterized by an attribute set {a1i, a2i, …, ani} whose size differs from one risk
to another. Two contexts are similar if their attributes are respectively similar:
C1 = a11 ∪ a21 ∪ … ∪ an1 and C2 = a12 ∪ a22 ∪ … ∪ an2.

Sim(NC; EC) = (SimR(Ri; Rj), SimC(Ci; Cj)) (2)

SimC(Ci; Cj) = (Sim(a1i; a1j), Sim(a2i; a2j), …, Sim(ani; anj)) (3)

where an represents an attribute that characterizes the context C, and i and j index
two different contexts relative to the same risk. Substituting (3) into (2) gives:

Sim(NC; EC) = (SimR(Ri; Rj), Sim(a1i; a1j), Sim(a2i; a2j), …, Sim(ani; anj)) (4)

1 Communities of Practice (CoP) are a technique used in KM; the purpose is to connect
people with a specific objective who voluntarily want to share knowledge [4].
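To make the retrieval step concrete, the following minimal sketch illustrates contextual filtering; the data model and the attribute comparator are our own assumptions, not the authors' implementation:

```python
# Minimal sketch of contextual case retrieval (hypothetical data model).
# Assumes each attribute similarity is a normalized score in [0, 1].

def attribute_similarity(a_new, a_old):
    """Placeholder comparator; a real system would use a
    domain-specific similarity measure per attribute type."""
    return 1.0 if a_new == a_old else 0.0

def context_similarity(ctx_new, ctx_old):
    """Eq. (3): the vector of attribute similarities, averaged here
    into a single score so cases can be ranked."""
    scores = [attribute_similarity(a, b) for a, b in zip(ctx_new, ctx_old)]
    return sum(scores) / len(scores)

def retrieve_similar_cases(new_case, case_base, top_k=3):
    """Eqs. (2)/(4): keep cases for a similar risk (risk similarity is
    simplified to equality here), then rank them by context similarity."""
    candidates = [c for c in case_base if c["risk"] == new_case["risk"]]
    ranked = sorted(
        candidates,
        key=lambda c: context_similarity(new_case["context"], c["context"]),
        reverse=True,
    )
    return ranked[:top_k]

# Hypothetical case base; contexts are (wind, pressure) tuples.
case_base = [
    {"risk": "hurricane", "context": (280, 902), "solution": "S1"},
    {"risk": "hurricane", "context": (175, 920), "solution": "S2"},
]
new_case = {"risk": "hurricane", "context": (280, 902)}
print(retrieve_similar_cases(new_case, case_base, top_k=1))
```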
Step 2: Adapting the new solution. Based on the selected cases, the idea is to propose
a solution adapted to the new context. It is a combination of several parts of the
solutions (Si, Sj, …) from the most similar cases. To this end, this step is segmented
into three levels: fuzzification, fuzzy inference, and defuzzification.
Fuzzy inference. It aims to assess the contributions of all active rules. The fuzzy
inference is driven by a rule base. Each fuzzy rule expresses a relationship between
the input variables (the context attribute similarities Sim) and the output variable
(the relevance of the solution, S). The fuzzy rules in our approach take the form:
If (Sim is A) Then (S is B)
where Sim is the correlated context attribute similarity (the premise of the rule), S
is the relevance of the solution (the conclusion of the rule), and A and B are
linguistic terms determined by the fuzzy sets.
In the Mamdani model, implication and aggregation are the two parts of the fuzzy
inference. It is based on the use of the minimum operator ("min") for implication and
the maximum operator ("max") for the aggregation of rules.

F(ri, ci, si) = … (6)

where F(ri, ci, si) is the function associated with the case ci, µ(s) is the
membership function of the output variable si, and ri denotes the rules. The fuzzy
inference releases a sorted list of relevant solutions LF:

LF = {(si, F(ri, ci, si)) | (ri, ci, si) ∈ Bc}

The informational content SI is the part of a relevant solution, drawn from the sorted
list LF, that maximizes the similarity Sim correlated to the retrieved case. The
solution recommended to the user is a combination of solution pairs (SI).
When an employee is faced with a new risk, he or she can formulate a new request in
order to find an appropriate solution. Figure 3 presents the interface that can be
used by an employee.
Step 2: Adapting the new solution. This step is divided into the following three levels:
Fuzzification. The fuzzifier maps two input numbers (Sim(wind) and Sim(pressure)) into
fuzzy memberships. The universe of discourse is represented by U = [0, 1]. We propose
Low, Medium, and High as the set of linguistic terms. The membership function
implemented for Sim(wind) and Sim(pressure) is trapezoidal.
Figure 5 describes the partition into fuzzy classes; it divides the universe of
discourse of each linguistic variable into fuzzy classes. The partition is the same
for all the linguistic variables: Low [-0.36, -0.04, 0.04, 0.36], Medium [0.14, 0.46,
0.54, 0.86], High [0.64, 0.96, 1.04, 1.36].
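For illustration, a trapezoidal membership function using these class parameters could be sketched as follows; this is our own minimal example, not code from the paper:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], is 1 on [b, c],
    falls on [c, d], and is 0 outside [a, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Fuzzy classes over the universe of discourse U = [0, 1],
# with the corner parameters given in the text.
CLASSES = {
    "Low":    (-0.36, -0.04, 0.04, 0.36),
    "Medium": (0.14, 0.46, 0.54, 0.86),
    "High":   (0.64, 0.96, 1.04, 1.36),
}

def fuzzify(sim):
    """Map a crisp similarity value to membership degrees per class."""
    return {name: trapezoid(sim, *params) for name, params in CLASSES.items()}

print(fuzzify(0.5))  # {'Low': 0.0, 'Medium': 1.0, 'High': 0.0}
```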
Fuzzy inference. It defines the mapping from input fuzzy sets to output fuzzy sets
based on the active rules (cf. Fig. 6). The number of rules in this case is
3² × 1 = 9 rules.
Defuzzification. It is based on the Mamdani model (cf. Fig. 7), which applies the
center-of-gravity method to the set of rules evaluated in the fuzzy inference. It maps
the output fuzzy sets into crisp values.
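The min implication, max aggregation, and center-of-gravity defuzzification described above might be sketched as follows on a discretized output universe; the example rule and output fuzzy set are assumed, since the paper's rule base is not reproduced here:

```python
import numpy as np

# Discretized output universe for the solution-relevance variable S.
s_grid = np.linspace(0.0, 1.0, 101)

def mamdani(rules, memberships):
    """rules: list of (input_class, output_membership_over_s_grid).
    memberships: degree of each input class from fuzzification.
    Implication uses min; aggregation uses max (Mamdani model)."""
    aggregated = np.zeros_like(s_grid)
    for input_class, out_mf in rules:
        firing = memberships.get(input_class, 0.0)
        clipped = np.minimum(firing, out_mf)          # min implication
        aggregated = np.maximum(aggregated, clipped)  # max aggregation
    return aggregated

def centroid(aggregated):
    """Center-of-gravity defuzzification: returns a crisp value."""
    if aggregated.sum() == 0:
        return 0.0
    return float((s_grid * aggregated).sum() / aggregated.sum())

# Example with one assumed rule: "If Sim is High then S is High".
high_out = np.clip((s_grid - 0.6) / 0.3, 0.0, 1.0)  # assumed output fuzzy set
agg = mamdani([("High", high_out)], {"High": 0.8})
print(round(centroid(agg), 3))
```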
For the example of Hurricane Katrina, the solution is adapted from the solution for
Hurricane Charley (wind = 280, pressure = 902), with F = 0.387.
At this level of our work, the adapted solution resulting from the previous phase will
be evaluated by an expert. The validated solutions will then be retained in the case base.
Training and lessons-learned sessions will be established for the employees based on
the case base retained from the previous phase. The purpose of this process is to
exploit previous experiences in order to improve the intellectual capital and
competences of the employees and to facilitate the management of risks causing
customer dissatisfaction.
4 Conclusion
In this paper, we presented a generic approach based on the interaction between two
disciplines, KM and ERM, using CBR and fuzzy logic, in order to enhance CRM in air
transportation (AT): first, by identifying risks causing customer dissatisfaction;
second, by proposing new solutions responding to the risks faced at all touch points
of the AT process; and finally, by establishing a learning process for employees based
on previous experiences (risks and solutions). A challenge for future research will be
to refine the optimization of the adapted solution using a genetic algorithm.
References
1. Monahan, G.: Enterprise Risk Management: A Methodology for Achieving Strategic Objec-
tives. John Wiley & Sons Inc., New Jersey (2008)
2. International Organization for Standardization, ISO (2009)
3. Alavi, M., Leidner, D.: Review: Knowledge management and knowledge management sys-
tems: Conceptual foundations and research issues. MIS Quarterly 25(1), 107–136 (2001)
4. Rodriguez, E., Edwards, J.S.: Before and After Modeling: Risk Knowledge Management is
required, Society of Actuaries. Paper presented at the 6th Annual Premier Global Event on
ERM, Chicago (2008)
5. Coyle, L., Cunningham, P., Hayes, C.: A Case-Based Personal Travel Assistant for
Elaborating User Requirements and Assessing Offers. In: 6th European Conference on Ad-
vances in Case-Based Reasoning, ECCBR, Aberdeen Scotland, UK (2002)
6. Lajmi, S., Ghedira, C., Ghedira, K.: CBR Method for Web Service Composition. In:
Damiani, E., Yetongnon, K., Chbeir, R., Dipanda, A. (eds.) SITIS 2006. LNCS, vol. 4879,
pp. 314–326. Springer, Heidelberg (2009)
7. Aamodt, A.: Towards robust expert systems that learn from experience: An architectural
framework. In: Boose, J., Gaines, B., Ganascia, J.-G. (eds.) EKAW-89: Third European
Knowledge Acquisition for Knowledge-Based Systems Workshop, Paris, pp. 311–326
(July 1989)
Toward a Faithful Bidding
of Web Advertisement
1 Introduction
Web marketing is a key activity of e-commerce today. With the proliferation of
Internet technology, the available Internet marketing data have become huge and
complex, and efficient use of such large data maximizes the profit of web marketing.
Although a variety of studies motivated by this background exist, such as [1], [2],
[3], actual business practice still relies on operators' know-how; there remains
room for improvement in data usage.
For example, Fig. 1 shows how operators working for an advertising agency make their
decisions on advertisement. They decide the allocation of the advertising budget
using Fig. 1. The X-axis is the number of past actions by the customers; here,
actions are typically web clicks toward the purchase or the installation of software.
One target of the advertising agency is the maximization of these actions. The Y-axis
is the budget (cost) used to advertise web pages for the purchase and the software
installation; another target of the advertising agency is the minimization of this
cost. Cost-effectiveness, typically calculated as X/Y (i.e., actions/costs), is
therefore important.
An example of the know-how we gathered in interviews with operators of an advertising
agency is: "If the current web advertisement is laid out on the lower right segment,
increase the budget, since the past advertisement worked well (having high cost
efficiency)." This know-how is reasonable if the amount of data is sufficient and
reliable.
Fig. 1. Operation map of an advertising agency. The X-axis ("actions") is the number of past actions by the customers, such as web clicks toward the purchase and installation of software; the Y-axis ("spent") is the budget used to advertise web pages for the actions.

Fig. 2. The number of clicks for each web advertisement (log scale, 1 to 1,000,000) plotted against the rank of the web advertisement in click order; most advertisements have little data and are unreliable.
However, we have found that operators do not have enough data in most cases. Fig. 2
shows this finding: its Y-axis shows the number of clicks for each web advertisement,
and its X-axis shows the rank of the advertisement in click order. Although the total
amount of data is large, most of the points plotted in Fig. 1 are backed by little
data and are statistically unreliable, because operators use overly fine-grained
attributes when plotting data in Fig. 1. In this study, we propose a method to enlarge
the amount of data behind each point in Fig. 1. This enlargement increases the
statistical reliability of the data and thereby the adequacy of the operators'
judgments.
Here, c is the number of observed actions (i.e., purchases or software installations),
and n is the number of observed clicks that users made on the advertisement. p is the
true value of c/n, s is the approximate percentile point of the normal distribution,
and E is the error of the estimate c/n. To calculate a 95% confidence interval
(α = 0.05), we set s to 1.96 in this paper. In the rest of this paper, we propose a
method that builds clusters of similar advertisements whose error E, calculated by
Eq. (2), is small. By using clusters with small error, we aim to realize faithful
bidding.
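Eqs. (2) and (3) themselves are not reproduced in this excerpt. One plausible reading, consistent with the definitions above, is that E is the relative half-width of the normal-approximation confidence interval for c/n; a minimal sketch under that assumption:

```python
import math

def relative_error(c, n, s=1.96):
    """One plausible reading of E: the relative half-width of the
    normal-approximation 95% interval for the action rate p = c/n.
    (Assumed form; Eqs. (2)-(3) are not shown in this excerpt.)"""
    p_hat = c / n
    half_width = s * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return half_width / p_hat

# Pooling clicks across a cluster of similar advertisements
# shrinks the error and makes cost-effectiveness judgments reliable.
print(round(relative_error(c=3, n=200), 2))      # single ad: large E
print(round(relative_error(c=300, n=20000), 2))  # pooled cluster: small E
```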
f(c_i) = \frac{\mu_i^{c_i} e^{-\mu_i}}{c_i!} \qquad (4)

\mu_i = n_i \, e^{\beta_0 + \sum_{j=1}^{J} \beta_j x_{ij}} \qquad (5)

\log \mu_i = \log n_i + \beta_0 + \sum_{j=1}^{J} \beta_j x_{ij} \qquad (6)
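Eq. (6) is a standard log-linear Poisson regression with log n_i entering as an offset. A minimal fitting sketch on synthetic data, using statsmodels as one possible tool (the authors do not name their implementation):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic data: clicks n_i, binary attributes x_ij, observed actions c_i.
n = rng.integers(50, 5000, size=200).astype(float)
X = rng.integers(0, 2, size=(200, 3)).astype(float)
true_beta = np.array([0.4, -0.3, 0.2])
mu = n * np.exp(-4.0 + X @ true_beta)
c = rng.poisson(mu)

# Eq. (6): log mu_i = log n_i + beta_0 + sum_j beta_j x_ij
design = sm.add_constant(X)
model = sm.GLM(c, design, family=sm.families.Poisson(), offset=np.log(n))
result = model.fit()
print(result.params)  # estimates of beta_0, beta_1, ..., beta_J
```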
Attribute | Value
Age | e.g., 10-19, 20-29, …
Region | e.g., Tokyo, Osaka, …
Sex | Male, Female
User Interest | e.g., Fashion, Sports, …
Contents | e.g., Movie, Music, …
3 Experimental Results
To shows the advantage of the proposed method, we have applied the proposed
method on the data shown in Fig.2. Fig.3 and 4 show results. Fig.3 shows the
116 T. Uchida, K. Ozaki, and K. Yoshida
estimated error for the cluster of advertisements. Here clusters are formed by
grouping advertisements with same attributes. All the attributes are used to
make clusters for Fig.3. X-axis shows the errors of clusters. It is E calculated
by Eq. (3). Y-axis shows the number of actions gained by the advertisement
(actions share). It also shows the total cost for the advertisement (spent share)
and cost-effectiveness (actions share/spent share). For example, the height of
left most histograms indicates low error rate ( E<0.2, i.e. error<0.2 ). The cus-
tomer actions won by corresponding advertisements are 62% with error rate less
than 0.2. Although the use of budget on this segment seems to be reasonable,
the clusters made with all attributes fail in allocating budget on this segment.
Actually, budget used on the same advertisements is only 32%. This result shown
in Fig.3 shows our start point of improvement.
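To illustrate the clustering step, grouping advertisements that share the same attribute values and pooling their clicks and actions, here is a minimal pandas sketch; the column names and data are our assumptions:

```python
import pandas as pd

# Hypothetical per-advertisement records with two attributes.
ads = pd.DataFrame({
    "age": ["10-19", "10-19", "20-29", "20-29"],
    "sex": ["F", "F", "M", "M"],
    "clicks": [120, 80, 900, 1100],
    "actions": [2, 1, 15, 22],
})

# Pool data within clusters of advertisements sharing the same attributes.
clusters = ads.groupby(["age", "sex"], as_index=False)[["clicks", "actions"]].sum()
clusters["p_hat"] = clusters["actions"] / clusters["clicks"]
print(clusters)
```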
Fig. 3. Advertising result data (the number of advertisements is 35,733). Bars show actions share and spent share; the line shows cost-effectiveness. X-axis: max error rate of measured cost-effectiveness in the 95% confidence interval (E), binned as ≤0.2, ≤0.4, ≤0.6, ≤0.8, ≤1.0, ≤1.2, and >1.2.
Fig. 4 shows the improvement achieved by our proposed method. The data shown in Fig. 2
are based on 35,733 advertisements from 177 clients. To produce Fig. 4, we applied the
method to the data of 177 advertisements of one client company. We used the data of
only one client because the value of actions/clicks varies by industry; for example,
the value of actions/clicks for cosmetics is far larger than that for real estate, and
mixing the results of such industries would make the figure unclear. In Fig. 4, the
X-axis is the error of the estimated cost-effectiveness of operations (E of Eq. (3)).
The Y-axis is the cost-effectiveness (total actions/total cost of the advertisements).
The size of each circle is the spent share (total cost of the advertisement cluster /
total cost of all advertisements). Fig. 4 (a) shows the results of clusters formed
with all attributes xij (i.e., the starting point). Fig. 4 (b) shows the results of
clusters formed with attributes xij selected by the proposed method. Fig. 4 (c) shows
the results of clusters formed with randomly selected attributes xij, for comparison
purposes.
Fig. 4. Result data of one advertiser: cost-effectiveness versus max error rate of measured cost-effectiveness in the 95% confidence interval (E); circle size indicates cost share. (a) Full attributes: no advertisement cluster is reliable, because each cluster has little data. (b) Only attributes selected by our method: 86% of cost becomes reliable, and operators can discover both effective and ineffective clusters that are sufficiently reliable, so they can decide on allocation. (c) Randomly selected attributes: 78% of cost becomes reliable, but the effectiveness of each sufficiently reliable cluster is similar, so operators cannot decide on allocation.
As the figures show, using all attributes produces too many clusters, and every
cluster has E larger than 0.2 (see Fig. 4 (a)). Although Fig. 4 (a) reflects slightly
larger clusters than those used in Fig. 2, no cluster has E less than 0.2. From a
practical viewpoint, E larger than 0.2 is too large; thus none of the results shown in
Fig. 2 has enough accuracy either. On the contrary, 86% of the results in Fig. 4 (b)
have E less than 0.2, which is a clear improvement in accuracy. Moreover, Fig. 4 (b)
shows that the one large cluster containing 52% of the advertisements has a clear
cost-effectiveness advantage over the other large cluster containing 34%. Note that
this improvement cannot be achieved by random attribute selection (Fig. 4 (c)):
although 78% of the results in Fig. 4 (c) have E less than 0.2, the resulting clusters
show no clear cost-effectiveness differences, so the results with randomly selected
attributes cannot be used.
4 Conclusion
In this paper, we have proposed a method to realize faithful bidding of web
advertisement. The characteristics of the proposed method are:
References
1. Schlosser, A.E., Shavitt, S., Kanfer, A.: Survey of Internet users’ attitudes toward
Internet advertising. Journal of Interactive Marketing 13(3), 34–54 (1999)
2. Manchanda, P., Dube, J.-P., Goh, K.Y., Chintagunta, P.K.: The Effect of Banner
Advertising on Internet Purchasing. Journal of Marketing Research 43(1), 98–108
(2006)
3. Shabbir, G., Niazi, K., Siddiqui, J., Shah, B.A., Hunjra, A.I.: Effective advertising
and its influence on consumer buying behavior. MPRA Paper No. 40689 (August
2012)
4. Hogg, R.V., McKean, J.W., Craig, A.T.: Introduction to Mathematical Statistics,
6th edn. Pearson Education, Inc. (June 2004)
5. Dobson, A.J.: An Introduction to Generalized Linear Models, ch. 9, 3rd edn. Chap-
man and Hall/CRC (November 2001)
Social Media for Business
An Evaluation Scheme for Performance Measurement
of Facebook Use
An Example of Social Organizations in Vienna
1 Introduction
Online social networks have evolved from a niche to a mass phenomenon that epito-
mizes the digital era [1]. With a daily average use of 30 to 60 minutes [2] by one
billion users [3], the world’s largest social network, Facebook, has become an integral
part of everyday life [3]. In recent years, organizations have recognized the impor-
tance of using Facebook to achieve their organizational goals. Research on the use of
Facebook tends to focus on for-profit companies or end users, and rarely investigates
how social organizations use Facebook, especially in German-speaking regions.
The few existing studies mainly discuss the general importance of social media for
social organizations (e.g., [4-6]). Because these studies commonly use qualitative
research methods, there are few quantitative results on the use of Facebook in social
organizations. For example, Waters [7] investigated the use of social media in
non-profit organizations. The analysis of expert interviews and focus groups showed
that social organizations use Facebook to build and maintain relationships with their
stakeholders. Other studies, in contrast, have revealed that social organizations use
Facebook primarily to describe the organization but do not leverage the interaction
possibilities and networking opportunities that Facebook offers. Furthermore, research
shows that the majority of social organizations start using social media without an
integrated social media strategy or a sophisticated Facebook strategy. Most studies on
Facebook use in social organizations come from the United States (e.g., [8]); in
German-speaking regions, empirical research on this topic is scarce. Annually since
2009, Kiefer [5] has investigated the use of online social networks in a
cross-sectional study of 60 German non-profit organizations [5, 9, 10]; however, this
research only considers organizations in three fields of practice
(environmental/nature protection, international affairs, social affairs). While
Kiefer's work identifies Facebook as the strongest online social network among
non-profit organizations, it has not yielded profound insights about the use and
development potential of online social networks. To date, there is no scientific work
based on real data that investigates the use and development potential of Facebook for
social organizations.
Against this background, the present article is dedicated to the following research
questions: How can the use of Facebook be evaluated in terms of performance
measurement? How do social organizations perform with respect to their use of
Facebook? To what extent are these organizations utilizing Facebook's potential? This
article introduces an evaluation scheme that includes nine categories of performance
measurement. Using social organizations in Vienna as our example, we demonstrate the
scheme's applicability and, with various indicators and benchmarks, we evaluate the
level of sophistication of each organization's use of Facebook. We investigated all
social organizations based in Vienna (N=517), including those in all fields of
practice, based on publicly available Facebook data from 1 January 2012 to 30 June
2012. We analyzed the organizations' use of the various Facebook functionalities as
well as the 2479 publicly available Facebook posts for the respective time period.
Due to the topic's relevance and the lack of comparative studies, this research
contributes to both science and practice. The next section presents a literature
review of Facebook use by non-profit organizations and discusses performance
measurement of this use. Subsequently, the data collection is described and the
research results and evaluation scheme are presented. Finally, research results are
discussed and new fields of research are identified.
2 Related Work
In this section, we present related work concerning online social networks, with a
focus on Facebook use by non-profit organizations. Then, we describe performance
metrics for measuring the success of a Facebook page for social organizations.
3 Research Procedure
In order to answer the research questions, we conducted an empirical study of Face-
book use among social organizations in Vienna. Our analysis is based on publicly
accessible data, from which we calculated the various performance metrics.
We compiled our sample from the Viennese register of organizations in the field of
Social Affairs and Consumer Protection; this resulted in a set of 1682 social
organizations based in Vienna (retrieved on 12 April 2012). After removing
organizations that were assigned to multiple fields of practice from the data set, we
had a list of 517 social organizations. 25 organizations were removed from the list
because they were either not within the scope of Dimmel's [16] definition of a social
organization or had already closed. Then, for every organization on the list, we
investigated whether it had registered a Facebook page. Only 73 of the 492 (14.8%)
social organizations in Vienna had their own Facebook page. For 127 (25.8%)
organizations, the umbrella organization or the carrier of the organization operated
the Facebook page. 18 organizations used Facebook via a "Facebook personal profile"
and 104 via a "Facebook Community". 292 social organizations (59.4%) did not have a
Facebook page.
3.2 Coding Schemes for the Analysis of Facebook Pages and Posts
The coding scheme for the analysis of the Facebook pages was developed ex ante
based on Waters, Burnett, Lamm and Lucas [8]1. Using this coding scheme, the vari-
ous applications within Facebook (e.g., “information”, “views”, and “applications”)
were analyzed. In addition, the Facebook pages were analyzed to determine which
applications, out of all those offered, were used by the social organizations. Further-
more, for deeper insights into how social organizations use Facebook, we conducted a
content analysis of the posts in the organizations’ Facebook timelines (all posts from
1 January 2012 to 30 June 2012). The coding scheme was developed inductively from
raw data and was adapted during the coding phase. For every Facebook post, we cap-
tured a formal description and a description of the content. The formal information
included the date of the entry, the number of “likes”, the number of comments, and
the sharing frequency of the post within Facebook. Regarding the content of posts, we
recorded whether the posts were manually entered or automatically retrieved (for
instance via other online social networks), and whether they contained links, photos,
videos, or audio files. Finally, we classified all Facebook posts by topic.
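To make the coding scheme concrete, each coded post could be represented by a record such as the following; this is a hypothetical sketch of the data model, not the authors' instrument:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class CodedPost:
    # Formal description of the post
    posted_on: date
    likes: int
    comments: int
    shares: int
    # Description of the content
    written_by_organization: bool  # self-written vs. written by other users
    auto_retrieved: bool           # e.g., cross-posted from another network
    media: List[str] = field(default_factory=list)  # "link", "photo", "video", ...
    topic: str = ""                # e.g., "fundraising", "volunteering"

post = CodedPost(
    posted_on=date(2012, 3, 14), likes=12, comments=2, shares=1,
    written_by_organization=True, auto_retrieved=False,
    media=["photo"], topic="fundraising",
)
```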
1 The coding schemes can be requested from the authors.

4 Research Results
Table 1. Social organizations ranked by percentage of Facebook pages per field of practice
communicating with internal stakeholders. The relatively low number of posts about
issues of organizational structure also indicates that Facebook is used for external
rather than internal communication. Fundraising is another of the social
organizations' most frequent post topics (190 posts; 7.7%). Fundraising posts were
published by 39 of the 73 social organizations. In these posts, the organizations call
for donations, report on fundraising activities and fundraising dedications, and
express thanks to donors (139 posts; 5.6%). During the investigation period, the
social organizations published an average of 2.64 posts about fundraising issues. In
addition, three social organizations have implemented a specific Facebook application
for soliciting donations. Overall, Facebook's potential for fundraising is not being
exploited to its full extent; there is room for improvement. Examples of individual
success stories were published in 30 of all the Facebook posts (1.2%). Few posts dealt
with volunteer management (2.4%, n=59). 60 posts (2.4%) included greetings for
holidays or seasonal events. Approximately 3% of the posts contained humorous
pictures, videos, and recommendations for cultural events.
4.5 Relation between Self-written Posts and Posts Written by Other Users
The relationship between self-written posts and those written by other users is a key
metric of an organization’s interaction with Facebook users [18]. During the investi-
gation period, 15.5% of posts were written by users, and 35 social organizations did
not receive any posts written by users. In this context, it should be mentioned that 12
organizations deactivated the possibility for users to respond to posts.Overall, posts
written by other users resulted in an average of 8.33 “likes” per post and 0.74 com-
ments per post. Posts written by users had reached a total of 391 “likes” and 143
comments. In comparison, the responses to posts written by users had lower interac-
tivity impact and achieved on average only 0.86 “likes” and 0.31 comments.Another
indicator of a successful Facebook page is a high number of posts by users that were
commented on by the organization [18]. The analysis revealed that 70.4% of posts
written by users were marked with “like” or commented on by the respective organi-
zations. Other Facebook users responded significantly more often to user-generated
128 C. Brauer, C. Bauer, and M. Dirlinger
posts with “likes” (69.7%, n=318) or comments (17.8%, n=81) from the organiza-
tions, compared to user posts without reactions by the social organizations, where a
total of only 16% of user posts had been marked with “like” (n=73) and 13.6% of
posts were commented on (n=62).
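A minimal sketch of how such interaction metrics could be computed from coded post records; the field names follow the hypothetical record above and are our assumptions:

```python
def interaction_metrics(posts):
    """Compute the user-post share and average reactions per post from
    a list of CodedPost-like records (hypothetical schema)."""
    def avg(values):
        return sum(values) / len(values) if values else 0.0

    user_posts = [p for p in posts if not p.written_by_organization]
    org_posts = [p for p in posts if p.written_by_organization]
    return {
        "user_post_share": len(user_posts) / len(posts) if posts else 0.0,
        "avg_likes_org_posts": avg([p.likes for p in org_posts]),
        "avg_comments_org_posts": avg([p.comments for p in org_posts]),
        "avg_likes_user_posts": avg([p.likes for p in user_posts]),
        "avg_comments_user_posts": avg([p.comments for p in user_posts]),
    }
```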
Table 2. Evaluation Scheme for the Use of Facebook by Social Organizations in Vienna
References
1. Richter, A., Koch, M.: Funktionen von Social-Networking-Diensten. In: Multikonferenz
Wirtschaftsinformatik 2008 (2008)
2. Royal Pingdom: Facebook, YouTube, our collective time sinks (stats) (2011)
3. Facebook, http://newsroom.fb.com/Key-Facts (accessed April 18, 2013)
4. Curtis, L., Edwards, C., Fraser, K.L., Gudelsky, S., Holmquist, J., Thornton, K.,
Sweetser, K.D.: Adoption of social media for public relations by nonprofit organizations.
Public Relations Review 36, 90–92 (2010)
5. Kiefer, K.: Social Media Engagement deutscher NPO. Performance Management in
Nonprofit-Organisationen 386 (2012)
6. Lovejoy, K., Saxton, G.D.: Information, Community, and Action: How Nonprofit Organi-
zations Use Social Media. Journal of Computer-Mediated Communication 17, 337–353 (2012)
7. Waters, R.D.: The use of social media by nonprofit organizations: An examination from
the diffusion of innovations perspective. In: Handbook of Research on Social Interaction
Technologies and Collaboration Software: Concepts and Trends. IGI, Hershey (2010)
8. Waters, R.D., Burnett, E., Lamm, A., Lucas, J.: Engaging stakeholders through social
networking: How nonprofit organizations are using Facebook. Public Relations Review
35, 102–106 (2009)
9. Kiefer, K.: NGOs im Social Web. Eine inhaltsanalytische Untersuchung zum Einsatz
und Potential von Social Media für die Öffentlichkeitsarbeit von gemeinnützigen Organi-
sationen. Institut für Journalistik und Kommunikationsforschung. Universität Hannover,
Hanover, Germany (2009)
10. Kiefer, K.: NPOs im Social Web: Status quo und Entwicklungspotenziale. In: Fundraising
im Non-Profit-Sektor, pp. 283–296. Springer (2010)
11. Briones, R.L., Kuch, B., Liu, B.F., Jin, Y.: Keeping up with the digital age: How the
American Red Cross uses social media to build relationships. Public Relations Review 37,
37–43 (2011)
12. Miller, D.: Nonprofit organizations and the emerging potential of social media and internet
resources. SPNHA Review 6, 4 (2010)
13. Reynolds, C.: Friends Who Give: Relationship-Building and Other Uses of Social Net-
working Tools by Nonprofit Organizations. The Elon Journal of Undergraduate Research
in Communications 2, 15–40 (2011)
14. Heidemann, J., Klier, M., Landherr, A., Probst, F.: Soziale Netzwerke im Web–Chancen
und Risiken im Customer Relationship Management von Unternehmen. Wirtschaftsinfor-
matik & Management 3, 40–45 (2011)
15. Reisberger, T., Smolnik, S.: Modell zur Erfolgsmessung von Social-Software-Systemen.
In: Multikonferenz Wirtschaftsinformatik, pp. 565–577 (2008)
16. Dimmel, N.: Sozialwirtschaft in der Sozialordnung. In: Dimmel, N. (ed.) Das Recht der
Sozialwirtschaft, pp. 9–58. Wien/Graz, Austria (2007)
17. Badelt, C., Pennerstorfer, A., Schneider, U.: Der Nonprofit Sektor in Österreich. In: Simsa,
R., Meyer, M., Badelt, C. (eds.) Handbuch der Nonprofit-Organisation, pp. 55–75 (2013)
18. Brocke, A., Faust, A.: Berechnung von Erfolgskennzahlen für Facebook Fan-Pages.
ICOM 10, 44–48 (2011)
19. Facebook, http://www.facebook.com/business/build
(accessed April 18, 2013)
20. Reimerth, G., Wigand, J.: Welche Inhalte in Facebook funktionieren: Facebook Postings
von Consumer Brands und Retail Brands unter der Lupe. knallgrau, Vienna, Austria
(2012)
Understanding the Factors That Influence the Perceived
Severity of Cyber-bullying
1 Introduction
between cyber-bullying and traditional bullying [13] and (iii) strategies used by vic-
tims to deal with cyber-bullying incidents (e.g. deleting unwanted messages, changing
e-mail address) [14-15].
Researchers have used different measures of cyber-bullying, relying mainly on
providing specific behavioral examples of what this phenomenon entails and asking a
global question as to whether individuals have experienced cyber-bullying [16].
Furthermore, some measures have been developed to specifically measure cyber-
victimization [17-18] and those are concerned with the frequency at which certain
behaviors (e.g. insulting language in e-mails) occur. In general, cyber-bullying meas-
ures used to date are concerned with the incidence of specific behaviors and do not
consider that the victims’ perception of those behaviors may vary (e.g. the same be-
havior may be interpreted as harmless by some people and rather hurtful by others)
[19]. Moreover, there is a lack of research studying the degree to which victims perce-
ive cyber-bullying as being harmful [20].
This study addresses the above gap by introducing the construct of perceived cy-
ber-bullying severity to measure a victim’s evaluation of cyber-bullying. In addition,
this study proposes a set of factors that may affect a victim’s perception of cyber-
bullying severity.
2 Theoretical Background
Lazarus and Folkman (1984) proposed the Transactional Theory of Stress and Coping
(TTSC). They defined psychological stress as a relationship between a person and the
environment that is seen by the person as taxing her resources or threatening her well-
being [21]. Embedded in this definition is the fact that although there may be objec-
tive conditions that can be considered as stressors (e.g. natural disasters, having an
argument with a loved person), individuals will vary in the degree and type of reac-
tion to these stressors. In order to understand the individuals’ varied reactions when
facing the same stressful situation, it is necessary to understand the cognitive
processes that take place between the stressor and the reaction [21].
TTSC proposes cognitive appraisal as the mediating factor, which reflects the
changing relationships between individuals with certain characteristics (e.g. values,
thinking style) and an environment that must be predicted and interpreted [21]. Spe-
cifically, the theory outlines a primary appraisal of the stressor and a secondary
appraisal of the coping mechanisms available to deal with the stressor [22]. In the
primary appraisal phase, individuals determine if and how the situation is relevant to
their goal attainment or well-being. When the situation negatively affects goal
attainment and/or well-being (i.e., it is stressful), individuals determine the extent to which
the situation is harming, threatening, or challenging [23]. Harm refers to damage that
has already occurred and threat refers to a future potential damage, while challenge
produces a positive motivation in individuals to overcome obstacles [24]. After the
primary appraisal phase, individuals move to the secondary appraisal phase where
they evaluate their options in terms of coping with the stressful situation [24].
The proposed research model is shown in Figure 1. The constructs and hypotheses
included in the model, along with their appropriate support, are described below.
Fig. 1. Proposed research model: message factors (saliency, sensitivity, frequency, offensiveness), medium factors (perceived importance, awareness of provision of recourse), victim factors (neuroticism, self-esteem), bully factors (power differential, relationship strength), and audience factors (size, sensitivity, reaction) are hypothesized to influence perceived cyber-bullying severity.
higher stress in such situations [36]. In light of these arguments, it is expected
that, when confronted with the same cyber-bullying episode, individuals with low
self-esteem or high neuroticism will perceive it as more severe than others will.
Thus, we hypothesize that:
H4: Neuroticism is positively related to PCS
H5: Self-esteem is negatively related to PCS