
Fiona Fui-Hoon Nah (Ed.)

LNCS 8527

HCI in Business
First International Conference, HCIB 2014
Held as Part of HCI International 2014
Heraklion, Crete, Greece, June 22–27, 2014, Proceedings
Lecture Notes in Computer Science 8527
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Alfred Kobsa
University of California, Irvine, CA, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbruecken, Germany
Volume Editor
Fiona Fui-Hoon Nah
Missouri University of Science and Technology
Department of Business and Information Technology
101 Fulton Hall, 301 West 14th Street
Rolla, MO 65409, USA
E-mail: [email protected]

ISSN 0302-9743          e-ISSN 1611-3349
ISBN 978-3-319-07292-0  e-ISBN 978-3-319-07293-7
DOI 10.1007/978-3-319-07293-7
Springer Cham Heidelberg New York Dordrecht London

Library of Congress Control Number: 2014939121

LNCS Sublibrary: SL 3 – Information Systems and Applications, incl. Internet/Web and HCI
© Springer International Publishing Switzerland 2014
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection
with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and
executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication
or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location,
in its current version, and permission for use must always be obtained from Springer. Permissions for use
may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution
under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication,
neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or
omissions that may be made. The publisher makes no warranty, express or implied, with respect to the
material contained herein.
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Foreword

The 16th International Conference on Human–Computer Interaction, HCI International 2014, was held in Heraklion, Crete, Greece, during June 22–27, 2014, incorporating 14 conferences/thematic areas:

Thematic areas:

• Human–Computer Interaction
• Human Interface and the Management of Information

Affiliated conferences:

• 11th International Conference on Engineering Psychology and Cognitive Ergonomics
• 8th International Conference on Universal Access in Human–Computer Interaction
• 6th International Conference on Virtual, Augmented and Mixed Reality
• 6th International Conference on Cross-Cultural Design
• 6th International Conference on Social Computing and Social Media
• 8th International Conference on Augmented Cognition
• 5th International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management
• Third International Conference on Design, User Experience and Usability
• Second International Conference on Distributed, Ambient and Pervasive Interactions
• Second International Conference on Human Aspects of Information Security, Privacy and Trust
• First International Conference on HCI in Business
• First International Conference on Learning and Collaboration Technologies

A total of 4,766 individuals from academia, research institutes, industry, and governmental agencies from 78 countries submitted contributions, and 1,476 papers and 225 posters were included in the proceedings. These papers address the latest research and development efforts and highlight the human aspects of design and use of computing systems. The papers thoroughly cover the entire field of human–computer interaction, addressing major advances in knowledge and effective use of computers in a variety of application areas.
This volume, edited by Fiona Fui-Hoon Nah, contains papers focusing on the thematic area of HCI in Business, addressing the following major topics:

• Enterprise systems
• Social media for business
• Mobile and ubiquitous commerce
• Gamification in business
• B2B, B2C, C2C e-commerce
• Supporting collaboration, business and innovation
• User experience in shopping and business
The remaining volumes of the HCI International 2014 proceedings are:

• Volume 1, LNCS 8510, Human–Computer Interaction: HCI Theories, Methods and Tools (Part I), edited by Masaaki Kurosu
• Volume 2, LNCS 8511, Human–Computer Interaction: Advanced Interaction Modalities and Techniques (Part II), edited by Masaaki Kurosu
• Volume 3, LNCS 8512, Human–Computer Interaction: Applications and Services (Part III), edited by Masaaki Kurosu
• Volume 4, LNCS 8513, Universal Access in Human–Computer Interaction: Design and Development Methods for Universal Access (Part I), edited by Constantine Stephanidis and Margherita Antona
• Volume 5, LNCS 8514, Universal Access in Human–Computer Interaction: Universal Access to Information and Knowledge (Part II), edited by Constantine Stephanidis and Margherita Antona
• Volume 6, LNCS 8515, Universal Access in Human–Computer Interaction: Aging and Assistive Environments (Part III), edited by Constantine Stephanidis and Margherita Antona
• Volume 7, LNCS 8516, Universal Access in Human–Computer Interaction: Design for All and Accessibility Practice (Part IV), edited by Constantine Stephanidis and Margherita Antona
• Volume 8, LNCS 8517, Design, User Experience, and Usability: Theories, Methods and Tools for Designing the User Experience (Part I), edited by Aaron Marcus
• Volume 9, LNCS 8518, Design, User Experience, and Usability: User Experience Design for Diverse Interaction Platforms and Environments (Part II), edited by Aaron Marcus
• Volume 10, LNCS 8519, Design, User Experience, and Usability: User Experience Design for Everyday Life Applications and Services (Part III), edited by Aaron Marcus
• Volume 11, LNCS 8520, Design, User Experience, and Usability: User Experience Design Practice (Part IV), edited by Aaron Marcus
• Volume 12, LNCS 8521, Human Interface and the Management of Information: Information and Knowledge Design and Evaluation (Part I), edited by Sakae Yamamoto
• Volume 13, LNCS 8522, Human Interface and the Management of Information: Information and Knowledge in Applications and Services (Part II), edited by Sakae Yamamoto
• Volume 14, LNCS 8523, Learning and Collaboration Technologies: Designing and Developing Novel Learning Experiences (Part I), edited by Panayiotis Zaphiris and Andri Ioannou
• Volume 15, LNCS 8524, Learning and Collaboration Technologies: Technology-rich Environments for Learning and Collaboration (Part II), edited by Panayiotis Zaphiris and Andri Ioannou
• Volume 16, LNCS 8525, Virtual, Augmented and Mixed Reality: Designing and Developing Virtual and Augmented Environments (Part I), edited by Randall Shumaker and Stephanie Lackey
• Volume 17, LNCS 8526, Virtual, Augmented and Mixed Reality: Applications of Virtual and Augmented Reality (Part II), edited by Randall Shumaker and Stephanie Lackey
• Volume 19, LNCS 8528, Cross-Cultural Design, edited by P.L. Patrick Rau
• Volume 20, LNCS 8529, Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, edited by Vincent G. Duffy
• Volume 21, LNCS 8530, Distributed, Ambient, and Pervasive Interactions, edited by Norbert Streitz and Panos Markopoulos
• Volume 22, LNCS 8531, Social Computing and Social Media, edited by Gabriele Meiselwitz
• Volume 23, LNAI 8532, Engineering Psychology and Cognitive Ergonomics, edited by Don Harris
• Volume 24, LNCS 8533, Human Aspects of Information Security, Privacy and Trust, edited by Theo Tryfonas and Ioannis Askoxylakis
• Volume 25, LNAI 8534, Foundations of Augmented Cognition, edited by Dylan D. Schmorrow and Cali M. Fidopiastis
• Volume 26, CCIS 434, HCI International 2014 Posters Proceedings (Part I), edited by Constantine Stephanidis
• Volume 27, CCIS 435, HCI International 2014 Posters Proceedings (Part II), edited by Constantine Stephanidis

I would like to thank the Program Chairs and the members of the Program Boards of all affiliated conferences and thematic areas, listed below, for their contribution to the highest scientific quality and the overall success of the HCI International 2014 Conference.
This conference would not have been possible without the continuous support and advice of the founding chair and conference scientific advisor, Prof. Gavriel Salvendy, as well as the dedicated work and outstanding efforts of the communications chair and editor of HCI International News, Dr. Abbas Moallem.
I would also like to thank the members of the Human–Computer Interaction Laboratory of ICS-FORTH, and in particular George Paparoulis, Maria Pitsoulaki, Maria Bouhli, and George Kapnas, for their contribution towards the smooth organization of the HCI International 2014 Conference.

April 2014
Constantine Stephanidis
General Chair, HCI International 2014
Organization

Human–Computer Interaction
Program Chair: Masaaki Kurosu, Japan

Jose Abdelnour-Nocera, UK
Sebastiano Bagnara, Italy
Simone Barbosa, Brazil
Adriana Betiol, Brazil
Simone Borsci, UK
Henry Duh, Australia
Xiaowen Fang, USA
Vicki Hanson, UK
Wonil Hwang, Korea
Minna Isomursu, Finland
Yong Gu Ji, Korea
Anirudha Joshi, India
Esther Jun, USA
Kyungdoh Kim, Korea
Heidi Krömker, Germany
Chen Ling, USA
Chang S. Nam, USA
Naoko Okuizumi, Japan
Philippe Palanque, France
Ling Rothrock, USA
Naoki Sakakibara, Japan
Dominique Scapin, France
Guangfeng Song, USA
Sanjay Tripathi, India
Chui Yin Wong, Malaysia
Toshiki Yamaoka, Japan
Kazuhiko Yamazaki, Japan
Ryoji Yoshitake, Japan

Human Interface and the Management of Information
Program Chair: Sakae Yamamoto, Japan

Alan Chan, Hong Kong
Denis A. Coelho, Portugal
Linda Elliott, USA
Shin’ichi Fukuzumi, Japan
Michitaka Hirose, Japan
Makoto Itoh, Japan
Yen-Yu Kang, Taiwan
Koji Kimita, Japan
Daiji Kobayashi, Japan
Hiroyuki Miki, Japan
Shogo Nishida, Japan
Robert Proctor, USA
Youngho Rhee, Korea
Ryosuke Saga, Japan
Katsunori Shimohara, Japan
Kim-Phuong Vu, USA
Tomio Watanabe, Japan
Engineering Psychology and Cognitive Ergonomics
Program Chair: Don Harris, UK

Guy Andre Boy, USA
Shan Fu, P.R. China
Hung-Sying Jing, Taiwan
Wen-Chin Li, Taiwan
Mark Neerincx, The Netherlands
Jan Noyes, UK
Paul Salmon, Australia
Axel Schulte, Germany
Siraj Shaikh, UK
Sarah Sharples, UK
Anthony Smoker, UK
Neville Stanton, UK
Alex Stedmon, UK
Andrew Thatcher, South Africa

Universal Access in Human–Computer Interaction
Program Chairs: Constantine Stephanidis, Greece, and Margherita Antona, Greece

Julio Abascal, Spain
Gisela Susanne Bahr, USA
João Barroso, Portugal
Margrit Betke, USA
Anthony Brooks, Denmark
Christian Bühler, Germany
Stefan Carmien, Spain
Hua Dong, P.R. China
Carlos Duarte, Portugal
Pier Luigi Emiliani, Italy
Qin Gao, P.R. China
Andrina Granić, Croatia
Andreas Holzinger, Austria
Josette Jones, USA
Simeon Keates, UK
Georgios Kouroupetroglou, Greece
Patrick Langdon, UK
Barbara Leporini, Italy
Eugene Loos, The Netherlands
Ana Isabel Paraguay, Brazil
Helen Petrie, UK
Michael Pieper, Germany
Enrico Pontelli, USA
Jaime Sanchez, Chile
Alberto Sanna, Italy
Anthony Savidis, Greece
Christian Stary, Austria
Hirotada Ueda, Japan
Gerhard Weber, Germany
Harald Weber, Germany

Virtual, Augmented and Mixed Reality
Program Chairs: Randall Shumaker, USA, and Stephanie Lackey, USA

Roland Blach, Germany
Sheryl Brahnam, USA
Juan Cendan, USA
Jessie Chen, USA
Panagiotis D. Kaklis, UK
Hirokazu Kato, Japan
Denis Laurendeau, Canada
Fotis Liarokapis, UK
Michael Macedonia, USA
Gordon Mair, UK
Jose San Martin, Spain
Tabitha Peck, USA
Christian Sandor, Australia
Christopher Stapleton, USA
Gregory Welch, USA

Cross-Cultural Design
Program Chair: P.L. Patrick Rau, P.R. China

Yee-Yin Choong, USA
Paul Fu, USA
Zhiyong Fu, P.R. China
Pin-Chao Liao, P.R. China
Dyi-Yih Michael Lin, Taiwan
Rungtai Lin, Taiwan
Ta-Ping (Robert) Lu, Taiwan
Liang Ma, P.R. China
Alexander Mädche, Germany
Sheau-Farn Max Liang, Taiwan
Katsuhiko Ogawa, Japan
Tom Plocher, USA
Huatong Sun, USA
Emil Tso, P.R. China
Hsiu-Ping Yueh, Taiwan
Liang (Leon) Zeng, USA
Jia Zhou, P.R. China

Online Communities and Social Media
Program Chair: Gabriele Meiselwitz, USA

Leonelo Almeida, Brazil
Chee Siang Ang, UK
Aneesha Bakharia, Australia
Ania Bobrowicz, UK
James Braman, USA
Farzin Deravi, UK
Carsten Kleiner, Germany
Niki Lambropoulos, Greece
Soo Ling Lim, UK
Anthony Norcio, USA
Portia Pusey, USA
Panote Siriaraya, UK
Stefan Stieglitz, Germany
Giovanni Vincenti, USA
Yuanqiong (Kathy) Wang, USA
June Wei, USA
Brian Wentz, USA

Augmented Cognition
Program Chairs: Dylan D. Schmorrow, USA, and Cali M. Fidopiastis, USA

Ahmed Abdelkhalek, USA
Robert Atkinson, USA
Monique Beaudoin, USA
John Blitch, USA
Alenka Brown, USA
Rosario Cannavò, Italy
Joseph Cohn, USA
Andrew J. Cowell, USA
Martha Crosby, USA
Wai-Tat Fu, USA
Rodolphe Gentili, USA
Frederick Gregory, USA
Michael W. Hail, USA
Monte Hancock, USA
Fei Hu, USA
Ion Juvina, USA
Joe Keebler, USA
Philip Mangos, USA
Rao Mannepalli, USA
David Martinez, USA
Yvonne R. Masakowski, USA
Santosh Mathan, USA
Ranjeev Mittu, USA
Keith Niall, USA
Tatana Olson, USA
Debra Patton, USA
June Pilcher, USA
Robinson Pino, USA
Tiffany Poeppelman, USA
Victoria Romero, USA
Amela Sadagic, USA
Anna Skinner, USA
Ann Speed, USA
Robert Sottilare, USA
Peter Walker, USA

Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management
Program Chair: Vincent G. Duffy, USA

Giuseppe Andreoni, Italy
Daniel Carruth, USA
Elsbeth De Korte, The Netherlands
Afzal A. Godil, USA
Ravindra Goonetilleke, Hong Kong
Noriaki Kuwahara, Japan
Kang Li, USA
Zhizhong Li, P.R. China
Tim Marler, USA
Jianwei Niu, P.R. China
Michelle Robertson, USA
Matthias Rötting, Germany
Mao-Jiun Wang, Taiwan
Xuguang Wang, France
James Yang, USA

Design, User Experience, and Usability
Program Chair: Aaron Marcus, USA

Sisira Adikari, Australia
Claire Ancient, USA
Arne Berger, Germany
Jamie Blustein, Canada
Ana Boa-Ventura, USA
Jan Brejcha, Czech Republic
Lorenzo Cantoni, Switzerland
Marc Fabri, UK
Luciane Maria Fadel, Brazil
Tricia Flanagan, Hong Kong
Jorge Frascara, Mexico
Federico Gobbo, Italy
Emilie Gould, USA
Rüdiger Heimgärtner, Germany
Brigitte Herrmann, Germany
Steffen Hess, Germany
Nouf Khashman, Canada
Fabiola Guillermina Noël, Mexico
Francisco Rebelo, Portugal
Kerem Rızvanoğlu, Turkey
Marcelo Soares, Brazil
Carla Spinillo, Brazil
Distributed, Ambient and Pervasive Interactions
Program Chairs: Norbert Streitz, Germany, and Panos Markopoulos, The Netherlands

Juan Carlos Augusto, UK
Jose Bravo, Spain
Adrian Cheok, UK
Boris de Ruyter, The Netherlands
Anind Dey, USA
Dimitris Grammenos, Greece
Nuno Guimaraes, Portugal
Achilles Kameas, Greece
Javed Vassilis Khan, The Netherlands
Shin’ichi Konomi, Japan
Carsten Magerkurth, Switzerland
Ingrid Mulder, The Netherlands
Anton Nijholt, The Netherlands
Fabio Paternó, Italy
Carsten Röcker, Germany
Teresa Romao, Portugal
Albert Ali Salah, Turkey
Manfred Tscheligi, Austria
Reiner Wichert, Germany
Woontack Woo, Korea
Xenophon Zabulis, Greece

Human Aspects of Information Security, Privacy and Trust
Program Chairs: Theo Tryfonas, UK, and Ioannis Askoxylakis, Greece

Claudio Agostino Ardagna, Italy
Zinaida Benenson, Germany
Daniele Catteddu, Italy
Raoul Chiesa, Italy
Bryan Cline, USA
Sadie Creese, UK
Jorge Cuellar, Germany
Marc Dacier, USA
Dieter Gollmann, Germany
Kirstie Hawkey, Canada
Jaap-Henk Hoepman, The Netherlands
Cagatay Karabat, Turkey
Angelos Keromytis, USA
Ayako Komatsu, Japan
Ronald Leenes, The Netherlands
Javier Lopez, Spain
Steve Marsh, Canada
Gregorio Martinez, Spain
Emilio Mordini, Italy
Yuko Murayama, Japan
Masakatsu Nishigaki, Japan
Aljosa Pasic, Spain
Milan Petković, The Netherlands
Joachim Posegga, Germany
Jean-Jacques Quisquater, Belgium
Damien Sauveron, France
George Spanoudakis, UK
Kerry-Lynn Thomson, South Africa
Julien Touzeau, France
Theo Tryfonas, UK
João Vilela, Portugal
Claire Vishik, UK
Melanie Volkamer, Germany
HCI in Business
Program Chair: Fiona Fui-Hoon Nah, USA

Andreas Auinger, Austria
Michel Avital, Denmark
Traci Carte, USA
Hock Chuan Chan, Singapore
Constantinos Coursaris, USA
Soussan Djamasbi, USA
Brenda Eschenbrenner, USA
Nobuyuki Fukawa, USA
Khaled Hassanein, Canada
Milena Head, Canada
Susanna (Shuk Ying) Ho, Australia
Jack Zhenhui Jiang, Singapore
Jinwoo Kim, Korea
Zoonky Lee, Korea
Honglei Li, UK
Nicholas Lockwood, USA
Eleanor T. Loiacono, USA
Mei Lu, USA
Scott McCoy, USA
Brian Mennecke, USA
Robin Poston, USA
Lingyun Qiu, P.R. China
Rene Riedl, Austria
Matti Rossi, Finland
April Savoy, USA
Shu Schiller, USA
Hong Sheng, USA
Choon Ling Sia, Hong Kong
Chee-Wee Tan, Denmark
Chuan Hoo Tan, Hong Kong
Noam Tractinsky, Israel
Horst Treiblmaier, Austria
Virpi Tuunainen, Finland
Dezhi Wu, USA
I-Chin Wu, Taiwan

Learning and Collaboration Technologies
Program Chairs: Panayiotis Zaphiris, Cyprus, and Andri Ioannou, Cyprus

Ruthi Aladjem, Israel
Abdulaziz Aldaej, UK
John M. Carroll, USA
Maka Eradze, Estonia
Mikhail Fominykh, Norway
Denis Gillet, Switzerland
Mustafa Murat Inceoglu, Turkey
Pernilla Josefsson, Sweden
Marie Joubert, UK
Sauli Kiviranta, Finland
Tomaž Klobučar, Slovenia
Elena Kyza, Cyprus
Maarten de Laat, The Netherlands
David Lamas, Estonia
Edmund Laugasson, Estonia
Ana Loureiro, Portugal
Katherine Maillet, France
Nadia Pantidi, UK
Antigoni Parmaxi, Cyprus
Borzoo Pourabdollahian, Italy
Janet C. Read, UK
Christophe Reffay, France
Nicos Souleles, Cyprus
Ana Luísa Torres, Portugal
Stefan Trausan-Matu, Romania
Aimilia Tzanavari, Cyprus
Johnny Yuen, Hong Kong
Carmen Zahn, Switzerland
External Reviewers

Ilia Adami, Greece
Iosif Klironomos, Greece
Maria Korozi, Greece
Vassilis Kouroumalis, Greece
Asterios Leonidis, Greece
George Margetis, Greece
Stavroula Ntoa, Greece
Nikolaos Partarakis, Greece
HCI International 2015

The 17th International Conference on Human–Computer Interaction, HCI International 2015, will be held jointly with the affiliated conferences in Los Angeles, CA, USA, in the Westin Bonaventure Hotel, August 2–7, 2015. It will cover a broad spectrum of themes related to HCI, including theoretical issues, methods, tools, processes, and case studies in HCI design, as well as novel interaction techniques, interfaces, and applications. The proceedings will be published by Springer. More information will be available on the conference website: http://www.hcii2015.org/

General Chair
Professor Constantine Stephanidis
University of Crete and ICS-FORTH
Heraklion, Crete, Greece
E-mail: [email protected]
Table of Contents

Enterprise Systems

Exploring Interaction Design for Advanced Analytics and Simulation . . . 3
Robin Brewer and Cheryl A. Kieliszewski

Decision Support System Based on Distributed Simulation Optimization for Medical Resource Allocation in Emergency Department . . . 15
Tzu-Li Chen

The Impact of Business-IT Alignment on Information Security Process . . . 25
Mohamed El Mekawy, Bilal AlSabbagh, and Stewart Kowalski

Examining Significant Factors and Risks Affecting the Willingness to Adopt a Cloud-Based CRM . . . 37
Nga Le Thi Quynh, Jon Heales, and Dongming Xu

Towards Public Health Dashboard Design Guidelines . . . 49
Bettina Lechner and Ann Fruhling

Information Technology Service Delivery to Small Businesses . . . 60
Mei Lu, Philip Corriveau, Luke Koons, and Donna Boyer

Charting a New Course for the Workplace with an Experience Framework . . . 68
Faith McCreary, Marla Gómez, Derrick Schloss, and Deidre Ali

The Role of Human Factors in Production Networks and Quality Management . . . 80
Ralf Philipsen, Philipp Brauner, Sebastian Stiller, Martina Ziefle, and Robert Schmitt

Managing User Acceptance Testing of Business Applications . . . 92
Robin Poston, Kalyan Sajja, and Ashley Calvert

How to Improve Customer Relationship Management in Air Transportation Using Case-Based Reasoning . . . 103
Rawia Sammout, Makram Souii, and Mansour Elghoul

Toward a Faithful Bidding of Web Advertisement . . . 112
Takumi Uchida, Koken Ozaki, and Kenichi Yoshida
Social Media for Business

An Evaluation Scheme for Performance Measurement of Facebook Use: An Example of Social Organizations in Vienna . . . 121
Claudia Brauer, Christine Bauer, and Mario Dirlinger

Understanding the Factors That Influence the Perceived Severity of Cyber-bullying . . . 133
Sonia Camacho, Khaled Hassanein, and Milena Head

Seeking Consensus: A Content Analysis of Online Medical Consultation . . . 145
Ming-Hsin Phoebe Chiu

Social Media Marketing on Twitter: An Investigation of the Involvement-Messaging-Engagement Link . . . 155
Constantinos K. Coursaris, Wietske van Osch, and Brandon Brooks

The Internet, Happiness, and Social Interaction: A Review of Literature . . . 166
Richard H. Hall and Ashley Banaszek

Small and Medium Enterprises 2.0: Are We There Yet? . . . 175
Pedro Isaias and Diogo Antunes

Finding Keyphrases of Readers’ Interest Utilizing Writers’ Interest in Social Media . . . 183
Lun-Wei Ku, Andy Lee, and Yan-Hua Chen

The Role of Interactivity in Information Search on ACG Portal Site . . . 194
Juihsiang Lee and Manlai You

Factors Affecting Continued Use of Social Media . . . 206
Eleanor T. Loiacono and Scott McCoy

Image-Blogs: Consumer Adoption and Usage (Research-in-Progress) . . . 214
Eleanor T. Loiacono and Purvi Shah

Main Factors for Joining New Social Networking Sites . . . 221
Carlos Osorio and Savvas Papagiannidis

“There’s No Way I Would Ever Buy Any Mp3 Player with a Measly 4gb of Storage”: Mining Intention Insights about Future Actions . . . 233
Maria Pontiki and Haris Papageorgiou

Experts versus Friends: To Whom Do I Listen More? The Factors That Affect Credibility of Online Information . . . 245
DongBack Seo and Jung Lee

To Shave or Not to Shave? How Beardedness in a LinkedIn Profile Picture Influences Perceived Expertise and Job Interview Prospects . . . 257
Sarah van der Land and Daan G. Muntinga

Empowering Users to Explore Subject Knowledge by Aggregating Search Interfaces . . . 266
I-Chin Wu, Cheng Kao, and Shao-Syuan Chiou

Mobile and Ubiquitous Commerce

Follow-Me: Smartwatch Assistance on the Shop Floor . . . 279
Mario Aehnelt and Bodo Urban

A Qualitative Investigation of ‘Context’, ‘Enterprise Mobile Services’ and the Influence of Context on User Experiences and Acceptance of Enterprise Mobile Services . . . 288
Karen Carey and Markus Helfert

Designing for Success: Creating Business Value with Mobile User Experience (UX) . . . 299
Soussan Djamasbi, Dan McAuliffe, Wilmann Gomez, Georgi Kardzhaliyski, Wan Liu, and Frank Oglesby

The Performance of Self in the Context of Shopping in a Virtual Dressing Room System . . . 307
Yi Gao, Eva Petersson Brooks, and Anthony Lewis Brooks

Understanding Dynamic Pricing for Parking in Los Angeles: Survey and Ethnographic Results . . . 316
James Glasnapp, Honglu Du, Christopher Dance, Stephane Clinchant, Alex Pudlin, Daniel Mitchell, and Onno Zoeter

Full-Body Interaction for the Elderly in Trade Fair Environments . . . 328
Mandy Korzetz, Christine Keller, Frank Lamack, and Thomas Schlegel

Human-Computer vs. Consumer-Store Interaction in a Multichannel Retail Environment: Some Multidisciplinary Research Directions . . . 339
Chris Lazaris and Adam Vrechopoulos

Market Intelligence in Hypercompetitive Mobile Platform Ecosystems: A Pricing Strategy . . . 350
Hoang D. Nguyen, Sangaralingam Kajanan, and Danny Chiang Choon Poo

A User-Centered Approach in Designing NFC Couponing Platform: The Case Study of CMM Applications . . . 360
Antonio Opromolla, Andrea Ingrosso, Valentina Volpi, Mariarosaria Pazzola, and Carlo Maria Medaglia

Mobile Design Usability Guidelines for Outdoor Recreation and Tourism . . . 371
Sarah J. Swierenga, Dennis B. Propst, Jennifer Ismirle, Chelsea Figlan, and Constantinos K. Coursaris

Gamification in Business

A Framework for Evaluating the Effectiveness of Gamification Techniques by Personality Type . . . 381
Charles Butler

The Global Leadership of Virtual Teams in Avatar-Based Virtual Environments . . . 390
Paul Hayes Jr.

Gamification of Education: A Review of Literature . . . 401
Fiona Fui-Hoon Nah, Qing Zeng, Venkata Rajasekhar Telaprolu, Abhishek Padmanabhuni Ayyappa, and Brenda Eschenbrenner

An Investigation of User Interface Features of Crowdsourcing Applications . . . 410
Robbie Nakatsu and Charalambos Iacovou

Co-design of Neighbourhood Services Using Gamification Cards . . . 419
Manuel Oliveira and Sobah Petersen

Applications of a Roleplaying Game for Qualitative Simulation and Cooperative Situations Related to Supply Chain Management . . . 429
Thiago Schaedler Uhlmann and André Luiz Battaiola

Gamification Design for Increasing Customer Purchase Intention in a Mobile Marketing Campaign App . . . 440
Don Ming-Hui Wen, Dick Jen-Wei Chang, Ying-Tzu Lin, Che-Wei Liang, and Shin-Yi Yang

B2B, B2C, C2C e-Commerce

An Individual Differences Approach in Adaptive Waving of User Checkout Process in Retail eCommerce . . . 451
Marios Belk, Panagiotis Germanakos, Stavros Asimakopoulos, Panayiotis Andreou, Constantinos Mourlas, George Spanoudis, and George Samaras

Do You Trust My Avatar? Effects of Photo-Realistic Seller Avatars and Reputation Scores on Trust in Online Transactions . . . 461
Gary Bente, Thomas Dratsch, Simon Rehbach, Matthias Reyl, and Blerta Lushaj

What Web Analysts Can Do for Human-Computer Interaction? . . . 471
Claudia Brauer, David Reischer, and Felix Mödritscher

Persuasive Web Design in e-Commerce . . . 482
Hsi-Liang Chu, Yi-Shin Deng, and Ming-Chuen Chuang

Creating Competitive Advantage in IT-Intensive Organizations: A Design Thinking Perspective . . . 492
Alma Leora Culén and Mark Kriger

Understanding the Antecedents and Consequences of Live-Chat Use in E-Commerce Context . . . 504
Lele Kang, Xiang Wang, Chuan-Hoo Tan, and J. Leon Zhao

Productivity of Services – An Empirical View on the German Market . . . 516
Stephan Klingner, Michael Becker, and Klaus-Peter Fähnrich

Consumer Preferences for the Interface of E-Commerce Product Recommendation System . . . 526
Yi-Cheng Ku, Chih-Hung Peng, and Ya-Chi Yang

Critical Examination of Online Group-Buying Mechanisms . . . 538
Yi Liu, Chuan Hoo Tan, Juliana Sutanto, Choon Ling Sia, and Kwok-Kee Wei

A Case Study of the Application of Cores and Paths in Financial Web Design . . . 549
Dongyuan Liu, Tian Lei, and Shuaili Wei

WebQual and Its Relevance to Users with Visual Disabilities . . . 559
Eleanor T. Loiacono and Shweta Deshpande

First in Search – How to Optimize Search Results in E-Commerce Web Shops . . . 566
Gerald Petz and Andreas Greiner

The Value of User Centered Design in Product Marketing: A Simulated Manufacturing Company Product Offering Market Strategy . . . 575
April Savoy and Alister McLeod

Supporting Collaboration, Business and Innovation


Principles of Human Computer Interaction in Crowdsourcing to Foster
Motivation in the Context of Open Innovation . . . . . . . . . . . . . . . . . . . . . . . 585
Patrick Brandtner, Andreas Auinger, and Markus Helfert

Search in Open Innovation: How Does It Evolve with the Facilitation
of Information Technology? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
Tingru Cui, Yu Tong, and Hock Hai Teo

Technology Acceptance Model: Worried about the Cultural
Influence? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
Cristóbal Fernández Robin, Scott McCoy,
Luis Yáñez Sandivari, and Diego Yáñez Martínez

Towards the Development of a ‘User-Experience’ Technology Adoption
Model for the Interactive Mobile Technology . . . . . . . . . . . . . . . . . . . . . . . . 620
Jenson Chong-Leng Goh and Faezeh Karimi

Using Participatory Design and Card Sorting to Create a Community
of Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
Delia Grenville

“Crowdsense” – Initiating New Communications and Collaborations
between People in a Large Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
Sue Hessey, Catherine White, and Simon Thompson

Accelerating Individual Innovation: Evidence from a Multinational
Corporation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
Qiqi Jiang, Yani Shi, Chuan-Hoo Tan, and Choon Ling Sia

Determinants of Continued Participation in Web-Based Co-creation
Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
Sangyong Jung, JiHyea Hwang, and Da Young Ju

Towards Predicting Ad Effectiveness via an Eye Tracking Study . . . . . . . 670
Eleni Michailidou, Christoforos Christoforou, and
Panayiotis Zaphiris

Exploring the Impact of Users’ Preference Diversity on Recommender
System Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
Muh-Chyun Tang

A Usability Evaluation of an Electronic Health Record System for
Nursing Documentation Used in the Municipality Healthcare Services
in Norway . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
Torunn Kitty Vatnøy, Grete Vabo, and Mariann Fossum

User Experience in Shopping and Business


UX and Strategic Management: A Case Study of Smartphone (Apple
vs. Samsung) and Search Engine (Google vs. Naver) Industry . . . . . . . . . . 703
Junho Choi, Byung-Joon Kim, and SuKyung Yoon

Designing a Multi-modal Association Graph for Music Objects . . . . . . . . . 711
Jia-Lien Hsu and Chiu-Yuan Ho

Usability Evaluations of an Interactive, Internet Enabled Human
Centered SanaViz Geovisualization Application . . . . . . . . . . . . . . . . . . . . . . 723
Joshi Ashish, Magdala de Araujo Novaes, Josiane Machiavelli,
Sriram Iyengar, Robert Vogler, Craig Johnson, Jiajie Zhang, and
Chiehwen Ed Hsu

Improving Xbox Search Relevance by Click Likelihood Labeling . . . . . . . . 735
Jingjing Li, Xugang Ye, and Danfeng Li

A Preliminary Study on Social Cues Design in Mobile Check-in Based
Advertisement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
Chi-Lun Liu and Hsieh-Hong Huang

Analyzing the User-Generated Content on Disintermediation Effect:
A Latent Segmentation Study of Bookers and Lookers . . . . . . . . . . . . . . . . 754
Carlota Lorenzo-Romero, Giacomo Del Chiappa, and Efthymios
Constantinides

Do We Follow Friends or Acquaintances? The Effects of Social
Recommendations at Different Shopping Stages . . . . . . . . . . . . . . . . . . . . . . 765
Tingting Song, Cheng Yi, and Jinghua Huang

When Two Is Better than One – Product Recommendation with Dual
Information Processing Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
Wee-Kek Tan, Chuan-Hoo Tan, and Hock-Hai Teo

Effects of Social Distance and Matching Message Orientation on
Consumers’ Product Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
Lu Yang, Jin Chen, and Bernard C.Y. Tan

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799


Enterprise Systems
Exploring Interaction Design for Advanced Analytics
and Simulation

Robin Brewer1 and Cheryl A. Kieliszewski2


1 Northwestern University, 2240 Campus Drive, Evanston, Illinois 60208
2 IBM Research – Almaden, 650 Harry Road, San Jose, California 95120
[email protected], [email protected]

Abstract. Enterprise businesses are increasingly using analytics and simulation
for improved decision making with diverse and large quantities of data.
However, new challenges arise in understanding how to design and implement
a user interaction paradigm that is appropriate for technical experts, business
users, and other stakeholders. Technologies developed for sophisticated
analyses pose a challenge for interaction and interface design research when the
goal is to accommodate users with different types and levels of expertise. In
this paper we discuss the results of a multi-phase research effort to explore
expectations for interaction and user experience with a complex technology that
is meant to provide scientists and business analysts with expert-level capability
for advanced analytics and simulation. We find that while there are unique
differences in the software preferences of scientists and analysts, a common
interface is feasible for universal usability across these two user groups.

Keywords: Simulation, modeling, expert, analysis, interviews, disruption,
ideation.

1 Introduction

Federal lawmakers want to propose a coast-to-coast high-speed rail transportation
system to the public. Because this is a large investment of taxpayer dollars, they
want to make the first proposal the optimal proposal so as not to upset citizens. They
also realize that good decisions are made with good information and insight, such as
future needs, demand, and geographic location. Such information is spread across
different sources. Assistance is needed in aggregating appropriate data sources and
models for a large-scale benefit analysis. What would you recommend for developing
a seamless high-speed rail infrastructure that reduces airplane and automobile
emissions, is cost-efficient, improves overall quality of life for customers, and is
quickly accessible to them?
Above is an example of a complex problem for which modeling and simulation can
provide a solution. Technologies for advanced analytics and simulation are often very
complex, requiring specialized knowledge to use them, and are created for experts in
a particular domain (domain expert). As an ‘expert’, the expectation is that she has

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 3–14, 2014.
© Springer International Publishing Switzerland 2014
mastered a set of tasks and activities that are performed on a regular basis, and these
tasks often become automatic. In turn, this automation can make it difficult to elicit
detailed information from the expert about a set of tasks because she may
unintentionally leave out important details or essential steps when describing the
tasks [1,2].
The research presented in this paper was conducted within the context of a
modeling and simulation (M&S) tool called SPLASH (Smarter Planet Platform for
Analysis and Simulation of Health) [3]. Through SPLASH, end users with varying
degrees of expertise in analytics and simulation can design simulation experiments to
apply in a variety of fields including finance, urban planning, healthcare, and disaster
planning. This range of fields and end users poses challenges for how to
accommodate a wide array of expertise in M&S – that is, for people with deep
domain knowledge about the construction of models and simulations to people with
skill and expertise in running the simulation system and analyzing the output within a
particular field. In addition, the domain of modeling and simulation tends to
emphasize algorithm design and implementation rather than interface and interaction
design. Without a body of evidence of how scientists and analysts use modeling or
simulation tools, we had to work with a community of our intended end users to
identify expectations and interface design features. This paper describes the method
and results of using exploratory interviews, disruptive interviews, and participatory
ideation to elicit information from experts in the field of M&S to inform the design of
the SPLASH interaction.

2 Background

The goal of SPLASH is to facilitate the creation of complex, interconnected
system-of-systems to advise and guide “what-if” analyses for stakeholders and policy
makers. In contrast to the tradition of developing isolated models of phenomena,
SPLASH takes a slightly different approach, asking: can we use M&S to help
policy makers envision the trade-offs of complex policy and planning problems in a
more holistic way? Specifically, SPLASH affords examining real-world
complex systems by reusing and coupling models and data of individual systems into
a more comprehensive simulation [4]. As such, it provides a way to consider the
effects of change on the complete system rather than through the independent lens of
individual system models. Smarter Planet Platform for Analysis and Simulation of
Health is intended to help the stakeholders consider as much about a complex system
as possible to avoid negative unintended consequences by using relevant constituent
components (i.e., data, models, simulations) for their desired level of system
abstraction and analysis [5]. Our role in the development of SPLASH was to initiate
the design of the user interface and end user interaction model.

2.1 Composite Modeling Methodology

Modeling and simulation is a complex research area that typically draws from
mathematics, statistics, and business [6]. The process to create models and
simulations tends to be subjective and dependent on the stakeholders, the model
scope, level of detail of model content, and data requirements [6, 7]. A typical
approach to examining a complex problem is for the modeler to use the individual
components they are familiar with (i.e., as data, statistics, models, or simulations) to
model and simulate a system. The modeler then uses output from these components as
analysis of the individual pieces of the larger system. This would include working
with key stakeholders to make assumptions about the impact of changes on the
overall system using the individual pieces, resulting in an informed but fragmented
system perspective [8].
Creating complex system simulations by coupling models and data sources is not a
brand new area for the M&S community. There are a number of ways to create
complex simulations through model integration, and these can be classified into three
types: (1) integrated and uniform modeling framework, (2) tightly-coupled modeling
framework, and (3) loosely-coupled modeling framework (see [3] for additional detail
about each type of modeling framework). However, unless designed to accommodate
one of these three frameworks from the beginning, the coupling of component models
typically requires systems development work to integrate independent data sources
and/or to re-code models and simulations so they can conform to a particular protocol
or standard. By contrast, SPLASH enables the creation of composite models by
automatically translating data from one component model into the form needed by
another model to create a composite system model. In doing so, SPLASH also helps
to alleviate the guesswork and assumptions about impact of changes and the potential
for unintended consequences [3].
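The translation step can be pictured with a small sketch. The following Python example is purely illustrative — the component models, variables, and mapping are invented for this summary and are not part of SPLASH's actual implementation. A declarative mapping states how one model's output becomes another's input (including a unit conversion), and the composite run simply chains the components through it:

```python
# Illustrative sketch only -- not SPLASH's API. Two toy component models
# are coupled by a mapping that translates the output of one into the
# input form (and units) the other expects.

def ridership_model(inputs):
    """Toy component model: estimates daily rail ridership."""
    return {"riders_per_day": inputs["population"] * inputs["adoption_rate"]}

def emissions_model(inputs):
    """Toy component model: expects ridership in thousands of riders."""
    return {"co2_saved_tons": inputs["riders_thousands"] * 0.02}

# The mapping captures the translation work SPLASH automates: renaming
# variables and converting units between independently built models.
MAPPING = {"riders_thousands": lambda out: out["riders_per_day"] / 1000.0}

def translate(output, mapping):
    """Build the downstream model's input dict from the upstream output."""
    return {target: fn(output) for target, fn in mapping.items()}

def run_composite(scenario):
    """Chain the two component models into one composite system model."""
    ridership = ridership_model(scenario)
    emissions = emissions_model(translate(ridership, MAPPING))
    return {**ridership, **emissions}

result = run_composite({"population": 1_000_000, "adoption_rate": 0.05})
print(result)
```

In a real composite-modeling platform the mapping would be derived from metadata about each component's schema rather than written by hand; the sketch only shows where the translation sits in the flow.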
This suffices from a systems engineering perspective, but how is the stakeholder
supposed to actually use such a complex technology? What complicated our role of
designing an interface and interaction model for composite modeling is that there is
not a standard process for building individual models or simulations to help inform
expectations through a set of current conventions. This left us with little interaction
guidance to begin prototyping an interface design for SPLASH.

2.2 Expert Elicitation

An expert can be defined as “an individual that we can trust to have produced
thoughtful, consistent and reliable evaluations of items in a given domain” [9].
Because experts have, in essence, 10,000+ hours of experience [2], they are very
familiar with a particular process and pattern to perform a task or activity. Therefore,
it may be easy for the expert to recall the process for performing a particular activity
or sequence of tasks but difficult to express the process to a novice. To study expert
activities, many routine tasks are documented using some form of observation
[10,11]. However, the tacit knowledge and reasoning may not be apparent to the
observer when experts are performing a routine task [12].
There are two intended user groups of SPLASH, both of which are considered to
be experts: scientists and analysts. The descriptions of our population were that
scientists typically design, build, and run models and simulation experiments.
Analysts run experiments after a model has been built and/or analyze results of the
simulation run to aid in further decision-making. Both scientists and analysts are
experts in performing analytical tasks that we needed to better understand. To design
an interface for SPLASH, it was fundamental to understand what processes, tools, and
techniques our target users employ to build and run simulations to model and
understand potential system behavior.
For this study, we decided to use a series of three interview techniques to elicit
expert knowledge in a relatively short period of time – being sensitive to work
schedules and volunteer participation of our pool of professionals. Interviewing is a
common HCI technique for eliciting information from stakeholders for rich
qualitative analysis. Interviews can take many different forms including unstructured,
semi-structured, and structured [13]. We started our investigation with semi-structured
exploratory interviews to gain an understanding of what it means to do M&S
work and to further structure the remaining two investigation phases of disruptive
interviews and participatory ideation.
Disruptive interviews are derived from semi-structured interviews and can aid
in the recall of past steps to complete a process that may have become automatic
and taken for granted [12,14]. The interview technique uses a specific scenario that is
then constrained over time by placing limitations on the options available to the
participant. The constraints of the scenario are iteratively refined so that the
participant must reflect on the processes and their reasoning. This technique borrows
from condensed ethnographic interviews [12] that transform discussion from broad
issues to detailed steps [15]. It is critical that disruptive interviews consider the
context of the interviewees’ processes. Understanding such context allows the
researcher to design interview protocols appropriate to the constraints a person
typically encounters in their work.
Participatory ideation (PI) is a mash-up of two existing techniques, participatory
design and social ideation. Participatory design is often described as ‘design-by-
doing’ [16] to assist researchers in the design process. This method is often used
when researchers and designers want to accurately design a tool for an audience they
are not familiar with [17]. Complementary to this, social ideation is the process of
developing ideas with others via a web-enabled platform and utilizes brainstorming
techniques to generate new ideas [18]. Both participatory design and social ideation
are intended for early stage design and to engage with the users of the intended tool.
We interviewed professional scientists and analysts to investigate their
expectations for the design of a technology such as SPLASH. The research questions
we aimed to address were:
• RQ1: What are people’s expectations for a complex cross-disciplinary modeling
and simulation tool?
• RQ2: How should usable modeling and simulation interfaces be designed for
non-technical audiences?

3 Methods

To address the above research questions we began with exploratory interviews. We
then used the findings from the exploratory interviews to design business-relevant
scenarios, conduct disruptive interviews, and structure a participatory ideation phase.
We worked with 15 unique participants through the three phases of investigation. Of
the 15 participants, nine were scientists, four were analysts, and two held both
scientist and analyst roles. (Referred to as scientific analysts from here on, this hybrid
categorization included participants who have experience with building models
and with analyzing simulation results.) The range of modeling, simulation, and/or
analytical domain expertise included atmospheric modeling, healthcare,
manufacturing, polymer science, physics, statistics, social analytics, supply-chain
management, and text analytics. Participants were recruited opportunistically as
references and by snowball sampling.

3.1 Exploratory Interviews and Scenario Design


The first stage of this work was to understand our participant’s work context, the type
of modeling and/or simulation work that they perform, and their process for building
a model and/or running a simulation. We began by interviewing five people, of which
four were scientists and one was an analyst. The exploratory interviews were semi-
structured, lasted approximately 30 minutes, and were conducted both in-person (for
local participants) and by telephone (for remote participants). The results were used to
help gauge the level of self-reported expertise of each participant and to develop the
scenarios and disruptive interview protocol from the perspective of how M&S
activities are performed.
After conducting the exploratory interviews, we aggregated scenario examples
provided by participants, examples from our previous publications [3,4,5], and areas
of interest to IBM’s Smarter Cities initiative [19]. This yielded four scenarios for the
disruptive interviews in the fields of transportation, healthcare, disaster recovery, and
supply chain. The scenarios are hypothetical contexts in which simulations might be
used to help examine a complex business challenge. We used the scenarios developed
from the exploratory interviews to scope the disruptive interviews and provide
context for the participants of the disruptive interview phase.

3.2 Disruption

Disruptive interviews are “disruptive” in nature because of the ever-increasing
constraints placed on a solution set that is available to the participant during the
interview itself. In our study, the interviewee was presented a scenario and asked to
identify component model and data sources he or she would use to address the
challenge highlighted in the scenario. In this phase of the investigation, our
participant pool included two analysts, three scientists, and two scientific analysts.
The participants began by describing the models and data sources they thought
would be useful in addressing the scenario. This was done without constraint to get
the participant engaged in the scenario and to gather thoughts and reasoning of how
the participant would approach the scenario challenge. Then, to begin triggering
additional and more detailed feedback, the participants were only allowed to choose
from a pre-determined list of model and data sources to address the scenario. Lastly,
access to component sources was narrowed even further, which required the
participant to reflect on the trade-off of potentially not having precisely what
component sources they desired and expressing what was important to the design and
build of a composite model for analysis. Each interview lasted approximately 1 hour,
was transcribed, and then coded for emergent themes using Dedoose [20].

3.3 Participatory Ideation

All of the participants were remote for the participatory ideation phase that was
conducted to elicit early-stage interface prototype design ideas. Because all of our
participants were remote, we used an asynchronous, online collaboration tool called
Twiddla [21] as an aid to collect input. The participants were placed into one of two
conditions: individual ideation or group ideation. For this phase we recruited two
scientists and one analyst for the individual ideation condition, and two scientists and
two analysts for the group ideation condition.
We started with individual ideation, where the participants were given a blank
canvas and asked to sketch ideas for model and data source selection, composition,
and expected visualization(s) of simulation output based on one of the four scenarios
that was created from the exploratory phase. Key interface and interaction features
from the individual ideation output were then summarized and displayed as a starting
point on the Twiddla drawing canvas for the group ideation participants. We
hypothesized that the group ideation would produce more robust ideas because
participants wouldn’t need to create a new concept, but could simply build upon a set
of common ideas [22].

4 Results

The three phases of this work each provided insight towards answering our research
questions and built upon the findings of the previous phase(s). Here we provide the
key results for each.

4.1 Grounding the Investigation: Exploratory Interview Results

To begin the exploratory interviews, we asked our participants to describe or define a
model and a simulation. We received a range of responses for “model”. However, the
descriptions were all disposed towards being a codified representation (computer
program) of a physical process. An example response was:

“A model would be a representation of a physical process, but a simplified
representation of that process so that a computer can handle the level of detail,
computationally, in an efficient manner.”

Similarly, we received a range of responses to describe or define “simulation”. The
tendency was for both scientists and analysts to define a simulation in the context of
their work with modeling, making little or no distinction between a simulation and a
model. We provided definitions in the subsequent phases of investigation to overcome
any issues with ambiguous use of these terms.
Participants, regardless of their area of expertise, expressed that the software tools
used in their daily work were a large source of frustration when building models and
running simulations. Software constraints included limitations of existing tools to
correctly support and manage the model development and simulation run independent
of the problem size and the time trade-off to build custom tools.
We found that all of the scientists had experience using third party tools but would
eventually develop customized applications, program extensions to an existing tool,
and/or couple multiple third party tools. The main reasons for custom-built tools
were: (a) to accommodate legacy models and computer systems, (b) to perform
additional analysis of the post-simulation run results, (c) to properly implement error
handling during the simulation runtime, and/or (d) to add capabilities to visualize the
simulation results.
In addition to frustration with tools used to build models and run simulations, we
found that the amount of time to run a simulation was also a critical factor. The main
challenges for time were a combination of (a) proper model design, (b) data quality,
and/or (c) avoidance of unnecessary runtime delays or re-runs/re-starts. Results from
the exploratory interviews were used to scope the four scenarios for the remaining
investigations and to define some of the constraints used in the disruptive interviews.

4.2 Revelation through Disruption: Disruptive Interview Results

The disruptive interviews provided insight into the selection and prioritization of
model and data sources – a key element to composite modeling. We were able to
explore steps taken when options are suddenly limited and how one would work
through the challenge. In doing so, there were disruption-based triggers that prompted
participants to deliberately reflect on and express how they would complete the
scenario – as illustrated in the following statement:

“When you build a simulation model you can collect everything in the world
and build the most perfect model and you ask what are my 1st order effects?
What are the ones I think are most critical? If I don't have them in there, my
simulation model would be way off. The second ones are secondary effects...
Those are the ones if I don't have enough time, I could drop those.”

By narrowing the selection of model and data sources available to
address a scenario, participants expressed their preferences and expectations for being
able to find resources such as data, models, and tools. The research focused on
prioritization, selection, and preferences for data sources, type of analysis, kinds of
tools, and visualization capabilities. The participants also expressed a preference for a
navigational browser to help them visualize data and select the model and data
sources to address a scenario. Results from the disruptive interviews were used as a
guide for the low-fidelity interface design that resulted from this series of
investigations.

4.3 Early Design: Participatory Ideation Results

This next phase resulted in sketches of interface ideas generated by the participants.
Recall that the participatory ideation phase was designed with two conditions of
participation: individual ideation and group ideation. The findings show similarities
between the user groups, but also ideas unique to scientists and to analysts. In
addition, we unexpectedly found that even though our group ideation participants
were provided a sketch to start from (based on the individual ideation results), it was
ignored by all of them and each decided to start with a blank design canvas. What
follows is a summary of the design ideas that were mutual to analysts and scientists
and then those that were specific to each participant group.
Once the results of the participatory ideation phase were aggregated, three mutual
interaction and interface design ideas stood out. The first design idea was a feature to
support browsing and exploration of model and data sources that would afford
examination of schemas and/or variables prior to selection for use in a scenario. The
second was a feature to compare the output of multiple simulation runs for a
particular scenario to better understand the trade-offs of selecting one simulation
solution compared to another (Fig. 1). The third feature was an audience-specific
dashboard for making complex decisions that would provide a summary of the model
and data sources that were used when running the simulation.

Fig. 1. Example sketch of a simulation output where it would be easy to compare scenarios

Analyst-Specific Design Ideas. Analysts emphasized guidance and recommendation.
For example, analysts wanted pre-defined templates for simulation set-up and for
analyzing simulation output. They expected the system to provide recommendations
for which template to use (similar to the query prediction feature in Google) along
with the steps to run a simulation. Also, they did not want technical terms such as
“simulation”, “model”, or “factor” used in the interface. Instead, they preferred words
such as “concept” or “category”. For visualization, analysts wanted a feature to
suggest whether one chart style would explain relationships in the output data better
than another, for example, whether a bar chart would be better than a tree map.
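Such a suggestion feature could start as a handful of rules keyed to the shape of the output data. The sketch below is a hypothetical illustration of the idea only; it is not a feature of the SPLASH prototype, and the rules and chart names are invented:

```python
# Hypothetical rule-based chart recommender: suggests a chart style for
# one output variable from simple properties of the data to be displayed.

def recommend_chart(n_categories, is_hierarchical=False, is_time_series=False):
    """Return a suggested chart type for a simulation output variable."""
    if is_time_series:
        return "line chart"          # trends over simulated time
    if is_hierarchical:
        return "tree map"            # part-of-whole, nested categories
    if n_categories <= 10:
        return "bar chart"           # flat comparison across few categories
    return "scatter plot"            # too many categories to label clearly

# A flat comparison across five categories: a bar chart beats a tree map.
print(recommend_chart(n_categories=5))  # -> bar chart
```

A production feature would learn such rules from data or from analyst feedback rather than hard-coding them; the sketch only shows how little machinery the analysts' request minimally requires.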

Scientist-Specific Design Ideas. Scientists emphasized flow and a rich set of
interaction features (Fig. 2). For example, they were consistent in requiring a way to
assess the veracity and provenance of model and data sources. This stemmed from
past experience with models that did not perform as expected or data that was
inconsistent. During this phase, participants were able to query and select curated
model and data sources. However, the scientists found the selections to be limiting
and wanted to be able to upload their own sources to supplement the existing sources.
Lastly, scientists preferred high levels of interaction with the data to examine the
source and/or cleanliness of the data, and to determine the appropriateness for their
simulation goals when previewing search results prior to running the simulation. For
example, they wanted to edit parameters of the simulation set-up and interact with the
sources before and after they were selected.

Fig. 2. Example of expected flow and interaction features for composite modeling

5 Discussion

The results of this series of interviews helped us better understand our target users and
inform subsequent interface prototype design. Specifically, the use of constraints as
disruption in the interviews served as effective triggers, prompting and focusing our
experts to provide details about how they would go about designing a composite
model. These triggers demonstrated the usefulness of disruptive interviews [12,14,15],
and although [9] suggests that experts tend to produce consistent and reliable
evaluations of the work that they perform, we found that they are not particularly
consistent in the manner that they reflect on their process of doing so. In addition, we
were able to efficiently collect interaction expectations and interface design input from
the experts we worked with through participatory ideation.

During the initial process of building a composite model, our analyst community
expected a tool that would provide recommendations. These recommendations ranged
from an automated reference providing which model and data sources to use for a
particular scenario to suggestions for how to then couple the data and models in order
to run the simulation. This ran counter to what our scientist community expected:
they were familiar with building the models and wanted to be able to
interrogate the data and model sources to investigate elements such as provenance,
robustness, and limitations prior to selection for use. A compromise that may satisfy
both participant groups would be to implement an exploratory search and browse
feature where users are not recommended models and data sources, but must
prioritize the information needed before beginning the information retrieval process.
An exploratory search and browse feature may be useful for interactive navigation
of model and data sources to identify the appropriate elements to create a composite
model. For example, take two use cases we found for creating a composite model.
The first is that users may know the specific scenario or issue that they want to
analyze using a composite model; and to facilitate the identification of appropriate
and useful source components, they want to perform a search using specific keywords
or questions. The second use case is that users are in the early stages of defining their
project scope and want to run a simplified or meta-simulation to explore what is
important in order to identify the appropriate source components for the design of the
composite model. This loose exploration would be equivalent to browsing content on
a system, or browsing a larger set of possible scenarios, and getting approximate
output based on approximate inputs. This would allow the user the luxury of having a
basic understanding of the model and data requirements to target particular source
components.
Implementing an exploratory search and browse would require the underlying
systems to have information about the source components (most likely through
metadata, e.g., [3]) along with a set of composite model templates to enable this
manner of recommendation system. Alternatively, a more manual approach could be
taken such as prompting the user to identify known factors to be explored prior to
building the simulation, or identify the important relationships between source
components. This would lead to the system displaying either a dashboard of specific
sources or a catalog of different scenarios to consider. Participants agreed this
exploration should include a high level of interaction with different tuning knobs and
a visualization recommendation interface. In addition, audience-specific dashboards
would be useful for making complex decisions, providing a summary of the
simulation models and source components used in the simulations.
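The metadata-driven exploration the participants asked for can be sketched as a simple keyword match over source-component metadata. The `SourceComponent` fields and the toy catalog below are illustrative assumptions, not SPLASH's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class SourceComponent:
    """A model or data source described by metadata (fields are illustrative)."""
    name: str
    kind: str                           # "model" or "data"
    keywords: set = field(default_factory=set)

def search(components, query_terms):
    """Rank components by how many query terms their metadata matches."""
    scored = [(len(c.keywords & set(query_terms)), c) for c in components]
    return [c for score, c in sorted(scored, key=lambda s: -s[0]) if score > 0]

# Hypothetical catalog entries
catalog = [
    SourceComponent("transport-model", "model", {"traffic", "commute", "city"}),
    SourceComponent("obesity-data", "data", {"health", "diet", "city"}),
]
hits = search(catalog, ["city", "health"])   # best-matching components first
```

A richer implementation would rank by provenance and robustness metadata as well, which is what the scientist group wanted to interrogate before selection.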
For the simulation output, our results show that both user groups want a
comparison feature that illustrates trade-offs of important scenario factors used in the
final simulation. In addition, they would prefer recommended visualizations for the
simulation to best understand and interpret the generated output. Overall, we saw a
desire to explore model and data sources before and after use in a simulation.
Exploring Interaction Design for Advanced Analytics and Simulation 13

6 Conclusions

This paper describes the results of the first stages of a research effort to explore
interaction expectations for a modeling and simulation technology. The study was set
within the context of a composite modeling and simulation technology called
SPLASH that enables the coupling of independent models (and their respective data
sources) to examine what-if trade-offs for complex systems. Our participant pool
included scientists and analysts, both considered experts in the areas of modeling,
simulation, and analytics. Without the benefit of interaction conventions for modeling
and simulation technologies, we used three techniques (exploratory interviews,
disruptive interviews, and participatory ideation) to elicit information from experts in
the field of modeling and simulation to inform the interaction design of the SPLASH
interface.
Our results show that there are differences in interaction expectations between
scientists and analysts. Our scientists wanted considerably more explicit features and
functionality to enable deep precision for modeling and simulation tasks; whereas our
analysts wanted simplified functionality with intelligent features and recommendation
functionality. We also found some common ground between our participants, such as
both groups wanting a comparison feature to show trade-offs based on simulation
output. Our findings point towards a semi-automated interface that provides a
recommended starting point and allows for flexibility to explore component sources
of models and data prior to selection for use, along with a pre-screening capability to
quickly examine potential simulation output based on an early idea for a composite
model.

References
1. Chilana, P., Wobbrock, J., Ko, A.: Understanding Usability Practices in Complex
Domains. In: Proceedings of the 28th International Conference on Human Factors in
Computing Systems, CHI 2010, pp. 2337–2346. ACM Press (2010)
2. Ericsson, K.A., Prietula, M.J., Cokely, E.T.: The Making of an Expert. Harvard Business
Review: Managing for the Long Term (July 2007)
3. Tan, W.C., Haas, P.J., Mak, R.L., Kieliszewski, C.A., Selinger, P., Maglio, P.P., Li, Y.:
Splash: A Platform for Analysis and Simulation of Health. In: IHI 2012 – Proceedings of
the 2nd ACM SIGHIT International Health Informatics Symposium, pp. 543–552 (2012)
4. Maglio, P.P., Cefkin, M., Haas, P., Selinger, P.: Social Factors in Creating an Integrated
Capability for Health System Modeling and Simulation. In: Chai, S.-K., Salerno, J.J.,
Mabry, P.L. (eds.) SBP 2010. LNCS, vol. 6007, pp. 44–51. Springer, Heidelberg (2010)
5. Kieliszewski, C.A., Maglio, P.P., Cefkin, M.: On Modeling Value Constellations to
Understand Complex Service System Interactions. European Management Journal 30(5),
438–450 (2012)
6. Robinson, S.: Conceptual Modeling for Simulation Part I: Definition and Requirements.
Journal of the Operational Research Society 59(3), 278–290 (2007a)
7. Robinson, S.: Conceptual Modeling for Simulation Part II: A Framework for Conceptual
Modeling. Journal of the Operational Research Society 59(3), 291–304 (2007b)

8. Haas, P., Maglio, P., Selinger, P., Tan, W.: Data is Dead... Without What-If Models.
PVLDB 4(12), 11–14 (2011)
9. Amatriain, X., Lathia, N., Pujol, J.M., Kwak, H., Oliver, N.: The Wisdom of the Few. In:
Proceedings of the 32nd International ACM SIGIR Conference on Research and
Development in Information Retrieval - SIGIR 2009, pp. 532–539. ACM Press (2009)
10. Karvonen, H., Aaltonen, I., Wahlström, M., Salo, L., Savioja, P., Norros, L.: Hidden Roles
of the Train Driver: A Challenge for Metro Automation. Interacting with Computers 23(4),
289–298 (2011)
11. Lutters, W.G., Ackerman, M.S.: Beyond Boundary Objects: Collaborative Reuse in
Aircraft Technical Support. Computer Supported Cooperative Work (CSCW) 16(3), 341–
372 (2006)
12. Comber, R., Hoonhout, J., van Halteren, A., Moynihan, P., Olivier, P.: Food Practices as
Situated Action: Exploring and Designing for Everyday Food Practices with Households.
In: Computer Human Interaction (CHI), pp. 2457–2466 (2013)
13. Merriam, S.B.: Qualitative Research and Case Study Applications in Education. Jossey-
Bass (1998)
14. Hoonhout, J.: Interfering with Routines: Disruptive Probes to Elicit Underlying Desires.
In: CHI Workshop: Methods for Studying Technology in the Home (2013)
15. Millen, D.R.: Rapid Ethnography: Time Deepening Strategies for HCI
Field Research. In: Proceedings of the 3rd Conference on Designing Interactive Systems:
Processes, Practices, Methods, and Techniques, pp. 280–286 (2000)
16. Kristensen, M., Kyng, M., Palen, L.: Participatory Design in Emergency Medical Service:
Designing for Future Practice. In: Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems, pp. 161–170. ACM Press (2006)
17. Hagen, P., Robertson, T.: Dissolving Boundaries: Social Technologies and Participation in
Design. Design, pp. 129–136 (July 2009)
18. Faste, H., Rachmel, N., Essary, R., Sheehan, E.: Brainstorm, Chainstorm, Cheatstorm,
Tweetstorm: New Ideation Strategies for Distributed HCI Design. In: Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems, pp. 1343–1352 (2013)
19. IBM, http://www.ibm.com/smarterplanet/us/en/smarter_cities/overview/index.html
20. Dedoose, http://www.dedoose.com/
21. Twiddla, http://www.twiddla.com
22. Osborn, A.F.: Applied Imagination, 3rd edn. Oxford (1963)
Decision Support System Based on Distributed
Simulation Optimization for Medical Resource Allocation
in Emergency Department

Tzu-Li Chen

Department of Information Management, Fu-Jen Catholic University
510 Chung Cheng Rd., Hsinchuang, Taipei County 24205, Taiwan
[email protected]

Abstract. The number of emergency cases or people making emergency room visits has
rapidly increased annually, leading to an imbalance in supply and demand, as well as
long-term overcrowding of emergency departments (EDs) in hospitals. However, solutions
targeting the increase of medical resources and improving patient needs are not
practicable or feasible in the environment in Taiwan. Therefore, under the constraint
of limited medical resources, EDs must optimize medical resource allocation to minimize
the average patient length of stay (LOS) and the medical resource waste cost (MWC).
This study constructs a mathematical model for medical resource allocation in EDs
according to emergency flow or procedures. The proposed mathematical model is highly
complex and difficult to solve because its performance value is stochastic and it
considers both objectives simultaneously. Thus, this study proposes a multi-objective
simulation optimization algorithm that integrates a non-dominated sorting genetic
algorithm II (NSGA II) and multi-objective computing budget allocation (MOCBA), and
constructs an ED simulation model to address the challenges of multi-objective medical
resource allocation. Specifically, the NSGA II investigates plausible solutions for
medical resource allocation, and the MOCBA identifies effective sets of feasible
Pareto medical resource allocation solutions while allocating simulation or computation
budgets effectively. Additionally, a discrete-event simulation model of the ED
estimates the expected performance values. Furthermore, based on the concept of a
private cloud, this study presents a distributed simulation optimization framework
to reduce simulation time and subsequently obtain simulation outcomes more rapidly.
This framework assigns solutions to different virtual machines on separate computers
to reduce simulation time, allowing rapid retrieval of simulation results and the
collection of effective sets of optimal Pareto medical resource allocation solutions.
Finally, this research constructs an ED simulation model based on the ED of a hospital
in Taiwan, and determines the optimal ED resource allocation solution by using the
simulation model and algorithm. The effectiveness and feasibility of this method are
verified by experiment, and the experimental analysis shows that the proposed
distributed simulation optimization framework can effectively reduce simulation time.

Keywords: Simulation optimization, Decision support, Non-dominated sorting genetic
algorithm, Multi-objective computing budget allocation, Emergency department.

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 15–24, 2014.
© Springer International Publishing Switzerland 2014
16 T.-L. Chen

1 Introduction
In recent years, Taiwan has gradually become an aging society. The continuous
growth of the senior population annually accelerates the increase and growth rate in
emergency department (ED) visits. According to statistics from the Department of
Health, Executive Yuan, from 2000 to 2010, the overall number of people making
emergency visits in 2000 was 6,184,031; the figure had surged rapidly to 7,229,437 in
2010, demonstrating a growth rate of approximately 16%.
People making emergency visits and the growth rate for these visits have risen
rapidly in the past 11 years. Such an increase causes an imbalance between supply
and demand, and ultimately creates long-term overcrowding in hospital EDs. This
phenomenon is primarily caused by the sharp increase in patients (demand side), and
the insufficient or non-corresponding increase in medical staffing (supply side).
Consequently, medical staff capacity cannot accommodate excessive patient loads,
compelling patients to wait long hours for medical procedures, thus contributing to
long-term overcrowding in EDs.
The imbalance in supply and demand also prolongs patient length of stay (LOS) in
the ED. According to data from the ED at National Taiwan University Hospital, Shin
et al. (1999) found that, among 5,810 patients, approximately 3.6% (213 patients) had
stayed over 72 hours in the ED. Of these 213 patients, some had waited for physicians
or beds, whereas some had waited in the observation room until recovery or to be
cleared of problems before being discharged. These issues frequently lead to long-
term ED overcrowding. Based on data analysis of the case hospital examined in this
research, among 43,748 patients, approximately 9% (3,883 patients) had stayed in the
ED for over 12 hours, approximately 3% (1,295) had stayed over 24 hours, and
approximately 1% (317 patients) had stayed in the ED for 72 hours.
Hoot and Aronsky (2008) postulated three solutions to address the overcrowding of
EDs: (1) Increase resources: solve supply deficiency by adding manpower, number
of beds, equipment, and space. (2) Effective demand management: address problems
of insufficient supply by implementing strategies, such as referrals to other depart-
ments, clinics, or hospitals. (3) Operational research: explore solutions to ED over-
crowding by exploiting management skills and models developed in operational
research. For instance, determining effective resource allocation solutions can
improve the existing allocation methods and projects, ultimately enhancing ED effi-
ciency, lowering patient waiting time, and alleviating ED overcrowding.
Among the previously mentioned solutions, the first solution is not attainable in
Taiwan, because most hospital EDs have predetermined and fixed manpower, budget,
and space; hence, resources cannot be expanded to resolve the problem. The second
solution is not legally permitted in Taiwan, and is essentially not applicable. Both of
the preceding solutions are seemingly inappropriate and not applicable; therefore, this
study adopted the third solution, which entailed constructing an emergency flow si-
mulation model by conducting operational research. Additionally, the simulation op-
timization algorithm was used to identify the optimal medical resource allocation
solution under the constraint of limited medical resources to attain minimal average
patient LOS and minimal MWC, subsequently ameliorating ED overcrowding.
The main purpose of this research was to determine a multi-objective simulation
optimization algorithm that combines a non-dominated sorting genetic algorithm II
(NSGA II) and a multi-objective optimal computing budget allocation (MOCBA). An
additional purpose was to conduct simulations of schemes and solutions by applying
an ED discrete event simulation (DES) model produced using simulation software to
obtain optimal resource allocation solutions.
In actual solution or scheme simulations, an enormous amount of simulation time
is required to perform a large quantity of solution simulations. Therefore, a distributed
simulation framework is necessary to save simulation time. This study adopted the
concept of “private cloud,” and used the distributed simulation optimization frame-
work to implement and solve this multi-objective emergency medical resource optim-
al allocation problem. The operation of this distributed simulation optimization
framework can be categorized into two main areas: a multi-objective simulation opti-
mization algorithm and a simulation model. During implementation and operation,
NSGA II is first used to search feasible solutions and schemes. The simulation model
is then used to simulate, obtain, and evaluate performance values, whereas MOCBA
determines simulation frequency for the solution or scheme during simulation. For the
simulation model, this study adopted a distributed framework, in which multiple vir-
tual machines (VMs) are installed on separate computers. For solution or scheme
allocation, single control logic is used to assign various resource allocation solutions
to simulation models for different VMs to conduct simulation. Performance values are
generated and returned after the simulation is complete. This framework is characte-
rized by its use of distributed simulation to rapidly obtain performance values and
reduce simulation time.

2 Medical Resource Allocation Model in Emergency Department

2.1 The Interfaces with Associated Tools

This study took the ED flow of a specific hospital as its research target. It is
assumed that patient interarrival times and the service times of each medical
service follow specific stochastic distributions, and that the allocation of each
type of medical resource (such as staff, equipment, and emergency beds) is
deterministic and fixed, i.e., it does not change dynamically over time. Under these
pre-established conditions, we formulate a multi-objective emergency medical
resource allocation problem whose primary goals are minimal average LOS and minimal
average MWC. Under restricted medical resources, this study aims to obtain the most
viable solution for emergency medical resource allocation.

Indices:
i : index for staff type (i = 1, ..., I), such as doctor, nurse, etc.
j : index for working area (j = 1, ..., J), such as registration area, emergency
    and critical care area, treatment area, fever area, etc.
k : index for medical resource type (k = 1, ..., K), such as X-ray machines,
    computed tomography (CT) machines, lab technicians, hospital beds, etc.

Parameters:
c_ij : labor cost of staff type i in working area j
c_k  : cost of medical resource type k
l_ij : minimum number of staff of type i in working area j
l_k  : minimum number of medical resources of type k
u_i  : maximum number of staff of type i
u_k  : maximum number of medical resources of type k

Decision variables:
X_ij : number of staff of type i in working area j
X    : matrix of staff numbers over all types and areas, X = (X_ij)_{I x J}
Y_k  : number of medical resources of type k
Y    : vector of medical resource numbers over all types, Y = (Y_k)_K

Stochastic medical resource allocation model:

    min f_1(X, Y) = E[LOS(X, Y; ω)]                     (1)
    min f_2(X, Y) = E[MWC(X, Y; ω)]                     (2)

subject to

    l_ij ≤ X_ij                ∀ i, j                   (3)
    l_k ≤ Y_k                  ∀ k                      (4)
    Σ_j X_ij ≤ u_i             ∀ i                      (5)
    Y_k ≤ u_k                  ∀ k                      (6)
    X_ij ≥ 0 and integer       ∀ i, j                   (7)
    Y_k ≥ 0 and integer        ∀ k                      (8)

These equations are interpreted as follows. Equation (1) minimizes the expected
patient LOS, where ω denotes the stochastic effect; Equation (2) minimizes the
expected MWC, where ω again denotes the stochastic effect. Minimizing the MWC has
two implications: (a) maximizing the resource utilization rate, and (b) minimizing
medical resource cost. Equation (3) requires the number of physicians and nurses
in each area to meet its lower limit; Equation (4) requires the numbers of X-ray
machines, CT machines, laboratory technicians, and beds in the ED to meet their
lower limits; Equation (5) caps the total number of physicians and nurses of each
type across all areas at its upper limit; Equation (6) caps the numbers of X-ray
machines, CT machines, laboratory technicians, and beds in the ED at their upper
limits; and Equations (7) and (8) require all staff and resource counts to be
non-negative integers.
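Constraints (3)-(8) can be sketched as a feasibility check in code. The bounds and the 2x2 staffing instance below are hypothetical examples, not the case hospital's data; integrality of the counts is assumed by construction.

```python
def is_feasible(X, Y, l_ij, l_k, u_i, u_k):
    """Check Eqs. (3)-(8): X is an I x J staff matrix, Y a length-K resource list."""
    staff_ok = all(0 <= x and l <= x                        # Eqs. (3), (7)
                   for row, lrow in zip(X, l_ij)
                   for x, l in zip(row, lrow))
    res_ok = all(0 <= y and lk <= y <= uk                   # Eqs. (4), (6), (8)
                 for y, lk, uk in zip(Y, l_k, u_k))
    caps_ok = all(sum(row) <= u for row, u in zip(X, u_i))  # Eq. (5)
    return staff_ok and res_ok and caps_ok

# Hypothetical instance: 2 staff types x 2 working areas, 3 resource types
ok = is_feasible(X=[[2, 1], [3, 2]], Y=[1, 2, 1],
                 l_ij=[[1, 1], [1, 1]], l_k=[1, 1, 1],
                 u_i=[4, 6], u_k=[2, 3, 2])
```

In the actual problem the search is over all such feasible (X, Y) pairs, and the objectives (1)-(2) are only available as noisy simulation estimates.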

3 Multi-objective Simulation Optimization


Multi-objective medical resource allocation is a stochastic optimization problem, and
the ED system shows a stochastic effect. Therefore, to obtain the expected patient
LOS and the expected rate of waste of each resource, the ED simulation model and
the repetition of simulation are required to obtain the estimation value. However,
determining the frequency of simulation repetition during the process of simulation is
crucial. Excess simulation repetition improves the accuracy of the objective values,
but consumes large amounts of computational resources. Therefore, this research
proposes a multi-objective simulation optimization algorithm incorporating NSGA II
and MOCBA to address the multi-objective ED resource allocation problem. The NSGA II
algorithm, a multi-objective population-based search algorithm, identifies an
efficient Pareto set of non-dominated medical resource allocation solutions through
evolutionary processes. However, to estimate the fitness of each chromosome (medical
resource allocation solution) precisely, NSGA II requires a large number of
simulation replications of the stochastic ED simulation model to find the
non-dominated solution set. Moreover, allotting identical numbers of replications to
all candidate chromosomes causes high simulation costs and consumes huge
computational resources. Therefore, to improve simulation efficiency, the MOCBA
algorithm, a multi-objective ranking-and-selection (R&S) method developed by Lee et
al. (2010), is applied to reduce the total number of simulation replications and to
allocate replications (computation budget) efficiently across candidate chromosomes
when evaluating solution quality, so that the promising non-dominated Pareto set can
be identified. The algorithmic procedure for integrating NSGA II and MOCBA is
demonstrated in Figure 1.

Fig. 1. Flow chart of the integrated NSGA II and MOCBA algorithm
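The non-dominated filtering at the heart of NSGA II can be illustrated in a few lines. This sketch extracts only the first Pareto front from mean objective estimates; it omits crowding distance, genetic operators, and the MOCBA replication logic, and the objective values (average LOS in hours, MWC in cost units) are invented for illustration.

```python
def dominates(a, b):
    """a dominates b for minimization: no worse on every objective, better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of (LOS, MWC) objective pairs."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical mean objective estimates for four candidate allocations
estimates = [(4.2, 900), (3.8, 1100), (5.0, 850), (4.5, 1200)]
front = pareto_front(estimates)   # (4.5, 1200) is dominated and drops out
```

In the integrated algorithm, the estimates fed to this filter are themselves uncertain, which is precisely why MOCBA concentrates extra replications on the candidates whose front membership is hardest to decide.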

4 Distributed Simulation Optimization Framework


This study used eM-Plant 8.1 as a tool for developing the ED flow simulation model.
Figure 2 illustrates the overall ED flow simulation model. In addition, a framework of
distributed simulation optimization was developed to reduce the computation time via
private cloud technology. In this framework, we initially installed Microsoft
Hyper-V, a hypervisor, on several physical servers to form a computing resource
pool. We then established numerous virtual machines (VMs) in this resource pool and
assigned one simulation model to each VM. Emergency department procedures were
subsequently simulated using these simulation models.

Fig. 2. Simulation model of emergency department flow
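As a rough illustration of what such a discrete-event model computes, the toy queue below estimates average LOS for a single-stage ED with exponential interarrival and service times. The rates, patient counts, and single-stage structure are invented and far simpler than the multi-area eM-Plant model.

```python
import heapq
import random

def simulate_ed(n_doctors, arrival_mean, service_mean, n_patients, seed=0):
    """Tiny discrete-event sketch of an ED queue: patients arrive, wait for the
    earliest-free doctor (FCFS), are treated, and leave. Returns average LOS.
    All distributions and rates here are illustrative."""
    rng = random.Random(seed)
    t = 0.0
    free = [0.0] * n_doctors          # min-heap of each doctor's next-free time
    heapq.heapify(free)
    total_los = 0.0
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / arrival_mean)   # next arrival time
        doctor_free = heapq.heappop(free)
        start = max(t, doctor_free)                # wait if all doctors busy
        service = rng.expovariate(1.0 / service_mean)
        heapq.heappush(free, start + service)
        total_los += (start + service) - t         # waiting + treatment time
    return total_los / n_patients

# Adding doctors shortens average LOS when the system is overloaded
los_few = simulate_ed(n_doctors=2, arrival_mean=10, service_mean=25, n_patients=2000)
los_many = simulate_ed(n_doctors=4, arrival_mean=10, service_mean=25, n_patients=2000)
```

Each call plays the role of one simulation replication in the framework: the stochastic output is exactly why MOCBA must decide how many replications each candidate allocation deserves.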
The distributed simulation optimization framework in Figure 3 comprises a client and
a server. After the initial client parameters are set, Web services (WS) are
employed to reach the NSGA II on the server via the Internet, and the parameters are
transferred to the NSGA II's WS. Upon receiving the HTTP request and parameter
settings, the NSGA II conducts its algorithmic procedure, calling the WS of the
multi-objective optimal computing budget allocation (MOCBA) algorithm when
simulation is required. The MOCBA determines the number of simulation iterations
required, calling the WS of the simulation coordinator (SC) while simultaneously
uploading the relevant simulation programs to the database. The SC's WS manages the
simulation models, identifies idle simulation models, and distributes simulation
programs to the idle models for execution. After identifying which model will
simulate, the SC's WS commands the model to retrieve the simulation program from the
database. The simulation results are then transferred to the SC's WS, which passes
them to the MOCBA to determine the simulation iterations required until the
termination conditions are achieved. After the MOCBA terminates, the performance
results are transferred to the NSGA II's WS, which runs until its own termination
conditions are achieved. Following the termination of the NSGA II, the optimal
program produced is transferred to the client.

[Figure 3 depicts three tiers: the client (web browser), the master controller
(multi-objective optimizer with GA/PSO, simulation budget allocation via MOCBA,
simulation coordinator, and VM dispatcher, backed by a database and exposed as Web
services), and the private cloud VM pool hosting the simulation models on VM 1
through VM n, with numbered message flows among them.]
Fig. 3. Distributed simulation optimization framework



This framework is executed in the following steps:

Step 1: The client calls the NSGA II's WS and transfers the user-set parameters
        to it via the Internet.
Step 2: When simulation is required, the NSGA II calls the MOCBA's WS and
        transfers the simulation programs to it.
Step 3: When performance values are required, the MOCBA uploads the required
        simulation programs to the database via ADO.NET and calls the SC's WS to
        determine which VM simulation model will simulate.
Step 4: The SC's WS uses sockets to identify which VM is available and commands
        the simulation model on that VM to perform a simulation.
Step 5: After receiving the execution command from the coordinator socket, the
        simulation model collects the simulation program data from the database
        via open database connectivity.
Step 6: After executing the simulation program, the performance values are
        transferred to the SC's WS via the socket.
Step 7: The SC's WS transfers the performance results to the MOCBA's WS after
        receiving them from the simulation model.
Step 8: After receiving the performance values, the MOCBA's WS executes the
        MOCBA until the termination conditions are achieved. The performance
        results at algorithm termination are then transferred to the NSGA II's WS.
Step 9: After receiving the performance values, the NSGA II's WS executes the
        NSGA II until the termination conditions are achieved. The produced
        results are then transferred to the client via the Internet.
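Steps 3-7 amount to a dispatcher draining a job queue with whatever VMs are idle. The thread-per-VM queue below is a toy stand-in for the socket-based coordinator; the `simulate` callable replaces an actual eM-Plant run, and all names are illustrative.

```python
import queue
import threading

def run_coordinator(programs, n_vms, simulate):
    """Toy stand-in for the simulation coordinator: a shared queue of
    simulation programs drained by one worker thread per VM."""
    jobs = queue.Queue()
    for p in programs:
        jobs.put(p)
    results = {}
    lock = threading.Lock()

    def vm_worker(vm_id):
        # Each "VM" pulls the next pending program until the queue is empty,
        # which mirrors the SC assigning work only to idle simulation models.
        while True:
            try:
                prog = jobs.get_nowait()
            except queue.Empty:
                return
            value = simulate(prog)          # stands in for one simulation run
            with lock:
                results[prog] = value

    workers = [threading.Thread(target=vm_worker, args=(i,)) for i in range(n_vms)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results

# Hypothetical "performance value": pretend LOS is 1.5x the program id
out = run_coordinator(programs=list(range(8)), n_vms=3,
                      simulate=lambda p: p * 1.5)
```

The real framework adds Web-service and database hops around this core loop, but the idle-VM scheduling logic is the part that delivers the speedups measured in Section 5.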

5 Experimental Analysis for the Distributed Simulation Optimization Framework

In this experiment, we primarily compared the simulation times for varying numbers
of VMs to identify the differences when applying the proposed distributed simulation
optimization framework and the effect that the number of VMs has on simulation
times. In addition, this experiment analyzed the differences in simulation times for
various allocation strategies with equal numbers of VMs.
We adopted the integrated NSGA II_MOCBA as the experimental algorithm, and employed
the optimal NSGA II parameter settings determined in the previous experiments:
number of generations = 10, population size = 40, crossover rate C = 0.7, mutation
rate M = 0.3, with termination after 10 generations. The initial number of MOCBA
simulation iterations was n_0 = 5, with a possible increase of Δ = 30 and
P*{CS} = 0.95 for every iteration.
Regarding the number of VMs, we conducted experiments using 1, 6, 12, and 18 VMs.
Table 1 shows the execution times for the simulation programs with varying numbers
of VMs and allocation strategies. For every configuration other than 1 VM, two
allocation methods can be used: excluding or including allocation of the number of
simulation iterations. Excluding allocation means that a simulation program is
assigned to a single VM for execution regardless of its number of simulation
iterations; that is, the iterations of one program are never divided among separate
VMs. Conversely, including allocation means that when the number of iterations for a
simulation program exceeds the initial number of iterations n_0 set by the MOCBA,
the iterations are divided and allocated to multiple VMs for execution.
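The "including allocation" policy can be sketched as an even split of a program's iterations once they exceed n_0. This helper is a guess at the policy's shape, not the authors' actual implementation.

```python
def split_iterations(n_iter, n0, n_vms):
    """'Excluding' keeps all iterations on one VM; 'including' splits a
    program that needs more than n0 iterations as evenly as possible
    across the available VMs."""
    if n_iter <= n0:
        return [n_iter]                  # stays on a single VM
    base, extra = divmod(n_iter, n_vms)
    return [base + 1 if i < extra else base for i in range(n_vms)]

# With the paper's settings (n0 = 5, one MOCBA increment of Δ = 30):
chunks = split_iterations(5 + 30, n0=5, n_vms=6)   # spread over all 6 VMs
```

Since the replications of a stochastic simulation are independent, running the chunks in parallel and pooling the outputs yields the same estimate as a single long run, which is why this strategy shortens the times in Table 1.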

Table 1. Execution times for the simulation programs with varying numbers of VMs
and allocation strategies

Number of VMs | Allocation method              | Number of executions | Execution time
1             | -                              | 4200                 | 690.5 h (28.77 d)
6             | Excluding iteration allocation | 4260                 | 112 h (4.67 d)
6             | Including iteration allocation | 4260                 | 105.5 h (4.40 d)
12            | Excluding iteration allocation | 4290                 | 58 h (2.42 d)
12            | Including iteration allocation | 4230                 | 52 h (2.17 d)
18            | Excluding iteration allocation | 4380                 | 52 h (2.17 d)
18            | Including iteration allocation | 4350                 | 40 h (1.67 d)

According to the experimental results shown in Table 1, we determined the following
insights:

1. The overall execution time for 1 VM approximated a month (28 d). However, the
execution time was reduced significantly to approximately 4 and 1.5 days when
the number of VMs was increased to 6 and 18, respectively (Table 1). In addition,
the curve exhibited a significant decline from 1 VM to 18 VMs. Thus, we can con-
firm from these results that the proposed distributed simulation optimization
framework can effectively reduce simulation times.
2. The overall execution time was reduced from approximately 4 days to 1 day when
the number of VMs increased from 6 to 18 (Table 1). In addition, the curve exhi-
bited a decline from 6 VMs to 18 VMs. These results indicate that the simulation
times can be reduced by increasing the number of VMs.
3. With a fixed number of VMs, the time required when simulation iterations are
   divided and allocated among numerous VMs is shorter than when the entire number
   of iterations is allocated to one VM (Table 1). Considering 6 VMs as an example,
   the execution time without dividing the iterations was 112 h, whereas the
   execution time with dividing the iterations was 105.5 h. These results indicate
   that distributing the simulation iterations among numerous VMs can reduce the
   overall execution time.
4. According to the experimental results, we infer that a limit exists on how much
   increasing the number of VMs can reduce the simulation times. In other words,
   when a specific number of VMs is added to a low number of available VMs, the
   simulation time is significantly reduced. However, when the number of VMs
   increases beyond a certain amount, the reduction in simulation time becomes less
   significant, eventually converging. This indicates that beyond a certain number
   of VMs, the simulation time does not decline with additional VMs.
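Reading the "including allocation" rows of Table 1 as a scaling experiment makes insight 4 concrete: speedup keeps growing, but per-VM efficiency falls off by 18 VMs. Efficiency above 1 at small VM counts likely reflects the slightly different execution counts per row rather than true superlinear scaling.

```python
baseline_h = 690.5                            # 1 VM, from Table 1
including_h = {6: 105.5, 12: 52.0, 18: 40.0}  # "including allocation" rows (hours)

# Speedup relative to one VM, and parallel efficiency (speedup per VM)
speedup = {v: baseline_h / t for v, t in including_h.items()}
efficiency = {v: speedup[v] / v for v in including_h}
# Speedup rises with VM count, but efficiency drops by 18 VMs, matching
# the observed convergence in simulation-time reduction.
```

This simple calculation suggests the cost-effective operating point lies somewhere between 12 and 18 VMs for this workload.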

6 Conclusion

This study investigated the resolution of ED overcrowding through optimal allocation
of ED medical resources. First, an emergency simulation model for a hospital in
Taiwan was designed based on interviews and analysis regarding procedures and
flow. A multi-objective simulation optimization algorithm was then designed by inte-
grating the NSGAII algorithm and the MOCBA. To obtain simulation outcomes more
rapidly by diminishing simulation time, this study proposes a distributed simulation
optimization framework based on the private cloud concept to practice or implement
and resolve this multi-objective emergency medical resource optimization allocation
problem. In the proposed distributed simulation optimization framework, solutions or
schemes are assigned to different VMs on separate computers to conduct simulations
and minimize simulation time, as well as obtain simulation results more rapidly.

References
1. Ahmed, M.A., Alkhamis, T.M.: Simulation optimization for an ED healthcare unit in
Kuwait. European Journal of Operational Research 198, 936–942 (2009)
2. Chen, C.H., Lee, L.H.: Stochastic simulation optimization: An Optimal Computing Budget
Allocation. World Scientific Publishing Co. (2010)
3. Hoot, N.R., Aronsky, D.: Systematic Review of ED Crowding: Causes, Effects, and
Solutions. Annals of Emergency Medicine 52(2), 126–136 (2008)
4. Lee, L.H., Chew, E.P., Teng, S., Goldsman, D.: Finding the non-dominated Pareto set for
multi-objective simulation models. IIE Transactions 42(9), 656–674 (2010)
5. Pitombeira Neto, A.R., Gonçalves Filho, E.V.: A simulation-based evolutionary multiobjec-
tive approach to manufacturing cell formation. Computers & Industrial Engineering 59,
64–74 (2010)
The Impact of Business-IT Alignment on Information
Security Process

Mohamed El Mekawy, Bilal AlSabbagh, and Stewart Kowalski

Department of Computer and Systems Science (DSV), Stockholm University, Sweden


{moel,bilal}@dsv.su.se, [email protected]

Abstract. Business-IT Alignment (BITA) is linked to organizational issues that concern business-IT relationships at the strategic, tactical and operational levels. In this context, the information security process (ISP) is one of the issues that can be influenced by BITA. However, this impact has not yet been researched. This paper investigates the impact of BITA on ISP. For this investigation, the relationships between the elements of the Strategic Alignment Model and the components of the Security Value Chain Model are considered. The research process is an in-depth literature survey followed by a case study of two organizations located in the United States and the Middle East. The results show a clear impact of BITA on how organizations distribute their allocated security budget and resources based on needs and risk exposure. The results should help both practitioners and researchers gain improved insight into the relationships between BITA and IT security components.

Keywords: Business-IT Alignment, BITA, Information Security Process, Security Value Chain, Security Culture.

1 Introduction

The importance of IT as an enabler of business has spawned research on the effective and efficient deployment of IT to gain strategic advantage (Sim and Koh, 2001). However, many companies still fail to gain value from large IT investments. This failure is partially attributable to a lack of Business-IT Alignment (BITA) (Leonard & Seddon, 2012). Strategic alignment refers to applying IT in a way that is timely, appropriate and in line with business needs, goals and strategies (Luftman, 2004). Therefore, in an increasingly competitive, IT-driven and vibrant global business environment, companies can only gain strategic advantage and derive value from IT investments when management continuously ensures that business objectives are shaped and supported by IT (Kearns & Lederer, 2000).
Achieving such objectives requires strong relationships between the business and IT domains not only at the strategic level, but also at the tactical and operational levels (Tarafdar and Qrunfleh, 2009). This highlights the importance of ensuring internal coherence between organizational requirements and the delivery capability of the IT domain. It also highlights the importance of the Information Security Process (ISP) as an integrated part of IT strategy, tactics and operations (Avison et al., 2004). In particular,

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 25–36, 2014.
© Springer International Publishing Switzerland 2014

BITA at the operational level requires a social perspective, with aspects such as interaction and shared understanding/knowledge across teams and personnel. Even though BITA has been shown to have a potential impact on ISP at different organizational levels, little research has been done in this area (Saleh, 2011). Given that the ISP focuses on relationships between business and IT in support of BITA, its complexity increases when one considers the different views of IT in organizations and how to utilize IT with regard to business objectives.
This paper investigates the impact of BITA on ISP. For this investigation, the relationships between the elements of the Strategic Alignment Maturity Model (SAM) developed by Luftman (2000) and the components of the Security Value Chain Model (SVCM) developed by Kowalski & Boden (2002) are considered. The remainder of the paper is structured as follows: the research approach is discussed in section 2. The implications of BITA and ISP are presented in sections 3 and 4 respectively. Potential relationships between BITA components and the SVCM are presented in section 5. Results and analyses are presented in section 6, followed by conclusions in section 7.

2 Research Approach

The research method comprised an in-depth literature survey followed by case study research. The literature survey aimed to study the theories behind BITA and ISP and to hypothesize the impact of the BITA criteria on the SVCM's components. Following that, qualitative data was collected from two organizations through semi-structured interviews with four respondents in each organization, selected to represent strategic and senior management on both the business and IT sides. The results were codified and compared to the proposed hypotheses.
The first organization (referred to as Company-A) is a midsize insurance company in the Midwest of the United States. The second organization (referred to as Company-B) is a governmental entity located in the Middle East that acts as the national regulator for the communication and technology business.

3 Implications of Business-IT Alignment

In the literature, BITA is related to different scopes and is therefore defined differently. While some definitions focus more on the outcomes of IT for producing business value, others focus on harmonizing the business and IT domains with their objectives, strategies and processes. These two views have affected the way in which BITA is expressed in publications. Publications that study the benefits of IT for business look at leveraging/linking (Henderson and Venkatraman, 1993), enabling (Chan et al., 1997), transforming (Luftman et al., 2000) and optimizing (Sabherwal et al., 2001) business processes. Other studies that focus on the relationship between business and IT refer to BITA as fitting (Benbya & McKelvey, 2006), integrating (Lacity et al., 1995), linking (Reich & Benbasat, 2000), matching (Chan et al., 1997), bridging (Van Der Zee and De Jong, 1999), fusion (Smaczny, 2001) and harmonizing (Chan, 2002).
Results from BITA research show that organizations that successfully align their business and IT strategies can increase their business performance (Kearns & Lederer, 2003). BITA can also support analysis of the potential role of IT in an organization by helping to identify emergent IT solutions in the marketplace that can be opportunities for changing business strategy and infrastructure (Henderson & Venkatraman, 1993). Not only researchers, but also business and IT practitioners have emphasized the importance of BITA. In the annual survey of the Society for Information Management, BITA ranked first among top management concerns from 2003 to 2009, with the exception of 2007 and 2009 in which it was second (Luftman & Ben-Zvi, 2010). Therefore, practitioners should pay special attention to BITA, and particularly to how it is achieved, assessed and maintained in organizations.

Fig. 1. Luftman's Strategic Alignment Maturity Model (SAM) (adapted from Luftman, 2000)

Different efforts have been oriented towards assessing BITA by proposing theoretical models that can be applied as supportive tools for addressing different BITA components. An extensive study by El-Mekawy et al. (2013) collected those models and their components in a comparative framework. Although Henderson and Venkatraman are seen as the founding fathers of BITA modeling (Avison et al., 2004), Luftman's model (SAM) has gained more popularity in practice (Chan & Reich, 2007). This popularity has the following motivations: a) SAM follows a bottom-up approach by setting goals, understanding the linkage between business and IT, analyzing and prioritizing gaps, evaluating success criteria, and consequently sustaining alignment; b) it presents strategic alignment as a complete, holistic process that encompasses not only establishing alignment but also maturing it by maximizing alignment enablers and minimizing inhibitors (Avison et al., 2004); c) SAM covers different BITA areas through its modularity in six criteria; and d) since its inception, SAM has been used by several researchers and in a number of industries for assessing BITA and its components. Therefore, SAM is selected for use in this study for assessing BITA
and analyzing the proposed impact on ISP. SAM classifies BITA into six criteria (Table 1) consisting of 38 attributes (Figure 1), assessed at five maturity levels: Ad Hoc, Committed, Established Focused, Managed, and Optimized Process. This classification gives a clear view of alignment and helps to spot the particular areas where an organization needs to improve in order to maximize the value of its IT investments.

Table 1. Criteria of SAM

BITA Criterion Definition and Questions Attached

Communications: Refers to a clear understanding between the business and IT communities, with effective exchange and sharing of ideas, processes and needs.
Competency/Value Measurements: Concerns demonstrating the value of IT in figures compatible with the business community's understanding, since business and IT usually apply different metrics to the value they add.
Governance: Ensures that the business and IT communities formally and periodically discuss and review their plans. Priorities are important for allocating the needed IT resources.
Partnership: Refers to the relationship between business and IT in having a shared vision of the organisation's processes, with IT as an enabler/driver of business transformation.
Scope and Architecture: Illustrates IT involvement in organisational processes and in supporting a flexible, transparent infrastructure. This facilitates applying technologies effectively and providing customised solutions that respond to customer needs.
Skills: Refers to the human resource aspects that influence (or are influenced by) change and the cultural/social environment as components of organizational effectiveness.
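As a hypothetical sketch (our own illustration, not part of SAM), an assessment can be recorded as a mapping from criterion to maturity level. Aggregating by simple average is an assumption here; SAM assessments are typically reached by evaluator consensus rather than arithmetic. The criterion scores below are those reported for Company-A in subsection 6.1:

```python
# Hypothetical sketch: recording a SAM assessment as criterion -> level (1-5).
# Averaging is our assumption; SAM normally derives an overall level from
# evaluator consensus, not arithmetic.

MATURITY_LEVELS = {1: "Ad Hoc", 2: "Committed", 3: "Established Focused",
                   4: "Managed", 5: "Optimized Process"}

def overall_maturity(assessment):
    """Average the six criterion scores and label the nearest level."""
    score = sum(assessment.values()) / len(assessment)
    return score, MATURITY_LEVELS[round(score)]

# Criterion levels as reported for Company-A in subsection 6.1:
company_a = {"Communications": 2, "Competency/Value Measurements": 3,
             "Governance": 2, "Partnership": 3,
             "Scope and Architecture": 3, "Skills": 1}
score, label = overall_maturity(company_a)
print(f"Company-A: {score:.2f} -> {label}")
```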

4 Information Security Process (ISP)

Information systems (IS) in organizations are implemented to support the business processes that enable them to achieve their business objectives. With such systems, one should consider information security as a process of answering the questions 'what is needed to protect organizational resources', 'why do resources need to be protected', and 'from whom and how' (Schwaninger, 2007). In this context, information security, given its socio-technical nature, requires both social and technical activities. The globalization of the Internet has created situations in which security problems are no longer limited to groups, organizations, or nations. With current trends in IS outsourcing and the movement towards open distributed systems, people from different organizational cultures are charged with administering security processes that need to meet the security requirements and expectations of data owners. International security standards have been made available to address part of the issue by providing standard measures. However, standards by design attempt to be context-neutral, i.e., they do not consider organizational cultures, governance, or the alignment between the business and IT domains.
ISP has traditionally been linked to three main objectives: confidentiality, integrity, and availability. However, achieving information security is not limited to these objectives; it also entails sustaining IS so that organizational objectives can be achieved despite security attacks and accidents (Saleh, 2011). One of the main problems in organizational security is that it is often viewed as an isolated island, without established bridges between security requirements and business goals. This problem is mainly attributable to financial aspects and controls in organizations, and it often results in a lack of security and financial investment in the organization's core IS. It is therefore important that security be built as a process within both the planning and design phases of IS. This includes the adaptability of the security architecture to ensure that regular and security-related tasks are deployed correctly (Amer & Hamilton, 2008). It has been emphasized that security requirements should be linked to business goals and IS through a process-oriented approach (Schwaninger, 2007). This clearly supports building up information security as a process that deals with the organization's governance, organizational culture, IT architecture and service management (Whitman & Mattord, 2003). In addition, best practice in implementing security in organizations is indicated by factors such as complying with regulatory requirements and fiduciary responsibility, measuring information security practices, and improving efficiency/effectiveness (Saleh, 2011).
Not only researchers, but also business and IT practitioners have emphasized the importance of ISP. In the annual survey of the Society for Information Management, ISP was among the top 10 management concerns from 2003 to 2009, and was the only technical issue on the 2009 list (Luftman & Ben-Zvi, 2010). Therefore, practitioners should pay special attention to how information security should be practiced as a process joined with organizational planning, design and task performance.
Research on modelling ISP has been ongoing since the introduction of computer systems to business. An early attempt at a holistic model in this area is the Security by Consensus (SBC) framework developed by Kowalski (1991) for comparing different national approaches to security. Following that, socio-technical frameworks were developed (e.g. Lee et al., 2005; Al-Hamdani, 2009) for understanding security management as a social process. Other frameworks emphasize mental models of security (e.g. Adams, 1995; Oltedal et al., 2004; Kowalski & Edwards, 2004; Barabanov & Kowalski, 2010), linking information security as a cultural process to business objectives. In this study, the Security Value Chain (SVC) developed by Kowalski & Edwards (2004) (Figure 2) is selected to analyze the impact of BITA on ISP. This choice is motivated by the SVC's established use in analyzing the different steps of the business development process, which are clearly influenced by how business and IT views are aligned. In addition, it represents patterns of security spending across its steps, visualizing how business and IT inputs intervene.

Fig. 2. Security Value Chain



The chain consists of five security access controls: deterrent, protective (preventive), detective, responsive (corrective) and recovery. These controls represent input points to the IS (Table 2) at which an action may take place to stop undesired actions on the system. AlSabbagh & Kowalski (2012) operationalized the security value chain as a social metric for modeling the security culture of IT workers and individuals at two organizations. Their research showed how the security culture of IT workers and individuals diverges given a security problem at the personal, enterprise and national levels. The research also studied the influence of available funds on security culture.

Table 2. Definitions of Security Value Chain Control Measures

Control Definition
Deter: reduces the chances of an existing vulnerability being exploited without actually reducing the exposure (e.g. the stated consequences of violating a company security policy).
Protect: prevents a security incident from occurring (e.g. access control implementations).
Detect: identifies and characterizes a security incident (e.g. a monitoring system alarm).
Respond: remediates the damage caused by a security incident (e.g. an incident response plan).
Recover: compensates for the losses incurred due to a security incident (e.g. security incident insurance).
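As a minimal sketch (an illustration, not part of the SVCM itself), the five controls can be represented as an ordered structure, and a proposed budget allocation checked for completeness. The example distribution uses the "expected current" figures later reported for Company-A in Table 3:

```python
# Minimal sketch (assumption, not the SVCM): the five access controls as an
# ordered structure, with a completeness check on a budget allocation.

SVC_CONTROLS = ("Deter", "Protect", "Detect", "Respond", "Recover")

def validate_allocation(allocation):
    """True if every control is funded and the shares sum to 100%."""
    if set(allocation) != set(SVC_CONTROLS):
        return False  # a control is missing or unknown
    return abs(sum(allocation.values()) - 100.0) < 1e-9

# Company-A's expected current distribution (Table 3):
expected = {"Deter": 10, "Protect": 30, "Detect": 25, "Respond": 20, "Recover": 15}
print(validate_allocation(expected))  # True
```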

5 BITA Impact on Information Security Process

Over the years, different studies have shown a clear impact of business objectives and performance on ISP (e.g. Huang et al., 2005; Johnson & Goetz, 2007). Other studies have focused on the impact of IT strategies, and of how IT is perceived, on ISP (e.g. von Solms and von Solms, 2004; Doherty & Fulford, 2005). As the relationship between business and IT is represented by BITA, an impact of BITA on ISP is apparent. However, it has been analyzed neither in studies of BITA nor in studies of ISP (Saleh, 2011). In this section, indications of the impact of BITA on ISP are presented. For each criterion of SAM, we describe how it influences the access controls of the security value chain. Hypothetically, we expect to find at least one reflection of each SAM criterion on an access control. With the help of SAM's attributes within each criterion, further interesting relations may be addressed.
• Communications. Based on the findings of Herath & Herath (2007), matured channels and metrics for communication between business and IT have a strong impact on how ISP is perceived in an organization. This also influences the way the organization reacts and responds to security attacks. However, as found by Huang et al. (2006), achieving complete information security is virtually impossible, because matured communications in an organization need to be further extended to include suppliers, partners and customers, which potentially increases the exposure to attacks. Therefore, matured communications in BITA are found to be associated with lower expenditure on detecting, responding and recovering, with no clear indication for deterring and protecting.
• Competency/Value Measurements. The findings of Kumar et al. (2007) indicated the importance of developed IT and business metrics to expenditure on ISP. Such metrics not only indicate risks throughout the process, but also incorporate changes in organizational aspects compared to previous results. In addition, the findings of Gordon et al. (2005) show that attacks on IS come not only from outside the organization: according to the 2005 CSI/FBI survey, the loss from 'theft of proprietary information', for example, was three times that from viruses. This indicates that developing matured business and IT metrics will reduce investment in the Detecting and Responding parts of ISP but increase expenditure on Deterring and Protecting. However, there is no clear indication regarding Recovering.
• Governance. According to the results of Johnson & Goetz (2007), the effective distribution of investment in ISP is influenced by fitting IT security into business goals and processes through the governance structure. In addition, Beautement et al. (2008) argue that misalignment in governance leads to friction between ISP and business processes within the organizational system. It is thus indicated that matured governance can reduce expenditure on detecting and responding, but increase expenditure on protecting and recovering. No indication can be highlighted for deterring.
• Partnership. According to the findings of Ogut et al. (2005), organizations with a high level of partnership have an interconnection between business and technology that supports better planning and decision making for security. According to Yee (2005), this partnership creates clear goals and trust throughout the organization and supports a faster-maturing ISP. Therefore, it can be indicated that matured partnership is associated with lower expenditure on detecting, responding and recovering, with no clear indication for deterring and protecting.
• Scope and Architecture. As found by Huang et al. (2006), complete information security cannot be achieved. Gordon and Loeb (2002) found that the optimal investment in information security does not necessarily increase with vulnerability; organizations should prioritize protecting the most significant IS. Johnson & Goetz (2007) additionally found that advancing an IT architecture with a rigid structure influences expenditure on ISP. It can thus be concluded that a matured IT architecture increases complexity, implying slower detection of and response to attacks and hence higher expenditure on them. However, a rigid and strong architecture will reduce the cost of deterring, protecting and recovering.
• Skills. Huang et al. (2005) found that the skills and experience of decision makers are important factors in information security investment. Although several researchers (e.g. Beautement et al., 2008) argue strongly that cost-benefit reasoning about ISP should include the impact of individual employees, this impact mainly relates to compliance with security policies and is thus influenced by individuals' goals, perceptions and attitudes. However, skills also influence the development level of systems and platforms and the protection of important applications. Therefore, the impact of matured skills can be indicated in reduced expenditure on protecting, detecting, responding and recovering.

6 Results and Analyses

In this section, the results and analyses of the BITA assessment are presented in subsection 6.1, followed by the analyses of the impact of BITA on ISP in subsection 6.2.

6.1 BITA in the Organizations

• Communications. In Company-A, the understanding of business by IT is characterized as higher than the understanding of IT by business. Understanding of business by IT is seen as a focused and established process, but it should be more closely tied to performance appraisals throughout the IT functions. However, the senior and mid-level business managers have a limited understanding of IT, which results in a less committed process. Overall, communications is assessed at level 2. In Company-B, understanding of business by IT is also more mature than understanding of IT by business. As an IT-related organization, its senior and mid-level IT managers have a good understanding of the business in order to achieve the targeted objectives. Knowledge sharing is limited to the strategic level. These conditions were assessed at maturity level 3.
• Competency/Value Measurements. IT metrics and processes in Company-A are perceived as primarily technical (e.g. system availability, response time) and do not relate to business goals or functions. However, the business metrics are seen as far more mature than the IT metrics, extended to value-based measures of customer contributions. The organization has formal feedback processes in place to review and act on measurement results and to assess contributions across organisational functions. Overall, the maturity is assessed at level 3. In Company-B, the IT metrics are more mature: they are extended to formally assess technical, cost-efficiency and cost-effectiveness measures (e.g. ROI, ABC), and they are followed by formal feedback processes to review and act on measurement results. The business metrics are also mature and customer-based, representing an enterprise scope. The overall maturity level is assessed at 2.
• Governance. In Company-A, both business and IT strategic planning are characterized by formal planning at the functional levels. However, this is extended further in the business domain; in the IT domain, planning is more occasional and responsive, depending on projects or the scale of involvement in the business. The overall maturity level is 2. Governance in Company-B is characterized by strategic business planning at functional units and across the enterprise with IT participation, further extended to business partners/alliances. However, strategic IT planning is less mature, without an extended enterprise view towards customers/alliances. The federated reporting system further supports an overall maturity level of 4.
• Partnership. Although there are good indications of mature alignment in Company-A, IT is perceived as a cost of doing business rather than a strategic partner. IT is involved in strategic business planning only to a limited extent, and co-adapts with business to enable/drive some projects and strategic objectives. Overall, the maturity level is assessed as 3. In Company-B, IT is perceived as having a better role; however, it is still seen as an enabler of future business activities. It is also seen as bringing value to the organization and co-adapting with business to enable/drive strategic objectives. These conditions indicate maturity level 4.
• Scope and Architecture. In both Company-A and Company-B, IT is considered a catalyst for changes in the business strategy, with a mature IT architecture. In addition, IT standards are defined and enforced at the functional unit level, with emerging coordination across functional units. Although the standards are integrated across the organisation, they are not extended to include customer and supplier perspectives, which results in a maturity level of 3.
• Skills. In Company-A, the environment is characterized as innovative and encouraging, especially at the functional units. However, it offers only initial, technical training and few rewards. Career crossover is limited to the strategic levels, and the environment is dominated by top business managers, who hold more of the locus of power than IT managers. The overall maturity level is thus assessed as 1. In Company-B, innovation is strongly motivated, especially at the functional units, with cross-training but limited change readiness. Top business management dominates and holds the locus of power over IT management. Career crossover is extended, but only to senior management and the functional units. The overall maturity is assessed at level 3.

6.2 BITA Impact on ISP

• Company-A. The interviews show a potential impact of BITA maturity on ISP. For instance, while business perceives IT as a cost, senior and mid-level business managers have a limited understanding of IT, and business seems not to care about security spending: the budget is allocated with no questions asked and no awareness of how effectively it is used. This is also reflected in the fact that the IT metrics are primarily technical. The BITA maturity level appears to be a focused and managed process. There is a formal feedback process for reviewing and improving measurement results, and both business and IT conduct formal strategic planning across the organisation, though not extended to partners/alliances. The interviews also made clear that there is no awareness of the need for having the five types of security access controls; one of the interviewees even required support to produce figures showing how spending is distributed across the five controls.

Table 3. Ideal and Expected Security Value Chain in Company-A based on Collected Data

Security Access Control         Deter  Protect  Detect  Correct  Recover
Ideal Budget Distribution (%)     5      40       35      15        5
Expected Current (%)             10      30       25      20       15

• Company-B. The interviews revealed a potential impact of BITA maturity on ISP. The current SVC distribution almost matches what would be seen as ideal, owing to the optimized levels of the BITA Value Measurements and Governance criteria. The limited business understanding of the importance of implementing deterring controls is apparent. However, there is potential support and motivation for developing security policies that would state the consequences of misconduct and accountability when security is violated; more than 10% of the security budget is allocated to such deterring controls. The same problem is observed regarding the implementation of recovery controls. Because business does not understand why IT needs active support licenses for its applications, it decided not to renew any license, even though IT knows that having such support available is vital for providing a means of recovery from potential issues; business considered active support licenses an extra cost that is not used most of the time. The limited maturity in Communications and Skills has also resulted in more severe issues related to human resourcing: business is not allocating enough funds for hiring senior security consultants who could improve the organization's security position. Business perceives IT as an enabler of business objectives and change, but with insufficient turnover; this perception has resulted in budget constraints for IT and difficulties in approving its budget.

Table 4. Ideal and Current Security Value Chain in Company-B based on Collected Data

Security Access Control         Deter  Protect  Detect  Correct  Recover
Ideal Budget Distribution (%)    12      23       23      20       22
Expected Current (%)             10      25       25      18       22
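The closeness of the distributions in Tables 3 and 4 can be quantified with a simple gap measure: half the sum of absolute percentage-point differences. The metric itself is our illustration, not part of the study; the figures are taken from the two tables:

```python
# Gap between ideal and expected budget distributions (figures from Tables 3
# and 4); the gap metric itself is an illustrative choice, not the paper's.

CONTROLS = ("Deter", "Protect", "Detect", "Correct", "Recover")

def distribution_gap(ideal, expected):
    """Half the sum of absolute differences, in percentage points."""
    return sum(abs(ideal[c] - expected[c]) for c in CONTROLS) / 2

gap_a = distribution_gap(
    {"Deter": 5, "Protect": 40, "Detect": 35, "Correct": 15, "Recover": 5},
    {"Deter": 10, "Protect": 30, "Detect": 25, "Correct": 20, "Recover": 15})
gap_b = distribution_gap(
    {"Deter": 12, "Protect": 23, "Detect": 23, "Correct": 20, "Recover": 22},
    {"Deter": 10, "Protect": 25, "Detect": 25, "Correct": 18, "Recover": 22})
print(gap_a, gap_b)  # → 20.0 4.0
```

The much smaller gap for Company-B (4 points vs. 20) is consistent with the observation that its current SVC distribution almost matches the ideal.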

7 Conclusions and Future Work


In this paper, the potential impact of BITA maturity on ISP was explored in two organisations based on SAM and the SVCM respectively. The study revealed correlations between BITA maturity levels and the existing security process. For instance, a lack of Communications maturity between business and IT had a significant impact on security culture: when business management had a limited understanding of IT, this correlated with difficulties in approving IT security budgets, including the human resourcing required to hire security consultants. This lack of communication also had a negative impact on implementing the Deterrent controls desired by the IT department. It was further observed that limited business participation in IT strategic planning (i.e. Governance) correlated with limited business understanding of why Recovery security controls are needed, which in turn had a negative impact on implementing Recovery controls.
Immature alignment in Value Measurements and Partnership was found to lead to an immature security culture. For instance, when IT uses only technical metrics with no business considerations, it is perceived as a cost to the business. This leads to a lack of security awareness, where business neither has an interest in knowing about security spending or its performance nor is aware of them. Optimized levels of the BITA Value Measurements and Governance criteria correlated with increased security awareness and perceived importance of security on the business side, and thus raised interest in requirements related to IT security, resulting in immediate approval of IT security budgets. This situation has enabled IT managers to implement the SVC they believe to be ideal.
Suggested future work would be to study more case organisations in order to confirm whether the findings lead to the same results as in this paper.

36 M. El Mekawy, B. AlSabbagh, and S. Kowalski

Examining Significant Factors and Risks Affecting
the Willingness to Adopt a Cloud-Based CRM

Nga Le Thi Quynh1, Jon Heales2, and Dongming Xu2

1 Faculty of Business Information Systems,
University of Economics HoChiMinh City, Vietnam
2 UQ Business School, The University of Queensland, Australia
[email protected]

Abstract. Given the advantages of and significant impact that cloud-based
CRMs have had on achieving a competitive edge, they are becoming the primary
choice for many organizations. However, due to growing concerns around
cloud computing, cloud services might not be adopted with as much alacrity as
was expected. A variety of factors may affect the willingness to adopt a cloud-based
CRM. The purpose of this study, therefore, is to explore the factors that
influence the adoption of a cloud-based CRM in SMEs, from the perspectives
of client organizations and users. We then propose a research model,
grounded in the Resource-Based View (RBV) framework, the Technology
Acceptance Model (TAM2), and Risk and Trust Theories, and recommend a
research methodology. We offer recommendations for practitioners
and cloud service providers to effectively assist in the adoption of cloud-based
CRMs in organizations.

Keywords: cloud computing, CRM, adoption, TAM, risks, trust.

1 Introduction

Although Cloud Computing has been undergoing rapid evolution and advancement, it
is still an emerging and complex technology [1], and our understanding of, and regulatory
guidance related to, cloud computing is still limited [2]. These limitations raise
significant concerns about the security, privacy, performance, and trustworthiness of
cloud-based applications [3, 4]. While the cloud offers a number of advantages, until
some of the risks are better understood and controlled, cloud services might not be
adopted with as much alacrity as was expected [5].
Although there are studies investigating the implementation of CRM systems
[6, 7], there is a lack of research on adopting cloud-based CRMs. To successfully
adopt and implement a cloud-based CRM, client organizations need to understand
cloud computing and its characteristics, and need to take into account the
risks involved when deciding to migrate their applications to the cloud. Cloud
service providers also need to enhance their understanding of client users' behavior,
such as how they act and what factors affect their choices, in order to increase the
rate of adoption.

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 37–48, 2014.
© Springer International Publishing Switzerland 2014

Having an understanding of client users' behavior during the examination phase,
before a full adoption decision is made, will help cloud service providers better
address potential users' concerns.

2 Literature Review

This study explores the roles of Risks relating to Tangible Resources, Intangible
Resources, and Human Resources; perceived usefulness; perceived ease of use;
subjective norm; and Trust in the adoption of Cloud-Based CRMs. The study is
informed by the Resource-Based View Framework, Risk and Trust Theories, and the
Technology Acceptance Model (TAM2).

2.1 Cloud Computing

We adopt the Efraim, Linda [8] view of Cloud Computing as the general term for
infrastructures that use the Internet and private networks to access, share, and deliver
computing resources with minimal management effort or service provider interaction.
In the cloud context, users pay for services as an operating expense instead of an
upfront capital investment [9].
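The operating-expense point can be made concrete with a toy cost comparison; all figures below are hypothetical illustrations, not drawn from the paper:

```python
# Hypothetical 3-year cost comparison: on-premise CRM (upfront capital plus
# yearly maintenance) versus a pay-per-use cloud subscription.
def on_premise_cost(upfront, yearly_maintenance, years):
    # Capital expenditure paid once, maintenance paid every year.
    return upfront + yearly_maintenance * years

def cloud_cost(per_user_per_month, users, years):
    # Pure operating expense: subscription fee accrues monthly per user.
    return per_user_per_month * users * 12 * years

print(on_premise_cost(50_000, 5_000, 3))  # 65000
print(cloud_cost(25, 40, 3))              # 36000
```

The cloud option shifts the same capability from a large upfront commitment to a stream of smaller payments, which is exactly the appeal for SMEs noted below.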
Cloud computing provides several advantages, including cost reduction [4, 9],
organizational agility, and often competitive advantage [10, 11]. However, there is
considerable uncertainty and skepticism around the cloud that stakeholders in cloud
computing (e.g. providers, consumers, and regulators) should take into account,
including gaps in cloud capabilities, security, and audit and control risks. The next
sections examine these risks more thoroughly.

2.2 Customer Relationship Management (CRM) and Cloud-Based CRMs

Efraim, Linda [8 pg. 324] define CRM as the methodologies and software tools that
automate marketing, selling, and customer service functions to manage interactions
between an organization and its customers, and to leverage customer insights to
acquire new customers, build greater customer loyalty, and increase profits.
One of the biggest benefits of a cloud-based CRM is that it is easily accessible via
mobile devices from any location, at any time [8 pg. 328]. In addition, a cloud-based
CRM allows enterprises, especially Small and Medium Enterprises (SMEs), not only
to achieve cost benefits through pay-per-use pricing, without a large upfront
investment, but also to match their larger rivals in effectively managing and
enhancing customer relationship processes.

2.3 Technology Acceptance Model (TAM)


Building on the Theory of Reasoned Action (TRA) [12], TAM [13] has been widely
utilized for analyzing and explaining a user's intention to adopt an information
system.

The original TAM model does not incorporate the effect of the social environment
on behavioral intention. Therefore, we apply TAM2 [14], which hypothesizes per-
ceived usefulness, perceived ease of use, and subjective norm as the determinants of
Usage Intention, to our conceptual research model.
We apply TAM2 to our theoretical foundation and define the constructs as follows:
Perceived usefulness, for the purpose of this paper, is defined as the degree to
which an individual believes that using a cloud-based CRM would improve his or her
job performance. Seven capabilities of cloud computing, namely controlled interfaces,
location independence, sourcing independence, ubiquitous access, virtual business
environments, addressability and traceability, and rapid elasticity [10], enable users to
access the application and internal and external resources over the Internet easily and
seamlessly. This makes cloud-based CRMs advantageous to client organizations.
Perceived ease of use of cloud-based CRMs refers to the extent to which a user
believes that using a cloud-based application would be free of effort.
Because one characteristic of cloud-based applications is the ease of switching
between service providers, the more readily users can apply the application and its
functions in their daily operations during the trial period, without investing substantial
effort in learning how to use it, the more likely they are to be willing to adopt the
application.
Subjective norm, for the purpose of this paper, is the degree to which an individual
perceives that others believe he/she should use a specific cloud-based CRM. The
advantage of virtual communities and social networks is that they allow users to share
and exchange ideas and opinions within communities. An individual's behavior will
be reinforced by the multiple neighbors in the social network who provide positive
feedback and ratings [15]. In particular, when subscribing to a new application or
purchasing a product, users tend to evaluate it by examining the reviews of others
[16]. The following propositions follow:
P1: Perceived Usefulness will positively affect the Willingness to Adopt Cloud
Based CRMs.
P2a: Perceptions of Cloud-based CRM Ease of Use will positively affect Perceived
Usefulness.
P2b: Perceptions of Cloud-based CRM Ease of Use will positively affect the
Willingness to Adopt Cloud Based CRMs.
P3: Subjective Norm will positively affect the Willingness to Adopt Cloud Based
CRMs.

2.4 Trust

Trust has been regarded as the heart of relationships of all kinds [17] and a primary
enabler of economic partnerships [18]. Building trust is particularly important when
an activity involves uncertainty and risk [19]. In the context of cloud computing,
uncertainty and risk are typically high because of the lack of standards and
regulations and the complexity of the technology [1, 9]. This leads to a significant
concern for enterprises about trust in cloud-based applications [20].

Antecedents of Trust
Prior research on Trust has proposed a number of trust antecedents: knowledge-based
trust, institution-based trust, calculative-based trust, cognition-based trust and perso-
nality-based trust [for more details, see 21].
We consider that this initial formation of trust would directly affect the
organization's willingness to adopt.
Personality-based trust – Personal perception: is formed from the belief that
others are reliable and well-meaning [22], resulting in a general tendency to believe
others and so trust them [23]. This disposition is especially important for new
organizational relationships, where client users are inexperienced with service
providers [24].
Cognition-based trust – Perception of reputation: is built on first impressions rather
than experiential personal interactions [23]. In the context of cloud-based CRMs, to
assess the trustworthiness of cloud service providers, client organizations tend to
base their evaluation on secondhand information such as the provider's reputation.
The reputation of providers is also particularly important when considering cloud
adoption and implementation [25].
Institution-based trust – Perception of structural assurance: is formed from safety
nets such as regulations, guarantees, and legal recourse [26].
A service-level agreement (SLA) is a negotiated contract between a cloud service
provider and a client organization. Cloud service providers use SLAs to boost the
consumer's trust by issuing guarantees on service delivery.
Knowledge-based trust: is formed and developed over time through interaction
between participants [21, 27]. This type of trust might be absent at the first meeting
between service provider and client organization. However, during the trial period,
interaction and communication between the parties will affect their level of trust in
each other, thus improving their behavioral intention to continue adopting the
application.
Based on our argument above, and because we are using already-validated
measures of trust, we make the following complex proposition:
P4: Personal Perception, Perception of Reputation of a cloud-based CRM provider,
Perception of Structural Assurances built into a cloud-based CRM, and
Knowledge-based Trust will positively affect Trust in a cloud-based CRM provider.

Consequences of Trust
A heightened level of trust, as a specific belief in a service provider, is associated
with a heightened willingness to use services supplied by that provider. Cloud
computing is still in its infancy [28] and involves a certain level of technological
complexity [29] and immaturity of standards, regulations, and SLAs; thus we propose:
P5: Trust in a Cloud-based CRM Provider will positively affect the Willingness to
Adopt a Cloud-based CRM.
Trust in a cloud service provider implies the belief that the provider will deliver
accurate, high-quality services as expected. Users are unlikely to accept unexpected
system or network failures, or substandard service performance. Therefore, a service
provider's guarantees, through SLAs and other elements such as the provider's
reputation or customer service during the trial period, would bolster users'
confidence. Such guarantees increase the likelihood that the CRM application will
improve users' performance in managing customer relationships. Conversely,
adopting an application from an untrustworthy service provider might result in
reduced usefulness. Based on this, we propose that:
P6: Trust in a Cloud-based CRM Provider will positively affect the Perceived
Usefulness of Cloud-based CRMs.

2.5 Theory of Resource Based View (RBV) as a Framework Foundation for
Risk Assessment

The RBV explains the role of resources in firm performance and competitive advan-
tage [30]. Barney [30] went on to show that to achieve sustained competitive advan-
tage, resources must be “valuable, rare, difficult to imitate, and non-substitutable”.
Putting the RBV in the context of cloud computing, a number of organizational
resources can affect the competitiveness and performance of firms. First, by accessing
current infrastructures and using complementary capabilities from cloud providers,
clients can focus on internal capabilities and core competencies to achieve
competitive advantage [11]. Second, one characteristic of cloud-based applications is
the ease of switching between service providers, and the number of options for
customers has increased over time. Customers tend to seek quality products, and if
service providers cannot ensure the necessary resources and capabilities, they might
lose their current and potential customers to their competitors.
Therefore, the more uncertainty affects the effectiveness of a firm's resources, the
less likely the firm is to achieve good performance and competitive advantage.

Salient Risks Relating to Tangible Resources in Cloud-Based CRM Adoption

Data-Related Risks


Migrating to the cloud means that enterprise data is stored outside the enterprise
boundary, at the cloud service provider's end, and the client organization entrusts the
confidentiality and integrity of its data to the cloud service provider. This raises
concerns about whether the provider offers an adequate level of security to ensure
data security and prevent breaches caused by vulnerabilities in the application or the
provider's environment, or by malicious users [29, 31]. Currently, many organizations
are only willing to place noncritical applications and general data in the cloud [32].
According to an InformationWeek report [33], of those respondents using, planning to
use, or considering public cloud services, 39% say they do not/will not allow their
sensitive data to reside in the cloud, and 31% say they do not/will not run any
mission-critical applications in the cloud.
In addition, for CRMs to provide fast response and efficient processing services
for customers, data are retrieved from multiple sources via Customer Data Integration
(CDI). Dealing with data changes and data glitches in verification, validation,
de-duplication, and merging processes also poses significant challenges for service
providers [34].
However, trust in a cloud service provider, resulting from the provider's reputation
and structural assurance (e.g. SLAs), can to some extent lessen the fear of incidents
and risks related to data security and privacy. In the cloud context, cloud users
face insecure application programming interfaces (APIs), malicious insiders, data
breaches, data loss, and account hijacking [4, 31]. In addition, the cloud provider may
be perceived to have too much power to view, and potentially abuse, sensitive
customer data. Therefore, a provider with a good reputation and sufficient security
mechanisms will provide confidence that customer data will be stored and protected
against illegal access, and will thereby increase the likelihood of adopting the
cloud-based application.
Based on our argument above, we make the following propositions:
P7a: The Data-Related Risks will negatively affect the Willingness to Adopt Cloud
Based CRMs.
P7b: Trust moderates the relationship between Data-Related Risks and the Will-
ingness to Adopt Cloud Based CRMs.

Economic Risks
With a cloud-based application, business risk is decreased by the lower upfront
investment in IT infrastructure [3], although there is still uncertainty about hidden
risks during the time customers use the application. For example, to maximize the
capabilities of an application, customers may have to pay more for the advanced
version [35]. The more reliable and specialized the hardware, software, and services
offered, the higher the price service providers will set [36].
Furthermore, as medium and large enterprises migrate enterprise applications such
as CRMs and ERPs to cloud-based environments, the cost of transferring
organizational data is likely to increase, especially if the organization applies the
hybrid cloud deployment model, where data are stored in distinct cloud
infrastructures (e.g. private, community, and public) [37]. Thus:
P8: The Economic Risks will negatively affect the Willingness to Adopt Cloud
Based CRMs.

IT Infrastructure risks
IT Infrastructure risks are the possibility that the service provider may not deliver the
expected level of infrastructure; that is, the network infrastructure may not provide
the expected speed or reliability. One positive characteristic of cloud computing is
rapid elasticity, which enables scaling service usage up or down based on
virtualization technology [11]. However, risks such as unpredictable virtual-machine
performance, frequent system outages, and connectivity problems can affect all of a
provider's customers at once, with significant negative impacts on their business
operations [4].
IT infrastructure risks also include the risk of problems in integrating cloud-based
applications with internal systems. The perceived IT infrastructure risks mentioned
above are likely to lead users to perceive that the CRM might not perform as
smoothly and seamlessly as expected. Thus:
P9: The IT Infrastructure Risks will negatively affect the Perceived Cloud-based
CRM Usefulness.

Salient Risks Relating to Human Resources in Cloud-Based CRM Adoption

Technical skill risks


Technical skill risks are the possibility that a lack of knowledge about cloud
computing and CRM, and of competence in emerging technologies, will negatively
affect the ability to successfully implement cloud-based CRMs.
To deal effectively with the complexities and uncertainties associated with new
technologies like cloud computing, and to ensure the smooth adoption and operation
of cloud-based applications, organizations require qualified employees. A lack of
professional knowledge about cloud computing and information systems among
members participating in the cloud-based CRM deployment would create hurdles that
slow the adoption process [38]. As a result, client users might need to spend more
time and effort learning how to use the application. Thus:
P10: Lower levels of Technical Skill will negatively affect Perceived Ease of Use of
the Cloud Based CRMs.

Managerial risks
From a psychosocial view, IT executives might be conscious of negative
consequences from adopting cloud-based applications [35]. The likelihood of
successfully implementing a new system largely depends on good project
management and leadership skills [39], and on effective coordination and interaction
with stakeholders [38]. Because cloud-based CRMs involve business process changes,
integration of the new system into the existing IT infrastructure, and exploitation of
new technologies, technological and organization-specific knowledge of how to
implement cloud solutions is necessary to operate business transactions and achieve
business objectives [39].
Managerial risk might be reduced if there is a strong belief in the cloud service
provider. Trust can bolster executives' optimism about desirable consequences
[21, 23]; as a result, they might be willing to adopt a cloud-based application when
they trust the service provider. We propose that managerial risk will affect the
willingness to adopt cloud-based CRMs, and that this relationship is moderated by
Trust in a cloud-based CRM provider.
P11a: The Managerial Risks will negatively affect the Willingness to Adopt Cloud
Based CRMs.
P11b: Trust moderates the relationship between Managerial Risks and the Will-
ingness to Adopt Cloud Based CRMs.

Salient Risks Relating to Intangible Resources in Cloud-Based CRM Adoption

Strategic risk
Strategic risks include the risks that cloud-based CRM clients might be heavily
dependent on the service providers and their applications. The cloud-based CRM
applications may not be flexible enough to respond to changes in their business
strategies and thus ensure alignment between IT and business strategies [35].
A high degree of dependence on a cloud provider may also cause vendor lock-in
and business continuity issues [4, 31].
However, trust in a cloud provider, resulting from the provider's reputation and
structural assurance (e.g. SLAs), can to some extent lessen this fear. When the
provider issues guarantees about data ownership, disaster recovery plans, and
standards, and assures that regulations are followed, the level of trust is raised. A
provider with a strong reputation can also give the impression that it is able to sustain
superior profit outcomes [40]. Thus:
P12a: The Strategic Risks will negatively affect the Willingness to Adopt Cloud
Based CRMs.
P12b: Trust moderates the relationship between Strategic Risks and the
Willingness to Adopt Cloud Based CRMs.

Audit risk
Audit risk is the probability that there will be material misstatements in the client
organization's financial statements. This can result from a lack of internal control
and governance, ambiguous agreement on data ownership, and/or immature
regulations and standards for cloud computing.
SAS No. 107 [41] categorizes audit risk into three components: inherent risk,
control risk, and detection risk. Inherent risk is the possibility that a material
misstatement in the client's financial statements will occur in the absence of
appropriate internal control procedures. Control risk is the risk that a material
misstatement will not be detected and corrected by management's internal control
procedures. Detection risk is the risk that the auditor will not detect a material
misstatement. Cloud computing places an increased burden on the auditor [2], and a
lack of understanding of the technical and business aspects of cloud computing, as
well as of its associated risks, might lead to an increase in detection risk.
These risks can reduce Trust in cloud service providers if providers do not issue
appropriate SLAs that specify their responsibilities for services and data ownership
and the regulations and standards they will follow. Thus:
P13: Increasing levels of Audit Risk will negatively affect Trust in a cloud-based
CRM provider.
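The three components of audit risk are conventionally combined multiplicatively; this standard textbook formulation (an addition here, not quoted from SAS No. 107) makes the argument above concrete:

\[
AR = IR \times CR \times DR
\]

For a fixed acceptable level of audit risk \(AR\), higher inherent risk \(IR\) or control risk \(CR\), as in an immature cloud environment, forces the auditor to plan for a lower detection risk, \(DR = AR / (IR \times CR)\), i.e. to perform more substantive testing.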

Performance Functionality Risks


Marketing research suggests that the reasons for CRM implementation are to boost
the organization's ability to communicate with customers, to learn about customer
preferences in a timely manner, to respond quickly to customers, and to analyse
customer insights [42]. Putting these requirements in the context of cloud computing,
there are risks that the service provider will not be able to ensure seamless
interoperability with home-grown applications [35], or with other on-demand
applications on the same or different cloud platforms [37].
These risks can lead to a user's perception that he/she cannot perform his/her job
well when using a cloud-based CRM. Thus:
P14: The Performance-Related Risks will negatively affect the Perceived
Usefulness of Cloud Based CRMs.

3 Model of Cloud-Based CRM Adoption

Following the review presented in the previous section, we propose the research
model depicted in Figure 1.

4 Research Method

4.1 Conduct the Research

We seek to gather data from individual users who have undertaken a trial and
examination phase of a cloud-based CRM before deciding whether to fully adopt it.
To test this model, we consider a survey-based approach the most appropriate
[see 43]. The following steps need to be taken:

1. We adopt measures from the literature for each of the constructs in the model and
operationalize them so that they can be used to gather the required data.
2. A preliminary web analysis of the constructs was performed to validate the
measures developed in the model. We collected user comments on three cloud-based
CRM applications, namely Salesforce.com, Insightly, and Zoho CRM, from the
Apple App Store, Google Apps Marketplace, Google Play, and BlackBerry World.
1,579 comments were collected from users who were considering trialling, or were
trialling, the applications.
3. Based on the analysis of the preliminary data, we ensure all comments can be
categorised by the constructs in our final questionnaire.
4. A large-scale survey would then be conducted to test our model of factors and risks
involved in the adoption of a cloud-based CRM.
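Step 3, categorising free-text comments by construct, can be sketched as a simple keyword tagger; the construct names and keyword lists below are illustrative assumptions, not the coding scheme actually used in the study:

```python
# Illustrative sketch: tag app-store comments with the model's constructs via
# keyword matching. Keyword lists are hypothetical examples only.
CONSTRUCT_KEYWORDS = {
    "perceived_usefulness": ["useful", "productive", "helps me", "saves time"],
    "perceived_ease_of_use": ["easy", "intuitive", "simple", "learning curve"],
    "trust": ["trust", "reliable", "reputation", "guarantee"],
    "data_related_risk": ["privacy", "security", "data loss", "breach"],
}

def categorise(comment: str) -> list[str]:
    """Return every construct whose keywords appear in the comment."""
    text = comment.lower()
    return [construct
            for construct, keywords in CONSTRUCT_KEYWORDS.items()
            if any(kw in text for kw in keywords)]

print(categorise("Easy to use, but I worry about privacy of my data"))
# ['perceived_ease_of_use', 'data_related_risk']
```

In practice such coding would be done (or at least verified) by human raters; the sketch only shows how comments map onto constructs for the questionnaire check.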

4.2 Questionnaire Development and Measures


The pre-validated questionnaire items were obtained from previous research on CRM,
cloud computing, trust, risks, and TAM2. All items specified a seven-level Likert
scale, expressed in linguistic terms: strongly disagree, moderately disagree, somewhat
disagree, neutral (neither disagree nor agree), somewhat agree, moderately agree, and
strongly agree.
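For analysis, such seven-point Likert responses are typically coded as integers 1 to 7; a minimal sketch using the label set quoted above:

```python
# Map the seven linguistic Likert labels to integer scores 1..7 for analysis.
LIKERT = {
    "strongly disagree": 1,
    "moderately disagree": 2,
    "somewhat disagree": 3,
    "neutral": 4,
    "somewhat agree": 5,
    "moderately agree": 6,
    "strongly agree": 7,
}

def score(responses):
    """Convert a list of linguistic responses to numeric scores."""
    return [LIKERT[r.lower()] for r in responses]

print(score(["Strongly agree", "Neutral", "Somewhat disagree"]))  # [7, 4, 3]
```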

5 Analysis of the Findings

This will be presented and discussed at the conference.

6 Implications Drawn from Analysis

This will be presented and discussed at the conference.



7 Conclusions and Limitations

This paper presents the factors and risks involved in the adoption of a cloud-based
CRM. These factors and risks were derived from the analysis of research conducted
into the adoption of information technology and systems, cloud computing, trust, and
audit risk. From this research foundation a model was developed and presented.
This research will provide more insight into client user behaviour toward
the adoption of a cloud-based CRM. This study also offers several practical implications.
First, perceptions of risk may together inhibit cloud-based CRM adoption. It is
recommended that cloud service providers develop appropriate strategies to counter
these concerns. For example, effective risk-mitigation strategies may include strong
guarantees, better transparency and more consumer control of data and processes.
Client users may be more willing to overlook the perceived risks if they know what is
happening with their application and data, and they are confident that the service
provider is trustworthy and can perform efficiently to ensure the system runs smoothly.
Second, our study suggests that cloud-based CRM adoption depends heavily on
perceived usefulness, perceived ease of use, and a trusting belief in the cloud service
provider. By acting in a competent and honest manner, a cloud service provider can
maintain high trust, resulting in client organizations' willingness to adopt its
cloud-based CRM and in the retention of its users.
Future studies may include other aspects that might influence adoption, such as
organizational characteristics (e.g. firm size, organizational strategies, maturity of
current information systems), industry characteristics (e.g. competitive intensity),
and personal characteristics (e.g. gender, age, experience).

References
1. Blaskovich, J., Mintchik, N.: Information Technology Outsourcing: A Taxonomy of Prior
Studies and Directions for Future Research. Journal of Information Systems 25(1), 1–36
(2011)
2. Alali, F.A., Chia-Lun, Y.: Cloud Computing: Overview and Risk Analysis. Journal of
Information Systems 26(2), 13–33 (2012)
3. Pearson, S.: Privacy, Security and Trust in Cloud Computing. HP Technical
Reports (2012)
4. Armbrust, M., et al.: A View of Cloud Computing. Communications of the ACM 53(4),
50–58 (2010)
5. Youseff, L., Butrico, M., Da Silva, D.: Toward a unified ontology of cloud computing.
In: Grid Computing Environments Workshop, GCE 2008. IEEE (2008)
6. Kim, H.-S., Kim, Y.-G., Park, C.-W.: Integration of firm’s resource and capability to
implement enterprise CRM: A case study of a retail bank in Korea. Decision Support
Systems 48(2), 313–322 (2010)
7. Avlonitis, G.J., Panagopoulos, N.G.: Antecedents and consequences of CRM technology
acceptance in the sales force. Industrial Marketing Management 34(4), 355–368 (2005)
8. Turban, E., Volonino, L., Wood, G.: Information Technology for Management, 9th edn.
(2013)
Significant Factors and Risks Affecting the Willingness to Adopt a Cloud–Based CRM 47

9. Marston, S., et al.: Cloud computing — The business perspective. Decision Support
Systems 51(1), 176–189 (2011)
10. Iyer, B., Henderson, J.C.: Preparing for the Future: Understanding the Seven Capabilities
of Cloud Computing. MIS Quarterly Executive 9(2), 117–131 (2010)
11. Iyer, B., Henderson, J.C.: Business value from Clouds: Learning from Users. MIS Quarter-
ly Executive 11(1), 51–60 (2012)
12. Fishbein, M., Ajzen, I.: Belief, attitude, intention and behavior: An introduction to theory
and research (1975)
13. Davis Jr., F.D.: A technology acceptance model for empirically testing new end-user in-
formation systems: Theory and results. Massachusetts Institute of Technology (1986)
14. Venkatesh, V., Davis, F.D.: A Theoretical Extension of the Technology Acceptance Mod-
el: Four Longitudinal Field Studies. Management Science 46(2), 186–204 (2000)
15. Centola, D.: The Spread of Behavior in an Online Social Network Experiment.
Science 329(5996), 1194–1197 (2010)
16. Park, D.-H., Lee, J., Han, I.: The Effect of On-Line Consumer Reviews on Consumer
Purchasing Intention: The Moderating Role of Involvement. International Journal of Elec-
tronic Commerce 11(4), 125–148 (2007)
17. Morgan, R.M., Hunt, S.D.: The Commitment-Trust Theory of Relationship Marketing.
Journal of Marketing 58(3), 20–38 (1994)
18. Gefen, D.: What Makes an ERP Implementation Relationship Worthwhile: Linking Trust
Mechanisms and ERP Usefulness. Journal of Management Information Systems 21(1),
263–288 (2004)
19. Luhmann, N.: Familiarity, confidence, trust: Problems and alternatives. Trust: Making and
Breaking Cooperative Relations 6, 94–107 (2000)
20. Huang, J., Nicol, D.: Trust mechanisms for cloud computing. Journal of Cloud Compu-
ting 2(1), 1–14 (2013)
21. Gefen, D., Karahanna, E., Straub, D.W.: Trust and TAM in Online Shopping: An
Integrated Model. MIS Quarterly 27(1), 51–90 (2003)
22. Wrightsman, L.S.: Interpersonal trust and attitudes toward human nature. Measures of
Personality and Social Psychological Attitudes 1, 373–412 (1991)
23. McKnight, D.H., Cummings, L.L., Chervany, N.L.: Initial Trust Formation in New Orga-
nizational Relationships. The Academy of Management Review 23(3), 473–490 (1998)
24. Gefen, D.: E-commerce: the role of familiarity and trust. Omega 28(6), 725–737 (2000)
25. Koehler, P., et al.: Cloud Services from a Consumer Perspective. In: AMCIS. Citeseer
(2010)
26. Sitkin, S.B.: On the positive effects of legalization on trust. Research on Negotiation in
Organizations 5, 185–218 (1995)
27. Holmes, J.G.: Trust and the appraisal process in close relationships (1991)
28. Misra, S.C., Mondal, A.: Identification of a company’s suitability for the adoption of cloud
computing and modelling its corresponding Return on Investment. Mathematical and
Computer Modelling 53(3-4), 504–521 (2011)
29. Subashini, S., Kavitha, V.: A survey on security issues in service delivery models of cloud
computing. Journal of Network and Computer Applications 34(1), 1–11 (2011)
30. Barney, J.: Firm Resources and Sustained Competitive Advantage. Journal of Manage-
ment 17(1), 99 (1991)
31. Nicolaou, C.A., Nicolaou, A.I., Nicolaou, G.D.: Auditing in the Cloud: Challenges and
Opportunities. CPA Journal 82(1), 66–70 (2012)
32. Barwick, H.: Cloud computing still a security concern: CIOs, September 17-20 (2013),
http://www.cio.com.au/article/526676/
cloud_computing_still_security_concern_cios/?fp=16&fpid=1
33. Emison, J.M.: 9 vital questions on moving Apps to the Cloud, in InformationWeek Reports
(2012)
34. Buttle, F.: Customer relationship management. Routledge
35. Benlian, A., Hess, T.: Opportunities and risks of software-as-a-service: Findings from a
survey of IT executives. Decision Support Systems 52(1), 232–246 (2011)
36. Durkee, D.: Why cloud computing will never be free. Commun. ACM 53(5), 62–69 (2010)
37. Dillon, T., Wu, C., Chang, E.: Cloud computing: Issues and challenges. In: 2010 24th
IEEE International Conference on Advanced Information Networking and Applications
(AINA). IEEE (2010)
38. Finnegan, D.J., Currie, W.L.: A multi-layered approach to CRM implementation: An inte-
gration perspective. European Management Journal 28(2), 153–167 (2010)
39. Garrison, G., Kim, S., Wakefield, R.L.: Success Factors for Deploying Cloud Computing.
Communications of the ACM 55(9), 62–68 (2012)
40. Roberts, P.W., Dowling, G.R.: Corporate Reputation and Sustained Superior Financial
Performance. Strategic Management Journal 23(12), 1077–1093 (2002)
41. AICPA, Audit Risk and Materiality in Conducting an Audit. Statement on Auditing Stan-
dards No.107, AICPA (2006)
42. Sun, B.: Technology Innovation and Implications for Customer Relationship Management.
Marketing Science 25(6), 594–597 (2006)
43. Yin, R.K.: Case study research: Design and methods, vol. 5. Sage (2003)
Towards Public Health Dashboard Design Guidelines

Bettina Lechner and Ann Fruhling

School of Interdisciplinary Informatics, University of Nebraska at Omaha,


Omaha NE 68182, USA
{blechner,afruhling}@unomaha.edu

Abstract. Ongoing surveillance of disease outbreaks is important for public
health officials, who need to consult with laboratory technicians to identify
specimens and coordinate care for affected populations. One way for public
health officials to monitor possible outbreaks is through digital dashboards of
summarized public health data. This study examines best practices for design-
ing public health dashboards and proposes an optimized interface for an emer-
gency response system for state public health laboratories. The practical nature
of this research shows how general dashboard guidelines can be used to design
a specialized dashboard for a public health emergency response information
system. Through our analysis and design process, we identified two new guide-
lines for consideration.

Keywords: Medical information system, dashboard interface design, disease
surveillance, public health.

1 Introduction

Public health crises such as the recent Listeria outbreaks or the 2009 influenza pan-
demic require the immediate attention of public health directors and practitioners who
coordinate diagnosis and care for affected populations. Continual monitoring of the
public health environment allows for faster response and may reduce the impact of
such emergencies. To address this need, digital dashboards have been shown to be an
effective means to quickly assess and communicate the situation. Often these dash-
boards include computerized interactive tools that are typically used by managers to
visually ascertain the status of their organization (in this case, the public health envi-
ronment) via key performance indicators (Cheng et al., 2011). Dashboards allow users
to monitor one or more systems at a glance by integrating them and summarizing key
metrics in real time to support decision making (Kintz, 2012; Morgan et al., 2008). In
the medical field, dashboards continue to expand and have been used for purposes
such as emergency response coordination (Schooley et al., 2011), patient monitoring
(Gao et al., 2006), and influenza surveillance (Cheng et al., 2011).
The US states of Nebraska, Kansas, and Oklahoma use a public health emergency
response information system (PHERIS) to allow hospital microbiology laboratorians
to monitor and report public health episodes across their state. In the case of a
potential outbreak the PHERIS is the tool used by the microbiologists at the clinical

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 49–59, 2014.
© Springer International Publishing Switzerland 2014
laboratory to consult with epidemiology experts at the State Public Health Laboratory
through a secure connection over the Internet. This system provides functionality to
send informational text and images of specimens between laboratories and the state
public health laboratory. However, to further enhance the functionality and usability
of the PHERIS it would be ideal if there were a single display screen (e.g. digital
dashboard) where the State Public Health Director could immediately assess if there
are any potential outbreaks on the cusp of happening with just a glance.
The first aim of our study is to analyze and apply dashboard specific design guide-
lines we identified in our literature review through a new dashboard interface opti-
mized for real-time disease outbreak and public health emergency surveillance.
Second, we will evaluate if there are any missing guidelines.
In the remainder of this paper, we begin by presenting background information on
the public health area, on the PHERIS (the system that is used in this study), and
on the various dashboard design guidelines found in the literature. Next, we present
our application of the selected medical dashboard guidelines to the new dashboard
design. Then we present our analysis of missing dashboard guidelines. We conclude
with remarks on the next phases planned for this study.

2 Background

2.1 Public Health


Public health is defined as “all organized measures (whether public or private) to
prevent disease, promote health, and prolong life among the population as a whole”
(WHO, 2014). The mission of public health is “fulfilling society’s interest in assuring
conditions in which people can be healthy” (IOM, 1988).
Some of the goals of public health are to prevent epidemics and the spread of
disease, protect against environmental hazards, promote and encourage healthy beha-
viors, respond to disasters and assist communities in recovery, and to assure the quali-
ty and accessibility of health services (Turnock, 2009). One of the essential services
provided by public health agencies is to monitor the health status and to identify
community health problems (Turnock, 2009).
In the USA, the Centers for Disease Control and Prevention (CDC) is the nation’s
leading public health agency, and is responsible for responding to health threats such
as naturally occurring contagious disease outbreaks or deliberate attacks (CDC,
2011). To be able to fulfill this monitoring role, every time a suspected select agent
(such as Bacillus anthracis [“anthrax”]) is encountered by a state public health organ-
ization, it needs to be reported to the CDC. To fulfill this requirement, the state public
health laboratories of Nebraska, Kansas, and Oklahoma use a system which allows
them to communicate with laboratories in their state electronically and collect photos
and metadata of suspected select agents to report to the CDC.

2.2 Public Health Emergency Response Information System

The intent of the PHERIS (STATPack™) system used in this study was to address
critical health communication and biosecurity needs of State Public Health Laboratories
in rural states. The Secure Telecommunications Application Terminal Package
(STATPack™) system is a secure, patient-privacy compliant, web-based network system that
supports video telemedicine and connectivity among clinical health laboratories. The
overarching goal of this public health emergency response system is to establish
an electronic infrastructure, largely using web technology, to allow secure communi-
cation among state public health hub and spoke laboratory networks in emergency
situations.
Specifically, the STATPack™ concept involves taking macroscopic (gross) as well
as microscopic digital images of culture samples and sending them electronically for
consultation with experts at state public health laboratories. STATPack™ enables
microbiology laboratories around the state to send pictures of suspicious organisms to
the state public health laboratory, instead of the samples themselves, thus lessening
the risk of spreading infectious diseases. The system includes an alert system that is
bi-directional and has various levels of priorities (emergency, urgent, routine, and
exercise).
STATPack™ is especially useful in states where much of the expertise is located in
a hub laboratory, while most triage and decision making regarding specimen
processing takes place in smaller spoke hospital laboratories. For some of the spoke
laboratories, it is difficult if not impossible for them to describe to experts what they
see in a culture sample. STATPack™ allows experts to actually see the sample imme-
diately and assist with the diagnosis in a matter of minutes, eliminating the risks and
time delay of shipping the sample by courier.
In the case of an emergency, an expert scientist at a hub laboratory can, in real
time, remotely focus the camera on a suspicious organism, analyze the image, and
respond to the spoke laboratory. If the organism is deemed a public health threat, the
STATPack™ system can be used to send an alert to every laboratory in the network.
Prior to STATPack™, the only option was to physically send the sample to the hub
laboratory, which could take several hours or even a full day to receive.
State public health experts spend significant time monitoring public health threats
such as influenza outbreaks. Monitoring multiple public health laboratories state-wide
at a glance is often challenging due to having to search multiple places for
information, data overload, continuous changes of statuses, not knowing what
information has changed, and a need to evaluate the potential impact. To address some of
these challenges, we designed a dashboard that would present all the relevant
information for a state-wide surveillance system on one screen. We will refer to this new
dashboard as STATDash.

2.3 Dashboard Design Guidelines


In this section we present a meta review of existing dashboard design best practices
and related guidelines. This includes several studies reporting on the development of
different kinds of medical dashboards, ranging from influenza surveillance, patient
triage monitoring, to radiology reporting. A list of studies is presented in Table 1.
Most of these studies also included guidelines for medical dashboard design, not just
dashboards in general. The number of guidelines featured in each study is shown in
Table 1.
Table 1. Selected relevant research

Study Subject # Guidelines


Cheng et al., 2011 * Influenza surveillance dashboard 5
Dolan et al., 2013 Treatment decision dashboard 0
Few, 2006 Information dashboard design 12
Fruhling, 2004 Public health emergency response system 3
Gao et al., 2006 Patient triage monitoring for emergency response 15
Morgan et al., 2008 Radiology report backlog monitoring dashboard 4
Schooley et al., 2011 Emergency medical response coordination 6
Tufte, 2001 Information visualization 1
Turoff et al., 2004 Medical response information system 8
Zhan et al., 2005 * Disease surveillance and environmental health 4

As shown in Table 1, the number of guidelines specific to public health monitoring
dashboards is relatively low -- only two studies providing a total of nine guidelines
fall into this field (highlighted with an asterisk).
When we widen the criteria to include all medical dashboard guidelines, four more
studies presenting 33 guidelines can be included. Furthermore, there are two relevant
papers discussing 11 best practices for medical/public health emergency response
systems design. Also, two studies in the field of information visualization and general
dashboard design have some overlapping relevancy and thus, are included.
The dashboard and data visualization guidelines developed by Few (2006) and
Tufte (2001) were reviewed and considered in this study. Even though they are gener-
al in nature and not specific to medical dashboards we included them, because they
provide important contributions to information visualization and dashboard user inter-
face design.
We also included Turoff et al. (2004)’s eight design principles for emergency
response information systems (not necessarily dashboards) in our literature review.
We decided to do this because Turoff’s principles are concerned with the content
required to make emergency response information systems useful.
After identifying the most salient studies, we performed a meta-analysis of all the
guidelines for dashboard design. In total, 58 guidelines were identified in the litera-
ture. Among these there were several recurring themes as well as guidelines unique to
the medical field.
The most common themes were those of designing dashboards as customizable,
actionable “launch pads”, supporting correct data interpretation, and aggregating and
summarizing information. Also frequently mentioned were adherence to conventions,
minimalist design, in-line guidance and user training, workload reduction, and using
GIS interfaces. 33 of the guidelines were unique to the field of medical dashboards,
while 17 were not applicable and 7 were too general.
The other 50 guidelines can be sorted into the eight themes that emerged from
this review. Table 2 shows the number of guidelines in
each thematic area and the studies represented within.
Table 2. Categorized guidelines

Theme                                   # Guidelines   Studies

Customizable, actionable "launch pad"   10             Cheng et al., 2011; Few, 2006; Gao et al., 2006; Morgan et al., 2008; Schooley et al., 2011; Zhan et al., 2005
Support correct data interpretation     8              Few, 2006; Gao et al., 2006; Morgan et al., 2008
Information aggregation                 7              Cheng et al., 2011; Few, 2006; Gao et al., 2006; Morgan et al., 2008
Adherence to conventions                6              Few, 2006; Gao et al., 2006; Schooley et al., 2011
Minimalist aesthetics                   6              Few, 2006; Gao et al., 2006; Tufte, 2001
In-line guidance and training           4              Few, 2006; Gao et al., 2006; Zhan et al., 2005
User workload reduction                 3              Gao et al., 2006; Schooley et al., 2011
GIS interface                           3              Schooley et al., 2011; Zhan et al., 2005

Designing dashboards as customizable, actionable “launch pads” is the guideline


that was mentioned most often. This theme is concerned with allowing users to drill
down into different aspects of the dashboard and initiate actions based on the data
presented to them. A sample best practice of this theme would be “Design for use as a
launch pad” (Few, 2006).
The second most common theme is “support correct data interpretation”, which is
related to helping the user understand information and perform actions correctly. An
example of a best practice would be “Support meaningful comparisons. Discourage
meaningless comparisons” (Few, 2006).
Third, the “information aggregation” theme places an emphasis on condensing data
to show only a high-level view of the indicators most important to the users. A sample
of this theme is “Based on the back-end algorithm, the level and trend of the overall
influenza activity are shown in the top left” (Cheng et al., 2011).
Further, the influenza monitoring dashboard in the study by Cheng et al. (2011)
synthesizes five different data types/sources to provide an overview of disease activity from
multiple perspectives. It provides drill-down functionality for each individual data
stream, a one-sentence summary of the level and trend of influenza activity, and
general recommendations to decrease the flu risk.
Similarly, STATDash provides several different data streams that allow for activity
monitoring: They are Alerts sent to clients, Alerts received from clients, Images
stored by clients, and the Network stability statuses.

3 Applying the Guidelines

We designed a dashboard interface for STATPack™ (STATDash) based on the
guidelines we selected in our meta-review discussed above, and we used our own
knowledge and expertise where there were gaps (Fruhling, 2006; Lechner et al., 2013;
Read et al., 2009). Figures 1 and 2 show the same STATDash in different states.
Figure 1 shows the overview screen, while Figure 2
shows the location drill-down screen.

Fig. 1. Dashboard overview screen

Fig. 2. Location drill-down screen

A discussion on how the selected guidelines were operationalized is presented in the
next sections. We begin with the customizable, actionable "launch pad" guideline.

3.1 Customizable, Actionable “Launch Pad”


The customizable, actionable “launch pad” guidelines (Cheng et al., 2011; Zhan et al.,
2005) were implemented by having all surveillance data required by the state public
health laboratory experts displayed on a single screen. This was achieved by showing
the status of each location with a color code on a map. We also included two charts
below the map that show the history of alert activity at various intervals: yearly,
monthly or daily.
Activity is organized by routine/exercise and emergency/urgent alerts to allow the
user to determine if a state of urgency exists. The right side of the screen shows
details of recent activity (recent alerts received, clients becoming unavailable, and
images stored). This list can be filtered to show only activity of a certain type. In ad-
dition, users can customize the thresholds used to determine the color a location is
displayed in.
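The customisable threshold logic described above might look as follows. The threshold values and the urgent-alert metric are hypothetical; the paper only states that users can adjust such thresholds.

```python
# Sketch of the customisable threshold-to-colour logic described above.
# The threshold values and the alerts-per-day metric are hypothetical;
# the paper only says that users can customise such thresholds.
DEFAULT_THRESHOLDS = {"yellow": 3, "red": 6}  # urgent alerts per day

def location_colour(urgent_alerts: int, thresholds=DEFAULT_THRESHOLDS) -> str:
    """Traffic-light colour for a location marker on the map."""
    if urgent_alerts >= thresholds["red"]:
        return "red"
    if urgent_alerts >= thresholds["yellow"]:
        return "yellow"
    return "green"

print(location_colour(0))   # green
print(location_colour(4))   # yellow
print(location_colour(7, thresholds={"yellow": 2, "red": 5}))  # red
```

Passing a per-user thresholds dictionary is one simple way to realise the customisation the guideline calls for.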
The dashboard is also actionable (Few, 2006; Morgan et al., 2008). Clicking on a
location marker allows the user to view details about that location, such as recent
alerts and images, contact information, and access to advanced functionality. In addi-
tion, clicking on a point in one of the charts shows the details of that data point.
These dashboard features require few touches/clicks to navigate the system
(Schooley et al., 2011). When a user wanted to send an alert to a client in the old user
interface, they had to click on the “Send Message” button, then locate the name of the
client in a long list, select it, and then type their message.

3.2 Supporting Correct Data Interpretation

As discussed earlier, in the context of dashboard design, this guideline focuses on
users correctly and accurately interpreting the data. It also requires that the data be
analyzed correctly by the developers and displayed accordingly. By following this
guideline, user errors can be reduced (Few, 2006). In our dashboard design, this is
instantiated by allowing the user to compare current activity level charts to average
activity over the life of the system for the respective time period. Since some disease
activity can be seasonal, this allows the specialists to make direct comparisons to
historical data.
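Such a comparison against historical data for the same period could be sketched as follows; the data values and the flagging rule are invented for illustration.

```python
# Sketch of the comparison described above: current activity for a period
# versus the historical average for the same period (e.g. the same month
# in previous years), so seasonal diseases are judged against a seasonal
# baseline. The data and the 1.5x flagging rule are hypothetical.
from statistics import mean

def seasonal_baseline(history: dict[int, list[int]], month: int) -> float:
    """Average alert count for `month` (1-12) across all years on record."""
    return mean(counts[month - 1] for counts in history.values())

history = {  # year -> twelve monthly alert counts (invented)
    2011: [2, 3, 9, 4, 2, 1, 1, 1, 2, 3, 6, 8],
    2012: [3, 5, 7, 4, 3, 1, 0, 1, 2, 4, 8, 10],
}
current_march = 13
baseline = seasonal_baseline(history, month=3)
print(f"March baseline: {baseline}, current: {current_march}")
if current_march > 1.5 * baseline:
    print("Activity well above seasonal average")
```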

3.3 Information Aggregation

As mentioned above, aggregated information is data that has been gathered and
expressed in a summary form, often for the purposes of statistical analysis. In our
example, the STATDash shows information aggregated at different levels. At the top
of the screen, a statement informs the user about the overall level and trend of activi-
ty. The map allows a user to see activity by location at a glance by implementing a
traffic light metaphor and different colors to convey meaning (Cheng et al., 2011;
Morgan et al., 2008). The section on the right hand side shows more detailed, action-
able information about the most recent -- most urgent -- activity. Finally, the two
charts at the bottom give a summary of historical data. These four elements give a
non-redundant, condensed, complete picture of disease activity following the guide-
lines presented by Few (2006) and Gao et al. (2006).
3.4 Adherence to Convention

Adherence to convention can be thought of as systems adhering to the same look and
feel across the entire user interface and using familiar, established user interface ele-
ments. Convention was observed by retaining the same principles for core functionali-
ty as before, including alert meta-data and transmission. The terminology and labels
within the system have also remained the same. Familiar symbols such as the map
markers and traffic light color coding were employed. As such, it will be easy for
users to learn to use the new dashboard, as they will already be accustomed to the
functionality and terminology (Gao et al., 2006).

3.5 Minimalist Aesthetics


The design of the dashboard follows a minimalist aesthetic approach by reducing non-
data “ink” that does not convey information (such as graphics and “eye candy”) (Few,
2006; Tufte, 2001). One example is the map, which has been reduced to only show
the outline of the state on a gray background and the locations of the clients as labeled
markers.
As a second measure, colors have been used conservatively (Few, 2006). Most of
the interface is white, gray, or black. Colors are only used to convey information,
such as using colored markers for clients to indicate their status, highlighting ur-
gent/emergency alerts in red, and showing the data lines in the charts as blue (routine
and exercise alerts) or red (urgent and emergency alerts).
Advanced functionality such as sending alerts to a specific client is hidden from
the initial view of the dashboard, thus reducing clutter and complexity (Gao et al.,
2006).

3.6 In-Line Guidance and Training


In-line guidance is provided by choosing easily understandable labels (Few, 2006)
that are based on the previous design and already familiar to the users. In cases where
this was not possible, labels were chosen with user feedback.
Visual feedback to the user’s actions is also important (Gao et al., 2006). This is
achieved through a variety of means, such as dimming the other location markers on
the map when one location is selected.

3.7 User Workload Reduction


The dashboard by design is intended to reduce the user’s workload both cognitively
and physically. A lot of this is accomplished through minimalist design and informa-
tion aggregation.

3.8 GIS Interface

The map in the center of the dashboard provides situational awareness of disease ac-
tivity and trends. This graphical display is combined with the performance indicators
above and below the map for a multi-faceted view of the current status (Schooley et
al., 2011). The map allows users to pan and zoom and select clients to view detailed
information and interact with them.

3.9 Content
Every alert and image stored within the system is identified by its source and location,
time of occurrence, and status (emergency, urgent, routine, or exercise) (Fruhling,
2006; Turoff et al., 2004). This allows users to clearly determine the source and sever-
ity of an alert and respond to it accordingly in the case of an emergency.
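A minimal sketch of such an alert record follows, assuming illustrative field names rather than STATPack™'s actual schema.

```python
# Minimal sketch of the alert record described above: every alert carries
# its source, location, time of occurrence, and one of the four priority
# statuses. Field names are illustrative, not STATPack's actual schema.
from dataclasses import dataclass
from datetime import datetime

STATUSES = ("emergency", "urgent", "routine", "exercise")

@dataclass(frozen=True)
class Alert:
    source: str        # sending laboratory
    location: str      # e.g. city or county
    occurred: datetime
    status: str        # one of STATUSES

    def __post_init__(self):
        if self.status not in STATUSES:
            raise ValueError(f"unknown status: {self.status}")

alert = Alert("Spoke Lab 12", "Omaha, NE", datetime(2014, 3, 1, 9, 30), "urgent")
print(alert.status in STATUSES)  # True
```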
Up-to-date information that is updated whenever a user loads a screen (Turoff et
al., 2004) is of great importance in an emergency response medical system and fully
implemented in STATDash, to ensure all users have the most current information
available to them for decision making.

3.10 Guidelines
Of the guidelines reviewed for this study, there were two guidelines that were not as
salient for PHERIS dashboards; rather they are just best overall practices. “Adherence
to conventions” is certainly a useful heuristic for designing dashboards, but it is too
general to be included in a set of best practices specific to PHERIS dashboards. In a
similar vein, providing “in-line guidance and training” is also too general. This guide-
line is applicable not only to this specific kind of dashboard, but to all computer sys-
tems in general (Nielsen, 1993).

4 Proposed New Dashboard Design Guidelines

The guidelines we found in our literature search were helpful in many ways; however,
we identified two gaps. Therefore, we are proposing the following new guidelines.

4.1 Minimize Cognitive Processing

This guideline seeks to reduce the users’ cognitive load by including all indicators on
a single screen without a need for navigation. In addition, charts and graphs should be
used where sensible to show trends visually and for quick interpretation.

4.2 Use Temporal Trend Analysis Techniques


Temporal relationships and comparisons are important in recognizing patterns, trends,
and potential issues. Therefore, the dashboard should have temporal capabilities to
show trends over time and in relationship to historical data. In addition, information
should be presented in a priority order based on recentness, urgency, and impact.
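The priority ordering proposed above could be sketched as a sort key over urgency and recentness. The urgency ranking is an assumed ordering, and no impact metric is defined in the text, so impact is omitted here.

```python
# Sketch of the proposed priority ordering: items sorted by urgency first,
# then recentness. The urgency ranking is an assumption; no impact metric
# is defined in the guideline, so it is omitted.
from datetime import datetime

URGENCY_RANK = {"emergency": 0, "urgent": 1, "routine": 2, "exercise": 3}

def prioritise(items):
    """Sort (status, timestamp, text) tuples: most urgent and newest first."""
    return sorted(items, key=lambda i: (URGENCY_RANK[i[0]], -i[1].timestamp()))

items = [
    ("routine", datetime(2014, 3, 2, 8, 0), "weekly summary"),
    ("emergency", datetime(2014, 3, 1, 9, 0), "suspected select agent"),
    ("urgent", datetime(2014, 3, 2, 7, 0), "unusual culture"),
]
for status, ts, text in prioritise(items):
    print(status, text)
```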
5 Conclusion

In conclusion, our analysis found several of the guidelines cited in the literature to be
appropriate and useful for public health surveillance dashboard design, yet we also
discovered that guidelines were missing. We therefore propose two new guidelines:
minimize cognitive processing, and use temporal trend analysis techniques. A limitation
of this study is that we have neither validated the two proposed guidelines nor
conducted any usability evaluation of our proposed STATDash design with users.
The next phase of our research is therefore to involve users in conducting various
usability evaluations of STATDash.

References
1. Centers for Disease Control and Prevention: CDC responds to disease outbreaks 24/7
(2011), http://www.cdc.gov/24-7/cdcfastfacts/diseaseresponse.html
2. Cheng, C.K.Y., Ip, D.K.M., Cowling, B.J., Ho, L.M., Leung, G.M., Lau, E.H.Y.: Digital
dashboard design using multiple data streams for disease surveillance with influenza
surveillance as an example. Journal of Medical Internet Research 13, e85 (2011)
3. Diaper, D.: Task Analysis for Human-Computer Interaction. Ellis Horwood, Chichester
(1989)
4. Dolan, J.G., Veazie, P.J., Russ, A.J.: Development and initial evaluation of a treatment
decision dashboard. BMC Medical Informatics and Decision Making 13, 51 (2013)
5. Few, S.: Information Dashboard Design. O’Reilly, Sebastopol (2006)
6. Fruhling, A.: Examining the critical requirements, design approaches and evaluation
methods for a public health emergency response system. Communications of the
Association for Information Systems 18, 1 (2006)
7. Gao, T., Kim, M.I., White, D., Alm, A.M.: Iterative user-centered design of a next
generation patient monitoring system for emergency medical response. In: AMIA Annual
Symposium Proceedings, pp. 284–288 (2006)
8. Institute of Medicine: The Future of Public Health. National Academy Press (1988)
9. Kintz, M.: A semantic dashboard language for a process-oriented dashboard design
methodology. In: Proceedings of the 2nd International Workshop on Model-Based
Interactive Ubiquitous Systems, Copenhagen, Denmark (2012)
10. Lechner, B., Fruhling, A., Petter, S., Siy, H.: The chicken and the pig: User involvement in
developing usability heuristics. In: Proceedings of the Nineteenth Americas Conference on
Information Systems, Chicago, IL (2013)
11. Morgan, M.B., Branstetter IV, B.F., Lionetti, D.M., Richardson, J.S., Chang, P.J.: The
radiology digital dashboard: effects on report turnaround time. Journal of Digital
Imaging 21, 50–58 (2008)
12. Nielsen, J.: Usability Engineering. Academic Press, San Diego (1993)
13. Read, A., Tarrell, A., Fruhling, A.: Exploring user preferences for dashboard menu
design. In: Proceedings of the 42nd Hawaii International Conference on System Sciences,
pp. 1–10 (2009)
14. Schmidt, K.: Functional analysis instrument. In: Schaefer, G., Hirschheim, R., Harper, M.,
Hansjee, R., Domke, M., Bjoern-Andersen, N. (eds.) Functional Analysis of Office
Requirements: A Multiperspective Approach, pp. 261–289. Wiley, Chichester (1988)
15. Schooley, B., Hilton, N., Abed, Y., Lee, Y., Horan, T.: Process improvement and
consumer-oriented design of an inter-organizational information system for emergency
medical response. In: Proceedings of the 44th Hawaii International Conference on System
Sciences, pp. 1–10 (2011)
16. Tufte, E.R.: The Visual Display of Quantitative Information, 2nd edn. Graphics Press,
Cheshire (2001)
17. Turnock, B.J.: Public Health: What It Is and How It Works. Jones and Bartlett Publishers,
Sudbury (2009)
18. Turoff, M., Chumer, M., Van de Walle, B., Yao, X.: The design of a dynamic emergency
response management information system (DERMIS). Journal of Information Technology
Theory and Application 5, 1–35 (2004)
19. World Health Organization: Public health (2014),
http://www.who.int/trade/glossary/story076/en/
20. Zhan, B.F., Lu, Y., Giordano, A., Hanford, E.J.: Geographic information system (GIS)
as a tool for disease surveillance and environmental health research. In: Proceedings
of the 2005 International Conference on Services, Systems and Services Management,
pp. 1465–1470 (2005)
Towards Public Health Dashboard Design Guidelines 59
Information Technology Service Delivery
to Small Businesses

Mei Lu, Philip Corriveau, Luke Koons, and Donna Boyer

Intel Corporation, United States


{mei.lu,philip.j.corriveau,luke.e.koons,
donna.j.boyer}@intel.com

Abstract. This paper reports findings from a study conducted to evaluate Intel’s
Service Delivery Platform for small businesses. The Service Delivery Platform
adopted a Software-as-a-Service (SaaS) approach and aimed to deliver
information technology (IT) services on a pay-as-you-go subscription model. The
majority of small business decision makers found the solution appealing.
Nevertheless, wide adoption of the solution will be contingent on the quality and
breadth of service offerings, cost, reliability of service delivery, and
responsiveness of support.

Keywords: Software as a Service, information technology.

1 Introduction

Small businesses in all countries are an important part of the economy [2, 4]. In the
USA, more than 98% of all firms are small businesses with fewer than one hundred
employees; these businesses employ about 36% of the total work force (USA census
data, 2004). They represent a market segment that is eager to explore or grow their
business with the help of new information technology. From 2004 to 2008 we visited
more than 50 small businesses to understand their technology needs in various areas,
including collaboration, information management, and IT manageability. We found
that IT landscapes in small businesses were smaller, but just as complex, as those in
large organizations. Small business needs included networks, servers, personal
computers, phones, printers, and much other hardware. Like larger businesses,
they needed software applications for productivity, business process automation, and
internal and external collaboration. However, they were much more constrained than
larger businesses in terms of resources, knowledge, and expertise regarding
information technology. Small business owners consistently told us that they had
challenges in understanding and keeping up with the newest developments in
technology, and in selecting the best solutions for their businesses. They also had
difficulty quickly deploying solutions, maintaining a highly managed computing
environment, and providing end-user support. Many small businesses depended on
external service providers for IT management. These service providers were looking
for solutions that
F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 60–67, 2014.
© Springer International Publishing Switzerland 2014

could help them to build trusted relationships more effectively with customers, and to
manage IT for different businesses more efficiently.
The Service Delivery Platform is designed to address these needs and challenges
for business owners and for service providers. The platform adopts a
Software-as-a-Service [1] approach. It aggregates services from different vendors,
and aims to deliver the services to small businesses with a “pay-as-you-go”
subscription model. Services here are intended to cover applications that businesses
may need for their daily operations, spanning IT management, employee productivity,
and business processes. The platform provides a web-based portal targeted at two
types of users: 1) business owners and decision makers, who will use the portal to
research IT solutions and review recommendations and feedback from other users;
and 2) internal or external IT administrators, who manage services and provide
support for end users. The portal supports key user tasks such as service subscription,
device management, status monitoring, and remote troubleshooting and support.
Key portal components include:

• Service catalog: Descriptions of platform service offerings, including pricing,
screen shots, technical details, user reviews, and user manuals or instructions.
• Control panel: A view that allows business owners or IT administrators to remotely
add or remove services, via a subscription, on their clients’ end-user computers.
• “Pay-as-you-go” subscription service: Allows businesses to pay for services
based on the number of users and the length of time they use the services. Services
can be started or cancelled at any time from the web portal.
• Status monitoring dashboard: Allows owners or IT administrators to view all of
their devices, and remotely monitor the status of service installations or operations
on different devices.

This research was conducted to evaluate an early prototype of the Service Delivery
Platform with small business owners and their internal or external IT administrators.
In-depth interviews were conducted with twenty businesses in several locations across
the United States, including New Jersey, New York, and Oregon. The primary goal
was to understand key perceptions regarding the value of such a solution, intention
to adopt, decision factors, and potential adoption hurdles. To support further
design and development of the web portal, the research also sought to understand the
perceived usefulness of its key features and the priorities of potential service
offerings on the platform.

2 Method

Several participant recruiting criteria were designed to identify businesses as potential
early adopters or early majority on Rogers’ innovation adoption curve [3]. The goal
was to identify businesses with potential needs for IT services that, at the same time,
could provide objective and balanced views on the value of the Service Delivery
Platform. The criteria included:

• Business verticals: Proprietary market research data suggested that different
industry verticals had different levels of spending on IT services. Four verticals
with high and middle levels of spending on IT services were selected for the
interviews: 1) professional services; 2) retail; 3) finance, insurance & real estate;
and 4) wholesale and distribution.
• Attitude towards IT: During the recruiting process, business owners were asked
about their use of and attitudes toward technology. Businesses selected for the
interviews regarded technology as important or somewhat important, and had
been using technology enthusiastically or pragmatically.
• Current IT service models: The selected businesses represented two types of IT
support model, whereby IT was mainly 1) self-managed by either part-time or
full-time IT staff; or 2) managed by outsourced service companies.

The two-hour interviews were conducted on the businesses’ sites with both the
business owners/decision makers and internal or external IT staff.
After general discussions about their business background and current IT practices,
the Service Delivery Platform solution was presented to the interviewees with
storyboards and visual paper prototypes or mockups. Afterward, the interviewees were
asked to 1) rate the usefulness of major features of the platform and describe how the
features might be used in their organizations; 2) review different potential service
offerings in the catalog and discuss whether they were interested in subscribing to
different IT services from the platform; and 3) discuss the overall appeal of the
solution, adoption hurdles, and concerns.

3 Results
Out of the twenty businesses we interviewed, fifteen rated the platform solution as
appealing or very appealing. The businesses expressed general interest in subscribing
to services in areas related to security and protection, employee productivity (e.g.,
word processing and E-mail), and external service provider support. However, the
businesses also pointed out that their adoption would be contingent on a number of
factors, including cost, the breadth and quality of service catalog offerings, reliability
of service delivery, and responsiveness of support.

3.1 Key Perceived Values

The businesses identified a number of values and benefits in the Service Delivery
Platform. Key values included ease of service deployment, ease of control and
management, pay-as-you-go flexibility, and the potential for preventive management.

Ease of Service Deployment. In previous small-business studies, we frequently heard
about the difficulty of keeping up with technology developments, and that sentiment
was reiterated in this study. As one business owner said, “our old technologies
worked just fine, but we were often forced to upgrade (because vendors no longer
provided support for old technologies).” Or, as other business owners said, “the most

challenging is trying to keep up with what’s available as far as new equipment and
what we can use”, and “it is time-consuming (to do research). I have no idea on what
is out there.”
The key features of the Service Delivery Platform appear to address this challenge.
One key benefit that business owners and IT staff identified was that the platform
potentially allowed easy research and much quicker decisions on, and deployment of,
IT solutions.
The business owners viewed the service catalog as a place where they could
research new technology and view the opinions of other business owners and
recommendations from other users. In addition, the platform provided a mechanism for
them to easily experiment with different potential solutions. For example, with
minimal commitment they could easily install an application on one or several computers
and experiment with it. The ability to cancel services at any time gave users more
confidence to try out different services.

Ease of Control. Another key perceived benefit was ease of control and management.
IT staff liked the remote subscription service. Especially for external IT staff, the
ability to remotely install and uninstall services would allow them to more efficiently
support customers in different businesses. They were most interested in
features allowing them to efficiently manage services for multiple computers. For
example:

• Creating an image or configuration with a set of various services, and then
applying the image to a computer to install multiple services together.
• Copying the service configuration of one computer to another: for example,
when a user’s computer needed to be upgraded to a new service configuration. As
one IT staff member said: “The hardest thing when upgrading a computer is to get
all that information back over (to the new computer).”

In addition, the portal provided a centralized location for IT staff to track assets
and licenses, allowing businesses to view all their devices and the software installed
on each device.
A number of businesses mentioned current challenges in tracking software
licenses. As one owner said: “one of the challenges we run into is trying to keep track
of everything we have, all the software versions, all the licenses we have, the latest
downloads. That becomes extremely cumbersome.” Another IT staff member said: “it
is huge being able to consolidate all your clients into one view.” The businesses pointed
out that this visibility also allowed them to more effectively plan for future technology
needs.

Flexibility. For the subscription-based payment model, the businesses identified two
main potential benefits: flexibility and cost saving. The ability to start or terminate a
service subscription at any time allowed businesses to pay for what they
were actually using. It enabled businesses to easily access expensive applications that
they did not use frequently, or not all of the time, such as video and image editing
applications. The users also identified the benefit of easy decommissioning of
services from devices. As one owner said, “That’s the hardest thing for a small guy (to
decommission devices); Jay leaves the company tomorrow, his laptop is sitting there,
no one’s using it, I want to be able to turn Jay’s office just in a manner that says Jay’s
not using it.” Another owner pointed out that “it is a much better approach than the
yearly commitment type.”

Preventive Management. Another key perceived benefit was that the Service Delivery
Platform would allow businesses to shift from reactive IT management models
to proactive and preventive management models. We observed that IT management
in these businesses was mostly reactive, in the sense that IT administrators acted
when users approached them with problems. The Service Delivery Platform offered
features such as asset tracking, device status monitoring, service status monitoring,
and service activity support. With these features, businesses would be more aware of
what devices were in the environment, how they were used, and how everything was
running. As a result, those interviewed said they would be able to “address issues
before catastrophic impact,” “more effectively anticipate and plan for user needs,”
“easily create a budget for IT services,” and “do more fire prevention instead of
firefighting.”

3.2 Usefulness Ratings

The participants were asked to rate the usefulness of the main features of the platform
on a five-point scale, with 5 being “very useful” and 1 being “not useful at all.”
Table 1 summarizes the highest rated features. These ratings were consistent with
participants’ discussions of the key values of the platform. The most highly rated
features were related to ease of service deployment, preventive management, and
centralized tracking and control.
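The Table 1 figures are simple per-feature means of these five-point ratings. A minimal sketch of that aggregation follows; the rating vectors below are illustrative stand-ins invented for this example, not the study’s raw responses (the paper reports only the means, n=20):

```python
from statistics import mean

# Hypothetical per-participant usefulness ratings on the study's five-point
# scale (5 = "very useful", 1 = "not useful at all"). These short vectors are
# invented purely to illustrate the aggregation behind Table 1.
ratings = {
    "Real time service status": [5, 4, 5, 4, 4],
    "Service activity report": [4, 4, 4, 4, 4],
}

# Report each feature's mean rating to one decimal place, as in Table 1.
for feature, scores in ratings.items():
    print(f"{feature}: {mean(scores):.1f}")
```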
Both business owners and their IT administrators were interested in the ability to
quickly deploy services with a “service image or profile,” or by “duplicating service
configuration from one device to another.” The value of these features was the ability
to quickly provision or deploy a computer for a user. Similarly, when a computer was
no longer used, for example after a user had left the company, businesses wanted to
quickly “decommission” the computer so that they would not pay for its services.
The “real time service status” and “device status” features were found useful
because they allowed internal or external IT administrators to closely monitor their
computing environments and take proactive action if needed. Finally, the business
owners liked the ability to “track all their assets” via the portal, and the ability to
receive and review a “service activity report” to understand what services they had
received and how much those services had cost; this information would be useful for
creating a budget plan for the future.

3.3 Interest in Services

After the discussion of the key features of the platform and portal, the interviewees
were invited to review potential offerings in the service catalog. The service offerings
fell into five categories: employee productivity, collaboration, security and protection,
managed service provider support, and backup and restore. The participants were
asked to indicate whether they would be interested in purchasing or subscribing to the
services from the Service Delivery Platform. Table 2 summarizes their interest in
purchasing services.

Table 1. Highest rated platform features (n=20)

Feature                                                    Rating
Real time service status                                   4.4
Device asset tracking on the portal                        4.2
Service configuration duplication (allows quick
  deployment of a PC to replace an old one)                4.2
Service image or profile                                   4.1
Device status information                                  4.0
Service activity report                                    4.0
Decommission                                               4.0

Table 2. Businesses’ interest in different services (n=16)

Service                Interested in    Service                Interested in
                       buying (%)                              buying (%)
Office applications    81               VoIP                   44
PC anti-virus          81               Local backup           38
Email anti-virus       81               Database               31
Email anti-spam        75               Remote firewall        31
Intrusion detection    75               BI                     19
Remote backup          69               Accounting             19
Email                  69               CRM                    13
File sharing           50               Project management     13
VPN                    44               Content management     13

The businesses were most interested in services related to security and protection,
and in basic employee productivity applications including office, email, and file
sharing. The high level of interest in security and protection services was consistent
with the participants’ discussions of their current challenges. One major challenge
pointed out by several different businesses was protection from malware or spam
email from the Internet. As one IT staff member said, “A big problem is people
download some programs that make their PC not working. It (PC) slows down and
becomes unusable. It is very time consuming to solve the problem.” As another
business pointed out, “Biggest thing we have to watch is e-mail spam… What the
server spends most of its time doing is rejecting spam… 5,000 to 8,000 collectively a
day we get hit with.”
In contrast, the businesses expressed lower levels of interest (<50%) in more
sophisticated applications such as voice over IP (VoIP), database, business intelligence
(BI), virtual private network (VPN), remote firewalls, project management, customer
relationship management, and content management. The main reasons given for the
lower levels of interest were a lack of need, and the existence of similar applications
that the businesses were not likely to replace in the near term.

3.4 Potential Adoption Hurdles

Even though the businesses demonstrated enthusiasm about the Service Delivery
Platform solution, they pointed out several potential adoption hurdles.

• Cost: The interviewees could perceive the cost-saving benefits of the
subscription-based service model; nevertheless, they mentioned that they would
carefully compare its cost to that of more traditional purchase models, or shop
around for prices. It was critical for the platform to provide compelling pricing
models so that businesses could reduce the total cost of IT operations.
• Quality and breadth of service offerings: Even though the businesses expressed
varying levels of interest in different services, they expected the service catalog to
offer a wide collection of high-quality services. The participants mentioned that
the best adoption entry points were when businesses were purchasing new
computers or when a new business was formed. At those times, they expected the
service catalog to provide services for all basic computing needs.
• Reliability: Businesses expected the platform to deliver and install services in a
highly reliable fashion, and expected that the services would not cause any
disruption to PC performance. As one owner said, “We cannot afford any
downtime -- every minute we will be losing money.”
• Responsiveness of support: Business owners expected a very quick support
response, as fast as they currently received from internal staff or local service
providers. “They should be just one phone call or one email away.”

4 Discussion

Small businesses have large and complex demands for information technology, but
nonetheless lack the expertise and resources to stay abreast of the newest
developments. This study showed that small businesses experience numerous pain
points with traditional models of software or service management, including research,
purchasing, deployment, license management, maintenance contracts, and expensive
upgrades. Software-as-a-service approaches appear to have the advantage of
providing the simplicity and

flexibility small businesses desire. In this study, business owners demonstrated a
willingness to trust a reliable external service provider for their computing needs, and
to adopt a subscription-based model for their software and support. Nevertheless, both
service providers and business owners will need infrastructure support in order
to achieve the required reliability, efficiency, and effectiveness in service delivery.
The Service Delivery Platform is intended to be such an infrastructure, connecting
service vendors, end users, and support providers. The value propositions and
key features of the Service Delivery Platform were well received by both business
owners and internal/external IT staff in the study. Such a platform will need both a
compelling business model and a compelling user experience to achieve wide adoption.
It is critical that it appropriately address the needs of both business owners and
service/support providers. Key user experience needs for business owners include:

• A service catalog with information tailored to business owners, who typically are
not technology experts and are not interested in technical details.
• Easy communication with external service providers: for example, the ability
to receive reports on what services have been provided, and proactive, tailored
recommendations on what technology might be useful for the business.
• Quick deployment, with the ability to easily experiment with different solutions
and then quickly deploy them.

For external service providers or internal IT staff, key needs include:

• Technical details in the service catalog, as they need much more detailed
information on the different services offered.
• Well-integrated service management tools, including asset tracking, service
subscription management, status monitoring, and remote device control for
management or troubleshooting purposes.
• Service bundling and packaging: the ability to easily create different service
bundles or packages for different business customers or end users.
• Customer management and support tools for external service providers, supporting
tasks such as billing, support ticket management, and communication with
customers.

References
1. Bennett, K., Layzell, P., Budgen, D., Brereton, P., Macaulay, L., Munro, M.: Service-Based
Software: The Future for Flexible Software. In: Proceedings of the Seventh Asia-Pacific
Software Engineering Conference, pp. 214–221 (2000)
2. Berranger, P., Tucker, D., Jones, L.: Internet Diffusion in Creative Micro-business:
Identifying Change Agent Characteristics as Critical Success Factors. Journal of
Organizational Computing and Electronic Commerce 11(3), 197–214 (2001)
3. Rogers, E.M.: New Product Adoption and Diffusion. Journal of Consumer Research 2,
290–301 (1976)
4. Thong, J.Y.L.: An Integrated Model of Information Systems Adoption in Small Businesses.
Journal of Management Information Systems 15(4), 187–214 (1999)
Charting a New Course for the Workplace
with an Experience Framework

Faith McCreary, Marla Gómez, Derrick Schloss, and Deidre Ali

Information Technology Group, IT User Experience,
Intel Corporation, Santa Clara, CA, USA
{faith.a.mccreary,marla.a.gomez,derrick.j.schloss}@intel.com
[email protected]

Abstract. Like many, our company had a wealth of data about business users
that included both big data by-products of operations (e.g., transactions) and
outputs of traditional User Experience (UX) methods (e.g., interviews). To fully
leverage the combined intelligence of this rich data, we had to aggregate big
data and the outputs of traditional UX together. By connecting user stories to big
data, we could test the generalizability of insights from qualitative studies against
the larger world of business users and what they actually do. Similarly, big data
benefited from the rich contextual insights found in more traditional UX studies.
In this paper, we present a hybrid analysis approach that allowed us to leverage
the combined intelligence of big data and the outputs of UX methods. This
approach allowed us to define an over-arching experience framework that
provided actionable insights across the enterprise. We will discuss the underlying
methodology, key learnings, and how the work is revolutionizing experience
decision making within the enterprise.

Keywords: UX Strategy, Big Data, Qualitative Data, User Research.

1 Introduction

Today’s enterprise experience is often a fragmented one, spanning multiple vendors,
devices, products, and platforms. Enterprise users shift between very different
interfaces, which both frustrates them and makes them less efficient. This problem is
exacerbated by the number of teams needed to develop and manage the enterprise
experience: usually dozens of teams spanning the globe, often operating
independently of each other with little opportunity to discuss how the pieces fit
together to shape the enterprise experience. After years of trying to wrestle the
individual components of the enterprise experience into some semblance of a coherent
whole, Intel IT took on an audacious goal: to define a One IT experience that met
employee needs and spanned its many products and services.
Like many businesses, our IT shop had a wealth of data about business users that
included both big data by-products of operations (e.g., transactions) and by-products
of traditional User Experience (UX) methods (e.g. interviews). This data included
over 700 hours of user narratives, 20,000 surveys, and 18 million transactions. In this

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 68–79, 2014.
© Springer International Publishing Switzerland 2014

paper, we present a hybrid analysis approach that allowed us to leverage the
combined intelligence of big data and the outputs of UX methods to define an
over-arching experience framework that is being used to frame the One IT experience
and seed human-centric transformation within the enterprise. We will discuss the
underlying methodology, decompose the framework, and provide examples of how it
is being used by the larger IT shop. Lastly, we will map the evolution of this effort
over the last two years, share learnings and insights from our journey, and discuss the
benefits of having a data-driven, re-usable, over-arching experience vision to guide
enterprise decision-making.

2 Background

The data that enterprises collect every day is a storehouse of information about
business users. It includes enterprise transactions, social data, support tickets, web
logs, internet searches, clickstream data, and much more. Enterprises often manage
data related to users in silos around infrastructure or application support. Similarly,
analysis efforts focus on identifying problems related to the silo. Despite the rich
information contained in this data, it is seldom used to improve the cross-enterprise
experience of business users. Similar to how consumer-facing corporations (e.g.,
Amazon, Google) examine customer usage and interactions to tailor the purchasing
or support experience for customers [1], enterprises could utilize knowledge about
employees to enhance their business experience. However, tools to derive insights
from big data are immature, especially with respect to UX; and analysis is hampered
by the fact that most of this data is incompatible, incomprehensible, and messy to tie
together. Further, even when this data is connected, big data is a backwards look at
what has been. It cannot help enterprises fully understand what motivates the user
behavior that they track, or understand the full context in which it occurred. It does
not help enterprises spot forward-looking opportunities for providing new value to
their users, design a better solution, or better engage their users; and those are the
places where user experience has the most potential to add value to the enterprise.
Big data lacks the contextual insights necessary for user-centric design and innovation.
Fortunately, where big data falls short, more traditional UX methods excel. Many
UX methods rely on user narratives or observations that come from interviews,
participatory design sessions, social media, or open-ended comments on surveys. They
provide the qualitative color that yields the richer understanding of the holistic
experience necessary for experience innovation or improvement. While traditional UX
has a wide variety of methods (e.g., affinity diagrams, qualitative coding) to help UX
professionals transform qualitative data into insights, these methods often involve
only small numbers of users, which puts their generalizability in question in the
corporate environment. In addition, the output of these methods does not lend itself to
easy mixing with big data; nor are user narratives usually analyzed to the point where
underlying structures are visible [2]. And, much like the transactional data the
enterprise collects, data collected by UX professionals often remains siloed and is not
re-used to form a larger understanding of the enterprise experience.
70 F. McCreary et al.

Leveraging the combined intelligence of big data and traditional UX data can be a daunting task, as the data sets lack connections or a way to pull together the diverse data and connect it to specific aspects of the experience. Sociotechnical systems theory and macroergonomics offer a way of connecting disparate data and provide a theoretical model for understanding the holistic user experience. They have been used successfully to holistically assess how well a technology fits its users and their work environment in relationship to enterprise priorities using diverse data types [3, 4]. They are especially useful for examining the business experience, as success requires that IT understand how its "technology" impacts other elements of the user's world.

3 Growing an Experience Framework for the Enterprise


Back in 2011, IT, in partnership with HR, conducted over 200 interviews and 300 participatory design sessions focused on understanding the experience of employees. Since then, IT has increased the data set by 275%. The resulting multi-gigabyte data set covers more than 100K employees across Intel and around the world, more than 700 hours of user stories, and 18 million user transactions. It provides a high-confidence, big data look at the business experience of employees, with the margin of error for the qualitative sample at less than .0495 and less than .0002 for the transactional sample [5]. Growing an experience framework from this massive data set necessitated that we explore and understand hidden relationships within the data sets. This section discusses the various methods that we used to elicit insights and describes the complexity of managing the underlying data.

3.1 Growing Connections in User Transactional Big Data

Enterprises collect large amounts of user data in terms of user demographics (e.g., role, organization) and as by-products of user transactions (e.g., portal usage, support tickets). Aggregated together, they provide a holistic picture of the enterprise experience. While some data is considered confidential (e.g., age), other data is more publicly available (e.g., app use). Regardless, all data is typically protected in enterprises, which necessitates both legal and privacy negotiation before aggregating the data. Prior to making any attempt to integrate the data sets, the raw data was anonymized by replacing all employee identifiers with an encrypted unique identifier.
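The anonymization step described above can be sketched as a keyed hash over the raw identifiers. A minimal Python sketch, assuming dict-shaped records and an illustrative `employee_id` field (not the paper's actual schema):

```python
import hmac
import hashlib

def anonymize(records, secret_key, id_field="employee_id"):
    """Replace each identifier with a stable, non-reversible token (HMAC-SHA256)."""
    out = []
    for rec in records:
        rec = dict(rec)  # leave the raw records untouched
        token = hmac.new(secret_key, rec[id_field].encode(), hashlib.sha256).hexdigest()
        rec[id_field] = token
        out.append(rec)
    return out

rows = [{"employee_id": "e1001", "app": "portal"},
        {"employee_id": "e1001", "app": "wiki"}]
anon = anonymize(rows, secret_key=b"kept-separately-from-the-data")
# The same employee maps to the same token, so records remain joinable
# across datasets while the raw identifier never enters the aggregate.
```

A keyed hash (rather than a plain hash) matters here: without the secret key, known employee IDs could simply be re-hashed and matched against the tokens.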
When we initially went to gather the user data, we naively expected an enterprise-level data map that would help us locate relevant data. Instead, the process was a treasure hunt for data that could enrich our understanding of employee usage of enterprise products and services. The data was a mix of structured and unstructured data. Data formats were sometimes undocumented and often inconsistent within and between datasets, with formatting often changing over time, resulting in inconsistencies within a single dataset. The management of structured versus unstructured data meant tradeoffs between what was known and what could be feasibly stored or analyzed. We regularly exceeded the limits of our data storage and analysis capabilities and sometimes had to distill raw data into meaningful summary data. For instance, support tickets were reduced to the total number of tickets and the mean time between tickets. This
mountain of data was then distilled into individual employee usage footprints using the coded identifiers. By organizing the data in terms of individual users, we could more easily discern individual patterns and integrate new quantitative information as it was discovered.
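As a hedged illustration of this distillation step (the schema and numbers are invented, not from the study), support tickets can be reduced to per-employee footprint features such as total count and mean days between tickets:

```python
from datetime import date

# Raw tickets: (anonymized employee token, ticket open date)
tickets = [
    ("u1", date(2013, 1, 5)), ("u1", date(2013, 1, 15)), ("u1", date(2013, 2, 4)),
    ("u2", date(2013, 3, 1)),
]

# Build one footprint per employee from the raw event stream.
footprints = {}
for user, day in sorted(tickets):
    fp = footprints.setdefault(user, {"ticket_count": 0, "dates": []})
    fp["ticket_count"] += 1
    fp["dates"].append(day)

# Distill raw events into two summary measures per employee.
for fp in footprints.values():
    gaps = [(b - a).days for a, b in zip(fp["dates"], fp["dates"][1:])]
    fp["mean_days_between"] = sum(gaps) / len(gaps) if gaps else None
    del fp["dates"]  # drop raw detail once summarized

# u1 has gaps of 10 and 20 days, so mean_days_between is 15.0
```

Organizing by the anonymized token, as here, is what lets new quantitative measures be merged into an existing footprint later.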

3.2 Growing Connections in the User Stories

User narratives were captured through interviews, contextual inquiry, participatory design sessions, support tickets, and surveys. Open-ended questions framed discussion of the enterprise experience spanning key sociotechnical elements related to the user's environment, technology, social setting, and organization. The qualitative data provided rich, near-verbatim narratives of users' experience. As with earlier work, we took the narratives as a direct representation of experience or a critical part of a user's underlying mental model [2]. Each user narrative was associated with an anonymous identifier to connect the narratives to the quantitative data.
We manually coded user narratives using a mix of exploratory and structured coding. For the free-form narratives, we started with the smallest actionable chunks (e.g., low-level requirements) and built the coding structure from the bottom up rather than pre-defining the coding. A single narrative was coded at a time, with the exploratory coding structure iteratively refined as analysis progressed. One coder coded the majority of the narratives, with one other coder doing the exploratory coding for several dozen. Additionally, there were several feeder coders who helped build the structured branches of the model (e.g., social networks). Coders regularly met and went through an affinity-diagram-type activity [7] to consolidate coding structures. The narratives guided the coding structure, but we also coded certain attributes, including:

• Specifics of user activities (e.g., key steps, triggers, success criteria)
• Whether the narrative detailed a positive or negative incident from the user perspective
• Environmental factors (e.g., workspace, location)
• Underlying technology (e.g., suite of tools, enterprise system, process, or device)
• Individual user characteristics (e.g., attitudes, motivators)
• Social factors (e.g., social network)
• Organizational factors (e.g., how work was organized)

The final coding tree represented the users' over-arching mental model of the experience [6] and defined the experience users wanted the enterprise to deliver. It mapped patterns of user behavior and needs, with enough detail to get to requirements. We then looked for meta-patterns, or schemas shared by enterprise users, again using an affinity-diagram-type exercise [6] as a way of data sense-making. The derived meta-patterns became the foundation of the experience framework.

3.3 Discovering Patterns in the Combined Data

We then connected the narratives with our “big” enterprise data using the coded
identifiers. Rather than merge the whole narratives as unstructured data, we defined
summary measures based on the coding framework. These summary measures
connected the user stories with the larger dataset to help us discover patterns across datasets. For each node in the first few levels, we specified two summary measures: (1) the total number of references coded for the node, and (2) the number of references coded for the node that were negative (i.e., pain points). Using correlational methods, mathematically best-"fit" patterns were identified in the combined dataset based on similarities in how employees used and talked about enterprise products and services. We used non-parametric methods, as the data was often non-normal. Cross-references between the datasets allowed us to find connections and validate our findings from other data sets [5]. This process was highly iterative, with a continuous cycle of data and user research. By making the combined dataset a living thing, we could add more as needed, and it ensured the enterprise has a constant pulse on user needs, can strategically identify key opportunities, and can respond more quickly when new needs arise. The final best-"fit" patterns became the building blocks of the experience framework and are discussed further in the next section.
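Given the non-normality noted above, a rank-based measure such as Spearman's rho is one plausible choice for the correlational step. A dependency-free sketch (the input vectors are invented summary measures, not the study's data):

```python
def rank(values):
    """Assign 1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Per-user summary measures: support-ticket counts vs. negative "pain point" references
tickets   = [1, 4, 2, 8, 5]
negatives = [0, 3, 1, 9, 4]
rho = spearman(tickets, negatives)  # perfectly monotone relationship -> rho == 1.0
```

In practice a library routine (e.g., `scipy.stats.spearmanr`) would be used instead; the point is only that ranking sidesteps the non-normality of the raw measures.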

4 Bringing the Framework to Life with Stories

The experience framework is a conceptual map of the desired user experience, and our intent was for the framework to become the common language and shared basis for designing and evaluating enterprise services for the Intel user. To facilitate product teams' use of the framework, we introduced large-scale, layered storytelling to unify the supporting framework collateral. The underlying stories focus on particular elements of the dataset and ignore the rest. Strung together, they map the desired enterprise experience; individually, each tells only a piece, for the data set is too large and diverse to be told in a single story. Users of the experience framework take these stories and data to create their own stories relevant to their product; many stories are possible from the same data.
Different framework elements provide different insights. Themes define the enterprise experience vision that spans the many products and services provided by Intel IT. Segments define the user groups that must be taken into account when creating the enterprise experience, while influencers and activities help IT understand the role it plays in core enterprise tasks and its impact on the overall experience. Much has been learned about how to most effectively use this information with product teams, and the collateral has iteratively evolved to better help teams make sense of the large dataset. Social media is used extensively to socialize the framework; training and workshops were developed to optimize its use by service and portfolio teams.

4.1 Experience Themes


Experience themes describe core user needs that transcend enterprise product or service boundaries. They help service and product teams understand the shared expectations that users have of both the enterprise experience and their individual product interactions. To increase the ease of applying a theme to a specific product, each theme was decomposed into experience qualities that describe the core theme
components and the strategic functionality necessary to bring them to life. These were packaged as quality "trading cards" and are used by teams when setting UX strategy and product roadmaps. Each card details the key use scenarios for that quality and proposed functionality. Experience qualities are further broken down into experience elements, which document key usage scenarios and requirements users expect in products. This information was packaged in theme vision books and as 8x10 cards to facilitate use during face-to-face design sessions. Three themes, 12 qualities, 59 experience elements, and hundreds of requirements detail the desired over-arching experience; they are summarized in Table 1.

Table 1. The themes and qualities that framed the envisioned experience [5]

Theme: Feed Me – I quickly and easily find the information I need to speed my work.
Qualities:
• Seamless – Transparent. Integrated but flexible.
• Simple – Quick and easy. Language I can understand.
• Meaningful – Points me in the right direction, aids me in sense-making of information, and helps me work smarter.
• Proactive – Push me relevant information, make me aware of changes before they happen, and help me not be surprised.

Theme: Connect Me – Connect me with the people, resources, and expertise I need to be successful.
Qualities:
• Purposeful – Together we do work.
• Easy – Easy to work together and connect.
• Cooperative – Larger environment is supportive of me.
• Presence – Always present, or at least I feel like you are near.

Theme: Know Me – My information is known, protected, and used to improve provided services.
Qualities:
• Recognized – Know who I am.
• Personalized – Implicitly know what I need.
• Customized – Give me choices.
• Private – My information is under my control. Always protected and secure.

4.2 Experience Segments


Although themes are based on research with thousands of business users and apply to all enterprise products, how they apply to individual segments may vary. Segments provide target users for product teams to help them design for or tailor the experience to a particular audience. Six segments were identified, with some segments further decomposed into sub-segments based on the strength of within-segment differences. Personas put a face to the experience segments, with each segment having a persona family that represents it. Supporting collateral for the personas summarizes their goals and needs, key tasks and behaviors, pain points, usage of enterprise products, and relative priority of different experience qualities. The persona collateral ranges from posters and day-in-the-life stories to trading cards.

4.3 Experience Influencers


Experience influencers help product teams assess the relative contribution of core elements of the enterprise world (e.g., IT, HR, physical workspace) to the holistic
enterprise experience and detail key pain points associated with a particular element. They also help teams identify potential partners for improving the experience and assess the potential impact of design changes.

4.4 Core Activities

Core activities provide product teams with specifics of how employees use and interact with enterprise products to accomplish shared tasks common to all employees, and provide teams with high-level journey maps for various key activities such as "learn" or "find information." The activity journey maps also describe key segment differences relative to each activity and provide a jumping-off point.

5 Turning Understanding into Experience Transformation

An early adopter of the framework within Intel was the collaboration portfolio, which comprises a set of technologies that help Intel employees collaborate, including social media, meeting tools, meeting spaces, and shared virtual workspaces. The impact of the framework has been wide-ranging, from setting portfolio UX strategy to vendor selection to helping an agile product team move faster. The portfolio team evolved our original approach by combining use of the experience framework with elements of presumptive design [8]. The experience themes, along with what was already known about a particular audience (e.g., field sales), formulated the starting "presumptions" on which designs were based. These starting presumptions were then validated using low-cost methods and prototypes. In this section, we provide an overview of how the framework aided the team.

5.1 Providing a Future Vision of Collaboration

The framework provided significant insights about what Intel employees need from the enterprise collaboration experience. We provided teams with experience maps of the employee vision of the future for enterprise collaboration. The key needs included:

• Seamless integration of tools, with a single place to access collaborations
• Consumer-grade experiences and increased sense-making across activity streams
• Easy-to-find experts through personalized recommendations and visible connections
• Increased personal interactions, with more in-person collaboration, higher-fidelity virtual alternatives, and increased access to video

5.2 Defining Portfolio Strategy


The portfolio team began by identifying intersections between the framework and learnings from deep-dive research done by portfolio UX teams. They posted a giant mind map of the experience themes on the wall and, using sticky notes and highlighters, added data from the deep-dive research. The team then used
the mind map and the user needs defined by experience qualities and elements to get the design process started. They isolated the elements relevant to collaboration and completed a heat map to identify how well today's capabilities meet the target requirements for each collaboration element and how important each of those elements is to enterprise users. Answers to these questions helped the team set their UX roadmap and prioritize where to focus first. For example, an element critical to initiating collaboration is "Bump into Interesting," which is about helping users serendipitously bump into information or people that are interesting and useful to them. In this case, the team found the portfolio didn't have solutions that met the target requirements.
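The heat-map prioritization can be sketched as an importance-weighted gap score. All element names, scales, and scores below are invented for illustration; the paper does not specify a scoring formula:

```python
# Each collaboration element: (importance to users, how well current
# capabilities meet the target), both on an assumed 0-10 scale.
elements = {
    "Bump into Interesting": (9, 2),
    "Shared Workspaces":     (7, 6),
    "Find an Expert":        (8, 3),
}

# Gap score: high importance combined with a poorly met target ranks first.
def gap_score(importance, met):
    return importance * (10 - met)

ranked = sorted(elements.items(),
                key=lambda kv: gap_score(*kv[1]),
                reverse=True)
# "Bump into Interesting" scores 9 * (10 - 2) = 72 and ranks first,
# matching the example in the text where it failed to meet target requirements.
```

Any monotone combination of importance and unmet need would serve the same purpose; the product form is just one simple choice.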

5.3 Speeding Agile Product Design

Both the framework and deep-dive research repeatedly highlighted expert or expertise finding as a key need. The agile-based project team used the experience themes as a starting point for their efforts to rapidly go from concept discussions to prototype. During the initial team kickoff, the team found the strongest affinity with the Connect Me and Feed Me themes, which focus on the need to quickly find information and connect employees with expertise. The associated element cards were a starting point for the team's Vision Quest activities and were a catalyst in helping the team form a design hypothesis around core presumptions of what features and capabilities should be included in the solution. Many of the early presumptions the team captured were based on previously gathered user data and the experience elements.
A series of contextual scenarios was written from the design hypothesis, which were then organized to form a high-level "narrative," or persuasive story, of the product vision. These were then documented in a storyboard. The experience themes inspired many of the design patterns reflected in the proof-of-concept (POC) prototypes, and the storyboard contained a swim lane the team used to map the experience themes. To validate design presumptions, several intervals of presumptive design tests were conducted with end users in tandem with design activities. Features not validated as "valuable" by users were removed from the storyboard and product vision. The vision iteratively became more defined and evolved into a "lightweight" clickable prototype used to engage stakeholders and the technical team in feasibility discussions.

6 Discussion

The experience framework is an innovative way to represent UX research in a form that is consumable within the enterprise. It provides a foundational understanding of the needs of different kinds of employees in spaces that lack the time or resources to invest in more traditional user research. It also mitigates some of the key risks associated with presumptive design [8] by providing a larger holistic look at the experience space and an overarching prioritization that helps prevent teams from focusing on the wrong solution or ignoring the needs of the larger experience. By taking
a "big data" approach to UX and creating an over-arching experience framework that represents the core wants and needs employees have of enterprise products and services, we helped those responsible for setting enterprise strategy to incorporate UX more easily in their decision process. By mapping the intersection between experience qualities and elements against portfolio and product roadmaps, teams could identify potential gaps between the planned and desired experience of their products.
Over time, the framework has evolved into a common language and shared understanding of users and design needs that defines the One IT experience vision, spanning the many products and services provided by Intel IT. The supporting collateral helps set enterprise strategy and provides re-usable templates project teams can quickly adapt for their purposes. This shared vision is transforming enterprise products and services, resulting in a more cohesive One IT experience and increased team velocity. The large-scale, layered storytelling approach made the framework resonate with the larger organization. It allowed framework users to explore the underlying data below the themes to find their own meaning. It also seeds design investigations of features and possible interaction models. This approach to socializing and utilizing the experience framework provides a practical model for the creators of other types of experience themes to more quickly trigger UX transformation in their own spaces.
When working with teams, we discovered creative ways to utilize "big data" beyond its original role in deriving the framework. By intersecting the over-arching user data with data specific to an enterprise product or service, we discovered new insights about user expectations of that product and how it needed to align with the over-arching IT experience. The teams gained a much-needed understanding of how their users utilized other enterprise products, and their preferences, which helped them more easily make decisions to ensure alignment with the overall user vision.

6.1 Key Learnings


The experience framework is being used across various levels of enterprise products and services to feed UX strategy, technical architecture, and the design of specific products. As a result, new learnings have emerged about how to most effectively integrate the framework into portfolio strategy and design. Key lessons learned include:

• Teams should use the qualities to evaluate their own product at the start of using the framework; it is key to learning and provides a baseline for improvement.
• Experience quality cards are paramount for setting vision and strategy. They spark conversation and provide easy functionality checklists to feed UX roadmaps.
• Product teams need experience element cards that provide user requirements, scenarios, and key audience differences once they move from strategy to design.
• Sample designs that embody the experience themes and elements are important to spark new ideas or conversations about how a pattern can be improved.
• Different people have different learning styles, and different teams have different ways of working together. If collateral doesn't resonate: iterate, iterate, iterate.
• Generating design ideas is often fastest with hard copies of element cards and other experience theme collateral, so participants can "re-use" collateral elements in discussions and prototyping.

6.2 Key Challenges


There are multiple challenges with an effort of this size. Discovering great experience solutions is as much about collecting and analyzing user data as it is about transforming an organization to actually use the data effectively. It's a journey, not a silver bullet. Transforming an organization, a team, or an individual to be "experience driven" doesn't happen overnight, and it doesn't happen just because you have a framework. It's a collaborative process that requires joint partnership and extensive collaboration.

Making the Story Consumable. The size of our dataset made keeping the UX story consumable extremely difficult. How do you turn mountains of user data into a framework that can be digested by a diverse audience? We answered this challenge by developing a multi-layered storytelling approach that included a variety of collateral forms, from vision books to quality cards, element cards, and reference sheets. We also created job aids, including an evaluation spreadsheet that allows teams to grade their solution against the framework. Even with the wide range of collateral available, teams can still find it unwieldy to work with, especially in the beginning. Newcomers can easily lose their way in the multi-layered story, so we work directly with teams to help them understand the framework.

Exponentially Increasing Big Data. In the two years since the introduction of the framework, the underlying data set has grown by 275% and the supporting storytelling collateral by 870%. That's a lot of information for anyone to digest and maintain. While the challenges of use are large, the value of incorporating additional data in the framework is immense. Increasing the variety of data allows us to identify correlations of activities, refining the enterprise footprint and increasing our understanding of user behavior and needs. Lastly, although collateral growth is beginning to stabilize based on active use by Intel IT project teams, the underlying data set is expected to grow even more rapidly in coming years as analysis tools become capable of handling even larger data sets. Only about 30% of the available user transactional data has been incorporated in the current framework, and the amount of data continues to increase daily, further exacerbating the challenges of re-use and sense-making for project teams.

Enabling Social Storytelling and Knowledge Sharing. The framework and collateral put a face to the big data and provide an approach to defining a unified enterprise experience, but they are merely the tip of the iceberg of potential insights that could be derived from the underlying data set. Today, storytelling is primarily limited to the research team that produced the experience framework or the UX professionals who work directly with them. The rich data available on individuals, specific job roles, different organizations, and geographic areas makes possible a great many more stories than our current collateral covers. The lack of "self-service" environments limits broader utilization of the data.

The majority of our collateral resides in flat files or posts in social media forums. The framework has not yet been brought to life online, and no easy methods exist for teams to share outside of forum posts. Until the structure is available online and annotatable, widespread sharing and the associated efficiencies are unlikely to occur. We need to enable project teams not only to re-use existing knowledge but also to add to it with detailed stories of use and new data.

Experience-Driven Transformation Is a Journey. Even with an experience framework, experience-driven transformation is a journey, and what works for one team or individual may not work for another. A corporation's internal culture can also inhibit knowledge sharing if there is internal competitiveness and a reluctance to share information such as datasets and experience artifacts (e.g., personas, scenarios, or design patterns). Transformation takes time, resources, and a willingness to collaborate with the rest of the organization. Every team or individual starts from a different point of faith and understanding of what UX is and how to do it. We have all had to transform our thinking, approach, decisions, and actions, from how we do user research to individual decisions made on enterprise projects all the way up to architectural and overall strategy decisions for IT. We celebrate the small and big wins where we see the framework used to drive strategy and design. We never expected Intel IT to shift overnight, and the journey is still in progress, but there have been big shifts. As researchers, we must maintain agility and flexibility with the teams while making sure they understand the hard work ahead.

7 Conclusion

In a world where businesses are constantly expected to move faster and workers become increasingly sophisticated in their expectations of technology, an experience framework can help speed up the business and become a force for UX transformation. This hybrid approach is a fundamental shift in the management of the business experience from the perspective of UX and enterprise IT. By aggregating big data and the outputs from more traditional UX together, UX teams can more quickly seed UX within businesses. By connecting user stories to big data, we can understand whether our insights from qualitative studies generalize to larger groups of business users. Presenting big data in ways typically used by traditional UX (e.g., personas) can make it more accessible. Together, big data and UX data are more powerful.
The experience framework defines interaction norms across enterprise tools and serves as design guard rails to help developers create better interfaces. A common framework and language understood by all results in more productive team discussions that generate strategy and design ideas faster. However, transformation using the framework is possible only when the findings are communicated in various ways so that they resonate with the broad base of people who work together to define and develop the workplace experience. A developer will look at the framework collateral through a different lens than a business analyst or a service owner. Furthermore, transformation is a participatory process; it is not something that can be done by merely
throwing the framework over the wall to the business. For change to happen, all levels of the organization must participate in the conversation and take ownership of how their own role impacts the enterprise experience. The road to transformation that is paved by an enterprise framework is often hard, uphill, and fraught with challenges, but for those who take this journey, an experience framework can help seed a shared vision and light the way for the action needed to bring the vision to life and significantly improve the business user experience.

Acknowledgements. We would like to thank the collaboration portfolio, especially Anne McEwan, Susan Michalak, and Cindy Pickering. We would like to thank Jayne May for helping evolve the collateral. And lastly, thank you to Linda Wooding, who led Intel IT's UX team; without her support this work would not have been possible.

References
1. Madden, S.: How Companies like Amazon Use Big Data to Make You Love Them,
http://www.fastcodesign.com/1669551/
how-companies-like-amazon-use-big-data-to-make-you-love-them
2. Tuch, A., Trusell, R., Hornbaek, K.: Analyzing Users’ Narratives to Understand Experience
with Interactive Products. In: Proc. CHI 2013, pp. 2079–2088. ACM Press (2013)
3. McCreary, F., Raval, K., Fallenstein, M.: A Case Study in Using Macroergonomics as a Framework for Business Transformation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50(15), 1483–1487 (2006)
4. Kleiner, B.: Macroergonomics as a Large Work-System Transformation Technology. Human Factors and Ergonomics in Manufacturing 14(2), 99–115 (2004)
5. McCreary, F., McEwan, A., Schloss, D., Gómez, M.: Envisioning a New Future for the Enterprise with a Big Data Experience Framework. In: Proceedings of the 2014 World Conference on Information Systems and Technologies (2014)
6. Young, I.: Mental Models: Aligning Design Strategy with Human Behavior. Rosenfeld
Media (2008)
7. Beyer, H., Holtzblatt, K.: Contextual Design. Interactions 6(1), 32–42 (1999)
8. Frishberg, L.: Presumptive Design: Cutting the Looking Glass Cake. Interactions 13, 18–20
(2006)
The Role of Human Factors in Production Networks
and Quality Management

Ralf Philipsen1, Philipp Brauner1, Sebastian Stiller2, Martina Ziefle1, and Robert Schmitt2

1 Human-Computer Interaction Center (HCIC)
2 Laboratory of Machine Tools and Production Engineering (WZL)
RWTH Aachen University, Germany
[email protected]

Abstract. Quality management in production networks is often neglected. To raise awareness of this subject, we developed an educational game in which players are responsible for managing orders and investments in quality assurance of a manufacturing company. To understand individual differences in performance and playing strategy, we conducted a web-based study with 127 participants. Individual performance differences were discovered: players who closely observe the company data and frequently modify order levels and quality investments perform significantly better. Furthermore, we found that the game model works and that awareness of quality assurance increases through interaction with the game. Hence, the game is a suitable educational tool for teaching decision making in quality management.

Keywords: Quality Management, Decision Support, Human Factors, Production Networks, Personality Traits, Game-based Learning.

1 Introduction

Many of today’s products are built from a large number of components that are deli-
vered by a number of different suppliers. To enable a company to profitably manufac-
ture its products, an efficient and viable production network is required. However, in
today’s globalized world these networks have reached a very high complexity [1].
Decision makers in current production networks need to have a comprehensive over-
view of the interrelationships of their company, the suppliers, and customers of many
different products and components. The arising problems are twofold: Not only do
the decision makers have to ensure that enough components are available in the pro-
duction process, but also a sufficient quality of the components has to be assured.
Modern Enterprise Resource Planning systems support people in their decision
making. However, the huge quantity of presented and retrievable information might
lead to information overflow and users who might focus on the wrong parameters,
leading to inefficiencies, low product quality, or lower profits in the production
networks. Human behavior in production networks and quality management is insuf-
ficiently explored. In order to study decision making processes in quality management

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 80–91, 2014.
© Springer International Publishing Switzerland 2014
and to develop tools that can give suitable support to decision makers, we developed a
web based simulation that puts users into the role of decision makers.
This publication serves a dual purpose: First, we present the design and implemen-
tation of a simulation game for quality management in production networks. Second,
we analyze the effect of human behavior and characteristics in the developed game as
well as the consequences for real world companies.

2 Development of a Game for Quality Management

Simulations are experiments within a controlled environment, thereby reducing
aspects of the real world in terms of structure and behavior. The behavior of complex
systems is neither predictable nor completely understandable. The combination of
human intuition and analytical modeling is utilized as a model for decision making in
complex systems such as production and supply chain networks [2], [3], [4].
In order to train and support decision making, simulation models and serious
games serve as ideal training environments, in which managers are confronted with
challenging situations that require fast and important decisions. These games support
the awareness of typical problems in production, logistics, or quality management,
e.g., the Beer Distribution Game, Goldratt’s game [5], [6], or KANBAN simulations.
However, no games exist that address quality management in production networks.
The Quality Intelligence Game (Q-I Game) is a turn-based game in which players
have to fulfill the customer demands by procuring and processing vendor parts into a
given product. In contrast to the Beer Distribution Game, players also have to take
quality aspects into account. Studies suggest that quality management influences prof-
it in two different ways: First, good quality management increases company profits
through higher product quality, resulting in higher customer satisfaction and larger
sales volumes. Second, process optimization as a part of quality management leads to
lower variable and fixed costs. Therefore, a trade-off between product quality and its
costs is required [7].

Fig. 1. Principle of the Q-I-Game

The Q-I game model is designed around three pivotal decisions (see Figure 1 for a
schematic representation). First, players have to invest in the inspection of incoming
goods. Second, players need to control the investments in their company’s internal
production quality. Third, similar to the Beer Distribution Game, players need to
manage the procurement of vendor parts. The players have to find an optimal trade-
off between these three dimensions in order to make the highest profit. The influences
of these dimensions on the company’s profit are explained in the following.
The first dimension contains the inspection planning and control of supplier parts,
including complaint management between the manufacturer and its supplier.
Inspections at goods receipt can evoke ambivalent attitudes among quality and pro-
duction managers. While the inspection itself is not a value-adding process and hence
a driver of variable and fixed production costs, inspections give the managers the
opportunity to protect their production systems from faulty parts and goods. They
also facilitate supplier evaluation and development, since the quality of supplied parts
and goods is measured.
The production quality dimension takes the production and final product quality
of the manufactured goods into account. Investments in production quality increase
costs, but they decrease the number of customer complaints.
To assure a continuous production, the player has to procure necessary parts from
the supplier. Contrary to the Beer Distribution Game, the customer demand is kept
constant within the Q-I game in order to keep the focus on the decisions of quality
management. Nevertheless, players have to consider scrapped parts due to low pro-
duction quality or blocked parts due to poor supplier product quality in their orders.
The Q-I game gains complexity through the introduction of random events. First,
the quality of the vendor parts can change drastically. Second, the internal production
quality can change. Possible reasons are broken machines, better processes, failures in
the measurement instruments, etc. Third, the customer demand may shift.
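The month-by-month trade-off between procurement, inspection, and quality investments described above can be condensed into a single profit function. The following is a minimal, illustrative sketch: all prices, cost rates, and quality formulas are our own assumptions, not the actual Q-I-Game model.

```python
def simulate_month(order_qty, inspection_invest, quality_invest,
                   supplier_quality=0.9, demand=100):
    """One simplified Q-I-Game month (illustrative only; the real game
    model and its parameters differ)."""
    # Inspection: a higher investment catches a larger share of faulty parts.
    catch_rate = min(1.0, inspection_invest / 100.0)
    faulty = int(order_qty * (1.0 - supplier_quality))
    blocked = int(faulty * catch_rate)          # blocked at goods receipt
    usable = order_qty - blocked

    # Internal production quality rises with investment; the rest is scrapped.
    production_quality = min(0.99, 0.7 + quality_invest / 500.0)
    good_parts = int(usable * production_quality)

    sold = min(good_parts, demand)
    revenue = sold * 10.0                       # unit price (assumed)
    costs = order_qty * 4.0 + inspection_invest + quality_invest
    complaints = int(sold * (1.0 - production_quality))  # faulty parts shipped
    return revenue - costs - complaints * 5.0   # complaint penalty (assumed)
```

Played over 24 such months, the trade-off becomes visible: raising either investment increases this month's costs, but protects production from faulty parts and reduces customer complaints.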

3 Evaluation of the Q-I-Game

After implementing the Q-I-Game with Java EE 7, it was used in a study to validate
the game model and research possible effects of human factors on players’ perfor-
mances within the game. In the following sections, we present the defined variables,
the experimental setup, and the sample of the study.

3.1 Independent Variables

In order to understand how decision making in quality management is influenced by
human factors, several demographic data and personality traits were gathered. Age,
gender and educational qualifications were collected as independent variables. In
addition, participants were asked to assess their previous experiences with quality
management, production management, supply chain management, logistics and
business studies. Furthermore, we measured the technical self-efficacy with Beier’s
inventory [8], a method already proven to show performance in computer-based
supply-chain-management simulations [9]. In order to analyze potential effects of
personality, we used a version of the five factor model shortened by Rammstedt [10]
to identify the participants’ levels of the personality traits openness, conscientiousness,
extraversion, agreeableness and neuroticism. Furthermore, previous studies
revealed that performance regarding supply chain management was affected by their
risk-taking propensity; therefore, we used the “General Risk Aversion” inventory by
Mandrik & Bao [11] as well as the “Need for Security” inventory by Satow [12] to
measure the participants’ willingness to take risks. Xu et al. showed that the personal
attitude towards quality contributes to Total Quality Management practices [13];
therefore, we measured the quality attitude with a newly constructed Quality Attitude
Inventory, which consists of 8 items. 6-point Likert scales were used for all measure-
ments.

3.2 Experimental Variables

In order to analyze the effects of complexity on players’ performances, we imple-
mented two in-game events to vary the degree of difficulty. One was a potential spon-
taneous drop of the supplier’s quality by 30% in the tenth month. The other was a
possible drop of the internal production quality in the same month. The occurrence of
both events was fully randomized between both the participants and the two rounds
played by each player.
The availability of quality signal lights was varied as a within-subject variable;
accordingly, all participants played one round with and one without the signal lights.
Whether the lights were shown in the first or the second round was randomized.
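The randomization scheme described above can be sketched as follows. This is an assumed implementation, not the authors' actual code: the two quality-drop events are fully randomized per round, and the signal-light round is counterbalanced within subjects.

```python
import random

def assign_conditions(participant_id):
    """Sketch of the experimental randomization (assumed): quality drops
    are randomized per round; signal lights appear in exactly one round."""
    rng = random.Random(participant_id)  # seeded for reproducibility
    lights_first = rng.random() < 0.5
    rounds = []
    for round_no in (1, 2):
        rounds.append({
            "round": round_no,
            "supplier_quality_drop": rng.random() < 0.5,    # 30% drop, month 10
            "production_quality_drop": rng.random() < 0.5,  # drop in month 10
            "signal_lights": lights_first if round_no == 1 else not lights_first,
        })
    return rounds
```

Seeding with the participant id keeps each participant's condition assignment reproducible across sessions, which is a common design choice in web-based experiments.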

3.3 Dependent Variables

Detailed logs of investments, incomes, costs and profits of each simulated month
were used to analyze the players’ behaviors within the game. The achieved profit was
used as the central measure for the players’ performances. In addition, several pieces
of information about the players’ interactions with the game were recorded: duration of
reading the instructions, time to complete a month as well as a round, the number of
help accesses, and the number of adjustments to investments and orders.

3.4 Ranking Tasks


In addition, the participants were asked to rank factors of data provisioning and cor-
porate strategy according to their importance for a successful performance in the
game and for an economical production. They were asked to perform these tasks both
before and after the game to discover possible effects on participants’ opinions caused
by playing the game.

3.5 Experimental Setup

The experimental setting consisted of our web-based quality management simulation,
which was embedded between the pre- and post-part of an online survey. Announce-
ments on bulletin boards, social networks, emails and personal invitations were used
to recruit participants for the study. Each had to play two rounds of 24 months each. 219
people started the online pre-survey; 129 played both rounds of the game and finished
the post-survey. The obtained dataset was revised to eliminate players who did not
play seriously, i.e., who placed excessive investments or orders or did not change the
settings at all. Therefore, two cases had to be removed for not performing any adjust-
ment during both rounds. Accordingly, the final revised dataset contained 127 cases.
Although the participants had to play 24 simulated months per round, only the data of
up to and including month 20 were used in the analysis to exclude possible changes of
players’ strategies late in the game, like emptying the warehouse completely.

3.6 Participants

97 (76.4%) of the participants were male, 30 (23.6%) were female. They were
between 17 and 53 years of age. The mean (M) age was 27.7 years (SD 7.2 years).
58.6% (60) of the participants reported a university degree as their highest achieved
level of education, 39.7% (50) participants had a high school diploma, and 6.3% (8)
had vocational training. The average level of previous experience regarding the sub-
ject matter was rather high: 67.7% (86) had previous knowledge in quality manage-
ment, 65.9% (83) in business studies and 57.5% (73) in production management.
The participants’ average personality traits regarding the five factor model were
comparable to the reference sample of Rammstedt [10] with the exception of a
slightly lower level of agreeableness. The only significant difference between men
and women regarding this model was found at the neuroticism scale (F(1, 125) =
7.498, p = .007 < .05*): men showed lower average levels (M = 1.99, SD = 0.97) than
women (M = 2.58, SD =1.22). In addition, gender related differences were found
regarding all three inventories of needs (recognition, power, security) (p < .05* for all
needs), technical self-efficacy (p = .000 < .05*), willingness to take risks (p = .002 <
.05*) and performance motivation (p = .000 < .05*). With the exception of the need
for security men showed higher average levels in all aforementioned scales. In con-
trast, there was no significant difference found regarding the attitude towards quality.

4 Results

The result section is structured as follows: First, we will present the impact of the
game mechanics and instructions on the player’s performance. Second, we will have a
closer look at the impact of user diversity. Furthermore, we will present the effects of
behavior and strategies within the game. Last, we will report the ranking task results.
The data was analyzed by using uni- and multivariate analyses of variance
(ANOVA, MANOVA) as well as bivariate correlations. Pillai’s trace values (V) were
used for significance in multivariate tests, and the Bonferroni method in pair-wise
comparisons. The criterion for significance was p < .05 in all conducted tests. Median
splits were used for groupings unless the factor offered a clear dichotomy.
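The median splits used for grouping can be sketched in a few lines. The tie-handling rule (values equal to the median go to the "low" group) is our assumption; the paper does not specify it.

```python
from statistics import median

def median_split(scores):
    """Median split for grouping (sketch): participants scoring above the
    sample median form the 'high' group, the rest the 'low' group."""
    m = median(scores.values())
    return {pid: ("high" if s > m else "low") for pid, s in scores.items()}
```

For example, splitting profit scores {"p1": 1, "p2": 2, "p3": 3, "p4": 4} on the median 2.5 assigns p1 and p2 to the low group and p3 and p4 to the high group.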
Unless otherwise described, the effects in the following are valid for both rounds of
the game. However, for clarity reasons, only the effect values of the second round will
be reported. All profit related values like means and standard deviations will be re-
ported in thousands for similar reasons; for computations the exact values were used.

4.1 Effect of Game Conditions

As expected, the participants made the highest average profit (M = 148.5, SD =
128.0) on the condition that there was no spontaneous drop of supplier’s and internal
production’s qualities during the game. The mean profit in games with a drop of sup-
plier quality was only slightly lower (M = 132.9, SD = 81.2). In contrast, average
profits were considerably lower (M = 11.5, SD = 236.8) with drops in either both
supplier’s and internal production’s quality or in internal production’s quality only (M
= -1.3, SD = 316.4), as shown in Table 1.

Table 1. Achieved average profits under different game conditions

                                      Drop of supplier's quality
                                           no         yes
Drop of internal production's     no     148.5       132.9
quality                           yes     -1.3        11.5

A two-way ANOVA revealed that the drop of internal production quality had a
significant effect on players’ average profits (F(1, 122) = 12.342, p = .001 < .05*); in
particular, players on average performed significantly worse under game conditions
containing the aforementioned drop. On the other hand, the spontaneous drop of sup-
plier’s quality had no significant influence on average profits.
With both possible quality drops controlled, the presence of signal lights had no
significant effect on players’ average profits (p = .537, n.s.). Also, the impact of sig-
nal light availability within any of the four possible game conditions resulting from
quality drop combinations did not reach the criterion of significance. Both the pres-
ence of signal lights and the quality drops of supplier and internal production as expe-
rimental variables will be controlled in the computations of the following sections.

4.2 Effect of Repetition

There was a strong correlation between players’ average profits in the first and in the
second round (r=.730, p=.000 < .05*); accordingly, participants who achieved a
high/low profit in the first round, on average achieved the same level of profit in the
second round. Furthermore, players’ mean profit increased significantly between the
first (M = -19.0, SD = 258.5) and the second round (M = 76.6, SD = 218.3) with
Pillai’s trace value (V) = 0.23, F(1, 126) = 36.6, p = .000 < .05*.
4.3 Effect of User Diversity

Several aspects of user diversity have been studied for potential effects on players’
performances within the game. First, male participants made a higher average profit
(M = 104.9, SD = 187.1) than women (M = -14.7, SD = 282.5). However, the effect is
only significant for the second round (F(1, 124) = 7.160, p = .008 < .05*), not the first
round (F(1, 124) = 3.235, p = .074, n.s.). Second, there was no correlation between
age and the player’s profit (r = .057, p = .553, n.s.). Previous experience did not
influence the game performance in general: neither knowledge in quality management (p =
.087, n.s.) nor business studies (p = .070, n.s.) had a significant effect on performance
within the game with game conditions controlled. Although participants with a high
level of domain knowledge performed better under game conditions containing the
aforementioned drop of internal production’s quality (M2QM = 86.8, SD2QM = 150.3)
than players with low knowledge (M2QM = -59.5, SD2QM = 333.8), this effect was only
significant in the second round of the game (F(1, 58) = 4.928, p = .030 < .05*).
In addition to the customary demographic data several personality traits were ana-
lyzed. First, none of the “Big Five personality traits” of Rammstedt et al. [10]
impacted the players’ performances significantly (p > .05, n.s. for all indexes).
Second, and contrary to several previous studies, there was no significant relation
between technical self-efficacy and achieved average profit (r = .163, p = .084, n.s.).
Third, there was no effect of the willingness to take risks on players’ performances.
Neither the “General Risk Aversion”-index of Mandrik & Bao [11] (r = -.174, p =
.065, n.s.) nor the “Need for Security”-index of Satow [12] (r = .054, p = .573, n.s.)
correlated with the achieved profits. Moreover, the personal attitude towards quality
did not correlate with participants’ average performances within the game (r = .109, p
= .248, n.s.).

4.4 Effects of Behavior within the Game

Two main factors were analyzed regarding the players’ behaviors within the game.
First, the duration of playing correlated with players’ average profits in the first round
(r = .301, p = .001 < .05*). Therefore, spending more time on a game on average led
to significantly higher profits in the first round. However, the effect was
no longer significant in the second round (r = .142, p = .112, n.s.).
Second, the number of adjustments correlated with players’ performances
(r = .303, p = .001 < .05*). Users who adapted their investments and orders frequently
achieved higher mean profits. A per-month analysis revealed that the average number
of adjustments made by participants who achieved a high profit exceeded the adjust-
ments of low performers in every month, as shown in Figure 2. Moreover, there was a
peak in high performers’ adjustments in month 11 as a reaction to the spontaneous
drops of the supplier’s and/or the internal production’s quality in month 10. This
change in interaction between month 10 and 11 is significant for high performers
(V = .164, F(1, 62) = 12.140, p = .001 < .05*). In contrast, there was no significant
change in the adaption behavior of low performers at that time (V = .001, F(1, 63) =
0.088, p = .768, n.s.). Also, there is a medium correlation between the average number of
adjustments performed in the first and the second round (r = .580, p = .000 < .05*). In
particular, players who frequently/rarely adapted their investments and orders in the
first round, acted similarly in the second round.

Fig. 2. Average adjustments per month of high and low performers in the second round
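The per-month adjustment counts behind Figure 2 can be derived from an interaction log. A sketch, where the (player, month) log format is our assumption:

```python
from collections import Counter

def adjustments_per_month(event_log, months=20):
    """Count adjustment events per simulated month across all players.
    `event_log` is a list of (player_id, month) tuples (assumed format)."""
    counts = Counter(month for _, month in event_log)
    return [counts.get(m, 0) for m in range(1, months + 1)]
```

Computing this separately for the high- and low-profit groups from the median split yields the two curves in Figure 2, including the month-11 peak of the high performers.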

4.5 Effects of Strategy


There were several effects on players’ performances regarding the used game plans.
First, participants who assessed their behavior in the game as highly conscientious
made a higher profit (M = 135.0, SD = 111.5) than those with low conscientiousness
values (M = 38.4, SD = 261.1). This effect was significant (F(1, 123) = 4.987, p =
.027 < .05*). Second, the stated level of forward planning in game strategy correlates
with average profits (r = .184, p = .040 < .05*): Users who stated their strategy was
dominated more by forward planning than by reacting, on average made higher prof-
its. Third, the level of risk taking in the game plan negatively correlated with players’
average performances (r = -.217, p = .015 < .05*), i.e., players who claimed to have
taken more risks than they would in real life made significantly lower profits. Also,
there was a low correlation between participants’ profits and the tendency to keep a
small safety buffer of parts readily available (r = .273, p = .002 < .05*).
Fig. 3. Means (SD) of profit regarding strategies with different levels of quality orientation
Most of all, the level of quality orientation in players’ strategies correlated signifi-
cantly with the average performances (r = .370, p = .000 < .05*); therefore, partici-
pants with a quality-oriented strategy on average performed better (M = 136.1, SD =
96.3) than participants who were inclined to ignore quality aspects (M = 21.1, SD =
280.4), as shown in Figure 3.

4.6 Requirements for an Economic Production

Participants on average ranked “Increasing economic efficiency” as the most important
requirement for an economic production (M = 2.1, SD = 1.3) before they played the
game, followed by “Increasing quality of own production” (M = 2.2, SD = 1.1), “In-
creasing supplier’s quality” (M = 3.3, SD = 1.1), “Optimizing stock” (M = 3.7, SD =
1.2) and “Decreasing delivery time” (M = 3.8, SD = 1.2). Although there is an abso-
lute ranking, which results from comparing the aforementioned means, there is neither
a significant difference between the first two ranks (p = 1.00, n.s.) nor between the
ranks 3 to 5 (p > .05, n.s. for all comparisons). The positions of “quality of own pro-
duction” and “economic efficiency” had been switched in post-game ranking, while
there was no difference regarding the absolute ranks 3 to 5, as shown in Table 2.

Table 2. Ranking, means, and standard deviations of requirements for an economical
production (left) and data requirements for successful performance (right) (ranked after
playing)

Rank  Requirement                            M    SD     Rank  Requirement              M    SD
1     Increasing quality of own production   1.8  0.9    1     High quality of data     1.8  0.9
2     Increasing economic efficiency         2.8  1.5    2     Good data visualization  2.3  1.1
3     Increasing supplier's quality          2.9  1.2    3     Decision support         2.8  1.2
4     Optimizing stock                       3.3  1.2    4     High data volume         3.8  1.2
5     Decreasing delivery time               4.2  1.1    5     Low data volume          4.3  0.9

Pairwise comparison of all factors revealed that there is a significant difference
between the average ranking of “Increasing quality of own production” and all other
factors (p = .000 < .05* for all comparisons). Similarly, the ranking of “Decreasing
delivery times” averagely differs from each of the other factors with p = .000 < .05*.
On the other hand, there was no significant difference between the rankings of the
remaining items (2–4). In particular, while in pre-game ranking there were only signifi-
cant differences between ranks 1 and 2 on the one hand and ranks 3 to 5 on the other
hand, there is a significant distinction between three levels of importance in post-game
ranking, mainly caused by a higher average ranking of the importance of one’s own
quality (Pillai’s trace value (V) = 0.87, F(1, 123) = 11.695, p = .001 < .05*) and a lower
ranking of shorter delivery times (V = 0.81, F(1, 123) = 10.848, p = .001 < .05*) after
playing the game.
4.7 Requirements for Data Quality

The participants also had to rank different requirements regarding their demands on
the provision of data. There was no significant difference in the average rankings of
any of the factors before and after playing the game (p > .05, n.s. for all pre-post fac-
tor pairs); therefore, the absolute positions were equal in both pre- and post-game rank-
ing. Participants identified the data quality as the most important aspect (M = 1.8, SD
= 0.9), followed by the visualization of data (M = 2.3, SD = 1.1), decision support (M
= 2.8, SD = 1.2) and the volume of data, as shown in Table 2. Pairwise comparison
revealed that there is no significant difference between the average rankings of “Good
data visualization” and “Decision support” (p = .059, n.s.). In contrast, for all other
comparisons of two factors the criterion of significance (p < .05 for all comparisons)
was reached.

5 Discussion

Regarding the technical factors influencing game complexity, we learned that the easiest
condition is the one without drops in either the supplier’s quality or the internal pro-
duction quality. To our surprise, however, we found that the most difficult condition
to play is the one in which only the internal production quality drops while the suppli-
er’s quality stays constant. Counterintuitively, this condition is even more difficult to
play than the condition in which both qualities drop. We suspect that to be the case,
because the consequences of the quality drops are easier to notice within the company
dashboard, as the number of returned parts increases and the incoming quality de-
creases (two visible changes), while only one measure changes if only the production
quality decreases.
Interestingly, the display of traffic lights indicating the supplier’s quality and the
internal production quality did not influence the decision quality of the players and
the performance within the game. Interviews with players after the game suggest that
players had difficulties understanding the correct meaning of the traffic signals.
While the investigation of the game mechanics yielded clear findings, the search
for human factors that explain performance was only partially successful in this study.
We learned that underlying factors exist that explain game performance, as players who
did well in the first round of the game also did well in the second round (i.e., a high
correlation of the performances of the first and second rounds of the game). However,
none of the variables assessed prior to the interaction with the game explained game
performance with adequate accuracy. Surprisingly, the positive impact of high tech-
nical self-efficacy on performance [9] could not be replicated within this study. None-
theless, players with good performance can be differentiated from players with bad
performance when in-game metrics or the post-game survey are considered. First,
players who achieved higher profits in the game took more time than players who
achieved lower profits. Second, good players not only spent more time on the game,
they also performed more changes within the game’s decision cockpit. Both findings are
in line with previous studies [14] and suggest that intense engagement with the
subject leads to a better performance. It is unclear however, what causes this effect:
Are people who perform better in the game just more motivated, and therefore spend
more time on the game and on changes within the game, or do better players have an
increased overview over the company data and are therefore able to adapt more quick-
ly to changing scenarios?
Using games as a vehicle to mediate learning processes is getting more and more
popular in various disciplines [15]. Our findings suggest that our game-based ap-
proach for teaching fundamentals of quality management also works very well. First,
we found that the game is learnable and that the player’s performance increases from
the first to the second round of the game, showing that the players gained expertise in
making complex decisions for the simulated company. Second, the intention of the
game is to raise the awareness about quality management and shift the attention to-
wards quality management techniques within the game. After the game the players’
relative weighting of quality management was significantly higher than before the
game. Hence we can conclude that the Q-I game is a suitable tool for teaching quality
management within vocational training, university courses, or advanced training.

6 Summary, Limitations, and Outlook

Contrary to previous studies, we could not identify human factors that explain game
performance. We suspect that the small number of participants per experimental con-
dition and the large noise and spread within the data make the dataset difficult to
evaluate. In a follow-up study we will therefore reduce the number of experimental
factors and increase the number of participants per condition, assuming that this will
yield clearer results. Furthermore, the questions assessing the game strategy from the
post-game survey will be rephrased and used in the pre-game survey, as we then hope
to be able to predict game performance according to player strategy. In addition, we
assume that information processing ability is also influencing performance within the
game; hence we will closely investigate the effect of information processing capacity
and speed on the outcome of the game in a follow-up study.
The traffic signs were conceptualized to indicate the results from quality audits of
the supplying company and of the internal production quality, not as indicators that
represent current quality levels. However, many people misinterpreted these indica-
tors and assumed that they show exactly that. A future version of the decision cockpit
will therefore clarify this issue and provide both a clear indicator of the current sup-
plier quality and the current production quality, and clear indicators that
represent the results from quality audits.
The overall rating of the game was fairly positive and we found that it increased
the awareness of the importance of quality management in supply chain management.

Acknowledgements. The authors thank Hao Ngo and Chantal Lidynia for their sup-
port. This research was funded by the German Research Foundation (DFG) as part of
the Cluster of Excellence “Integrative Production Technology for High-Wage Coun-
tries” [16].
References
1. Forrester, J.W.: Industrial dynamics. MIT Press, Cambridge (1961)
2. Bossel, H.: Systeme Dynamik Simulation – Modellbildung, Analyse und Simulation
komplexer Systeme, p. 24. Books on Demand GmbH, Norderstedt (2004)
3. Robinson, S.: Simulation: The Practice of Model Development and Use, pp. 4–11.
John Wiley & Sons, West Sussex (2004)
4. Greasley, A.: Simulation Modelling for Business, pp. 1–11. Ashgate Publishing Company,
Burlington (2004)
5. Kühl, S., Strodtholz, P., Taffertshofer, A.: Handbuch Methoden der Organisations-
forschung, pp. 498–578. VS Verlag für Sozialwissenschaften, Wiesbaden (2009)
6. Hardman, D.: Judgement and Decision Making: Psychological Perspectives,
pp. 120–124. John Wiley & Sons, West Sussex (2009)
7. Kamiske, G., Brauer, J.: ABC des Qualitätsmanagements, p. 24. Carl Hanser Verlag,
München (2012)
8. Beier, G.: Kontrollüberzeugungen im Umgang mit Technik [Locus of control when inte-
racting with technology]. Report Psychologie 24(9), 684–693 (1999)
9. Brauner, P., Runge, S., Groten, M., Schuh, G., Ziefle, M.: Human Factors in Supply Chain
Management – Decision making in complex logistic scenarios. In: Yamamoto, S. (ed.)
HCI 2013, Part III. LNCS, vol. 8018, pp. 423–432. Springer, Heidelberg (2013)
10. Rammstedt, B., Kemper, C.J., Klein, M.C., Beierlein, C., Kovaleva, A.: Eine kurze Skala
zur Messung der fünf Dimensionen der Persönlichkeit: Big-Five-Inventory-10 (BFI-10).
In: GESIS – Leibniz-Institut für Sozialwissenschaften (eds.) GESIS-Working Papers, vol.
22. Mannheim (2012)
11. Mandrik, C.A., Bao, Y.: Exploring the Concept and Measurement of General Risk
Aversion. In: Menon, G., Rao, A.R. (eds.) NA - Advances in Consumer Research, vol. 32,
pp. 531–539. Association for Consumer Research, Duluth (2005)
12. Satow, L.: B5T. Psychomeda Big-Five-Persönlichkeitstest. Skalendokumentation und
Normen sowie Fragebogen mit Instruktion. In: Leibniz-Zentrum für Psychol. Inf. und Do-
kumentation (ZPID) (eds.) Elektron. Testarchiv (2011), http://www.zpid.de
13. Xu, Y., Zhu, J., Huang, L., Zheng, Z., Kang, J.: Research on the influences of staff’s
psychological factors to total quality management practices: An empirical study of Chinese
manufacturing industry. In: 2012 IEEE International Conference on Management of Inno-
vation and Technology (ICMIT), pp. 303–308 (2012)
14. Dörner, D.: Die Logik des Mißlingens. Strategisches Denken in komplexen Situationen.
rororo, Reinbek (2013)
15. Schäfer, A., Holz, J., Leonhardt, T., Schroeder, U., Brauner, P., Ziefle, M.: From boring to
scoring – a collaborative serious game for learning and practicing mathematical logic for
computer science education. Computer Science Education 23(2), 87–111 (2013)
16. Brecher, C.: Integrative Production Technology for High-Wage Countries. Springer,
Heidelberg (2012)
Managing User Acceptance Testing of Business
Applications

Robin Poston1, Kalyan Sajja2, and Ashley Calvert2


1 University of Memphis
[email protected]
2 System Testing Excellence Program

Abstract. User acceptance testing (UAT) events gather input from actual
system users to determine where potential problems may exist in a new soft-
ware system or major upgrade. Modern business systems are more complex and
decentralized than ever before making UAT more complicated to perform. The
collaborative nature of facilitated UAT events requires close interaction be-
tween the testers and the facilitation team, even when located in various loca-
tions worldwide. This study explores the best approaches for facilitating UAT
remotely and globally in order to effectively facilitate geographically-dispersed
actual system users in performing UAT exercises. While research suggests user
involvement is important, there is a lack of understanding about the specifics of
how to best engage users for maximizing the results, and our study addresses
this gap. This study examines the following research questions: How should
UAT facilitators (1) schedule user participation with a minimum impact to their
regular work duties and maximum ability to be present when testing and not
be distracted; (2) enable direct interactions with users including face-to-face
conversations during the UAT event and access to user computer screens for
configuration and validation; and (3) utilize quality management software that
can be used seamlessly by all involved in UAT. To examine these questions,
we utilize Social Presence Theory (SPT) to establish a conceptual lens for
addressing these research questions. SPT supports that the communication envi-
ronment must enable people to adopt the appropriate level of social presence
required for that task. This study proposes a theoretically-derived examination
based on SPT of facilitated UAT delineating when and how facilitators should
involve actual system users in the UAT activities either through local facilita-
tion or remote hosting of UAT exercises, among other options.

Keywords: User Acceptance Testing, Social Presence Theory, Computer Mediated Conferencing, Quality Management Software.

1 Introduction

The purpose of user acceptance testing (UAT) is to gather input from actual system
users, those who have experience with the business processes and will be using the
system to complete related tasks (Klein, 2003; Larson, 1995). Actual users bring
knowledge of process flows and work systems and are able to test how the system

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 92–102, 2014.
© Springer International Publishing Switzerland 2014

meets all that is required of it, including undocumented inherent requirements, and
where potential problems may surface. UAT is a critical phase of testing that typically
occurs after the system is built and before the software is released. Modern business systems are more complex and decentralized than ever before, making UAT more complicated to perform. The global nature of commerce continues to push business
systems deployments well beyond traditional geographic boundaries. The global
nature of such deployments has created new challenges for the execution of UAT and
the effective participation of geographically dispersed actual system users. The colla-
borative nature of facilitated UAT events requires close interaction between the
testers and the facilitation team (Larson, 1995), even when located in various locations worldwide. However, current obstacles exist, such as global dispersion of the user base, travel expenses, and extended time away from regular work assignments.
This study explores the best approaches for facilitating UAT remotely and globally in
order to effectively facilitate geographically-dispersed actual system users in perform-
ing UAT exercises.
Systems development theory suggests users should be involved throughout the
development lifecycle, yet involving the users is often difficult. One study of case
organizations found different approaches and strategies for the facilitation of user
involvement (Iivari, 2004; Lohmann and Rashid, 2008). An important aspect in
human-computer interaction is usability evaluation, which improves software quality (Butt and Fatimah, 2012). User involvement occurs between industry experts who use the system and the development team, suggesting it is imperative to have senior and
experienced user representation involved (Majid et al., 2010). One study of the degree
of user involvement in the process indicates that user involvement is mainly concen-
trated in the functional requirements gathering process (Axtell et al., 1997). Software
firms spend approximately 50-75% of the total software development cost on debug-
ging, testing, and verification activities, soliciting problem feedback from users to
improve product quality (Muthitacharoen and Saeed, 2009).
Today, the distinction between development and adoption is blurring, which
provides developers with opportunities for increasing user involvement (Hilbert et al.,
1997). User involvement is a widely accepted principle in the development of usable
systems, yet it is a vague concept covering many approaches. Research studies illustrate how users can be an effective source of requirements generation, as long as the role of users is carefully considered along with cost-efficient practices (Kujala, 2003). Users' participation is important for successful software program execution (Butt and Fatimah, 2012), and business analyst facilitation and patience in UAT events are critical
whether the system is a new installation, major upgrade, or commercial-off-the-shelf
package (Beckett, 2005; Klein, 2003; Larson, 1995). In summary, while research
suggests user involvement is important, there is a lack of understanding about the
specifics of how to best engage users for maximizing the results, and our study
addresses this gap.
This study examines the following research questions: How should UAT facilita-
tors (1) schedule user participation with a minimum impact to their regular work
duties and maximum ability to be present when testing and not be distracted; (2) ena-
ble direct interactions with users including face-to-face conversations during the UAT

event and access to user computer screens for configuration and validation; and (3)
utilize quality management software that can be used seamlessly by all involved
in UAT.
To examine these questions, we recognize the need to resolve the complexity of
communication challenges among technology facilitators and business users. We
draw on Social Presence Theory (SPT) to establish a conceptual lens for addressing
these research questions. Traditionally, SPT classifies different communication media
along a continuum of social presence. Social presence (SP) reflects the degree of
awareness one person has of another person when interacting (Sallnas et al., 2000).
People utilize many communication styles when face-to-face (impression leaving, contentiousness, openness, dramatic existence, domination, precision, relaxed flair, friendliness, attentiveness, animation, and image managing) (Norton, 1986) or when online (affective, interactive, and cohesive) (Rourke et al., 2007). SPT supports that the
communication environment must enable people to adopt the appropriate level of
social presence required for that task. This study proposes a theoretically-derived
examination based on SPT of facilitated UAT delineating when and how facilitators
should involve actual system users in the UAT activities either through local facilita-
tion or remote hosting of UAT exercises, among other options.

2 Theoretical Background

To examine the challenges of facilitating actual system users in UAT events, SPT
incorporates a cross-section of concepts from social interdependence and media richness theories. SPT promotes that, through discourse, intimacy and immediacy create a degree of salience, or "being there", between the parties involved (Lowenthal, 2010).
Researchers have found perception of the other party’s presence is more important
than the capabilities of the communications medium (Garrison et al., 2000). Thus,
UAT events will need to enable the appropriate level of SP for users to learn their role
in UAT and execute testing activities.
Facilitating users in remotely-hosted UAT events draws similarities to online
teaching activities. The similarities emanate from both activities comprising novice users working with expert facilitators to learn new knowledge, tackle new skills, and express confusion and questions in written text. SP has been established as a
critical component of online teaching success. Table 1 encapsulates select research in
the online teaching domain, illustrating the growing support for designing courses and
maintaining a personal presence to influence student satisfaction and learning. This
research helps us identify factors needed for user success in an online UAT event
context. SP largely reflects the trust-building relationship a facilitator or instructor
creates with users or students. SP is more easily developed in richer, face-to-face media settings; however, SP can be encouraged in leaner, computer-mediated media settings as well.

Table 1. Select studies of online teaching and social presence

Reference: Hostetter and Busch, 2006; Swan and Shih, 2005
How SP was established: Course design with weekly threaded discussions, course credit for discussion participation, and provoking discussion questions; also instructor and peer presence in online discussions, promoting the sharing of personal experiences and feelings.
Key findings: SP leads to student satisfaction and learning. Perceived presence of instructors may be a more influential factor than perceived presence of peers for student satisfaction.

Reference: Richardson and Swan, 2003
How SP was established: Course activities with class discussion, group projects, individual projects, self-tests, written assignments, lectures, and readings.
Key findings: SP leads to satisfaction with the instructor and perceived learning. Women have higher social presence than men. No influence of age or experience.

Reference: Russo and Benson, 2005
How SP was established: Course components organized for cognitive learning (student assessment of their learning), affective learning (attitude about the course), and perception of presence (peers, instructors, and self).
Key findings: SP leads to instructor presence and peer presence. SP leads to affective learning and student learning satisfaction. It is important to establish and maintain SP, including one's own SP, which leads to higher grades.

Reference: Tu, 2000
How SP was established: Attention process by drawing on interpersonal attractions (inviting public speakers, good communication style); retention process by showing images that increase sensory stimulation; motor reproduction process by cognitive organization; motivational process with incentives to learn.
Key findings: SP leads to learner-to-learner interaction. SP increases students' performance, proficiency, retention, and motivation. Student attitudes towards the subject are increased.

Reference: Picciano, 2002
How SP was established: Course structured around readings and weekly discussions, with students as facilitators; asynchronous and synchronous discussion sessions with peers and instructors; instructor immediacy.
Key findings: SP leads to student interaction and perceived learning. SP has a significant relationship with performance on written assignments, which requires discussion with the instructor and peers.

Reference: Aragon, 2003
How SP was established: Course design, instructor, and participant strategies.
Key findings: Creating a platform for SP; instructors can establish and maintain SP, encouraging student participation.

Research examining UAT activities suggests both facilitator and users need face-to-face communication options when the system under test is newly developed (Larson, 1995). A typical UAT timeline involves: a system that is almost fully developed; user guides and training materials developed by the technology group; business analyst review of and input on these materials, followed by drawing up the test scripts; users performing tests based on the scripts and with open, unscripted use; and users reporting issues to the business analyst, who reviews and logs the appropriate defects for the development team to address. This cycle is repeated until the users sign off that the system works as needed (Larson, 1995). Research illustrates that the UAT process can be improved when users have the ability to engage in direct interactions with both the business analyst and development teams when questions arise (Larson, 1995).
Facilitated testing by the actual system users can be implemented in three ways (Seffah and Habieb-Mammar, 2009): (1) require remote users to travel to a local facility; (2) send the facilitator to the remote locations; or (3) have a facilitator at the local facility conduct computer-mediated conferencing (CMC) with users at the remote locations. Each of these approaches establishes a different communication environment. SPT suggests that local facilitation and remote hosting of UAT exercises will require different decisions about where and how facilitators should involve users in the UAT activities. Table 2 presents researchers' views on facilitated UAT approaches and how SPT attributes are expected to affect the three different UAT approaches, based on studies of SP in online teaching. Remote users travelling to a local facility and the facilitator travelling to remote locations are treated as the same in Table 2, as both are similar to an instructor teaching students face-to-face, while remote UAT is compared with online teaching. As Table 2 illustrates, attributes of SP tend to be low for remote UAT events because face-to-face communication is highly advantageous when establishing high SP. Research on online learning also shows that SP can be high if it is established using various techniques such as incentives and course design.

Table 2. Facilitated UAT Approaches

The three approaches compared are: (A) remote users travel to the local facility; (B) the facilitator travels to the remote locations; (C) computer-mediated conferencing between the facilitator at the local facility and users at remote locations.

                                      (A)        (B)              (C)
Facilitator                           Local      Remote           Local
User                                  Local      Remote           Remote

Challenges in approach:
Type of system1                       New        New or Upgrade   Upgrade
Costs2                                (A) $100,000–$150,000 US dollars, excluding cost of
                                      deployment, management, training, upgrades, and test
                                      analysis software; (B) $15,000–$20,000 US dollars per
                                      location, including test software; (C) more participants
                                      from diverse backgrounds, lower budget, and less time
Size of group2                        Limited    Limited          Greater participation

SPT attributes adopted from online teaching environments3:
Expression of emotions                High       High             Low
Use of humor                          High       High             Low
Self-disclosure                       High       High             Low
Dialogue                              High       High             Low
Asking questions                      High       High             Low
Compliment, express appreciation,
agreement                             High       High             Low
Assertive/acquiescent                 High       High             Low
Informal/formal relationships         High       High             Low
Trust relationship                    High       High             Low
Social relationships                  High       High             Low
Attitude toward technology            Positive   Positive         Apathetic
Access and location                   Easy       Easy             Hard
Timely response                       High       High             Low

1 (Klein, 2003; Larson, 1995; Seffah and Habieb-Mammar, 2009)
2 (Seffah and Habieb-Mammar, 2009)
3 (Rourke et al., 2007; Tu and McIsaac, 2002)

Mostly used in research examining online education, SPT informs remote communications environments by examining the way people represent themselves online through the way information is shared (e.g., how messages are posted and interpreted by others) and how people relate to each other (Kehrwald, 2008). When face-to-face,
people use everyday skills to share information through multiple cues using rich
nonverbal communication inherent in tone of voice and facial expression. Richer
communications allow individuals to provide and respond to the sight, sound, and
smell of others which inherently provides an awareness of the presence of others
(Mehrabian, 1969). Online information sharing lacks the cues needed to create an
awareness of the presence of others and offers the ability to discuss information but
not to connect or bond with others on a more personal level (Sproull and Kiesler,
1986). Research studies of online education have found that the lack of SP impedes
interactions and as a result hinders student-learning performance (Wei et al., 2012).
One proposed solution is to combine the use of both asynchronous (pre-produced content accessed by users when needed) and synchronous (real-time, concurrent audio and video connections) components, with synchronous efforts providing a much fuller social exchange, greatly increasing the potential for SP. Thus, SP is an important
factor in information exchange when learning and performance are required, as is the
case of user participation in UAT events.

3 Case Study Methodology

The research methodology follows a qualitative approach in gathering case study data
on UAT practices in order to provide descriptive and explanatory insights into the
management activities in software development work. This approach has been used
successfully in prior research (Pettigrew, 1990; Sutton, 1997) and allows us to induce
a theoretical account of the activities found in empirical observations and analysis of
team members' viewpoints. This approach is also known to lead to accurate and useful results by including an understanding of the contextual complexities of the environment in the research analysis and outcomes. Finally, this approach encourages an
understanding of the holistic systematic view of the issues and circumstances of the
situation being addressed, in this case the issues of managing development projects
from team member perspectives about their testing practices (Checkland et al., 2007;
Yin, 1989). To identify the practices, we selected a large multinational Fortune 500
company known to have successful UAT events. The focus of our study is specific to
the UAT practices of large scale complex globally-deployed software development
projects.

4 Data Collection

The results reported in the present study are based on interviews with UAT facilita-
tors. Our data gathering began with the creation of semi-structured interview proto-
cols which comprised both closed and open-ended questions. To inform our interview
question development, we reviewed documentation about the company, and held
background discussions with company personnel. The data collection methods em-
ployed focused on interviewees’ perspectives on UAT issues, roles played by various
stakeholders involved, and the challenges of incorporating actual systems users in the
process. Face-to-face interviews of approximately 1 to 1.5 hours were conducted with
various project stakeholders. The goal of these interviews was to identify and better
understand the issues related to UAT. In total, we interviewed 8 stakeholders. Inter-
views were conducted between November 2013 and January 2014, with additional
follow-up clarification Q&A sessions conducted over e-mail. Job descriptions of
those interviewed are shown in Table 3.

Table 3. Job Descriptions of Interviewees

Business Systems Analysis (Quality Analyst), 2 years of experience. Responsibility: UAT test plans, writing UAT test cases, UAT facilitation and defect management. Times interviewed: 2.

Business Systems Analysis (Quality Analyst), 6 years of experience. Responsibility: UAT test plans, writing UAT test cases, UAT facilitation and defect management. Times interviewed: 1.

Business Systems Analysis (Quality Advisor), 6 years of experience. Responsibility: UAT test plans, writing UAT test cases, leading teams of quality analysts, UAT facilitation, defect management, quality process and standards design, third-party contract quality analysis and management. Times interviewed: 2.

Business Systems Analysis (Quality Advisor), 18 years of experience. Responsibility: UAT test plans, writing UAT test cases, leading teams of quality analysts, UAT facilitation, defect management, quality process and standards design. Times interviewed: 2.

Business Systems Analysis (Quality Manager), 16 years of experience. Responsibility: leading a team of quality analysts and quality advisors responsible for enterprise-level activities globally, including process and standards, UAT management and execution, and third-party contracts. Times interviewed: 2.

UAT Tester 1 (years of experience: n/a). Responsibility: testing the "administrative functions" of an app as part of an end-user support role. Times interviewed: 1.

UAT Tester 2 (years of experience: n/a). Responsibility: same. Times interviewed: 1.

UAT Tester 3 (years of experience: n/a). Responsibility: same. Times interviewed: 1.

Total interviews: 12

By collecting and triangulating data across a variety of methods, we were able to develop robust results because of the perspectives we gained about UAT issues. This
approach provides in-depth information on emerging concepts, and allows cross-
checking the information to substantiate the findings (Eisenhardt, 1989; Glaser and
Strauss, 1967; Pettigrew, 1990).

5 Findings

In this research, we gathered and analyzed interview data from a large multinational
company with multiple stakeholders of UAT events along with best practices from the
research literature. From these data sources, we next address the research questions

proposed earlier to offer insights about managing UAT events. For completely new
complex systems and novice UAT participants, SP will be a critical factor enabling
better testing outcomes. In this case, facilitators should schedule user participation
locally at the testing location where face-to-face interactions can occur. While cognizant of the need to minimize the impact on users' regular work duties and to avoid work requirements outside of regular working hours, these events can be concentrated into a shorter timeframe and more efficiently administered when everyone is
together. Accommodating users locally maximizes users’ ability to be present when
testing and not be distracted. Complicated tasks and difficult questions can be ad-
dressed and more readily communicated. Additionally, peer-to-peer face-to-face
learning can be enabled, which has been shown to improve outcomes (Tu, 2000).
Media richness theory has long held that richer media are the key to building trust-
ing relationships (Campbell, 2000). Media richness theory suggests settings should be
assessed on how well they support the ability of communicating parties to discern
multiple information cues simultaneously, enable rapid feedback, establish a personal
message, and use natural language. Richer media tend to run on a continuum from
rich face-to-face settings to lean written documents. Thus, consistent with above, for
completely new complex systems and novice UAT participants, richer media settings
are needed to enable direct interactions with users including face-to-face conversa-
tions during the UAT event and access to user computer screens for configuration and
validation. Richer settings also enable facilitators to collaborate and train users to
improve information sharing. Furthermore, peer-to-peer learning and immediacy of
replies for help and answers enable a more productive UAT outcome. When users are located in distant remote locations, time lags between queries and answers impede productivity and dedication to task.
Quality management software (QMS) enables standard procedures and processes, effective control, maintainability, and higher product quality at reduced cost (Ludmer, 1969). In our interviews with facilitators and user acceptance testers, we found that QMS plays a critical role in performing UAT. UAT testers use a QMS to read and execute test scripts, input the results of their tests, log defects, and verify that defects are fixed. Facilitators use a QMS to write test scripts, review the results of test runs, track defects, prioritize defects, and assign defects to developers. In summary, the QMS serves as a common platform for facilitators and UAT testers.
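A minimal sketch of this shared workflow, assuming Python, with all class, field, and status names hypothetical (they do not reflect HP Quality Center, IBM Rational Quality Manager, or any other real product's API):

```python
# Minimal, illustrative data model for the shared QMS workflow described above:
# facilitators write scripts and triage defects; testers execute scripts,
# record results, and log defects. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Defect:
    description: str
    status: str = "open"        # open -> assigned -> fixed -> verified
    priority: int = 3           # set by the facilitator during triage
    assignee: str = ""          # developer assigned by the facilitator

@dataclass
class TestScript:
    name: str
    steps: list
    results: list = field(default_factory=list)   # filled in by testers

class QualityManagementSystem:
    """Common platform: facilitators author and triage, testers execute and log."""
    def __init__(self):
        self.scripts, self.defects = [], []

    def write_script(self, name, steps):            # facilitator action
        self.scripts.append(TestScript(name, steps))

    def record_result(self, script_name, passed):   # tester action
        script = next(s for s in self.scripts if s.name == script_name)
        script.results.append(passed)

    def log_defect(self, description):              # tester action
        self.defects.append(Defect(description))

    def triage(self, defect, priority, assignee):   # facilitator action
        defect.priority = priority
        defect.assignee = assignee
        defect.status = "assigned"
```

The design point is simply that both roles operate on the same script and defect stores; commercial QMSs layer traceability, reporting, and role-based access on top of such a shared model.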
Facilitators are tasked with training non-technical business users on how to use QMS technical tools. QMSs that are globally available in the market include HP Quality Center and IBM Rational Quality Manager. These tools offer extensive multilingual support with study materials, user guides, and social networking communities. The next step in this research is to determine how to replicate the SP created in a face-to-face UAT event within a remote UAT experience.

References
1. Aragon, S.R.: Creating social presence in online environments. New Directions for Adult
and Continuing Education (100), 57–68 (2003)
2. Axtell, C.M., Waterson, P.E., Clegg, C.W.: Problems Integrating User Participation into
Software Development. International Journal of Human-Computer Studies, 323–345
(1997)

3. Beckett, H.: Going Offshore. Computer Weekly, 32–34 (2005)


4. Butt, W., Fatimah, W.: An Overview of Software Models with Regard to the Users
Involvement. International Journal of Computer Science 3(1), 107–112 (2012)
5. Butt, W., Fatimah, W.: Overview of Systems Design and Development with Regards to
the Involvement of User, HCI and Software Engineers. International Journal of Computer
Applications 58(7), 1–4 (2012)
6. Campbell, J.A.: User acceptance of videoconferencing: perceptions of task characteristics
and media traits. In: Proceedings of the 33rd Annual Hawaii International Conference on
System Sciences, p. 10 (2000)
7. Checkland, K., McDonald, R., Harrison, S.: Ticking boxes and changing the social world:
data collection and the new UK general practice contract. Social Policy & Administra-
tion 41(7), 693–710 (2007)
8. Eisenhardt, K.M.: Making fast strategic decisions in high-velocity environment. Academy
of Management Journal 32(3), 543–576 (1989)
9. Garrison, D.R., Anderson, T., Archer, W.: Critical Inquiry in a Text-Based Environment:
Computer Conferencing in Higher Education. The Internet and Higher Education 2(2),
87–105 (2000)
10. Glaser, B., Strauss, A.: The discovery grounded theory: strategies for qualitative inquiry
(1967)
11. Hilbert, D.M., Robbins, J.E., Redmiles, D.F.: Supporting Ongoing User Involvement in
Development via Expectation-Driven Event Monitoring. Technical Report for Department
of Information and Computer Science 97(19), pp. 1–11 (1997)
12. Hostetter, C., Busch, M.: Measuring up online: The relationship between social presence
and student learning satisfaction. Journal of Scholarship of Teaching and Learning 6(2),
1–12 (2006)
13. Iivari, N.: Enculturation of User Involvement in Software Development Organizations- An
Interpretive Case Study in the Product Development Context. Department of Information
Processing Science, pp. 287–296 (2004)
14. Kehrwald, B.: Understanding Social Presence in Text-Based Online Learning Environ-
ments. Distance Education 29(1), 89–106 (2008)
15. Klein, Gorbett, S.: LIMS User Acceptance Testing. Quality Assurance 10(2), 91–106 (2003)
16. Kujala, S.: User Involvement: A Review of the Benefits and Challenges. Behavior and In-
formation Technology 22(1), 1–16 (2003)
17. Larson, G.B.: The User Acceptance Testing Process. Journal of Systems Manage-
ment 46(5), 56–62 (1995)
18. Lohmann, S., Rashid, A.: Fostering Remote User Participation and Integration of User
Feedback into Software Development, pp. 1–3 (2008)
19. Lowenthal, P.: Social Presence. Journal of Social Computing; Concepts, Methodologies,
Tools and Applications, 129–136 (2010)
20. Ludmer, H.: Zero Defects. Industrial Management 11(4) (1969)
21. Majid, R.A., Noor, N.L.M., Adnan, W.A.W., Mansor, S.: A Survey on User Involvement
in Software Development Life Cycle from Practitioner’s Perspectives. In: Computer
Sciences and Convergence Information Technology Conference, pp. 240–243 (2010)
22. Mehrabian, A.: Some referents and measures of nonverbal behavior. Journal of Behavior
Research Methods and Instrumentation 1(6), 203–207 (1969)
23. Muthitacharoen, A., Saeed, K.A.: Examining User Involvement in Continuous Software
Development. Communications of the ACM 52(9), 113–117 (2009)

24. Norton, R.W.: Communicator Style in Teaching: Giving Good Form to Content. Commu-
nicating in College Classrooms (26), 33–40 (1986)
25. Pettigrew, A.M.: Longitudinal Field Research on Change: Theory and Practice. Organiza-
tion Science 1(3), 267–292 (1990)
26. Picciano, A.: Beyond student perceptions: Issues of interaction, presence, and performance
in an online course. Journal of Asynchronous Learning Networks 6(1), 21–40 (2002)
27. Richardson, J.C., Swan, K.: Examining social presence in online courses in relation to stu-
dents’ perceived learning and satisfaction. Journal of Asynchronous Learning Net-
works 7(1), 68–88 (2003)
28. Rourke, L., Anderson, T., Garrison, D.R., Archer, W.: Assessing social presence in asyn-
chronous text-based computer conferencing. The Journal of Distance Education/Revue de
l’Éducation à Distance 14(2), 50–71 (2007)
29. Russo, T., Benson, S.: Learning with invisible others: Perceptions of online presence
and their relationship to cognitive and effective learning. Educational Technology and
Society 8(1), 54–62 (2005)
30. Sallnas, E.L., Rassmus-Grohn, K., Sjostrom, C.: Supporting presence in collaborative environments by haptic force feedback. ACM Transactions on Computer-Human Interaction 7(4), 461–467 (2000)
31. Seffah, A., Habieb-Mammar, H.: Usability engineering laboratories: Limitations and
challenges toward a unifying tools/practices environment. Behaviour & Information Tech-
nology 28(3), 281–291 (2009)
32. Sproull, L., Kiesler, S.: Reducing social context cues: Electronic mail in organizational communication. Management Science 32(11), 1492–1513 (1986)
33. Sutton, R.I.: Crossroads-The Virtues of Closet Qualitative Research. Organization
Science 8(1), 97–106 (1997)
34. Swan, K., Shih, L.F.: On the nature of development of social presence in online course
discussion. Journal of Asynchronous Learning Networks 9(3), 115–136 (2005)
35. Tu, C.H.: Online learning migration: From social learning theory to social presence theory in a CMC environment. Journal of Network and Computer Applications 2, 27–37 (2000)
36. Tu, C.H., McIsaac, M.: The relationship of social presence and interaction in online
classes. The American Journal of Distance Education 16(3), 131–150 (2002)
37. Walther, J.B., Burgoon, J.K.: Relational communication in computer-mediated interaction. Human Communication Research 19(1), 50–88 (1992)
38. Wei, C., Chen, N., Kinshuk: A model for social presence in online classrooms. Education-
al Technology Research and Development 60(3), 529–545 (2012)
39. Yin, R.K.: Case Study Research: Design and Methods. Sage Publications, Beverly Hills
(1984)
How to Improve Customer Relationship Management
in Air Transportation Using Case-Based Reasoning

Rawia Sammout1, Makram Souii2, and Mansour Elghoul3


1 Higher Institute of Management of Gabes,
Street Jilani Habib, Gabes 6002, Tunisia
2 University of Lille Nord de France, F-59000 Lille, France;
UVHC, LAMIH, F-59313 Valenciennes, France;
CNRS, UMR 8201, F-59313 Valenciennes, France
3 University of Lorraine, Nancy 2, France
[email protected], [email protected],
[email protected]

Abstract. This paper describes research that aims to provide a new strategy for
Customer Relationship Management in Air Transportation. It presents our proposed
approach based on Knowledge Management processes, Enterprise Risk Management,
and Case-Based Reasoning, and aims to mitigate risks faced in the air transportation
process. The principle of this method is to treat a new risk by drawing on previous
experiences (reference cases). This type of reasoning rests on the following
hypothesis: if a past risk and the new one are sufficiently similar, then everything
that can be explained or applied to the past risks or experiences (case base) remains
valid when applied to the new risk or situation, which represents the new problem to
be solved. The idea of this approach is to predict an adapted solution based on the
existing risks in the case base that share the same context.

Keywords: Customer Relationship Management, Air Transportation, Knowledge
Management, Enterprise Risk Management, Case-Based Reasoning.

1 Introduction

The aim of Knowledge Management (KM), as an organized and crucial process, is to
protect the organization's intellectual capital (the knowledge of its employees) for
future benefits. In fact, sharing the right knowledge with the right person, at the right
time, and in the right format are important steps that maximize the productive
efficiency of the enterprise. In addition, this knowledge will be used and integrated
for business needs in many different contexts (such as production, logistics, and
transport) in order to increase the organization's short- and long-term value to its
stakeholders. In this paper, we study how to improve Customer Relationship
Management (CRM) in Air Transportation (AT) using Case-Based Reasoning (CBR).
A risk is the probability of the occurrence of an external or internal action which may
lead to a threat of damage, injury, liability, loss, or any other negative result, and that may be

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 103–111, 2014.
© Springer International Publishing Switzerland 2014

avoided and reduced through preemptive action [1] [2]. Examples include death,
injuries from turbulence and baggage, dissatisfaction, poor provision of information,
poor communication, misunderstanding, noise and mobility, poor cleaning staff, poor
service quality, poor presentation of safety rules, loss of baggage, customer
discomfort, lack of respect, etc. Generally, these risks have a great impact on
achieving the organization's objectives. In this context, our approach aims to mitigate
the danger based on the interaction between Enterprise Risk Management (ERM) and
KM, using CBR. The idea is to deal with all the risks that may affect the customer
during the air transportation process, from the registration of the customer to the
post-journey analytics and feedback. Furthermore, it also endeavors to create new
opportunities in order to enhance the capacity to build perceived value for its
customers.

2 The Proposed Approach Overview

Based on KM processes [3], our method has four phases (Fig. 1): (1) Knowledge
creation and sharing phase, (2) Knowledge analyzing phase, (3) Knowledge storage
phase, (4) Knowledge application and transfer phase.

Fig. 1. Our research model design

2.1 Knowledge Creation and Sharing Process

The purpose of this phase is the identification of risks causing customer
dissatisfaction. It includes the following two steps:

Identification of Risk and Proposition of Its Appropriate Solution. Each employee
adds a risk faced during the air transportation process that may affect customer
satisfaction (such as noise, mobility, poor service, lack of safety, poor
communication, loss of baggage, misunderstanding, etc.). He then proposes an
associated solution in the professional social network in order to create a Community
of Practice (CoP)1 with other employees, discuss the issue, and generate a number of
solutions (reference cases).

Formulate a New Request. An employee faces a risk and wants to know how to solve
it. He formulates a request to the system specifying the risk. The system treats the
request using the CBR method and answers the employee with an appropriate
solution adapted to his/her context based on fuzzy logic.

2.2 Knowledge Analysis Process


The goal of this phase is to select the most adequate solution associated with each
identified risk using CBR. Case-based reasoning solves a new problem by
remembering a previous similar situation and by reusing information and
knowledge from previous situations. It is based on the following hypothesis: if a past
experience and a new situation are sufficiently similar, then everything that can be
explained or applied to the past experience (case base) is still valid when applied to
the new situation that represents the new problem to solve [5] [6] [7].
The purpose of CBR is to compose a relevant solution in the current context by
comparing it with other similar contexts of use. CBR is composed of four steps:
selecting the similar cases, fuzzy adaptation, revision, and learning. The last two
steps (revision and learning) are described in the following phases.

Step 1: Selecting the similar cases. This step is based on contextual filtering. The
system uses the characteristics of the context to compare the new case (NC) with
the existing cases (EC) using the following formula:

Sim (NC, EC) = (1)

where NC is the new case and EC the existing one;
A is the set of user attributes; NCxa represents the value of the current user
attribute, and ECxa its value in the existing contexts;
DM is the difference between the maximum threshold and the minimum threshold;
Bc is the case base filtered by selecting the cases similar to the current user request
(risk) in the context C.
Contextual filtering measures the similarity between the current context and the
existing contexts based on the Pearson correlation. The most similar cases are
selected from the collection Bc. The context Ci is composed of a finite

1 Communities of Practice (CoP) are a technique used in KM whose purpose is to connect
people with a specific objective who voluntarily want to share knowledge [4].

set {a1i, a2i, …, ani} that differs in number from one risk to another. Two contexts
are similar if their attributes are respectively similar: C1 = a11 ∪ a21 ∪ … ∪ an1;
C2 = a12 ∪ a22 ∪ … ∪ an2.

Sim (NC; EC) = (SimR (Ri; Rj), SimC (Ci; Cj)) (2)

SimC (Ci; Cj) = (Sim (a1i; a1j), Sim (a2i; a2j), …, Sim (ani; anj)) (3)

where an represents an attribute that characterizes the context C,
and i, j are the indices of two different contexts relative to the same risk.

Sim(NC; EC) = (SimR(Ri; Rj), Sim (a1i; a1j), Sim (a2i; a2j), …, Sim(ani; anj)) (4)
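As a rough illustration of this contextual-filtering step, the sketch below computes attribute-wise similarities between a new context and each stored context, then ranks the stored cases. Since Eq. (1) is not fully reproduced in the text, the per-attribute form (1 − |difference| / DM), the threshold ranges, and the names `sim_attr`, `sim_context`, and `select_similar` are our assumptions, so the numbers will not necessarily match the paper's worked example.

```python
def sim_attr(nc_val, ec_val, dm):
    """Similarity of one numeric context attribute, normalized by the
    threshold range DM (assumed form; Eq. (1) is not given in full)."""
    return max(0.0, 1.0 - abs(nc_val - ec_val) / dm)

def sim_context(nc, ec, dm):
    """Average attribute-wise similarity between two contexts
    (dicts mapping attribute name -> value)."""
    sims = [sim_attr(nc[a], ec[a], dm[a]) for a in nc]
    return sum(sims) / len(sims)

def select_similar(new_ctx, case_base, dm, k=3):
    """Rank stored (context, solution) cases by similarity to new_ctx."""
    scored = [(sim_context(new_ctx, ctx, dm), sol) for ctx, sol in case_base]
    return sorted(scored, key=lambda t: t[0], reverse=True)[:k]

# Toy example using the hurricane contexts from Section 3
# (the DM threshold ranges below are invented for illustration):
dm = {"wind": 200.0, "pressure": 100.0}
katrina = {"wind": 280.0, "pressure": 902.0}   # new case C1
charley = {"wind": 240.0, "pressure": 941.0}   # existing case C2
print(round(sim_context(katrina, charley, dm), 3))
```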

Step 2: Adapting the new solution. Based on the selected cases, the idea is to
propose a solution adapted to the new context. It is a combination of parts of the
solutions (Si, Sj, …) from the most similar cases. To this end, this step is divided
into three levels: fuzzification, fuzzy inference, and defuzzification.

Fuzzification. This is the process by which an element is made fuzzy through the
combination of real values and membership functions: it converts a crisp input into a
fuzzy output. The similarities corresponding to the different dimensions of context
calculated in the previous phase are the input variables of the fuzzy system.
The fuzzy system is based on the n context attributes as inputs: Sim(a1i; a1j),
Sim(a2i; a2j), …, Sim(ani; anj). The system output is the relevant solution "S",
which is a combination of parts of the solutions (Si, Sj, …) from the most similar
cases. These input and output variables are the linguistic variables of the fuzzy
system.
A linguistic variable is represented by:
Sim, the similarity of a context attribute between two similar contexts i and j;
L, the set of linguistic terms;
U, the universe of discourse.

Number of rules = L^n × S (5)

where L is the number of linguistic terms, n is the number of fuzzy-system inputs,
and S is the number of outputs.

Fuzzy inference. This step assesses the contributions of all active rules. The fuzzy
inference is driven by a rule base. Each fuzzy rule expresses a relationship between
the input variables (context-attribute similarity Sim) and the output variable
(relevance of the solution "S"). A fuzzy rule in our approach has the form:
If (Sim is A) Then (S is B)
where Sim is the correlated context-attribute similarity (the premise of the rule), S
is the relevance of the solution (the conclusion of the rule), and A and B are
linguistic terms determined by the fuzzy sets.

In the Mamdani model, implication and aggregation are the two components of the
fuzzy inference: the minimum operator "min" is used for implication and the
maximum operator "max" for aggregating the rules.

Defuzzification. This is the process by which the fuzzy results of the correlated
similarities are translated into specific numerical results indicating the relevance of
the solution. After combining the rules obtained, we must produce a crisp output.
The evaluation of the solution is implemented based on the Mamdani model.
In our Mamdani-inspired approach, defuzzification is performed by the
center-of-gravity method applied to the rule results.

F (ri, ci, si) = ∫ si μ(si) dsi / ∫ μ(si) dsi (6)

F(ri, ci, si) is the function associated with the case ci, where μ(s) is the membership
function of the output variable si and ri are the rules.
The fuzzy inference releases a sorted list of relevant solutions LF:
LF = {(si, F(ri, ci, si)) | (ri, ci, si) ∈ Bc}
The informational content SI is an integral part of the relevant solution from the
sorted list LF, maximizing the similarity Sim correlated to the retrieved case. The
solution recommended to the user is a combination of solution pairs (SI).

2.3 Knowledge Storage Process


To be usable, a case base must contain a certain number of cases; an empty case
base does not allow any reasoning. Consequently, it is important to initialize the
case base with relevant cases. To this end, the adapted solutions are revised by an
evaluator, and the validated solutions are then added to the case base. In fact,
learning involves the enrichment of the contexts of use and of the solutions.

2.4 Knowledge Application Process


In this phase, the transfer and use of knowledge can enhance customer value. At this
level, the decision maker interprets the results (e.g., statistics, classification) and
suggests a way toward a new improvement process through training, storytelling,
lessons learned, etc. [7].

3 Application in Air Transportation

In order to validate our method, we have implemented a professional network for air
transportation, shown in Fig. 2. This application provides employees with relevant
solutions responding to the current risk, based on previous experiences. We
developed the application in the NetBeans integrated development environment,
integrating the Java API/Matlab Control.

Fig. 2. Professional network for air transport service

3.1 Phase 1: Knowledge Creation and Sharing Process

When an employee faces a new risk, he can formulate a new request in order to find
an appropriate solution. Figure 3 presents the interface used by the employee.

Fig. 3. Example of an employee request


This request must include the current context. Figure 4 presents an example of
a context.

Risk: Flight cancellation

Context: Weather conditions
C2 = Hurricane Charley (2004): wind 150 mph (240 km/h), pressure 941 mbar
(hPa); 27.79 inHg
C1 = Hurricane Katrina (2005): wind 175 mph (280 km/h), pressure 902 mbar
(hPa); 26.64 inHg

Fig. 4. Example of a context



3.2 Phase 2: Knowledge Analyzing Phase


Step 1: Selecting similar cases. We calculate the similarity of the context between
the new case C1 and the existing case C2 as follows:
SimC (C1, C2) = (0.545) (7)

Step 2: Adapting the new solution. This step is divided into three levels, as below:
Fuzzification. The fuzzifier maps the two input numbers (Sim(wind) and
Sim(pressure)) into fuzzy memberships. The universe of discourse is U = [0, 1]. We
use Low, Medium, and High as the set of linguistic terms. The membership function
implemented for Sim(wind) and Sim(pressure) is trapezoidal.
Figure 5 describes the partition into fuzzy classes, which divides the universe of
discourse of each linguistic variable into fuzzy classes. The partition is the same for
all linguistic variables: Low [-0.36 -0.04 0.04 0.36], Medium [0.14 0.46 0.54 0.86],
High [0.64 0.96 1.04 1.36].

Fig. 5. Partition of fuzzy classes
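For illustration, trapezoidal membership functions over these partitions can be evaluated as below. The partition values are taken from the text; the function names (`trapmf`, `fuzzify`) are our own.

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: rises a->b, flat b->c, falls c->d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Partitions from Fig. 5 (identical for every linguistic variable):
CLASSES = {
    "Low":    (-0.36, -0.04, 0.04, 0.36),
    "Medium": (0.14, 0.46, 0.54, 0.86),
    "High":   (0.64, 0.96, 1.04, 1.36),
}

def fuzzify(sim):
    """Membership degrees of a similarity value in each fuzzy class."""
    return {name: trapmf(sim, *p) for name, p in CLASSES.items()}

# The combined similarity 0.545 from Eq. (7) falls mostly in "Medium":
print(fuzzify(0.545))
```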

Fuzzy inference. It defines a mapping from input fuzzy sets into output fuzzy sets
based on the active rules (cf. Fig. 6). The number of rules in this case is 3² × 1 = 9.

R1: If (Sim(pressure) is Low) and (Sim(wind) is Low) Then S is Low
R2: If (Sim(pressure) is Low) and (Sim(wind) is Medium) Then S is Low
R3: If (Sim(pressure) is Low) and (Sim(wind) is High) Then S is Low
R4: If (Sim(pressure) is Medium) and (Sim(wind) is Low) Then S is Low
R5: If (Sim(pressure) is Medium) and (Sim(wind) is Medium) Then S is Medium
R6: If (Sim(pressure) is Medium) and (Sim(wind) is High) Then S is High
R7: If (Sim(pressure) is High) and (Sim(wind) is Low) Then S is Medium
R8: If (Sim(pressure) is High) and (Sim(wind) is Medium) Then S is Medium
R9: If (Sim(pressure) is High) and (Sim(wind) is High) Then S is High

Fig. 6. List of fuzzy rules

Defuzzification. It is based on the Mamdani model (cf. Fig. 7), which incorporates
the center-of-gravity method in evaluating the set of rules from the fuzzy inference.
It maps the fuzzy output into a crisp value.

Fig. 7. Mamdani Inference: Activation of the result S

For the Hurricane Katrina example (wind = 280 km/h, pressure = 902 mbar), the
solution is adapted from that of Hurricane Charley, with F = 0.387.
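Putting the three levels together, the following sketch evaluates rules R1–R9 from Fig. 6 with min-implication, max-aggregation, and centroid defuzzification over U = [0, 1], using the partitions from Fig. 5. The sampling resolution and the choice of individual Sim(pressure)/Sim(wind) inputs are our assumptions, so this will not necessarily reproduce the paper's F = 0.387.

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: rises a->b, flat b->c, falls c->d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

CLASSES = {"Low": (-0.36, -0.04, 0.04, 0.36),
           "Medium": (0.14, 0.46, 0.54, 0.86),
           "High": (0.64, 0.96, 1.04, 1.36)}

# Rules R1..R9 from Fig. 6: (pressure term, wind term) -> output term
RULES = {("Low", "Low"): "Low", ("Low", "Medium"): "Low",
         ("Low", "High"): "Low", ("Medium", "Low"): "Low",
         ("Medium", "Medium"): "Medium", ("Medium", "High"): "High",
         ("High", "Low"): "Medium", ("High", "Medium"): "Medium",
         ("High", "High"): "High"}

def mamdani(sim_pressure, sim_wind, samples=101):
    """Mamdani inference: min implication, max aggregation, centroid."""
    strength = {}
    for (p_term, w_term), out in RULES.items():
        w = min(trapmf(sim_pressure, *CLASSES[p_term]),
                trapmf(sim_wind, *CLASSES[w_term]))
        strength[out] = max(strength.get(out, 0.0), w)
    # Clip each output class at its rule strength, aggregate with max,
    # then take the center of gravity over the sampled universe [0, 1].
    xs = [i / (samples - 1) for i in range(samples)]
    mu = [max(min(strength.get(t, 0.0), trapmf(x, *CLASSES[t]))
              for t in CLASSES) for x in xs]
    den = sum(mu)
    return sum(x * m for x, m in zip(xs, mu)) / den if den else 0.0
```

For instance, `mamdani(0.5, 0.5)` fires only R5 and defuzzifies to the center of the Medium class.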

3.3 Phase 3: Knowledge Storage Process

At this stage of our work, the adapted solution resulting from the previous phase is
evaluated by an expert. The validated solutions are then retained in the case base.

3.4 Phase 4: Knowledge Application Process

Training and lessons-learned sessions will be established for the employees based on
the case base retained from the previous phase. The purpose of this process is to
exploit previous experiences in order to improve the intellectual capital and
competences of the employees and to facilitate the management of risks causing
customer dissatisfaction.

4 Conclusion

In this paper, we presented a generic approach based on the interaction between two
disciplines, KM and ERM, using CBR and fuzzy logic in order to enhance CRM in
AT: first, by identifying risks causing customer dissatisfaction; second, by proposing
new solutions responding to risks faced at all touch points of the AT process; and
finally, by establishing a learning process for employees based on previous
experiences (risks and solutions). A challenge for future research will be to refine the
optimization of the adapted solution using a genetic algorithm.

References
1. Monahan, G.: Enterprise Risk Management: A Methodology for Achieving Strategic Objec-
tives. John Wiley & Sons Inc., New Jersey (2008)
2. International Organization for Standardization, ISO (2009)
3. Alavi, M., Leidner, D.: Review: Knowledge management and knowledge management sys-
tems: Conceptual foundations and research issues. MIS Quarterly 25(1), 107–136 (2001)
4. Rodriguez, E., Edwards, J.S.: Before and After Modeling: Risk Knowledge Management is
required, Society of Actuaries. Paper presented at the 6th Annual Premier Global Event on
ERM, Chicago (2008)
5. Coyle, L., Cunningham, P., Hayes, C.: A Case-Based Personal Travel Assistant for
Elaborating User Requirements and Assessing Offers. In: 6th European Conference on Ad-
vances in Case-Based Reasoning, ECCBR, Aberdeen Scotland, UK (2002)
6. Lajmi, S., Ghedira, C., Ghedira, K.: CBR Method for Web Service Composition. In:
Damiani, E., Yetongnon, K., Chbeir, R., Dipanda, A. (eds.) SITIS 2006. LNCS, vol. 4879,
pp. 314–326. Springer, Heidelberg (2009)
7. Aamodt, A.: Towards robust expert systems that learn from experience: an architectural
framework. In: Boose, J., Gaines, B., Ganascia, J.-G. (eds.) EKAW-89: Third European
Knowledge Acquisition for Knowledge-Based Systems Workshop, Paris, pp. 311–326
(July 1989)
Toward a Faithful Bidding
of Web Advertisement

Takumi Uchida, Koken Ozaki, and Kenichi Yoshida

Graduate School of Business Sciences, University of Tsukuba, Japan


{uchida,koken,yoshida}@gssm.otsuka.tsukuba.ac.jp

Abstract. Web marketing is a key activity of e-commerce. Due to the
proliferation of internet technology, the available internet marketing data have
become huge and complex. Efficient use of such large data maximizes the
profit of web marketing. Although there are a variety of studies motivated
by this background, there still remains room for improvement in data usage.
In this paper, we propose a method to realize faithful bidding for web
advertisements. The experimental results show: 1) the current operators' use of
data is unreliable; 2) with the proposed method, the advertisement value of a
bid becomes clear. For example, the method could find a cluster of
advertisements that has clear cost-effectiveness over other clusters.

Keywords: Internet advertisement, allocation of advertising budget,
decision support.

1 Introduction
Web marketing is a key activity of e-commerce today. Due to the proliferation of
internet technology, the available internet marketing data have become huge and
complex. Efficient use of such large data maximizes the profit of web marketing.
Although there exist a variety of studies such as [1],[2],[3] motivated by this
background, actual business practice still relies on operators' know-how. There still
remains room for improvement in data usage.
For example, Fig. 1 shows how operators working for an advertising agency make
their decisions on advertisements. They decide the allocation of the advertising
budget using Fig. 1. The X-axis is the number of past actions by customers; here,
actions are typically web clicks toward the purchase or the installation of software.
One target of the advertising agency is the maximization of these actions. The Y-axis
is the budget (cost) used to advertise the web pages for the purchase or software
installation; another target of the advertising agency is the minimization of this cost.
Cost-effectiveness, typically calculated as X/Y (i.e., actions/costs), is important.
An example of the know-how we gathered from interviews with operators of an
advertising agency is: "If the current web advertisement lies in the lower-right
segment, increase the budget, since the past advertisement worked well (having
high cost efficiency)." This know-how is reasonable if the amount of data is

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 112–118, 2014.

c Springer International Publishing Switzerland 2014

[Figure: operation map of the advertisement agency. The X-axis is the number of
past actions by customers, such as web clicks toward the purchase and installation of
software; the Y-axis is the budget spent to advertise web pages for those actions.]

Fig. 1. Operation Map of an Advertisement Agency

[Figure: number of clicks for each web advertisement (log scale, 1 to 1,000,000)
plotted against the rank of the advertisement in click order; most advertisements
have little data and are statistically unreliable.]
Fig. 2. Number of Clicks for Each Web Advertisements

sufficient and reliable. However, we have found that they do not have enough data
in most cases. Fig. 2 shows this fact: the Y-axis shows the number of clicks for each
web advertisement, and the X-axis shows the rank of the web advertisement in click
order. Although the total number of data points is large, most of the points plotted in
Fig. 1 rest on little data and are statistically unreliable; the operators use overly
fine-grained attributes to plot the data in Fig. 1. In this study, we propose a method
to enlarge the amount of data behind each point plotted in Fig. 1. The enlargement
increases the statistical reliability of the data and the adequacy of the operators'
judgments.

2 Evaluating the Statistical Reliability of Operators' Actions

2.1 Statistical Background
The statistical problem with the current operators' actions is the size of the data.
Since most web advertisements do not receive sufficient customer clicks, the amount
of data about each web advertisement is small. This makes the operators' judgments
unreliable. Thus, we develop a method to form groups of similar web
advertisements. By merging the data of similar web advertisements, the number of
data points in the resulting cluster becomes large. By using the resulting cluster
as the basic unit of decisions, we can realize faithful bidding for each cluster.
The following equations give the theoretical background [4]:

Prob( c/n − s·√[(c/n)(1 − c/n)/n] ≤ p ≤ c/n + s·√[(c/n)(1 − c/n)/n] ) ≈ 1 − α (1)

m = s·√[(c/n)(1 − c/n)/n] (2)

E = m / (c/n) (3)

Here, c is the number of observed actions (i.e., purchases or software installations),
and n is the number of observed clicks which users made on the advertisement. p is
the true action rate estimated by c/n. s is the approximate value of the corresponding
percentile point of the normal distribution. E is the relative error of the estimated
c/n. To calculate the 95% confidence interval (α = 0.05), we set s to 1.96 in this
paper. In the rest of this paper, we propose a method which makes clusters of similar
advertisements whose error E, calculated by Eq. (3), is small. By using clusters with
small error, we try to realize faithful bidding.
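Eqs. (1)–(3) are straightforward to compute. The sketch below is our own illustration (the function names `relative_error` and `is_reliable` are assumptions), flagging clusters whose 95% relative error E exceeds the 0.2 threshold used later in Section 3.

```python
import math

def relative_error(c, n, s=1.96):
    """E from Eqs. (2)-(3): half-width of the 95% confidence interval
    for the action rate c/n, relative to the estimated rate itself."""
    if c == 0 or n == 0:
        return float("inf")       # no actions observed: unreliable
    p_hat = c / n
    m = s * math.sqrt(p_hat * (1 - p_hat) / n)   # Eq. (2)
    return m / p_hat                              # Eq. (3)

def is_reliable(c, n, threshold=0.2):
    return relative_error(c, n) <= threshold

# A small cluster vs. a merged (enlarged) cluster with the same 2% rate:
print(relative_error(2, 100))      # few clicks -> large E
print(relative_error(200, 10000))  # merged cluster -> small E
```

Merging similar advertisements raises n (and c proportionally), which shrinks E by roughly 1/√n; that is exactly why the enlarged clusters become usable decision units.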

2.2 Enlarging Clusters

According to the real data, most actions/clicks ratios are lower than 5%. With such
data, we try to build a model that predicts actions from clicks. We assume that the
number of actions made by customers follows a Poisson distribution [5]. Precisely
speaking, we assume the following Poisson regression:

f(ci) = μi^ci · e^(−μi) / ci! (4)

μi = ni · e^(β0 + Σj=1..J βj xij) (5)

log μi = log ni + β0 + Σj=1..J βj xij (6)

Here, the index i is the cluster-id of the advertisements. Each cluster is formed by
the advertisements whose attributes share common xij. The xij are the attributes
which specify the characteristics of the advertisement and of the users who click on
that advertisement. Table 1 shows examples of attributes. Precisely speaking, since
all the attributes we found are categorical, we use a binary representation of these
attributes. In other words, we actually use attributes xij that each correspond to
attribute values such as "Tokyo" and "Oosaka". If the value of the original attribute
"Region" is "Tokyo", the corresponding xij is set to 1.

Table 1. Example of Attributes

Attribute Value
Age Ex) 10-19, 20-29,,,
Region Ex) Tokyo, Oosaka,,,
Sex Male, Female
User Interest Ex) Fashion, Sports,,,
Contents Ex) Movie, Music,,,
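The binary (one-hot) representation described above can be sketched as follows; the attribute vocabulary and the function name `one_hot` are illustrative assumptions based on Table 1.

```python
def one_hot(record, vocab):
    """Binary representation of categorical attributes, as described in
    the text: each attribute value (e.g. Region=Tokyo) becomes one x_ij
    that is 1 when the record carries that value, else 0."""
    return [1 if record.get(attr) == val else 0
            for attr, vals in vocab.items() for val in vals]

# Toy vocabulary mirroring Table 1 (values abbreviated for illustration):
vocab = {"Age": ["10-19", "20-29"],
         "Region": ["Tokyo", "Oosaka"],
         "Sex": ["Male", "Female"]}
user = {"Age": "20-29", "Region": "Tokyo", "Sex": "Female"}
print(one_hot(user, vocab))   # -> [0, 1, 1, 0, 0, 1]
```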

ci is the number of observed actions; ni is the number of observed clicks which
users made on the advertisement; f(ci) is the probability distribution of the actions
ci; μi is the expectation of ci (actions); β are the regression coefficients. The
intuitions behind the above equations are: 1) we can use the number of clicks to
estimate the number of customer actions; 2) age, region, and the other attributes in
Table 1 affect the process of user behavior and thus the conversion process from
clicks to actions; 3) a Poisson process is a reasonable way to represent this process.
If the number of actions can be modeled by a Poisson process based on the number
of clicks and the attributes xij, equation (5), i.e., μi, estimates the number of actions.
Some of the attributes xij seem to be non-essential. Thus, we try to eliminate
non-essential xij from the equations, using Akaike's information criterion (AIC). We
perform a greedy elimination process: in each step, we remove the attribute xij
whose elimination improves the AIC most. The process terminates when eliminating
none of the remaining xij improves the AIC.
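To make the modeling step concrete, here is a minimal sketch (ours, not the authors' code) of Poisson regression with a log-clicks offset fitted by Newton's method, plus the greedy backward elimination by AIC described above. The synthetic data, the noise-free counts, and all names are assumptions for illustration; a production implementation would typically use a statistics library.

```python
import numpy as np
from math import lgamma

def fit_poisson(X, c, offset, iters=50):
    """Newton's method for Poisson regression with a log offset
    (Eq. (6)). Returns (beta, log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(offset + X @ beta)
        grad = X.T @ (c - mu)                 # score
        hess = X.T @ (X * mu[:, None])        # observed information
        beta = beta + np.linalg.solve(hess, grad)
    mu = np.exp(offset + X @ beta)
    ll = np.sum(c * np.log(mu) - mu - [lgamma(ci + 1) for ci in c])
    return beta, ll

def aic(X, c, offset):
    _, ll = fit_poisson(X, c, offset)
    return 2 * X.shape[1] - 2 * ll

def greedy_eliminate(X, c, offset, names):
    """Backward elimination: repeatedly drop the attribute whose removal
    improves (lowers) the AIC most; the intercept (col 0) is kept."""
    keep = list(range(X.shape[1]))
    best = aic(X[:, keep], c, offset)
    while len(keep) > 1:
        trials = [(aic(X[:, [k for k in keep if k != j]], c, offset), j)
                  for j in keep if j != 0]
        a, j = min(trials)
        if a >= best:
            break
        best = a
        keep.remove(j)
    return [names[k] for k in keep], best

# Tiny synthetic example: 8 ad clusters with clicks n, one real
# attribute (x1) and one irrelevant one (x2); counts are noise-free.
n = np.array([500., 800., 650., 700., 900., 600., 750., 850.])
X = np.column_stack([np.ones(8),
                     [0, 1, 0, 1, 0, 1, 0, 1],    # x1: real effect
                     [0, 0, 1, 1, 0, 0, 1, 1]])   # x2: irrelevant
beta_true = np.array([-3.0, 0.5, 0.0])
c = n * np.exp(X @ beta_true)        # expected actions per cluster
names = ["intercept", "x1", "x2"]
print(greedy_eliminate(X, c, np.log(n), names))
```

On this toy data the irrelevant attribute x2 is dropped (its removal lowers the AIC by 2 without hurting the likelihood) while x1 survives, mirroring the selection behavior described in the text.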

3 Experimental Results
To show the advantage of the proposed method, we have applied it to the data
shown in Fig. 2. Figs. 3 and 4 show the results. Fig. 3 shows the estimated error for
clusters of advertisements. Here, clusters are formed by grouping advertisements
with the same attributes; all attributes are used to form the clusters in Fig. 3. The
X-axis shows the errors of the clusters, i.e., E as calculated by Eq. (3). The Y-axis
shows the share of actions gained by the advertisements (actions share), the share of
the total cost of the advertisements (spent share), and the cost-effectiveness (actions
share / spent share). For example, the height of the leftmost histogram bars indicates
a low error rate (E < 0.2): the customer actions won by the corresponding
advertisements are 62%, with an error rate of less than 0.2. Although allocating
budget to this segment seems reasonable, the clusters made with all attributes fail to
allocate budget to it: the budget actually used on the same advertisements is only
32%. The result shown in Fig. 3 is our starting point for improvement.

[Figure: advertising result data (the number of advertisements is 35,733). Bars show
actions share, spent share, and cost-effectiveness against the maximum error rate E
of the measured cost-effectiveness in the 95% confidence interval (bins ≤0.2, ≤0.4,
≤0.6, ≤0.8, ≤1.0, ≤1.2, >1.2).]

Fig. 3. Reliability of Current Operation

Fig. 4 shows the process of improvement by our proposed method. The data shown
in Fig. 2 are based on 35,733 advertisements of 177 clients. To produce Fig. 4, we
applied the method to the data of 177 advertisements of one client company. The
reason we used the data of only one client is that the value of actions/clicks varies
across industries. For example, the value of actions/clicks for cosmetics is far larger
than that for real estate; mixing the results of such industries would make the figure
unclear. In Fig. 4, the X-axis is the error of the estimated cost-effectiveness of
operations (E of equation (3)). The Y-axis is the cost-effectiveness (total actions /
total cost of the advertisements). The size of each circle is the spent share (total cost
of the advertisement / total cost of all advertisements). Fig. 4(a) shows the results of
clusters formed with all attributes xij (i.e., the starting point). Fig. 4(b) shows the
results of clusters formed with the attributes xij selected by the proposed method.
Fig. 4(c) shows the results of clusters formed with randomly

[Figure: result data of one advertiser, plotted as cost-effectiveness against the
maximum error rate E of the measured cost-effectiveness in the 95% confidence
interval; circle size is cost share.
(a) Full attributes: no advertisement cluster is reliable, because each cluster has little
data.
(b) Only attributes selected by our method: 86% of the cost becomes reliable, and
operators can discover both effective and ineffective clusters that are reliable
enough, so they can decide on the allocation.
(c) Randomly selected attributes: 78% of the cost becomes reliable, but the
effectiveness of the sufficiently reliable clusters is similar, so operators cannot
decide on the allocation.]

Fig. 4. Effect of Enlarged Cluster

selected attributes xij, for comparison purposes. As shown in the figures, using all
attributes results in too many clusters, and the E of every cluster is larger than 0.2
(see Fig. 4(a)). Although Fig. 4(a) shows results with slightly larger clusters than
those used in Fig. 2, no cluster has E less than 0.2. From a practical viewpoint, E
larger than 0.2 is too large; thus, none of the results shown in Fig. 2 has sufficient
accuracy. On the contrary, 86% of the results in Fig. 4(b) have E less than 0.2. This
shows a clear improvement in accuracy. Moreover, it shows that the one large
cluster holding 52% of the advertisements has a clear advantage in cost-effectiveness
over another large cluster with 34% of the advertisements. Note that this
improvement cannot be achieved by random attribute selection (Fig. 4(c)): there,
78% of the results have E less than 0.2, but the resulting clusters have no clear
cost-effectiveness advantage over other clusters. Thus, we cannot use the results of
Fig. 4(c), i.e., of randomly selected attributes.

4 Conclusion
In this paper, we have proposed a method to realize faithful bidding for web
advertisements. The characteristics of the proposed method are:

– Enlargement of the data clusters by removing non-essential attributes during the
clustering phase.
– A statistical index is used to select the non-essential attributes: Poisson regression
analysis and the AIC are the theoretical background for this selection.

The experimental results show:

– The current operators' use of data is unreliable. In fact, 67% of current bidding
operations do not have a sufficient amount of data.
– By using the proposed method, the advertisement value of a bid becomes clear.
For example, the method could find a cluster that has clear cost-effectiveness
over other clusters.

Acknowledgments. This work was partly supported by JSPS KAKENHI


Grant Number 25280114.

References
1. Schlosser, A.E., Shavitt, S., Kanfer, A.: Survey of Internet users’ attitudes toward
Internet advertising. Journal of Interactive Marketing 13(3), 34–54 (1999)
2. Manchanda, P., Dube, J.-P., Goh, K.Y., Chintagunta, P.K.: The Effect of Banner
Advertising on Internet Purchasing. Journal of Marketing Research 43(1), 98–108
(2006)
3. Shabbir, G., Niazi, K., Siddiqui, J., Shah, B.A., Hunjra, A.I.: Effective advertising
and its influence on consumer buying behavior. MPRA Paper No. 40689 (August
2012)
4. Hogg, R.V., McKean, J.W., Craig, A.T.: Introduction to Mathematical Statistics,
6th edn. Pearson Education, Inc. (June 2004)
5. Dobson, A.J.: An Introduction to Generalized Linear Models, ch. 9, 3rd edn. Chap-
man and Hall/CRC (November 2001)
Social Media for Business
An Evaluation Scheme for Performance Measurement
of Facebook Use
An Example of Social Organizations in Vienna

Claudia Brauer1, Christine Bauer2, and Mario Dirlinger3


1 Management Center Innsbruck, Innsbruck, Austria &
Vienna University of Economics and Business,
Department of Information Systems & Operations, Vienna, Austria
2 Vienna University of Economics and Business,
Department of Information Systems & Operations, Vienna, Austria
3 WUK Bildung und Beratung, Vienna, Austria
[email protected],
[email protected], [email protected]

Abstract. Online social networks, and Facebook in particular, have evolved
from a niche to a mass phenomenon. Organizations have recognized the impor-
tance of using Facebook to achieve their organizational goals. Still, the literature
lacks a systematic evaluation scheme for measuring the performance of an
organization’s Facebook use. When investigating how organizations use Face-
book, research tends to focus on for-profit organizations, overlooking the way
social organizations use Facebook. This article introduces an evaluation scheme
that comprises nine categories of performance measurement. Applying the
scheme to Facebook use by social organizations in Vienna, we demonstrate
the scheme’s applicability. In addition, by using various indicators and bench-
marks, we evaluate the level of sophistication of each organization’s use of
Facebook. We investigated all 517 social organizations based in Vienna, across
all fields of practice, based on publicly available Facebook data from January to
June 2012. The analysis reveals that the majority of social organizations are
beginners at utilizing Facebook’s potential.

Keywords: Facebook, online social networks, performance measurement,
social organizations, evaluation scheme.

1 Introduction
Online social networks have evolved from a niche to a mass phenomenon that epito-
mizes the digital era [1]. With a daily average use of 30 to 60 minutes [2] by one
billion users [3], the world’s largest social network, Facebook, has become an integral
part of everyday life [3]. In recent years, organizations have recognized the impor-
tance of using Facebook to achieve their organizational goals. Research on the use of
Facebook tends to focus on for-profit companies or end users, and rarely investigates
how social organizations use Facebook, especially in German-speaking regions.

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 121–132, 2014.
© Springer International Publishing Switzerland 2014

The few existing studies mainly discuss the general importance of social media for
social organizations (e.g., [4-6]). Because these studies commonly use qualitative
research methods, there are few quantitative results on the use of Facebook in social
organizations. For example, Waters [7] investigated the use of social media in non-
profit organizations. The analysis of expert interviews and focus groups showed that
social organizations use Facebook to build and maintain relationships with their
stakeholders. Other studies, in contrast, have revealed that social organizations use
Facebook primarily to describe the organization but do not leverage the interaction
possibilities and networking opportunities that Facebook offers. Furthermore, re-
search shows that the majority of social organizations start using social media without
having an integrated social media strategy or a sophisticated Facebook strategy. Most
studies on Facebook use in social organizations come from the United States (e.g.,
[8]); in German-speaking regions, empirical research on that topic is scarce. Annually
since 2009, Kiefer [5] has investigated the use of online social networks in a cross-
sectional study of 60 German non-profit organizations [5, 9, 10]; however, this
research only considers organizations in three fields of practice (environmental/nature
protection, international affairs, social affairs). While Kiefer’s work identifies
Facebook as the strongest online social network among non-profit organizations, it has
not yielded deeper insights into the use and the development potential of online
social networks. To date, there is no scientific work based on real data that investi-
gates the use and the development potential of Facebook for social organizations.
Against this background, the present article is dedicated to the following research
questions: How can the use of Facebook be evaluated in terms of performance mea-
surement? How do social organizations perform with respect to their use of Face-
book? To what extent are these organizations utilizing Facebook’s potential? This
article introduces an evaluation scheme that includes nine categories of performance
measurement. Using social organizations in Vienna as our example, we demonstrate
the scheme’s applicability and, with various indicators and benchmarks, we evaluate
the level of sophistication of each organization’s use of Facebook. We investigated all
social organizations based in Vienna (N=517), including those in all fields of practice,
based on publicly available Facebook data from 1 January 2012 to 30 June 2012. We
analyzed the organizations’ use of the various Facebook functionalities as well as the
2479 publicly available Facebook posts for the respective time period. Due to the
topic’s relevance and the lack of comparative studies, this research contributes to both
science and practice. The next section presents a literature review of Facebook use by
non-profit organizations and discusses performance measurement of this use. Subse-
quently, the data collection is described and the research results and evaluation
scheme are presented. Finally, research results are discussed and new fields of
research are identified.

2 Related Work

In this section, we present related work concerning online social networks, with a
focus on Facebook use by non-profit organizations. Then, we describe performance
metrics for measuring the success of a Facebook page for social organizations.

2.1 Facebook Use in Non-profit Organizations


Some studies have already investigated the importance of social media for non-profit
organizations [4-6, 9-12]. For example, Waters [7] revealed that non-profit organiza-
tions use Facebook to interact with their stakeholders and to build and maintain rela-
tionships with relevant stakeholders. Although some studies have investigated the use
of online social networks for social organizations in particular, little research has fo-
cused on the use of Facebook. For example, Waters, Burnett, Lamm and Lucas [8]
studied the importance of Facebook based on a content analysis of 275 randomly
selected non-profit organizations in the United States. They found that non-profit
organizations do not comprehensively use the information and communication oppor-
tunities of Facebook, and that the majority of social organizations have not yet estab-
lished an integrated Facebook strategy. Other studies have found that non-profit
organizations do not comprehensively use the interaction [5, 8] and networking op-
portunities [10] of Facebook, and that the majority of social organizations have devel-
oped neither an online social media strategy nor a specific Facebook strategy [13].

2.2 Performance Metrics for Measuring Facebook Use


Only a few scientific articles are dedicated to the performance measurement of online
social networks, or Facebook in particular, which may be due to the novelty of the
topic. While some authors refer to performance measurement of any kind of online
social networks under the term “social media analytics”, other authors focus on Face-
book and still use the general term “social media analytics” [14, 15]. In contrast to
academic literature, practitioners (e.g., Jim Sterne, Avinash Kaushik, etc.) and several
associations (e.g., Interactive Advertising Bureau, International Association for
Measurement and Evaluation of Communication, etc.) have deeply discussed the
topic of performance measurement of online social networks, specifically Facebook.
They suggested a variety of performance metrics to measure the success of Facebook
use (e.g., number of “likes” (fans), number of posts, number of photos uploaded,
number of links, number of comments, number of foreign contributions, number and
percentage of responses to posts of other users, etc.). In addition, various metrics have
been developed to compare different online social networks (e.g., virality, interactivi-
ty of posts, use of multiple media in posts). In the present article, we have developed
an evaluation scheme based on these metrics.
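Several of the metrics listed above can be derived from publicly visible post data; a minimal sketch, assuming per-post records with field names of our own choosing (not a Facebook API):

```python
from statistics import mean

def page_metrics(posts):
    """posts: one dict per public post with 'likes', 'comments', 'shares',
    and 'has_photo' (illustrative field names, not a Facebook API)."""
    n = len(posts)
    return {
        "posts": n,
        "avg_likes_per_post": mean(p["likes"] for p in posts),
        "avg_comments_per_post": mean(p["comments"] for p in posts),
        # a simple virality proxy: how often a post is passed on
        "avg_shares_per_post": mean(p["shares"] for p in posts),
        # multimediality: share of posts carrying a photo
        "photo_ratio": sum(p["has_photo"] for p in posts) / n,
    }

m = page_metrics([
    {"likes": 6, "comments": 1, "shares": 2, "has_photo": True},
    {"likes": 2, "comments": 0, "shares": 0, "has_photo": False},
])
print(m["avg_likes_per_post"], m["photo_ratio"])
```

Benchmarks such as those used in Section 5 can then be expressed as thresholds over these per-page numbers.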

3 Research Procedure
In order to answer the research questions, we conducted an empirical study of Face-
book use among social organizations in Vienna. Our analysis is based on publicly
accessible data, from which we calculated the various performance metrics.

3.1 Research Sample


Our first step was to retrieve the names of all social organizations in Vienna that were
registered in the online database, “Social Austria”, of the Federal Ministry of Labour,
Social Affairs and Consumer Protection; this resulted in a set of 1682 social organiza-
tions based in Vienna (retrieved on 12 April 2012). After removing organizations
from the data set that were assigned to multiple fields of practice, we had a list of 517
social organizations. 25 organizations were removed from the list because they were
either not within the scope of the definition of a social organization by Dimmel [16]
or were already closed. Then, for every organization on the list, we investigated
whether it had registered a Facebook page. Only 73 of the 492 (14.8%) social
organizations in Vienna had their own Facebook page. For 127 (25.8%) organizations, the
umbrella organization or the carrier of the organization operated the Facebook page.
18 organizations used Facebook via a “Facebook personal profile” and 104 via “Fa-
cebook Community”. 292 social organizations (59.4%) did not have a Facebook page.

3.2 Coding Schemes for the Analysis of Facebook Pages and Posts
The coding scheme for the analysis of the Facebook pages was developed ex ante
based on Waters, Burnett, Lamm and Lucas [8]1. Using this coding scheme, the vari-
ous applications within Facebook (e.g., “information”, “views”, and “applications”)
were analyzed. In addition, the Facebook pages were analyzed to determine which
applications, out of all those offered, were used by the social organizations. Further-
more, for deeper insights into how social organizations use Facebook, we conducted a
content analysis of the posts in the organizations’ Facebook timelines (all posts from
1 January 2012 to 30 June 2012). The coding scheme was developed inductively from
raw data and was adapted during the coding phase. For every Facebook post, we cap-
tured a formal description and a description of the content. The formal information
included the date of the entry, the number of “likes”, the number of comments, and
the sharing frequency of the post within Facebook. Regarding the content of posts, we
recorded whether the posts were manually entered or automatically retrieved (for
instance via other online social networks), and whether they contained links, photos,
videos, or audio files. Finally, we classified all Facebook posts by topic.
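A minimal sketch of how one coding-scheme record per post might be represented (field names are our own shorthand for the formal and content descriptions above):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CodedPost:
    """One coding-scheme record per Facebook post (field names are
    illustrative, mirroring the formal and content descriptions)."""
    posted_on: date          # formal: date of the entry
    likes: int               # formal: number of "likes"
    comments: int            # formal: number of comments
    shares: int              # formal: sharing frequency within Facebook
    auto_retrieved: bool     # content: auto-imported from another network?
    has_link: bool
    has_photo: bool
    has_video: bool
    has_audio: bool
    topic: str               # content: topic classification

post = CodedPost(date(2012, 3, 14), likes=5, comments=1, shares=0,
                 auto_retrieved=False, has_link=True, has_photo=True,
                 has_video=False, has_audio=False, topic="event announcement")
print(post.topic)
```

The 2479 coded posts can then be aggregated per organization to compute the indicators used in Sections 4 and 5.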

4 Research Results

4.1 Fields of Practice


As can be seen from Table 1, social organizations in the “Multicultural / Internation-
al” (28.6%), “Work / Occupation” (22.9%), and “Migration” (21.8%) fields of
practice use Facebook to a great extent. However, these percentages have a limited
significance, because the number of organizations varies considerably between the
different fields of practice. Looking at the absolute values, the fields of practice of
“Social general” (n=39), “Health / Disease” (n=31), and “Work / Occupation” (n=30)
have the most Facebook pages. The fields of practice of “Delinquency” (22 organiza-
tions) and “Administration” (13 organizations) are hardly represented via Facebook.
Although the field of practice of “Family / Partner / Single parents” has a total of 137
organizations, the percentage of those social organizations with a Facebook page is
relatively low (10.9%, n=15). Moreover, there is a significant correlation (Pearson
correlation, p<0.01) between the number of organizations per field of practice and the
use of a Facebook page.

1 The coding schemes can be requested from the authors.

Table 1. Social organizations ranked by percentage of Facebook pages per field of practice

Field of practice                # of organizations      # of organizations      Share of Facebook pages
                                 per field of practice   with a Facebook page    per field of practice
Multicultural / International 28 8 28.6%
Work / Occupation 131 30 22.9%
Migration 55 12 21.8%
Education 85 18 21.2%
Social general 185 39 21.1%
Health / Disease 161 31 19.3%
Housing / Accommodation 62 11 17.7%
Psyche 121 21 17.4%
Disability 188 28 14.9%
Children / Young adults 178 26 14.6%
Senior 86 12 14.0%
Men / Women 126 17 13.5%
Addiction 60 7 11.7%
Consumer / Legal regulations 44 5 11.4%
Family / Partner / Single parents 137 15 10.9%
Delinquency 22 1 4.5%
Administration 13 0 0.0%
Total 1682 281 -
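The reported positive correlation can be sanity-checked from Table 1 alone (plain Pearson coefficient over the 17 fields of practice; significance testing omitted):

```python
import math

# (organizations per field, organizations with a Facebook page), from Table 1
fields = [(28, 8), (131, 30), (55, 12), (85, 18), (185, 39), (161, 31),
          (62, 11), (121, 21), (188, 28), (178, 26), (86, 12), (126, 17),
          (60, 7), (44, 5), (137, 15), (22, 1), (13, 0)]

def pearson(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs))
    return cov / (sx * sy)

print(round(pearson(fields), 2))  # strongly positive, consistent with p<0.01
```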

4.2 Design of Facebook Pages and Use of Applications


On their Facebook pages, the majority of the analyzed social organizations provide a
description of the organization (84.9%, n=62), identify their target groups (79.5%,
n=58), and provide contact information (80.8%, n=59). Almost all social organiza-
tions link their Facebook page to their website (93.2%, n=68). Few organizations
link in the notification area of the Facebook page to other online social communica-
tion channels (11%, n=8). Of those that do, the organizations have linked their Face-
book page to Foursquare (n=3), YouTube (n=2), Twitter (n=2), MySpace (n=1) and
Flickr (n=1). The photo application is the most commonly used Facebook application.
During the investigation period, 1360 photos were uploaded, with an average of 23
photos uploaded per social organization. The events application is also highly util-
ized (50.7%, n=37). About a third of the social organizations have integrated the
geographic map application, where the location of the organization is automatically
shown on a map (34.2%, n=25). In contrast, donation applications (4.1%, n=3), vid-
eos (15.1%, n=11), and notes (6.8%, n=5) are hardly integrated into the Facebook
pages. Individualized Facebook applications (e.g., netiquette, mission statement,
offer, jobs, petitions, invitations, catalogue order, charity event, blog, and newsletter)
are used by some organizations (21.9%, n=16). The group application is not used by
any social organization. The number of “likes” is a key metric for measuring the suc-
cess of a Facebook page as well as an organization’s Facebook activities. The aver-
age number of “likes” per organization is 672 (sd=1,403; max=8,066; min=1). In
contrast, Waters, Burnett, Lamm and Lucas [8] found in their study an average num-
ber of only 193 (sd=547.71; max=6,062) “likes” per organization. Moreover, in the
present study, a significant correlation (Pearson correlation, p<0.01) between
the number of “likes” and the field of practice was determined. Another key metric is
the number of “people talking about this” per post. This metric is an indicator of the
interactivity on a Facebook page within the previous seven days. The studied social
organizations had an average “talking about” number of 14 (sd=29.74) during the
investigation period, with 27 organizations having a “talking about” number of zero.
Two social organizations reached a value for “talking about” of more than 100 (171
and 125, respectively). Due to the novelty of this Facebook application, there are
currently no benchmarks published. In total, 2479 posts were published by the social
organizations on their Facebook pages during the investigation period, which corres-
ponds to an average of 34.43 posts (n=72, sd=45.62) per organization. 13 organiza-
tions (18.1%) did not publish any Facebook posts during the investigation period.
One social organization published 308 (max. value) posts within that time, which
corresponds to a frequency of 1.7 messages per day. Considering that the second-
ranked organization published only 154 Facebook posts, the organizations’ usage
behavior is clearly diverse. The average daily post frequency of the social organiza-
tions was 0.19, which illustrates the discrepancy in posting behavior between the
leading organization and the other organizations.

4.3 Content of Posts


Most Facebook posts concerned social policy issues (n=402), announcements of an
organization’s events (n=401), and product / service offers (n=393). Still, only half of
the organizations (51.4%, n=37) posted content about social policy issues during the
investigation period. Furthermore, the economic importance and impact of social
organizations is reflected by their high demand for employees [17]; few social organi-
zations, however, announced job vacancies via Facebook (n=19). In contrast, the mes-
sages application was often used by the organizations to provide information about
their services and products. More than a tenth of all posts contained information about
an organization’s own offers (11.3%, n=284). Furthermore, the social organizations
often announced internal and external events via Facebook (401 posts; 16.2%). This
number only includes posts from 47 (out of 73) social organizations, since 26 organi-
zations never announced an event via Facebook. Still, the rather high frequency of
events posting may be due to imitation among competitors or attempts to establish an
opinion leadership. The relationship between event announcements and follow-up
news of the event (2:1) illustrates that there is room for improvement concerning fol-
low-up on the events. The high number of Facebook posts about opinions on social
policy issues reflects the essential goal of social organizations and demonstrates that
Facebook is used as an external communication channel rather than as a tool for
communicating with internal stakeholders. The relatively low number of posts about
issues of organizational structure also indicates that Facebook is used for external
rather than internal communication. Fundraising is another of the social organiza-
tions’ most frequent post topics (190 posts; 7.7%). Fundraising posts were published
by 39 of the 73 social organizations. In these posts, the organizations call for dona-
tions, report on fundraising activities and fundraising dedications, and express thanks
to donors (139 posts; 5.6%). During the investigation period, the social organizations
published an average of 2.64 posts about fundraising issues. In addition, three social
organizations have implemented a specific Facebook application for soliciting dona-
tions. Overall, Facebook’s potential for fundraising is not being exploited to its full
extent; there is room for improvement. Examples of individual success stories were
published 30 times out of all the Facebook posts (1.2%). Few posts dealt with volun-
teer management (2.4%, n = 59). 60 posts (2.4%) included greetings for holidays or
seasonal events. Approximately 3% of the posts contained humorous pictures, videos,
and recommendations for cultural events.

4.4 Interactivity and Virality


Our analysis of the number of “likes”, number of comments, and frequency of shared
posts provides information about each organization’s level of interaction with Face-
book users [18]. In the analyzed period, an organization’s posts received an average of
287 “likes” in total (sd=723.78), which corresponds to 4.46 “likes” per post. 17 social
organizations did not receive any “likes”; however, 13 of those organizations had not
published any posts within the investigation period. 23 social organizations did not
receive any comments on their posts in their Facebook timelines. The highest number
of Facebook comments received by a single organization was 359, a much higher
number than all the other social organizations received (m=26.1, sd=59.09). The
highest number of “shares” (of comments) and the highest number of responses to
posts that were written by users (m=4.88) were achieved by the same social organiza-
tion. Further analysis shows that a high frequency of self-written posts does not
necessarily indicate a high interactivity with users.

4.5 Relation between Self-written Posts and Posts Written by Other Users
The relationship between self-written posts and those written by other users is a key
metric of an organization’s interaction with Facebook users [18]. During the investi-
gation period, 15.5% of posts were written by users, and 35 social organizations did
not receive any posts written by users. In this context, it should be mentioned that 12
organizations deactivated the possibility for users to respond to posts. Overall, posts
written by other users resulted in an average of 8.33 “likes” per post and 0.74 com-
ments per post. Posts written by users had reached a total of 391 “likes” and 143
comments. In comparison, the responses to posts written by users had lower interac-
tivity impact and achieved on average only 0.86 “likes” and 0.31 comments. Another
indicator of a successful Facebook page is a high number of posts by users that were
commented on by the organization [18]. The analysis revealed that 70.4% of posts
written by users were marked with “like” or commented on by the respective organi-
zations. Other Facebook users responded significantly more often to user-generated
posts with “likes” (69.7%, n=318) or comments (17.8%, n=81) from the organiza-
tions, compared to user posts without reactions by the social organizations, where a
total of only 16% of user posts had been marked with “like” (n=73) and 13.6% of
posts were commented on (n=62).

4.6 Multimediality of Facebook Posts


More than a fifth of the studied Facebook posts contained photos (n=12) or links to
photos or to photo-sharing portals (outside of Facebook) (n=4). 79.1% of the posts did
not contain any photos or links to photos, although, according to Facebook, posts with
attached photos achieve about 120% more interaction with Facebook users [19]. This
is also reflected in our data. Posts with photos resulted in 3.07 times more “likes” and
3.02 times more comments than posts without photos. Posts with photos were also
more often shared than those without photos. In general, videos were rarely used. 105
of the 2479 posts embedded videos or linked to videos on specialized social media
platforms such as YouTube and Vimeo (4.2%). The video application of Facebook
was only used in 5 posts. In total, more than half of the posts (53.9%, n=1336) in-
cluded links to other online services (outside of Facebook). Data suggests that 56.2%
of the social organizations linked in at least one post (n=41) to their organization's
official website. 308 links (34.2%) referred to external websites containing press
releases or press articles. Interestingly, 69.5% of all links to press articles or press
releases were published by only three social organizations.

4.7 Links from Facebook to Other Online Social Networks


Various indicators can be used to analyze whether an organization has implemented
an integrated social media strategy. Almost all organizations linked to the organiza-
tion's website (93.2%, n=68) in the notification area. More than half of the organiza-
tions linked via posts to the organization's website (56.9%, n=41). Only 8
organizations linked to other social media channels on their “about” pages. 45 of the
73 organizations (61.6%) had implemented a link from their website to their Face-
book page and 21 organizations (28.8%) used social plug-ins that provide “like” and
“share” buttons on their websites. 40 social organizations implemented such links on
a prominent page (e.g., the homepage) of their websites, which indicates that Face-
book has a high relevance for these organizations.

5 Evaluation Scheme and Results

Based on the indicators described in Section 4, an evaluation scheme was developed
to assess the developmental stage of each organization’s Facebook page: “Beginner”,
“Advanced”, “Intermediate”, or “Expert”. The presented indicators (Section 4) were
grouped into nine categories. Category 1 evaluates the existence of an organization
description and contact data in the information area of the Facebook page. Category 2
describes whether a social organization uses a profile and a cover photo. Category 3
characterizes the use of photos. Based on the median of uploaded photos within the
investigation time period (as a benchmark), at least 23 photos have to be uploaded to


Facebook Photo View to achieve the maximum 2 points in this category. Category 4
refers to the number of “likes”, taking into account the date of registration of the Fa-
cebook page. Therefore, the minimum of “likes” was defined as 0.5 “likes” per day
within the first three years (again based on our data set, where the minimum value of
0.5 lies between the median and mean of the “like”, taking into account the organiza-
tion’s registration date). Category 5 assesses whether an organization uses applica-
tions such as the map, events, or fundraising applications. Category 6 refers to the
frequency of self-written posts. The post frequency should be at least one post per
week, with a maximum of one post per day; this range corresponds to, during the
investigation period, a minimum of 25 posts and a maximum of 181 posts [20]. High-
er post frequencies result in lower interaction rates; thus, one post per day is defined
as the maximum value. Category 7 analyzes the average number of responses per
Facebook post. The minimum values per post were set to at least 3 “likes”, 0.3 com-
ments, or 0.3 “shares”. These numbers were derived from the means and medians of
the responses to the respective posts (as discussed in Section 4). Category 8 evaluates
whether a minimum percentage of the posts, as recommended by Facebook, include
photos. 20.1% of all analyzed Facebook posts contain photos; thus, the respective
organizations are assigned points if at least every fifth post contains a photo. Catego-
ry 9 analyzes Facebook users’ reactions to posts written by users based on the number
of “likes” and number of comments. In 7 of the 9 categories, two points are achieva-
ble (see Table 2): These categories describe the basic requirements for adequate use
of a Facebook page. We consider the use of applications (Category 5) and the integra-
tion of photos into posts (Category 8) as advanced Facebook use. Accordingly, we
weighted these indicators less than the basic requirements in our evaluation scheme.
Therefore, only one point can be achieved in these two categories. Based on this eval-
uation scheme, four stages can be derived as follows: “Beginner” (0-7 points), “Ad-
vanced” (8-10 points), “Intermediate” (11-13 points), and “Expert” (14-16 points).

Table 2. Evaluation Scheme for the Use of Facebook by Social Organizations in Vienna

Category  Description of Category                                        Points  Dimension

1         Description of the organization and contact information       2       Design of page
2         Using a profile and cover picture                             2       (information, views,
3         At least 23 uploaded photos in the Photo View                 2       and applications)
4         Minimum of 0.5 “likes” on the Facebook page per day           2
          during the first three years of use
5         Use of applications (e.g., donations, events, map, etc.)      1
6         Post frequency is at least one post per week and              2       Design of the timeline,
          a maximum of one post per day                                         reaction to Facebook
7         Minimum requirements of the average responses per post:       2       posts written by
          3 “likes”, 0.3 comments, 0.3 “shares”                                 other users
8         At least every fifth post contains a photo                    1
9         100% response rate to comments, criticisms, and               2
          questions in external posts
Total                                                                   16
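Under our reading of Table 2, the scoring and staging could be sketched as follows; the indicator names, and how each indicator is pre-computed, are assumptions on our part:

```python
def facebook_stage(m):
    """Score an organization against the nine categories of the evaluation
    scheme; `m` holds pre-computed indicators (names are our own)."""
    points = 0
    points += 2 * m["has_description_and_contact"]               # category 1
    points += 2 * m["has_profile_and_cover_picture"]             # category 2
    points += 2 * (m["photos_uploaded"] >= 23)                   # category 3
    points += 2 * (m["likes_per_day_first_three_years"] >= 0.5)  # category 4
    points += 1 * m["uses_applications"]                         # category 5 (advanced use)
    points += 2 * (25 <= m["posts_in_period"] <= 181)            # category 6
    points += 2 * (m["avg_likes_per_post"] >= 3                  # category 7
                   or m["avg_comments_per_post"] >= 0.3
                   or m["avg_shares_per_post"] >= 0.3)
    points += 1 * (m["photo_post_ratio"] >= 0.2)                 # category 8 (advanced use)
    points += 2 * (m["user_post_response_rate"] >= 1.0)          # category 9
    # map the score onto the four developmental stages
    for stage, threshold in (("Expert", 14), ("Intermediate", 11), ("Advanced", 8)):
        if points >= threshold:
            return points, stage
    return points, "Beginner"
```

An organization fulfilling every category scores the maximum of 16 points and is staged as “Expert”; fewer than 8 points yields “Beginner”.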

Based on the evaluation scheme, 5 of the 73 organizations (6.8%) were assigned
the maximum of 16 points. In total (Table 3), 11 organizations (15.1%) can be classi-
fied as “Expert”, which means that these organizations fulfilled almost all require-
ments and may be considered as “best practices”. About one-fifth of the social
organizations (20.5%, n=15) received 11 to 13 points, and therefore these organiza-
tions are classified as “Intermediate”. The category “Advanced” includes 14 organiza-
tions (19.2%). A total of 33 organizations were assigned less than 8 points (45.2%)
and therefore are classified as “Beginner”.

Table 3. Result overview concerning the evaluation scheme

Category Organizations (absolute values) Ratio


Beginner 33 45.2%
Advanced 14 19.2%
Intermediate 15 20.5%
Expert 11 15.1%

6 Discussion and Conclusion

Facebook offers social organizations a range of possibilities to help achieve their
organizational goals and build and maintain relationships with stakeholders. So far
there have been no empirical studies about the use and development of Facebook by
social organizations in the European and German-speaking countries. The present
paper contributes to closing this research gap by analyzing all social organizations in
Vienna regarding their Facebook pages and posting behavior. Interestingly, a large
number of posts were simply holiday greetings and expressions of thanks for dona-
tions, which are typical examples of posts by organizations that are less experienced
with social media. Also, a rather low number of social organizations link their Face-
book pages to other social network platforms, which indicates that the development
and implementation of an integrated social media strategy in social organizations in
Vienna is the exception; the potential of online social networks is not being fully uti-
lized. Previous studies [5, 7, 8, 13] have demonstrated both the potential and weak use
of social media for fundraising and volunteer management. The present study con-
firms that social organizations in Vienna have not exhausted Facebook’s potential for
fundraising and volunteer management. The low interactivity rates demonstrate that
the majority of social organizations may improve the formal design and content-
related aspects of their posts. There are more than twice as many beginner organiza-
tions as more experienced ones. The 73 analyzed organizations were classified into
four categories by using a self-developed evaluation scheme. Only 11 organizations
are classified as “Expert” (15.1%). Most organizations have been classified as “Be-
ginner” (45.2%, n=33). This value has to be considered in relation to the total number
of social organizations in Vienna: In the investigation period, only 14.8% of all social
organizations in Vienna had registered a Facebook page. Furthermore, on average the
evaluated organizations had reached 7.8 out of 16 points, which corresponds to the
“Beginner” category. Thus, overall we conclude that social organizations in Vienna
make only limited use of Facebook to achieve their organizational goals. Our evaluation
scheme may be adopted for other organizations. While it may be used as is for eva-
luating the success of Facebook strategies by other social organizations, the reference
values (benchmarks) used in the scheme have to be adjusted to reflect the Facebook
metrics of the industries to which the organizations belong. The present work also has
limitations: Only publicly available data was used, and metrics based on Facebook
Insights could not be taken into account. Furthermore, the present study is limited to
social organizations in Vienna, resulting in regional limitations of the findings. How-
ever, the majority of Austria’s social organizations are located in Vienna, which sug-
gests that the results also have value on a national level. Future research may compare
Facebook use between social organizations and commercial organizations. Moreover,
the development of a comprehensive performance measurement system for measuring
activities in various online social networks is a relevant research topic. With respect
to this, a study about the importance of social media guidelines for social organiza-
tions would be interesting.
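The four-category classification described above can be sketched as a simple scoring function. This is only an illustrative sketch: the study reports a 16-point maximum and the "Beginner" and "Expert" labels, but the point thresholds and the two middle category names below are our own assumptions, not values from the paper.

```python
# Hypothetical sketch of the evaluation scheme: organizations earn up to
# 16 points on formal and content-related criteria and are binned into
# four categories. The cut-off values and the "Novice"/"Advanced" labels
# are ASSUMPTIONS for illustration only.
CATEGORIES = [
    (0, 4, "Novice"),     # assumed lowest tier
    (5, 8, "Beginner"),   # the 7.8-point average reported falls here
    (9, 12, "Advanced"),  # assumed third tier
    (13, 16, "Expert"),
]

def classify(points):
    """Map a 0-16 evaluation score to one of four categories."""
    for low, high, label in CATEGORIES:
        if low <= points <= high:
            return label
    raise ValueError("score must be between 0 and 16")
```

With these assumed thresholds, an organization at the reported average of 7.8 points (rounded to 8) would fall into the "Beginner" category, matching the study's overall finding.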

132 C. Brauer, C. Bauer, and M. Dirlinger

Understanding the Factors That Influence the Perceived
Severity of Cyber-bullying

Sonia Camacho, Khaled Hassanein, and Milena Head

DeGroote School of Business, McMaster University, Hamilton, ON, Canada
{camachsm,hassank,headm}@mcmaster.ca
Abstract. Cyber-bullying is a phenomenon that involves aggressive behaviors
performed through Information and Communication Technologies (ICT) with the
intention to cause harm or discomfort to victims. Researchers have measured
the incidence of cyber-bullying by presenting participants with a list of
behaviors and determining whether they have experienced those behaviors or
the frequency of their occurrence. However, those measures do not take into
account a victim’s perspective on those behaviors. This study draws on the
Transactional Theory of Stress and Coping and introduces the concept of
perceived cyber-bullying severity to measure a victim’s appraisal of
cyber-bullying. This study also proposes a set of antecedents to perceived
cyber-bullying severity, which will be validated using a survey-based study
and structural equation modeling techniques.

Keywords: cyber-bullying, victim, bully, audience, message.

1 Introduction

Cyber-bullying can be defined as hostile or aggressive behaviors performed
through information and communication technologies (ICT) (e.g. Internet
applications, mobile phones) that are intended to harm or inflict discomfort
on others [1]. Although this definition is adopted here, it is important to
note that (i) there is no agreed-upon definition of cyber-bullying in the
literature [2]; and (ii) there is a debate about which elements of the
definition of traditional bullying should be included in, or excluded from,
the definition of cyber-bullying (e.g. the power imbalance between bullies
and victims) [3]. Cyber-bullying can have varied consequences for the
victim, such as low academic scores, social anxiety, social isolation,
self-harm, low self-confidence, and depressive symptoms [4-6]. In extreme
cases, those consequences can lead the victim to commit suicide [7]. Between
2012 and 2013, at least nine teenage suicides were linked to cyber-bullying [8].
Studies in cyber-bullying in the area of Information Systems (IS) have focused
mainly on the prevalence of this phenomenon [9-10] and the potential motivations
and antecedents of online aggression (e.g. gaining social status) [11]. Researchers
in other areas (e.g. psychology, healthcare) have also explored (i) the
outcomes of cyber-bullying (e.g. psychosomatic problems, depression) [2,12],
(ii) the relationship between cyber-bullying and traditional bullying [13],
and (iii) strategies used by victims to deal with cyber-bullying incidents
(e.g. deleting unwanted messages, changing e-mail addresses) [14-15].

F.F.-H. Nah (Ed.): HCIB/HCII 2014, LNCS 8527, pp. 133–144, 2014.
© Springer International Publishing Switzerland 2014
Researchers have used different measures of cyber-bullying, relying mainly on
providing specific behavioral examples of what this phenomenon entails and asking a
global question as to whether individuals have experienced cyber-bullying [16].
Furthermore, some measures have been developed to specifically measure cyber-
victimization [17-18] and those are concerned with the frequency at which certain
behaviors (e.g. insulting language in e-mails) occur. In general, the
cyber-bullying measures used to date are concerned with the incidence of
specific behaviors and do not consider that victims’ perceptions of those
behaviors may vary (e.g. the same behavior may be interpreted as harmless by
some people and as rather hurtful by others) [19]. Moreover, there is a lack
of research studying the degree to which victims perceive cyber-bullying as
being harmful [20].
This study addresses the above gap by introducing the construct of perceived cy-
ber-bullying severity to measure a victim’s evaluation of cyber-bullying. In addition,
this study proposes a set of factors that may affect a victim’s perception of cyber-
bullying severity.

2 Theoretical Background

Lazarus and Folkman (1984) proposed the Transactional Theory of Stress and Coping
(TTSC). They defined psychological stress as a relationship between a person and the
environment that is seen by the person as taxing her resources or threatening her well-
being [21]. Embedded in this definition is the fact that although there may
be objective conditions that can be considered stressors (e.g. natural
disasters, having an argument with a loved one), individuals will vary in
the degree and type of their reactions to these stressors. In order to
understand individuals’ varied reactions when
facing the same stressful situation, it is necessary to understand the cognitive
processes that take place between the stressor and the reaction [21].
TTSC proposes cognitive appraisal as the mediating factor, which reflects the
changing relationships between individuals with certain characteristics (e.g. values,
thinking style) and an environment that must be predicted and interpreted [21]. Spe-
cifically, the theory outlines a primary appraisal of the stressor and a secondary
appraisal of the coping mechanisms available to deal with the stressor [22]. In the
primary appraisal phase, individuals determine if and how the situation is relevant to
their goal attainment or well-being. When the situation negatively affects
goal attainment and/or well-being (i.e. it is stressful), individuals
determine the extent to which
the situation is harming, threatening, or challenging [23]. Harm refers to damage that
has already occurred and threat refers to a future potential damage, while challenge
produces a positive motivation in individuals to overcome obstacles [24]. After the
primary appraisal phase, individuals move to the secondary appraisal phase where
they evaluate their options in terms of coping with the stressful situation [24].
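The two appraisal phases outlined above can be made concrete in code. The function names, boolean inputs, and threshold logic below are our own illustrative simplification of TTSC, not part of the theory itself.

```python
from enum import Enum

class Appraisal(Enum):
    IRRELEVANT = "irrelevant"  # situation does not touch goals or well-being
    HARM = "harm"              # damage that has already occurred
    THREAT = "threat"          # potential future damage
    CHALLENGE = "challenge"    # obstacle paired with positive motivation

def primary_appraisal(affects_wellbeing, damage_done, seen_as_surmountable):
    """Illustrative primary appraisal following TTSC's harm/threat/challenge
    distinction (Lazarus & Folkman, 1984). Boolean inputs are a
    simplification of the cognitive process described in the text."""
    if not affects_wellbeing:
        return Appraisal.IRRELEVANT
    if damage_done:
        return Appraisal.HARM
    return Appraisal.CHALLENGE if seen_as_surmountable else Appraisal.THREAT

def secondary_appraisal(coping_options):
    """Illustrative secondary appraisal: evaluate the coping options
    available (here, trivially, pick the first one, if any)."""
    return coping_options[0] if coping_options else None
```

In this sketch, a cyber-bullying episode that touches the victim's well-being and has already caused damage would be appraised as harm, after which the secondary appraisal considers coping options such as deleting messages or reporting the bully.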

The appraisal of a stressful situation is affected by certain situational
characteristics. In particular, TTSC identifies three factors, relevant in
the context of this study, that can affect an individual’s assessment of a
situation as harming, threatening, or challenging. The first factor is
novelty, which refers to situations with which the individual
has no experience. Completely novel situations are rare since individuals may have
information about situations from others. However, if an individual has not
yet experienced a situation (i.e. a novel situation), she will consider it
stressful if it is already associated in her mind with harm or danger (e.g.
based on others’ experiences). The second factor is uncertainty, which
refers to an individual’s confusion
about the meaning of the situation. Uncertain situations are considered highly stress-
ful. The final factor is duration, which refers to how long a stressful event persists.
Enduring or chronic stressful situations may affect an individual psychologically and
physically [21].
TTSC offers a suitable framework to study a victim’s assessment of a cyber-bullying
episode. A cyber-bullying episode may constitute one action (e.g. posting a comment
on a public forum) or several actions related to the same issue (e.g. sending several
threatening text messages over a certain period of time). Cyber-bullying episodes are
situations that may be appraised as harmful or threatening to certain extents,
depending on the characteristics of the situation (i.e. the message received by the
victim, the medium through which the message is sent, the bully’s characteristics, and
the audience witnessing the episode) and the characteristics of the victim (e.g.
neuroticism and self-esteem). The appraisal of these episodes as stressful may affect
negatively the victims (e.g. negative emotions, depressive symptoms) and may affect
their experience with information and communication technologies through which
cyber-bullying occurs (e.g. Facebook).

3 Research Model and Hypotheses

The proposed research model is shown in Figure 1. The constructs and hypotheses
included in the model, along with their appropriate support, are described below.

3.1 Perceived Cyber-bullying Severity

Perceived Cyber-bullying Severity (PCS) is a new construct introduced to
measure a victim’s appraisal of a cyber-bullying episode (a stressful
situation), as per TTSC.
The assessment of a cyber-bullying episode varies by the context of the situation (i.e.
message, bully, medium, and audience) and the victim characteristics [19] as
explained below (see section 3.2). The degree of variability of the assessment of a
specific episode by a victim is consistent with the primary appraisal involved in TTSC
[21], whereby victims evaluate whether the cyber-bullying episode is relevant to their
goals or well-being.
Although some studies have explored victims’ perceptions of the harshness of
cyber-bullying compared to traditional bullying [16], a measure for the victim’s per-
ception of the severity of a cyber-bullying episode has not been developed.
Studying a victim’s appraisal of cyber-bullying is important in pursuing a
rigorous understanding of the cyber-bullying phenomenon and its impacts, as
the victims’ perspective is critical to understanding the impacts of an
episode on their psychosocial functioning [25].

[Figure: the research model diagram depicts four groups of antecedents of
perceived cyber-bullying severity — Message (saliency, sensitivity,
frequency, offensiveness), Medium (perceived importance, awareness of
provision of recourse), Victim (neuroticism, self-esteem), Bully (power
differential, relationship strength), and Audience (size, sensitivity,
reaction) — with hypothesized paths (H1–H5 and others) pointing to the
central construct, perceived cyber-bullying severity.]

Fig. 1. Research model
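The numbered hypotheses in the research model can be written down as a simple data structure, which is useful later for checking estimated structural-model coefficients against the predicted signs. The construct labels follow the text, but the snake_case variable names and the sign-checking helper are our own sketch; the model also contains bully and audience paths whose hypotheses are stated later in the paper.

```python
# Hypothesized directional effects on Perceived Cyber-bullying Severity
# (PCS), transcribed from the research model. Only the numbered hypotheses
# stated in the text (H1-H5) are listed here.
HYPOTHESES = {
    "H1": ("message_harshness", +1),   # positively related to PCS
    "H2": ("medium_importance", +1),   # positively related to PCS
    "H3": ("recourse_awareness", -1),  # negatively related to PCS
    "H4": ("neuroticism", +1),         # positively related to PCS
    "H5": ("self_esteem", -1),         # negatively related to PCS
}

def sign_supported(hypothesis, estimated_coefficient):
    """Check whether an estimated path coefficient matches the hypothesized
    sign (ignoring statistical significance for simplicity)."""
    _, expected_sign = HYPOTHESES[hypothesis]
    return estimated_coefficient * expected_sign > 0
```

For example, a negative estimated coefficient for recourse awareness would be consistent with H3, while a negative coefficient for message harshness would contradict H1.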

3.2 Factors That Influence PCS

According to TTSC, it is the appraisal of a particular situation as harmful
or threatening that triggers the need to manage or cope with the situation
[21]. This highlights
the importance of understanding how variables that are relevant to the cyber-bullying
context may affect the appraisal process of the cyber-bullying episode (i.e. percep-
tions of cyber-bullying severity). Past research on cyber-bullying suggests an initial
set of factors that are deemed to be relevant in the appraisal of the severity of a cyber-
bullying episode. These factors are explored below, where the most relevant
characteristics of each are discussed.

Message Harshness. Four characteristics of the message(s) the cyber-bullying
victim receives are explored. The first characteristic is saliency, which
refers to “an attribute
of a particular stimulus that makes it stand out and be noticed” (p. 1) [26]. The misuse
of pictures and videos (i.e. a more salient message) is more stressful for victims than
other forms of cyberbullying such as insults using written messages [27]. The second
characteristic is the sensitivity of the message. Disclosing secrets (i.e. privacy viola-
tion) or embarrassing aspects of the victim’s life is more stressful for victims of cy-
ber-bullying than messages that do not involve aspects of the victim’s real world (e.g.
name calling on a chat room) [27]. The third characteristic is frequency, where the
occurrence of several acts in a cyber-bullying episode is posited to increase the vic-
tim’s PCS compared to a single act [28]. The last characteristic is offensiveness,
where receiving vulgar, angry messages, or threats of real injuries is more stressful for
victims compared to more benign messages [29]. The saliency, sensitivity, frequency,
and offensiveness of the message speak of the harshness of the message content and
are posited to collectively heighten victims’ perceptions of severity in a cyber-
bullying episode. Thus, we hypothesize that:
H1: Message harshness is positively related to PCS

Medium Characteristics. Two characteristics of the cyber-bullying medium are
explored. The first one is the perceived importance of the cyber-bullying
medium for
the victim. Individuals prefer to use certain forms of electronic communication in
order to maintain their social lives [30] and thus, it is expected that victims will per-
ceive a cyber-bullying episode as being more severe if the cyber-bullying medium is
among their preferred communication media. The second characteristic is victims’
awareness of provision of recourse mechanisms available to them through the cyber-
bullying medium. Researchers have found that online buyers rely on institutional
mechanisms such as credit card guarantees for reducing their perception of risk [31].
In the same vein, technology providers have mechanisms built into their platforms
(i.e. the cyber-bullying medium) that can be used by victims to deal with cyber-
bullying episodes (e.g. reporting a bully on Facebook). It is expected that the victim’s
awareness of such recourse provisions will reduce her/his perception of severity of a
cyber-bullying episode. Thus, we hypothesize that:
H2: Perceived importance of the cyber-bullying medium to the victim is positively
related to PCS
H3: Awareness of provision of recourse mechanisms is negatively related to PCS

Victim Characteristics. Two individual characteristics deemed relevant in
explaining victims’ perceptions of bullying [32] are explored. Neuroticism
refers to a personality
trait characterized by insecurity, anxiousness, and hostility [33]. Individuals high in
neuroticism tend to appraise ambiguous situations in a negative manner and perceive
threats in situations where others would not [34]. Self-esteem is the subjective percep-
tion of one’s worth [35]. Individuals with low self-esteem tend to have less confi-
dence to overcome any problems they are faced with, leading them to experience

higher stress in such situations [36]. In light of these arguments, it is expected that
confronted with the same cyber-bullying episode, individuals with low self-esteem or
high neuroticism will perceive it as more severe than others. Thus, we hypothesize
that:
H4: Neuroticism is positively related to PCS
H5: Self-esteem is negatively related to PCS

Bully Characteristics. Two characteristics of the bully are explored. In
terms of power differential, the victim may be afraid of denouncing or
taking revenge on a
person that holds more power than s