Big Data
Computing
Edited by
Rajendra Akerkar
Western Norway Research Institute
Sogndal, Norway
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.
com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
All the visionary minds who have helped create a modern data science profession
Contents
Preface.......................................................................................................................ix
Editor.................................................................................................................... xvii
Contributors.......................................................................................................... xix
Section I Introduction
15. Big Data Application: Analyzing Real-Time Electric Meter Data..... 449
Mikhail Simonov, Giuseppe Caragnano, Lorenzo Mossucca, Pietro Ruiu,
and Olivier Terzo
Index...................................................................................................................... 539
Preface
* http://www.gartner.com/it/content/1258400/1258425/january_6_techtrends_rpaquet.pdf
† http://www.idc.com/
Data. The key to leveraging Big Data is to realize these differences before
expediting its use. The most noteworthy difference is that data are typically
governed in a centralized manner, but Big Data is self-governing. Big Data
is created either by a rapidly expanding universe of machines or by users of
highly varying expertise. As a result, the composition of traditional data will
naturally differ considerably from that of Big Data. Traditional data serve a
specific purpose and must be more durable and structured, whereas Big Data
covers many topics, not all of which yield useful information for the business;
it is thus sparse in relevance and structure.
The technology required for Big Data computing is developing at a sat-
isfactory rate due to market forces and technological evolution. The ever-
growing enormous amount of data, along with advanced tools of exploratory
data analysis, data mining/machine learning, and data visualization, offers
a whole new way of understanding the world.
Another interesting fact about Big Data is that not everything that is con-
sidered “Big Data” is in fact Big Data. One needs to explore deeply into the
scientific aspects, such as analyzing, processing, and storing huge volumes
of data; that is the only way to use the tools effectively. Data developers/
scientists need to know about analytical processes, statistics, and machine
learning. They also need to know how to use specific data to program algo-
rithms. The core is the analytical side, but they also need the scientific back-
ground and in-depth technical knowledge of the tools they work with in
order to gain control of huge volumes of data. There is no one tool that offers
this per se.
As a result, the main challenge for Big Data computing is to find a novel
solution, keeping in mind the fact that data sizes are always growing. This
solution should be applicable for a long period of time. This means that the
key condition a solution has to satisfy is scalability. Scalability is the ability of
a system to accept increased input volume without impacting the profits; that
is, the gains from the input increment should be proportional to the incre-
ment itself. For a system to be totally scalable, the size of its input should not
be a design parameter. Pushing the system designer to consider all possible
deployment sizes to cope with different input sizes leads to a scalable archi-
tecture without primary bottlenecks. Yet, apart from scalability, there are
other requisites for a Big Data–intensive computing system.
Although Big Data is an emerging field in data science, there are very few
books available on the market. This book provides authoritative insights and
highlights valuable lessons learnt by the authors from their own experience.
Some universities in North America and Europe are doing their part to
feed the need for analytics skills in this era of Big Data. In recent years,
they have introduced master of science degrees in Big Data analytics,
data science, and business analytics. Some contributing authors have been
involved in developing a course curriculum in their respective institution
and country. The number of courses on “Big Data” will increase worldwide
because it is becoming a key basis of competition, underpinning new waves
Organization
This book comprises five sections, each of which covers one aspect of Big Data
computing. Section I focuses on what Big Data is, why it is important, and
how it can be used. Section II focuses on semantic technologies and Big Data.
Section III focuses on Big Data processing—tools, technologies, and methods
essential to analyze Big Data efficiently. Section IV deals with business and
economic perspectives. Finally, Section V focuses on various stimulating Big
Data applications. Below is a brief outline with more details on what each
chapter is about.
Section I: Introduction
Chapter 1 provides an approach to address the problem of “understanding”
Big Data in an effective and efficient way. The idea is to make adequately
grained and expressive knowledge representations and fact collections that
evolve naturally, triggered by new tokens of relevant data coming along.
The chapter also presents primary considerations on assessing fitness in an
evolving knowledge ecosystem.
Chapter 2 then gives an overview of the main features that can character-
ize architectures for solving a Big Data problem, depending on the source of
data, on the type of processing required, and on the application context in
which it should be operated.
* http://www.mckinsey.com/Insights/MGI/Research/Technology_and_Innovation/Big_
data_The_next_frontier_for_innovation
Chapter 3 discusses Big Data from three different standpoints: the busi-
ness, the technological, and the social. This chapter lists some relevant initia-
tives and selected thoughts on Big Data.
Intended Audience
The aim of this book is to be accessible to researchers, graduate students,
and to application-driven practitioners who work in data science and related
fields. This edited book requires no previous exposure to large-scale data
analysis or NoSQL tools. Acquaintance with traditional databases is an
added advantage.
This book provides the reader with a broad range of Big Data concepts,
tools, and techniques. A wide range of research in Big Data is covered, and
comparisons between state-of-the-art approaches are provided. This book
can thus help researchers from related fields (such as databases, data sci-
ence, data mining, machine learning, knowledge engineering, information
retrieval, information systems), as well as students who are interested in
entering this field of research, to become familiar with recent research devel-
opments and identify open research challenges on Big Data. This book can
help practitioners to better understand the current state of the art in Big Data
techniques, concepts, and applications.
The technical level of this book also makes it accessible to students taking
advanced undergraduate level courses on Big Data or Data Science. Although
such courses are currently rare, with the ongoing challenges that the areas
of intelligent information/data management pose in many organizations in
both the public and private sectors, there is a demand worldwide for gradu-
ates with skills and expertise in these areas. It is hoped that this book helps
address this demand.
In addition, the goal is to help policy-makers, developers and engineers,
data scientists, as well as individuals, navigate the new Big Data landscape.
I believe it can trigger some new ideas for practical Big Data applications.
Acknowledgments
The organization and the contents of this edited book have benefited from our
outstanding contributors. I am very proud and happy that these researchers
agreed to join this project and prepare a chapter for this book. I am also very
pleased to see this materialize in the way I originally envisioned. I hope this
book will be a source of inspiration to the readers. I especially wish to express
my sincere gratitude to all the authors for their contribution to this project.
I thank the anonymous reviewers who provided valuable feedback and
helpful suggestions.
I also thank Aastha Sharma, David Fausel, Rachel Holt, and the staff at
CRC Press (Taylor & Francis Group), who supported this book project right
from the start.
Last, but not least, a very big thanks to my colleagues at Western Norway
Research Institute (Vestlandsforsking, Norway) for their constant encour-
agement and understanding.
I wish all readers a fruitful time reading this book, and hope that they expe-
rience the same excitement as I did—and still do—when dealing with Data.
Rajendra Akerkar
Editor
Contributors
Michael Schmidt
Fluid Operations AG
Walldorf, Germany

Mikhail Simonov
Advanced Computing and Electromagnetics Unit
Istituto Superiore Mario Boella
Torino, Italy

Olivier Terzo
Advanced Computing and Electromagnetics Unit
Istituto Superiore Mario Boella
Torino, Italy

Arild Waaler
Department of Computer Science
University of Oslo
Oslo, Norway
Introduction
1
Toward Evolving Knowledge Ecosystems for
Big Data Understanding
Contents
Introduction..............................................................................................................4
Motivation and Unsolved Issues...........................................................................6
Illustrative Example............................................................................................7
Demand in Industry............................................................................................9
Problems in Industry..........................................................................................9
Major Issues....................................................................................................... 11
State of Technology, Research, and Development in Big Data Computing... 12
Big Data Processing—Technology Stack and Dimensions.......................... 13
Big Data in European Research....................................................................... 14
Complications and Overheads in Understanding Big Data....................... 20
Refining Big Data Semantics Layer for Balancing Efficiency and Effectiveness............ 23
Focusing......................................................................................................... 25
Filtering.......................................................................................................... 26
Forgetting....................................................................................................... 27
Contextualizing............................................................................................. 27
Compressing................................................................................................. 29
Connecting..................................................................................................... 29
Autonomic Big Data Computing....................................................................30
Scaling with a Traditional Database.................................................................... 32
Large Scale Data Processing Workflows........................................................ 33
Knowledge Self-Management and Refinement through Evolution...............34
Knowledge Organisms, their Environments, and Features........................ 36
Environment, Perception (Nutrition), and Mutagens............................. 37
Knowledge Genome and Knowledge Body............................................. 39
Morphogenesis............................................................................................. 41
Mutation........................................................................................................42
Recombination and Reproduction.............................................................44
Populations of Knowledge Organisms.......................................................... 45
Fitness of Knowledge Organisms and Related Ontologies......................... 46
Some Conclusions.................................................................................................. 48
Acknowledgments................................................................................................. 50
References................................................................................................................ 50
Introduction
Big Data is a phenomenon that hardly any information professional can afford
to ignore these days. Remarkably, application demands and developments in the
context of related disciplines resulted in technologies that boosted data gen-
eration and storage at unprecedented scales in terms of volumes and rates. To
mention just a few facts reported by Manyika et al. (2011): a disk drive capable
of storing all the world’s music could be purchased for about US $600, and
30 billion pieces of content are shared monthly on Facebook (facebook.com) alone.
Exponential growth of data volumes is accelerated by a dramatic increase in
social networking applications that allow nonspecialist users to create a huge
amount of content easily and freely. Equipped with rapidly evolving mobile
devices, a user is becoming a nomadic gateway boosting the generation of
additional real-time sensor data. The emerging Internet of Things turns every
thing into a source of data or content, adding billions of additional artificial and
autonomic sources of data to the overall picture. Smart spaces, where people,
devices, and their infrastructure are all loosely connected, also generate data
of unprecedented volumes and with velocities rarely observed before. An
expectation is that valuable information will be extracted out of all these data
to help improve the quality of life and make our world a better place.
Society is, however, left bewildered about how to use all these data effi-
ciently and effectively. For example, a topical estimate of the number of
data-savvy managers needed to take full advantage of Big Data in the United
States is 1.5 million (Manyika et al. 2011). A major challenge would be finding
a balance between the two evident facets of the whole Big Data adventure: (a)
the more data we have, the more potentially useful patterns it may include
and (b) the more data we have, the less the hope is that any machine-learn-
ing algorithm is capable of discovering these patterns in an acceptable time
frame. Perhaps because of this intrinsic conflict, many experts consider that
Big Data brings not only one of the biggest challenges, but also one of the most
exciting opportunities of the last 10 years (cf. Fan et al. 2012b).
The avalanche of Big Data causes a conceptual divide in minds and opin-
ions. Enthusiasts claim that, faced with massive data, a scientific approach “. . .
hypothesize, model, test—is becoming obsolete. . . . Petabytes allow us to say:
‘Correlation is enough.’ We can stop looking for models. We can analyze the
data without hypotheses about what it might show. We can throw the numbers
into the biggest computing clusters the world has ever seen and let statistical
algorithms find patterns . . .” (Anderson 2008). Pessimists, however, point out
Figure 1.1
Evolution of data collections—dimensions (see also Figure 1.3) have to be treated with care.
(Courtesy of Vladimir Ermolayev.)
the phenomenon of Big Data in their dialog over means of improving sense-
making. The phenomenon remains a constructive way of introducing others,
including nontechnologists, to new approaches such as the Apache Hadoop
(hadoop.apache.org) framework. Apparently, Big Data is collected to be ana-
lyzed. “Fundamentally, big data analytics is a workflow that distills terabytes
of low-value data down to, in some cases, a single bit of high-value data. . . .
The goal is to see the big picture from the minutia of our digital lives” (cf.
Fisher et al. 2012). Evidently, “seeing the big picture” in its entirety is the key
and requires making Big Data healthy and understandable in terms of effec-
tiveness and efficiency for analytics.
In this section, the motivation for understanding Big Data in a way that improves
the performance of analytics is presented and analyzed. It begins by presenting
a simple example which is used throughout the chapter. It
continues with the analysis of industrial demand for Big Data analytics. In
this context, the major problems as perceived by industries are analyzed and
informally mapped to unsolved technological issues.
Illustrative Example
Imagine a stock market analytics workflow inferring trends in share price
changes. One possible way of doing this is to extrapolate on stock price data.
However, a more robust approach could be extracting these trends from
market news. Hence, the incoming data for analysis would very likely be
several streams of news feeds resulting in a vast amount of tokens per day.
An illustrative example of such a news token is:
    LONDON (Reuters) - U.S. planemaker Boeing hiked its 20-year market forecast,
    predicting demand for 34,000 new aircraft worth $4.5 trillion, on growth in
    emerging regions and as airlines seek efficient new planes to counter high
    fuel costs.

Figure 1.2
Semantics associated with a news data token. (The figure is a UML-style diagram relating the concepts Airline, PlaneMaker, MarketForecast, EfficientNewPlane, and Country through relationships such as basedIn–baseOf, sellsTo–buysFrom, hikes–hikedBy, successorOf–predecessorOf, and seeksFor–soughtBy, together with individual assertions such as AllNipponAirways : Airline, B787-JA812A : EfficientNewPlane (fuel consumption 20% lower than others, built >2009, delivered 2012/07/03), Japan : Country, Boeing : PlaneMaker, and New20YMarketForecastbyBoeing : MarketForecast with SalesVolume = 4.5 trillion.)
not answer several important questions for revealing the motives for Boeing
to hike their market forecast:
from Boeing (the rest of Figure 1.2); and a relevant list of emerging regions
and growth factors (not shown in Figure 1.2). The challenge for a human
analyst in performing the task is low speed of data analysis. The available
time slot for providing his recommendation is too small, given the effort to
be spent per news token for deep knowledge extraction. This is one good
reason for the growing demand for industrial-strength technologies to assist in
analytical work on Big Data, increase its quality, and reduce the related effort.
Demand in Industry
Turning available Big Data assets into action and performance is considered
a deciding factor in today’s business analytics. For example, the report by
Capgemini (2012) concludes, based on a survey of more than 600 business
executives, that Big Data use is in high demand across industries.
Interviewees firmly believe that their companies’ competitiveness and
performance strongly depend on the effective and efficient use of Big Data.
In particular, on average,
• Big Data is already used for decision support 58% of the time, and
29% of the time for decision automation
• It is believed that the use of Big Data will improve organizational
performance by 41% over the next three years
The report by Capgemini (2012) also summarizes that the following are the
perceived benefits of harnessing Big Data for decision-making:
Problems in Industry
Though the majority of business executives firmly believe in the utility
of Big Data and analytics, doubts still persist about its proper use and the
availability of appropriate technologies. As a consequence, “We no longer
speak of the Knowledge Economy or the Information Society. It’s all data
now: Data Economy and Data Society. This is a confession that we are no
longer in control of the knowledge contained in the data our systems col-
lect” (Greller 2012).
Capgemini (2012) outlines the following problems reported by their
interviewees:
• Big Data changes the way knowledge is acquired and even defined. As
already mentioned above (cf. Anderson 2008), correlations mined
from Big Data may hint about model changes and knowledge repre-
sentation updates and refinements. This may require conceptually
novel solutions for evolving knowledge representation, reasoning,
and management.
• Having Big Data does not yet imply objectivity, or accuracy, on time. Here,
the clinch between efficiency and effectiveness of Big Data inter-
pretation and processing is one of the important factors. Selecting
Major Issues
Applying Big Data analytics faces different issues related to the characteristics
of data, the analysis process, and also social concerns. Privacy is a very
sensitive issue and has conceptual, legal, and technological implications.
This concern gains even more importance in the context of Big Data. Privacy
is defined by the International Telecommunications Union as the “right of
individuals to control or influence what information related to them may
be disclosed” (Gordon 2005). Personal records of individuals are increas-
ingly being collected by several government and corporate organizations.
These records are usually used for the purpose of data analytics. To facilitate
data analytics, such organizations publish “appropriately private” views
over the collected data. However, privacy is a double-edged sword—there
should be enough privacy to ensure that sensitive information about the
individuals is not disclosed and at the same time there should be enough
data to perform the data analysis. Thus, privacy is a primary concern that
has widespread implications for someone desiring to explore the use of Big
Data for development in terms of data acquisition, storage, preservation,
presentation, and use.
Another concern is the access and sharing of information. Usually private
organizations and other institutions are reluctant to share data about their
clients and users, as well as about their own operations. Barriers may include
legal considerations, a need to protect their competitiveness, a culture of con-
fidentiality, and, largely, the lack of the right incentive and information struc-
tures. There are also institutional and technical issues, when data are stored
in places and ways that make them difficult to be accessed and transferred.
One significant issue is to rethink security for information sharing in Big
Data use cases. Several online services allow us to share private informa-
tion (e.g., facebook.com, geni.com, linkedin.com), but outside record-level
access control we do not comprehend what it means to share data and how
the shared data can be linked.
Managing large and rapidly increasing volumes of data has been a chal-
lenging issue. Earlier, this issue was mitigated by processors getting faster,
which provided us with the resources needed to cope with increasing volumes
of data. However, a fundamental shift is underway: data volume is scaling
faster than computing resources. Consequently, extracting sense from data
at the required scale is far beyond human capability.
So, we, the humans, increasingly “. . . require the help of automated systems
to make sense of the data produced by other (automated) systems” (Greller
2012). These instruments produce new data at comparable scale—kick-start-
ing a new iteration in this endless cycle.
In general, given a large data set, it is often necessary to find elements
in it that meet a certain criterion which likely occurs repeatedly. Scanning
the entire data set to find suitable elements is obviously impractical. Instead,
index structures are created in advance to permit finding the qualifying ele-
ments quickly.
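To make the role of such index structures concrete, here is a minimal Python sketch (our own illustration, not taken from any particular system); the record fields and the helper names full_scan and build_index are invented for the example.

from collections import defaultdict

def full_scan(records, field, value):
    # Touches every record on every query: impractical for a large data set.
    return [r for r in records if r.get(field) == value]

def build_index(records, field):
    # Built once, in advance; afterwards a lookup touches only qualifying elements.
    index = defaultdict(list)
    for r in records:
        index[r.get(field)].append(r)
    return index

records = [
    {"id": 1, "country": "Japan", "type": "Airline"},
    {"id": 2, "country": "US", "type": "PlaneMaker"},
    {"id": 3, "country": "Japan", "type": "PlaneMaker"},
]
country_index = build_index(records, "country")
print(full_scan(records, "country", "Japan"))   # linear scan over all records
print(country_index["Japan"])                   # direct lookup via the index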
Moreover, dealing with new data sources brings a significant number of
analytical issues. The relevance of these issues will vary depending on the
type of analysis being conducted and on the type of decisions that the data
might ultimately inform. The big core issue is to analyze what the data are
really telling us in an entirely transparent manner.
Figure 1.3
Processing stack, based on Driscoll (2011), and the four dimensions of Big Data, based on Beyer et al. (2011), influencing efficiency and effectiveness of analytics. (The stack comprises a foundational layer of Big Data storage, access, and management infrastructure; an analytics layer for query planning and execution and data management; and a layer of focused services. The volume and velocity dimensions map to efficiency, and the variety and complexity dimensions to effectiveness.)
The middle layer of the stack is responsible for analytics. Here data ware-
housing technologies (e.g., Nemani and Konda 2009; Ponniah 2010; Thusoo
et al. 2010) are currently exploited for extracting correlations and features
(e.g., Ishai et al. 2009) from data and feeding classification and prediction
algorithms (e.g., Mills 2011).
Focused applications or services are at the top of the stack. Their func-
tionality is based on the use of more generic lower-layer technologies and
exposed to end users as Big Data products.
An example of a startup offering focused services is BillGuard (billguard.
com). It monitors customers’ credit card statements for dubious charges and
even leverages the collective behavior of users to improve its fraud predic-
tions. Another company called Klout (klout.com/home) provides a genuine
data service that uses social media activity to measure online influence.
LinkedIn’s People you may know feature is also a kind of focused service. This
service is presumably based on graph theory, starting exploration of the
graph of your relations from your node and filtering those relations accord-
ing to what is called “homophily.” The greater the homophily between two
nodes, the more likely they are to be connected.
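The actual algorithm behind the feature is not public; the following Python sketch only illustrates the idea stated above, using the number of shared connections as a crude homophily proxy (the graph and all names are made up).

def suggest_people(graph, user, top_k=3):
    # Rank non-neighbors of `user` by a homophily proxy: shared connections.
    direct = graph[user]
    scores = {}
    for friend in direct:
        for candidate in graph[friend]:
            if candidate != user and candidate not in direct:
                scores[candidate] = scores.get(candidate, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical social graph: node -> set of directly connected nodes.
graph = {
    "ann":  {"bob", "carl"},
    "bob":  {"ann", "carl", "dina"},
    "carl": {"ann", "bob", "dina", "eve"},
    "dina": {"bob", "carl", "eve"},
    "eve":  {"carl", "dina"},
}
print(suggest_people(graph, "ann"))  # ['dina', 'eve']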
According to its purpose, the foundational layer is concerned with processing
as much data as possible (volume) as soon as possible. In particular, if
streaming data are used, the faster the stream is (velocity), the more difficult
it is to process the data in a stream window. Currently available technologies
and tools for the foundational level do not cope equally well with the volume
and velocity dimensions, which are, so to say, anticorrelated by nature.
Therefore, hybrid infrastructures are in use
for balancing processing efficiency aspects (Figure 1.3)—comprising solu-
tions focused on taking care of volumes, and, separately, of velocity. Some
examples are given in “Big Data in European Research” section.
For the analytics layer (Figure 1.3), volume and velocity dimensions (Beyer
et al. 2011) are also important and constitute the facet of efficiency—big vol-
umes of data which may change swiftly have to be processed in a timely
fashion. However, two more dimensions of Big Data become important—
complexity and variety—which form the facet of effectiveness. Complexity
is clearly about the adequacy of data representations and descriptions for
analysis. Variety describes a degree of syntactic and semantic heterogeneity
in distributed modules of data that need to be integrated or harmonized for
analysis. A major conceptual complication for analytics is that efficiency is
anticorrelated to effectiveness.
Table 1.1
FP7 ICT Call 5 Projects and their Contributions to Big Data Processing and Understanding
(The table marks, for each project, the IIM cluster(s) it belongs to, the Big Data dimensions it copes with (volume, velocity, variety, complexity), the layers of the Big Data processing stack it contributes to, and its contribution to Big Data understanding; see notes a to c below. The projects, their domains/industries, and the abbreviated understanding contributions are:)

SmartVortex: Industrial innovation engineering
LOD2: Media and publishing, corporate data intranets, eGovernment (O, ML)
Tridec: Crisis/emergency response, government, oil and gas (R)
First: Market surveillance, investment management, online retail banking and brokerage (IE)
iProd: Manufacturing: aerospace, automotive, and home appliances (R, Int)
Teleios: Civil defense, environmental agencies; use cases: a virtual observatory for TerraSAR-X data, real-time fire monitoring (DM, QL, KD)
Khresmoi: Medical imaging in healthcare, biomedicine (IE, DLi, M-LS, MT)
Robust: Online communities (internet, extranet, and intranet) addressing customer support, knowledge sharing, and hosting services (AEP)
Digital.me: Personal sphere
Fish4Knowledge: Marine sciences, environment (DV (fish), SUM)
Render: Information management (wiki), news aggregation (search engine), customer relationship management (telecommunications) (DS)
PlanetData: Cross-domain
LATC: Government
Advance: Logistics
Cubist: Market intelligence, computational biology, control centre operations (SDW, SBI, FCA)
Promise: Cross-domain
Dicode: Clinico-genomic research, healthcare, marketing (DM, OM)

a IIM clustering information has been taken from the Commission's source cordis.europa.eu/fp7/ict/content-knowledge/projects_en.html.
b As per the Gartner report on extreme information management (Gartner 2011).
c The contributions of the projects to the developments in the Big Data Stack layers have been assessed based on their public deliverables.
Figure 1.4
Contribution of the selection of FP7 ICT projects to technologies for Big Data understanding. Abbreviations are explained in the legend to Table 1.1.
in the introduction of this chapter. Analysis of Table 1.1 reveals that none
of the reviewed projects addresses all four dimensions of Big Data in a bal-
anced manner. In particular, only two projects—Tridec and First—claim
contributions addressing Big Data velocity and variety-complexity. This fact
points out that the clinch between efficiency and effectiveness in Big Data
processing still remains a challenge.
* Technologies for information and knowledge extraction are also developed and need to be
regarded as bottom-up. However, these technologies are designed to work off-line for updat-
ing the existing ontologies in a discrete manner. Their execution is not coordinated with the
top-down query processing and data changes. So, the shortcomings outlined below persist.
This will enable reducing the overheads of the top-down path by performing
refined inference using highly expressive and complex queries over evolving
(i.e., consistent with data) and linked (i.e., harmonized), but reasonably small
fragments of knowledge. Query results may also be materialized for further
decreasing computational effort.

Figure 1.5
Refining Big Data semantics layer for balancing efficiency and effectiveness. (The figure shows the bottom-up path for knowledge evolution, comprising extraction, contextualization, harmonization, and evolution.)
After outlining the abstract architecture and the bottom-up approach, we
will now explain at a high level how Big Data needs to be treated along the
way. A condensed formula for this high-level approach is “3F + 3Co” which
is unfolded as
3F: Focusing-Filtering-Forgetting
3Co: Contextualizing-Compressing-Connecting
Notably, both 3F and 3Co are not novel and used in parts extensively in
many domains and in different interpretations. For example, an interesting
interpretation of 3F is offered by Dean and Webb (2011) who suggest this for-
mula as a “treatment” for senior executives (CEOs) to deal with information
overload and multitasking. Executives are advised to cope with the problem
by focusing (doing one thing at a time), filtering (delegating so that they do
not take on too many tasks or too much information), and forgetting (taking
breaks and clearing their minds).
Focusing
Following our Boeing example, let us imagine a data analyst extracting
knowledge tokens from a business news stream and putting these tokens
as missing bits in the mosaic of his mental picture of the world. A tricky
part of his work, guided by intuition or experience in practice, is choosing
the order in which the facts are picked up from the token. Order of focusing
is very important as it influences the formation and saturation of different
fragments in the overall canvas. Even if the same input tokens are given, dif-
ferent curves of focusing may result in different knowledge representations
and analysis outcomes.
A similar aspect of proper focusing is of importance also for automated
processing of Big Data or its semantics. One could speculate whether a pro-
cessing engine should select data tokens or assertions in the order of their
appearance, in a reversed order, or anyhow else. If data or assertions are pro-
cessed in a stream window and in real time, the order of focusing is of lesser
relevance. However, if all the data or knowledge tokens are in a persistent
storage, having some intelligence for optimal focusing may improve process-
ing efficiency substantially. With smart focusing at hand, a useful token can
be found or a hidden pattern extracted much faster and without making a
complete scan of the source data. A complication for smart focusing is that
the nodes on the focusing curve have to be decided upon on-the-fly because
generally the locations of important tokens cannot be known in advance.
Therefore, the processing of a current focal point should not only yield what
is intended directly of this portion of data, but also hint about the next point
on the curve.
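A toy Python sketch of such smart focusing is given below (our own illustration, with invented token ids and hint scores): processing the current focal point returns both its result and hints about promising next points, and a priority queue decides the focusing curve on-the-fly instead of scanning the whole store.

import heapq

def process(token):
    # Hypothetical per-token processing: returns a result and hints
    # (token id -> estimated relevance) about the next points to focus on.
    return f"facts({token['id']})", token.get("hints", {})

def focused_traversal(store, seed_id, budget=3):
    # Visit at most `budget` tokens, always picking the most promising next one.
    queue = [(-1.0, seed_id)]          # (negative priority, token id)
    seen, results = set(), []
    while queue and len(results) < budget:
        _, tid = heapq.heappop(queue)
        if tid in seen or tid not in store:
            continue
        seen.add(tid)
        result, hints = process(store[tid])
        results.append(result)
        for next_id, score in hints.items():
            heapq.heappush(queue, (-score, next_id))
    return results

store = {
    "t1": {"id": "t1", "hints": {"t3": 0.9, "t2": 0.2}},
    "t2": {"id": "t2", "hints": {}},
    "t3": {"id": "t3", "hints": {"t4": 0.7}},
    "t4": {"id": "t4", "hints": {}},
}
print(focused_traversal(store, "t1"))  # processes t1, then t3, then t4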
A weak point in such a “problem-solving” approach is that some potentially
valid alternatives are inevitably lost after each choice made on the decision
path. So, only a suboptimal solution is practically achievable. The evolution-
ary approach detailed further in section “Knowledge Self-Management and
Refinement through Evolution” follows, in fact, a similar approach of smart
focusing, but uses a population of autonomous problem-solvers operating
concurrently. Hence, it leaves a much smaller part of a solution space without
attention, reduces the bias of each choice, and likely provides better results.
Filtering
A data analyst who receives dozens of news posts at once has to focus on the
most valuable of them and filter out the rest which, according to his informed
guess, do not bring anything important additionally to those in his focus.
Moreover, it might also be very helpful to filter out noise, that is, irrelevant
tokens, irrelevant dimensions of data, or those bits of data that are unread-
able or corrupted in any other sense. In fact, an answer to the question about
what to trash and what to process needs to be sought based on the under-
standing of the objective (e.g., was the reason for Boeing to hike their market
forecast valid?) and the choice of the proper context (e.g., should we look into
the airline fleets or the economic situation in developing countries?).
A reasonable selection of features for processing or otherwise a rational
choice of the features that may be filtered out may essentially reduce the
volume as well as the variety/complexity of data, which results in higher
efficiency balanced with effectiveness.
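As a minimal, assumption-laden Python sketch of such filtering (the threshold and the token structure are invented), dimensions that never vary across the incoming tokens are dropped before any heavier processing:

def filter_features(tokens, min_distinct=2):
    # Keep only the dimensions that show at least `min_distinct` distinct values;
    # near-constant or empty dimensions are filtered out as uninformative.
    keys = {k for t in tokens for k in t}
    kept = {k for k in keys
            if len({t.get(k) for t in tokens if t.get(k) is not None}) >= min_distinct}
    return [{k: t[k] for k in t if k in kept} for t in tokens]

tokens = [
    {"source": "reuters", "topic": "aviation", "lang": "en"},
    {"source": "reuters", "topic": "finance",  "lang": "en"},
    {"source": "reuters", "topic": "energy",   "lang": "en"},
]
# 'source' and 'lang' never vary here, so only 'topic' survives the filter.
print(filter_features(tokens))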
Quite similar to focusing, a complication here is that for big heterogeneous
data it is not feasible to expect a one-size-fits-all filter in advance. Even more,
for deciding about an appropriate filtering technique and the structure of a
filter to be applied, a focused prescan of data may be required, which implies
a decrease in efficiency. The major concern is again how to filter in a smart
way and so as to balance the intentions to reduce processing effort (efficiency)
and keep the quality of results within acceptable bounds (effectiveness).
Our evolutionary approach presented in the section “Knowledge Self-
Management and Refinement through Evolution” uses a system of environ-
mental contexts for smart filtering. These contexts are not fixed but may be
adjusted by several independent evolutionary mechanisms. For example, a
context may become more or less “popular” among the knowledge organ-
isms that harvest knowledge tokens in them because these organisms
may migrate freely between contexts in search of better, more appropri-
ate, healthier knowledge to collect. Another useful property we propose
for knowledge organisms is their resistance to sporadic mutagenic factors,
which may be helpful for filtering out noise.
Forgetting
A professional data analyst always keeps a record of data he used in his work
and the knowledge he created in his previous analyses. The storage for all
these gems of expertise is, however, limited, so it has to be cleaned periodi-
cally. Such cleaning implies trashing potentially valuable things that are
never or very rarely used, which causes doubts and later regrets about what
is lost. A similar thing happens when Big Data storage overflows—some parts
of it have to be trashed and so “forgotten.” A question in this respect is about
which part of a potentially useful collection may be sacrificed. Is forgetting
the oldest records reasonable?—perhaps not. Shall we forget the features that
have been previously filtered out?—negative again. There is always a chance
that an unusual task for analysis pops up and requires the features never
exploited before. Are the records with minimal potential utility the best can-
didates for trashing?—could be a rational way to go, but how would their
potential value be assessed?
Practices in Big Data management confirm that forgetting following
straightforward policies like fixed lifetime for keeping records causes regret
almost inevitably. For example, the Climate Research Unit (one of the leading
institutions that study natural and anthropogenic climate change and collect
climate data) admits that they threw away the key data to be used in global
warming calculations (Joseph 2012).
A better policy for forgetting might be to extract as much knowledge as possible
out of the data before deleting them. It cannot be guaranteed, however,
that future knowledge mining and extraction algorithms will not be capable of
discovering more knowledge to preserve. Another potentially viable approach
could be “forgetting before storing,” that is, there should be a pragmatic reason
to store anything. The approach we suggest in the section “Knowledge Self-
Management and Refinement through Evolution” follows exactly this way.
Though knowledge tokens are extracted from all the incoming data tokens, not
all of them are consumed by knowledge organisms, but only those assertions
that match their knowledge genome to a sufficient extent. This similarity is
considered a good reason for remembering a fact. The rest remains in the envi-
ronment and dies out naturally after its lifetime comes to an end, as explained in
“Knowledge Self-Management and Refinement through Evolution”.
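A schematic Python sketch of this “forgetting before storing” policy follows (the class and parameter names are ours, and the genome match is reduced to simple concept-name equality):

import time

class Environment:
    # Holds unconsumed knowledge tokens; each dies out after `lifetime` seconds.
    def __init__(self, lifetime=3600.0):
        self.lifetime = lifetime
        self.tokens = []                      # (timestamp, token)

    def sow(self, token):
        self.tokens.append((time.time(), token))

    def sweep(self):
        now = time.time()
        self.tokens = [(t, tok) for t, tok in self.tokens if now - t < self.lifetime]

def consume(assertion, genome):
    # Store ("remember") an assertion only if its concept matches the genome.
    concept = assertion.split(":")[1]
    return concept in genome

genome = {"Airline", "PlaneMaker", "MarketForecast"}
env = Environment(lifetime=60.0)
for a in ["Boeing:PlaneMaker", "Japan:Country", "B787-JA812A:EfficientNewPlane"]:
    if not consume(a, genome):
        env.sow(a)              # left in the environment, forgotten after its lifetime
env.sweep()
print([tok for _, tok in env.tokens])   # the two assertions not matching the genome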
Contextualizing
Our reflection of the world is often polysemic, so a pragmatic choice of a
context is often needed for proper understanding. For example, “taking a
mountain hike” or “hiking a market forecast” are different actions though
the same lexical root is used in the words. An indication of the context
(recreation or business in this example) would be necessary for making the
statement explicit. To put it even more broadly, not only the sense of statements, but
also judgments, assessments, attitudes, and sentiments about the same data
or knowledge token may well differ in different contexts. When it comes to
data, it might be useful to know:
RESULT = INSTRUMENT(Predictive Features).
INSTRUMENT = CONTEXTUALIZATION(Contextual Features).
Hence, a correct way to process each data token and benefit from contextu-
alization would be: (i) decide, based on contextual features, which would
be an appropriate instrument to process the token; and then (ii) process it
using the chosen instrument that takes the predictive features as an input.
This approach to contextualization is not novel and is known in data mining
and knowledge discovery as a “dynamic” integration, classification, selec-
tion, etc. Puuronen et al. (1999) and Terziyan (2001) proved that the use of
dynamic contextualization in knowledge discovery yields essential quality
improvement compared to “static” approaches.
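Read as code, the two formulas above amount to a two-step dispatch. The Python sketch below is only a schematic rendering (the instruments and contextual features are invented): contextual features select the instrument, and the instrument is then applied to the predictive features.

def trend_extrapolation(predictive):
    return f"extrapolated trend from {predictive['prices'][-3:]}"

def news_mining(predictive):
    return f"trend mined from {len(predictive['news'])} news tokens"

def contextualization(contextual):
    # INSTRUMENT = CONTEXTUALIZATION(Contextual Features)
    return news_mining if contextual.get("market") == "volatile" else trend_extrapolation

def analyze(contextual, predictive):
    # RESULT = INSTRUMENT(Predictive Features), with the instrument chosen dynamically.
    instrument = contextualization(contextual)
    return instrument(predictive)

print(analyze({"market": "volatile"},
              {"news": ["Boeing hikes its 20-year market forecast"], "prices": [70, 71, 73]}))
print(analyze({"market": "calm"},
              {"news": [], "prices": [70, 71, 73]}))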
Compressing
In the context of Big Data, having data in a compact form is very important for
saving storage space or reducing communication overheads. Compressing is
a process of data transformation toward making data more compact in terms
of required storage space, but still preserving either fully (lossless compres-
sion) or partly (lossy compression) the essential features of these data—those
potentially required for further processing or use.
Compression, in general, and Big Data compression, in particular, are
effectively possible due to a high probability of the presence of repetitive,
periodical, or quasi-periodical data fractions or visible trends within data.
Similar to contextualization, it is reasonable to select an appropriate data
compression technique individually for different data fragments (clusters),
also in a dynamic manner and using contextualization. Lossy compression
may be applied if it is known, at least potentially, how the data will be used,
so that some data fractions may be sacrificed without losing the facets of
semantics and the overall quality of data required for the known ways of its use.
A relevant example of a lossy compression technique for data having quasi-
periodical features and based on a kind of “meta-statistics” was reported by
Terziyan et al. (1998).
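As a small illustration of such dynamic, context-driven compression (our own sketch, using only the Python standard library; the context tags and the keep-every-fourth-sample rule are arbitrary choices), a quasi-periodic fragment with a known coarse-grained use is compressed lossily, while anything else falls back to lossless compression:

import json
import zlib

def compress_fragment(fragment, context):
    # Pick a compression scheme per fragment, based on its context.
    if context.get("kind") == "quasi-periodic" and context.get("use") == "trend-analysis":
        # Lossy: keep every 4th sample; enough for the known, coarse-grained use.
        kept = fragment[::4]
        return ("lossy", zlib.compress(json.dumps(kept).encode()))
    # Lossless fallback when future uses of the data are unknown.
    return ("lossless", zlib.compress(json.dumps(fragment).encode()))

signal = [20.0 + (i % 10) * 0.1 for i in range(1000)]
mode, blob = compress_fragment(signal, {"kind": "quasi-periodic", "use": "trend-analysis"})
print(mode, len(blob), "bytes instead of", len(json.dumps(signal).encode()))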
Connecting
It is known that nutrition is healthy and balanced if it provides all the neces-
sary components that are further used as building blocks in a human body.
These components become parts of a body and are tightly connected to the
rest of it. Big Data could evidently be regarded as nutrition for knowledge
economy as discussed in “Motivation and Unsolved Issues”. A challenge is
to make this nutrition healthy and balanced for building an adequate mental
representation of the world, which is Big Data understanding. Following the
allusion of human body morphogenesis, understanding could be simplisti-
cally interpreted as connecting or linking new portions of data to the data
that is already stored and understood. This immediately brings us about the
concept of linked data (Bizer et al. 2009), where “linked” is interpreted as a
sublimate of “understood.” We have written “a sublimate” because having
data linked is not yet sufficient, though necessary, for the further, more
intelligent phase of building knowledge out of data. After data have been linked,
data and knowledge mining, knowledge discovery, pattern recognition,
diagnostics, prediction, etc. could be done more effectively and efficiently.
For example, Terziyan and Kaykova (2012) demonstrated that executing busi-
ness intelligence services on top of linked data is noticeably more efficient
than without using linked data. Consequently, knowledge generated out of
linked data could also be linked using the same approach, resulting in the
linked knowledge. It is clear from the Linking Open Data Cloud Diagram
by Richard Cyganiak and Anja Jentzsch (lod-cloud.net) that knowledge
These principles may remain valid for evolving software systems, in par-
ticular, for Big Data computing. Processing knowledge originating from Big
Data may, however, imply more complexity due to its intrinsic social features.
Knowledge is a product that needs to be shared within a group so that
survivability and quality of life of the group members will be higher than
those of any individual alone. Sharing knowledge facilitates collaboration
and improves individual and group performance. Knowledge is actively
consumed and also left as a major inheritance for future generations, for
example, in the form of ontologies. As a collaborative and social substance,
knowledge and cognition evolve in a more complex way for which additional
facets have to be taken into account such as social or group focus of attention,
bias, interpretation, explicitation, expressiveness, inconsistency, etc.
In summary, it may be admitted that Big Data is collected and super-
vised by different communities following different cultures, standards,
Sharding is a client-side affair, that is, the database server does not do it
for the user. In this kind of environment, when someone accesses data, the data
access layer uses consistent hashing to determine which machine in the cluster
a particular piece of data should be written to (or read from). Adding capacity to
a sharded system is a process of manually rebalancing the data across the
cluster. In a horizontally scalable database, by contrast, the database system
itself takes care of rebalancing the data and guaranteeing that it is adequately
replicated across the cluster; this is what it means for a database to be
horizontally scalable.
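A minimal Python sketch of the consistent-hashing step mentioned above is given below; it is illustrative only (real data access layers add virtual nodes and replication), and the shard and key names are invented.

import bisect
import hashlib

class ConsistentHashRing:
    # Maps keys to shards so that adding a shard relocates only a fraction of the keys.
    def __init__(self, shards):
        self.ring = sorted((self._hash(s), s) for s in shards)
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def shard_for(self, key):
        h = self._hash(key)
        i = bisect.bisect(self.keys, h) % len(self.keys)
        return self.ring[i][1]

ring = ConsistentHashRing(["shard-a", "shard-b", "shard-c"])
# The data access layer (client side) decides where each record is written or read.
for record_id in ["customer:42", "customer:43", "order:7"]:
    print(record_id, "->", ring.shard_for(record_id))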
In many cases, constructing Big Data systems on premise provides better
data flow performance, but requires a greater capital investment. Moreover,
one has to consider the growth of the data. While many model linear growth
curves, the patterns of data growth within Big Data systems are, interestingly,
closer to exponential. Therefore, both technology and costs should be modeled
to match the expected growth of the database and of the data flows.
Structured data transformation is the traditional approach of changing the
structure of the data found within the source system to the structure of the
target system, for instance, a Big Data system. The advantage of most Big Data
systems is that deep structure is not a requirement; indeed, structure can
typically be layered in after the data arrive at the target. However, it is a
best practice to shape the data within the target: it should be a good
abstraction of the source operational databases, in a structure that allows
those who analyze the data within the Big Data system to find the required
data effectively and efficiently. The issue to consider with scaling is the
amount of latency that transformations cause as data move from the source(s)
to the target while being changed in both structure and content. One should
therefore avoid complex transformations when migrating data from operational
sources to the analytical targets. Once the data are contained within a Big
Data system, the distributed nature of the architecture allows for the gathering
of the proper result set. So, transformations that cause less latency are more
suitable within the Big Data domain.
reflect the change in the world snap-shotted by Big Data tokens. Inspiration
and analogies are taken from evolutionary biology.
Figure 1.6
A Knowledge Organism (KO): functionality and environment. Small triangles of different transparency represent knowledge tokens in the environment—consumed and produced by KOs. These knowledge tokens may also be referred to as mutagens as they may trigger mutations. (The diagram shows a KO, with its genome (TBox) and body (ABox), placed in an environment; it perceives sensor input, deliberates, undergoes morphogenesis, mutation, recombination, and reproduction, communicates with other KOs, and excretes unused knowledge tokens as action output.)
Figure 1.7
Environmental contexts, knowledge tokens, knowledge extraction, and contextualization. (The figure shows the Boeing item from a business news stream, quoted in the Illustrative Example section, passing through knowledge extraction and contextualization into the Airline Business environmental context, producing a knowledge token that relates Boeing : PlaneMaker to New20YMarketForecast via the hikes and hikedBy properties and to a Country via basedIn and baseOf.)
* Unified Modeling Language (UML) notation is used for picturing the knowledge token in
Figure 1.7 because it is more illustrative. Though not shown in Figure 1.7, it can be straight-
forwardly coded in OWL, following, for example, Kendall et al. (2009).
Figure 1.8
Knowledge genomes and bodies. Different groups of assertions in a KO body are attributed to different elements of its genome, as shown by dashed arrows. The more assertions relate to a genome element, the more dominant this element is as shown by shades of gray. (The diagram depicts an environmental context C1 with an EKO carrying the etalon species genome and two KOs, KOa and KOb, each with its own genome and body.)
Morphogenesis
Morphogenesis in a KO could be seen as a process of developing the shape
of a KO body. In fact, such a development is done by adding new assertions
to the body and attributing them to the correct parts of the genome. This
process could be implemented using ontology instance migration technique
(Davidovsky et al. 2011); however, the objective of morphogenesis differs from
that of ontology instance migration. The task of the latter is to ensure correct-
ness and completeness, that is, that, ideally, all the assertions are properly
aligned with and added to the target ontology ABox. Morphogenesis requires
that only the assertions that fit well to the TBox of the target ontology are
consumed for shaping it out. Those below the fitness threshold are excreted.
If, for example, a mutagen perceived by a KO is the one of our Boeing example
presented in Figures 1.2 or 1.7, then the set of individual assertions will be*
{AllNipponAirways:Airline, B787-JA812A:EfficientNewPlane, Japan:Country,
Boeing:PlaneMaker, New20YMarketForecastbyBoeing:MarketForecast,
United States:Country, Old20YMarketForecastbyBoeing:MarketForecast}.  (1.1)
Let us now assume that the genome (TBox) of the KO contains only the concepts
represented in Figure 1.2 as grey-shaded classes—{Airline, PlaneMaker,
MarketForecast} and thick-line relationships—{seeksFor–soughtBy}. Then only
the assertions from (1.1) whose concepts appear in this genome (AllNipponAirways:Airline,
Boeing:PlaneMaker, New20YMarketForecastbyBoeing:MarketForecast, and
Old20YMarketForecastbyBoeing:MarketForecast) could be consumed for
morphogenesis by this KO, and the rest have to be excreted back to the environment.
Interestingly, the ratio of mutagen ABox consumption may be used as a good
* The syntax for representing individual assertions is similar to the syntax in UML for compat-
ibility with Figure 1.2: 〈assertion-name〉:〈concept-name〉.
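The consumption step of morphogenesis, together with the consumption ratio just mentioned, can be sketched in a few lines of Python (our own schematic rendering; the genome match is reduced to concept-name equality):

def morphogenesis(body, genome, mutagen_abox):
    # Consume only the assertions whose concepts fit the KO genome (TBox);
    # excrete the rest back to the environment and report the consumption ratio.
    consumed, excreted = [], []
    for assertion in mutagen_abox:
        individual, concept = assertion.split(":")
        (consumed if concept in genome else excreted).append(assertion)
    body.extend(consumed)
    ratio = len(consumed) / len(mutagen_abox) if mutagen_abox else 0.0
    return excreted, ratio

genome = {"Airline", "PlaneMaker", "MarketForecast"}          # TBox of the KO
body = []                                                      # ABox of the KO
abox_1_1 = ["AllNipponAirways:Airline", "B787-JA812A:EfficientNewPlane",
            "Japan:Country", "Boeing:PlaneMaker",
            "New20YMarketForecastbyBoeing:MarketForecast",
            "United States:Country",
            "Old20YMarketForecastbyBoeing:MarketForecast"]
excreted, ratio = morphogenesis(body, genome, abox_1_1)
print(body)             # four assertions consumed into the body
print(excreted)         # three assertions excreted back to the environment
print(round(ratio, 2))  # 0.57, a possible fitness signal for this KO in this context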
Mutation
Mutation of a KO could be understood as the change of its genome caused
by the environmental influences (mutagenic factors) coming with the con-
sumed knowledge tokens. Similar to the biological evolution, a KO and its
genome are resistant to mutagenic factors and do not change at once because
of any incoming influence, but only because of those which could not be
ignored because of their strength. Different genome elements may be dif-
ferently resistant. Let us illustrate different aspects of mutation and resis-
tance using our Boeing example. As depicted in Figure 1.9, the change of the
AirPlaneMaker concept name (to PlaneMaker) in the genome did not happen
though a new assertion had been added to the body as a result of morpho-
genesis (Boeing: (PlaneMaker) AirPlaneMaker*). The reason the AirPlaneMaker
concept resisted this mutation was that the assertions attributed to the concept
of PlaneMaker were in the minority—so, the mutagenic factor had not yet
been strong enough. This mutation will have a better chance to occur if simi-
lar mutagenic factors continue to come in and the old assertions in the body
of the KO die out because their lifetime periods come to end. More generally,
the more individual assertions are attributed to a genome element at a given
point in time, the more resistant this genome element is to mutations.
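The resistance rule can be sketched similarly (a schematic Python illustration; the majority threshold is an arbitrary choice, not one given in the chapter):

def maybe_mutate(genome_element, body_attributions, threshold=0.5):
    # Rename a genome concept only if the share of assertions already attributed
    # to the competing (incoming) name exceeds the resistance threshold.
    current, competing = genome_element["name"], genome_element["competing_name"]
    votes = body_attributions.count(competing)
    share = votes / len(body_attributions) if body_attributions else 0.0
    if share > threshold:
        genome_element["name"] = competing
    return genome_element["name"], share

element = {"name": "AirPlaneMaker", "competing_name": "PlaneMaker"}
# Assertions in the body, labelled with the concept name they arrived with.
attributions = ["AirPlaneMaker", "AirPlaneMaker", "PlaneMaker"]
print(maybe_mutate(element, attributions))   # stays 'AirPlaneMaker': mutagen too weak

# If old assertions die out and similar mutagens keep coming, the balance flips:
attributions = ["PlaneMaker", "PlaneMaker", "AirPlaneMaker"]
print(maybe_mutate(element, attributions))   # mutates to 'PlaneMaker'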
In contrast to the AirPlaneMaker case, the mutations brought by hikes—
hikedBy and successorOf—predecessorOf object properties did happen
(Figure 1.9) because the KO did not possess any (strong) argument to resist
* UML syntax is used as basic. The name of the class from the knowledge token is added in
brackets before the name of the class to which the assertion is attributed in the KO body. This
is done for keeping the information about the occurrences of a different name in the incom-
ing knowledge tokens. This historical data may further be used for evaluating the strength
of the mutagenic factor.
These unused elements are excreted (Figure 1.9) back to the environment
as a knowledge token. This token may further be consumed by another
Figure 1.9
Mutation in an individual KO illustrated by our Boeing example. (The figure shows a mutating KO consuming a knowledge token: morphogenesis adds assertions such as Boeing (PlaneMaker) : AirPlaneMaker and New20YMarketForecastbyBoeing : MarketForecast to its body; the hikes–hikedBy and successorOf–predecessorOf properties are added to its genome as mutations; and the irrelevant elements, such as Airline and EfficientNewPlane, are excreted back to the environment as a knowledge token.)
goal. This artificial way of control over the natural evolutionary order of
things may be regarded as breeding—a controlled process of sequencing
desired mutations that causes the emergence of a species with the required
genome features.
Ontologies are the “blood and flesh” of the KOs and the whole ecosys-
tem as they are both the code registering a desired evolutionary change and
the result of this evolution. From the data-processing viewpoint, the ontolo-
gies are consensual knowledge representations that facilitate improving
data integration, transformation, and interoperability between the process-
ing nodes in the infrastructure. A seamless connection through the layers
of the processing stack is facilitated by the way ontologies are created and
changed. As already mentioned above in the introduction of the “Knowledge
Self-Management and Refinement through Evolution” section, ontologies
are traditionally designed beforehand and further populated by assertions
taken from the source data. In our evolving ecosystem, ontologies evolve in
parallel to data processing. Moreover, the changes in ontologies are caused
by the mutagens brought in by the incoming data. The knowledge extraction subsystem (Figure 1.7) transforms units of data into knowledge tokens. These in turn are sown in a corresponding environmental context by a contextualization subsystem and further consumed by KOs. KOs may change their bodies, or even mutate, due to the changes brought by consumed mutagenic knowledge tokens. The changes in the KOs are in fact changes in the ontologies they carry. So, ontologies change seamlessly and naturally, in a way that best suits the substance brought in by the data. For assessing this change, judgments about the value and appropriateness of ontologies over time are important. Those should, however, be formulated accounting for the fact that an ontology is able to self-evolve.
The degree to which an ontology is reused is one more important characteristic to be taken into account. Reuse means that data in multiple places refer to this ontology; combined with interoperability, it implies that data about similar things are described using the same ontological fragments.
When looking at an evolving KO, having a perfect ontology would mean that
if new knowledge tokens appear in the environmental contexts of an organ-
ism, the organism can integrate all assertions in the tokens, that is, without
a need to excrete some parts of the consumed knowledge tokens back to
the environment. That is to say, the ontology which was internal to the KO
before the token was consumed was already prepared for the integration of
the new token. Now, one could turn this viewpoint around and say that the information described in the token was already described in the ontology the KO had, and thus that the ontology was reused in one more place. This increases the value, that is, the fitness, of the ontology maintained by the KO. By similar argumentation, we can conclude that if a KO needs to excrete a consumed knowledge token, the ontology is a poorer fit for describing the fragment of data to which the excreted token is attributed. Thus, in conclusion, we could say that the fitness of a KO depends directly on the proportion between the parts of knowledge tokens that it (a) is able to consume for morphogenesis and possibly mutation, versus (b) needs to excrete back
to the environment. Additionally, the age of the assertions which build up
the current knowledge body of a KO influences its quality. If the proportion
of very young assertions in the body is high, the KO might not be resistant to stochastic changes, which is not healthy. Conversely, if only long-living assertions form the body, the KO is either in a wrong context or too resistant to mutagens. Both are bad: no new information is added, the KO ignores changes, and hence the ontology it carries may become irrelevant. Therefore, a good mix of young and old assertions in the body of a KO indicates high fitness: the KO's knowledge is overall valid and evolves appropriately.
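A hedged sketch of how such a fitness indicator might be computed is given below; the equal weights, the definition of a "young" assertion, and the target share of young assertions are arbitrary assumptions introduced only for illustration.

```python
def ko_fitness(consumed, excreted, assertion_ages, max_age, young_share_target=0.3):
    """Illustrative fitness score for a KO in [0, 1].

    consumed / excreted: numbers of assertions taken in vs. excreted back.
    assertion_ages:      ages of the assertions currently in the KO body.
    A balanced mix of young and old assertions is rewarded; bodies made
    only of very young or only of very old assertions are penalized.
    """
    total = consumed + excreted
    intake_ratio = consumed / total if total else 0.0

    if not assertion_ages:
        return 0.0
    young = sum(1 for a in assertion_ages if a < 0.25 * max_age)
    young_share = young / len(assertion_ages)
    # Quadratic penalty for deviating from the target share of young assertions.
    mix_quality = max(0.0, 1.0 - ((young_share - young_share_target) / young_share_target) ** 2)

    return 0.5 * intake_ratio + 0.5 * mix_quality

# Example: a KO that integrated most of the incoming assertions and keeps
# a mix of recent and older assertions in its body.
print(ko_fitness(consumed=8, excreted=2, assertion_ages=[1, 2, 5, 9, 12], max_age=12))
```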
Of course stating that fitness depends only on the numbers of used and
excreted assertions is an oversimplification. Indeed, incoming knowledge
tokens that carry assertions may be very different. For instance, the knowl-
edge token in our Boeing example contains several concepts and properties
in its TBox: a Plane, a PlaneMaker, a MarketForecast, an Airline, a Country,
SalesVolume, seeksFor—soughtBy, etc. Also, some individuals attrib-
uted to these TBox elements are given in the ABox: UnitedStates, Boeing,
New20YMarketForecastByBoeing, 4.5 trillion, etc. One can imagine a less
complex knowledge token which contains less information. In addition to size and complexity, a token also has other properties that are important to consider. One is the source from which the token originates. A token can be produced by knowledge extraction from a given channel or can be excreted by a KO. When the token is extracted from a channel, its value depends on the quality of the channel, relative to the quality of the other channels in the system (see also the context of origin in the "Contextualizing" section). The quality of knowledge extraction is important as well, though random errors could be mitigated by statistical means. Further, a token could be attributed
to a number of environmental contexts. A context is important, that is, it adds more value to a token, if there are many knowledge tokens in that context, or, more precisely, if many tokens have appeared in that context recently. Consequently, a token becomes less valuable over its lifetime in the environment.
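The following sketch, which is only an illustration under assumed weights and an assumed exponential decay, shows how a knowledge token with TBox and ABox parts (taken from the Boeing example) could carry a value that grows with its size, the quality of its channel of origin, and the recent activity of its context, and that decreases over its lifetime in the environment; the class name KnowledgeToken, the decay constant, and the context_activity parameter are not part of the framework described above.

```python
from dataclasses import dataclass, field
from math import exp

@dataclass
class KnowledgeToken:
    tbox: set = field(default_factory=set)    # concepts and properties
    abox: set = field(default_factory=set)    # individuals / assertions
    channel_quality: float = 1.0              # 0..1, relative to other channels
    age: float = 0.0                          # time spent in the environment

    def value(self, context_activity: float, decay: float = 0.1) -> float:
        """Token value: grows with size, channel quality, and how many tokens
        appeared recently in the same context, and decays with age."""
        size = len(self.tbox) + len(self.abox)
        return size * self.channel_quality * context_activity * exp(-decay * self.age)

# The Boeing token of the walkthrough example (individuals abbreviated).
token = KnowledgeToken(
    tbox={"Plane", "PlaneMaker", "MarketForecast", "Airline", "Country",
          "SalesVolume", "seeksFor", "soughtBy"},
    abox={"UnitedStates", "Boeing", "New20YMarketForecastByBoeing", "4.5 trillion"},
    channel_quality=0.8,
)
print(token.value(context_activity=0.9))      # fresh token
token.age = 20
print(token.value(context_activity=0.9))      # the same token later, worth less
```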
Until now, we have been looking at the different fitness, value, and quality factors in isolation. The problem, however, is that there is no straightforward way to integrate these different factors. For this, an approach that addresses the problem of assessing the quality of an ontology as a dynamic optimization problem (Cochez and Terziyan 2012) may be relevant.
Some Conclusions
For all those who use or process Big Data, a good mental picture of the world dissolved in data tokens may be worth petabytes of raw information and save weeks of analytic work. Data emerge as a reflection of change in the world. Hence, Big Data is a fine-grained reflection of the changes around
us. Knowledge extracted from these data in an appropriate and timely way is the essence of adequately understanding the change in the world. In this chapter, we provided evidence that numerous challenges stand in the way of understanding the sense and the trends dissolved in the petabytes of Big Data, that is, of extracting their semantics for further use in analytics. Among those challenges, we chose the problem of balancing effectiveness and efficiency in understanding Big Data as our focus. To better explain our motivation and to give the reader a key that helps follow how our premises are transformed into conclusions, we offered a simple walkthrough example of a news token.
We began the analysis of Big Data Computing by looking at how the
phenomenon influences and changes industrial landscapes. This overview
helped us figure out that the demand in industries for effective and efficient
use of Big Data, if properly understood, is enormous. However, this demand
is not yet fully satisfied by the state-of-the-art technologies and methodolo-
gies. We then looked at current trends in research and development in order
to narrow the gaps between the actual demand and the state of the art. The
analysis of the current state of research activities resulted in pointing out the
shortcomings and offering an approach that may help understand Big Data
in a way that balances effectiveness and efficiency.
The major recommendations we elaborated for achieving the balance are: (i)
devise approaches that intelligently combine top-down and bottom-up pro-
cessing of data semantics by exploiting “3F + 3Co” in dynamics, at run time;
(ii) use a natural incremental and evolutionary way of processing Big Data
and its semantics instead of following a mechanistic approach to scalability.
Inspired by the harmony and beauty of biological evolution, we further
presented our vision of how these high-level recommendations may be
approached. The "Scaling with a Traditional Database" section offered a review of possible ways to solve the scalability problem at the data-processing level.
The “Knowledge Self-Management and Refinement through Evolution” sec-
tion presented a conceptual level framework for building an evolving ecosys-
tem of environmental contexts with knowledge tokens and different species
of KOs that populate environmental contexts and collect knowledge tokens
for nutrition. The genomes and bodies of these KOs are ontologies describing
corresponding environmental contexts. These ontologies evolve in line with
the evolution of KOs. Hence they reflect the evolution of our understanding
of Big Data by collecting the refinements of our mental picture of the change
in the world. Finally, we found that such an evolutionary approach to building knowledge representations naturally allows ensuring the fitness of knowledge representations, understood as the fitness of the corresponding KOs to the environmental contexts they inhabit.
We also found out that the major technological components for building
such evolving knowledge ecosystems are already in place and could be effec-
tively used, if refined and combined as outlined in the “Knowledge Self-
Management and Refinement through Evolution” section.
Acknowledgments
This work was supported in part by the “Cloud Software Program” man-
aged by TiViT Oy and the Finnish Funding Agency for Technology and
Innovation (TEKES).
References
Abadi, D. J., D. Carney, U. Cetintemel, M. Cherniack, C. Convey, S. Lee, M. Stonebraker,
N. Tatbul, and S. Zdonik. 2003. Aurora: A new model and architecture for data
stream management. VLDB Journal 12(2): 120–139.
Anderson, C. 2008. The end of theory: The data deluge makes the scientific method
obsolete. Wired Magazine 16:07 (June 23). http://www.wired.com/science/discoveries/magazine/16-07/pb_theory.
Ankolekar, A., M. Krotzsch, T. Tran, and D. Vrandecic. 2007. The two cultures:
Mashing up Web 2.0 and the Semantic Web. In Proc Sixteenth Int Conf on World
Wide Web (WWW’07), 825–834. New York: ACM.
Berry, D. 2011. The computational turn: Thinking about the digital humanities. Culture
Machine 12 (July 11). http://www.culturemachine.net/index.php/cm/article/
view/440/470.
Beyer, M. A., A. Lapkin, N. Gall, D. Feinberg, and V. T. Sribar. 2011. ‘Big Data’ is only
the beginning of extreme information management. Gartner Inc. (April). http://
www.gartner.com/id=1622715 (accessed August 30, 2012).
Bizer, C., T. Heath, and T. Berners-Lee. 2009. Linked data—The story so far. International
Journal on Semantic Web and Information Systems 5(3): 1–22.
Bollier, D. 2010. The promise and peril of big data. Report, Eighteenth Annual
Aspen Institute Roundtable on Information Technology, the Aspen Institute.
http://www.aspeninstitute.org/sites/default/files/content/docs/pubs/The_
Promise_and_Peril_of_Big_Data.pdf (accessed August 30, 2012).
Bowker, G. C. 2005. Memory Practices in the Sciences. Cambridge, MA: MIT Press.
Boyd, D. and K. Crawford. 2012. Critical questions for big data. Information, Communication
& Society 15(5): 662–679.
Broekstra, J., A. Kampman, and F. van Harmelen. 2002. Sesame: A generic architecture
for storing and querying RDF and RDF schema. In The Semantic Web—ISWC
2002, eds. I. Horrocks and J. Hendler, 54–68. Berlin, Heidelberg: Springer-Verlag,
LNCS 2342.
Cai, M. and M. Frank. 2004. RDFPeers: A scalable distributed RDF repository based
on a structured peer-to-peer network. In Proc Thirteenth Int Conf World Wide
Web (WWW’04), 650–657. New York: ACM.
Capgemini. 2012. The deciding factor: Big data & decision making. Report. http://
www.capgemini.com/services-and-solutions/technology/business-informa-
tion-management/the-deciding-factor/ (accessed August 30, 2012).
Chang, F., J. Dean, S. Ghemawat, W. C. Hsieh, D. A. Wallach, M. Burrows,
T. Chandra, A. Fikes, and R. E. Gruber. 2008. Bigtable: A distributed storage
Ermolayev, V., C. Ruiz, M. Tilly, E. Jentzsch, J.-M. Gomez-Perez, and W.-E. Matzke.
2010. A context model for knowledge workers. In Proc Second Workshop on
Content, Information, and Ontologies (CIAO 2010), eds. V. Ermolayev, J.-M.
Gomez-Perez, P. Haase, and P. Warren, CEUR-WS, vol. 626. http://ceur-ws.
org/Vol-626/regular2.pdf (online).
Euzenat, J. and P. Shvaiko. 2007. Ontology Matching. Berlin, Heidelberg: Springer-Verlag.
Fan, W., A. Bifet, Q. Yang, and P. Yu. 2012a. Foreword. In Proc First Int Workshop on Big
Data, Streams, and Heterogeneous Source Mining: Algorithms, Systems, Programming
Models and Applications, eds. W. Fan, A. Bifet, Q. Yang, and P. Yu, New York: ACM.
Fan, J., A. Kalyanpur, D. C. Gondek, and D. A. Ferrucci. 2012b. Automatic knowledge
extraction from documents. IBM Journal of Research and Development 56(3.4):
5:1–5:10.
Fensel, D., F. van Harmelen, B. Andersson, P. Brennan, H. Cunningham, E. Della
Valle, F. Fischer et al. 2008. Towards LarKC: A platform for web-scale reason-
ing, Semantic Computing, 2008 IEEE International Conference on, pp. 524, 529,
4–7 Aug. 2008. doi: 10.1109/ICSC.2008.41.
Fisher, D., R. DeLine, M. Czerwinski, and S. Drucker. 2012. Interactions with big data
analytics. Interactions 19(3):50–59.
Gangemi, A. and V. Presutti. 2009. Ontology design patterns. In Handbook on
Ontologies, eds. S. Staab and R. Studer, 221–243. Berlin, Heidelberg: Springer-
Verlag, International Handbooks on Information Systems.
Ghemawat, S., H. Gobioff, and S.-T. Leung. 2003. The Google file system. In Proc
Nineteenth ACM Symposium on Operating Systems Principles (SOSP’03), 29–43.
New York: ACM.
Golab, L. and M. Tamer Ozsu. 2003. Issues in data stream management. SIGMOD
Record 32(2): 5–14.
Gordon, A. 2005. Privacy and ubiquitous network societies. In Workshop on ITU
Ubiquitous Network Societies, 6–15.
Greller, W. 2012. Reflections on the knowledge society. http://wgreller.wordpress.
com/2010/11/03/big-data-isnt-big-knowledge-its-big-business/ (accessed
August 20, 2012).
Gu, Y. and R. L. Grossman. 2009. Sector and sphere: The design and implementation
of a high-performance data cloud. Philosophical Transactions of the Royal Society
367(1897): 2429–2445.
Guarino, N. and C. Welty. 2001. Supporting ontological analysis of taxonomic rela-
tionships. Data and Knowledge Engineering 39(1): 51–74.
Guéret, C., E. Oren, S. Schlobach, and M. Schut. 2008. An evolutionary perspective
on approximate RDF query answering. In Proc Int Conf on Scalable Uncertainty
Management, eds. S. Greco and T. Lukasiewicz, 215–228. Berlin, Heidelberg:
Springer-Verlag, LNAI 5291.
He, B., M. Yang, Z. Guo, R. Chen, B. Su, W. Lin, and L. Zhou. 2010. Comet: Batched
stream processing for data intensive distributed computing, In Proc First ACM
symposium on Cloud Computing (SoCC’10), 63–74. New York: ACM.
Hepp, M. 2007. Possible ontologies: How reality constrains the development of rel-
evant ontologies. IEEE Internet Computing 11(1): 90–96.
Hogan, A., J. Z. Pan, A. Polleres, and Y. Ren. 2011. Scalable OWL 2 reasoning for linked
data. In Lecture Notes for the Reasoning Web Summer School, Galway, Ireland
(August). http://aidanhogan.com/docs/rw_2011.pdf (accessed October 18,
2012).
Isaac, A., C. Trojahn, S. Wang, and P. Quaresma. 2008. Using quantitative aspects
of alignment generation for argumentation on mappings. In Proc ISWC’08
Workshop on Ontology Matching, ed. P. Shvaiko, J. Euzenat, F. Giunchiglia, and
H. Stuckenschmidt, CEUR-WS Vol-431. http://ceur-ws.org/Vol-431/om2008_
Tpaper5.pdf (online).
Ishai, Y., E. Kushilevitz, R. Ostrovsky, and A. Sahai. 2009. Extracting correlations,
Foundations of Computer Science, 2009. FOCS '09. 50th Annual IEEE Symposium
on, pp. 261, 270, 25–27 Oct. 2009. doi: 10.1109/FOCS.2009.56.
Joseph, A. 2012. A Berkeley view of big data. Closing keynote of Eduserv Symposium
2012: Big Data, Big Deal? http://www.eduserv.org.uk/newsandevents/
events/2012/symposium/closing-keynote (accessed October 8, 2012).
Keberle, N. 2009. Temporal classes and OWL. In Proc Sixth Int Workshop on OWL:
Experiences and Directions (OWLED 2009), eds. R. Hoekstra and P. F. Patel-
Schneider, CEUR-WS, vol 529. http://ceur-ws.org/Vol-529/owled2009_sub-
mission_27.pdf (online).
Kendall, E., R. Bell, R. Burkhart, M. Dutra, and E. Wallace. 2009. Towards a graphical
notation for OWL 2. In Proc Sixth Int Workshop on OWL: Experiences and Directions
(OWLED 2009), eds. R. Hoekstra and P. F. Patel-Schneider, CEUR-WS, vol 529.
http://ceur-ws.org/Vol-529/owled2009_submission_47.pdf (online).
Klinov, P., C. del Vescovo, and T. Schneider. 2012. Incrementally updateable and
persistent decomposition of OWL ontologies. In Proc OWL: Experiences and
Directions Workshop, ed. P. Klinov and M. Horridge, CEUR-WS, vol 849. http://
ceur-ws.org/Vol-849/paper_7.pdf (online).
Kontchakov, R., C. Lutz, D. Toman, F. Wolter, and M. Zakharyaschev. 2010. The com-
bined approach to query answering in DL-Lite. In Proc Twelfth Int Conf on the
Principles of Knowledge Representation and Reasoning (KR 2010), eds. F. Lin and U.
Sattler, 247–257. North America: AAAI.
Knuth, D. E. 1998. The Art of Computer Programming. Volume 3: Sorting and Searching.
Second Edition, Reading, MA: Addison-Wesley.
Labrou, Y. 2006. Standardizing agent communication. In Multi-Agent Systems and
Applications, eds. M. Luck, V. Marik, O. Stepankova, and R. Trappl, 74–97. Berlin,
Heidelberg: Springer-Verlag, LNCS 2086.
Labrou, Y., T. Finin, and Y. Peng. 1999. Agent communication languages: The current
landscape. IEEE Intelligent Systems 14(2): 45–52.
Lenat, D. B. 1995. CYC: A large-scale investment in knowledge infrastructure.
Communications of the ACM 38(11): 33–38.
Lin, J. and C. Dyer. 2010. Data-Intensive Text Processing with MapReduce. Morgan &
Claypool Synthesis Lectures on Human Language Technologies. http://lintool.
github.com/MapReduceAlgorithms/MapReduce-book-final.pdf.
Manyika, J., M. Chui, B. Brown, J. Bughin, R. Dobbs, C. Roxburgh, and A. Hung Byers.
2011. Big data: The next frontier for innovation, competition, and productivity.
McKinsey Global Institute (May). http://www.mckinsey.com/insights/mgi/
research/technology_and_innovation/big_data_the_next_frontier_for_inno-
vation (accessed October 8, 2012).
McGlothlin, J. P. and L. Khan. 2010. Materializing inferred and uncertain knowledge
in RDF datasets. In Proc Twenty-Fourth AAAI Conference on Artificial Intelligence
(AAAI-10), 1951–1952. North America: AAAI.
Mills, P. 2011. Efficient statistical classification of satellite measurements. International
Journal of Remote Sensing 32(21): 6109–6132.
Mitchell, I. and M. Wilson. 2012. Linked Data. Connecting and Exploiting Big Data.
Fujitsu White Paper (March). http://www.fujitsu.com/uk/Images/Linked-
data-connecting-and-exploiting-big-data-(v1.0).pdf.
Nardi, D. and R. J. Brachman. 2007. An introduction to description logics. In
The Description Logic Handbook, eds. F. Baader, D. Calvanese, D. L. McGuinness,
D. Nardi, and P. F. Patel-Schneider. New York: Cambridge University Press.
Nemani, R. R. and R. Konda. 2009. A framework for data quality in data warehousing.
In Information Systems: Modeling, Development, and Integration, eds. J. Yang, A.
Ginige, H. C. Mayr, and R.-D. Kutsche, 292–297. Berlin, Heidelberg: Springer-
Verlag, LNBIP 20.
Olston, C. 2012. Programming and debugging large scale data processing workflows.
In First Int Workshop on Hot Topics in Cloud Data Processing (HotCDP’12),
Switzerland.
Oren, E., S. Kotoulas, G. Anadiotis, R. Siebes, A. ten Teije, and F. van Harmelen. 2009.
Marvin: Distributed reasoning over large-scale Semantic Web data. Journal of
Web Semantics 7(4): 305–316.
Ponniah, P. 2010. Data Warehousing Fundamentals for IT Professionals. Hoboken, NJ:
John Wiley & Sons.
Puuronen, S., V. Terziyan, and A. Tsymbal. 1999. A dynamic integration algorithm
for an ensemble of classifiers. In Foundations of Intelligent Systems: Eleventh Int
Symposium ISMIS’99, eds. Z.W. Ras and A. Skowron, 592–600. Berlin, Heidelberg:
Springer-Verlag, LNAI 1609.
Quillian, M. R. 1967. Word concepts: A theory and simulation of some basic semantic
capabilities. Behavioral Science 12(5): 410–430.
Quillian, M. R. 1969. The teachable language comprehender: A simulation program
and theory of language. Communications of the ACM 12(8): 459–476.
Rahwan, T. 2007. Algorithms for coalition formation in multi-agent systems. PhD
diss., University of Southampton. http://users.ecs.soton.ac.uk/nrj/download-
files/lesser-award/rahwan-thesis.pdf (accessed October 8, 2012).
Rimal, B. P., C. Eunmi, and I. Lumb. 2009. A taxonomy and survey of cloud comput-
ing systems. In Proc Fifth Int Joint Conf on INC, IMS and IDC, 44–51. Washington,
DC: IEEE CS Press.
Roy, G., L. Hyunyoung, J. L. Welch, Z. Yuan, V. Pandey, and D. Thurston. 2009. A
distributed pool architecture for genetic algorithms, Evolutionary Computation,
2009. CEC '09. IEEE Congress on, pp. 1177, 1184, 18–21 May 2009. doi: 10.1109/
CEC.2009.4983079
Sakr, S., A. Liu, D.M. Batista, and M. Alomari. 2011. A survey of large scale data
management approaches in cloud environments. IEEE Communications Society
Surveys & Tutorials 13(3): 311–336.
Salehi, A. 2010. Low Latency, High Performance Data Stream Processing: Systems
Architecture. Algorithms and Implementation. Saarbrücken: VDM Verlag.
Shvachko, K., K. Hairong, S. Radia, R. Chansler. 2010. The Hadoop distributed file
system, Mass Storage Systems and Technologies (MSST), 2010 IEEE 26th Symposium
on, pp.1,10, 3–7 May 2010. doi: 10.1109/MSST.2010.5496972.
Smith, B. 2012. Big data that might benefit from ontology technology, but why this
usually fails. In Ontology Summit 2012, Track 3 Challenge: Ontology and Big
Data. http://ontolog.cim3.net/file/work/OntologySummit2012/2012-02-09_
BigDataChallenge-I-II/Ontology-for-Big-Data—BarrySmith_20120209.pdf
(accessed October 8, 2012).
Tassonomy and Review of Big Data Solutions Navigation

Contents
Introduction............................................................................................................ 58
Main Requirements and Features of Big Data Solutions..................................65
Infrastructural and Architectural Aspects........................................................65
Scalability.......................................................................................................65
High Availability.......................................................................................... 67
Computational Process Management........................................................ 68
Workflow Automation................................................................................. 68
Cloud Computing........................................................................................ 69
Self-Healing................................................................................................... 70
Data Management Aspects.............................................................................. 70
Database Size................................................................................................ 71
Data Model.................................................................................................... 71
Resources....................................................................................................... 72
Data Organization........................................................................................ 73
Data Access for Rendering.......................................................................... 74
Data Security and Privacy........................................................................... 74
Data Analytics Aspects..................................................................................... 75
Data Mining/Ingestion................................................................................ 76
Data Access for Computing........................................................................77
Overview of Big Data Solutions........................................................................... 78
Couchbase...................................................................................................... 79
eXist................................................................................................................ 82
Google Map-Reduce..................................................................................... 82
Hadoop..........................................................................................................83
Hbase..............................................................................................................83
Hive................................................................................................................84
MonetDB........................................................................................................84
MongoDB.......................................................................................................85
Objectivity.....................................................................................................85
OpenQM........................................................................................................ 86
RDF-3X........................................................................................................... 86
Introduction
Although the management of huge and growing volumes of data has been a challenge for many years, no long-term solutions have been found so far. The term "Big Data" initially referred to volumes of data whose size is beyond the capabilities of current database technologies; consequently, "Big Data" problems were those that combine a large volume of data with the need to treat it in a short time. Once it is established that data have to be collected and stored at an impressive rate, it becomes clear that the biggest challenge is not only their storage and management, their analysis, and the extraction of meaningful value, but also the deductions and real-world actions that follow. Big Data problems have mostly been related to the presence of unstructured data, that is, information that either does not have a default schema/template or does not adapt well to relational tables; it is therefore necessary to turn to analysis techniques for unstructured data to address these problems.
More recently, Big Data problems have been characterized by a combination of the so-called 3Vs: volume, velocity, and variety; a fourth V has since been added: variability. In essence, a large volume of information is produced every day, and these data need sustainable access, processing, and preservation at the velocity of their arrival; the management of a large volume of data is therefore not the only problem. Moreover, the variety of data, metadata, access rights, associated computing, formats, semantics, and software tools for visualization, and the variability in structure and data models, significantly increase the level of complexity of these problems.
The first V, volume, describes the large amount of data generated by individuals, groups, and organizations. The volume of data being stored today is exploding. For example, in the year 2000 about 800,000 petabytes of data were generated and stored in the world (Eaton et al., 2012), and experts estimate that by 2020 about 35 zettabytes of data will be produced. The second V, velocity, refers to the speed at which Big Data are collected, processed, and elaborated, often as a constant flow of massive data that is impossible to process with traditional solutions. For this reason, it is important to consider not only "where" the data are stored, but also "how" they are stored.
The third V, variety, refers to the proliferation of data types from social and mobile sources, machine-to-machine communication, and traditional data. With the explosion of social networks, smart devices, and sensors,
data have become complex, because they include raw, semistructured, and unstructured data from log files, web pages, search indexes, cross-media, emails, documents, forums, and so on. Variety covers all these types of data, and enterprises usually must be able to analyze all of them if they want to gain an advantage. Finally, the last V, variability, refers to data unpredictability and to how data may change over the years following the implementation of the architecture. The concept of variability also covers the assignment of variable interpretations to the data and the confusion this creates in Big Data analysis, referring, for example, to the different meanings that some data may have in natural language. These four properties can be considered orthogonal aspects of data storage, processing, and analysis; it is also interesting that increasing variety and variability increases the attractiveness of data and their potential for providing hidden and unexpected information/meanings.
In science especially, new "infrastructures for global research data" that can achieve interoperability, overcoming the limitations related to language, methodology, and guidelines (policy), will be needed in the short term. To cope with these types of complexity, several different techniques and tools may be needed; they have to be composed, and new specific algorithms and solutions may also have to be defined and implemented. The wide range of problems and the specific needs make it almost impossible to identify unique architectures and solutions adaptable to all possible application areas. Moreover, not only the number of very different application areas, but also the different channels through which data are collected daily, increase the difficulty for companies and developers in identifying the right way to obtain relevant results from the accessible data.
Therefore, this chapter can be a useful tool for supporting researchers and technicians in making decisions about setting up a Big Data infrastructure and solution. To this end, it is very helpful to have an overview of Big Data techniques; it can be used as a sort of guideline to better understand the possible differences and the most relevant features, among the many needed and offered by products, that constitute the key aspects of Big Data solutions. These can be regarded as requirements and needs according to which the different solutions can be compared and assessed, in accordance with the case study and/or application domain.
To this end, and to better understand the impact of Big Data science and solutions, a number of examples describing major application domains that take advantage of Big Data technologies and solutions are reported in the following: education and training, cultural heritage, social media and social networking, health care, research on the brain, finance and business, marketing and social marketing, security, smart cities and mobility, etc.
Big Data technologies have the potential to revolutionize education.
Educational data such as students’ performance, mechanics of learning, and
answers to different pedagogical strategies can provide an improved under-
standing of students’ knowledge and accurate assessments of their progress.
These data can also help identify clusters of students with similar learning styles or difficulties, thus defining a new form of customized education based on sharing resources and supported by computational models. The new models of teaching proposed in Woolf et al. (2010) try to take into account student profiles and performance, together with pedagogical, psychological, and learning mechanisms, to define personalized instruction courses and activities that meet the different needs of individual students and/or groups. In fact, in the educational sector, the approach of collecting, mining, and analyzing large data sets has been consolidated in order to provide new tools and information to the key stakeholders. This data analysis can provide a better understanding of students' knowledge, improve the assessment of their progress, and help focus questions in education and psychology, such as how students learn or how different students respond to different pedagogical strategies. The collected data can also be used to define models of what students actually know, to understand how to enrich this knowledge, to assess which of the adopted techniques is effective in which cases, and finally to produce a case-by-case action plan.
In terms of Big Data, a large variety and variability of data has to be handled to take into account all the events in a student's career; the data volume is an additional factor. Another sector of interest in this field is the e-learning domain, where two main kinds of users are defined: the learners and the learning providers (Hanna, 2004). All the personal details of learners and the information of online learning providers are stored in specific databases, so applying data mining to e-learning makes it possible to realize teaching programs targeted to particular interests and needs through efficient decision making.
For the management of large amounts of cultural heritage information, Europeana has been created, with over 20 million content items indexed that can be retrieved in real time. Earlier, each of them was modeled with a simple metadata model, ESE, while a new and more complete model called EDM (Europeana Data Model), with a set of semantic relationships, is going to be adopted in 2013 [Europeana]. A number of projects and activities are connected to the Europeana network to aggregate content and tools. Among them, ECLAP is a best-practice network that has collected not only content metadata for Europeana, but also real content files, from over 35 different institutions with different metadata sets and over 500 file formats. A total of more than 1 million cross-media items is going to be collected, with an average of some hundreds of metadata elements each, resulting in billions of information elements and multiple relationships among them to be queried, navigated, and accessed in real time by a large community of users [ECLAP] (Bellini et al., 2012a).
The volume of data generated by social networks is huge, with high variability in the data flow over time and space due to the human factor; for example, Facebook receives 3 billion uploads per month, which corresponds to approximately 3600 TB/year. Search engine companies such as Google and Yahoo! collect trillions of bytes of data every day, around which real new
Scalability
This feature may impact several aspects of a Big Data solution (e.g., data storage, data processing, rendering, computation, connection, etc.).
A good way to optimize reaction time and to obtain a scalable solution at limited cost is the adoption of a multitiered storage system, including cache levels, where data pass from one level to another along a hierarchy of storage media with different response times and costs. In fact, a multitier approach to storage, using arrays of disks for backup alongside primary storage and an efficient file system, allows both backups and restores to online storage to be provided in a timely manner, as well as scaling up the storage when primary storage grows. Obviously, not every solution has to implement all layers of the memory hierarchy: the needs depend on the specific case, together with the amount of information to be accessed per second, the depth of the cache memories, the binning of different types of data into classes based on their availability and recoverability, and the choice of whether to use a middleware to connect separate layers. The structure of the multitiered storage can be designed as a compromise between access velocity and overall storage cost. As a counterpart, multiple storage tiers create substantial maintenance costs.
Scalability may also take advantage of recent cloud solutions that implement techniques for dynamically bursting storage and processing from private to public clouds, and among the latter. Private cloud computing has recently gained much traction from both commercial and open-source interests (Microsoft, 2012). For example, tools such as OpenStack [OpenStack Project] can simplify the process of managing virtual machine resources. In most cases, for small-to-medium enterprises, the trend is to migrate multitier applications into public cloud infrastructures (e.g., Amazon), which are delegated to cope with scalability via elastic cloud solutions. A deep discussion of the cloud is out of the scope of this chapter.
High Availability
The high availability of a service (whether it refers to a general service, to storage, to processing, or to the network) is a key requirement in an architecture that has to support simultaneous use by a large number of users and/or computational nodes located in different geographical locations (Cao et al., 2009). Availability refers to the ability of the community of users to access a system and exploit its services. High availability makes it harder to guarantee data updates, preservation, and consistency in real time, and it is fundamental that a user perceives, during a session, the actual and proper reactivity of the system. To cope with these requirements, the design should be fault-tolerant, for example with redundant solutions for data and computational capabilities, to keep them highly available despite the failure of some hardware and software elements of the infrastructure. The availability of a system is usually expressed as the percentage of time (the nines method) that the system is up over a given period, usually a year. In cloud systems, for instance, the level of 5 nines (99.999% of the time, meaning HA, high availability) typically refers to the service at the hardware level, and it indicates a downtime per year
of approximately 5 min; it is important to note, however, that this time does not always have the same value, since it depends on the organization served by the critical system. Present solutions obtain the HA score by using a range of cloud architecture techniques, such as fault-tolerant capabilities for virtual machines, redundant storage for distributed databases, load balancing for the front end, and the dynamic migration of virtual machines.
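The downtime figures implied by the "nines" method can be checked with a few lines of arithmetic; the snippet below simply converts an availability level into minutes of downtime per year (for 99.999% it yields roughly 5 minutes, as stated above).

```python
def downtime_minutes_per_year(availability: float) -> float:
    """Yearly downtime implied by an availability level (fraction of time up)."""
    return (1.0 - availability) * 365 * 24 * 60

for nines in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{nines:.5%} availability -> {downtime_minutes_per_year(nines):.1f} min/year")
```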
Workflow Automation
Big Data processes are typically formalized in the form of process workflows, from data acquisition to the production of results. In some cases, the workflow is programmed by using a simple XML (Extensible Markup Language) formalization or full programming languages, for example Java, JavaScript, etc. The related data may vary strongly in terms of size and data flow (i.e., variability): an architecture that handles both limited and large volumes of data well must be able to fully support the creation, organization, and transfer of these workflows, in single-cast or broadcast mode. To implement this type of architecture, sophisticated automation systems are used. These systems work on different layers of the architecture through applications, APIs (Application Program Interfaces), visual process design environments, etc. Traditional Workflow Management Systems (WfMS) may
not be suitable for processing a huge amount of data in real time, for formalizing stream processing, etc. In some Big Data applications, the high data flow and the timing requirements (soft real time) have made the traditional "store-then-process" paradigm inadequate, so complex event processing (CEP) paradigms have been proposed (Gulisano et al., 2012): a system that processes a continuous stream of data (events) on the fly, without any storage. In fact, CEP can be regarded as an event-driven architecture (EDA), dealing with the detection of events and the production of reactions to them, whose specific task is to filter, match, and aggregate low-level events into high-level events. Furthermore, by creating a parallel, distributed CEP, where data are partitioned across processing nodes, it is possible to realize an elastic system capable of adapting the processing resources to the actual workload, reaching the high performance of parallel solutions and overcoming the limits of scalability.
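As a rough illustration of the "process on the fly, without any storage" principle, the sketch below filters and aggregates a stream of low-level events into high-level "overload" events over a small sliding window; the event fields, the threshold, and the window size are invented for the example and do not come from any specific CEP engine.

```python
from collections import deque

def detect_overload(events, window=5, threshold=3):
    """Consume a stream of low-level events on the fly and emit a high-level
    'overload' event whenever at least `threshold` readings above 100 are
    seen within the last `window` events. Nothing is stored beyond the window."""
    recent = deque(maxlen=window)
    for event in events:                      # events may be an unbounded iterable
        recent.append(event)
        spikes = [e for e in recent if e["value"] > 100]
        if len(spikes) >= threshold:
            yield {"type": "overload", "source": event["source"], "count": len(spikes)}

stream = ({"source": "sensor-1", "value": v} for v in [90, 120, 130, 95, 140, 150])
for alarm in detect_overload(stream):
    print(alarm)
```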
An interesting application example is the Large Hadron Collider (LHC), the most powerful particle accelerator in the world, which is estimated to produce 15 million gigabytes of data every year [LHC]; these data are then made available to physicists around the world thanks to the supporting infrastructure, the "Worldwide LHC Computing Grid" (WLCG). The WLCG connects more than 140 computing centers in 34 countries, with the main objective of supporting the collection and storage of data as well as processing tools, simulation, and visualization. The operational idea is that the LHC experimental data are recorded on tape at CERN before being distributed to 11 large computer centers (called "Tier 1" centers) in Canada, France, Germany, Italy, the Netherlands, Scandinavia, Spain, Taiwan, the UK, and the USA. From these sites, the data are made available to more than 120 "Tier 2" centers, where specific analyses can be conducted. Individual researchers can then access the information using computer clusters or even their own personal computers.
Cloud Computing
The cloud makes it possible to obtain seemingly unlimited storage space and computing power, which is the reason why the cloud paradigm is considered a very desirable feature of any Big Data solution (Bryant et al., 2008). It is a new business in which companies and users can rent infrastructure, software, products, processes, etc., using the "as a service" paradigm, for example from Amazon [Amazon AWS], Microsoft [Microsoft Azure], or Google [Google Drive]. Unfortunately, these public systems are not sufficient for extensive computations on large volumes of data because of low bandwidth; ideally, a cloud computing system for Big Data should be geographically dispersed, in order to reduce its vulnerability in the case of natural disasters, but it should also have a high level of interoperability and data mobility. In fact, there are systems moving in this direction, such as the OpenCirrus project [Opencirrus Project], an international test bed that allows experiments on interlinked cluster systems.
Self-Healing
This feature refers to the capability of a system to solve failure problems autonomously, for example, in the computational process, in the database and storage, and in the architecture. When a server or a node fails, it is important to be able to solve the problem automatically to avoid repercussions on the entire architecture. Thus, an automated recovery-from-failure solution, which may be implemented by means of fault tolerance, load balancing, hot spares, etc., plus some intelligence, is needed. It is therefore an important feature for Big Data architectures, which should be capable of autonomously bypassing the problem; once informed about the problem and the action performed to solve it, the administrator may then intervene. This is possible, for example, through techniques that automatically redirect to other resources the work that was planned to be carried out by the failed machine, which has to be automatically put offline. To this end, there are commercial products that allow setting up distributed and balanced architectures where data are replicated and stored in geographically dispersed clusters; when a node or storage unit fails, the cluster can self-heal by recreating the missing data of the damaged node in its free space, thus reconstructing the full capability of recovering from the next problem. Otherwise, performance and capacity may decrease under the degraded conditions until the failed storage, processor, or other resource is replaced (Ghosh et al., 2007).
According to Brewer's CAP theorem, a distributed system cannot simultaneously guarantee all three of the following features: consistency, availability, and partition tolerance (Fox and Brewer, 1999). The consistency property states that, after an operation, the data model is still in a consistent state and provides the same data to all its clients. The availability property means that the solution is robust with respect to some internal failure, that is, the service is still available. Partition tolerance means that the system continues to provide service even when it is divided into disconnected subsets, for example, when a part of the storage cannot be reached. To cope with the CAP theorem, Big Data solutions try to find a trade-off between continuing to issue the service despite partitioning problems and, at the same time, attempting to reduce the inconsistencies, thus supporting so-called eventual consistency.
Furthermore, in the context of relational databases, the ACID (Atomicity, Consistency, Isolation, and Durability) properties describe the reliability of database transactions. This paradigm does not apply to NoSQL databases where, in contrast to the ACID definition, the data state provides the so-called BASE properties: Basically Available, Soft state, and Eventually consistent. Therefore, it is typically hard to guarantee an architecture for Big Data management in a fault-tolerant BASE way since, as Brewer's CAP theorem says, there is no choice but to make a compromise if you want to scale up. In the following, some of the above aspects are discussed and explained in more detail.
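One common way of tuning the balance between consistency and availability, not discussed in this chapter but useful as an illustration of the CAP/BASE trade-off, is the choice of read and write quorums over N replicas, as in Dynamo-style stores: overlapping quorums give consistent reads, while smaller quorums favor availability and lead to eventual consistency. The rule of thumb is sketched below.

```python
def is_strongly_consistent(n_replicas: int, read_quorum: int, write_quorum: int) -> bool:
    """Dynamo-style rule of thumb: overlapping read and write quorums guarantee
    that a read sees the latest acknowledged write; otherwise the system is
    only eventually consistent."""
    return read_quorum + write_quorum > n_replicas

# Typical configurations for N = 3 replicas.
print(is_strongly_consistent(3, read_quorum=2, write_quorum=2))  # True: consistent reads
print(is_strongly_consistent(3, read_quorum=1, write_quorum=1))  # False: eventual consistency,
                                                                  # but higher availability
```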
Database Size
In Big Data problems, the database size may easily reach magnitudes of hundreds of terabytes (TB), petabytes (PB), or exabytes (EB). The evolution of Big Data solutions has seen an increase in the amount of data that can be managed. In order to exploit these huge volumes of data and to improve scientific productivity, new technologies and new techniques are needed. The real challenges of database size are related to indexing and to data access. These aspects are treated in the following.
Data Model
To cope with huge data sets, a number of different data models are available, such as the Relational Model, Object DB, XML DB, or the Multidimensional Array model, which extend database functionality as described in Baumann et al. (1998). Systems like Db4o (Norrie et al., 2008) or RDF 3X (Schramm, 2012) propose different solutions for data storage that can handle more or less structured information and the relationships among the data. The data model is the main factor influencing the performance of data management. In fact, the performance of indexing represents in most cases the bottleneck of the elaboration. Alternatives may be solutions belonging to the so-called category of NoSQL databases, such as ArrayDBMS (Cattel, 2010), MongoDB [mongoDB], CouchDB [Couchbase], and HBase [Apache HBase], which provide higher speeds with respect to traditional RDBMSs (relational database management systems).
Resources
The main performance bottlenecks for NoSQL data stores correspond to the main computer resources: network, disk, and memory performance, and the
Data Organization
Data organization impacts the storage, access, and indexing performance of data (Jagadish et al., 1997). In most cases, a great part of the accumulated data is not relevant for estimating results and could thus be filtered out and/or stored in compressed form, as well as moved into slower memory along the multitier architecture. To this end, a challenge is to define rules for arranging and filtering data in order to avoid or reduce the loss of useful information while preserving performance and saving costs (Olston et al., 2003). The distribution of data over different remote tables may cause inconsistencies when the connection is lost and the storage is partitioned because of some fault. In general, it is not always possible to ensure that data are locally available on the node that will process them. It is evident that when this condition is generally achieved, the best performance is obtained. Otherwise, the missing data blocks have to be retrieved, transferred, and processed in order to produce the results, with a high consumption of resources on the node that requested them, on the node that owns them, and thus on the entire network; the completion time would therefore be significantly higher.
and thus have to be managed in some coded, protected format, for example with some encryption. Solutions based on conditional access, channel protection, and authentication may still keep sensitive data stored in the clear. These are called Conditional Access Systems (CAS) and are used to manage and control user access to services and data (normal users, administrators, etc.) without protecting each single data element via encryption. Most Big Data installations are based on web service models, with few facilities for countering web threats, whereas it is essential that data are protected from theft and unauthorized access. Moreover, most of the present Big Data solutions offer only credential-based conditional access methods for accessing the data, and do not protect the data themselves with encrypted packages. On the other hand, content protection is sometimes supported by Digital Rights Management (DRM) technologies, solutions that allow one to define and execute licenses formalizing the rights that can be exploited on a given content element, who can exploit those rights, and under which conditions (e.g., time, location, number of times, etc.). The control of user access rights is per se a Big Data problem (Bellini et al., 2013). DRM solutions use authorization, authentication, and encryption technologies to manage and enable the exploitation of rights by different types of users, that is, the logical control of specific users with respect to each single piece of the huge quantities of data. The same technology can contribute to safeguarding data privacy by keeping the data encrypted until they are effectively used by authorized and authenticated tools and users. Therefore, access to data outside the permitted rights and content would be forbidden. Data security is a key aspect of an architecture for the management of such big quantities of data, and it is essential to define who can access what. This is a fundamental feature in areas such as health/medicine, banking, media distribution, and e-commerce. In order to enforce data protection, some frameworks are available that implement DRM and/or CAS solutions exploiting different encryption and technical protection techniques (e.g., MPEG-21 [MPEG-21], AXMEDIS (Bellini et al., 2007), ODRL (Iannella, 2002)). In the specific case of EPRs, several million patients, each with hundreds of elements, have to be managed, where for each element some tens of rights should be controlled, resulting in billions of accesses and thus of authentications per day.
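As a toy illustration of what "executing a license" may involve, the sketch below checks whether a user may exploit a given right on a given content element under time and usage-count conditions; the license fields, the EPR identifier, and the user names are entirely hypothetical and far simpler than what DRM frameworks such as MPEG-21 or AXMEDIS actually provide.

```python
from datetime import datetime

licenses = [
    # Hypothetical license: user "alice" may render a given EPR record
    # at most 3 times, only during 2013.
    {"user": "alice", "content": "epr-0042", "right": "render",
     "not_before": datetime(2013, 1, 1), "not_after": datetime(2013, 12, 31),
     "max_uses": 3, "uses": 0},
]

def can_exploit(user, content, right, when):
    """Return True iff some license grants `right` on `content` to `user` at `when`."""
    for lic in licenses:
        if (lic["user"] == user and lic["content"] == content and lic["right"] == right
                and lic["not_before"] <= when <= lic["not_after"]
                and lic["uses"] < lic["max_uses"]):
            lic["uses"] += 1            # account for the exploitation
            return True
    return False

print(can_exploit("alice", "epr-0042", "render", datetime(2013, 6, 1)))   # True
print(can_exploit("bob", "epr-0042", "render", datetime(2013, 6, 1)))     # False
```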
Data Mining/Ingestion
Data mining and data ingestion are two key aspects of Big Data solutions; in fact, in most cases there is a trade-off between the speed of data ingestion, the ability to answer queries quickly, and the quality of the data in terms of updates, coherence, and consistency. This compromise impacts the design of the storage system (i.e., OLTP vs. OLAP, On-Line Transaction Processing vs. On-Line Analytical Processing), which has to be capable of storing and indexing the new data at the same rate at which they reach the system, also taking into account that a part of the received data may not be relevant for the production of the requested results. Moreover, some storage and file systems are optimized for reading and others for writing, while workloads generally involve a mix of both operations. An interesting solution is GATE, a framework and graphical development environment for developing applications and engineering components for language-processing tasks, especially for data mining and information extraction (Cunningham et al., 2002). Furthermore, the data mining process can be strengthened and completed by the use of crawling techniques, now consolidated, for the extraction of meaningful data from information-rich web pages, including complex structures and tags. The processing of a large amount of data can be very expensive in terms of resources used and computation time. For these reasons, it may be helpful to use a distributed approach of crawlers (with additional functionality) that work as a distributed system under a central control unit, which manages the allocation of tasks among the active computers in the network (Thelwall, 2001).
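The sketch below illustrates, in a single process with threads standing in for crawler nodes, the idea of a central control unit that allocates crawling tasks to distributed workers; the queue-based design, the URLs, and the placeholder "fetch" are assumptions made only for the example.

```python
import queue
import threading

def crawler(worker_id, tasks, results):
    """Worker node: repeatedly ask the central unit for a URL and 'crawl' it."""
    while True:
        try:
            url = tasks.get_nowait()       # the central control unit hands out work
        except queue.Empty:
            return
        results.put((worker_id, url, f"<content of {url}>"))  # placeholder fetch
        tasks.task_done()

tasks, results = queue.Queue(), queue.Queue()
for url in ["http://example.org/a", "http://example.org/b", "http://example.org/c"]:
    tasks.put(url)

workers = [threading.Thread(target=crawler, args=(i, tasks, results)) for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
while not results.empty():
    print(results.get())
```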
caches stores, as temporary data. Some kinds of data analytic algorithms create
enormous amounts of temporary data that must be opportunely managed
to avoid memory problems and to save time for the successive computa-
tions. In other cases, however, in order to make some statistics on the infor-
mation that is accessed more frequently, it is possible to use techniques to
create well-defined cache system or temporary files to optimize the com-
putational process. With the same aim, some incremental and/or hierarchical
algorithms are adopted in combination of the above-mentioned techniques,
for example, the hierarchical clustering k-means and k-medoid for recom-
mendation (Everitt et al., 2001; Xui and Wunsch, 2009; Bellini et al., 2012c).
A key element of Big Data access for data analysis is the presence of metadata
as data descriptors, that is, additional information associated with the main
data, which help to recover and understand their meaning with the context.
In the financial sector, for example, metadata are used to better understand
customers, date, competitors, and to identify impactful market trends; it is
therefore easy to understand that having an architecture that allows the stor-
age of metadata also represents a benefit for the following operations of data
analysis. Structured metadata and organized information help to create a
system with more easily identifiable and accessible information, and also
facilitate the knowledge identification process, through the analysis of avail-
able data and metadata. A variety of attributes can be applied to the data,
which may thus acquire greater relevance for users. For example, keyword,
temporal, and geospatial information, pricing, contact details, and anything
else that improves the quality of the information that has been requested. In
most cases, the production of suitable data descriptors could be a way to save time in recovering the real full data, since the matching and the further computational algorithms are based on those descriptors rather than on the original data. For example, the identification of duplicated documents can be performed by comparing the document descriptors, and the production of user recommendations can be performed on the basis of collective user descriptors or on the basis of the descriptors representing the centres of the clusters.
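A minimal sketch of descriptor-based duplicate detection is shown below (exact duplicates only; near-duplicate detection would use richer descriptors such as shingles or minhashes). Documents are compared through a compact hash of their normalized text rather than through their full content; the example documents are hypothetical.

```python
# Descriptor-based duplicate detection: compare compact hashes, not full documents.
import hashlib

def descriptor(text: str) -> str:
    normalized = " ".join(text.lower().split())          # case/whitespace normalization
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

docs = {
    "doc-1": "Big Data   solutions review.",
    "doc-2": "big data solutions REVIEW.",
    "doc-3": "A different document.",
}

index = {}
for doc_id, text in docs.items():
    index.setdefault(descriptor(text), []).append(doc_id)

duplicates = [ids for ids in index.values() if len(ids) > 1]
print(duplicates)   # [['doc-1', 'doc-2']]
```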
Couchbase
[Couchbase] is designed for real-time applications and does not support SQL queries. Its incremental indexing system is designed to be native to the JSON (JavaScript Object Notation) storage format. Thus, JavaScript code can be used to verify the document and to select which data are used as index keys. Couchbase Server is an elastic and open-source NoSQL database that automatically distributes data across commodity servers or virtual machines and can easily accommodate changing data management requirements, thanks to the absence of a schema to manage. Couchbase is also based on Memcached, which is responsible for the optimization of network protocols and hardware and allows obtaining good performance at the network level. Memcached [Memcached] is an open-source distributed caching system based on main memory, which is especially used in highly trafficked websites and web applications with high performance demands. Moreover, thanks to Memcached, Couchbase can improve the online user experience, maintaining low latency and a good ability to scale up to a large number of users. Couchbase Server allows system updates to be managed in a simple way; they can be performed without taking the entire system offline. It also allows
Table 2.1
Main Features of Reviewed Big Data Solutions

Columns in this first part of the table: ArrayDBMS, CouchBase, Db4o, eXist, Google MapReduce, Hadoop.

Distributed: ArrayDBMS = Y; CouchBase = Y; Db4o = Y; eXist = A; Google MapReduce = Y; Hadoop = Y.
High availability: ArrayDBMS = A; CouchBase = Y; Db4o = Y; eXist = Y; Google MapReduce = Y; Hadoop = Y.
Cloud: ArrayDBMS = A; CouchBase = Y; Db4o = A; eXist = Y/A; Google MapReduce = Y; Hadoop = Y.
Parallelism: ArrayDBMS = Y; CouchBase = Y; Db4o = transactional; eXist = Y; Google MapReduce = Y; Hadoop = Y.
Data model: ArrayDBMS = multidimensional array; CouchBase = one document per concept (document store); Db4o = object DB + B-tree for index; eXist = XML-DB + index tree; Google MapReduce = Big table (CF, KV, OBJ, DOC); Hadoop = Big table (column family).
Users access type: ArrayDBMS = Web interface; CouchBase = multiple point; Db4o = remote user interface; eXist = Web interface, REST interface; Google MapReduce = many types of API interfaces; Hadoop = command line interface or HDFS-UI web app.
Type of indexing: ArrayDBMS = multidimensional index; CouchBase = Y (incremental); Db4o = B-tree field indexes; eXist = B+-tree (XISS); Google MapReduce = distributed multilevel tree indexing; Hadoop = HDFS.
Data relationships: ArrayDBMS = Y; CouchBase = NA; Db4o = Y; eXist = Y; Google MapReduce = Y; Hadoop = A.
Log analysis: ArrayDBMS = NA; CouchBase = Y; Db4o = NA; eXist = NA; Google MapReduce = Y; Hadoop = Y.
Indexing speed: ArrayDBMS = more than RDBMS; CouchBase = non-optimal performance; Db4o = 5–10 times more than SQL; eXist = high speed with B+-tree; Google MapReduce = Y/A; Hadoop = A.

Note: Y, supported; N, no info; P, partially supported; A, available but supported by means of a plug-in or external extension; NA, not available.
Columns in this second part of the table: HBase, Hive, MonetDB, MongoDB, Objectivity, OpenQM, RdfHdt Library, RDF 3X.

Distributed: HBase = Y; Hive = Y; MonetDB = Y; MongoDB = Y; Objectivity = Y; OpenQM = NA; RdfHdt Library = A; RDF 3X = Y.
High availability: HBase = Y; Hive = Y; MonetDB = P; MongoDB = Y; Objectivity = Y; OpenQM = Y; RdfHdt Library = A; RDF 3X = A.
Cloud: HBase = Y; Hive = Y; MonetDB = A; MongoDB = Y; Objectivity = Y; OpenQM = NA; RdfHdt Library = A; RDF 3X = A.
Parallelism: HBase = Y; Hive = Y; MonetDB = Y; MongoDB = Y; Objectivity = Y; OpenQM = NA; RdfHdt Library = NA; RDF 3X = Y.
Data model: HBase = Big table (column family); Hive = table, partitions, bucket (column family); MonetDB = BAT (Ext SQL); MongoDB = one table for each collection (document DB); Objectivity = classic tables in which models are defined (GraphDB); OpenQM = one file/table, data + dictionary (MultivalueDB); RdfHdt Library = three structures, RDF graph for Header (RDF store); RDF 3X = one table + permutations (RDF store).
Users access type: HBase = Jython or Scala interface, REST or Thrift gateway; Hive = HiveQL queries, Web interface; MonetDB = full SQL interfaces; MongoDB = command line, query; Objectivity = multiple access from different applications, AMS; OpenQM = console or web application; RdfHdt Library = access on demand; RDF 3X = Web interface (SPARQL).
Type of indexing: HBase = H-files; Hive = bitmap indexing; MonetDB = hash index; MongoDB = RDBMS-like indexes; Objectivity = Y (function and Objectivity/SQL++ interfaces); OpenQM = B-tree based; RdfHdt Library = RDF graph; RDF 3X = Y (efficient triple indexes).
Data relationships: HBase = Y; Hive = NA; MonetDB = Y; MongoDB = Y; Objectivity = Y; OpenQM = Y; RdfHdt Library = Y; RDF 3X = Y.
Log analysis: HBase = NA; Hive = NA; MonetDB = NA; MongoDB = A; Objectivity = A (PerSay); OpenQM = Y; RdfHdt Library = Y; RDF 3X = P.
Indexing speed: HBase = Y; Hive = NA; MonetDB = more than RDBMS; MongoDB = high speed if the DB dimension does not exceed memory; Objectivity = high speed; OpenQM = increased speed with alternate key; RdfHdt Library = 15 times faster than RDF; RDF 3X = aerodynamic.
eXist
eXist (Meier, 2003) is grounded on an open-source project to develop a native
XML database system that can be integrated into a variety of possible appli-
cations and scenarios, ranging from web-based applications to documenta-
tion systems. The eXist database is completely written in Java and may be deployed in different ways: running inside a servlet engine, as a stand-alone server process, or directly embedded into an application. eXist provides schema-less storage of XML documents in hierarchical collections. It is possible to query a distinct part of the collection hierarchy, or the documents contained in the database, using an extended XPath syntax. eXist's query engine implements efficient, index-based query processing: based on path join algorithms, a large range of queries is processed using index information. This database is a suitable solution for applications that deal with both large and small collections of XML documents and frequent updates to them. eXist also provides a set of extensions that allow searching by keyword, by proximity to the search terms, and by regular expressions.
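An illustrative query of the kind eXist evaluates over hierarchical collections is sketched below; the collection path and element names are hypothetical, and such a query could be submitted through eXist's REST interface or an embedded API.

```python
# Illustrative only: an extended-XPath/XQuery-style query over a hierarchical
# collection (collection path and element names are hypothetical).
query = """
for $article in collection('/db/journals/2012')//article
where $article/metadata/keyword = 'big data'
order by $article/metadata/date descending
return $article/title
"""
# The index-based query engine resolves the path expressions using index
# information rather than scanning every document in the collection.
print(query)
```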
Google Map-Reduce
Google Map-Reduce (Yang et al., 2007) is the programming model for processing Big Data used by Google. Users specify the computation in terms of a map and a reduce function. The underlying system parallelizes the computation across large-scale clusters of machines and is also responsible for handling failures, maintaining effective communication, and addressing performance issues. The Map function in the master node takes the input, partitions it into smaller subproblems, and distributes them to operational nodes. Each operational node could perform this step again, creating a multilevel tree structure. The operational node processes the smaller problems and returns the response to its parent node. In the Reduce function, the root node takes the answers from the subproblems and combines them to produce the answer to the global problem being solved. The advantage of Map-Reduce lies in the fact that it is intrinsically parallel and thus it allows mapping and reduction operations to be distributed. The Map operations are independent of each other and can be performed in parallel (with limitations imposed by the data source and/or the number of CPUs/cores close to those data); in the same way,
a series of Reduce can perform the reduction step. This results in running
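The model can be illustrated in miniature with the classic word-count example (a toy sketch, not Google's implementation): map emits key-value pairs, the framework groups them by key, and reduce aggregates each group.

```python
# Toy illustration of the map/reduce model over an in-memory list of documents.
from collections import defaultdict

def map_fn(document):
    for word in document.split():
        yield word.lower(), 1          # emit (key, value) pairs

def reduce_fn(word, counts):
    return word, sum(counts)           # aggregate all values for one key

documents = ["big data computing", "big data solutions for big problems"]

grouped = defaultdict(list)
for doc in documents:                  # the map phase can run on many nodes in parallel
    for key, value in map_fn(doc):
        grouped[key].append(value)

results = dict(reduce_fn(w, c) for w, c in grouped.items())   # the reduce phase
print(results)                         # {'big': 3, 'data': 2, ...}
```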
Hadoop
[Hadoop Apache Project] is a framework that allows managing distributed
processing of Big Data across clusters of computers using simple program-
ing models. It is designed to scale up from single servers to thousands of
machines, each of them offering local computation and storage. The Hadoop
library is designed to detect and handle failures at the application layer, so
delivering a highly available service on top of a cluster of computers, each of
which may be prone to failures. Hadoop was inspired by Google's Map-Reduce and the Google File System (GFS), and in practice it has been designed to be adopted in a wide range of cases. Hadoop is designed to scan large data sets to produce results through a distributed and highly scalable batch processing system. It is composed of the Hadoop Distributed File System (HDFS) and of the programming paradigm Map-Reduce (Karloff et al., 2010); thus, it is capable of exploiting the redundancy built into the environment. The programming model is capable of detecting failures and solving them automatically by running specific programs on various servers in the cluster. In fact, redundancy provides fault tolerance and self-healing capability to the Hadoop cluster. HDFS allows applications to be run across multiple servers, which usually have a set of inexpensive internal disk drives; the possibility of using commodity hardware is another advantage of Hadoop. A similar and interesting solution is HadoopDB, proposed by a group of researchers at Yale. HadoopDB was conceived with the idea of creating a hybrid system that combines the main features of two technological solutions: parallel databases, for performance and efficiency, and Map-Reduce-based systems, for scalability, fault tolerance, and flexibility. The basic idea behind HadoopDB is to use Map-Reduce as the communication layer above multiple nodes running single-node DBMS instances. Queries are expressed in SQL and then translated into Map-Reduce. In particular, the solution implemented involves the use of PostgreSQL as the database layer, Hadoop as the communication layer, and Hive as the translation layer (Abouzeid et al., 2009).
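Hadoop jobs are commonly written against the Java Map-Reduce API, but the Hadoop Streaming utility also lets any executable that reads stdin and writes tab-separated key-value lines act as mapper and reducer. The sketch below shows the shape of such scripts; the field layout of the input log lines is hypothetical.

```python
# Mapper and reducer in the style used with Hadoop Streaming, which pipes input
# splits to the mapper and sorted (key, value) lines to the reducer.
import sys

def mapper():
    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 3:
            user, _, num_bytes = fields[:3]     # hypothetical log layout
            print(f"{user}\t{num_bytes}")

def reducer():
    current, total = None, 0
    for line in sys.stdin:                       # input arrives sorted by key
        user, num_bytes = line.rstrip("\n").split("\t")
        if user != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = user, 0
        total += int(num_bytes)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```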
Hbase
HBase (Aiyer et al., 2012) is a large-scale distributed database built on top of HDFS, mentioned above. It is a nonrelational database developed by means of an open-source project. Many traditional RDBMSs use a single mutating B-tree for each index stored on disk. HBase, on the other hand, uses a Log Structured Merge Tree approach: it first collects all updates into a special data structure in memory and then, periodically, flushes this memory to disk, creating a new index-organized data file, also called an HFile. These indices are immutable over time, while the several indices created on disk are periodically merged. Therefore, by using this approach, writes to disk are performed sequentially. HBase's performance is satisfactory in most cases and may be further improved by using Bloom filters (Borthakur et al., 2011). Both the HBase and HDFS systems have been developed with elasticity as a fundamental principle, and the use of low-cost disks has been one of the main goals of HBase. Therefore, scaling the system is easy and cheap, even if it has to maintain a certain fault-tolerance capability in the individual nodes.
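The log-structured merge flow can be illustrated with a toy sketch (not HBase's actual implementation): updates accumulate in memory and are periodically flushed to immutable, sorted files, which are consulted from newest to oldest on reads.

```python
# Toy log-structured-merge flow: in-memory "memtable" + immutable sorted segments.
import json
import os
import tempfile

MEMTABLE_LIMIT = 3
memtable, segments = {}, []                # segments stand in for HFile-like files
segment_dir = tempfile.mkdtemp()

def put(key, value):
    memtable[key] = value
    if len(memtable) >= MEMTABLE_LIMIT:
        flush()

def flush():
    global memtable
    path = os.path.join(segment_dir, f"segment-{len(segments)}.json")
    with open(path, "w") as f:
        json.dump(dict(sorted(memtable.items())), f)   # sequential, sorted write
    segments.append(path)
    memtable = {}

def get(key):
    if key in memtable:
        return memtable[key]
    for path in reversed(segments):                    # newest segment wins
        with open(path) as f:
            data = json.load(f)
        if key in data:
            return data[key]
    return None

put("row1", "a"); put("row2", "b"); put("row3", "c"); put("row1", "a2")
print(get("row1"))   # 'a2'
```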
Hive
[Apache Hive] is an open-source data warehousing solution based on top of
Hadoop. Hive has been designed with the aim of analyzing large amounts
of data more productively, improving the query capabilities of Hadoop. Hive
supports queries expressed in an SQL-like declarative language—HiveQL—
to extract data from sources such as HDFS or HBase. The architecture comprises the Map-Reduce paradigm for computation (with the ability for users to enrich queries with custom Map-Reduce scripts), metadata information for data storage, and a processing part that receives queries from users or applications for execution. The core in/out libraries can be expanded to analyze customized data formats. Hive is also characterized by the presence of a system catalog (Metastore) containing schemas and statistics, which is useful in operations such as data exploration, query optimization, and query compilation. At Facebook, the Hive warehouse contains tens of thousands of tables, stores over 700 TB of data, and is being used extensively for both reporting and ad-hoc analyses by more than 200 users per month (Thusoo et al., 2010).
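An illustrative HiveQL query of the kind described above is shown below; the table and column names are hypothetical, and Hive would compile such a statement into one or more Map-Reduce jobs.

```python
# Illustrative HiveQL statement (hypothetical web_logs table partitioned by dt).
hiveql = """
SELECT page, COUNT(*) AS hits
FROM web_logs
WHERE dt = '2012-09-01'
GROUP BY page
ORDER BY hits DESC
LIMIT 10
"""
print(hiveql)
```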
MonetDB
MonetDB (Zhang et al., 2012) is an open-source DBMS for data mining appli-
cations. It has been designed for applications with large databases and que-
ries, in the field of Business Intelligence and Decision Support. MonetDB has
been built around the concept of bulk processing: simple operations applied
to large volumes of data by using efficient hardware, for large-scale data pro-
cessing. At present, two versions of MonetDB are available and are working
with different types of databases: MonetDB/SQL with relational database,
and MonetDB/XML with an XML database. In addition, a third version is
under development to introduce RDF and SPARQL (SPARQL Protocol and
RDF Query Language) support. MonetDB provides a full SQL interface with multilevel ACID properties, although it is not designed for high-volume transaction processing. MonetDB improves performance, in terms of speed, for both relational and XML databases thanks to innovations introduced at the DBMS level: a storage model based on vertical fragmentation, run-time query optimization, and a modular software architecture. MonetDB is designed to take advantage of large amounts of main memory and implements new
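The effect of vertical fragmentation can be illustrated with a toy sketch (MonetDB's actual BAT internals are far more sophisticated): each column is stored as its own array, so a query touching two columns never reads the others.

```python
# Column-at-a-time layout: the rows and field names below are hypothetical.
rows = [
    {"id": 1, "region": "EU", "amount": 120.0, "comment": "..."},
    {"id": 2, "region": "US", "amount": 80.0,  "comment": "..."},
    {"id": 3, "region": "EU", "amount": 42.5,  "comment": "..."},
]
columns = {name: [r[name] for r in rows] for name in rows[0]}   # vertical fragmentation

# SELECT SUM(amount) WHERE region = 'EU' only scans the 'region' and 'amount' columns.
total = sum(a for a, reg in zip(columns["amount"], columns["region"]) if reg == "EU")
print(total)   # 162.5
```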
MongoDB
[MongoDB] is a document-oriented database that memorizes document data
in BSON, a binary JSON format. Its basic idea consists in the usage of a more
flexible model, like the “document,” to replace the classic concept of a “row.” In
fact, with the document-oriented approach, it is possible to represent complex
hierarchical relationships with a single record, thanks to embedded docu-
ments and arrays. MongoDB is open-source and it is schema-free—that is,
there is no fixed or predefined document’s keys—and allows defining indices
based on specific fields of the documents. In order to retrieve data, ad-hoc que-
ries based on these indices can be used. Queries are created as BSON objects
to make them more efficient and are similar to SQL queries. MongoDB sup-
ports MapReduce queries and atomic operations on individual fields within
the document. It allows the realization of redundant and fault-tolerant systems that can easily be scaled horizontally, thanks to sharding based on the document keys and the support of asynchronous and master–slave replications. A relevant advantage of MongoDB is the opportunity to create data structures that easily store polymorphic data, and the possibility of building elastic cloud systems given its scale-out design, which increases ease of use and developer flexibility. Moreover, server costs are significantly low because a MongoDB deployment can use commodity and inexpensive hardware, and its horizontal scale-out architecture can also reduce storage costs.
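A minimal sketch of this document-oriented usage is given below, assuming a local mongod instance and the pymongo driver; the database, collection, and field names are hypothetical.

```python
# Schema-free documents, a field index, and an ad-hoc query (pymongo driver).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["shop"]

# Documents may carry different (polymorphic) fields, including embedded arrays.
db.orders.insert_one({"customer": "acme", "items": [{"sku": "A1", "qty": 2}], "total": 40.0})
db.orders.insert_one({"customer": "acme", "total": 15.0, "coupon": "SPRING"})

db.orders.create_index("customer")                     # index on a chosen field
for order in db.orders.find({"total": {"$gt": 20}}):   # ad-hoc query
    print(order["customer"], order["total"])
```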
Objectivity
[Objectivity Platform] is a distributed OODBMS (Object-Oriented Database
Management System) for applications that require complex data models.
It supports a large number of simultaneous queries and transactions and
provides high-performance access to large volumes of physically distrib-
uted data. Objectivity manages data in a transparent way and uses a dis-
tributed database architecture that allows good performance and scalability.
The main reasons for using a database of this type include the presence of
complex relationships that suggest tree structures or graphs, and the pres-
ence of complex data, that is, when there are components of variable length
and in particular multi-dimensional arrays. Other reasons are related to the
presence of a database that must be geographically distributed, and which
is accessed via a processor grid, or the use of more than one language or
platform, and the use of workplace objects. Objectivity has an architecture
consisting of a single distributed database, a choice that allows achieving
high performance in relation to the amount of data stored and the number of
users. This architecture distributes tasks for computation and data storage in a transparent way across the different machines, and it is also scalable and highly available.
OpenQM
[OpenQM Database] is a DBMS that allows developing and running applications and includes a wide range of tools and advanced features for complex applications. Its database model belongs to the MultiValue family and therefore has many aspects in common with Pick-descended databases; it is also transactional. The development of MultiValue applications is often faster than with other types of databases, which implies lower development costs and easier maintenance. This tool has a high degree of compatibility with other MultiValue database systems, such as UniVerse [UniVerse], PI/open, D3, and others.
RDF-HDT
The RDF-HDT (Header-Dictionary-Triples) [RDF-HDT Library] is a new representation format that modularizes the data and exploits the structure of large RDF graphs to save storage space. It is based on three main components: a Header, a Dictionary, and a set of Triples. The Header includes logical and physical data that describe the RDF data set, and it is the entry point to the data set. The Dictionary organizes all the identifiers in the RDF graph and provides a catalog of the information in the RDF graph with a high level of compression. The set of Triples, finally, encodes the pure structure of the underlying RDF graph and avoids the noise produced by long labels and repetitions. This design gains in modularity and compactness, and addresses other important characteristics: it allows on-demand access to the RDF graph and is used to design RDF-specific compression techniques (HDT-compress) able to outperform universal compressors. RDF-HDT introduces several advantages, such as compactness and compression of the stored data, using small amounts of memory space, communication bandwidth, and time. RDF-HDT uses little storage space, thanks to the asymmetric structure of large RDF graphs, and its representation format consists of two primary modules, Dictionary and Triples. The Dictionary contains the mapping between elements and unique IDs, without repetition, thanks to which it achieves a high compression rate and fast searches. The Triples component corresponds to the initial RDF graph in a compacted form where elements are replaced with the corresponding IDs. Thanks to these two processes, HDT can also be generated from RDF (HDT encoder), and it can manage separate accesses to run queries, to access the full RDF, or to carry out management operations (HDT decoder).
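The Dictionary-plus-Triples idea can be illustrated with a toy sketch (not the actual HDT encoding): RDF terms are mapped to integer IDs exactly once, and the graph is then stored as compact ID triples. The example terms are hypothetical.

```python
# Toy dictionary encoding of an RDF graph into compact integer triples.
triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:bob",   "foaf:knows", "ex:alice"),
]

dictionary, encoded = {}, []

def term_id(term):
    # each distinct term gets one ID; repeated terms reuse it (no repetition stored)
    return dictionary.setdefault(term, len(dictionary) + 1)

for s, p, o in triples:
    encoded.append((term_id(s), term_id(p), term_id(o)))

print(dictionary)   # every distinct term stored exactly once
print(encoded)      # [(1, 2, 3), (1, 4, 5), (3, 2, 1)]
```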
RDF-3X
RDF-3X (Schramm, 2012) is an RDF store that implements SPARQL [SPARQL] and achieves excellent performance by making a RISC (Reduced Instruction
Table 2.2
Relevance of the Main Features of Big Data Solutions with Respect to the Most Interesting Applicative Domains

CEP (active query): data analysis, scientific research (biomedical) = H; educational and cultural heritage = L; energy/transportation = M; financial/business = H; healthcare = H; security = M; smart cities and mobility = H; social media marketing = L; social network, Internet service, Web data = H.
Log analysis: data analysis, scientific research (biomedical) = L; educational and cultural heritage = M; energy/transportation = L; financial/business = H; healthcare = H; security = H; smart cities and mobility = H; social media marketing = H; social network, Internet service, Web data = H.
Streaming processing: data analysis, scientific research (biomedical) = M; educational and cultural heritage = L; energy/transportation = M; financial/business = H; healthcare = M; security = H; smart cities and mobility = H; social media marketing = M; social network, Internet service, Web data = H (network monitoring).
important to take account of issues related to the concurrent access and thus
data consistency, while in social media and smart cities it is important to
provide on-demand and multidevice access to information, graphs, real-time
conditions, etc. A flexible visual rendering (distributions, pies, histograms, trends, etc.) may be a strongly desirable feature for many scientific and research applications, as well as for financial data and health care (e.g., for reconstruction, trend analysis, etc.). Faceted query results can be very interesting for navigating mainly text-based Big Data, as in the educational and cultural heritage application domains. Graph navigation among the resulting relationships can be an unavoidable solution to represent the resulting data in smart cities and social media, and for presenting related implications and facts in financial and business applications. Moreover, in certain specific contexts, the data rendering has to be compliant with standards, for example, in health care.
In terms of data analytic aspects, several different features could be of
interest in the different domains. The most relevant feature in this area is
the type of indexing, which in turn characterizes the indexing performance.
The indexing performance is very relevant in domains in which a huge amount of small data items has to be collected and needs to be accessed and processed in a short time, such as finance, health care, security, and mobility. Otherwise, if the aim of the Big Data solution is mainly access and data processing, then fast indexing can be less relevant. For example, the use of HDFS may be suitable in contexts requiring complex and deep data processing, such as the evaluation of the evolution of a particular disease in the medical field, or the definition of specific business models. This approach, in fact, runs the processing function on a reduced data set, thus achieving the scalability and availability required for processing Big Data. In
education, instead, the usage of ontologies and thus of RDF databases and
graphs provides a rich semantic structure better than any other method of
knowledge representation, improving the precision of search and access for
educational contents, including the possibility of enforcing inference in the
semantic data structure.
The possibility of supporting statistical and logical analyses on data via
specific queries and reasoning can be very important for some applications
such as social media and networking. If this feature is structurally sup-
ported, it is possible to realize direct operations on the data, or define and
store specific queries to perform direct and fast statistical analysis: for exam-
ple, for estimating recommendations, firing conditions, etc.
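As a rough illustration of such a stored query, the sketch below keeps a running statistic over incoming events and fires an action when a stored condition holds; the window size, threshold, and event fields are hypothetical.

```python
# A stored "firing condition" evaluated continuously over an event stream.
from collections import deque

WINDOW, THRESHOLD = 20, 0.5
recent = deque(maxlen=WINDOW)

def fire_alert(rate):
    print(f"condition fired: error rate {rate:.0%} over the last {WINDOW} events")

def on_event(event):
    recent.append(1 if event.get("error") else 0)
    error_rate = sum(recent) / len(recent)       # statistic updated per event
    if len(recent) == WINDOW and error_rate > THRESHOLD:
        fire_alert(error_rate)
        recent.clear()                           # reset the window after firing

for i in range(100):                             # stand-in for a live event stream
    on_event({"error": i % 3 != 0})
```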
In other contexts, however, the continuous processing of data streams is very important, for example, to respond quickly to requests for information and services by the citizens of a "smart city," for real-time monitoring of the performance of financial stocks, or to report to medical staff unexpected changes in the health status of patients under observation. As can be seen from the table, in these contexts, a particularly significant feature is the use of the CEP (complex event processing) approach, based on active queries, which
Table 2.3
Relevance of the Main Features of Big Data Solutions with Respect to the Most Interesting Applicative Domains
Columns (the reviewed solutions): ArrayDBMS, CouchBase, Db4o, eXist, Google MapReduce, Hadoop, HBase, Hive, MonetDB, MongoDB, Objectivity, OpenQM, RdfHdt Library, RDF 3X.

Data analysis, scientific research (biomedical): X X X X X X X X X X
Education and cultural heritage: X X X X
Energy/transportation: X X X X X
Financial/business: X X X X X X X
Healthcare: X X X X X X X X X
Security: X X X Y
Smart mobility, smart cities: X X X X X X
Social marketing: X X X X X X X X X
Social media: X X X X X X X X X X

Note: Y = frequently adopted.
Conclusions
We have entered an era of Big Data. There is the potential for making faster
advances in many scientific disciplines through better analysis of these large
volumes of data and also for improving the profitability of many enterprises.
The need for these new-generation data management tools is being driven by the explosion of Big Data and by the rapidly growing volumes and variety of data that are collected today from alternative sources such as social networks like Twitter and Facebook.
NoSQL Database Management Systems represent a possible solution to these problems; unfortunately, they are not a definitive solution: these tools have a wide range of features that can be further developed to create new products more adaptable to this constantly growing stream of data and to its open challenges, such as error handling, privacy, unexpected correlation detection, trend analysis and prediction, timeliness analysis, and visualization. Considering this latter challenge, it is clear that, in a fast-growing market for maps, charts, and other ways to visually sort through data, larger volumes of data and analytical capabilities become the new coveted features; today, in the "Big Data world," static bar charts and pie charts just do not make sense any more, and more and more companies are demanding
References
Abouzeid A., Bajda-Pawlikowski C., Abadi D., Silberschatz A., Rasin A., HadoopDB:
An architectural hybrid of MapReduce and DBMS technologies for analytical
workloads. Proceedings of the VLDB Endowment, 2(1), 922–933, 2009.
Aiyer A., Bautin M., Jerry Chen G., Damania P., Khemani P., Muthukkaruppan K.,
Ranganathan K., Spiegelberg N., Tang L., Vaidya M., Storage infrastructure
behind Facebook messages using HBase at scale. Bulletin of the IEEE Computer
Society Technical Committee on Data Engineering, 35(2), 4–13, 2012.
AllegroGraph, http://www.franz.com/agraph/allegrograph/
Amazon AWS, http://aws.amazon.com/
Amazon Dynamo, http://aws.amazon.com/dynamodb/
Antoniu G., Bougè L., Thirion B., Poline J.B., AzureBrain: Large-scale Joint Genetic
and Neuroimaging Data Analysis on Azure Clouds, Microsoft Research Inria Joint
Centre, Palaiseau, France, September 2010. http://www.irisa.fr/kerdata/lib/
exe/fetch.php?media=pdf:inria-microsoft.pdf
Apache Cassandra, http://cassandra.apache.org/
Apache HBase, http://hbase.apache.org/
Apache Hive, http://hive.apache.org/
Apache Solr, http://lucene.apache.org/solr/
Baumann P., Dehmel A., Furtado P., Ritsch R., The multidimensional database sys-
tem RasDaMan. SIGMOD’98 Proceedings of the 1998 ACM SIGMOD International
Conference on Management of Data, Seattle, Washington, pp. 575–577, 1998,
ISBN: 0-89791-995-5.
Bellini P., Cenni D., Nesi P., On the effectiveness and optimization of information
retrieval for cross media content, Proceedings of the KDIR 2012 is Part of IC3K 2012,
International Joint Conference on Knowledge Discovery, Knowledge Engineering
and Knowledge Management, Barcelona, Spain, 2012a.
Bellini P., Bruno, I., Cenni, D., Fuzier, A., Nesi, P., Paolucci, M., Mobile medicine:
Semantic computing management for health care applications on desktop and
mobile devices. Multimedia Tools and Applications, Springer, 58(1), 41–79, 2012b.
Domingos P., Mining social networks for viral marketing. IEEE Intelligent Systems,
20(1), 80–82, 2005.
Dykstra D., Comparison of the frontier distributed database caching system to NoSQL
databases, Computing in High Energy and Nuclear Physics (CHEP) Conference,
New York, May 2012.
Eaton C., Deroos D., Deutsch T., Lapis G., Understanding Big Data: Analytics for
Enterprise Class Hadoop and Streaming Data, McGraw Hill Professional, McGraw
Hill, New York, 2012, ISBN: 978-0071790536.
ECLAP, http://www.eclap.eu
Europeana Portal, http://www.europeana.eu/portal/
Everitt B., Landau S., Leese M., Cluster Analysis, 4th edition, Arnold, London, 2001.
Figueireido V., Rodrigues F., Vale Z., An electric energy consumer characteriza-
tion framework based on data mining techniques. IEEE Transactions on Power
Systems, 20(2), 596–602, 2005.
Foster I., Jeffrey M., and Tuecke S. Grid services for distributed system integration,
IEEE Computer, 5(6), 37–46, 2002.
Fox A., Brewer E.A., Harvest, yield, and scalable tolerant systems, Proceedings of the
Seventh Workshop on Hot Topics in Operating Systems, Rio Rico, Arizona, pp. 174–
178, 1999.
Gallego M.A., Fernandez J.D., Martinez-Prieto M.A., De La Fuente P., RDF visual-
ization using a three-dimensional adjacency matrix, 4th International Semantic
Search Workshop (SemSearch), Hyderabad, India, 2011.
Ghosh D., Sharman R., Rao H.R., Upadhyaya S., Self-healing systems—Survey and
synthesis, Decision Support Systems, 42(4), 2164–2185, 2007.
Google Drive, http://drive.google.com
GraphBase, http://graphbase.net/
Gulisano V., Jimenez-Peris R., Patino-Martinez M., Soriente C., Valduriez P., A big
data platform for large scale event processing, ERCIM News, 89, 32–33, 2012.
Hadoop Apache Project, http://hadoop.apache.org/
Hanna M., Data mining in the e-learning domain. Campus-Wide Information Systems,
21(1), 29–34, 2004.
Iaconesi S., Persico O., The co-creation of the city, re-programming cities using
real-time user generated content, 1st Conference on Information Technologies for
Performing Arts, Media Access and Entertainment, Florence, Italy, 2012.
Iannella R., Open digital rights language (ODRL), Version 1.1 W3C Note, 2002,
http://www.w3.org/TR/odrl
Jacobs A., The pathologies of big data. Communications of the ACM—A Blind Person’s
Interaction with Technology, 52(8), 36–44, 2009.
Jagadish H.V., Narayan P.P.S., Seshadri S., Kanneganti R., Sudarshan S., Incremental
organization for data recording and warehousing, Proceedings of the 23rd
International Conference on Very Large Data Bases, Athens, Greece, pp. 16–25,
1997.
Karloff H., Suri S., Vassilvitskii S., A model of computation for MapReduce. Proceedings
of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pp.
938–948, 2010.
LHC, http://public.web.cern.ch/public/en/LHC/LHC-en.html
Liu L., Biderman A., Ratti C., Urban mobility landscape: Real time monitoring of urban
mobility patterns, Proceedings of the 11th International Conference on Computers in
Urban Planning and Urban Management, Hong Kong, June 2009.
Mans R.S., Schonenberg M.H., Song M., Van der Aalst W.M.P., Bakker P.J.M.,
Application of process mining in healthcare—A case study in a Dutch hospital.
Biomedical Engineering Systems and Technologies, Communications in Computer and
Information Science, 25(4), 425–438, 2009.
McHugh J., Widom J., Abiteboul S., Luo Q., Rajaraman A., Indexing semistructured
data, Technical report, Stanford University, California, 1998.
Meier W., eXist: An open source native XML database. Web, Web-Services, and Database
Systems—Lecture Notes in Computer Science, 2593, 169–183, 2003.
Memcached, http://memcached.org/
Microsoft Azure, http://www.windowsazure.com/it-it/
Microsoft, Microsoft private cloud. Tech. rep., 2012.
Mislove A., Gummandi K.P., Druschel P., Exploiting social networks for Internet
search, Record of the Fifth Workshop on Hot Topics in Networks: HotNets V, Irvine,
CA, pp. 79–84, November 2006.
MongoDB, http://www.mongodb.org/
MPEG-21, http://mpeg.chiariglione.org/standards/mpeg-21/mpeg-21.htm
Neo4J, http://neo4j.org/
Norrie M.C., Grossniklaus M., Decurins C., Semantic data management for db4o,
Proceedings of 1st International Conference on Object Databases (ICOODB 2008),
Frankfurt/Main, Germany, pp. 21–38, 2008.
NoSQL DB, http://nosql-database.org/
Obenshain M.K., Application of data mining techniques to healthcare data, Infection
Control and Hospital Epidemiology, 25(8), 690–695, 2004.
Objectivity Platform, http://www.objectivity.com
Olston C., Jiang J., Widom J., Adaptive filters for continuous queries over distributed
data streams, Proceedings of the 2003 ACM SIGMOD International Conference on
Management of Data, pp. 563–574, 2003.
OpenCirrus Project, https://opencirrus.org/
OpenQM Database, http://www.openqm.org/docs/
OpenStack Project, http://www.openstack.org
Oracle Berkeley, http://www.oracle.com/technetwork/products/berkeleydb/
Pierre G., El Helw I., Stratan C., Oprescu A., Kielmann T., Schuett T., Stender J.,
Artac M., Cernivec A., ConPaaS: An integrated runtime environment for elastic
cloud applications, ACM/IFIP/USENIX 12th International Middleware Conference,
Lisboa, Portugal, December 2011.
RDF-HDT Library, http://www.rdfhdt.org
Rivals E., Philippe N., Salson M., Léonard M., Commes T., Lecroq T., A scalable
indexing solution to mine huge genomic sequence collections. ERCIM News,
89, 20–21, 2012.
Rusitschka S., Eger K., Gerdes C., Smart grid data cloud: A model for utilizing cloud
computing in the smart grid domain, 1st IEEE International Conference of Smart
Grid Communications, Gaithersburg, MD, 2010.
Setnes M., Kaymak U., Fuzzy modeling of client preference from large data sets: An
application to target selection in direct marketing. IEEE Transactions on Fuzzy
Systems, 9(1), February 2001.
SCAPE Project, http://scape-project.eu/
Schramm M., Performance of RDF representations, 16th TSConIT, 2012.
Silvestri Ludovico (LENS), Alessandro Bria (UCBM), Leonardo Sacconi (LENS), Anna
Letizia Allegra Mascaro (LENS), Maria Chiara Pettenati (ICON), SanzioBassini
Roberto V. Zicari
Contents
Introduction.......................................................................................................... 104
The Story as it is Told from the Business Perspective..................................... 104
The Story as it is Told from the Technology Perspective................................ 107
Data Challenges............................................................................................... 107
Volume......................................................................................................... 107
Variety, Combining Multiple Data Sets................................................... 108
Velocity......................................................................................................... 108
Veracity, Data Quality, Data Availability................................................. 109
Data Discovery............................................................................................ 109
Quality and Relevance............................................................................... 109
Data Comprehensiveness.......................................................................... 109
Personally Identifiable Information......................................................... 109
Data Dogmatism......................................................................................... 110
Scalability..................................................................................................... 110
Process Challenges.......................................................................................... 110
Management Challenges................................................................................ 110
Big Data Platforms Technology: Current State of the Art......................... 111
Take the Analysis to the Data!.................................................................. 111
What Is Apache Hadoop?.......................................................................... 111
Who Are the Hadoop Users?.................................................................... 112
An Example of an Advanced User: Amazon.......................................... 113
Big Data in Data Warehouse or in Hadoop?........................................... 113
Big Data in the Database World (Early 1980s Till Now)....................... 113
Big Data in the Systems World (Late 1990s Till Now)........................... 113
Enterprise Search........................................................................................ 115
Big Data “Dichotomy”............................................................................... 115
Hadoop and the Cloud.............................................................................. 116
Hadoop Pros................................................................................................ 116
Hadoop Cons.............................................................................................. 116
Technological Solutions for Big Data Analytics.......................................... 118
Scalability and Performance at eBay....................................................... 122
Unstructured Data...................................................................................... 123
Cloud Computing and Open Source....................................................... 123
Introduction
“Big Data is the new gold” (Open Data Initiative)
Every day, 2.5 quintillion bytes of data are created. These data come from
digital pictures, videos, posts to social media sites, intelligent sensors, pur-
chase transaction records, cell phone GPS signals, to name a few. This is
known as Big Data.
There is no doubt that Big Data and especially what we do with it has the
potential to become a driving force for innovation and value creation. In
this chapter, we will look at Big Data from three different perspectives: the
business perspective, the technological perspective, and the social good
perspective.
“Big Data” refers to datasets whose size is beyond the ability of typical
database software tools to capture, store, manage and analyze.
This definition is quite general and open ended, and it captures well the rapid growth of available data; it also shows the need for technology to "catch up" with it. The definition is intentionally not stated in terms of a fixed data size; in fact, data sets will keep increasing in the future! The threshold also obviously varies by sector, ranging from a few dozen terabytes to multiple petabytes (1 petabyte is 1000 terabytes).
• Creating transparencies;
• Discovering needs, exposing variability, and improving performance;
• Segmenting customers; and
• Replacing/supporting human decision-making with automated algorithms; and
• Innovating new business models, products, and services.
• There is no single set formula for extracting value from Big Data; it
will depend on the application.
• There are many applications where simply being able to comb
through large volumes of complex data from multiple sources via
Gorbet gives an example of the result of such Big Data Search: “it was anal-
ysis of social media that revealed that Gatorade is closely associated with flu
and fever, and our ability to drill seamlessly from high-level aggregate data
into the actual source social media posts shows that many people actually
take Gatorade to treat flu symptoms. Geographic visualization shows that
this phenomenon may be regional. Our ability to sift through all this data
in real time, using fresh data gathered from multiple sources, both internal
and external to the organization helps our customers identify new actionable
insights.”
Where will Big Data be used? According to MGI, Big Data can generate financial value across sectors. They identified the following key sectors:
• Health care (this is a very sensitive area, since patient records and, in
general, information related to health are very critical)
• Public sector administration (e.g., in Europe, the Open Data
Initiative—a European Commission initiative which aims at open-
ing up Public Sector Information)
• Global personal location data (this is very relevant given the rise of
mobile devices)
• Retail (this is the most obvious, given the existence of large Web retail shops such as eBay and Amazon)
• Manufacturing
What are examples of Big Data Use Cases? The following is a sample list:
• Log analytics
• Fraud detection
• Social media and sentiment analysis
• Risk modeling and management
• Energy sector
Currently, the key limitations in exploiting Big Data, according to MGI, are
Both limitations reflect the fact that the current underlying technology is
quite difficult to use and understand. As with every new technology, Big Data analytics technology will take time to reach a level of maturity and ease of use for enterprises at large. All the above-mentioned examples of value generated by analyzing Big Data, however, do not take into account the possibility that such derived "values" are negative.
In fact, the analysis of Big Data, if improperly used, also poses issues, specifically in the following areas:
• Access to data
• Data policies
• Industry structure
• Technology and techniques
This is outside the scope of this chapter, but it is for sure one of the most
important nontechnical challenges that Big Data poses.
Data Challenges
Volume
The volume of data, especially machine-generated data, is exploding, and it is remarkable how fast that data is growing every year, with new sources of data emerging. For example, in the year 2000, 800,000 petabytes
(PB) of data were stored in the world, and it is expected to reach 35
zettabytes (ZB) by 2020 (according to IBM).
Variety, Combining Multiple Data Sets
It used to be the case that all the data an organization needed to run
its operations effectively was structured data that was generated
within the organization. Things like customer transaction data,
ERP data, etc. Today, companies are looking to leverage a lot more
data from a wider variety of sources both inside and outside the
organization. Things like documents, contracts, machine data, sen-
sor data, social media, health records, emails, etc. The list is endless
really.
A lot of this data is unstructured, or has a complex structure that’s
hard to represent in rows and columns. And organizations want to
be able to combine all this data and analyze it together in new ways.
For example, we have more than one customer in different industries
whose applications combine geospatial vessel location data with
weather and news data to make real-time mission-critical decisions.
Data come from sensors, smart devices, and social collaboration tech-
nologies. Data are not only structured, but raw, semistructured,
unstructured data from web pages, web log files (click stream data),
search indexes, e-mails, documents, sensor data, etc.
Workloads on semistructured Web data, such as A/B testing, sessionization, bot detection, and pathing analysis, all require powerful analytics on many petabytes of data.
Velocity
Shilpa Lawande of Vertica defines this challenge nicely [4]: “as busi-
nesses get more value out of analytics, it creates a success problem—
they want the data available faster, or in other words, want real-time
analytics.
And they want more people to have access to it, or in other words, high
user volumes.”
One of the key challenges is how to react to the flood of information in the
time required by the application.
Data Discovery
This is a huge challenge: how to find high-quality data from the vast collec-
tions of data that are out there on the Web.
Data Comprehensiveness
Are there areas without coverage? What are the implications?
Data Dogmatism
Analysis of Big Data can offer quite remarkable insights, but we must be
wary of becoming too beholden to the numbers. Domain experts—and com-
mon sense—must continue to play a role.
For example, “It would be worrying if the healthcare sector only responded
to flu outbreaks when Google Flu Trends told them to.” (Paul Miller [5])
Scalability
Shilpa Lawande explains [4]: “techniques like social graph analysis, for
instance leveraging the influencers in a social network to create better
user experience are hard problems to solve at scale. All of these problems
combined create a perfect storm of challenges and opportunities to create
faster, cheaper and better solutions for Big Data analytics than traditional
approaches can solve.”
Process Challenges
“It can take significant exploration to find the right model for analysis, and
the ability to iterate very quickly and ‘fail fast’ through many (possible throw
away) models—at scale—is critical.” (Shilpa Lawande)
According to Laura Haas (IBM Research), process challenges with deriv-
ing insights include [5]:
• Capturing data
• Aligning data from different sources (e.g., resolving when two
objects are the same)
• Transforming the data into a form suitable for analysis
• Modeling it, whether mathematically, or through some form of
simulation
• Understanding the output, visualizing and sharing the results; think for a second about how to display complex analytics on an iPhone or a mobile device
Management Challenges
“Many data warehouses contain sensitive data such as personal data. There
are legal and ethical concerns with accessing such data.
So the data must be secured and access controlled as well as logged for
audits.” (Michael Blaha)
The main management challenges are
• Data privacy
• Security
• Governance
• Ethical
The challenges are: Ensuring that data are used correctly (abiding by its
intended uses and relevant laws), tracking how the data are used, trans-
formed, derived, etc., and managing its lifecycle.
This confirms Gray’s Laws of Data Engineering, adapted here to Big Data:
• SQL → SQL Compiler
• Relational Dataflow Layer (runs the query plans, orchestrate the local
storage managers, deliver partitioned, shared-nothing storage ser-
vices for large relational tables)
• Row/Column Storage Manager (record-oriented: made up of a set of
row-oriented or column-oriented storage managers per machine in a
cluster)
Note: no open-source parallel database exists! SQL is the only way into the
system architecture. Systems are monolithic: Cannot safely cut into them to
access inner functionalities.
The Hadoop software stack comprises (Michael J. Carey):
Note: all tools are open-source! No SQL. Systems are not monolithic: Can
safely cut into them to access inner functionalities.
A key requirement when handling Big Data is scalability.
Scalability has three aspects
• data volume
• hardware size
• concurrency
What is the trade-off between scaling out and scaling up? What does it mean
in practice for an application domain?
Chris Anderson of Couchdb explains [11]: “scaling up is easier from a soft-
ware perspective. It’s essentially the Moore’s Law approach to scaling—buy
a bigger box. Well, eventually you run out of bigger boxes to buy, and then
you’ve run off the edge of a cliff. You’ve got to pray Moore keeps up.
Scaling out means being able to add independent nodes to a system. This
is the real business case for NoSQL. Instead of being hostage to Moore’s Law,
you can grow as fast as your data. Another advantage to adding independent
nodes is you have more options when it comes to matching your workload.
You have more flexibility when you are running on commodity hardware—
you can run on SSDs or high-compute instances, in the cloud, or inside your
firewall.”
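Scaling out by adding independent nodes is often made incremental with techniques such as consistent hashing, sketched below (node and key names are hypothetical): adding a node remaps only the keys that fall on its arc of the ring, rather than rehashing the whole data set.

```python
# Minimal consistent-hash ring showing how few keys move when a node is added.
import bisect
import hashlib

def h(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        keys = [hv for hv, _ in self.ring]
        i = bisect.bisect(keys, h(key)) % len(self.ring)
        return self.ring[i][1]

before = Ring(["node-a", "node-b", "node-c"])
after = Ring(["node-a", "node-b", "node-c", "node-d"])   # scale out by one node
moved = sum(before.node_for(f"key{i}") != after.node_for(f"key{i}") for i in range(10000))
print(f"{moved / 10000:.0%} of keys moved")              # roughly one quarter, not all
```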
Enterprise Search
Enterprise Search implies being able to search multiple types of data gener-
ated by an enterprise. There are two alternatives: Apache Solr or implement-
ing a proprietary full-text search engine.
There is an ecosystem of open source tools that build on Apache Solr.
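A minimal sketch of querying an Apache Solr core through its HTTP select handler is shown below; the core name "documents" and the field names are hypothetical, and a Solr instance running on the default port is assumed.

```python
# Querying a Solr core over HTTP with the standard select handler.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "q": 'title:"big data" AND type:contract',   # hypothetical fields
    "rows": 10,
    "wt": "json",
})
url = f"http://localhost:8983/solr/documents/select?{params}"
with urllib.request.urlopen(url) as resp:
    results = json.load(resp)
for doc in results["response"]["docs"]:
    print(doc.get("id"), doc.get("title"))
```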
There are concerns about performance issues that arise along with the
transfer of large amounts of data between the two systems. The use of con-
nectors could introduce delays and data silos, and increase Total Cost of
Ownership (TCO).
Daniel Abadi of Hadapt says [10]: “this is a highly undesirable architecture,
since now you have two systems to maintain, two systems where data may be
stored, and if you want to do analysis involving data in both systems, you end
up having to send data over the network which can be a major bottleneck.”
Big Data is not (only) Hadoop.
“Some people even think that ‘Hadoop’ and ‘Big Data’ are synonymous
(though this is an over-characterization). Unfortunately, Hadoop was
designed based on a paper by Google in 2004 which was focused on use
cases involving unstructured data (e.g., extracting words and phrases from
Web pages in order to create Google’s Web index). Since it was not origi-
nally designed to leverage the structure in relational data in order to take
Hadoop Pros
• Open source.
• Nonmonolithic support for access to file-based external data.
• Support for automatic and incremental forward-recovery of jobs
with failed task.
• Ability to schedule very large jobs in smaller chunks.
• Automatic data placement and rebalancing as data grows and
machines come and go.
• Support for replication and machine fail-over without operation
intervention.
• The combination of scale, ability to process unstructured data along
with the availability of machine learning algorithms, and recom-
mendation engines create the opportunity to build new game chang-
ing applications.
• Does not require a schema first.
• Provides a great tool for exploratory analysis of the data, as long as
you have the software development expertise to write MapReduce
programs.
Hadoop Cons
• Hadoop is difficult to use.
• Can give powerful analysis, but it is fundamentally a batch-oriented
paradigm. The missing piece of the Hadoop puzzle is accounting for
real-time changes.
Daniel Abadi: “A lot of people are using Hadoop as a sort of data refinery.
Data starts off unstructured, and Hadoop jobs are run to clean, transform,
and structure the data. Once the data is structured, it is shipped to SQL
databases where it can be subsequently analyzed. This leads to the raw data
being left in Hadoop and the refined data in the SQL databases. But it’s basi-
cally the same data—one is just a cleaned (and potentially aggregated) ver-
sion of the other. Having multiple copies of the data can lead to all kinds of
problems. For example, let’s say you want to update the data in one of the
two locations—it does not get automatically propagated to the copy in the
other silo. Furthermore, let’s say you are doing some analysis in the SQL
database and you see something interesting and want to drill down to the
raw data—if the raw data is located on a different system, such a drill down
With this solution, a NoSQL data store is used as a front end to process selected data in real time, with Hadoop in the back end processing Big Data in batch mode.
“In my opinion the primary interface will be via the real time store,
and the Hadoop layer will become a commodity. That is why there is
so much competition for the NoSQL brass ring right now” says J. Chris
Anderson of Couchbase (a NoSQL datastore).
Another approach is to use a NewSQL data store designed for Big Data
Analytics, such as HP/Vertica. Quoting Shilpa Lawande [4] “Vertica was
designed from the ground up for analytics.” Vertica is a columnar database
engine including sorted columnar storage, a query optimizer, and an execu-
tion engine, providing standard ACID transaction semantics on loads and
queries.
With sorted columnar storage, there are two methods that drastically
reduce the I/O bandwidth requirements for such Big Data analytics work-
loads. The first is that Vertica only reads the columns that queries need.
Second, Vertica compresses the data significantly better than anyone else.
Vertica’s execution engine is optimized for modern multicore processors
and we ensure that data stays compressed as much as possible through
the query execution, thereby reducing the CPU cycles to process the query.
Additionally, we have a scale-out MPP architecture, which means you can
add more nodes to Vertica.
All of these elements are extremely critical to handle the data volume chal-
lenge. With Vertica, customers can load several terabytes of data quickly (per
hour in fact) and query their data within minutes of it being loaded—that is
real-time analytics on Big Data for you.
There is a myth that columnar databases are slow to load. This may have
been true with older generation column stores, but in Vertica, we have a
hybrid in-memory/disk load architecture that rapidly ingests incoming data
into a write-optimized row store and then converts that to read-optimized
sorted columnar storage in the background. This is entirely transparent to
the user because queries can access data in both locations seamlessly. We
have a very lightweight transaction implementation with snapshot isolation; queries can always run without any locks.
And we have no auxiliary data structures, like indices or material-
ized views, which need to be maintained postload. Last, but not least, we
designed the system for “always on,” with built-in high availability features.
Operations that translate into downtime in traditional databases are online
in Vertica, including adding or upgrading nodes, adding or modifying data-
base objects, etc. With Vertica, we have removed many of the barriers to mon-
etizing Big Data and hope to continue to do so.
“Vertica and Hadoop are both systems that can store and analyze large
amounts of data on commodity hardware. The main differences are how the
data get in and out, how fast the system can perform, and what transaction
guarantees are provided. Also, from the standpoint of data access, Vertica’s
interface is SQL and data must be designed and loaded into an SQL schema
for analysis. With Hadoop, data is loaded AS IS into a distributed file sys-
tem and accessed programmatically by writing Map-Reduce programs.”
(Shilpa Lawande [4])
A NewSQL Data Store for OLTP (VoltDB) Connected with Hadoop or a Data
Warehouse
With this solution, a fast NewSQL data store designed for OLTP (VoltDB) is
connected to either a conventional data warehouse or Hadoop.
“We identified 4 sources of significant OLTP overhead (concurrency con-
trol, write-ahead logging, latching and buffer pool management).
Unless you make a big dent in ALL FOUR of these sources, you will not
run dramatically faster than current disk-based RDBMSs. To the best of my
knowledge, VoltDB is the only system that eliminates or drastically reduces
all four of these overhead components. For example, TimesTen uses conven-
tional record level locking, an Aries-style write ahead log and conventional
multi-threading, leading to substantial need for latching. Hence, they elimi-
nate only one of the four sources.
VoltDB is not focused on analytics. We believe they should be run on a
companion data warehouse. Most of the warehouse customers I talk to want
to keep increasingly large amounts of increasingly diverse history to run their
analytics over. The major data warehouse players are routinely being asked
to manage petabyte-sized data warehouses. VoltDB is intended for the OLTP
portion, and some customers wish to run Hadoop as a data warehouse plat-
form. To facilitate this architecture, VoltDB offers a Hadoop connector.
VoltDB supports standard SQL. Complex joins should be run on a com-
panion data warehouse. After all, the only way to interleave ‘big reads’
with ‘small writes’ in a legacy RDBMS is to use snapshot isolation or run
with a reduced level of consistency. You either get an out-of-date, but con-
sistent answer or an up-to-date, but inconsistent answer. Directing big
reads to a companion DW, gives you the same result as snapshot isolation.
Hence, I do not see any disadvantage to doing big reads on a companion
system.
Concerning larger amounts of data, our experience is that OLTP problems
with more than a few Tbyte of data are quite rare. Hence, these can easily fit
in main memory, using a VoltDB architecture.
In addition, we are planning extensions of the VoltDB architecture to han-
dle larger-than-main-memory data sets.” (Mike Stonebraker [13])
The main technical challenges for Big Data analytics at eBay are:
• EDW: models for the unknown (close to third NF) to provide a solid
physical data model suitable for many applications, which limits
the number of physical copies needed to satisfy specific application
requirements.
A lot of scalability and performance is built into the database, but as with any
shared resource, it requires an excellent operations team to fully leverage
the capabilities of the platform.
But since they are leveraging the latest database release, they are exploring
ways to adopt new storage and processing patterns. Some new data sources
are stored in a denormalized form, which significantly simplifies data model-
ing and ETL. On top of that, they developed functions to support the analysis
of semistructured data. This also enables more sophisticated algorithms that
would be very hard, inefficient, or impossible to implement in pure SQL.
One example is the pathing of user sessions. However, the size of the data
requires them to focus more on best practices (develop on small subsets, use
a 1% sample; process by day).
Unstructured Data
Unstructured data are handled on Hadoop only. The data are copied from
the source systems into HDFS for further processing. They do not store any
of that on the Singularity (Teradata) system.
Use of Data management technologies:
today, and how researchers and policy-makers are beginning to realize the
potential for leveraging Big Data to extract insights that can be used for
Good, in particular, for the benefit of low-income populations.
“A flood of data is created every day by the interactions of billions of peo-
ple using computers, GPS devices, cell phones, and medical devices. Many
of these interactions occur through the use of mobile devices being used by
people in the developing world, people whose needs and habits have been
poorly understood until now.
Researchers and policymakers are beginning to realize the potential for
channeling these torrents of data into actionable information that can be
used to identify needs, provide services, and predict and prevent crises for
the benefit of low-income populations. Concerted action is needed by gov-
ernments, development organizations, and companies to ensure that this
data helps the individuals and communities who create it.”
Three examples are cited in the WEF paper:
“All our activities in our lives can be looked at from different perspec-
tives and within various contexts: our individual view, the view of our
families and friends, the view of our company and finally the view of
society—the view of the world. Which perspective means what to us
is not always clear, and it can also change over the course of time. This
might be one of the reasons why our life sometimes seems unbalanced.
We often talk about work-life balance, but maybe it is rather an imbal-
ance between the amount of energy we invest into different elements of
our life and their meaning to us.”
—Eran Davidson, CEO Hasso Plattner Ventures
Acknowledgments
I would like to thank Michael Blaha, Rick Cattell, Michael Carey, Akmal
Chaudhri, Tom Fastner, Laura Haas, Alon Halevy, Volker Markl, Dave
Thomas, Duncan Ross, Cindy Saracco, Justin Sheehy, Mike O'Sullivan, Martin
Verlage, and Steve Vinoski for their feedback on an earlier draft of this chapter.
But all errors and missing information are mine.
References
1. McKinsey Global Institute (MGI), Big Data: The next frontier for innovation,
competition, and productivity, Report, June, 2012.
2. Managing Big Data. An interview with David Gorbet ODBMS Industry Watch,
July 2, 2012. http://www.odbms.org/blog/2012/07/managing-big-data-an-
interview-with-david-gorbet/
3. On Big Data: Interview with Dr. Werner Vogels, CTO and VP of Amazon.
com. ODBMS Industry Watch, November 2, 2011. http://www.odbms.org/
blog/2011/11/on-big-data-interview-with-dr-werner-vogels-cto-and-vp-of-
amazon-com/
4. On Big Data: Interview with Shilpa Lawande, VP of Engineering at Vertica.
ODBMS Industry Watch, November 16, 2011.
5. “Big Data for Good”, Roger Barca, Laura Haas, Alon Halevy, Paul Miller,
Roberto V. Zicari. ODBMS Industry Watch, June 5, 2012.
6. On Big Data Analytics: Interview with Florian Waas, EMC/Greenplum. ODBMS
Industry Watch, February 1, 2012.
7. Next generation Hadoop—interview with John Schroeder. ODBMS Industry
Watch, September 7, 2012.
8. Michael J. Carey, EDBT keynote 2012, Berlin.
9. Marc Geall, “Big Data Myth”, Deutsche Bank Report 2012.
10. On Big Data, Analytics and Hadoop. Interview with Daniel Abadi. ODBMS
Industry Watch, December 5, 2012.
11. Hadoop and NoSQL: Interview with J. Chris Anderson. ODBMS Industry Watch,
September 19, 2012.
12. Analytics at eBay. An interview with Tom Fastner. ODBMS Industry Watch,
October 6, 2011.
13. Interview with Mike Stonebraker. ODBMS Industry Watch, May 2, 2012.
Links:
ODBMS.org www.odbms.org
ODBMS Industry Watch, www.odbms.org/blog
Section II
Semantic Technologies
and Big Data
4
Management of Big Semantic Data
Contents
Big Data................................................................................................................. 133
What Is Semantic Data?....................................................................................... 135
Describing Semantic Data.............................................................................. 135
Querying Semantic Data................................................................................ 136
Web of (Linked) Data........................................................................................... 137
Linked Data...................................................................................................... 138
Linked Open Data........................................................................................... 139
Stakeholders and Processes in Big Semantic Data.......................................... 140
Participants and Witnesses............................................................................ 141
Workflow of Publication-Exchange-Consumption.................................... 144
State of the Art for Publication-Exchange-Consumption.......................... 146
An Integrated Solution for Managing Big Semantic Data............................. 148
Encoding Big Semantic Data: HDT............................................................... 149
Querying HDT-Encoded Data Sets: HDT-FoQ........................................... 154
Experimental Results........................................................................................... 156
Publication Performance................................................................................ 157
Exchange Performance................................................................................... 159
Consumption Performance............................................................................ 159
Conclusions and Next Steps............................................................................... 162
Acknowledgments............................................................................................... 163
References.............................................................................................................. 164
In 2007, Jim Gray preached about the effects of the Data deluge in the sciences
(Hey et al. 2009). While experimental and theoretical paradigms originally
led science, some natural phenomena were not easily addressed by analyti-
cal models. In this scenario, computational simulation arose as a new para-
digm enabling scientists to deal with these complex phenomena. Simulation
produced increasing amounts of data, particularly from the use of advanced
exploration instruments (large-scale telescopes, particle colliders, etc.). In this
scenario, scientists were no longer interacting directly with the phenomena,
Big Data
Much has been said and written these days about Big Data. News in rel-
evant magazines (Cukier 2010; Dumbill 2012b; Lohr 2012), technical reports
(Selg 2012) and white papers from leading enterprises (Dijcks 2012), some
To fulfill these goals, the Semantic Web community and the World Wide Web
Consortium (W3C)* have developed (i) models and languages for represent-
ing the semantics and (ii) protocols and languages for querying it. We will
briefly describe them in the next items.
a. The pattern matching part, which includes the most basic features of
graph pattern matching, such as optional parts, union of patterns,
nesting, filtering values of possible matchings, and the possibility of
choosing the data source to be matched by a pattern.
b. The solution modifiers which, once the output of the pattern has been
computed (in the form of a table of values of variables), allow these
values to be modified by applying standard classical operators such as
projection, distinct, order, and limit.
c. Finally, the output of a SPARQL query comes in three forms: (1) yes/no
answers (ASK queries); (2) selections of values of the variables
matching the patterns (SELECT queries); and (3) construction
of new RDF data from these values, and descriptions of resources
(CONSTRUCT queries). A small example query illustrating these
parts is sketched after this list.
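The following is a minimal sketch of these three parts using the rdflib Python
library; the tiny data set, the example.org vocabulary, and the variable names
are illustrative assumptions, not part of any standard vocabulary.

from rdflib import Graph

TURTLE = """
@prefix ex: <http://example.org/> .
ex:alice ex:name "Alice" ; ex:knows ex:bob .
ex:bob   ex:name "Bob" .
"""

QUERY = """
PREFIX ex: <http://example.org/>
SELECT ?name                              # (c) output form: a SELECT query
WHERE {
  ?person ex:name ?name .                 # (a) basic graph pattern
  OPTIONAL { ?person ex:knows ?friend }   # (a) optional part
  FILTER (?name != "Bob")                 # (a) filtering possible matchings
}
ORDER BY ?name LIMIT 10                   # (b) solution modifiers
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")
for row in g.query(QUERY):
    print(row[0])   # prints "Alice"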
Linked Data
The Linked Data project* originated in leveraging the practice of linking data
to the semantic level, following the ideas of Berners-Lee (2006). Its authors
state that:
Linked Data is about using the WWW to connect related data that wasn’t
previously linked, or using the WWW to lower the barriers to linking
data currently linked using other methods. More specifically, Wikipedia
defines Linked Data as “a term used to describe a recommended best
practice for exposing, sharing, and connecting pieces of data, infor-
mation, and knowledge on the Semantic Web using URIs (Uniform
Resource Identifiers) and RDF.”
1. Use URIs as names for things. This rule enables each possible real-
world entity or its relationships to be unequivocally identified at
universal scale. This simple decision guarantees that any raw data
has its own identity in the global space of the Web of Data.
2. Use HTTP URIs so that people can look up those names. This decision
leverages HTTP to retrieve all data related to a given URI.
3. When someone looks up a URI, provide useful information, using stan-
dards. This standardizes processes in the Web of Data and settles the
languages spoken by stakeholders. RDF and SPARQL, together with the
semantic technologies described in the previous section, define the
standards mainly used in the Web of Data.
4. Include links to other URIs. It materializes the aim of data integration
by simply adding new RDF triples which link data from two dif-
ferent data sets. This inter-data set linkage enables automatic
browsing (a small illustration follows this list).
* http://www.linkeddata.org
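As a hedged illustration of these four rules, the snippet below (Python with
rdflib, the same assumption as before) describes a resource with an HTTP URI
and links it to a second data set; the example.org URIs are invented, and the
DBpedia URI is used only to exemplify a cross-data set link.

from rdflib import Graph

DATA = """
@prefix ex:   <http://example.org/resource/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Rules 1-2: an HTTP URI names the entity; Rule 3: RDF describes it.
ex:Berlin rdfs:label "Berlin" ;
          owl:sameAs <http://dbpedia.org/resource/Berlin> .  # Rule 4: link out
"""

g = Graph()
g.parse(data=DATA, format="turtle")
for s, p, o in g:
    print(s, p, o)   # two triples: a label and an inter-data set link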
These four rules provide the basics for publishing and integrating Big
Semantic Data into the global space of the Web of Data. They enable raw data
to be simply encoded by combining the RDF model and URI-based identi-
fication, both for entities and for their relationships, adequately labeled using
rich semantic vocabularies. Berners-Lee (2002) expresses the Linked Data
relevance as follows:
Linked Data allows different things in different data sets of all kinds to be con-
nected. The added value of putting data on the WWW is given by the way it
can be queried in combination with other data you might not even be aware
of. People will be connecting scientific data, community data, social web
data, enterprise data, and government data from other agencies and organi-
zations, and other countries, to ask questions not asked before.
Linked data is decentralized. Each agency can source its own data without a big
cumbersome centralized system. The data can be stitched together at the edges,
more as one builds a quilt than the way one builds a nuclear power station.
A virtuous circle. There are many organizations and companies which will
be motivated by the presence of the data to provide all kinds of human access
to this data, for specific communities, to answer specific questions, often in
connection with data from different sites.
The project and further information about linked data can be found in
Bizer et al. (2009) and Heath and Bizer (2011).
The LOD cloud has grown significantly since its origins in May 2007.* The first
report pointed out that 12 data sets were part of this cloud, 45 were acknowledged
in September 2008, 95 data sets in 2009, 203 in 2010, and 295 different data sets
* http://richard.cyganiak.de/2007/10/lod/
in the last estimation (September 2011). These last statistics* point out that more
than 31 billion triples are currently published and more than 500 million links
establish cross-relations between data sets. Government data are predominant
in LOD, but other fields such as geography, life sciences, media, or publications
are also strongly represented. It is worth emphasizing the existence of many
cross-domain data sets comprising data from several diverse fields. These tend
to be hubs because they provide data that may be linked from and to the vast
majority of specific data sets. DBpedia† is considered the nucleus of the LOD
cloud (Auer et al. 2007). In short, DBpedia gathers the raw data underlying the
Wikipedia web pages and exposes the resulting representation following the
Linked Data rules. It is an interesting example of Big Semantic Data, and its
management is considered within our experiments.
* http://www4.wiwiss.fu-berlin.de/lodcloud/state/
† http://dbpedia.org
first establish a simple set of stakeholders in Big Semantic Data, from where
we define a common data workflow in order to better understand the main
processes performed in the Web of Data.
Figure 4.1
Stakeholder classification in Big Semantic Data management: creators (creation
from scratch, conversion from another data format, data integration from
existing content) and consumers (direct consumption, intensive consumer
processing, composition of data).
Creator: one that generates a new RDF data set by, at least, one of these
processes:
• Creation from scratch: the novel data set is not based on a previous
model. Even if the data exist beforehand, the data modeling process
is independent of the previous data format. RDF authoring tools* are
traditionally used.
• Conversion from other data format: the creation phase is highly deter-
mined by the conversion of the original data source; potential map-
pings between source and target data could be used; for example, from
relational databases (Arenas et al. 2012), as well as (semi-)automatic
conversion tools.†
• Data integration from existing content: the focus moves to an efficient
integration of vocabularies and the validation of shared entities
(Knoblock et al. 2012).
Several tasks are shared among all three processes. Some examples of these
commonalities are the identification of the entities to be modeled (but this
Workflow of Publication-Exchange-Consumption
The previous RFID network example shows the enormous diversity of pro-
cesses and different concerns for each type of stakeholder. In what follows,
we will consider the creation step out of the scope of this work, because our
approach relies on the existence of big RDF data sets (without dismissing those
that may be created hereafter). We focus on tasks involving large-scale
management; for instance, scalability issues of visual authoring a big RDF data
set are comparable to RDF visualization by consumers, or the performance of
RDF data integration from existing content depends on efficient access to the
data and thus existing indexes, a crucial issue also for query response.
Management processes for publishers and consumers are diverse and
complex to generalize. However, it is worth characterizing a common work-
flow present in almost every application in the Web of Data in order to place
Figure 4.2
Publication-Exchange-Consumption workflow in the Web of Data.
* http://sindice.com/
ability adding interesting extra features, for example, abbreviated RDF data
sets. RDF/JSON (Alexander 2008) has the advantage of being coded in a lan-
guage easier to parse and more widely accepted in the programming world.
Although all these formats present features to “abbreviate” constructions,
they are still dominated by a document-centric and human-readable view
which adds an unnecessary overhead to the final data set representation.
In order to reduce exchange costs and delays on the network, universal
compressors (e.g., gzip) are commonly used over these plain formats. In
addition, specific interchange oriented representations may also be used. For
instance, the Efficient XML Interchange Format: EXI (Schneider and Kamiya
2011) may be used for representing any valid RDF/XML data set.
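The snippet below is a small sketch of this common practice using only the
Python standard library; the sample triples are made up, and the resulting
sizes merely illustrate how redundant plain serializations compress.

import gzip, lzma

ntriples = (
    '<http://example.org/s1> <http://example.org/p> "value 1" .\n'
    '<http://example.org/s2> <http://example.org/p> "value 2" .\n'
) * 1000   # repeat to expose the redundancy typical of plain RDF dumps

raw = ntriples.encode("utf-8")
print("plain N-Triples:", len(raw), "bytes")
print("gzip           :", len(gzip.compress(raw)), "bytes")
print("lzma           :", len(lzma.compress(raw)), "bytes")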
Efficient RDF Consumption: the aforementioned variety of consumer tasks
makes it difficult to achieve a one-size-fits-all technique. However, some general con-
cerns can be outlined. In most scenarios, the performance is influenced by
(i) the serialization format, due to the overall data exchange time, and (ii) the
RDF indexing/querying structure. In the first case, if a compressed RDF has
been exchanged, a previous decompression must be done. In this sense, the
serialization format affects the consumption through the transmission cost,
but also through the ease of parsing. The latter factor affects the consump-
tion process in different ways:
be directly resolved in the corresponding index and (ii) the first join step
to be resolved through fast merge-join. Although it achieves a global com-
petitive performance, the index replication largely increases spatial require-
ments. Other solutions take advantage of structural properties of the data
model (Tran et al. 2012), introduce specific graph compression techniques
(Atre et al. 2010; Álvarez-García et al. 2011), or use distributed nodes within a
MapReduce infrastructure (Urbani et al. 2010).
its content, even before retrieving the whole data set. It enhances the VoID
Vocabulary (Alexander et al. 2009) to provide a standardized binary data set
description in which some additional HDT-specific properties are appended.*
The Header component comprises four distinct sections:
Since RDF enables data integration at any level, the Header component
ensures that HDT-encoded data sets are not isolated and can be intercon-
nected. For instance, it is a great tool for query syndication. A syndicated
query engine could maintain a catalog composed of the Headers of different
HDT-encoded data sets from many publishers and use it to know where to
find more data about a specific subject. Then, at query time, the syndicated
query engine can either use the remote SPARQL endpoint to query directly
the third-party server or even download the whole data set and save it in a
local cache. Thanks to the compact size of HDT-encoded data sets, both the
transmission and storage costs are highly reduced.
* http://www.w3.org/Submission/2011/SUBM-HDT-Extending-VoID-20110330/
† http://dublincore.org/
Figure 4.3
HDT dictionary organization into four sections.
For instance, these advanced operations are very convenient when serv-
ing query suggestions to the user, or when evaluating SPARQL queries that
include REGEX filters.
We suggest a Front-Coding (Witten et al. 1999) based representation as the
simplest way of dictionary encoding. It has been successfully used in
many WWW-based applications involving URL management. It is a very sim-
ple yet effective technique based on differential compression. This technique
applies to lexicographically sorted dictionaries by dividing them into buckets
of b terms. By tweaking this bucket size, different space/time trade-offs can
be achieved. The first term in the bucket is explicitly stored and the remain-
ing b − 1 ones are encoded with respect to their precedent: the common prefix
length is first encoded and the remaining suffix is appended. More technical
details about these dictionaries are available in Brisaboa et al. (2011).
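A minimal sketch of this bucket-based scheme, written in plain Python under
the assumption of an already sorted dictionary, may make the encoding
concrete; the URIs are invented and no attempt is made at the bit-level packing
a real implementation would use.

import os

def front_code(sorted_terms, b=4):
    """Encode a lexicographically sorted dictionary in buckets of b terms."""
    buckets = []
    for i in range(0, len(sorted_terms), b):
        chunk = sorted_terms[i:i + b]
        head, rest, prev = chunk[0], [], chunk[0]
        for term in chunk[1:]:
            lcp = len(os.path.commonprefix([prev, term]))  # shared prefix length
            rest.append((lcp, term[lcp:]))                 # (prefix length, suffix)
            prev = term
        buckets.append((head, rest))
    return buckets

def decode_bucket(head, rest):
    """Rebuild the terms of one bucket from its differential encoding."""
    terms, prev = [head], head
    for lcp, suffix in rest:
        prev = prev[:lcp] + suffix
        terms.append(prev)
    return terms

terms = sorted(["http://example.org/A", "http://example.org/AB",
                "http://example.org/AC", "http://example.org/B"])
encoded = front_code(terms, b=2)
decoded = [t for head, rest in encoded for t in decode_bucket(head, rest)]
assert decoded == terms

Smaller buckets speed up random access (fewer terms to decode per lookup)
at the cost of storing more explicit heads, which is the space/time trade-off
mentioned above.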
The work of Martínez-Prieto et al. (2012b) surveys the problem of encoding
compact RDF dictionaries. It reports that Front-Coding achieves a good perfor-
mance for a general scenario, but more advanced techniques can achieve bet-
ter compression ratios and/or handle directly complex operations. In any case,
HDT is flexible enough to support any of these techniques, allowing stake-
holders to decide which configuration is better for their specific purposes.
Triples. As stated, the Dictionary component allows spatial savings to be
achieved, but it also enables RDF triples to be compactly encoded, represent-
ing tuples of three IDs that refer to the corresponding terms in the Dictionary.
Thus, our original RDF graph is now transformed into a graph of IDs whose
encoding can be carried out in a more optimized way.
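As a hedged sketch of this ID-based view, the toy example below maps each
term to an integer and rewrites the triples as ID tuples; the single ID space
per role is a simplification of the four-section dictionary of Figure 4.3.

triples = [
    ("ex:s1", "ex:p1", "ex:o1"),
    ("ex:s1", "ex:p2", "ex:o2"),
    ("ex:s2", "ex:p1", "ex:o1"),
]

def build_ids(values):
    """Toy dictionary: assign consecutive IDs to lexicographically sorted terms."""
    return {term: i + 1 for i, term in enumerate(sorted(values))}

subjects   = build_ids({s for s, _, _ in triples})
predicates = build_ids({p for _, p, _ in triples})
objects    = build_ids({o for _, _, o in triples})

id_triples = sorted((subjects[s], predicates[p], objects[o]) for s, p, o in triples)
print(id_triples)   # [(1, 1, 1), (1, 2, 2), (2, 1, 1)]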
We devise a Triples encoding that organizes the information internally
in a way that exploits graph redundancy to keep data compact. Moreover,
this encoding can be easily mapped into a data structure that allows basic
retrieval operations to be performed efficiently.
Triple patterns are the SPARQL query atoms for basic RDF retrieval. That
is, all triples matching a template (s, p, o) (where s, p, and o may be variables)
must be directly retrieved from the Triples encoding. For instance, in the
geographic data set Geonames,* the triple pattern below searches all the sub-
jects whose feature code (the predicate) is “P” (the object), a shortcode for
“country.” In other words, it asks about all the URIs representing countries:
?subject <http://www.geonames.org/ontology#featureCode>
<http://www.geonames.org/ontology#P>
Thus, the Triples component must be able to retrieve the subject of all those
triples matching this pair of predicate and object.
* http://www.geonames.org
Each triple in the data set is now represented as a full root-to-leaf path in
the corresponding tree. This simple reorganization reveals many interesting
features.
• The subject can be implicitly encoded given that the trees are sorted
by subject and we know the total number of trees. Thus, BT does not
perform a triples encoding, but it represents pairs (predicate, object).
This is an obvious spatial saving.
• Predicates are sorted within each tree. This is very similar to a well-
known problem: posting list encoding for Information Retrieval
(Witten et al. 1999; Baeza-Yates and Ribeiro-Neto 2011). This allows
applying many existing and optimized techniques to our problem.
Figure 4.4
Description of Bitmap Triples.
BT encodes the Triples component level by level. That is, predicate and
object levels are encoded in isolation. Two structures are used for predicates:
(i) an ID sequence (Sp) concatenates predicate lists following the tree order-
ing; (ii) a bitsequence (Bp) uses one bit per element in Sp: 1 bits mean that
this predicate is the first one for a given tree, whereas 0 bits are used for
the remaining predicates. Object encoding is performed in a similar way:
So concatenates object lists, and Bo tags each position in such a way that 1 bits
represent the first object in a path, and 0 bits the remaining ones. The right
part of Figure 4.4 illustrates all these sequences for the given example.
until the end (because this is the last 1 bit in Bp). Thus, the
predicate list is {5, 6, 7}.
2. The predicate 6 is searched in the list. We binary search it and
find that it is the second element in the list. Thus, it is at position
P2 + 2 − 1 = 3 + 2 − 1 = 4 in Sp so we are traversing the 4th path of the
forest.
3. We retrieve the corresponding object list. It is the 4th one in So. We
obtain it as before: firstly locate the fourth 1 bit in Bo:O4 = 4 and then
retrieve all objects until the next 1 bit. That is, the list comprises the
objects {1, 3}.
4. Finally, the object list is binary searched and locates the object 3 in
its first position. Thus, we are sure that the triple (2, 6, 1) exists in the
data set.
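The walk-through above can be traced with the small Python sketch below,
which uses 1-based positions; So and Bo follow Figure 4.4, while Sp and Bp are
assumed values chosen so that subject 2 owns the predicate list {5, 6, 7} used
in the example.

from bisect import bisect_left

Sp = [2, 3, 5, 6, 7]      # predicate lists concatenated in subject order (assumed)
Bp = [1, 0, 1, 0, 0]      # 1 marks the first predicate of each subject tree (assumed)
So = [2, 4, 4, 1, 3, 4]   # object lists concatenated in path order (as in Figure 4.4)
Bo = [1, 1, 1, 1, 0, 1]   # 1 marks the first object of each (subject, predicate) path

def select1(bits, k):
    """1-based position of the k-th 1 bit."""
    seen = 0
    for pos, bit in enumerate(bits, start=1):
        seen += bit
        if seen == k:
            return pos
    raise ValueError("not enough 1 bits")

def sublist(seq, bits, k):
    """Return the k-th list encoded in seq (delimited by the 1 bits of bits)."""
    start = select1(bits, k)
    end = start
    while end < len(bits) and bits[end] == 0:   # bits[end] is 1-based position end+1
        end += 1
    return seq[start - 1:end], start

def objects_of(subject_id, predicate_id):
    """Resolve the triple pattern (subject_id, predicate_id, ?object)."""
    preds, p_start = sublist(Sp, Bp, subject_id)    # step 1: predicate list
    j = bisect_left(preds, predicate_id)            # step 2: binary search
    if j == len(preds) or preds[j] != predicate_id:
        return []
    path = p_start + j                              # position in Sp = tree path
    return sublist(So, Bo, path)[0]                 # step 3: object list

print(objects_of(2, 6))   # -> [1, 3], matching the example

Step 4 of the example then corresponds to a final binary search over the
returned object list.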
All triple patterns providing the subject are efficiently resolved with vari-
ants of this process. Thus, the data structure directly mapped from the
encoding provides fast subject-based retrieval, but makes access by predicate
and by object difficult. Both can easily be accomplished with a limited
overhead on the space used by the original encoding. All fine-grain details
about the following decisions are also explained in Martínez-Prieto et al.
(2012a).
Enabling access by predicate. This retrieval operation demands direct access
to the second level of the tree, so it requires efficient access to the sequence Sp.
However, the elements of Sp are sorted by subject, so locating all predicate
occurrences demands a full scan of this sequence, and this results in a
poor response time.
Although accesses by predicate are uncommon in general (Arias et al.
2011), some applications could require them (e.g., extracting all the informa-
tion described with a set of given predicates). Thus, we address this need with
another data structure for mapping Sp. It must enable efficient predicate
location without degrading basic access, because it is used in all operations
by subject. We choose a structure called the wavelet tree.
The wavelet tree (Grossi et al. 2003) is a succinct structure which reorganizes a
sequence of integers, in a range [1, n], to provide some access operations to the
data in logarithmic time. Thus, the original Sp is now loaded as a wavelet tree,
not as an array. This implies a limited additional cost (in space) which preserves
HDT scalability for managing Big Semantic Data. In return, we can locate all
predicate occurrences in time logarithmic in the number of different predicates
used for modeling the data set. In practice, this number is small, so occurrence
location within our access operations is efficient. It is worth noting that
accessing any position in the wavelet tree now also has a logarithmic cost.
Therefore, access by predicate is implemented by first performing an
occurrence-by-occurrence location and, for each occurrence, traversing the tree
by following steps comparable to those explained in the previous example.
Enabling access by object. The data structure designed for loading HDT-
encoded data sets, considering a subject-based order, is not suitable for
access by object. All the occurrences of an object are scattered throughout
the sequence So, and we are not able to locate them unless we perform a
sequential scan. Furthermore, in this case a structure like the wavelet tree becomes
inefficient; RDF data sets usually have few predicates, but they contain many
different objects, and logarithmic costs result in very expensive operations.
We enhance HDT-FoQ with an additional index (called O-Index) that is
responsible for resolving accesses by object. This index basically gathers the
positions where each object appears in the original So. Please note that
each leaf is associated with a different triple, so given the index of an element
in the lower level, we can recover the associated predicate and subject by tra-
versing the tree upwards, processing the bit sequences in a similar way to
that used for subject-based access.
In relative terms, this O-Index has a significant impact on the final HDT-
FoQ requirements because it takes considerable space in comparison to the
other data structures used for modeling the Triples component. However, in
absolute terms, the total size required by HDT-FoQ is very small in compari-
son to that required by the other competitive solutions in the state of the art.
All these results are analyzed in the next section.
Joining Basic Triple Patterns. All this infrastructure enables basic triple pat-
terns to be resolved, in compressed space, at higher levels of the memory
hierarchy. As we show below, it guarantees efficient triple pattern resolution.
Although this kind of query is massively used in practice (Arias et al.
2011), the SPARQL core is defined around the concept of Basic Graph Pattern
(BGP) and its semantics to build conjunctions, disjunctions, and optional parts
involving more than a single triple pattern. Thus, HDT-FoQ must provide more
advanced query resolution to reach a full SPARQL coverage. At this moment, it
is able to resolve conjunctive queries by using specific implementations of the
well-known merge and index join algorithms (Ramakrishnan and Gehrke 2000).
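A hedged sketch of the merge-join side of this machinery is given below: two
triple-pattern results, each sorted by the shared join variable, are intersected
in one linear pass. The ID lists are invented, and the pairing of a join ID with a
single payload value is a simplification of full SPARQL solution mappings.

def merge_join(left, right):
    """left and right are lists of (join_id, payload) pairs sorted by join_id."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        a, b = left[i][0], right[j][0]
        if a < b:
            i += 1
        elif b < a:
            j += 1
        else:
            # emit the cross product of the run in right sharing this join id
            k = j
            while k < len(right) and right[k][0] == a:
                out.append((a, left[i][1], right[k][1]))
                k += 1
            i += 1
    return out

# e.g. joining ?x p1 ?y with ?x p2 ?z on the shared subject ?x (IDs are made up)
left  = [(1, "y1"), (2, "y2"), (4, "y4")]
right = [(2, "z1"), (2, "z2"), (3, "z3")]
print(merge_join(left, right))   # [(2, 'y2', 'z1'), (2, 'y2', 'z2')]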
Experimental Results
This section analyzes the impact of HDT for encoding Big Semantic Data
within the Publication-Exchange-Consumption workflow described in the
Web of Data. We characterize the publisher and consumer stakeholders of
our experiments as follows:
Publication Performance
As explained, RDF data sets are usually released in plain-text form (NTriples,
Turtle, or RDF-XML), and their big volume is simply reduced using any tradi-
tional compressor. This way, volume directly affects the publication process
because the publisher must, at least, process the data set to convert it to a suit-
able format for exchange. Following current practices, we set gzip com-
pression as the baseline and we also include lzma because of its effectiveness.
We compare their results against HDT, in plain and also in conjunction with
the same compressors. That is, HDT plain implements the encoding described
in section “Encoding Big Semantic Data: HDT”, and HDT + X stands for the
result of compressing HDT plain with the compressor X.
Table 4.1
Statistics of the Real-World Data Sets Used in the Experimentation

Data set     NTriples        Plain Size (GB)   Available at
LinkedMDB    6,148,121       0.85              http://queens.db.toronto.edu/~oktie/linkedmdb
DBLP         73,226,756      11.16             http://DBLP.l3s.de/DBLP++.php
Geonames     119,316,724     13.79             http://download.Geonames.org/all-Geonames-rdf.zip
DBpedia      296,907,301     48.62             http://wiki.dbpedia.org/Downloads37
Freebase     639,078,932     84.76             http://download.freebase.com/datadumps/ (a)
Mashup       1,055,302,957   140.46            Mashup of Geonames + Freebase + DBPedia

(a) Dump on 2012-07-26, converted to RDF using http://code.google.com/p/freebase-quad-rdfize/.
* http://www.rdfhdt.org
Figure 4.5 shows compression ratios for all the considered techniques.
In general, HDT plain requires more space than traditional compressors.
It is an expected result because both Dictionary and Triples use very basic
approaches. Advanced techniques for each component enable signifi-
cant improvements in space. For instance, our preliminary results using
the technique proposed in Martínez-Prieto et al. (2012b) for dictionary
encoding show a significant improvement in space. Nevertheless, if we
apply traditional compression over the HDT-encoded data sets, the spa-
tial requirements are largely diminished. As shown in Figure 4.5, the com-
parison changes when the HDT-encoded data sets are compressed with
gzip and lzma. These results show that HDT + lzma achieves the most
compressed representations, largely improving the effectiveness reported
by traditional approaches. For instance, HDT + lzma only uses 2.56% of
the original mash-up size, whereas compressors require 5.23% (lzma) and
7.92% (gzip).
Thus, encoding the original Big Semantic Data with HDT and then apply-
ing compression reports the best numbers for publication. It means that
publishers using our approach require 2−3 times less storage space and
bandwidth than using traditional compression. These savings are achieved
at the price of spending some time to obtain the corresponding representa-
tions. Note that traditional compression basically requires compressing the
data set, whereas our approach firstly transforms the data set into its HDT
encoding and then compresses it. These publication times (in minutes) are
depicted in Table 4.2.
Figure 4.5
Dataset compression (expressed as percent of the original size in NTriples) for
HDT, HDT + gz, HDT + lzma, NT + gz, and NT + lzma.
Table 4.2
Publication Times (Minutes)

Data set     gzip     lzma       HDT + gzip   HDT + lzma
LinkedMDB    0.19     14.71      1.09         1.52
DBLP         2.72     103.53     13.48        21.99
Geonames     3.28     244.72     26.42        38.96
DBPedia      18.90    664.54     84.61        174.12
Freebase     24.08    1154.02    235.83       315.34
Mash-up      47.23    2081.07    861.87       1033.0

Note: Bold values emphasize the best compression times.
Exchange Performance
In the ideal network regarded in our experiments, exchange performance is
uniquely determined by the data size. Thus, our approach also appears as
the most efficient because of its excellent compression ratios. Table 4.3 orga-
nizes processing times for all data sets and each task involved in the work-
flow. The exchange column lists the exchange times required when lzma (in the
baseline) and HDT + lzma are used for encoding.
For instance, the mash-up exchange takes roughly half an hour for
HDT + lzma and slightly more than 1 h for lzma. Thus, our approach halves
the exchange time and also saves bandwidth in the same proportion
for the mash-up.
Consumption Performance
In the current evaluation, consumption performance is analyzed from two com-
plementary perspectives. First, we consider a postprocessing stage in which the
consumer decompresses the downloaded data set and then indexes it for local
consumption. Every consumption task directly relies on efficient query resolu-
tion, and thus, our second evaluation focuses on query evaluation performance.
Table 4.3
Overall Client Times (Seconds)
Data set Config. Exchange Decomp. Index Total
LinkedMDB Baseline 9.61 5.11 111.08 125.80
HDT 6.25 1.05 1.91 9.21
DBLP Baseline 164.09 70.86 1387.29 1622.24
HDT 89.35 14.82 16.79 120.96
Geonames Baseline 174.46 87.51 2691.66 2953.63
HDT 118.29 19.91 44.98 183.18
DBPedia Baseline 1659.95 553.43 7904.73 10118.11
HDT 832.35 197.62 129.46 1159.43
Freebase Baseline 1910.86 681.12 58080.09 60672.07
HDT 891.90 227.47 286.25 1405.62
Mashup Baseline 3757.92 1238.36 >24 h >24 h
HDT 1839.61 424.32 473.64 2737.57
Note: Bold values highlight the best times for each activity in the workflow. Baseline means
that the file is downloaded in NTriples format, compressed using lzma, and indexed
using RDF-3X. HDT means that the file is downloaded in HDT, compressed with lzma,
and indexed using HDT-FoQ.
• RDF3X* was recently reported as the fastest RDF store (Huang et al.
2011).
• Virtuoso† is a popular store performing on relational infrastructure.
• Hexastore‡ is a well-known memory-resident store.
two main reasons. On the one hand, HDT-encoded data sets are smaller than
their counterparts in NTriples, which improves decompression performance.
On the other hand, HDT-FoQ generates its additional indexing structures
(see section “Querying HDT-Encoded Data Sets: HDT-FoQ”) over the origi-
nal HDT encoding, whereas RDF3X first needs to parse the data set and then
build its specific indices from scratch. Both features share an important
fact: the most expensive processing was already done on the server side, and
HDT-encoded data sets are clearly better for machine consumption.
Exchange and post-processing times can be analyzed together because they
make up the total time that a consumer must wait until the data can be
efficiently used in any application. Our integrated approach, around HDT
encoding and data structures, completes all the tasks 8−43 times faster than
the traditional combination of compression and RDF indexing. It means,
for instance, that the configured consumer retrieves and makes queryable
Freebase in roughly 23 min using HDT, but it needs almost 17 h to complete
the same process over the baseline. In addition, we can see that indexing is
clearly the heaviest task in the baseline, whereas exchange is the longest task
for us. However, in any case, we always complete exchange faster due to our
achievements in space.
Querying. Once the consumer has made the downloaded data queryable,
the infrastructure is ready for building applications on top that issue SPARQL que-
ries. The data volume emerges again as a key factor because it restricts the
ways indices and query optimizers are designed and managed.
On the one hand, RDF3X and Virtuoso rely on disk-based indexes which
are selectively loaded into main memory. Although both are efficiently tuned
for this purpose, these I/O transfers result in very expensive operations that
hinder the final querying performance. On the other hand, Hexastore and
HDT-FoQ always hold their indices in memory, avoiding these slow accesses
to disk. Whereas HDT-FoQ enables all data sets in the setup to be managed
in the consumer configuration, Hexastore is only able to index the smaller
one, showing its scalability problems when managing Big Semantic Data.
We obtain two different sets of SPARQL queries to compare HDT-FoQ
against the indexing solutions within the state of the art. On the one hand,
5000 queries are randomly generated for each triple pattern. On the other
hand, we also generate 350 queries of each type of two-way join, subdivided
into two groups depending on whether they have a small or big amount of
intermediate results. All these queries are run over Geonames in order to
include both Virtuoso and RDF3X in the experiments. Note that both classes
of queries are resolved without the need for query planning; hence, the results
are clear evidence of how the different indexing techniques perform.
Figure 4.6 summarizes these querying experiments. The X-axis lists all
different queries: the left subgroup lists the triple patterns, and the right ones
represent all different join classes. The Y-axis gives the number of times that
HDT-FoQ is faster than its competitors. For instance, in the pattern (S, V, V)
(equivalent to dereferencing the subject S), HDT-FoQ is more than 3 times
Figure 4.6
Comparison of querying performance on Geonames: speedup of HDT-FoQ
over RDF-3X and Virtuoso for each triple pattern and join class.
faster than RDF3X and more than 11 times faster than Virtuoso. In general,
HDT-FoQ always outperforms Virtuoso, whereas RDF3X is slightly faster for
(V, P, V) and some join classes. Nevertheless, we remain competitive in all
these cases, and our join algorithms are still open to optimization.
Acknowledgments
This work was partially funded by MICINN (TIN2009-14009-C02-02); Science
Foundation Ireland: Grant No. SFI/08/CE/I1380, Lion-II; Fondecyt 1110287
and Fondecyt 1-110066. The first author is granted by Erasmus Mundus, the
Regional Government of Castilla y León (Spain) and the European Social
Fund. The third author is granted by the University of Valladolid: pro-
gramme of Mobility Grants for Researchers (2012).
References
Abadi, D., A. Marcus, S. Madden, and K. Hollenbach. 2009. SW-Store: A vertically
partitioned DBMS for Semantic Web data management. The VLDB Journal 18,
385–406.
Adida, B., I. Herman, M. Sporny, and M. Birbeck (Eds.). 2012. RDFa 1.1 Primer. W3C
Working Group Note. http://www.w3.org/TR/xhtml-rdfa-primer/.
Akar, Z., T. G. Hala, E. E. Ekinci, and O. Dikenelli. 2012. Querying the Web of
Interlinked Datasets using VOID Descriptions. In Proc. of the Linked Data on the
Web Workshop (LDOW), Lyon, France, Paper 6.
Alexander, K. 2008. RDF in JSON: A Specification for serialising RDF in JSON. In
Proc. of the 4th Workshop on Scripting for the Semantic Web (SFSW), Tenerife,
Spain.
Alexander, K., R. Cyganiak, M. Hausenblas, and J. Zhao. 2009. Describing linked
datasets-on the design and usage of voiD, the “vocabulary of interlinked data-
sets”. In Proc. of the Linked Data on the Web Workshop (LDOW), Madrid, Spain,
Paper 20.
Álvarez-García, S., N. Brisaboa, J. Fernández, and M. Martínez-Prieto. 2011.
Compressed k2-triples for full-in-memory RDF engines. In Proc. 17th Americas
Conference on Information Systems (AMCIS), Detroit, Mich, Paper 350.
Arenas, M., A. Bertails, E. Prud’hommeaux, and J. Sequeda (Eds.). 2012. A Direct
Mapping of Relational Data to RDF. W3C Recommendation. http://www.
w3.org/TR/rdb-direct-mapping/.
Arias, M., J. D. Fernández, and M. A. Martínez-Prieto. 2011. An empirical study of
real-world SPARQL queries. In Proc. of 1st Workshop on Usage Analysis and the Web
of Data (USEWOD), Hyderabad, India. http://arxiv.org/abs/1103.5043.
Atemezing, G., O. Corcho, D. Garijo, J. Mora, M. Poveda-Villalón, P. Rozas, D. Vila-
Suero, and B. Villazón-Terrazas. 2013. Transforming meteorological data into
linked data. Semantic Web Journal 4(3), 285–290.
Atre, M., V. Chaoji, M. Zaki, and J. Hendler. 2010. Matrix “Bit” loaded: A scalable
lightweight join query processor for RDF data. In Proc. of the 19th World Wide
Web Conference (WWW), Raleigh, NC, pp. 41–50.
Auer, S., C. Bizer, G. Kobilarov, J. Lehmann, and Z. Ives. 2007. Dbpedia: A nucleus
for a web of open data. In Proc. of the 6th International Semantic Web Conference
(ISWC), Busan, Korea, pp. 11–15.
Baeza-Yates, R. and B. A. Ribeiro-Neto. 2011. Modern Information Retrieval—the
Concepts and Technology Behind Search (2nd edn.). Pearson Education Ltd.
Beckett, D. (Ed.) 2004. RDF/XML Syntax Specification (Revised). W3C Recommendation.
http://www.w3.org/TR/rdf-syntax-grammar/.
Beckett, D. and T. Berners-Lee. 2008. Turtle—Terse RDF Triple Language. W3C Team
Submission. http://www.w3.org/TeamSubmission/turtle/.
Berners-Lee, T. 1998. Notation3. W3C Design Issues. http://www.w3.org/
DesignIssues/Notation3.
Berners-Lee, T. 2002. Linked Open Data. What is the idea? http://www.thenational-
dialogue.org/ideas/linked-open-data (accessed October 8, 2012).
Berners-Lee, T. 2006. Linked Data: Design Issues. http://www.w3.org/DesignIssues/
LinkedData.html (accessed October 8, 2012).
Bizer, C., T. Heath, and T. Berners-Lee. 2009. Linked data—the story so far. International
Journal on Semantic Web and Information Systems 5, 1–22.
Brickley, D. 2004. RDF Vocabulary Description Language 1.0: RDF Schema. W3C
Recommendation. http://www.w3.org/TR/rdf-schema/.
Brisaboa, N., R. Cánovas, F. Claude, M. Martínez-Prieto, and G. Navarro. 2011.
Compressed string dictionaries. In Proc. of 10th International Symposium on
Experimental Algorithms (SEA), Chania, Greece, pp. 136–147.
Cukier, K. 2010. Data, data everywhere. The Economist (February, 25). http://www.
economist.com/opinion/displaystory.cfm?story_id=15557443 (accessed October
8, 2012).
Cyganiak, R., H. Stenzhorn, R. Delbru, S. Decker, and G. Tummarello. 2008. Semantic
sitemaps: Efficient and flexible access to datasets on the semantic web. In Proc.
of the 5th European Semantic Web Conference (ESWC), Tenerife, Spain, pp. 690–704.
De, S., T. Elsaleh, P. M. Barnaghi, and S. Meissner. 2012. An internet of things platform
for real-world and digital objects. Scalable Computing: Practice and Experience
13(1), 45–57.
Dijcks, J.-P. 2012. Big Data for the Enterprise. Oracle (white paper) (January). http://
www.oracle.com/us/products/database/big-data-for-enterprise-519135.pdf
(accessed October 8, 2012).
Dumbill, E. 2012a. Planning for Big Data. O’Reilly Media, Sebastopol, CA.
Dumbill, E. 2012b. What is big data? Strata (January, 11). http://strata.oreilly.
com/2012/01/what-is-big-data.html (accessed October 8, 2012).
Fernández, J. D., M. A. Martínez-Prieto, and C. Gutiérrez. 2010. Compact represen-
tation of large RDF data sets for publishing and exchange. In Proc. of the 9th
International Semantic Web Conference (ISWC), Shanghai, China, pp. 193–208.
Fernández, J. D., M. A. Martínez-Prieto, C. Gutiérrez, and A. Polleres. 2011. Binary RDF
Representation for Publication and Exchange (HDT). W3C Member Submission.
http://www.w3.org/Submission/2011/03/.
Foulonneau, M. 2011. Smart semantic content for the future internet. In Metadata and
Semantic Research, Volume 240 of Communications in Computer and Information
Science, pp. 145–154. Springer, Berlin, Heidelberg.
García-Silva, A., O. Corcho, H. Alani, and A. Gómez-Pérez. 2012. Review of the state
of the art: Discovering and associating semantics to tags in folksonomies. The
Knowledge Engineering Review 27(01), 57–85.
González, R., S. Grabowski, V. Mäkinen, and G. Navarro. 2005. Practical implementa-
tion of rank and select queries. In Proc. of 4th International Workshop Experimental
and Efficient Algorithms (WEA), Santorini Island, Greece, pp. 27–38.
Grossi, R., A. Gupta, and J. Vitter. 2003. High-order entropy-compressed text indexes.
In Proc. of 9th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA),
Baltimore, MD, pp. 841–850.
Haas, K., P. Mika, P. Tarjan, and R. Blanco. 2011. Enhanced results for web search. In
Proc. of the 34th International Conference on Research and Development in Information
Retrieval (SIGIR), Beijing, China, pp. 725–734.
Halfon, A. 2012. Handling big data variety. http://www.finextra.com/community/
fullblog.aspx?blogid=6129 (accessed October 8, 2012).
Hausenblas, M. and M. Karnstedt. 2010. Understanding linked open data as a web-
scale database. In Proc. of the 1st International Conference on Advances in Databases
(DBKDA), 56–61.
Heath, T. and C. Bizer. 2011. Linked Data: Evolving the Web into a Global Data Space.
Synthesis Lectures on the Semantic Web: Theory and Technology, Morgan &
Claypool.
Hey, T., S. Tansley, and K. M. Tolle. 2009. Jim Gray on eScience: A transformed scien-
tific method. In The Fourth Paradigm. Microsoft Research.
Hogan, A., A. Harth, J. Umbrich, S. Kinsella, A. Polleres, and S. Decker. 2011. Searching
and browsing linked data with SWSE: The semantic web search engine. Journal
of Web Semantics 9(4), 365–401.
Hogan, A., J. Umbrich, A. Harth, R. Cyganiak, A. Polleres, and S. Decker. 2012. An
empirical survey of linked data conformance. Web Semantics: Science, Services
and Agents on the World Wide Web 14(0), 14–44.
Huang, J., D. Abadi, and K. Ren. 2011. Scalable SPARQL querying of large RDF
graphs. Proceedings of the VLDB Endowment 4(11), 1123–1134.
Knoblock, C. A., P. Szekely, J. L. Ambite, S. Gupta, A. Goel, M. Muslea, K. Lerman,
and P. Mallick. 2012. Semi-Automatically Mapping Structured Sources into
the Semantic Web. In Proc. of the 9th Extended Semantic Web Conference (ESWC),
Heraklion, Greece, pp. 375–390.
Le-Phuoc, D., J. X. Parreira, V. Reynolds, and M. Hauswirth. 2010. RDF On the Go:
An RDF Storage and Query Processor for Mobile Devices. In Proc. of the 9th
International Semantic Web Conference (ISWC), Shanghai, China. http://ceur-ws.
org/Vol-658/paper503.pdf.
Lohr, S. 2012. The age of big data. The New York Times (February, 11). http://www.
nytimes.com/2012/02/12/sunday-review/big-datas-impact-in-the-world.html
(accessed October 8, 2012).
Loukides, M. 2012. What is Data Science? O’Reilly Media.
Manola, F. and E. Miller (Eds.). 2004. RDF Primer. W3C Recommendation.
www.w3.org/TR/rdf-primer/.
Martínez-Prieto, M., M. Arias, and J. Fernández. 2012a. Exchange and consumption
of huge RDF data. In Proc. of the 9th Extended Semantic Web Conference (ESWC),
Heraklion, Greece, pp. 437–452.
Martínez-Prieto, M., J. Fernández, and R. Cánovas. 2012b. Querying RDF dictionar-
ies in compressed space. ACM SIGAPP Applied Computing Reviews 12(2), 64–77.
Marz, N. and J. Warren. 2013. Big Data: Principles and Best Practices of Scalable Realtime
Data Systems. Manning Publications.
McGuinness, D. L. and F. van Harmelen (Eds.). 2004. OWL Web Ontology Language
Overview. W3C Recommendation. http://www.w3.org/TR/owl-features/.
Neumann, T. and G. Weikum. 2010. The RDF-3X engine for scalable management of
RDF data. The VLDB Journal 19(1), 91–113.
Prud’hommeaux, E. and A. Seaborne (Eds.). 2008. SPARQL Query Language for RDF.
http://www.w3.org/TR/rdf-sparql-query/. W3C Recommendation.
Quesada, J. 2008. Human similarity theories for the semantic web. In Proceedings of
the First International Workshop on Nature Inspired Reasoning for the Semantic Web,
Karlsruhe, Germany.
Ramakrishnan, R. and J. Gehrke. 2000. Database Management Systems. Osborne/
McGraw-Hill.
Schmidt, M., M. Meier, and G. Lausen. 2010. Foundations of SPARQL query opti-
mization. In Proc. of the 13th International Conference on Database Theory (ICDT),
Lausanne, Switzerland, pp. 4–33.
Schneider, J. and T. Kamiya (Eds.). 2011. Efficient XML Interchange (EXI) Format 1.0.
W3C Recommendation. http://www.w3.org/TR/exi/.
Schwarte, A., P. Haase, K. Hose, R. Schenkel, and M. Schmidt. 2011. FedX: Optimization
techniques for federated query processing on linked data. In Proc. of the 10th
International Conference on the Semantic Web (ISWC), Bonn, Germany, pp. 601–616.
Selg, E. 2012. The next Big Step—Big Data. GFT Technologies AG (technical report).
http://www.gft.com/etc/medialib/2009/downloads/techreports/2012.Par.0001.File.tmp/
gft_techreport_big_data.pdf (accessed October 8, 2012).
Sidirourgos, L., R. Goncalves, M. Kersten, N. Nes, and S. Manegold. 2008. Column-
store Support for RDF Data Management: not All Swans are White. Proc. of the
VLDB Endowment 1(2), 1553–1563.
Taheriyan, M., C. A. Knoblock, P. Szekely, and J. L. Ambite. 2012. Rapidly integrating
services into the linked data cloud. In Proc. of the 11th International Semantic Web
Conference (ISWC), Boston, MA, pp. 559–574.
Tran, T., G. Ladwig, and S. Rudolph. 2012. RDF data partitioning and query processing
using structure indexes. IEEE Transactions on Knowledge and Data Engineering 99.
Doi: ieeecomputersociety.org/10.1109/TKDE.2012.134
Tummarello, G., R. Cyganiak, M. Catasta, S. Danielczyk, R. Delbru, and S. Decker.
2010. Sig.ma: Live views on the web of data. Web Semantics: Science, Services and
Agents on the World Wide Web 8(4), 355–364.
Urbani, J., J. Maassen, and H. Bal. 2010. Massive semantic web data compression with
MapReduce. In Proc. of the 19th International Symposium on High Performance
Distributed Computing (HPDC) 2010, Chicago, IL, pp. 795–802.
Volz, J., C. Bizer, M. Gaedke, and G. Kobilarov. 2009. Discovering and maintaining
links on the web of data. In Proc. of the 9th International Semantic Web Conference
(ISWC), Shanghai, China, pp. 650–665.
Weiss, C., P. Karras, and A. Bernstein. 2008. Hexastore: Sextuple indexing for semantic
web data management. Proc. of the VLDB Endowment 1(1), 1008–1019.
Witten, I. H., A. Moffat, and T. C. Bell. 1999. Managing Gigabytes: Compressing and
Indexing Documents and Images. San Francisco, CA, Morgan Kaufmann.
This page intentionally left blank
5
Linked Data in Enterprise Integration
Contents
Introduction.......................................................................................................... 169
Challenges in Data Integration for Large Enterprises.................................... 173
Linked Data Paradigm for Integrating Enterprise Data................................. 178
Runtime Complexity........................................................................................... 180
Preliminaries.................................................................................................... 181
The HR3 Algorithm......................................................................................... 183
Indexing Scheme......................................................................................... 183
Approach..................................................................................................... 184
Evaluation........................................................................................................ 187
Experimental Setup.................................................................................... 187
Results.......................................................................................................... 188
Discrepancy........................................................................................................... 191
Preliminaries.................................................................................................... 193
CaRLA............................................................................................................... 194
Rule Generation.......................................................................................... 194
Rule Merging and Filtering....................................................................... 195
Rule Falsification........................................................................................ 196
Extension to Active Learning........................................................................ 197
Evaluation........................................................................................................ 198
Experimental Setup.................................................................................... 198
Results and Discussion.............................................................................. 199
Conclusion............................................................................................................ 201
References.............................................................................................................. 202
Introduction
Data integration in large enterprises is a crucial but at the same time a costly,
long-lasting, and challenging problem. While business-critical information
is often already gathered in integrated information systems such as ERP,
CRM, and SCM systems, the integration of these systems themselves as well
* http://opencorporates.com/
† http://www.productontology.org/
Figure 5.1
Our vision of an Enterprise Data Web (EDW). The solid lines show how IT systems may be
currently connected in a typical scenario. The dotted lines visualize how IT systems could be
interlinked employing an internal data cloud. The EDW also comprises an EKB, which consists
of vocabulary definitions, copies of relevant Linked Open Data, as well as internal and external
link sets between data sets. Data from the LOD cloud may be reused inside the enterprise, but
internal data are secured from external access just like in usual intranets.
The introductory section depicts our vision of an Enterprise Data Web and
the resulting semantically interlinked enterprise IT systems landscape (see
Figure 5.1). We expect existing enterprise taxonomies to be the nucleus of
linking and integration hubs in large enterprises, since these taxonomies
already reflect a large part of the domain terminology and corporate and
organizational culture. In order to transform enterprise taxonomies into
comprehensive EKBs, additional relevant data sets from the Linked Open
Data Web have to be integrated and linked with the internal taxonomies and
knowledge structures. Subsequently, the emerging EKB can be used (1) for
interlinking and annotating content in enterprise wikis, content management
systems, and portals; (2) as a stable set of reusable concepts and identifiers;
and (3) as the background knowledge for intranet, extranet, and site-search
applications. As a result, we expect the current document-oriented intranets
in large enterprises to be complemented with a data intranet, which facili-
tates the lightweight, semantic integration of the plethora of information sys-
tems and databases in large enterprises.
Table 5.1
Overview of Data Integration Challenges Occurring in Large Enterprises

Enterprise Taxonomies
  Current state: proprietary, centralized, no relationships between terms, multiple independent terminologies (dictionaries).
  Linked Data benefit: open standards (e.g., SKOS), distributed, hierarchical, multilingual, reusable in other scenarios.

XML Schema Governance
  Current state: multitude of XML schemas, no integrated documentation.
  Linked Data benefit: relationships between entities from different schemas, tracking/documentation of XML schema evolution.

Wikis
  Current state: text-based wikis for teams or internal-use encyclopedias.
  Linked Data benefit: reuse of (structured) information via data wikis (by other applications), interlinking with other data sources, for example, taxonomies.

Web Portal and Intranet Search
  Current state: keyword search over textual content.
  Linked Data benefit: sophisticated search mechanisms employing implicit knowledge from different data sources.

Database Integration
  Current state: data warehouses, schema mediation, query federation.
  Linked Data benefit: lightweight data integration through an RDF layer.

Enterprise Single Sign-On
  Current state: consolidated user credentials, centralized SSO.
  Linked Data benefit: no passwords, more sophisticated access control mechanisms (arbitrary metadata attached to identities).
Figure 5.2
The Linked Data life cycle supports four crucial data integration challenges arising in enterprise environments. Each of the challenges can relate to more than one life-cycle stage.
XML Schema Governance. Several languages are commonly used to describe XML schemas: the oldest and simplest, DTD [3]; the popular XML Schema [31]; the increasingly used Relax NG [4]; and the rule-based Schematron [12]. In a typical enterprise, there are hundreds or even thousands of XML schemas in use, each possibly written in a different XML schema language. Moreover, as the enterprise and its surrounding environment evolve, the schemas need to adapt. Therefore, new versions of schemas are created, resulting in a proliferation of XML schemas. XML schema governance is the process of bringing order into the large number of XML schemas being generated and used within large organizations. The sheer number of IT systems deployed in large enterprises that make use of XML technology poses a challenge in bootstrapping and maintaining an XML schema repository. In order to create such a repository, a bridge between XML schemata and RDF needs to be established. This requires, in the first place, the identification of XML schema resources and the respective entities that are defined by them. Some useful information can be extracted automatically from schema definitions that are available in a machine-readable format, such as XML Schema files and DTDs. While such definitions are typically available for systems that employ XML for information exchange, this may not always be the case for proprietary software systems that employ XML only for data storage. In the latter case, as well as for maintaining additional metadata (such as the responsible department, deployed IT systems, etc.), a substantial amount of manual work is required. In a second step, the identified schema metadata needs to be represented in RDF at a fine-grained level. The challenge here is the development of an ontology which not only allows for the annotation of XML schemas, but also enables domain experts to establish semantic relationships between schemas. Another important challenge is to develop methods for capturing and describing the evolution of XML schemata, since IT systems change over time and those revisions need to be aligned with the remaining schemas.
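As a rough illustration of such fine-grained schema metadata in RDF, the following Python sketch uses the rdflib library; the ex: vocabulary, the URIs, and the property names are hypothetical placeholders rather than an actual schema-governance ontology.

# A minimal sketch (hypothetical vocabulary) of representing XML schema
# metadata in RDF with rdflib; it only illustrates the idea of a schema
# repository, not an actual schema-governance ontology.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/schema-governance/")   # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)

schema = URIRef("http://example.org/schemas/order-v2.xsd")  # hypothetical schema URI
g.add((schema, RDF.type, EX.XMLSchema))
g.add((schema, EX.version, Literal("2.0")))
g.add((schema, EX.responsibleDepartment, Literal("Procurement IT")))
g.add((schema, EX.supersedes,
       URIRef("http://example.org/schemas/order-v1.xsd")))     # schema evolution
g.add((schema, EX.definesElement, Literal("PurchaseOrder")))  # entity defined by the schema

print(g.serialize(format="turtle"))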
Wikis. These have become increasingly common over the last years, ranging from small personal wikis to the largest Internet encyclopedia, Wikipedia. The same applies to the use of wikis in enterprises [16]. In addition to traditional wikis, there is another category of wikis, the so-called semantic wikis. These can again be divided into two categories: semantic text wikis and semantic data wikis. Wikis of this kind are not yet commonly used in enterprises, but they are crucial for enterprise data integration since they make (at least some of) the information contained in a wiki machine-accessible. Text-based semantic wikis are conventional wikis (where text is still the main content type) that allow users to add semantic annotations to the texts (e.g., typed links). The semantically enriched content can then be used within the wiki itself (e.g., for dynamically created wiki pages) or can be queried, when the structured data are stored in a separate data store. Examples are Semantic MediaWiki [14] and its enterprise counterpart SMW+ [25]. Since wikis in large enterprises are still quite a new phenomenon, the deployment of data wikis instead of or in addition to text wikis will
* http://aksw.org/Projects/SparqlMap
and tool support for mapping creation. The standardization of the RDB to RDF
Mapping Language (R2RML) by the W3C RDB2RDF Working Group establishes
a common ground for an interoperable ecosystem of tools. However, there is
a lack of mature tools for the creation and application of R2RML mappings.
The challenge lies in the creation of user-friendly interfaces and in the estab-
lishment of best practices for creating, integrating, and maintaining those
mappings. Finally, for read–write integration, updates on the mapped data need to be propagated back into the underlying RDBMS; an initial solution is presented in [5]. In the context of enterprise data, an integration with granular access control mechanisms is of vital importance. Consequently, semantic wikis, query federation tools, and interlinking tools can work with the data of relational databases. The usage of SPARQL 1.1 query federation [26] allows relational databases to be integrated into federated query systems, with queries spanning multiple databases. Such federation enables, for example, portals that, in combination with an EKB, provide an integrated view on enterprise data.
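To illustrate the federation mechanism, the sketch below issues a SPARQL 1.1 query with a SERVICE clause from Python using the SPARQLWrapper library; the endpoint URLs, the graph contents, and the ex: vocabulary are hypothetical placeholders rather than parts of an actual deployment.

# Hypothetical example of SPARQL 1.1 query federation: a query against an
# internal endpoint that pulls additional data from a second (e.g., R2RML-
# mapped) endpoint via the SERVICE keyword.
from SPARQLWrapper import SPARQLWrapper, JSON

query = """
PREFIX ex: <http://example.org/enterprise/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?product ?label ?stock WHERE {
  ?product a ex:Product ;
           rdfs:label ?label .
  SERVICE <http://erp.example.org/sparql> {   # second, e.g., RDB-backed endpoint
    ?product ex:stockLevel ?stock .
  }
}
"""

endpoint = SPARQLWrapper("http://portal.example.org/sparql")  # federating endpoint
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["label"]["value"], row["stock"]["value"])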
Enterprise Single Sign-On. As a result of the large number of deployed soft-
ware applications in large enterprises, which are increasingly web-based,
single sign-on (SSO) solutions are of crucial importance. A Linked Data-based
approach aimed at tackling the SSO problem is WebID [30]. In order to deploy
a WebID-based SSO solution in large enterprises, a first challenge is to transfer
user identities to the Enterprise Data Web. Those Linked Data identities need to
be enriched and interlinked with further background knowledge, while main-
taining privacy. Thus, mechanisms need to be developed to ensure that only the information required for the authentication protocol is made publicly available (i.e., public inside the corporation). Another challenge that arises is related to user management. With WebID, a distributed management of identities is feasible (e.g., on department level), while those identities can still be used throughout the company. Though this reduces the likelihood of a single point of failure, it would require the introduction of mechanisms to ensure
that company-wide policies are enforced. Distributed group management and
authorization is already a research topic (e.g., dgFOAF [27]) in the area of social
networks. However, requirements that are gathered from distributed social
network use-cases differ from those captured from enterprise use-cases. Thus,
social network solutions need a critical inspection in the enterprise context.
Linked Data Paradigm for Integrating Enterprise Data
Applications that draw on several knowledge bases usually integrate them into a unified view by means of the extract-transform-load (ETL) paradigm [13]. For example, IBM's DeepQA framework [8]
combines knowledge from DBpedia,* Freebase,† and several other knowledge
bases to determine the answer to questions with a speed superior to that of
human champions. A similar view to data integration can be taken within
the Linked Data paradigm with the main difference that the load step can
be discarded when the knowledge bases are not meant to be fused, which
is mostly the case. While the extraction was addressed above, the transformation remains a complex challenge and has not yet been addressed much in the enterprise context. The specification of such integration processes for Linked Data is rendered tedious by several factors, including
Similar issues are found in the Linked Open Data (LOD) Cloud, which con-
sists of more than 30 billion triples‡ distributed across more than 250 knowl-
edge bases. In the following, we will use the Linked Open Data Cloud as
reference implementation of the Linked Data principles and present semi-
automatic means that aim to ensure high-quality Linked Data Integration.
The scalability of Linked Data Integration has been addressed in manifold previous works on link discovery. In particular, Link Discovery frameworks such as LIMES [21–23] as well as time-efficient algorithms such as PPJoin+ [34] have been designed to address this challenge. Yet, none of these approaches provides theoretical guarantees with respect to its performance. Thus, so far, it was impossible to predict how Link Discovery frameworks would perform with respect to time or space requirements. Consequently, the deployment of techniques such as customized memory management [2] or time-optimization strategies [32] (e.g., automated scaling for cloud computing when provided with very complex linking tasks) was rendered very demanding, if not impossible.
A novel approach that addresses these drawbacks is the HR3 algorithm [20]. Similar to the HYPPO algorithm [22] (on whose formalism it is based), HR3 assumes that the property values to be compared are expressed in an affine space with a Minkowski distance. Consequently, it can be most naturally used to process the portion of link specifications that compares numeric values (e.g., temperatures, elevations, populations, etc.). HR3 goes beyond the state of the art by being able to carry out Link Discovery tasks with any achievable reduction ratio [6]. This theoretical guarantee is of practical importance, as it not only allows our approach to be more time-efficient than the state of the art
* http://dbpedia.org
† http://www.freebase.com
‡ http://www4.wiwiss.fu-berlin.de/lodcloud/state/
but also lays the foundation for the implementation of customized memory
management and time-optimization strategies for Link Discovery.
The difficulties behind the integration of Linked Data are not only caused by the mere growth of the data sets in the Linked Data Web, but also by the large number of discrepancies across these data sets. In particular, ontology mismatches [7] mostly affect the extraction step of the ETL process. They occur when different classes or properties are used in the source knowledge bases to express equivalent knowledge (with respect to the extraction process at hand).
For example, while Sider* uses the class sider:side_effects to represent
diseases that can occur as a side effect of the intake of certain medication, the
more generic knowledge base DBpedia uses dbpedia:Disease. Such a mis-
match can lead to a knowledge base that integrates DBpedia and Sider contain-
ing duplicate classes. The same type of mismatch also occurs at the property
level. For example, while Eunis† uses the property eunis:binomialName
to represent the labels of species, DBpedia uses rdfs:label. Thus, even
if the extraction problem was resolved at class level, integrating Eunis and
DBpedia would still lead to the undesirable constellation of an integrated
knowledge base where instances of species would have two properties that
serve as labels. The second category of common mismatches mostly affects
the transformation step of ETL and lies in the different conventions used for
equivalent property values. For example, the labels of films in DBpedia differ
from the labels of films in LinkedMDB‡ in three ways: First, they contain a
language tag. Second, they contain the extension “(film)” if another entity with the same label exists. Third, if another film with the same label exists, the production year of the film is added. Consequently, the film Liberty from 1929 has the
label “Liberty (1929 film)@en” in DBpedia, while the same film bears the label
“Liberty” in LinkedMDB. A similar discrepancy in naming persons holds for
film directors (e.g., John Frankenheimer (DBpedia: John Frankenheimer@
en, LinkedMDB: John Frankenheimer (Director)) and John Ford (DBpedia:
John Ford@en, LinkedMDB: John Ford (Director))) and actors. Finding a conform representation of the labels of movies that maps to the LinkedMDB representation would require knowing the rules replace(“@en”, ε) and replace(“(*film)”, ε), where ε stands for the empty string.
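Such conventions can be stripped with ordinary string operations; a rough Python illustration follows (the regular expression is our own guess at the pattern, not a rule prescribed by the chapter).

# Strip the DBpedia-specific label conventions so that the result matches the
# LinkedMDB representation; the regular expression is an illustrative guess.
import re

def normalize_dbpedia_label(label):
    label = label.replace("@en", "")                 # drop the language tag
    label = re.sub(r"\s*\([^)]*film\)", "", label)   # drop "(film)" / "(1929 film)"
    return label.strip()

print(normalize_dbpedia_label("Liberty (1929 film)@en"))   # "Liberty"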
Runtime Complexity
The development of scalable algorithms for link discovery is of crucial importance for addressing the Big Data problems that enterprises are increasingly faced with. While the variety of the data is addressed by the extraction
* http://sideeffects.embl.de/
† http://eunis.eea.europa.eu/
‡ http://linkedmdb.org/
processes presented in the sections above, the mere volume of the data makes it necessary to carry out single linking tasks as efficiently as possible. Moreover, the velocity of the data requires that link discovery be carried out on a regular basis. These requirements were the basis for the development of HR3 [20], the first reduction-ratio-optimal link discovery algorithm. In the following, we present and evaluate this approach.
Preliminaries
In this section, we present the preliminaries necessary to understand the subsequent parts of this chapter. In particular, we define the problem of Link Discovery, the reduction ratio, and the relative reduction ratio formally, as well as give an overview of space tiling for Link Discovery. The subsequent description of HR3 relies partly on the notation presented in this section.
Link Discovery. The goal of Link Discovery is to compute the set of pairs of instances (s, t) ∈ S × T that are related by a relation R, where S and T are two not necessarily distinct sets of instances. One way of automating this discovery is to compare s ∈ S and t ∈ T based on their properties using a distance measure. Two entities are then considered to be linked via R if their distance is less than or equal to a threshold θ [23].
Given two sets S and T of instances, a distance measure δ over the properties of s ∈ S and t ∈ T, and a distance threshold θ ∈ [0, ∞[, the goal of Link Discovery is to compute the set M = {(s, t, δ(s, t)) : s ∈ S ∧ t ∈ T ∧ δ(s, t) ≤ θ}.
Note that in this chapter, we are only interested in lossless solutions, that is, solutions that are able to find all pairs that abide by the definition given above.
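A brute-force computation of M follows immediately from this definition; the short Python sketch below (toy data and helper names of our own choosing) illustrates the |S||T| comparisons that more efficient algorithms try to avoid.

# Brute-force Link Discovery: compare every s in S with every t in T and keep
# the pairs whose Minkowski distance (order p) is at most the threshold theta.
def minkowski(x, y, p=2):
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def brute_force_link_discovery(S, T, theta, p=2):
    return [(s, t, minkowski(s, t, p))
            for s in S for t in T
            if minkowski(s, t, p) <= theta]

# Toy example: instances described by two numeric properties each.
S = [(1.0, 2.0), (4.0, 4.0)]
T = [(1.1, 2.2), (9.0, 9.0)]
print(brute_force_link_discovery(S, T, theta=0.5))   # one pair survives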
Reduction Ratio. A brute-force approach to Link Discovery would execute
a Link Discovery task on S and T by carrying out |S||T| comparisons. One
of the key ideas behind time-efficient Link Discovery algorithms A is to
reduce the number of comparisons that are effectively carried out to a num-
ber C(A) < |S||T| [29]. The reduction ratio RR of an algorithm A is given by
RR(A) = 1 − C(A)/(|S||T|).  (5.1)
RR(A) captures how much of the Cartesian product |S||T| was not explored
before the output of A was reached. It is obvious that even an optimal loss-
less solution which performs only the necessary comparisons cannot achieve
an RR of 1. Let Cmin be the minimal number of comparisons necessary to
complete the Link Discovery task without losing recall, that is, Cmin = |M
|. We define the relative reduction ratio RRR(A) as the ratio of the RR of such an optimal solution to the RR of A:

RRR(A) = (1 − Cmin/(|S||T|)) / (1 − C(A)/(|S||T|)) = (|S||T| − Cmin) / (|S||T| − C(A)).  (5.2)
RRR(A) indicates how close A is to the optimal solution with respect to the
number of candidates it tests. Given that C(A) ≥ Cmin, RRR(A) ≥ 1. Note that
the larger the value of RRR(A), the poorer the performance of A with respect
to the task at hand.
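Both ratios follow directly from the comparison counts; a small sketch with illustrative numbers:

# Reduction ratio (Equation 5.1) and relative reduction ratio (Equation 5.2)
# for an algorithm that carried out c_a comparisons on sets of sizes |S|, |T|,
# where c_min = |M| is the number of comparisons an optimal solution needs.
def reduction_ratio(c_a, size_s, size_t):
    return 1.0 - c_a / (size_s * size_t)

def relative_reduction_ratio(c_a, c_min, size_s, size_t):
    return (size_s * size_t - c_min) / (size_s * size_t - c_a)

# Illustrative numbers: 10^6 candidate pairs, 50,000 comparisons performed,
# 40,000 actual matches.
print(reduction_ratio(50_000, 1_000, 1_000))                   # 0.95
print(relative_reduction_ratio(50_000, 40_000, 1_000, 1_000))  # ~1.0105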
The main observation that led to this work is that while most algorithms
aim to optimize their RR (and consequently their RRR), current approaches to
Link Discovery do not provide any guarantee with respect to the RR (and con-
sequently the RRR) that they can achieve. In this work, we present an approach
to Link Discovery in metric spaces whose RRR is guaranteed to converge to 1.
Space Tiling for Link Discovery. Our approach, HR3, builds upon the same formalism on which the HYPPO algorithm relies, that is, space tiling. HYPPO addresses the problem of efficiently mapping instance pairs (s, t) ∈ S × T described by using exclusively numeric values in an n-dimensional metric space and has been shown to outperform the state of the art in previous work [22]. The observation behind space tiling is that in spaces (Ω, δ) with orthogonal (i.e., uncorrelated) dimensions,* common metrics for Link Discovery can be decomposed into the combination of functions ϕi, i ∈ {1,...,n}, which operate on exactly one dimension of Ω: δ = f(ϕ1,...,ϕn). For Minkowski distances of order p, ϕi(x, ω) = |xi − ωi| for all values of i and

δ(x, ω) = (Σ_{i=1}^{n} ϕi(x, ω)^p)^{1/p}.
A direct consequence of this observation is the inequality ϕi(x, ω) ≤ δ(x, ω). The basic insight behind this observation is that the hypersphere H(ω, θ) = {x ∈ Ω : δ(x, ω) ≤ θ} is a subset of the hypercube V defined as V(ω, θ) = {x ∈ Ω : ∀i ∈ {1,...,n}, ϕi(x, ω) ≤ θ}. Consequently, one can reduce the number of comparisons necessary to detect all elements of H(ω, θ) by discarding all elements that are not in V(ω, θ) as nonmatches. Let Δ = θ/α, where α ∈ ℕ is the granularity parameter that controls how fine-grained the space tiling should be (see Figure 5.3 for an example). We first tile Ω into the adjacent hypercubes (short: cubes) C that contain all the points ω such that

∀i ∈ {1,...,n}: ciΔ ≤ ωi < (ci + 1)Δ.  (5.3)

We call the vector (c1,...,cn) the coordinates of the cube C. Each point ω ∈ Ω lies in the cube C(ω) with coordinates (⌊ωi/Δ⌋)_{i=1,...,n}. Given such a space
* Note that in all cases, a space transformation exists that can map a space with correlated
dimensions to a space with uncorrelated dimensions.
Figure 5.3
Space tiling for different values of α. The colored squares show the set of elements that must be
compared with the instance located at the black dot. The points within the circle lie within the
distance θ of the black dot. Note that higher values of α lead to a better approximation of the
hypersphere but also to more hypercubes.
tiling, it is obvious that V(ω, θ) consists of the union of the cubes C such that ∀i ∈ {1,...,n}: |ci − c(ω)i| ≤ α.
Like most of the current algorithms for Link Discovery, space tiling does not provide optimal performance guarantees. The main goal of this chapter is to build upon the tiling idea so as to develop an algorithm that can achieve any possible RR. In the following, we present such an algorithm, HR3.
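The tiling step itself is straightforward to sketch; the following Python fragment (an illustration under the definitions above, not the HYPPO implementation) maps points to cube coordinates and collects the candidate points for a reference point ω.

# Space tiling: map each point to the cube with coordinates floor(x_i / delta)
# and, for a reference point omega, keep as candidates all points whose cube
# coordinates differ from C(omega) by at most alpha in every dimension.
from math import floor

def cube_coordinates(point, delta):
    return tuple(floor(x / delta) for x in point)

def tiling_candidates(omega, points, theta, alpha):
    delta = theta / alpha
    c_omega = cube_coordinates(omega, delta)
    return [x for x in points
            if all(abs(ci - coi) <= alpha
                   for ci, coi in zip(cube_coordinates(x, delta), c_omega))]

omega = (10.0, 20.0)
points = [(10.4, 20.3), (10.9, 20.9), (15.0, 25.0)]
print(tiling_candidates(omega, points, theta=1.0, alpha=4))  # the first two points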
The HR3 Algorithm
The goal of the HR3 algorithm is to efficiently map instance pairs (s, t) ∈ S × T that are described by using exclusively numeric values in an n-dimensional metric space where the distances are measured by using any Minkowski distance of order p ≥ 2. To achieve this goal, HR3 relies on a novel indexing scheme that allows achieving any RRR greater than or equal to 1. In the following, we first present our new indexing scheme and show that we can discard more hypercubes than simple space tiling for all granularities α such that n(α − 1)^p > α^p. We then prove that, by these means, our approach can achieve any RRR greater than 1, therewith proving the optimality of our indexing scheme with respect to RRR.
Indexing Scheme
Let ω ∈ Ω = S ∪ T be an arbitrary reference point. Furthermore, let δ be the Minkowski distance of order p. We define the index function as follows:

index(C, ω) = 0 if ∃i ∈ {1,...,n}: ci = c(ω)i, and index(C, ω) = Σ_{i=1}^{n} (|ci − c(ω)i| − 1)^p otherwise,

where C is a hypercube resulting from a space tiling and ω ∈ Ω. Figure 5.4 shows an example of such indices for p = 2 with α = 2 (Figure 5.4a) and α = 4 (Figure 5.4b).
Note that the highlighted square with index 0 contains the reference point ω. Also note that our indexing scheme is symmetric with respect to C(ω). Thus, it is sufficient to prove the subsequent lemmas for hypercubes C such that ci > c(ω)i. In Figure 5.4, this is the upper right portion of the indexed space with the gray background. Finally, note that the maximal index that a hypercube can achieve is n(α − 1)^p, as max|ci − c(ω)i| = α per construction of H(ω, θ).
The indexing scheme proposed above guarantees the following:
Lemma 1
If index(C, ω) = x, then δ(ω, ω′)^p ≥ xΔ^p holds for all points ω′ ∈ C.

Proof
This lemma is a direct implication of the construction of the index. index(C, ω) = x implies that

Σ_{i=1}^{n} (ci − c(ω)i − 1)^p = x.

Now, given the definition of the coordinates of a cube (Equation (5.3)), the following holds for all ω′ ∈ C and all i:

|ω′i − ωi| ≥ (ci − c(ω)i − 1)Δ.

Consequently,

δ(ω, ω′)^p = Σ_{i=1}^{n} |ω′i − ωi|^p ≥ Σ_{i=1}^{n} (ci − c(ω)i − 1)^p Δ^p = xΔ^p.
Approach
The main insight behind HR3 is that in spaces with Minkowski distances, the indexing scheme proposed above allows one to safely (i.e., without
dismissing correct matches) discard more hypercubes than when using simple space tiling. More specifically:

Lemma 2
If index(C, ω) > α^p, then C cannot contain any point t such that δ(ω, t) ≤ θ.

Proof
This lemma follows directly from Lemma 1, as index(C, ω) > α^p implies that, for all t ∈ C, δ(ω, t)^p ≥ index(C, ω)Δ^p > α^pΔ^p = θ^p.

For the purpose of illustration, let us consider the example of α = 4 and p = 2 in the two-dimensional case displayed in Figure 5.4b. Lemma 2 implies that a hypercube C18 with index 18 cannot contain any element t such that δ(s, t) ≤ θ, where s is the instance located at ω. While space tiling would discard all black cubes in Figure 5.4b but include the elements of C18 as candidates, HR3 discards them and still computes exactly the same results, yet with a better (i.e., smaller) RRR.

Figure 5.4
Space tiling and resulting index for a two-dimensional example. Each subfigure shows the index assigned to every hypercube in the same portion of space: (a) α = 2, (b) α = 4. Cubes that share a coordinate with C(ω) receive index 0, and the indices grow with the distance from C(ω). Note that the index in both subfigures was generated for exactly the same portion of space. The black dot stands for the position of ω.
One of the direct consequences of Lemma 2 is that n(α − 1)^p > α^p is a necessary and sufficient condition for HR3 to achieve a better RRR than simple space tiling. This is simply due to the fact that the largest index that can be assigned to a hypercube is Σ_{i=1}^{n} (α − 1)^p = n(α − 1)^p. Now, if n(α − 1)^p > α^p, then this cube can be discarded. For p = 2 and n = 2, for example, this condition is satisfied for α ≥ 4. Knowing this inequality is of great importance when deciding when to use HR3, as discussed in the “Evaluation” section.
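A minimal sketch of the indexing and filtering step is given below. It follows the index function stated above (and the values in Figure 5.4) and is meant as an illustration of the idea, not as a faithful reimplementation of HR3 or LIMES.

# HR3-style cube filtering: a cube C (given by its integer coordinates) is kept
# only if index(C, omega) <= alpha**p; by Lemma 2, discarded cubes cannot
# contain any point within distance theta of omega.
def hr3_index(cube, cube_omega, p=2):
    offsets = [abs(ci - coi) for ci, coi in zip(cube, cube_omega)]
    if 0 in offsets:                       # C shares a coordinate with C(omega)
        return 0
    return sum((d - 1) ** p for d in offsets)

def keep_cube(cube, cube_omega, alpha, p=2):
    return hr3_index(cube, cube_omega, p) <= alpha ** p

# For alpha = 4 and p = 2 (cf. Figure 5.4b): a cube at offset (5, 2) has index
# (5-1)**2 + (2-1)**2 = 17 > 16 and is therefore discarded.
print(hr3_index((5, 2), (0, 0)))            # 17
print(keep_cube((5, 2), (0, 0), alpha=4))   # False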
Let H(α, ω) = {C : index(C, ω) ≤ α^p}. H(α, ω) is the approximation of the hypersphere H(ω) = {ω′ : δ(ω, ω′) ≤ θ} that is generated by HR3; its volume is the total volume of the hypercubes it contains. To show that, given any r > 1, the approximation H(α, ω) can always achieve an RRR(HR3) ≤ r, we need to show the following:

Lemma 3
lim_{α→∞} RRR(HR3, α) = 1.
Proof
The cubes that are not discarded by HR3(α) are those for which Σ_{i=1}^{n} (|ci − ci(ω)| − 1)^p ≤ α^p. When α → ∞, Δ becomes infinitesimally small, leading to the cubes being single points. Each cube C thus contains a single point x with coordinates xi = ciΔ; in particular, ci(ω)Δ = ωi. Consequently,

Σ_{i=1}^{n} (|ci − ci(ω)| − 1)^p ≤ α^p  ⟺  Σ_{i=1}^{n} ((|xi − ωi| − Δ)/Δ)^p ≤ α^p.  (5.8)

Σ_{i=1}^{n} ((|xi − ωi| − Δ)/Δ)^p ≤ α^p  ⟺  Σ_{i=1}^{n} (|xi − ωi| − Δ)^p ≤ θ^p.  (5.9)

Σ_{i=1}^{n} (|xi − ωi| − Δ)^p ≤ θ^p ∧ α → ∞  ⟹  Σ_{i=1}^{n} |xi − ωi|^p ≤ θ^p.  (5.10)

Hence, in the limit, HR3 retains exactly the points x with δ(ω, x) ≤ θ, so that C(HR3) converges to Cmin and RRR(HR3, α) converges to 1.
Evaluation
In this section, we present the data and hardware we used to evaluate our
approach. Thereafter, we present and discuss our results.
Experimental Setup
We carried out four experiments to compare HR3 with LIMES 0.5's HYPPO and SILK 2.5.1. In the first and second experiments, we aimed to deduplicate DBpedia places by comparing their names (rdfs:label), minimum elevation, elevation, and maximum elevation. We retrieved 2988 entities that possessed all four properties. We used the Euclidean metric on the last three values with thresholds of 49 and 99 m for the first and second experiments, respectively.
The third and fourth experiments aimed to discover links between Geonames
and LinkedGeoData. Here, we compared the labels (rdfs:label), longitude,
and latitude of the instances. This experiment was of considerably larger scale than the first one, as we compared 74,458 entities from Geonames with 50,031 entities from LinkedGeoData. Again, we measured the runtime necessary to compare the numeric values using the Euclidean metric. We set the distance thresholds to 1 and 9° in experiments 3 and 4, respectively. We ran all experiments on the same Windows 7 Enterprise 64-bit computer with a 2.8 GHz i7 processor and 8 GB RAM. The JVM was allocated 7 GB RAM to ensure that the runtimes were not influenced by swapping. Only one of the processor's cores was used. Furthermore, we ran each experiment three times and report the best runtimes in the following.

Figure 5.5
Approximation generated by HR3 for different values of α. The white squares are selected, whilst the colored ones are discarded. (a) α = 4, (b) α = 8, (c) α = 10, (d) α = 25, (e) α = 50, and (f) α = 100.
Results
We first measured the number of comparisons required by HYPPO and HR3 to complete the tasks at hand (see Figure 5.6). Note that we could not carry out this part of the evaluation for SILK 2.5.1, as it would have required altering the code of the framework. In experiments 1, 3, and 4, HR3 reduces the overhead in comparisons (i.e., the number of unnecessary comparisons divided by the number of necessary comparisons) from approximately 24% for HYPPO to approximately 6% (granularity α = 32). In experiment 2, the overhead is reduced from 4.1% to 2%. This difference in overhead reduction is mainly due to the data clustering around certain values and the clusters having a radius between 49 and 99 m. Thus, running the algorithms with a threshold of 99 m led to only a small a priori overhead, and HYPPO performed remarkably well. Still, even on such data distributions, HR3 was able to discard even more data and to reduce the number of unnecessary computations by more than 50% in relative terms. In the best case (experiment 4, α = 32, see Figure 5.6d), HR3 required approximately 4.13 × 10^6 fewer comparisons than HYPPO. Even for the smallest setting (experiment 1, see Figure 5.6a), HR3 still required 0.64 × 10^6 fewer comparisons.
We also measured the runtimes of SILK, HYPPO, and HR3. The best runtimes of the three algorithms for each of the tasks are reported in Figure 5.7. Note that SILK's runtimes were measured without the indexing time, as data fetching and indexing are merged into one process in SILK. Also note that in the second experiment, SILK did not terminate due to its higher memory requirements. We approximated SILK's runtime by extrapolating the approximately 11 min it required for 8.6% of the computation before the RAM was filled. Again, we did not consider the indexing time.
Because of the considerable difference in runtime (approximately two orders of magnitude) between HYPPO and SILK, we report solely HYPPO's and HR3's runtimes in the detailed runtime plots (Figure 5.8a,b). Overall, HR3 outperformed the other two approaches in all experiments, especially for
Figure 5.6
Number of comparisons for HR3 and HYPPO. (Each of the four panels, one per experiment, plots the number of comparisons against the granularity α ∈ {2, 4, 8, 16, 32} for HYPPO, HR3, and the minimum number of comparisons.)
Figure 5.7
Comparison of the runtimes of HR3, HYPPO, and SILK 2.5.1. (Runtimes in seconds, on a logarithmic scale, for experiments 1–4.)
Discrepancy
In this section, we address the lack of coherence that comes about when integrating data from several knowledge bases and using them within one application. Here, we present CaRLA, the Canonical Representation Learning Algorithm [19]. This approach addresses the discrepancy problem by learning canonical (also called conform) representations of data-type property values. To achieve this goal, CaRLA implements a simple, time-efficient, and accurate learning approach. We present two versions of CaRLA: a batch learning version and an active learning version. The batch learning approach relies on a training data set to derive rules that can be used to generate conform representations of property values. The active version of CaRLA (aCaRLA) extends CaRLA by computing unsure rules and retrieving highly informative candidates for annotation that allow one to validate or refute these rules. One of the main advantages of CaRLA is that it can be configured to learn transformations at character, n-gram, or even word level. By these means, it can be used to improve integration and link discovery processes based on string similarity/distance measures ranging from character-based (edit distance) and n-gram-based (q-grams) to word-based (Jaccard similarity) approaches.
Figure 5.8
Comparison of runtimes and RRR of HR3 and HYPPO. (a) Runtimes for experiments 1 and 2, (b) runtimes for experiments 3 and 4, (c) RRR for experiments 1 and 2, and (d) RRR for experiments 3 and 4.
Preliminaries
In the following, we define the terms and notation necessary to formalize the approach implemented by CaRLA. Let s ∈ Σ* be a string over an alphabet Σ. A tokenization function token: Σ* → 2^A maps a string to the set of tokens from a token alphabet A that it consists of (for example, by splitting it at whitespace characters).
A transformation rule is a function r: A → A that maps a token from the alphabet A
to another token of A.
In the following, we will denote transform rules by using an arrow nota-
tion. For example, the mapping of the token “Alan” to “A.” will be denoted
by <“Alan” → “A.” >. For any rule r = <x → y > , we call x the premise and y the
consequence of r. We call a transformation rule trivial when it is of the form
<x → x> with x ∈ A. We call two transformation rules r and r′ inverse to each
other when r = <x → y > and r′ = <y → x>. Throughout this work, we will
assume that the characters that make up the tokens of A belong to Σ ∪ {ε},
where ε stands for the empty character. Note that we will consequently
denote deletions by rules of the form <x → ε > , where x ∈ A.
Let Γ be the set of all rules. Given a weight function w:Γ → ℝ , a weighted transfor-
mation rule is the pair (r,w(r)), where r ∈ Γ is a transformation rule.
Given a set R of (weighted) transformation rules and a string s, we call the function
φR:Σ* → Σ* ∪ {ε} a transformation function when it maps s to a string φR(s) by
applying all rules ri ∈ R to every token of token(s) in an arbitrary order.
For example, the set R = {<“Alan” → “A.”>} of transformation rules would
lead to φR (“James Alan Hetfield”) = “James A. Hetfield”.
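These definitions translate almost directly into code; the following sketch uses a whitespace word tokenizer and applies a rule set to a string (the helper names are our own, not part of the chapter).

# Word-level tokenization and application of transformation rules: each rule
# maps a token to another token (or to the empty string, which deletes it).
def tokenize(s):
    return s.split()

def apply_rules(s, rules):
    # rules: dict mapping premise -> consequence ("" encodes the empty token)
    tokens = [rules.get(tok, tok) for tok in tokenize(s)]
    return " ".join(tok for tok in tokens if tok != "")

rules_R = {"Alan": "A."}
print(apply_rules("James Alan Hetfield", rules_R))   # "James A. Hetfield"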
CaRLA
The goal of CaRLA is two-fold: First, it aims to compute rules that allow deriving conform representations of property values. As entities can have several values for the same property, CaRLA also aims to detect a condition under which two property values should be merged during the integration process. In the following, we will assume that two source knowledge bases are to be integrated into one. Note that our approach can be used for any number of source knowledge bases.
Formally, CaRLA addresses the problem of finding the required transformation rules by computing an equivalence relation e between pairs of property values (p1, p2), that is, such that e(p1, p2) holds when p1 and p2 should be mapped to the same canonical representation p. CaRLA computes e by generating two sets of weighted transformation rules R1 and R2 such that, for a given similarity function σ, e(p1, p2) → σ(φR1(p1), φR2(p2)) ≥ θ, where θ is a similarity threshold. The canonical representation p is then set to φR1(p1). The similarity condition σ(φR1(p1), φR2(p2)) ≥ θ is used to distinguish between the pairs of property values that should be merged.
To detect R1 and R2, CaRLA assumes two training data sets P and N,
of which N can be empty. The set P of positive training examples is com-
posed of pairs of property value pairs (p1, p2) such that e(p1, p2) holds. The
set N of negative training examples consists of pairs (p1,p2) such that e(p1, p2)
does not hold. In addition, CaRLA assumes being given a similarity func-
tion σ and a corresponding tokenization function token. Given this input,
CaRLA implements a simple three-step approach: It begins by computing
the two sets R1 and R 2 of plausible transformation rules based on the posi-
tive examples at hand (Step 1). Then it merges inverse rules across R1 and R2
and discards rules with a low weight during the rule merging and filtering
step. From the resulting set of rules, CaRLA derives the similarity condition e(p1, p2) → σ(φR1(p1), φR2(p2)) ≥ θ. It then applies these rules to the negative
examples in N and tests whether the similarity condition also holds for the
negative examples. If this is the case, then it discards rules until it reaches a
local minimum of its error function. The retrieved set of rules and the novel
value of θ constitute the output of CaRLA and can be used to generate the
canonical representation of the properties in the source knowledge bases.
In the following, we explain each of the three steps in more detail.
Throughout the explanation, we use the toy example shown in Table 5.2. In
addition, we will assume a word-level tokenization function and the Jaccard
similarity.
Rule Generation
The goal of the rule generation step is to compute two sets of rules R1 and R2 that will underlie the transformations φR1 and φR2, respectively. We begin by tokenizing all positive property values pi and pj such that (pi, pj) ∈ P. We call T1
Table 5.2
Toy Example Data Set
Type Property Value 1 Property Value 2
⊕ “Jean van Damne” “Jean Van Damne (actor)”
⊕ “Thomas T. van Nguyen” “Thomas Van Nguyen (actor)”
⊕ “Alain Delon” “Alain Delon (actor)”
⊕ “Alain Delon Jr.” “Alain Delon Jr. (actor)”
⊖ “Claude T. Francois” “Claude Francois (actor)”
Note: The positive examples are of type ⊕ and the negative of type ⊖.
the set of all tokens of the values pi such that (pi, pj) ∈ P, while T2 stands for the set of all tokens of the corresponding values pj. We
begin the computation of R1 by extending the set of tokens of each pj ∈ T2 by adding ε to it. Thereafter, we compute a score for each candidate rule by means of a rule score function scorefinal. Finally, for each token x ∈ T1, we add the rule r = <x → y> to R1 iff x ≠ y (i.e., r is not trivial) and y = argmax_{y′ ∈ T2} scorefinal(<x → y′>). To compute R2, we simply swap T1 and T2, invert P (i.e., compute the set {(pj, pi) : (pi, pj) ∈ P}), and run through the procedure described above.
For the set P in our example, we obtain the following sets of rules:
R1 = {(<“van” → “Van”>, 2.08), (<“T.” → ε >, 2)} and R2 = {(<“Van” → “van”>,
2.08), (<“(actor)” → ε >, 2)}.
Rule Merging and Filtering
To ensure that the transformation rules lead to similar canonical forms, the rule merging step first discards all rules <x → y> ∈ R2 such that <y → x> ∈ R1 (i.e., rules in R2 that are inverse to rules in R1). Then, low-weight rules are discarded. The idea here is that if there is not enough evidence for a rule, it might just be a random event. The initial similarity threshold θ for the similarity condition is finally set to

θ = min_{(p1, p2) ∈ P} σ(φR1(p1), φR2(p2)).
Rule Falsification
The aim of the rule falsification step is to detect a set of transformations that
lead to a minimal number of elements of N having a similarity superior to θ
via σ. To achieve this goal, we follow a greedy approach that aims to minimize the magnitude of the set

E = {(p1, p2) ∈ N : σ(φR1(p1), φR2(p2)) ≥ θ}, with θ = min_{(p1, p2) ∈ P} σ(φR1(p1), φR2(p2)).  (5.14)
Our approach simply tries to discard all rules that apply to elements of E
by ascending score. If E is empty, then the approach terminates. If E does
not get smaller, then the change is rolled back and the next rule is tried.
Else, the rule is discarded from the set of final rules. Note that discarding a
rule can alter the value of θ and thus E. Once the set E has been computed,
CaRLA concludes its computation by generating a final value of the thresh-
old θ.
In our example, two rules apply to the element of N. After discarding the
rule <”T.” → ε >, the set E becomes empty, leading to the termination of the
rule falsification step. The final sets of rules are thus R1 = {<“van” → “Van”>} and R2 = {<“(actor)” → ε>}. The value of θ is computed to be 0.75. Table 5.3 shows the canonical property values for our toy example. Note that this threshold allows the pairs in N to be rejected, that is, they are not treated as equivalent property values.
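The numbers in this example can be checked with a few lines of Python; the following sketch (word-level tokens, Jaccard similarity, helper names of our own choosing) recomputes θ and the similarity condition for the final rule sets.

# Verify the example: apply the final rule sets, compute theta as the smallest
# Jaccard similarity over the positive pairs, and check that the negative pair
# falls below it.
def jaccard(a, b):
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

def apply_rules(s, rules):
    tokens = [rules.get(tok, tok) for tok in s.split()]
    return " ".join(t for t in tokens if t != "")

R1 = {"van": "Van"}
R2 = {"(actor)": ""}
P = [("Jean van Damne", "Jean Van Damne (actor)"),
     ("Thomas T. van Nguyen", "Thomas Van Nguyen (actor)"),
     ("Alain Delon", "Alain Delon (actor)"),
     ("Alain Delon Jr.", "Alain Delon Jr. (actor)")]
N = [("Claude T. Francois", "Claude Francois (actor)")]

theta = min(jaccard(apply_rules(p1, R1), apply_rules(p2, R2)) for p1, p2 in P)
print(theta)                                              # 0.75
print([jaccard(apply_rules(p1, R1), apply_rules(p2, R2)) >= theta
       for p1, p2 in N])                                  # [False]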
It is noteworthy that by learning transformation rules, we also found an
initial threshold θ for determining the similarity of property values using
σ as similarity function. In combination with the canonical forms com-
puted by CaRLA, the configuration (σ, θ) can be used as an initial configu-
ration for Link Discovery frameworks such as LIMES. For example, the
Table 5.3
Canonical Property Values for Our Example Data Set
Property Value 1           Property Value 2               Canonical Value
“Jean van Damne”           “Jean Van Damne (actor)”       “Jean Van Damne”
“Thomas T. van Nguyen”     “Thomas Van Nguyen (actor)”    “Thomas T. Van Nguyen”
“Alain Delon”              “Alain Delon (actor)”          “Alain Delon”
“Alain Delon Jr.”          “Alain Delon Jr. (actor)”      “Alain Delon Jr.”
“Claude T. Francois”                                      “Claude T. Francois”
                           “Claude Francois (actor)”      “Claude Francois”
smallest Jaccard similarity for the pairs of property values in our example is 1/3, leading to a precision of 0.71 for a recall of 1 (F-measure: 0.83). Yet, after the computation of the transformation rules, we reach an F-measure of 1 with a threshold of 1. Consequently, the pair (σ, θ) can be used to determine an initial classifier for approaches such as the RAVEN algorithm [24].
Evaluation
Experimental Setup
In the experiments reported in this section, we evaluated CaRLA in two ways: First, we aimed to measure how well CaRLA could reproduce transformations created by experts. To achieve this goal, we retrieved transforma-
tion rules from four link specifications defined manually by experts within
the LATC project.* An overview of these specifications is given in Table 5.4.
Each link specification aimed to compute owl:sameAs links between enti-
ties across two knowledge bases by first transforming their property values
and by then computing the similarity of the entities based on the similar-
ity of their property values. For example, the computation of links between
films in DBpedia and LinkedMDB was carried out by first applying the rule set R1 = {<“(film)” → ε>} to the labels of films in DBpedia and R2 = {<“(director)” → ε>} to the labels of their directors.
erty values of the interlinked entities and measured how fast CaRLA was
able to reconstruct the set of rules that were used during the Link Discovery
process.
In addition, we quantified the quality of the rules learned by CaRLA. In each experiment, we computed the boost in the precision of the mapping of property pairs with and without the rules derived by CaRLA. The initial precision was computed as |P|/|M|, where M = {(pi, pj) : σ(pi, pj) ≥ min_{(p1, p2) ∈ P} σ(p1, p2)}. The precision after applying CaRLA's results was computed as |P|/|M′|, where M′ = {(pi, pj) : σ(φR1(pi), φR2(pj)) ≥ min_{(p1, p2) ∈ P} σ(φR1(p1), φR2(p2))}. Note that in both cases the recall was 1, given that ∀(pi, pj) ∈ P : σ(pi, pj) ≥ min_{(p1, p2) ∈ P} σ(p1, p2). In all experiments, we used the Jaccard similarity metric and a word tokenizer with κ = 0.8. All runs were carried out on a notebook running Windows 7 Enterprise with 3 GB RAM and an Intel Dual Core 2.2 GHz processor. Each of the algorithms was run five times. We report the rules that were discovered by the algorithms and the number of runs within which they were found.
Table 5.4
Overview of the Data Sets
Experiment   Source    Target      Source Property   Target Property   Size
Actors       DBpedia   LinkedMDB   rdfs:label        rdfs:label        1172
Directors    DBpedia   LinkedMDB   rdfs:label        rdfs:label        7353
Movies       DBpedia   LinkedMDB   rdfs:label        rdfs:label        9859
Producers    DBpedia   LinkedMDB   rdfs:label        rdfs:label        1540
* http://latc-project.eu
Table 5.5
Overview of Batch Learning Results
Experiment R1 P5 P10 P20 P50 P100
Actors <“@en” → “(actor)”> 1 1 1 1 1
Directors <“@en” → “(director)”> 1 1 1 1 1
<“(filmmaker)” → “(director)”>* 0 0 0 0 0.2
Directors_clean <“@en” → “(director)”> 1 1 1 1 1
Movies <“@en” → ε > 1 1 1 1 1
<“(film)” → ε > 1 1 1 1 1
<“film)” → ε >* 0 0 0 0 0.6
Movies_clean <“@en” → ε > 1 1 1 1 1
<“(film)” → ε > 0 0.8 1 1 1
<“film)” → ε >* 0 0 0 0 1
Producers <“@en” → (producer)> 1 1 1 1 1
Figure 5.9
Comparison of the precision and thresholds with and without CaRLA. (a) Comparison of the precision with and without CaRLA. (b) Comparison of the thresholds with and without CaRLA. (Both panels compare the baseline with CaRLA on the Actors, Directors, Directors_clean, Movies, Movies_clean, and Producers data sets.)
can improve the precision of the mapping of property values even on the
noisy data sets.
Interestingly, when used on the Movies data set with a training data set of size 100, our framework learned low-confidence rules such as <“(1999” → ε>, which were nevertheless discarded due to a too low score. These are the cases where aCaRLA displays its superiority. Thanks to its ability to ask for annotations when faced with unsure rules, aCaRLA is able to validate or refute them. As the results on the Movies example show,
Table 5.6
Overview of Active Learning Results
Experiment R1 P5 P10 P20 P50 P100
Actors <“@en” → “(actor)”> 1 1 1 1 1
Directors <“@en” → “(director)”> 1 1 1 1 1
<“(actor)” → “(director)”>* 0 0 0 0 1
Directors_clean <“@en” → “(director)”> 1 1 1 1 1
Movies <“@en” → ε > 1 1 1 1 1
<“(film)” → ε > 1 1 1 1 1
<“film)” → ε >* 0 0 0 0 1
<“(2006” → ε >* 0 0 0 0 1
<“(199” → ε >* 0 0 0 0 1
Movies_clean <“@en” → ε > 1 1 1 1 1
<“(film)” → ε > 0 1 1 1 1
<“film)” → ε >* 0 0 0 0 1
Producers <“@en” → (producer)> 1 1 1 1 1
Conclusion
In this chapter, we introduced a number of challenges arising in the context of Linked Data in enterprise integration. A crucial prerequisite for addressing these challenges is to establish efficient and effective link discovery and data integration techniques that scale to the large-scale data scenarios found in enterprises. We addressed the transformation and linking steps of Linked Data integration by presenting two algorithms, HR3 and CaRLA. We proved that HR3 is optimal with respect to its reduction ratio by showing
References
1. S. Auer, J. Lehmann, and S. Hellmann. LinkedGeoData: Adding a spatial
dimension to the Web of Data. The Semantic Web-ISWC 2009, pp. 731–746,
2009.
2. F. C. Botelho and N. Ziviani. External perfect hashing for very large key sets. In
CIKM, pp. 653–662, 2007.
3. T. Bray, J. Paoli, C. M. Sperberg-McQueen, E. Maler, and F. Yergeau. Extensible
Markup Language (XML) 1.0 (Fifth Edition). W3C, 2008.
4. J. Clark and M. Makoto. RELAX NG Specification. Oasis, December 2001. http://
www.oasis-open.org/committees/relax-ng/spec-20011203.html.
5. V. Eisenberg and Y. Kanza. D2RQ/update: Updating relational data via virtual
RDF. In WWW (Companion Volume), pp. 497–498, 2012.
6. M. G. Elfeky, A. K. Elmagarmid, and V. S. Verykios. Tailor: A record linkage tool
box. In ICDE, pp. 17–28, 2002.
7. J. Euzenat and P. Shvaiko. Ontology Matching. Springer-Verlag, Heidelberg,
2007.
8. D. A. Ferrucci, E. W. Brown, J. Chu-Carroll, J. Fan, D. Gondek, A. Kalyanpur,
A. Lally et al. Building Watson: An overview of the deepQA project. AI Magazine,
31(3):59–79, 2010.
9. A. Halevy, A. Rajaraman, and J. Ordille. Data integration: The teenage years. In
Proceedings of the 32nd International Conference on Very Large Data Bases (VLDB’06),
pp. 9–16. VLDB Endowment, 2006.
10. A. Hogan, A. Harth, and A. Polleres. Scalable authoritative OWL reasoning for
the web. International Journal on Semantic Web and Information Systems (IJSWIS),
5(2):49–90, 2009.
11. P. Jaccard. Étude comparative de la distribution florale dans une portion des
Alpes et des Jura. Bulletin de la Société Vaudoise des Sciences Naturelles, 37:547–
579, 1901.
12. R. Jelliffe. The Schematron—An XML Structure Validation Language using Patterns
in Trees. ISO/IEC 19757, 2001.
13. R. Kimball and J. Caserta. The Data Warehouse ETL Toolkit: Practical Techniques for
Extracting, Cleaning, Conforming, and Delivering Data. Wiley, Hoboken, NJ, 2004.
14. M. Krötzsch, D. Vrandečić, and M. Völkel. Semantic MediaWiki. The Semantic
Web-ISWC 2006, pp. 935–942, 2006.
6
Scalable End-User Access to Big Data
Contents
Data Access Problem of Big Data....................................................................... 206
Ontology-Based Data Access.............................................................................. 207
Example............................................................................................................ 209
Limitations of the State of the Art in OBDA................................................ 212
Query Formulation Support............................................................................... 215
Ontology and Mapping Management.............................................................. 219
Query Transformation.........................................................................................222
Time and Streams.................................................................................................225
Distributed Query Execution............................................................................. 231
Conclusion............................................................................................................ 235
References.............................................................................................................. 236
This chapter proposes steps toward the solution to the data access problem that
end-users typically face when dealing with Big Data:
(Illustration: the simple case, in which an engineer accesses uniform sources through an application with predefined queries.)
In these situations, the turnaround time, by which we mean the time from
when the end-user delivers an information need until the data are there, will
typically be in the range of minutes, maybe even seconds, and Big Data tech-
nologies can be deployed to dramatically reduce the execution time for queries.
Situations where users need to explore the data using ad hoc queries are
considerably more challenging, since accessing relevant parts of the data
typically requires in-depth knowledge of the domain and of the organiza-
tion of data repositories. It is very rare that the end-users possess such skills
themselves. The situation is rather that the end-user needs to collaborate
* See http://www.optique-project.eu/
with an IT-skilled person in order to jointly develop the query that solves the
problem at hand, illustrated in the figure below:
(Illustration: the complex case, in which an engineer's information need is translated by an IT expert into specialized queries against disparate sources, contrasted with the Optique solution, in which the end-user formulates flexible, ontology-based queries in an application and these are automatically translated into queries against the disparate sources.)
Figure 6.1
The basic setup for OBDA.
also, in some cases, become dramatically larger than the original ontology-
based query formulated by the end-user.
The theoretical foundations of OBDA have been thoroughly investigated
in recent years (Möller et al., 2006; Calvanese et al., 2007a,b; Poggi et al.,
2008). There is a very good understanding of the basic mechanisms for
query rewriting, and the extent to which expressivity of ontologies can be
increased, while maintaining the same theoretical complexity as is exhibited
by standard relational database systems.
Also, prototypical implementations exist (Acciarri et al., 2005; Calvanese
et al., 2011), which have been applied to minor industrial case studies (e.g.,
Amoroso et al., 2008). They have demonstrated the conceptual viability of the
OBDA approach for industrial purposes.
There are several features of a successful OBDA implementation that lead
us to believe that it is the right basic approach to the challenges of end-user
access to Big Data:
Example
We will now present a (highly) simplified example that illustrates some of the
benefits of OBDA and explains how the technique works. Imagine that an engi-
neer working in the power generation industry wants to retrieve data about
generators that have a turbine fault. The engineer is able to formalize this infor-
mation need, possibly with the aid of a suitable tool, as a que