


© 2020 IEEE. This is the author’s version of the article that has been published in IEEE Transactions on Visualization and
Computer Graphics. The final version of this record is available at: 10.1109/TVCG.2020.3030435

Data Visceralization: Enabling Deeper Understanding of Data Using Virtual Reality

Benjamin Lee, Dave Brown, Bongshin Lee, Christophe Hurter, Steven Drucker, and Tim Dwyer

Fig. 1. Prototypes of data visceralizations in VR based on popular representations of physical measurements. (a) Scorecard of results in seconds from the Olympic Men's 100 m. (b) Data visceralization equivalent, allowing the user to experience Olympic sprint speeds at one-to-one scale. (c) Comparison diagram of tall skyscrapers (© Saggittarius A, CC BY-SA 4.0). (d) Data visceralization equivalent, allowing the user to experience and compare select skyscrapers at true one-to-one scale.

Abstract— A fundamental part of data visualization is transforming data to map abstract information onto visual attributes. While this
abstraction is a powerful basis for data visualization, the connection between the representation and the original underlying data (i.e.,
what the quantities and measurements actually correspond with in reality) can be lost. On the other hand, virtual reality (VR) is being
increasingly used to represent real and abstract models as natural experiences to users. In this work, we explore the potential of using
VR to help restore the basic understanding of units and measures that are often abstracted away in data visualization in an approach
we call data visceralization. By building VR prototypes as design probes, we identify key themes and factors for data visceralization.
We do this first through a critical reflection by the authors, then by involving external participants. We find that data visceralization is an
engaging way of understanding the qualitative aspects of physical measures and their real-life form, which complements analytical and
quantitative understanding commonly gained from data visualization. However, data visceralization is most effective when there is a
one-to-one mapping between data and representation, with transformations such as scaling affecting this understanding. We conclude
with a discussion of future directions for data visceralization.
Index Terms—Data visceralization, virtual reality, exploratory study

1 INTRODUCTION
Communicating information using stories that employ data visualization has been explored extensively in recent years [48, 49]. A fundamental part of data visualization is processing and transforming raw data, ultimately mapping this abstracted information into attributes represented in a visualization [11, 17]. This abstraction, while powerful and in many cases necessary, poses a limitation for data based on physical properties, where the process of measurement causes the connection between the visualization and the underlying 'meaning' of the data to be lost (i.e., what the data truly represents in the real-world). While techniques in data-driven storytelling (e.g., [52]) can help establish context and resolve ambiguity in these cases, these techniques do little to help people truly understand the underlying data itself. A common approach used to help improve comprehension of these measures is by using concrete scales [13]—the association of physical measurements and quantities with more familiar objects. However, this often relies on prior knowledge and requires cognitive effort to effectively envision the desired mental imagery.

• Benjamin Lee and Tim Dwyer are with Monash University. Email: {benjamin.lee1,tim.dwyer}@monash.edu.
• Dave Brown, Bongshin Lee, and Steven Drucker are with Microsoft Research. Email: {dave.brown,bongshin,sdrucker}@microsoft.com.
• Christophe Hurter is with ENAC, French Civil Aviation University. Email: [email protected].

Manuscript received xx xxx. 201x; accepted xx xxx. 201x. Date of Publication xx xxx. 201x; date of current version xx xxx. 201x. For information on obtaining reprints of this article, please send e-mail to: [email protected]. Digital Object Identifier: xx.xxxx/TVCG.201x.xxxxxxx

To complement these approaches in data visualization and storytelling, we introduce data visceralization, which we define as a data-driven experience which evokes visceral feelings within a user to facilitate intuitive understanding of physical measurements and quantities. By visceral, we mean a "subjective sensation of being there in a scene depicted by a medium, usually virtual in nature" [3, 4]. To illustrate this

concept, consider the scenarios depicted in Fig. 1(a,c). Many of us are familiar with 100 m sprint times, but understanding the 'ground truth'—how fast they are actually running—is elusive. Similarly, images or diagrams showcasing tall buildings such as skyscrapers are common, but mentally envisioning what these actually look like without prior knowledge is challenging. In these scenarios, only by seeing the actual sprinter or building can this truth be achieved. In a sense, seeing—or more generally, experiencing—is believing. With virtual reality (VR) technology rapidly advancing and becoming more readily available, it offers an unprecedented opportunity for achieving these visceral experiences in a manner that is both cost effective and compelling. As depicted in Fig. 1(b,d), we can now simulate what it's like to have Olympic sprinters run right past you, or the feeling of being next to one of these skyscrapers. From these experiences, a thorough understanding of these measures and quantities may be achieved, with data visualization and visceralization existing hand-in-hand to provide both quantitative (i.e., analytical reasoning) and qualitative (i.e., the ground truth) understanding of the data (Fig. 2).

Fig. 2. Our conceptual data visceralization pipeline in relation to the information visualization pipeline [11]. Both run in parallel to each other, with data visceralization aimed at complementing data visualization.

In this work, we explore this concept of data visceralization. We develop six VR prototypes as design probes based on existing data stories and visualizations specifically chosen to explore a range of different measures and phenomena. We critically reflect on these design probes, identifying key themes and factors for data visceralization such as: the appropriate types of measures and quantities; the ranges of magnitudes of physical phenomena that are suitable; and the situations where they are effective or not. We expand this reflection through sessions with external participants to gain feedback on the value and intricacies of data visceralization. We conclude by discussing multiple aspects of data visceralizations, along with future work in the area.

In summary, the main contributions of this paper are:
• The introduction of the novel concept of data visceralization, and its applications for understanding the data that underlies visualizations
• A set of prototype examples to demonstrate the concept and characterize the experiences
• An exploration into the factors and considerations behind data visceralization, through both a critical reflection by the authors and external feedback from participants

2 RELATED WORK
While our work focuses on virtual reality (VR), the concept of data visceralization can be applied much more broadly. There are other non-VR oriented methods of helping people understand a unit or measure, such as scale models, immersive IMAX films, museum exhibits [41], or first-hand experiences. In this section, we discuss data visceralization in the context of other related fields.

2.1 (Immersive) Data-Driven Storytelling
Segel and Heer [49] coined the term narrative visualization in 2010. Since then, researchers have explored the design patterns of specific genres of narrative visualizations, such as Amini et al. [1] with data videos and Bach et al. [2] with data comics. In contrast, work by Stolper et al. [52] characterized the range of recently emerging techniques used in narrative visualization as a whole. With the increased use of VR and augmented reality (AR) devices for the purposes of data visualization and analytics (known as immersive analytics [38]) and for storytelling in general [10], it is feasible to begin considering how these devices can be used for immersive data-driven storytelling [25, 33]. Recent work by Ren et al. [46] has investigated the creation of immersive data stories, and work by Bastiras et al. [5] their effectiveness, but these resort to using simple 2D images in a VR environment rather than taking full advantage of the device's capabilities. A VR piece from the Wall Street Journal [30] remains one of the few compelling examples of an immersive data story, using a time series of the NASDAQ's price/earnings ratio over 21 years as a roller coaster track which the reader then rides from one end to the other. The story particularly focuses on the sudden fall in the index as the dot-com bubble began to burst in 2000, having readers experience this metaphorical fall as a literal roller coaster drop in VR. In this work, we examine the use of immersive environments to achieve similar effects of transforming data into visceral experiences, but in a complementary fashion to existing data stories. That is, we focus on aiding the understanding of the underlying data itself through the use of VR, while the narrative and visualizations of the story set the context, background, and messaging.

2.2 Concrete Scales and Personalization
The use of concrete scales is a popular technique used to aid in comprehending unfamiliar units and extreme magnitudes by comparison to more familiar objects. To formalize this technique, Chevalier et al. [13] collected and analyzed over 300 graphic compositions found online. They derived a taxonomy of object types and measure relations, and identified common strategies such as using analogies of pairwise comparison and juxtaposing multiples of smaller objects together. They discussed the need and challenges of choosing good units, a problem which Hullman et al. [24] addressed by automatically generating different re-expression units which may be more familiar to the user. Concrete scales fundamentally assist in building mental models of scale. In contrast, we use data visceralization to directly represent the measure that is being conveyed to the user, effectively skipping this process altogether.

2.3 Data Physicalization
As compared to information visualization, which maps data into visual marks and variables [11, 17], data physicalization explores how data can be encoded and presented in tangible, physical artifacts through their geometric and/or material properties [26, 27]. Although users can directly experience data in a unique physicalized manner, it still fundamentally transforms and remaps abstract data into tangible experiences, often in equivalents of common visualization types [15, 26, 53]. While this tangibility may provide benefits for memory or engagement, the focus is on some higher-level conceptual data rather than the physical property or measure itself. Data physicalization could be well suited for creating visceral experiences if attributes are represented without transformation, as any physical phenomenon can theoretically be fabricated and subsequently experienced. However, this would be resource intensive and heavily dependent on advances in technology. Indeed, many museum exhibits construct one-to-one mappings of data phenomena so that people can understand the underlying data in representation, but such exhibits are expensive and can only be experienced by visiting them. Therefore, technologies such as VR are well suited for visceralization, overcoming barriers such as fabrication cost and physical space restrictions through the use of virtual locomotion techniques [32].

2.4 Immersion and Presence in VR
VR and immersive technologies as a whole have been available for many decades, and have been extensively studied for their impact on

human perception. Core to these technologies are the notions of immer-
sion and presence, where immersion is a characteristic of the technology
that enables a vivid illusion of reality to the user, and presence is the
state of consciousness of being in the virtual environment [50,51]. Pres-
ence is a form of visceral communication that is primal but also difficult
to describe [28]. This notion of presence and viscerally believing that
the virtual world is real is what we aim to leverage with visceraliza-
tion. VR devices have proven effective enough in doing this that they
have been used in applications such as psychological treatment [54],
journalism [14], and military training [34]. Moreover, VR and AR have
been used in scenarios where spatial representations of 3D objects are
necessary, such as in 3D modeling [39] and medical imaging [59].

2.5 Human Perceptual Psychology and Psychophysics


The sense of presence within VR draws on principles of Gibsonian
psychology [19] which ties human perception and movement to the
overall comprehension of the environment. When the scene changes as
a result of a change in our head position, we perceive that as movement
through an environment. Given the focus on the stimuli which simulate
the physical measurements and quantities in data visceralization, we
draw upon high level concepts from the extensive field of psychophysics
[21], most notably the notion of human perceptual limits and how
this may impact the ranges of stimuli used. We use this notion to systematically scope our design probes in the next section.

Fig. 3. Illustrative images of E1. (a) 'Photo finish' of Olympic sprinters (adapted from [45]). (b) Overview of sprinters running down the track at real scale. (c) Same perspective but using human-sized cubes instead of humanoid models. (d) Experiencing the race from the perspective of the slowest sprinter. (e) Sprinters superimposed on top of one another. (f) Floating labels above each sprinter providing quantitative measures and values.

3 DESIGN PROBES INTO DATA VISCERALIZATION
To explore and better define the concept of data visceralization, we developed a set of VR prototypes using the Unity3D game engine. We critically reflect on these prototypes, with each prototype requiring 1 to 2 weeks to create, test, and critique. These design probes were conducted using an ASUS Windows Mixed Reality Headset (HC102) in an open space free of obstructions. Each prototype was adapted either from existing stories published in online journals and news articles, or from popular data-driven graphics/visualizations. We strove to test a range of different scenarios to explore data visceralizations as much as possible. In this section, we describe six of these design probes, which we refer to as examples for simplicity and abbreviate as E1, E2, E3, E4, E5, and E6. The first half focuses on scenarios with common types and scales of physical phenomena, not requiring any scaling or transformation to be readily perceived in VR. The second half investigates scenarios that required some form of transformation into VR, such as scaling down extreme values and representing abstract measures. While we describe, show pictures, and include video of these examples, one needs to experience them in a VR setup to assess the data visceralization experience. Therefore, we made these available in the supplementary material and on a public GitHub repository¹.

¹ https://github.com/benjaminchlee/Data-Visceralization-Prototypes

E1 – Speed: Olympic Men's 100 m
Usain Bolt holds the world record in the Olympic Men's 100 m with his performance at the 2012 London Olympic Games at a time of 9.63 seconds. One Race, Every Medalist Ever by the New York Times [45] puts Bolt's result into perspective by comparing it with those of all other Olympic medalists since 1896. While this is commonly shown in a tabular layout (Fig. 1a), the story shows the relative distance away from the finish line each sprinter would be when Bolt finishes, highlighting the wide margins between him and the competition. This is shown in a video rendered with 3D computer-generated graphics (Fig. 3a) and an accompanying scatterplot-like visualization. This relative distance helps to quantitatively compare how much faster Bolt is than the rest. However, neither the visualization nor the video gives an accurate notion of just how fast Olympic sprinters can run. While watching it live in person may provide this, most people will not have the opportunity to do so, let alone stand on the track during the event, and it is impossible to resurrect runners from the last 100 years.

We based our prototype on the original story's video in a virtual environment at real-world scale (Fig. 3b). The user can play, pause, and restart the race, causing each sprinter to run down the track at their average speed. We initially used simple human-sized cubes for the sprinters (Fig. 3c), later replacing them with anatomical 3D models; we detail this notion later in Sec. 4.3. Thanks to the flexibility of VR, the race can be experienced from almost any perspective: standing at any position on the track and watching them run past, floating above the track, or even moving with the fastest/slowest sprinter (Fig. 3d), which demonstrates the relative speed between sprinters and provides a glimpse of what it might be like from their perspective. One clear issue early on was that by copying the original story's environment and its open, endless void, it was challenging to properly assess the speed of each sprinter. While it was still possible to make object-relative judgments between sprinters [28], the lack of background meant there was no clear frame of reference for the virtual environment itself. We decided to add a stadium model around the track to resolve this issue, which also aided in immersion and added contextual information to the experience. To further improve awareness, we also looked at adding optional annotations of exact times and speeds above each sprinter's head (Fig. 3e), as well as experimenting with superimposing all sprinters on top of one another (Fig. 3f). The latter was motivated by there being more than 80 lanes in the track, making it difficult to see everything at once. Note that we only re-scaled the width of the entire track for this, keeping the length of the track the same.

E2 – Distance: Olympic Men's Long Jump
Unlike the progressively record-breaking times of the Olympic Men's 100 m into the new millennium, 1968 saw Bob Beamon set the world record for the farthest long jump at 8.9 m (29 ft 2.5 in); he still holds the Olympic record to this day—losing the world record to Mike Powell in 1991 at 8.95 m (29 ft 4.25 in). Told in a similar fashion to the Men's 100 m sprint story with video and visualization [45], Bob Beamon's Long Olympic Shadow [44] highlights not only Beamon's performance relative to his peers, but also the sheer distance that these athletes can jump to begin with: comparable to the distance of a basketball 3-point line (Fig. 4a). This comparison serves to put these distances into perspective, but what if one hasn't played basketball or even set foot on a basketball court? Of course, it is possible to use a more familiar anchor instead [24], but visualizing these distances at real-life scales would allow users to experience them for themselves.

Fig. 4. Illustrative images of E2. (a) Distances of long jumps compared
to a 3-point line (adapted from [44]). (b) Overview of long-jumpers at the
final ‘freeze-frame’ when landing. (c) Experiencing the jump as it happens
from a close angle. (d) Long-jumpers represented as superimposed flags
on the ground with labels on the furthest and closest jumps.

Our VR prototype shares much in common with E1, the key difference being the use of distance rather than speed. While the user can still control the playback of the event, the focus is on the final 'freeze-frame' of each long jumper at the end of their jump (Fig. 4b). The user can change their viewpoint to any position to get a visceral sense of the sheer distance of the long-jumpers (Fig. 4c), which is not possible by just watching the video. As is the case with E1, we also superimpose all of the long-jumpers on top of one another for easier comparison. Given the precision and fine differences of the results in long jump, however, the long-jumpers can instead be viewed as planted flags to more closely judge distances between each other (Fig. 4d).

Fig. 5. Illustrative images of E3. (a) Looking at the Eiffel Tower, CN Tower, and Burj Khalifa at real-life scale from below. (b) Looking at the Space Needle from the top of the Statue of Liberty. (c) Crowds randomly walk around on the ground floor. (d) Shadows cast from one building to another from a light source directed parallel to the ground. (e) Miniaturized 3D version of comparison diagram with y-axis scale. (f) Same setup but from a distant view.
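Both E1 and E2 animate athletes purely from their published results. A minimal sketch of that assumption (our own Python illustration; the prototypes themselves were built in Unity3D, and these function names are hypothetical) shows how a finishing time becomes a constant average speed, and thus a position at any playback moment:

```python
# Illustrative helpers (ours, not from the prototype source): animate each
# sprinter at the average speed implied by distance and finishing time.

RACE_DISTANCE_M = 100.0

def average_speed(finish_time_s: float) -> float:
    """Average speed in m/s over the full race."""
    return RACE_DISTANCE_M / finish_time_s

def position_at(t: float, finish_time_s: float) -> float:
    """Sprinter's distance (m) from the start line, t seconds into the race."""
    return min(RACE_DISTANCE_M, average_speed(finish_time_s) * t)

# Usain Bolt's 2012 London time of 9.63 s implies an average of ~10.38 m/s
bolt_speed = average_speed(9.63)
print(round(bolt_speed, 2))  # prints 10.38

# When Bolt finishes, a hypothetical 10.5 s medalist is ~8.3 m from the line,
# which is the "relative distance" the original story visualizes
gap = RACE_DISTANCE_M - position_at(9.63, 10.5)
print(round(gap, 1))  # prints 8.3
```

This constant-speed model is deliberately simple; real sprinters accelerate out of the blocks, so the animation only matches reality on average.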

E3 – Height: Comparison of Skyscrapers
When comparing physical measurements of related objects or creatures, one common visualization to use is a comparison diagram. This juxtaposes each subject as 2D images or silhouettes, with the y-axis usually showing height. One popular use of this is the comparison of skyscrapers, such as the one shown in Fig. 1c. These are great for understanding relative sizes between objects (e.g., that Mount Elbrus, Kilimanjaro, and Denali are roughly equivalent in elevation), but not the absolute size of each subject (i.e., what it feels like to be at the base of a mountain roughly 6 km or 3.7 mi tall).

We decided to create a VR version of the skyscraper diagram seen in Fig. 1c. We chose skyscrapers because their size can be overwhelming to see and experience, but is still familiar to people living in cities with tall skylines. Our prototype uses 3D models of famous skyscrapers and landmarks (Statue of Liberty, Space Needle, Eiffel Tower, CN Tower, and Burj Khalifa) positioned side-by-side in a similar fashion to the original visualization. These models are scaled as accurately as possible to their real-world counterparts. As with the prior examples, it is possible to move around the environment in ways that are either not available to most people, such as viewing from below (Fig. 5a) or from the top of each skyscraper (Fig. 5b), or in ways that are physically impossible, such as flying around in a 'Superman'-like fashion. We also experimented with adding visual cues to aid in scale perception, such as life-sized people randomly walking at ground level (Fig. 5c) to simulate the feeling of 'seeing people as ants,' and casting shadows from skyscrapers to the ground (Fig. 5d) as an artificial 'ruler.' In addition, we explored two alternative views of the scene. The first was to miniaturize the skyscrapers such that they were approximately 2 m (6 ft 7 in) in size, allowing the user to pick up and re-position them (Fig. 5e). The second was to retain their real-world scale, but position the user far away enough so that the skyscrapers still occupied a similar space in the user's field of view (Fig. 5f). We elaborate on this difference in Sec. 4.6.

E4 – Scale: Solar System
Many graphics, visualizations, and videos exist to educate people on the enormous scale of our solar system. One such example [43] (Fig. 6a) focuses on the surprisingly large distance between the Earth and the Moon. It does so by illustrating how all of the other planets can fit between the two when at their average distance apart. This presented a challenge which we wanted to investigate: the effects of re-scaling and transformation on viscerality in VR.

In our prototype (Fig. 6b), we chose a ratio of 1:40,000,000 (i.e., 1 m in VR = 40,000 km in space), resulting in a real-world distance of approximately 10 m between the Earth and Moon. This struck a balance between being small enough to be able to see all planets reasonably well, but large enough to still be at a reasonable size (Earth at roughly 30 cm (12 in) in diameter). As the intent is to see the other planets comfortably fitting between the Earth and Moon, the user can grab and slide these planets as a group along a single axis to align them all together. Note that for E4, E5, and E6, we detail these challenges in transformation in Sec. 4.6.

Fig. 6. Illustrative images of E4. (a) Reference scene that our prototype is based on (adapted from [43]). (b) Prototype version in VR at a 1:40,000,000 scale.
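The 1:40,000,000 mapping in E4 amounts to a single uniform scale factor. A sketch of the conversion (our own Python illustration with hypothetical names, not code from the prototype; the 384,400 km average Earth-Moon distance and 12,742 km Earth diameter are standard reference values):

```python
# Sketch of the uniform re-scaling used in E4: real astronomical
# distances are mapped into VR metres at a 1:40,000,000 ratio.

SCALE = 1 / 40_000_000  # 1 m in VR = 40,000 km in space

def to_vr_metres(real_km: float) -> float:
    """Convert a real-world distance in kilometres to VR metres."""
    return real_km * 1_000 * SCALE

# Average Earth-Moon distance: 384,400 km -> roughly 10 m in VR
earth_moon_vr = to_vr_metres(384_400)
print(round(earth_moon_vr, 2))  # prints 9.61

# Earth's diameter: 12,742 km -> roughly 30 cm in VR
earth_diameter_vr = to_vr_metres(12_742)
print(round(earth_diameter_vr, 2))  # prints 0.32
```

Both results line up with the dimensions quoted above: a walkable ~10 m Earth-Moon gap, with an Earth roughly 30 cm across.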


Fig. 7. Illustrative images of E5. (a) Birds-eye view of the protest (adapted from [58]). (b) Perspective from inside the crowd. (c) Perspective from above while flying in a helicopter-like fashion. (d) Crowd represented as human-sized cubes.

E5 – Discrete Quantities (of Humans): Hong Kong Protests
Protest marches on Hong Kong's streets beginning in 2019 involved over 2 million people. To help visualize the sheer scale of this event, the New York Times stitched together several birds-eye view photographs from June 16, 2019 to form a vertical composite image in a scrolling format [58] (Fig. 7a). It combines both the small size of each individual person with the seemingly never-ending photographs to accomplish this. This begs the question, however: what is it actually like to be there in person? Is it a noticeably different experience being able to walk throughout a large crowd of people as compared to looking at photos alone, and if so, can we perceive the extreme quantities on display?

Given the technical limitations of rendering upwards of a million animated 3D humanoid models in VR, we recreate only a small section of the protest using 10,000 models. The crowd and surrounding environment are at a real-world scale, and it is possible to move through the crowd to experience being surrounded by many people (Fig. 7b). The user can also choose to fly above the crowd in a similar way to a helicopter, without any physical limitations (Fig. 7c). However, it was clearly apparent that the idle nature of the protesters detracted a lot from the experience, to the point where using static human-sized cubes in place of the models worked just as well to convey quantities (Fig. 7d). We suspect that much more varied models, animated movements, and lighting would go a long way to provide a more visceral experience of being at the protest.

E6 – Abstract Measures: US Debt Visualized
In US Debt Visualized in $100 Bills [20], an incomprehensibly large amount of money ($20+ trillion USD in 2017) is visualized using concrete scales. Starting from single $100 bills and moving to pallets of bills worth $100 million each, the piece culminates in comparing stacks of these pallets with other large objects, such as the Statue of Liberty as seen in Fig. 8a, putting into perspective the amount of debt. However, the true scale of each stack is difficult to properly grasp from the image, let alone without having visited the Statue of Liberty or stood under a construction crane. A similar concept was already investigated in E3, but as money is inherently conceptual, would this transformation from abstract measure to physical measure (i.e., size of $100 bills) influence viscerality?

Our VR prototype closely replicates the original piece, with stacks of pallets of bills surrounding the Statue of Liberty (Fig. 8b), with an updated total of $22+ trillion USD. Each stack comprises 10 × 10 × 100 = 10,000 life-sized pallets of $100 bills, piling up to approximately 114 m in height. Given that both this and E3 primarily convey height, they offer similar points of view, such as from the top of a stack (Fig. 8c) or from the bottom of a stack (Fig. 8d). To quantify some aspects of the experience, however, we add annotations of certain heights of the objects in the scene (Fig. 8d).

Fig. 8. Illustrative images of E6. (a) Reference scene that our prototype is based on (© Oto Godfrey, Demonocracy.info, used with permission). (b) Overview of the scene with the Statue of Liberty in the center. (c) Looking down from the top of a stack. (d) Looking up from ground level and at labels of stack and Statue height.

4 CRITICAL REFLECTION ON DESIGN PROBES
In this section, we go into detail on our observations and critiques of the design probes described in the previous section.

4.1 Perception: Perceptual 'Sweet-spots'
It is clear from E1 and E2 that VR experiences of human-scale data fall into a perceptual sweet-spot for data visceralization. Given that these directly relate to human performance in sport, measures such as running speed and jump length can directly be experienced with VR. E3 began to highlight some of these limits, however, as all skyscrapers simply appear to be tall beyond a certain height, requiring special consideration to mitigate this (Sec. 4.2). However, a given measure being within this sweet-spot did not automatically make it easy to understand every detail of the data. For example, while E2 still conveyed the sense of distance, the values were very similar to each other, making it difficult to make comparisons between athletes.

4.2 Virtuality: Manipulating the Scene to Facilitate Understanding
Because we designed and implemented our prototypes using VR, the virtual world allowed us to manipulate both the virtual objects and the user's viewpoint in ways not possible in real life in order to better facilitate understanding of the chosen phenomena. One example of manipulating objects was the positioning of athletes in E1 and E2. Perspective foreshortening can distort the relative perception of objects in the environment, meaning that comparing athletes on either end of the ≈80 m track was difficult. In the original videos [44, 45] the use of fixed viewpoints, lack of stereopsis, and orthogonal perspectives avoided this issue. Conversely, the use of VR allowed us to manipulate and play with the position of the athletes, superimposing them on top of one another (Fig. 3f) to achieve a similar orthogonal view, therefore making this comparison easier. Likewise, we can manipulate the position of the user and their viewpoint to make it easier to view and understand the phenomena, as can be seen in E3. Certain viewpoints made it difficult to accurately judge and/or make comparisons between skyscrapers, such as when standing near the base of a skyscraper and looking directly straight up, or when the skyscraper was self-occluding (e.g., the Space Needle's observation deck). However, we overcome this by allowing the user to fly and teleport around the scene, going to
the top of any skyscraper they wished. Doing so reduces the distance and angle to the other skyscrapers, making them easier to see. A surprising side effect of this, however, was that it provided more nuanced insights into the data, such as in E1, where moving along with the fastest/slowest sprinter grants both an experience of the speed at which they were moving and the ability to compare and contrast their speed directly with all the others. More broadly, this concept of manipulation to facilitate understanding is prevalent throughout our design probes, such as juxtaposing
skyscrapers from different continents in E3 and moving all planets
closer together in E4 . However, if the goal is strictly to viscerally
understand the data in a manner as close to reality as possible, this may
negatively impact that understanding as these manipulations may be
deemed unrealistic and off-putting to users.
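The perspective foreshortening that motivated superimposing the athletes is easy to quantify: two equally tall athletes subtend very different visual angles at the near and far ends of the ≈80 m track. A minimal sketch (the 1.8 m athlete height and 5 m viewer placement are illustrative assumptions, not values from our prototypes):

```python
import math

# Visual angle subtended by a 1.8 m tall athlete (assumed height), seen by
# a viewer standing 5 m from the near end of an ~80 m track.
ATHLETE_HEIGHT = 1.8  # metres

for distance in (5.0, 85.0):  # near athlete vs. far athlete
    angle = math.degrees(2 * math.atan(ATHLETE_HEIGHT / (2 * distance)))
    print(f"athlete at {distance:4.0f} m subtends {angle:.1f} degrees")
```

The near athlete subtends roughly 20 degrees while the far one subtends just over 1 degree, so judging relative positions and gaps across the track is unreliable; superimposing the athletes places them at equal distance from the viewer and removes this distortion.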
4.3 Realism: The Role of Photorealism and Abstraction

In all of our examples, we needed to add relevant contextual backgrounds and visual aids, such as the stadium in E1 and E2. The lack of contextual cues in the environment detracted from the sense of presence, and in certain cases made the data more challenging to understand. In E1, for example, moving along with a sprinter without any background environment present removed all sense of absolute motion, whereas in E3 and E6, not having a floor made it difficult to judge overall height. We also explicitly manipulated the level of realism of the physical phenomena in several of the examples, such as using cubes instead of humans in E1 and E5, as well as small flags in E2. Surprisingly, this had little impact on the visceral feeling of the data—a human-sized cube moving as fast as an Olympic sprinter still conveys the speed in a similar manner. These observations aside, it is still unclear what role realism plays. A basic level of detail of background environments may be necessary, but it does not need to be high-fidelity. More photorealistic rendering and simulation may be more engaging and enjoyable, but conveying visceral senses of motion and distance was achievable without highly sophisticated representations.

4.4 Annotation: The Use of Annotations and Distractions

While our senses are very good at judging relative sizes and speeds, we are less well suited to determining absolute values—hence the need to display exact measurements. In several examples, we experimented with augmenting the direct experience with annotations to show sizes or distances. We made sure these could be turned on and off, since we were concerned that such augmentation might impact the realism and thus the direct perception of the measures. However, as in more traditional representations, we saw little detraction from the use of annotation. This was moderately surprising since, as Bertin specifies [7], reading text can capture user attention and thus reduce the perception of other stimuli, negatively impacting the visceral experience with the data.

4.5 Knowledge Transfer: Applying Knowledge and Experience from Visceralization into Real-Life Contexts

Data visualizations share information and insights in data. In contrast, the physical nature of visceralizations may convey a different type of understanding that is more grounded in reality. For instance, after having seen the skyscrapers in E3, one may then be able to 'transfer' that knowledge to real-life skyscrapers in their day-to-day life, comparing this new experience with the prior VR one. Likewise, one may see the skyscrapers in VR and be reminded of previous real-life experiences. While similar in premise to VR for tourism [22], the nature of visceralizations being more closely tied to data may aid users' perception and understanding of these physical phenomena in the wild.

4.6 Data Transformation

Given that many phenomena we wish to understand fall outside the realm of direct perceptibility, parts of E3, E4, E5, and E6 helped illuminate various pitfalls that may occur when scaling large data into ranges more readily understandable by viewers, or when remapping abstract measures into perceivable concrete units.

Miniaturization: Maintaining viscerality during scaling. We chose to use skyscrapers in E3 as we considered them on the cusp of being both perceivable but also somewhat too large to properly see. To this end, we experimented with having: (1) a scaling transformation applied to have a miniaturized view of the skyscrapers; and (2) a distant yet visibly equivalent view of the skyscrapers, retaining absolute scales. Despite appearing the same on a traditional monitor, they had very noticeable differences in their visceral nature when in VR. The first instance produced a significant perceptual mismatch, where motion parallax (caused by the motion of our head or body) caused significant perceived motion of the scaled skyscrapers, but little perceived motion in the second instance when far away (Fig. 9). While the latter looked to be real skyscrapers from really far away, the former looked like miniature 3D printed models which broke the illusion of being real skyscrapers. Similarly, photographers have used perceptual mismatch techniques to make real photographs look like miniature models by adjusting angle and blur to simulate a limited focal length exposure in tilt-shift photography.

Fig. 9. Illustration of the difference in perception when viewing an object at two different scales and distances, such that the object fills the same apparent size in the field of view. Consider a given point on the object which is projected onto a position at distance δ from the edge of the field of view of angle α. When the eye is moved by a distance ∆eye, this point on the scaled-down object is shifted by a distance ∆near = δnear − δ, while for the real-scale object it is shifted by ∆far = δfar − δ; ∆far is much smaller than ∆near.

Extreme Scales: Experiencing phenomena significantly out of range of perception. E4 explored a scenario where visualizing the data at a one-to-one scale would be impossible through the size of the planets. As a result of the scaling, much of the experience became similar to the miniaturized skyscrapers, where they clearly looked like scale models. Because of this, it was impossible to get any direct notion of the true size of any of the planets. However, as the relative sizes between the planets and the relative distance between the Earth and Moon were preserved, it was still possible to make comparisons between the planets and the Earth–Moon gap—arguably the intention of the original piece regardless. E5 instead explored very large quantities of discrete objects, each one at a human-perceptible scale (i.e., a large crowd). While the intention was to be 'lost' in the 'sea' of people, what we found was that the occlusion of nearby people made it challenging to see long distances, and in cases where this was possible, perspective foreshortening shrunk the heads of those further away. This resulted in a metaphorical 'bubble of perception', where one could only see and get a feel for the number of people within the bubble. Conversely, density was quite simple to gauge as it only relied on estimation of the immediate surroundings. For anything involving the larger crowd as a whole (i.e., tens of thousands), changing viewpoints to have a vantage point above the rest is necessary; however, at certain elevations it becomes very similar to watching a news broadcast.

Abstract Values: Perceiving abstract values with data visceralization. E6 explored the notion of requiring some transformation from abstract concept to concrete object, in this case mapping money of the US debt to $100 bills. While the sheer scale of the stacks of money was present in a fashion similar to E3, there was a level of cognitive effort required to mentally translate the visceral understanding into the abstract quantity, such that it was difficult to get any sense of direct, deep understanding of money. Since there was already a transformation from quantity of dollars into stacks of bills, it is unclear whether the
VR representation gave any deeper understanding of said quantity more than the original 2D illustration would. In a sense, while the visceral experience of the concrete, physicalized representation was there, there was no visceral understanding of the original quantity itself.

5 USER FEEDBACK

To further expand our critical reflection and minimize bias, we invited external participants to try our prototypes and give feedback on the concept of data visceralization. Since it remains a novel and somewhat abstract concept, we chose not to formally measure and evaluate the effectiveness of data visceralization, instead focusing on participants' thoughts and opinions, which will help us better define the concept.

5.1 Process

We recruited 12 participants (4 female) who were all university students with a range of different backgrounds (majority computer science) and experience with VR, as indicated to the right. They were not given any monetary compensation for their time. We conducted the sessions in a large open space free of obstructions using a HTC Vive Pro connected to a desktop PC with an Intel Core i7-7800X CPU (3.5 GHz, 6 cores), Nvidia GeForce GTX 1080 (8 GB) GPU, and 32 GB of RAM. We limited the scope to three of the six prototypes described in Sec. 3: E1, E3, and E6, conducted in that order from simplest to most complex; however, the lack of randomization may have influenced user preferences. This was done due to time constraints, but still allowed us to obtain feedback from a representative sample of measures: speed, scale, and abstract quantities. For each prototype, participants were given both the original source material and data visceralization to try (which we now refer to as desktop version and VR version respectively). For equivalency, we trimmed the desktop version to its relevant parts and muted any sound and voiceover. Each was followed by a questionnaire (with Likert scales from 1 (strongly agree) to 7 (strongly disagree)) asking for their thoughts and opinions on the two versions. Note that the goal was not to measure which version was better, but to understand their strengths, weaknesses, and characteristics. Sessions were concluded with a semi-structured interview to elicit detailed responses and opinions of data visceralization. This was loosely structured around the topics discussed in Sec. 4, but was conducted in such a manner as to allow for broader discussion driven by the participants.

The questionnaire responses and de-identified transcripts are available in supplementary material. We combined interview notes and qualitative analysis on the transcripts to identify common and interesting themes. We include only the most relevant quotes below, labeled by participant. Please refer to the transcripts for additional context.

5.2 Findings

We first report on high-level results and metrics, followed by insights categorized and contrasted against relevant topics that were described in Sec. 4. We then discuss themes raised by our participants which were not part of our reflection later in Sec. 6.

5.2.1 General Results

Each session lasted for an average of 82.4 minutes (Mdn = 85.5, SD = 13). Overall, all participants preferred VR for E1 and E3, but were mixed between desktop (2 preferred), VR (6), and no preference (4) for E6. Similarly, they reported to be more immersed in the first two prototypes, particularly due to the 'unrealistic' stacks of $100 bills being off-putting. Participants generally liked the greater freedom and immersiveness of VR, but criticized the poor communication of numerical values, specifically for E6 (discussed later). Participants spent more time on average in VR than the desktop version: 3.4 times the duration for E1, 13.2 times for E3, and 1.8 times for E6. Cybersickness in VR was reported by a minority of participants only when they used flying locomotion techniques, whereby they were advised to use other techniques instead (i.e., teleportation). Most participants chose to watch the desktop versions of E1 and E6 once (presented as videos), with a few watching them twice. All participants were familiar with E3 (presented as an image).

5.2.2 Comparison to Critical Reflection

Virtuality: Freedom of changing viewpoint provides unique insights but requires more time to do so. In Sec. 4.2 we considered how the manipulation of the virtual environment can facilitate understanding but negatively impact viscerality. While no participant specifically mentioned that this aided their understanding, three participants remarked that having these types of environments is a core part of VR as it "Allows you to do something that you otherwise would never see." [P10]. In contrast, two other participants stated that being in these unrealistic scenarios made them feel 'out of place', such as when flying around the environment. Neither participant said that this negatively impacted their understanding of the phenomena; however, one of them clarified that things still need to be "Presented in a way that hypothetically could exist." [P11], such as stacks of $100 bills technically being possible to create if one had enough resources. In terms of the user's viewpoint, many of the benefits we described were also identified by participants, particularly the ability to view the data from any angle they wished. This meant "Every angle I was at, I could take a different bit of data from it." [P9] and they could "Consume [the experience] in a way that [had] more meaning." [P7]. However, six participants commented that they had to figure out the best viewpoints by themselves, meaning that it took longer to gain the relevant information as compared to the desktop version. We discuss this notion of reader control in Sec. 6.1.

Realism: Realism is not important to understand the underlying data, but is still important for engagement. In accord with Sec. 4.3, all participants agreed that more photorealistic rendering was not required to understand the underlying quantitative data, but that it made the experience more enjoyable and engaging. In terms of using abstract models (e.g., cubes instead of runners), participants agreed that the feeling and perception of the data was the same, but some remarked that it became easier to make comparisons such as "[Telling] the alignment of each racer." [P7], mitigating similar issues identified in Sec. 4.1. Many others pointed out flaws in these abstract models, such as them needing to "Keep in mind that it was an abstracted representation of running." [P7], the inability to "Distinguish different people and separate them [as] the blocks are all the same." [P8], losing the sense of emotional attachment to the phenomena as "A block is very abstract [which] you can interpret as anything coming at you, whereas a person generates some sort of emotion." [P9], and that abstract-looking models would be boring to look at in contrast to more realistic ones.

Annotation: Annotations were useful to round out the experience and were not distracting. In Sec. 4.4, we raised concerns of annotations potentially being distracting. However, no participants reported any distraction or annoyance caused by them. In fact, many complained that they were either too difficult to see or didn't provide enough information. These participants thought annotations were important as without them "You would walk away with an unfinished idea... once you put the labels in, the information is more complete." [P9]. A few others thought they weren't necessary, as "VR really makes me feel the speed [and height] difference, and in that case the label doesn't really matter." [P5]. However, none argued for the complete removal of annotations, with all agreeing that they were at least nice to have.

Knowledge Transfer: The ability to apply experiences from visceralizations is uncertain, but it is still valuable to have. In Sec. 4.5 we discussed the ability to transfer this deeper understanding from visceralizations into reality. When asked if they thought they could do so, participants subjectively rated an average of 5.06 for relating to previous events (Mdn = 6, SD = 1.79) and 5.14 for future events (Mdn = 5, SD = 1.42) across all three prototypes, with E6 being the lowest rated. Of note is that many participants who gave low or neutral scores clarified that they didn't have a prior ex-
perience to compare against (e.g., people running past), or that the presented scenario was far too unlikely to ever see in the future (e.g., stacks of $100 bills). That said, two participants explicitly mentioned that it triggered previous memories, such as P7 having been under the actual Eiffel Tower or P8 with a specific tower in her home country. A few were also very adamant in their ability to do so, such as "It feels I've been next to a building that's that tall, so I can compare [it to reality]." [P2] and "That was represented really well and was really believable as compared to my [real life] experiences." [P7]. Our measures are very subjective, however, as participants were asked to hypothesize if it was possible. Most gave vague responses during the interview, but agreed that this skill was valuable to have. Regardless, we see this as a valuable means of measuring the effectiveness of data visceralization in the future (Sec. 6.6).

Miniaturization: Only a few participants felt the miniaturized phenomena to be unrealistic, with others not noticing or not caring. In Sec. 4.6 we noted differences between miniature vs. distant phenomena. We asked participants if they noticed any differences in either one, making sure to be as vague as possible. Only three participants stated that the distant skyscrapers felt like real structures which were "Easier to understand than having a miniature sculpture." [P9]. However, this was not due to motion parallax, but due to "The representation of the hills, the elevation, and the trees [giving] me more context to the size, the scale, [and] the place." [P7]. All other participants gave varying responses: four saying they felt the same, one saying that the distant view felt unrealistic due to the gray background (Fig. 5f), and three saying the miniatures were useful simply to get an overview of the data similar to the original desktop version (Fig. 1c).

Abstract Values: Understanding the exact magnitude of abstract values is difficult, but does not have to be the goal of visceralization. In Sec. 4.6 we noted difficulties of representing abstract concepts such as money in VR, even in its physical form such as $100 bills. All participants agreed that it was difficult to understand the exact magnitude of $20+ trillion. For some, it was the abstract nature of money being difficult to comprehend. For others, it was the lack of quantitative information provided to them which caused these issues. As both the trimmed desktop and our VR versions did not specify how much each stack of bills was worth, it meant that they "Just looked at towers and was told it was worth trillions of dollars, which is not a number I can really comprehend anyway." [P10]. Interestingly, some participants suggested solutions, such as showing incremental stages similar to the original piece [20], or grouping the $100 bills by more familiar units such as "A block of a street, or a city." [P6] in a similar vein to concrete scales [13]. While we did not ask if they still thought the experience was valuable, one participant commented that "It definitely gives a good perspective... it's good to know that $20+ trillion is that much money." [P4]. We discuss the validity of perspective/appreciation being the goal of visceralizations in Sec. 6.1.

6 DISCUSSION

In this section we discuss the most interesting and relevant themes raised by our participants that were not part of our critical reflection. We then discuss broader aspects of data visceralization as well as future research opportunities.

6.1 Insights from Participant Feedback

Viscerality is not required for data understanding but gives the experience more meaning. When asked about feelings of viscerality, a few participants explicitly commented that it is not required for the purposes of understanding the underlying numerical measurements, as "Data and numbers are more rational, and you don't need emotions to understand rational things." [P11]. In contrast, many others concentrated on how visceral sensations improved their qualitative understanding of the phenomena as it made the experience feel more believable, and in some cases "Added emotion to the data... it put meaning to it." [P9]. When asked what types of visceral sensations they felt, common responses were the feeling of something running past you, fear of heights, a sense of awe/grandness, and some discomfort from flying around in the environment (particularly for E3). Overall, participants thought that visceral sensations improved the experience and understanding of the physical phenomena by making it feel more believable, but were otherwise not necessary for learning exact values and numbers.

Data understanding and the experience go hand in hand, but numbers are not everything. We have distinguished between the qualitative and quantitative understanding of data, with participants valuing each differently. All participants preferred VR as it provides a more immersive experience and a better qualitative understanding of what the phenomena are in reality (among other benefits). In contrast, no participants argued for quantitative understanding (i.e., knowing the exact numerical values) being a fundamental takeaway of visceralization. Instead, it is there to further facilitate the qualitative understanding of the phenomena, as "The combination of both [qualitative and quantitative], they both support each other, one standing alone might be somewhat useful but together they're better." [P10]. Three participants even posited that trying to learn the exact numerical values was pointless, as "Who will remember the values all the time, for everything, that's crazy!?" [P6]. In some ways, the focus on the qualitative aspects guided by quantitative values is the intended purpose of data visceralization, as we do not seek to replace conventional data visualization but instead complement it. In this sense, an alternate way of phrasing the purpose of data visceralization is to gain a 'better perspective' or 'appreciation' of data, as this appreciation is oftentimes missing in data visualization. This notion relates closely to a recent viewpoint by Wang et al. [56], which argues for the importance of emotion when considering the value that data visualizations provide rather than only measuring analytic and knowledge-generating value. As such, the perceived value of visceralization by our participants leans into this—that it is more about the experience and the emotions it generates rather than just the numbers.

A balance needs to be struck between letting the user explore and guiding the user. As described earlier, the ability to control the viewpoint was useful to gain more nuanced insights by having more agency to explore. However, more time is required to determine appropriate viewing angles in VR compared to the fixed angles on the desktop, and the lack of guidance may cause users to miss important information. Some participants argued for the use of predefined viewpoints tied to specific insights as a way to alleviate this: "If you have some charts aligned to that, I can click of them and say this is the view I was looking for, so this gives me more understanding [of where] I should be going." [P12]. This notion is similar to author-driven and reader-driven stories [49] in determining how much guidance to provide to the user. We can reasonably say that a middle ground would be best suited in order to retain much of the user agency that defines VR, but different considerations may be needed when used in a more storytelling context such as in immersive journalism [14].

While making comparisons was common, there are opportunities to highlight individual characteristics of standalone phenomena. We noticed that participants almost always made comparisons between objects in the scene, such as comparing the speeds of different runners or the height of the Statue of Liberty against a stack of $100 bills. Three participants explicitly stated that making these comparisons was a core part of their experience, but gave mixed responses when asked if this would still be the case when only a single object/phenomenon was shown. One said that it would only have the same impact "If you've had prior experience of being on heights [or] doing a sprint." [P9]; another said that they can still gain insights by "[Going] to the top or close to the base, or see the building from the bottom of it." [P6]. More interestingly, P12 suggested that "If it gives me enough options to explore, like climb the wall, climb the stairs, feel different textures... the scenario is not attached to the height scenario [any more]." In this circumstance, the data visceralization presents information beyond just the quantitative measures, including other characteristics of the chosen phenomena as well. While the hope is that visceralization can assist with understanding individual phenomena without the need for comparison, it is apparent there are alternative options for
communicating more information should the opportunity present itself.

6.2 One-to-One Mapping from Data to Visceralization

Based on our critical reflection (Sec. 4.6) and external feedback sessions (Sec. 5.2), it is clear that data visceralizations involving a one-to-one translation from their 'ground truth' are ideal, as they most accurately portray the underlying data. When a transformation is applied to the data, this connection to the ground truth is broken. However, it is still possible to understand comparisons between the transformed phenomena. Moreover, many participants reported that they were still able to gain an appreciation and perspective of the data—both specifically for E6 and for data visceralizations as a whole. In this sense, despite not being the original intention, data visceralization can still provide value in instances where one-to-one mappings are not possible, either by rescaling the data or using concrete scales [13]. This transformation needs to be done with care, however, as its misuse may result in misleading or deceptive experiences. For example, a visceralization which subtly re-scales data to exaggerate a certain effect may be taken at face value to be true. There has been extensive discussion of data visualizations potentially being deceptive, and special care likewise needs to be taken since data visceralization is intended to give an intuitive understanding of the data that is not misleading.

6.3 Visceral vs. Emotional Experiences

As reported by our participants, visceral sensations were tied to some emotional reaction, such as fear when looking down from a great height, or the feeling of awe when looking up at a large structure. For the purposes of data visceralization, however, it is important to differentiate between emotions stemming from visceral sensations and those from the storytelling context. That is, visceral sensations are part of the pre-attentive sensory process when experiencing the presented phenomena, in contrast to strong emotional reactions that are evoked through the narrative such as sadness or anger (e.g., [23, 37]). By treating these separately, we focus primarily on sensory fidelity and instead let the storytelling context convey any other desired emotion.

6.4 Application Domains

Since effective data visceralization may restrict transformation of data, there are certain domains and experiences that are particularly well suited for visceralization. We have already identified and demonstrated two prototypes using sports visualization, and this space could be explored more extensively. Scale comparisons of other human endeavors like architecture, engineering, and design are all enhanced by having a deeper understanding of data. In particular, understanding the relative sizes of objects while browsing online services has often been suggested as a compelling use of immersive technology. Finally, there have been captivating video examples (e.g., [12]) of the impact of weather events like flooding, where a one-to-one demonstration of the height of the water, or the speed of windows being blown open, could help viewers more deeply understand what happens during the event.

6.5 Limitations of Data Visceralization in VR

As our presented design probes were conducted in VR, any limitations of VR directly affect the effectiveness of data visceralization. One issue in particular is that egocentric distance estimations are compressed in VR [36, 47], possibly resulting in perceptually misleading experiences. To combat this, the use of a virtual self-avatar has been shown to improve near distance estimation [16, 40]. It has also been shown that the size of one's virtual body influences perceived world size [55],

6.6 Future Work

Evaluation. While responses from our participants supported the notion that data visceralizations were engaging and impactful, their feedback was inherently subjective. We discussed several experimental measures and factors which may be considered in future studies. These include: users' capability to perceive and comprehend the qualitative 'ground-truth', the time until this comprehension is reached, the transferability and generalization of learnings into the real world, and the effects of numerous factors (e.g., transformation from data to visceralization, number and types of annotations, level of realism) on understanding. Only by more formally considering and measuring these in future user studies can we more thoroughly evaluate the effectiveness and appropriate use of data visceralization—particularly in comparison to similar techniques such as those in Sec. 2.

Data visceralization beyond VR. Advancements in immersive technologies open many new opportunities for data visceralization. For example, researchers have explored how non-visual senses can be stimulated to convey data, such as olfaction (e.g., [42, 57]), gustation (e.g., [31]), and haptic simulation (e.g., [6, 9]). AR also offers unique opportunities and challenges, as the real world acts as an anchor for all measurements and scales. While this may improve relatability of the data as the visceralization is in the context of the user's environment, it may restrict the scope of phenomena which can be represented—particularly for large-scale objects such as skyscrapers.

Combining narrative storytelling and data visceralization. As was the original intention and reaffirmed by participants, data visceralizations can fill in the 'ground truth' understanding oftentimes missing in data visualization. As such, it is important to determine how to combine the two without sacrificing the accessibility and convenience of web-based data stories. As described in Sec. 2, recent work has begun exploring the use of immersive experiences for data storytelling [5, 25, 46], with some existing stories already doing so using mobile-based AR (e.g., [8]). Data visceralization can both contribute to and utilize these immersive data-driven storytelling techniques in the future.

Towards a general data visceralization pipeline. Development of our VR prototypes required skills in design, 3D modeling, and programming in specialized environments such as Unity3D. That said, many pre-made assets were used to hasten development, resulting in development times of around a week for each prototype (excluding reflection/evaluation). During this process, we fell into similar development patterns, which we portray in Fig. 2. Sitting in parallel to the visualization pipeline [11], we pass data into the data visceralization pipeline and directly map it onto attributes of objects within the virtual environment. This simulation may use animation to encode information such as speed in E1. After adding appropriate background imagery to contextualize the simulated phenomena, we then render these at the desired level of realism. Finally, we annotate the experience with the original quantitative data to provide perspective and context to the experience. The output is then experienced on a VR device—ideally when complementing an existing data visualization or story. While nothing is currently automated in this pipeline, the structure may help teams organize where different contributions can occur.

7 CONCLUSION

We introduce and define the concept of data visceralizations as a complement to data visualization and storytelling, in order to facilitate an intuitive understanding of the underlying 'ground-truth' of data. We explore the concept through the iterative design and creation of six design
and the size of one’s virtual hand influences perceived size of nearby probes, which we critically reflect on as authors and gather external
objects [35] which can in turn help improve size estimation [29]. As feedback and opinions from participants. Through this, we identify
visceralizations are reliant on the accurate perception of virtual objects, situations where data visceralization is effective, how transforming
these techniques and others similar in the VR literature can be used. the data or representing abstract data may be problematic for visceral
Another limitation is the trade-off between visual fidelity, dataset size experiences, and considerations of many factors for data visceralization
(if applicable), and performance. This was notably of concern in E5 , as a whole. We also identify future opportunities, such as formally eval-
as rendering thousands of objects with complex meshes resulted in poor uating data visceralization in a user study, and how future technologies
performance. While we refer to general VR development guidelines to can extend the concept. We hope that this work will spawn both future
overcome these challenges (e.g., [18]), we note that graphical quality experiences for understanding data as well as deeper investigation into
may ultimately not be important for data visceralization (Sec. 5.2.1). how to best create and more formally evaluate these experiences.
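To make the pipeline described in the discussion above concrete (data is mapped directly onto attributes of virtual objects, animation encodes quantities such as speed, and the original quantitative values are annotated back onto the experience), here is a minimal, engine-agnostic sketch. It is purely illustrative: the `VirtualObject` class, the `visceralize`/`simulate` helpers, and the runner times are all invented for this example; they are not the authors' Unity3D implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    """Hypothetical stand-in for an engine object (e.g., a Unity3D GameObject)."""
    name: str
    position: tuple = (0.0, 0.0, 0.0)
    annotations: list = field(default_factory=list)

def visceralize(records, to_attributes, annotate):
    """Map each data record directly onto a virtual object's attributes
    (no abstract visual encoding), then annotate the object with the
    original quantitative values for perspective and context."""
    objects = []
    for record in records:
        obj = VirtualObject(name=record["name"])
        to_attributes(record, obj)   # direct data -> attribute mapping
        annotate(record, obj)        # keep the original numbers visible
        objects.append(obj)
    return objects

def simulate(objects, records, t):
    """Animation step: encode each runner's speed by advancing their
    position along a one-to-one-scale 100 m track at time t."""
    for obj, record in zip(objects, records):
        speed = 100.0 / record["time_100m"]               # average speed, m/s
        obj.position = (min(speed * t, 100.0), 0.0, 0.0)  # clamp at finish line

# Invented example data: 100 m finishing times in seconds.
runners = [{"name": "A", "time_100m": 9.8},
           {"name": "B", "time_100m": 10.4}]
objs = visceralize(
    runners,
    to_attributes=lambda r, o: setattr(o, "position", (0.0, 0.0, 0.0)),
    annotate=lambda r, o: o.annotations.append(f'{r["time_100m"]} s'),
)
simulate(objs, runners, t=5.0)  # runner positions 5 s into the race
```

After 5 s of simulated time the faster runner is roughly 3 m ahead; in VR that gap is perceived directly at one-to-one scale rather than read off an axis, while the annotated finishing times preserve the underlying quantitative data.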
References

[1] F. Amini, N. Henry Riche, B. Lee, C. Hurter, and P. Irani. Understanding data videos: Looking at narrative visualization through the cinematography lens. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pp. 1459–1468. ACM, New York, NY, USA, 2015. doi: 10.1145/2702123.2702431
[2] B. Bach, M. Stefaner, J. Boy, S. Drucker, L. Bartram, J. Wood, P. Ciuccarelli, Y. Engelhardt, U. Koppen, and B. Tversky. Narrative Design Patterns for Data-Driven Storytelling, chap. 5, pp. 107–133. Taylor & Francis, March 2018.
[3] W. Barfield and T. A. Furness, III, eds. Virtual Environments and Advanced Interface Design. Oxford University Press, Inc., New York, NY, USA, 1995.
[4] W. Barfield, D. Zeltzer, T. Sheridan, and M. Slater. Presence and performance within virtual environments. In Virtual Environments and Advanced Interface Design, pp. 473–513. Oxford University Press, Inc., New York, NY, USA, 1995.
[5] J. Bastiras and B. H. Thomas. Combining virtual reality and narrative visualisation to persuade. In 2017 International Symposium on Big Data Visual Analytics (BDVA), pp. 1–8, Nov 2017. doi: 10.1109/BDVA.2017.8114623
[6] J. J. Batter and J. F. P. Brooks. GROPE-1: A computer display to the sense of feel. Inf. Process. (Netherlands), 71, Jan 1972.
[7] J. Bertin. Semiology of Graphics. University of Wisconsin Press, 1983.
[8] J. Branch. Augmented reality: Four of the best Olympians, as you've never seen them - The New York Times. https://www.nytimes.com/interactive/2018/02/05/sports/olympics/ar-augmented-reality-olympic-athletes-ul.html, 2018. [Accessed: 2020-07-23].
[9] F. P. Brooks, Jr., M. Ouh-Young, J. J. Batter, and P. Jerome Kilpatrick. Project GROPE: Haptic displays for scientific visualization. SIGGRAPH Comput. Graph., 24(4):177–185, Sept. 1990. doi: 10.1145/97880.97899
[10] J. Bucher. Storytelling for Virtual Reality: Methods and Principles for Crafting Immersive Narratives. 2017. doi: 10.4324/9781315210308
[11] S. K. Card, J. D. Mackinlay, and B. Shneiderman, eds. Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1999.
[12] The Weather Channel. Storm surge like you've never experienced it before. https://www.youtube.com/watch?v=q01vSb_B1o0, 2018. [Accessed: 2020-07-23].
[13] F. Chevalier, R. Vuillemot, and G. Gali. Using concrete scales: A practical framework for effective visual depiction of complex measures. IEEE Transactions on Visualization and Computer Graphics, 19(12):2426–2435, Dec 2013. doi: 10.1109/TVCG.2013.210
[14] N. de la Peña, P. Weil, J. Llobera, E. Giannopoulos, A. Pomés, B. Spanlang, D. Friedman, M. V. Sanchez-Vives, and M. Slater. Immersive journalism: Immersive virtual reality for the first-person experience of news. Presence: Teleoperators and Virtual Environments, 19(4):291–301, 2010. doi: 10.1162/PRES_a_00005
[15] P. Dragicevic and Y. Jansen. List of physical visualizations and related artifacts. http://dataphys.org/list/, 2019. [Accessed: 2020-07-23].
[16] E. Ebrahimi, L. S. Hartman, A. Robb, C. C. Pagano, and S. V. Babu. Investigating the effects of anthropomorphic fidelity of self-avatars on near field depth perception in immersive virtual environments. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 1–8, March 2018. doi: 10.1109/VR.2018.8446539
[17] Ed Huai-Hsin Chi and J. T. Riedl. An operator interaction framework for visualization systems. In Proceedings IEEE Symposium on Information Visualization (Cat. No.98TB100258), pp. 63–70, Oct 1998. doi: 10.1109/INFVIS.1998.729560
[18] Facebook Technologies, LLC. Guidelines for VR performance optimization. https://developer.oculus.com/documentation/native/pc/dg-performance-guidelines/, 2020. [Accessed: 2020-07-23].
[19] J. Gibson. The Ecological Approach to Visual Perception. Resources for Ecological Psychology. Lawrence Erlbaum Associates, 1986.
[20] O. Godfrey. US debt visualized: Stacked in $100 bills at 20+ trillion USD for 2017. http://demonocracy.info/infographics/usa/us_debt/us_debt.html, 2017. [Accessed: 2020-07-23].
[21] D. M. Green and J. A. Swets. Signal Detection Theory and Psychophysics. Wiley, New York, 1966.
[22] D. A. Guttentag. Virtual reality: Applications and implications for tourism. Tourism Management, 31(5):637–651, 2010. doi: 10.1016/j.tourman.2009.07.003
[23] N. Halloran. The Fallen of World War II. https://www.youtube.com/watch?v=DwKPFT-RioU, 2017. [Accessed: 2020-07-23].
[24] J. Hullman, Y.-S. Kim, F. Nguyen, L. Speers, and M. Agrawala. Improving comprehension of measurements using concrete re-expression strategies. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, pp. 34:1–34:12. ACM, New York, NY, USA, 2018. doi: 10.1145/3173574.3173608
[25] P. Isenberg, B. Lee, H. Qu, and M. Cordeil. Immersive Visual Data Stories, pp. 165–184. Springer International Publishing, Cham, 2018. doi: 10.1007/978-3-030-01388-2_6
[26] Y. Jansen, P. Dragicevic, and J.-D. Fekete. Evaluating the efficiency of physical visualizations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pp. 2593–2602. ACM, New York, NY, USA, 2013. doi: 10.1145/2470654.2481359
[27] Y. Jansen, P. Dragicevic, P. Isenberg, J. Alexander, A. Karnik, J. Kildal, S. Subramanian, and K. Hornbæk. Opportunities and challenges for data physicalization. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pp. 3227–3236. ACM, New York, NY, USA, 2015. doi: 10.1145/2702123.2702180
[28] J. Jerald. The VR Book: Human-Centered Design for Virtual Reality. Association for Computing Machinery and Morgan & Claypool, New York, NY, USA, 2016.
[29] S. Jung, G. Bruder, P. J. Wisniewski, C. Sandor, and C. E. Hughes. Over my hand: Using a personalized hand in VR to improve object size estimation, body ownership, and presence. In Proceedings of the Symposium on Spatial User Interaction, SUI '18, pp. 60–68. ACM, New York, NY, USA, 2018. doi: 10.1145/3267782.3267920
[30] R. Kenny and A. A. Becker. Is the NASDAQ in another bubble? A virtual reality tour of the NASDAQ - WSJ.com. http://graphics.wsj.com/3d-nasdaq, 2015. [Accessed: 2020-07-23].
[31] R. A. Khot, J. Lee, D. Aggarwal, L. Hjorth, and F. F. Mueller. TastyBeats: Designing palatable representations of physical activity. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pp. 2933–2942. ACM, New York, NY, USA, 2015. doi: 10.1145/2702123.2702197
[32] J. LaViola, E. Kruijff, R. McMahan, D. Bowman, and I. Poupyrev. 3D User Interfaces: Theory and Practice. Addison-Wesley Usability and HCI Series. Addison-Wesley, 2017.
[33] B. Lee, T. Dwyer, D. Baur, and X. G. Veira. Watches to augmented reality: Devices and gadgets for data-driven storytelling. In Data-Driven Storytelling, pp. 153–168. AK Peters/CRC Press, 2018.
[34] A. Lele. Virtual reality and its military utility. Journal of Ambient Intelligence and Humanized Computing, 4(1):17–26, Feb 2013. doi: 10.1007/s12652-011-0052-4
[35] S. A. Linkenauger, M. Leyrer, H. H. Bülthoff, and B. J. Mohler. Welcome to wonderland: The influence of the size and shape of a virtual hand on the perceived size and shape of virtual objects. PLOS ONE, 8(7), 2013. doi: 10.1371/journal.pone.0068594
[36] J. Loomis and J. Knapp. Visual perception of egocentric distance in real and virtual environments. In Virtual and Adaptive Environments, pp. 21–46. CRC Press, June 2003. doi: 10.1201/9781410608888.pt1
[37] G. Lopez and K. Sukumar. Mass shootings since Sandy Hook, in one map. https://www.vox.com/a/mass-shootings-america-sandy-hook-gun-violence, 2019. [Accessed: 2020-07-23].
[38] K. Marriott, F. Schreiber, T. Dwyer, K. Klein, N. Henry Riche, T. Itoh, W. Stuerzlinger, and B. H. Thomas. Immersive Analytics. Lecture Notes in Computer Science. Springer International Publishing, 2018.
[39] M. Mine, A. Yoganandan, and D. Coffey. Making VR work: Building a real-world immersive modeling application in the virtual world. In Proceedings of the 2nd ACM Symposium on Spatial User Interaction, SUI '14, pp. 80–89. ACM, New York, NY, USA, 2014. doi: 10.1145/2659766.2659780
[40] B. J. Mohler, S. H. Creem-Regehr, W. B. Thompson, and H. H. Bülthoff. The effect of viewing a self-avatar on distance judgments in an HMD-based virtual environment. Presence: Teleoperators and Virtual Environments, 19(3):230–242, 2010. doi: 10.1162/pres.19.3.230
[41] M. F. Mortensen. Analysis of the educational potential of a science museum learning environment: Visitors' experience with and understanding of an immersion exhibit. International Journal of Science Education, 33(4):517–545, 2011. doi: 10.1080/09500691003754589
[42] B. Patnaik, A. Batch, and N. Elmqvist. Information olfactation: Harnessing scent to convey data. IEEE Transactions on Visualization and Computer Graphics, 25(1):726–736, Jan 2019. doi: 10.1109/TVCG.2018.2865237
[43] PerplexingPotato. Our seven fellow planets could fit end to end within our moon's orbit around us [OC]. https://imgur.com/Ae9hbU1, 2014. [Accessed: 2020-07-23].
[44] K. Quealy and G. Roberts. Bob Beamon's long Olympic shadow - Interactive graphic - NYTimes.com. http://archive.nytimes.com/www.nytimes.com/interactive/2012/08/04/sports/olympics/bob-beamons-long-olympic-shadow.html, 2012. [Accessed: 2020-07-23].
[45] K. Quealy, G. Roberts, and A. Cox. One race, every medalist ever - Interactive graphic - NYTimes.com. http://archive.nytimes.com/www.nytimes.com/interactive/2012/08/05/sports/olympics/the-100-meter-dash-one-race-every-medalist-ever.html, 2012. [Accessed: 2020-07-23].
[46] D. Ren, B. Lee, and T. Höllerer. XRCreator: Interactive construction of immersive data-driven stories. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, VRST '18, pp. 136:1–136:2. ACM, New York, NY, USA, 2018. doi: 10.1145/3281505.3283400
[47] R. S. Renner, B. M. Velichkovsky, and J. R. Helmert. The perception of egocentric distances in virtual environments - a review. ACM Comput. Surv., 46(2):23:1–23:40, Dec. 2013. doi: 10.1145/2543581.2543590
[48] N. Riche, C. Hurter, N. Diakopoulos, and S. Carpendale. Data-Driven Storytelling. AK Peters Visualization Series. CRC Press, 2018.
[49] E. Segel and J. Heer. Narrative visualization: Telling stories with data. IEEE Transactions on Visualization and Computer Graphics, 16(6):1139–1148, Nov 2010. doi: 10.1109/TVCG.2010.179
[50] M. Slater. Measuring presence: A response to the Witmer and Singer presence questionnaire. Presence: Teleoper. Virtual Environ., 8(5):560–565, Oct. 1999. doi: 10.1162/105474699566477
[51] M. Slater and S. Wilbur. A framework for immersive virtual environments (FIVE): Speculations on the role of presence in virtual environments. Presence: Teleoper. Virtual Environ., 6(6):603–616, Dec. 1997. doi: 10.1162/pres.1997.6.6.603
[52] C. D. Stolper, B. Lee, N. H. Riche, and J. Stasko. Data-driven storytelling techniques: Analysis of a curated collection of visual stories. In Data-Driven Storytelling, pp. 85–105. AK Peters/CRC Press, 2018.
[53] F. Taher, Y. Jansen, J. Woodruff, J. Hardy, K. Hornbæk, and J. Alexander. Investigating the use of a dynamic physical bar chart for data exploration and presentation. IEEE Transactions on Visualization and Computer Graphics, 23(1):451–460, Jan 2017. doi: 10.1109/TVCG.2016.2598498
[54] L. Valmaggia, L. Latif, M. Kempton, and M. Rus-Calafell. Virtual reality in the psychological treatment for mental health problems: A systematic review of recent evidence. Psychiatry Research, 236, 2016. doi: 10.1016/j.psychres.2016.01.015
[55] B. van der Hoort, A. Guterstam, and H. H. Ehrsson. Being Barbie: The size of one's own body determines the perceived size of the world. PLOS ONE, 6(5):1–10, 2011. doi: 10.1371/journal.pone.0020195
[56] Y. Wang, A. Segal, R. Klatzky, D. F. Keefe, P. Isenberg, J. Hurtienne, E. Hornecker, T. Dwyer, and S. Barrass. An emotional response to the value of visualization. IEEE Computer Graphics and Applications, 39(5):8–17, Sep. 2019. doi: 10.1109/MCG.2019.2923483
[57] D. A. Washburn and L. M. Jones. Could olfactory displays improve data visualization? Computing in Science and Engineering, 6(6):80–83, Nov. 2004. doi: 10.1109/MCSE.2004.66
[58] J. Wu, A. Singhvi, and J. Kao. A bird's-eye view of how protesters have flooded Hong Kong streets - The New York Times. https://www.nytimes.com/interactive/2019/06/20/world/asia/hong-kong-protest-size.html, 2019. [Accessed: 2020-07-23].
[59] S. Zhang, C. Demiralp, D. F. Keefe, M. DaSilva, D. H. Laidlaw, B. D. Greenberg, P. J. Basser, C. Pierpaoli, E. A. Chiocca, and T. S. Deisboeck. An immersive virtual environment for DT-MRI volume visualization applications: A case study. In Proceedings Visualization, 2001. VIS '01, pp. 437–584, Oct 2001. doi: 10.1109/VISUAL.2001.964545