CMP 461 – FOUNDATIONS OF HUMAN-COMPUTER INTERACTION (2 Units)
Module 2, Week 3
Topic: Conceptual Models and Paradigms
Prepared by: Dr. T. A. Olowookere
Disclaimer: Note that the contents of this document derive from textbooks and online
resources, are suited for learning purposes, and were not originally drafted by the author.
Objective: To understand the basics of Conceptual Models, how they represent
users of interactive systems and how to conceptualize interaction.
Overview
The techniques and models in this module all claim to have some representation of
users as they interact with an interface; that is, they model some aspect of the user’s
understanding, knowledge, intentions or processing. The level of representation
differs from technique to technique – from models of high-level goals and the results
of problem-solving activities, to descriptions of motor-level activity, such as
keystrokes and mouse clicks.
Conceptualizing Interaction
When beginning a design project, it is important to be clear about the underlying
assumptions and claims. By an assumption, we mean taking something for granted that
requires further investigation; for example, "people now want an entertainment and
navigation system in their cars." By a claim, we mean stating something to be true when
it is still open to question; for instance, "a multimodal style of interaction for
controlling this system, one that involves speaking or gesturing while driving, is
perfectly safe."
Writing down your assumptions and claims and then trying to defend and support
them can highlight those that are vague or wanting. In so doing, poorly constructed
design ideas can be reformulated. In many projects, this process involves identifying
human activities and interactivities that are problematic and working out how they
might be improved through being supported with a different set of functions. In
others, it can be more speculative, requiring thinking through how to design for an
engaging user experience that does not exist.
Explaining people’s assumptions and claims about why they think something might
be a good idea (or not) enables the design team as a whole to view multiple
perspectives on the problem space and, in so doing, reveals conflicting and
problematic ones. The following framework is intended to provide a set of core
questions to aid design teams in this process:
• Are there problems with an existing product or user experience? If so, what are they?
• Why do you think there are problems?
• What evidence do you have to support the existence of these problems?
• How do you think your proposed design ideas might overcome these problems?
Making clear what one’s assumptions are about a problem and the claims being made
about potential solutions should be carried out early on and throughout a project.
Design teams also need to work out how best to conceptualize the design space.
Primarily, this involves articulating the proposed solution as a conceptual model with respect
to the user experience. The benefits of conceptualizing the design space in this way are
as follows:
Orientation: Enabling the design team to ask specific kinds of questions about how
the conceptual model will be understood by the targeted users.
Open-Mindedness: Allowing the team to explore a range of different ideas to address
the problems identified.
Common Ground: Allowing the design team to establish a set of common terms that
all can understand and agree upon, reducing the chance of misunderstandings and
confusion arising later.
Once formulated and agreed upon, a conceptual model can then become a shared
blueprint leading to a testable proof of concept. It can be represented as a textual
description and/or in a diagrammatic form, depending on the preferred lingua franca
used by the design team.
It can be used not just by user experience designers but also to communicate ideas to
business, engineering, finance, product, and marketing units. The conceptual model
is used by the design team as the basis from which they can develop more detailed
and concrete aspects of the design. In doing so, design teams can produce simpler
designs that match up with users’ tasks, allow for faster development time, result in
improved customer uptake, and need less training and customer support.
Conceptualizing the problem space in this way helps designers specify what it is they
are doing, why, and how it will support users in the way intended.
Decisions about conceptual design should be made before commencing physical
design (such as choosing menus, icons, dialog boxes). A fundamental aspect of
interaction design is to develop a conceptual model.
A model is a simplified description of a system or process that helps describe how it
works.
What are Mental Models?
Mental models are abstract, inner representations that people have regarding things
from the external world. Mental models include your basic ideas of what something
is or how it is supposed to work.
Mental models play an important role in Human-Computer Interaction (HCI) and
interaction design. They relate to the way a user perceives the world around them
and are based on belief rather than fact. If you can understand your users' mental
models, you can reflect those models in your designs to make them more usable and
intuitive.
What are Conceptual Models?
A conceptual model is a representation of the mental model (perception) that a person
has about how a system is organized and operates. It shows what people can do with
a product and what concepts are needed to understand how to interact with it.
Conceptual models describe how an interactive system is organized. A conceptual
model is a representation of a system: it shows how people, places, and things
interact, and it captures the real-world features and interactions of your design
idea. In other words, a conceptual model is an abstraction of a piece of the real world,
or of a design you plan to bring into the real world.
A conceptual model is “a high-level description of how a system is organized and operates”
(Jeff Johnson and Austin Henderson, 2002). In this sense, it is an abstraction outlining what
people can do with a product and what concepts are needed to understand how to interact with
it. A key benefit of conceptualizing a design at this level is that it enables “designers to
straighten out their thinking before they start laying out their widgets” (Johnson and
Henderson, 2002).
So, a conceptual model is a high-level description of a product in terms of what users
can do with it and the concepts they need to understand about how to interact with it.
In a nutshell, a conceptual model provides a working strategy and a framework of
general concepts and their interrelations.
MENTAL MODELS VS. CONCEPTUAL MODELS
Mental Models: something the user has (or forms)
• users “see” the system through their own mental models
• users rely on mental models during usage
• there are various forms of mental models
• mental models can support or impede users’ interaction
Conceptual Models: articulation of designer’s (i.e. your) mental model
• what users will be able to do
• what concepts or knowledge users will need, in order to interact
• how they will interact with the system (at a very high level)
Why Conceptual Modelling is a Good Idea
• It provides a high-level understanding of how your mobile application will
work.
• It allows you to try to match the way your mobile application works with the
mental models of your users. This in turn should make the application more
usable and intuitive.
• It allows you to see how well your conceptual model matches different mental
models. New scheduling software may need to take into account the traditional
diary/calendar approach (for those with no experience of using software tools)
but also other software such as Outlook or Google's calendar functionality.
• It allows you to examine when a mental model is not aligned with the
conceptual model and to decide whether you are going to shift the conceptual
model or try to shift the user's mental model. Someone who has never used
scheduling software has probably never received an automated reminder of a
meeting; how will you help the user model that idea mentally?
• The ability to sketch conceptual models quickly and easily can save large
amounts of time in UI design and help deliver more intuitive applications.
Components of a Conceptual Model
A conceptual model should include the following core components (a minimal code
sketch follows the list):
• Metaphors and analogies that convey to people how to understand what a
product is used for and how to use it for an activity; that is, any central design
metaphors and analogies, such as the "desktop metaphor".
• The concepts that people are exposed to, including the objects, the operations
that can be performed on them, user roles, and the attributes of both. For
example, files and folders: both can be opened or rearranged, and both have names.
• The relationships between those concepts, for instance, whether one object
contains another. For example, files are contained in folders.
• The mappings between the concepts and the user experience the product is
designed to support or invoke. For example, users can browse files and mark
favourite files.
• The interaction types, which describe how users will interact with the
concepts. For example, giving commands, performing operations, or exploring.
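To make these components concrete, here is a minimal, illustrative sketch (in
TypeScript) of how a design team might record a conceptual model for a simple
file-management product. The specific objects, operations, and names are hypothetical
assumptions for illustration; in practice, teams usually capture this in prose or
diagrams rather than code.

```typescript
// An illustrative encoding of a conceptual model's five core components.
// All names here are hypothetical, chosen to mirror the list above.

interface Concept {
  object: string;        // e.g., a file or a folder
  operations: string[];  // what can be done to it
  attributes: string[];  // properties it carries
}

interface Relationship {
  kind: "contains" | "references"; // e.g., containership between objects
  from: string;
  to: string;
}

type InteractionType =
  | "instructing" | "conversing" | "manipulating" | "exploring" | "responding";

interface ConceptualModel {
  metaphors: string[];                 // central design metaphors and analogies
  concepts: Concept[];                 // objects, operations, roles, attributes
  relationships: Relationship[];       // how the concepts relate
  mappings: string[];                  // concepts -> intended user experience
  interactionTypes: InteractionType[]; // how users will interact with the concepts
}

// Example: the classic desktop file/folder model.
const fileManagerModel: ConceptualModel = {
  metaphors: ["desktop", "files and folders"],
  concepts: [
    { object: "file",   operations: ["open", "rename", "rearrange"], attributes: ["name"] },
    { object: "folder", operations: ["open", "rename", "rearrange"], attributes: ["name"] },
  ],
  relationships: [{ kind: "contains", from: "folder", to: "file" }],
  mappings: ["users can browse files and mark favourite files"],
  interactionTypes: ["instructing", "manipulating"],
};
```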
EXAMPLES OF IDENTIFYING CONCEPTS
Consider, for instance, the concepts involved in saving and revisiting web pages in a
web browser. How the various metaphors, concepts, and their relationships are organized
determines the user experience. By explaining these, the design team can debate the
merits of providing different methods and how they support the main concepts, for
example, saving, revisiting, categorizing, reorganizing, and their mapping to the task
domain. They can also begin discussing whether a new overall metaphor may be
preferable that combines the activities of browsing, searching, and revisiting. In turn,
this can lead the design team to articulate the kinds of relationships between them,
such as containership. For example, what is the best way to sort and revisit saved
pages, and how many and what types of containers should be used (for example,
folders, bars, or panes)? The same enumeration of concepts can be repeated for other
functions of the web browser—both current and new. In so doing, the design team can
begin to work out systematically what will be the simplest and most effective and
memorable way of supporting users while browsing the Internet.
The best conceptual models are often those that appear obvious and simple; that is,
the operations they support are intuitive to use. However, sometimes applications can
end up being based on overly complex conceptual models, especially if they are the
result of a series of upgrades, where more and more functions and ways of doing
something are added to the original conceptual model. While tech companies often
provide videos showing what new features are included in an upgrade, users may not
pay much attention to them or skip them entirely. Furthermore, many people prefer
to stick to the methods they have always used and trusted and, not surprisingly,
become annoyed when they find one or more have been removed or changed. For
example, when Facebook rolled out its revised newsfeed a few years back, many users
were unhappy, as evidenced by their postings and tweets, preferring the old interface
that they had gotten used to. A challenge for software companies, therefore, is how
best to introduce new features that they have added to an upgrade—and explain their
assumed benefits to users—while also justifying why they removed others.
Most interface applications are actually based on well-established conceptual models.
For example, a conceptual model based on the core aspects of the customer
experience at a shopping mall informs and underlies most online shopping websites.
This includes placing the items a customer wants to purchase into a shopping cart
or basket and proceeding to checkout when they are ready to make the purchase.
Collections of patterns are now readily available to help design the interface for
these core transactional processes, together with many other aspects of a user
experience, meaning interaction designers do not have to start from scratch every time
they design or redesign an application. Examples include patterns for online forms
and navigation on mobile phones. It is rare for completely new conceptual models to
emerge that transform the way daily and work activities are carried out at an interface.
Those that did fall into this category (of completely new conceptual models) include the
following three classics: the desktop (developed by Xerox in the late 1970s), the digital
spreadsheet (developed by Dan Bricklin and Bob Frankston in the late 1970s), and the
World Wide Web (developed by Tim Berners-Lee in the early 1990s). All of these
innovations made what was previously limited to a few skilled people accessible to
all, while greatly expanding what is possible.
- The graphical desktop dramatically changed how office tasks could be
performed (including creating, editing, and printing documents). Performing
these tasks using the computers prevalent at the time was significantly more
arduous, requiring users to learn and use a command language (such as that of DOS or UNIX).
- Digital spreadsheets made accounting highly flexible and easier to accomplish,
enabling a diversity of new computations to be performed simply through
filling in interactive boxes.
- The World Wide Web allowed anyone to browse a network of information
remotely. Since then, e-readers and digital authoring tools have introduced
new ways of reading documents and books online, supporting associated
activities such as annotating, highlighting, linking, commenting, copying, and
tracking. The web has also enabled and made many other kinds of activities
easier, such as browsing for news, weather, sports, and financial information,
as well as banking, shopping, and learning online among other tasks.
Importantly, all of these conceptual models were based on familiar activities.
ASSIGNMENT/ACTIVITY
Go to a few online stores and see how the interface has been designed to enable the
customer to order and pay for an item. How many use the “add to shopping
cart/basket” followed by the “checkout” metaphor? Does this make it straightforward
and intuitive to make a purchase?
Interface Metaphors
Metaphors are considered to be a central component of a conceptual model. They
provide a structure that is similar in some way to aspects of a familiar entity (or
entities), but they also have their own behaviours and properties. More specifically,
an interface metaphor is one that is instantiated in some way as part of the user
interface, such as the desktop metaphor.
Another well-known one is the search engine, a term originally coined in the early 1990s
to refer to a software tool that indexed and retrieved files remotely from the Internet
using various algorithms to match terms selected by the user. The metaphor invites
comparisons between a mechanical engine, which has several working parts, and the
everyday action of looking in different places to find something. The functions
supported by a search engine also include other features besides those belonging to
an engine that searches, such as listing and prioritizing the results of a search. It also
does these actions in quite different ways from how a mechanical engine works or
how a human being might search a library for books on a given topic. The similarities
implied by the use of the term search engine, therefore, are at a general level. They are
meant to conjure up the essence of the process of finding relevant information,
enabling the user to link these to less familiar aspects of the functionality provided.
Interface metaphors are intended to provide familiar entities that enable people to
readily understand the underlying conceptual model and know what to do at the
interface. However, they can also contravene people’s expectations about how things
should be, such as the recycle bin (trash can) that sits on the desktop. Logically and
culturally (meaning, in the real world), it should be placed under the desk. But users
would not have been able to see it because it would have been hidden by the desktop
surface. So, it needed to go on the desktop. While some users found this irksome, most
did not find it to be a problem. Once they understood why the recycle bin icon was on
the desktop, they simply accepted it being there.
Usage of Metaphors
People frequently use metaphors and analogies (here we use the terms
interchangeably) as a source of inspiration for understanding and explaining to others
what they are doing, or trying to do, in terms that are familiar to them. They are an
integral part of human language.
Metaphors are commonly used to explain something that is unfamiliar or hard to
grasp by way of comparison with something that is familiar and easy to grasp.
Metaphors are widely used in interaction design to conceptualize abstract, hard-to-
imagine, and difficult-to-articulate computer-based concepts and interactions in more
concrete and familiar terms and as graphical visualizations at the interface level.
Metaphors and analogies are used in these three main ways:
• As a way of conceptualizing what we are doing (for instance, surfing the web)
• As a conceptual model instantiated at the interface level (for example, the card
metaphor)
• As a way of visualizing an operation (such as an icon of a shopping cart into
which items are placed that users want to purchase on an online shopping site)
Cognitive Friction
What is Cognitive Friction?
Cognitive friction occurs when a user is confronted with an interface or affordance
that appears to be intuitive but delivers unexpected results. This mismatch between
the outcome of an action and the expected result causes user frustration and will
impair the user experience if not jeopardize it. User research can help uncover such
problems and generate friction-free design.
Imagine a mouse-operated graphical user interface (GUI) where selecting a folder icon requires
two left clicks and opening it requires a right click. This isn’t necessarily a bad way to control
the GUI—however, it’s completely counter-intuitive, as our experience with GUIs for decades
leads us to expect that a single left click selects an icon and a double left click opens it. The
conflict between our expectation and the way the interface works is called cognitive
friction.
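As a small illustration of honouring the convention just described, the following
TypeScript sketch (browser DOM; the element ID and the openFolder helper are
hypothetical) wires a folder icon so that a single left click selects it and a double
left click opens it, matching the long-standing expectation users bring to GUIs.

```typescript
// Follow the established GUI convention: single left click selects,
// double left click opens. The element ID and handlers are illustrative.

const folderIcon = document.getElementById("folder-icon")!;

folderIcon.addEventListener("click", () => {
  // Single left click: select only. Doing anything else here would surprise users.
  folderIcon.classList.add("selected");
});

folderIcon.addEventListener("dblclick", () => {
  // Double left click: open the folder, as decades of GUI use lead us to expect.
  openFolder(folderIcon.dataset.path ?? "/");
});

function openFolder(path: string): void {
  console.log(`Opening folder at ${path}`);
}
```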
Effect of Cognitive Friction
User interfaces that suffer from cognitive friction can negatively affect the user
experience, leading to frustration and possibly the abandonment of a product.
Because users will not be comfortable with the prospect of unlearning a conventional
way of completing an action, they are likely to reject such a design.
Delivering a User Experience Free of Cognitive Friction
A key part of delivering a great user experience is getting the UI right for a digital
product. Fortunately, there’s a simple shortcut when it comes to UI – the more the
object within the UI mimics a real-world object, the more likely that object is going to
be intuitive to use and deliver a quality user experience. For instance, folders for
computer files look like paper files for a filing cabinet and the delete function resembles a trash
can from the corner of your office; they deliver real world expectations for digital
experiences.
Cause of Cognitive Friction in HCI
Alien Concepts can Cause Cognitive Friction
If your conceptual model breaks the user's mental model, your mobile app is likely to
create a certain level of cognitive friction for the user. Even the relabelling
of one simple item may make your app more complex to use. For example, the floppy
disc save icon embodies a mental model. It is an outdated model now: very few people still
use floppy discs, and there is a generation of users who have possibly never even seen a
floppy disc. Yet creating a new save icon is still likely to make life harder for the user
who wants to save their data. The floppy disc icon is how we model saving things in
any application, and that hasn't changed even though you can't put a floppy disc into a
smartphone.
Avoiding Cognitive Friction in HCI
Learning to avoid cognitive friction in UI design can markedly improve the user
experience (UX) of products.
Avoiding cognitive friction is the job of the user experience design team, in
conjunction with the UI and interaction design teams.
To avoid cognitive friction, you must first identify the places in your design where it
might occur. Ahead of development, the team might engage in user interviews, create
task flows, and design easy-to-use information architectures. Expert evaluations and
usability testing with users during the development of a product can highlight
problems and point to solutions for them.
Remembering that cognitive friction can arise across a range of conventions is vital.
While areas such as mouse and keyboard design (including the entrenched nature of
the QWERTY keyboard) appear obvious, designers should remain aware of potential
pitfalls, established norms, and the need for eliminating user frustration.
Avoiding Cognitive Friction before Development
There are ways to approach the design that can help identify where cognitive friction
is likely to take place and to help avoid that:
1. Conduct user interviews – learn how your users expect to undertake the task
with the product. Ask questions specifically relating to the interface during
prototyping and sketching.
2. Create task flows – if you can create the simplest possible task flow with the
greatest degree of automation (that is, the computer, not the user, does the work),
you are likely to minimize the opportunity for cognitive friction to occur.
3. Create easy-to-use information architecture (IA) – the simpler the IA, the less
often a user is going to get lost or confused when interacting with the product.
Extend this to creating navigational models, and get users to test them and give
feedback on them.
Avoiding Cognitive Friction during UI Development
You can also test for cognitive friction during the development lifecycle at each
iteration of the product:
1. Conduct expert evaluations – cognitive walkthroughs and heuristic evaluation
can be valuable tools when deployed by usability analysts. They may even
uncover cognitive friction that isn’t immediately obvious to users.
2. Run usability tests on wireframes and prototypes – nothing beats direct
interaction with the product to test interaction design and examine how users
will react to that design. The more user input you get, the more cognitive
friction is likely to be identified and thus you can focus on eliminating it in the
next iteration.
Interaction Types
Another way of conceptualizing the design space is in terms of the interaction types
that will underlie the user experience. Essentially, these are the ways a person interacts
with a product or application. Interaction types provide a way of thinking about how
best to support the activities users will be doing when using a product or service.
The five main interaction types are:
Instructing: This type of interaction describes how users carry out their tasks
by telling the system what to do. Here, users issue instructions to a system. This can
be done in a number of ways, including typing in commands, selecting options from
menus in a windows environment or on a multitouch screen, speaking aloud
commands, gesturing, pressing buttons, or using a combination of function keys.
Instructing means telling a system what to do by issuing commands and selecting options
(e.g. tell the time, print a file, save a file). A minimal code sketch follows the notes below.
When to use:
• the user needs to tell the system what to do
Common conceptual models:
• word processors (open, close, save, etc.)
• VCRs/DVD players (play, rewind, pause, etc.)
Benefit: supports quick and efficient operations
• good for repetitive actions on more than one object
• caveat: users must be aware of the available commands, which have to be learned
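Here is a minimal sketch of instructing-style interaction, assuming a toy command set
invented for this example: the user issues a known command and the system carries it
out, while an unknown command illustrates the caveat that commands must be learned.

```typescript
// Instructing: the user tells the system what to do by issuing commands.
// The command names and actions below are hypothetical.

const commands: Record<string, () => void> = {
  "tell the time": () => console.log(new Date().toLocaleTimeString()),
  "save file":     () => console.log("File saved."),
  "print file":    () => console.log("Sending file to printer..."),
};

function instruct(command: string): void {
  const action = commands[command];
  if (action) {
    action();
  } else {
    // The user must know (i.e., have learned) the available commands.
    console.log(`Unknown command: "${command}"`);
  }
}

instruct("tell the time"); // a learned command succeeds
instruct("make coffee");   // an unknown command is rejected
```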
Conversing: This form of interaction is based on the idea of a person having a
conversation with a system, where the system acts as a dialogue partner. Here, users
have a dialog with a system. Users can speak via an interface or type in questions to
which the system replies via text or speech output. In particular, the system is
designed to respond in a way that another human being might when having a
conversation. It differs from the activity of instructing insofar as it encompasses a two-
way communication process, with the system acting like a partner rather than a
machine that obeys orders.
Has to do with interacting with a system as if having a conversation (e.g. search
engines, advice-giving systems, help systems, virtual agents)
When to use:
• the user needs to have a dialogue, i.e. a back-and-forth exchange
• really a dialogue, not just a series of options and selections
• more of a two-way conversation than in instructing (see the sketch below)
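To illustrate the two-way character of conversing, here is a deliberately simple,
rule-based sketch of a dialogue partner. The keyword patterns and replies are invented
for this example; real conversational systems are far more sophisticated.

```typescript
// Conversing: a two-way dialogue, not just command-and-obey.
// This toy help system matches keywords and replies in turn.

const rules: Array<{ pattern: RegExp; reply: string }> = [
  { pattern: /hello|hi/i, reply: "Hello! How can I help you today?" },
  { pattern: /print/i,    reply: "To print, open the File menu and choose Print." },
  { pattern: /thank/i,    reply: "You're welcome. Anything else?" },
];

function converse(userUtterance: string): string {
  for (const rule of rules) {
    if (rule.pattern.test(userUtterance)) {
      return rule.reply; // the system takes its turn in the dialogue
    }
  }
  return "I'm not sure I understood. Could you rephrase that?";
}

console.log(converse("Hi there"));             // greeting turn
console.log(converse("How do I print this?")); // help-seeking turn
```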
Manipulating: This form of interaction involves manipulating objects, and it
capitalizes on users’ knowledge of how they do so in the physical world. Here,
users interact with objects in a virtual or physical space by manipulating them
(for instance, opening, holding, closing, and placing). Users can hone their
familiar knowledge of how to interact with objects. Extensions to these actions
include zooming in and out, stretching, and shrinking—actions that are not
possible with objects in the real world.
Has to do with interacting with objects in a virtual or physical space by
manipulating them (e.g. dragging, selecting, opening, closing and zooming
actions on virtual objects)
When to use:
• when it makes sense to directly manipulate objects
• benefit: leverages what people do in the real world (e.g., drag and drop; see the
sketch below)
• but can be used for non-realistic actions too (e.g., zoom)
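As a small sketch of manipulation, the snippet below uses the standard HTML Drag and
Drop API to let a user drag a virtual "document" into a "folder", leveraging real-world
knowledge of picking objects up and placing them. The element IDs are hypothetical.

```typescript
// Manipulating: direct manipulation of virtual objects (drag and drop).
// Uses the standard HTML Drag and Drop API; element IDs are illustrative.

const docIcon = document.getElementById("document")!;
const folder = document.getElementById("folder")!;

docIcon.setAttribute("draggable", "true");

docIcon.addEventListener("dragstart", (event: DragEvent) => {
  event.dataTransfer?.setData("text/plain", docIcon.id); // "pick up" the object
});

folder.addEventListener("dragover", (event: DragEvent) => {
  event.preventDefault(); // signal that dropping here is allowed
});

folder.addEventListener("drop", (event: DragEvent) => {
  event.preventDefault();
  const draggedId = event.dataTransfer?.getData("text/plain");
  if (draggedId) {
    // "Place" the dragged object inside the folder.
    folder.appendChild(document.getElementById(draggedId)!);
  }
});
```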
Exploring: This mode of interaction involves users moving through virtual or
physical environments. Here, users move through a virtual environment or a
physical space. Virtual environments include 3D worlds and augmented and
virtual reality systems. They enable users to hone their familiar knowledge by
physically moving around. Physical spaces that use sensor-based technologies
include smart rooms and ambient environments, also enabling people to
capitalize on familiarity. The basic idea is to enable people to explore and
interact with an environment, be it physical or digital, by exploiting their
knowledge of how they move and navigate through existing spaces.
Has to do with moving through a virtual environment or a physical space (e.g.
Google Maps).
When to use:
• the user needs to explore and interact with an 'environment'
• can exploit the user's previous knowledge of how they move through spaces,
digital and physical (see the sketch below)
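As a toy sketch of exploring, the snippet below lets a user move through a simple
10x10 grid world with the arrow keys, exploiting familiar knowledge of moving through
physical space. The world and movement model are invented for illustration.

```typescript
// Exploring: the user moves through an environment; here, a toy 2D grid
// navigated with arrow keys.

type Position = { x: number; y: number };
const worldSize = { width: 10, height: 10 };
let position: Position = { x: 0, y: 0 };

const moves: Record<string, Position> = {
  ArrowUp:    { x: 0, y: -1 },
  ArrowDown:  { x: 0, y: 1 },
  ArrowLeft:  { x: -1, y: 0 },
  ArrowRight: { x: 1, y: 0 },
};

document.addEventListener("keydown", (event: KeyboardEvent) => {
  const delta = moves[event.key];
  if (!delta) return;
  // Clamp movement to the world's edges, as walls do in physical space.
  position = {
    x: Math.min(Math.max(position.x + delta.x, 0), worldSize.width - 1),
    y: Math.min(Math.max(position.y + delta.y, 0), worldSize.height - 1),
  };
  console.log(`You are now at (${position.x}, ${position.y})`);
});
```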
Responding: This mode of interaction involves the system taking the initiative
to alert, describe, or show the user something that it “thinks” is of interest or
relevance to the context the user is presently in. Here, the system initiates the
interaction and the user chooses whether to respond. Smartphones and
wearable devices are becoming increasingly proactive in initiating user
interaction in this way, rather than waiting for the user to ask, command,
explore, or manipulate. For example, proactive mobile location-based
technology can alert people to points of interest. They can choose to look at the
information popping up on their phone or ignore it.
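A minimal sketch of responding-style interaction follows: the system takes the
initiative, checking the user's (simulated) location against points of interest and
proactively raising an alert that the user is free to act on or ignore. The points of
interest and the distance threshold are invented for this example.

```typescript
// Responding: the system initiates; the user chooses whether to react.
// Locations, points of interest, and the threshold are illustrative.

type Point = { name: string; x: number; y: number };

const pointsOfInterest: Point[] = [
  { name: "Coffee shop", x: 3, y: 4 },
  { name: "Bookstore",   x: 8, y: 1 },
];

const alertRadius = 2; // how close the user must be before we proactively alert

function checkForNearbyInterest(userX: number, userY: number): void {
  for (const poi of pointsOfInterest) {
    const distance = Math.hypot(poi.x - userX, poi.y - userY);
    if (distance <= alertRadius) {
      // System-initiated: pop up a suggestion the user may look at or ignore.
      console.log(`Nearby: ${poi.name} (${distance.toFixed(1)} units away)`);
    }
  }
}

// Simulated location updates, e.g. from a device's location service.
checkForNearbyInterest(2, 4); // near the coffee shop -> proactive alert
checkForNearbyInterest(0, 9); // nothing nearby -> the system stays quiet
```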
For Further Readings:
1. Alan Dix, Janet Finlay, Gregory D. Abowd, Russell Beale, 2004. “Human–
Computer Interaction”. Pearson Education Limited, England. (Chapter 12).
2. Helen Sharp, Yvonne Rogers, Jennifer Preece, 2019. “Interaction Design:
Beyond Human-Computer Interaction”. John Wiley & Sons, Inc., Indiana. (Chapter 3).