
INF312: HUMAN COMPUTER INTERFACE

Dr. (Mrs.) Alimatu-Saadia Yussiff


Dr. Brandford Bervell
Module Outline
UNIT 1: Fundamentals of HCI & Models
Session 1: Introduction to Human Computer Interaction
Session 2: The Components and Scope of HCI
Session 3: Interaction Model
Session 4: Conceptual Model
Session 5: Understanding User Cognition
Session 6: Cognitive Model

UNIT 2: Interface Design Guidelines, Principles and Theories
Session 1: User Interfaces and Interaction Styles
Session 2: Ergonomics and Human Factors
Session 3: Guidelines, Principles and Theories
Session 4: Shneiderman's 8 Golden Rules
Session 5: Norman's 7 Design Principles
Session 6: Nielsen's 10 Design Principles

UNIT 3: HCI Design Process
Session 1: The Process of Interaction Design
Session 2: User-Centered Development Cycle
Session 3: Identifying Needs and Establishing Requirements
Session 4: Developing Alternative Designs
Session 5: Prototyping
Session 6: Implementation

UNIT 4: Evaluation Methods
Session 1: Introduction to Evaluation and Framework
Session 2: Inspection Techniques
Session 3: Usability Engineering / Usability Testing
Session 4: Heuristic Evaluation and Walkthroughs
Session 5: Think Aloud Protocol Evaluation
Session 6: Review-based and Experimental Evaluations

UNIT 5: Data Gathering and Analysis
Session 1: Quantitative vs Qualitative Data
Session 2: Data Gathering Techniques and Tools
Session 3: Data Gathering Techniques and Tools
Session 4: Qualitative Data Analysis
Session 5: Quantitative Data Analysis
Session 6: Data Presentation

UNIT 6: Emerging Technologies
Session 1: Social Technologies
Session 2: Virtual Reality
Session 3: Augmented Reality
Session 4: Artificial Intelligence
Session 5: Internet of Things
Session 6: Robotics
UNIT 1: FUNDAMENTALS OF HCI AND MODELS
Unit Outline
Session 1: Introduction to Human Computer Interaction
Session 2: The Scope and Components of HCI
Session 3: Interaction Model
Session 4: Conceptual Model
Session 5: User Cognition
Session 6: Cognitive Model

Dear student, you are welcome to the first unit of this course. This unit sets the foundation for
the rest of the materials you will study in this module. Human Computer Interface (HCI) is the
study of designing, implementing, and evaluating the interactive interfaces used by humans and
computers. We all use computers for our day-to-day activities, and we expect them to provide us
with efficiency, accuracy, recall, and satisfaction in meeting our goals. It is therefore critical
for Information Technology professionals to learn how to develop computer systems that are usable
and offer a good user experience. As we become more dependent on technologies such as the Internet,
smartphones, laptops, rice cookers, microwave ovens, and washing machines, HCI has become a key
part of designing tools that can be used efficiently and safely on a daily basis.
HCI is therefore a vital skill for any developer, product manager, or designer who wants to design
for the future. People who specialize in HCI think about how to design and implement computer
systems that satisfy human users. In this course, we will introduce you to HCI concepts and explore
its fundamental design principles, models, processes, and implementation and evaluation activities.

Objectives
By the end of this course, you should be able to:
a. Explain the fundamentals of HCI and its models;
b. Describe and apply interface design guidelines, principles and theories;
c. Describe and apply the process of interaction design and user-centered design;
d. Evaluate apps and computer systems by employing appropriate HCI evaluation techniques;
e. Demonstrate basic knowledge of data gathering and analysis and their application in HCI;
f. Describe emerging technologies with examples.
SESSION 1: INTRODUCTION TO HUMAN COMPUTER INTERFACE /
INTERACTION (HCI)
Welcome to unit 1 session 1. Technologies affect all spheres of human endeavor, including
homes, offices, hospitals, agriculture, and industry. We all use computers for our
day-to-day activities, and we expect them to provide us with efficiency, accuracy, recall, and
satisfaction in meeting our goals. It is therefore critical for Information Technology professionals
to learn how to develop computer systems that are usable and offer a good user experience. In this
session, you will be introduced to Human Computer Interaction/Interface (HCI), the history of HCI,
the interdisciplinary nature of HCI, and the goals and importance of HCI.
Objectives
By the end of this session, you should be able to:
a. describe in detail the concept of Human Computer Interaction;
b. describe in detail the concept of Human Computer Interface;
c. describe the history of HCI;
d. explain the goals of HCI;
e. explain the importance of HCI;
f. demonstrate basic knowledge of the multidisciplinary fields of HCI;
g. identify technological applications used in everyday activities and appreciate the role of
HCI in their design and usage.

Now read on…

1.1 What is Human Computer Interaction?

HCI focuses on designing, implementing, and evaluating interactive interfaces that enhance the user
experience of computing devices. This includes user interface design, user-centered design, and
user experience design. The Association for Computing Machinery defines Human Computer
Interaction (HCI) as "a discipline concerned with the design, evaluation and implementation of
interactive computing systems for human use and with the study of major phenomena surrounding
them" (Hewett et al., 1996).

HCI is the study of people, computer technologies, and how the two interact. It also includes the
study and practice of usability. “It is about understanding and creating software and other
technology that people will want to use, will be able to use, and will find effective when used”
(Carroll, 2002).

Therefore, HCI can be seen as the two-way communication between a human and a machine that
optimizes computer design and user experience. It is the study of how humans interact with
computers. Input from the human is processed, and the result is output back to the computer screen
for the human to interpret and decide whether or not it meets the goal.
In addition, HCI is the study of how people interact with computers and of the extent to which
computers are or are not developed for successful interaction with human beings. It is concerned
with designing, evaluating and deploying usable, effective and enjoyable technologies in a range of
contexts. HCI overlaps with user-centered design, UI, and UX to create intuitive products and
technologies.

Human-computer interaction is a multidisciplinary study that focuses on the interaction between
people and computers as well as the design of the computer interface. Factors to take into account
include users' capabilities and cognitive processes, personality, experience, motivation, and
emotions.
Examples of human computer interaction include:
• Interacting with a mobile app;
• Browsing a website from your desktop computer;
• Using Internet of Things (IoT) devices to carry out specific tasks.

People who specialize in HCI think about how to design and implement computer systems that
satisfy human users. Most research in this field aims to improve human–computer interaction by
improving how an interface is used and understood by humans.

1.2 What is Human Computer Interface?

Human Computer Interface (HCI) is the means of communication between a human user and a
computer system, referring in particular to the use of input/output devices with supporting
software. According to Turkle (2001), the human computer interface refers to the modalities through
which people interact with computational technologies. Devices of increasing sophistication are
becoming available to mediate human-computer interaction. These include graphics devices,
touch-sensitive devices, and voice-input devices. They have to be configured in a way that
facilitates efficient and desirable interaction between a person and the computer. Artificial
intelligence techniques of knowledge representation may be used to model the user of a computer
system, and so offer the opportunity to give personalized advice on its use. The design of the
machine interface may incorporate expert-system techniques to offer powerful knowledge-based
computing to the user.

HCI is a branch of the science of ergonomics, and is concerned especially with the relationship
between workstations and their operators. The aim is to develop acceptable standards for such
aspects as display resolution, use of color, and navigation around an application.

How do we make computers communicate with humans? The first computers, developed in the
1940s, were no more than huge boxes filled with complex electronics. The computer operators
used binary code and primitive peripheral devices, such as punched card readers, to communicate
with them. The next generation of computers used the typewriter as an input/output device. Since
the end of the 1960s, monitors and keyboards have been the standard way of communication
between computers and humans. Other input devices, such as touch screen, mouse, joystick,
scanner, and voice recognition modules, also became available to users. All these devices have
made possible the development of interactive computer systems, which permit users to
communicate with computers by entering data and control directions during program execution.
A part of an interactive computer system that communicates with the user is known as a user
interface.

1.3 Brief History of HCI


Human Computer Interaction (HCI) is an area of research and practice that emerged in the late
1970s and early 1980s, initially as an area within Computer Science. Until the late 1970s, only
information technology professionals and dedicated hobbyists interacted with computers. This
changed disruptively with the emergence of personal computing in the late 1970s. Personal
computing, including both personal software (productivity applications, such as text editors and
spreadsheets, and interactive computer games) and personal computer platforms (operating
systems, programming languages, and hardware), made everyone in the world a potential computer
user, and vividly highlighted the deficiencies of computers with respect to usability for those who
wanted to use computers as tools.
Thus, the emergence of personal computing in the 1980s in households and corporate offices gave
rise to HCI. In addition, the advent of the Internet, mobile devices and the Internet of Things
(IoT) has further driven the advancement of HCI.

1.4 The Goals of HCI


According to Jono DiCarlo, as quoted by Hibbitts (2022), “When software is hard to use, don’t
make excuses for it. Improve it. When a user makes a mistake, don’t blame the user. Ask how the
software misled them. Then fix it. The user’s time is more valuable than ours. Respect it. Good UI
design is humble.”

Underlying the whole theme of HCI is the belief that people using a computer system should be
considered first. Their needs, capabilities and preferences for conducting various tasks should
direct developers in the way that they design systems. People should not have to change the way
that they use a system in order to fit in with it. Instead, the system should be designed to match
their requirements.

The goals of HCI are to produce usable and safe systems, as well as functional systems. In order
to produce computer systems with good usability, developers must attempt to:
• understand: the factors that determine how people use technology
• develop: tools and techniques to enable building suitable systems
• achieve: efficient, effective, and safe interaction
• put people first
In broader terms, the goals of HCI are therefore to develop usable, safe and functional systems
for a great user experience. It is to improve or enhance the usability of existing products. Finally,
it is also to identify problems and tasks (such as in the workplace) that can be addressed with
software products and develop the system.

1.5 Why HCI?
In the past, computers were large, very expensive, and mainly used by experts and technical
people, so usability was not a concern. Today, because computers are smaller and cheaper, we
can find them everywhere, at home and in the workplace. In addition, computers are now used by
non-expert and non-technical people with little or no technical skill and with different
backgrounds, needs and knowledge. Computer and software developers have recognized this shift
and have set out to make computers usable (i.e., easy to use, easy to learn, time-saving, user
friendly, etc.) and to provide a good user experience.

As a discipline, HCI focuses on creating a ‘natural’ dialog between the user and the machine. In
such dialog, the interaction with a machine does not require a lot of cognitive effort from the user.
When we put a lot of effort into designing good human computer interface, we help our users to
use machines to solve their problems. Lack of attention to human computer interaction, on the
other hand, almost always results in creating bad user interfaces. Bad HCI means bad usability,
and it increases chances for product failure.

HCI is the study and the practice of usability. It’s about understanding and creating software and
other technology that people will want to use, will be able to use, and will find effective when
used. As its name implies, HCI consists of three parts: the user, the computer itself, and the
ways they work together. The user can be an individual user or a group of users working together.
The computer refers to any technology ranging from desktop computers to large-scale computer
systems. For example, if we were discussing the design of a website, then the website itself would
be referred to as “the computer”. Devices such as mobile phones or VCRs can also be considered
“computers”. There are obvious differences between humans and machines. In spite of these, HCI
attempts to ensure that they both get on with each other and interact successfully.

1.6 The Importance of HCI


HCI is crucial in designing intuitive interfaces that people with different abilities and expertise can
easily access and use appropriately to achieve a stated goal.
With efficient HCI designs, users need not consider the intricacies and complexities of using the
computing system. User-friendly interfaces ensure that user interactions are clear, precise, and
natural.

More importantly, HCI helps to make interfaces that increase productivity, enhance user
experience, and reduce risks in safety-critical systems. Poorly designed machines lead to many
unexpected problems, sometimes just user frustration, but sometimes, chaotic disasters.
Efficiently designed systems ensure that employees are comfortable using the systems for their
everyday work. With HCI, systems are easy to handle, even for untrained staff.
HCI tends to rely on user-centered techniques and methods to make systems usable for people with
disabilities.

When people use a rice cooker, blender, washing machine, ATM, food dispensing machine, or
snack vending machine, they inevitably come into contact with HCI. This is because HCI plays a
vital role in designing the interfaces that make such systems usable and efficient.
In general, HCI is an effective tool that designers can use to design easy-to-use interfaces. HCI
principles also ensure that the systems have obvious interfaces and do not require special training
to be used. Hence, HCI makes computing systems suitable for everyone.

1.7 The Interdisciplinary Nature of HCI


Because it is not efficient to design an interactive system from the perspective of a single field,
HCI has drawn from many disciplines. The fact that HCI has expanded rapidly and steadily for three
decades, attracting professionals from many other disciplines and incorporating diverse concepts
and approaches, also calls for perspectives from many fields. According to Wania (2006), HCI is a
multidisciplinary field which combines the theories and practices of a number of fields, including
computer science, cognitive and behavioral psychology, anthropology, sociology, ergonomics,
industrial design, and many more, as shown in Figure 1.

Figure 1: Interdisciplinary Fields of Human Computer Interaction (computer science, design,
ergonomics, philosophy, cognitive science, neuroscience, psychology, engineering, ethnography,
informatics and sociology)
The interdisciplinary nature of HCI draws on the following fields:
• Computer science and engineering: aid in the design and implementation of user interfaces,
and in building the right technology.
• Technical writing: for appropriate writing and formatting of textual communication, such as
producing the user manual.
• Graphic design: to design appropriate graphics with good effects and resolution for visual
communication, and to produce an effective interface presentation.
• Ergonomics: helps in designing applications that take human factors, such as users' physical
capabilities, into consideration.
• Sociology: to help understand the wider context of the interaction.
• Psychology and cognitive science: knowledge of users' perception, cognition, and
problem-solving capabilities.
• Business and entrepreneurship: to be able to market what has been built.

1.8 Conclusion
HCI enables a two-way dialog between human and machine. Such effective communication makes
users feel they are interacting with a human persona rather than a complex computing system.
Hence, it is crucial to build a strong foundation in HCI that can shape future applications. This
session has introduced you to the concept of HCI, its history, its goals, and its importance to
stakeholders in system development. Remember that cleverly designed computer interfaces with
usability and user experience goals motivate users to use digital devices effectively and
efficiently in this modern technological age.

Self-Assessment Questions
Exercise 1.1
1. What is Human Computer Interface?
2. What is Human Computer Interaction?
3. Describe at least five reasons why HCI is important.
4. Describe the goals of HCI.
5. Differentiate between Human Computer Interface and Human Computer Interaction.
6. Describe the multidisciplinary nature of HCI.
7. Why does HCI draw on multiple disciplinary areas?
Exercise 1.2
1. Which of the following are important in the design focus of HCI?
a. Thinking of the user
b. Testing the HCI
c. Involving the users
d. All of the above
2. Which one of these is a good reason for taking time to design a good computer-human
interface?
a. Not every user is a computer expert
b. Well-designed HCIs allow the software to be sold at a better price
c. Well-designed HCIs use less computer resources
d. Well-designed HCIs allow the computer to run faster
3. In virtual reality which of the senses cannot currently be portrayed?
a. Touch
b. Hearing
c. Sight
d. Smell
4. Which of these is not an interface style?
a. Command line/command prompt
b. Menus
c. Natural language
d. Voice Recognition
5. Which one of these is a good reason to include sounds in an HCI?
a. Users react more quickly to sounds than to visual signals
b. Users react more slowly to sounds than to visual signals
c. There is no preference. People just like sounds
d. The computer reacts to sounds in the same way as a human
6. In all countries, text is read from left to right.
a. True
b. False
7. A computer expert produces a solution with HCI which is very efficient in computer
resources, based on command-lines. Which one of the following is most likely to be the
result when the system is implemented?
a. It will be welcomed by all staff.
b. All staff will enjoy using it after mastering the skills of command lines.
c. Most staff will want to become computer experts to use it.
d. Most staff will feel demoralised and will not want to use the system.
8. The term human-computer interaction has only been in widespread use since the
early………...
a. 2000s
b. 1950s
c. 1970s
d. 1980s
9. Which of the following are examples of paradigms for interaction?
a. Personal computing
b. Hypertext
c. Multi-modality
d. All of the above
SESSION 2: The Scope and Components of HCI
Welcome to unit 1 session 2. In this session, you will be introduced to the components of Human
Computer Interaction (HCI). In order to build effective interfaces, we need first to understand
the limitations and capabilities of the components that constitute HCI. We hope you will enjoy it.

Objectives
By the end of this session, you should be able to:
a. describe the main components of HCI;
b. identify technological applications used in everyday activities and appreciate the role of
HCI in their design and usage.

Now read on…

2.1 The Components of HCI

HCI consists of three intersecting components, as shown in Figure 1: the human (persona or user),
the computer, and the interaction between them (the ways they work together).

Figure 1: Components of HCI

Figure 1 also illustrates that HCI is the technology that connects humans and machines: humans
interact with the interfaces of computers to perform various tasks. In order to build effective
interfaces, we need first to understand the limitations and capabilities of these components. The
three components are described below.

The Human:
The human refers to an individual computer user or a group of users. An appreciation of the way
people's sensory systems (sight, hearing, touch) relay information is vital in the use of
computers. Also, different users form different conceptions or mental models of their interactions
and have different ways of learning and retaining knowledge. In addition, cultural and national
differences play a part when designing a product for users.

HCI should always consider users in terms of their expectations and needs, physical abilities and
limitations they may have, how their perceptual systems work, and what they find attractive and
enjoyable when they use computers. When humans interact with computers, they bring to the
encounter a lifetime of experience. Designers must decide how to make products attractive without
distracting users from their tasks.
Furthermore, in HCI the user can be described as an intelligent agent who possesses knowledge as
well as cognitive and behavioral abilities. The main characteristics of a user are his or her
physical and cognitive factors. Physical factors are carriers for sending or receiving information,
consisting of sensory modalities (such as vision, hearing, speech) and response modalities (such
as eyes, ears, mouth); for a specific user, the physical factors can be illustrated by his or her
visual scan speed, hearing acuity, speech velocity, and so on. The cognitive factors include user
knowledge, psychological perception, and cognitive reasoning, which are represented by learning
ability, mental acuity, response ability, and so on (Zhao, 2004). Designers must therefore
consider users' physical and cognitive factors in the design of everyday things.

HCI analyzes how the user, whether an individual or a group of users, behaves, how they interact
with technology, and what their needs and goals are. Factors to take into account include their
capabilities and cognitive processes, personality, experience, motivation, and emotions. In the
modern, highly competitive market, it is vital to create the best possible experience for users.
Not surprisingly, user-centered design (a process in which designers focus on users and their
needs in each step of the design process) is paramount. Product teams that practice user-centered
design involve users in the design process from the very beginning, and key design decisions are
evaluated based on how they work for users. Product teams also try to find a balance between user
needs and business needs.

The Computer:
When we talk about the computer, we are referring to the hardware and software that provide input
and output. The computer includes any technology ranging from desktop computers, embedded systems
(e.g., photocopiers, microwave ovens, etc.) and software (e.g., websites, search engines, word
processors, etc.) to large-scale computer systems. Devices such as mobile phones, VCRs, rice
cookers, washing machines, etc., can also be considered “computers”. The computer is therefore the
product that the human uses and interacts with, and which in turn displays output to the human.
Figure 2 below depicts computers in our daily life.

Figure 2: Computers in our Daily Life. Derived from: https://www.educative.io/blog/intro-human-computer-interaction
The Interaction
This is about the ways humans and computers work together. Most of us have a powerful computer
in our pockets and interact with a myriad of screens and devices every day. Some of our devices
are tiny, like smartwatches, and some do not even have screens, like the smart speakers used by
voice assistants. Meanwhile, there have been incredible advances in machine learning and
artificial intelligence, and just as all of this technology has evolved, so has our relationship
with it. The interaction between humans and computers is ever-changing.

Human computer interaction implies communication between a human user and a computer
system. Interactions play an essential part in this communication because both the actual tasks
that users perform with devices and the context in which those actions happen shape human
computer interaction.

Interaction means any direct or indirect communication between users and systems. It involves a
dialog, with feedback and controls that initiate a task (e.g., a user clicks on a save or print
icon and the interface replies with a dialog box). The interaction between users and computers
occurs at the user interface, which includes both software and hardware. People have to use
computers or various embedded devices for different purposes, and for these reasons they have to
interact with these machines. Researchers have built different interfaces and methods for
interactivity. Designers and programmers also look for a reasonable balance between what can be
programmed within the necessary schedule and budget, and what would be ideal for the users.
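The dialog-with-feedback cycle described above can be sketched as a small command handler: the user issues an action and the system replies with feedback, the way an interface replies with a dialog box. This is only an illustrative sketch, not a real GUI toolkit; the command names and dialog messages are invented for the example.

```python
# Minimal sketch of the interaction cycle: a user command goes in,
# system feedback comes back. A "dialog" here is just a returned string.

def handle_command(command: str) -> str:
    """Map a user action to system feedback, as a dialog box would."""
    responses = {
        "save": "Dialog: Document saved successfully.",
        "print": "Dialog: Sending document to printer...",
    }
    # Feedback for unrecognized input is itself part of good HCI:
    # the system should explain what went wrong rather than stay silent.
    return responses.get(command, f"Dialog: Unknown command '{command}'.")

print(handle_command("save"))   # feedback closes the interaction loop
print(handle_command("quit"))   # even errors produce an informative reply
```

The point of the sketch is that every user action receives a visible response, which is what makes the exchange a dialog rather than a one-way command stream.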

There are obvious differences between humans and computers. In spite of these, HCI attempts to
ensure that they both get on with each other and interact successfully. In order to achieve a usable
system, you need to apply what you know about humans and computers, and consult with likely
users throughout the design process. In real systems, the development schedule and the budget are
important, and it is vital to find a balance between what would be ideal for the users and what is
feasible in reality.

In addition, both humans and computers have different input-output channels as demonstrated in
table 1. These channels need to be considered in the design and development of interactive
systems.

Table 1: Human and Computer input-output channels


Humans: Computers:
• Long-term memory • Text input devices
• Short-term memory • Speech recognition
• Sensory memory • Mouse / touchpad / keyboard
• Visual perception • Eye-tracking
• Auditory perception • Display screens
• Tactile perception • Auditory displays
• Speech and voice • Printing abilities

2.2 How Computers and Humans Work Together



Interaction between humans and computers is a two-way communication. To interact effectively,
users need appropriate tools that are designed specifically for human-to-computer and
computer-to-human links. As in communication with each other, people have a natural desire to be
able to use a combination of vision, movements, and speech to communicate with computers. Hence,
the visual, audio, and physical motion components of the user interface are interconnected.

Two different visual interface methods, graphical- and character-based, utilize different kinds of
actions. The character-based interface typically requires typing on a keyboard. A graphical
interface incorporates the use of a mouse or joystick, which translates physical motion into motion
of objects on a computer screen. In addition, visual information may be sent from scanners, digital
cameras, and faxes, which are able to translate images into computer code.
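To illustrate how a pointing device's physical motion might be translated into on-screen motion, the sketch below applies a sensitivity multiplier and clamps the cursor to the display bounds. The screen size and sensitivity value are arbitrary assumptions for the example; real pointer drivers also apply acceleration curves.

```python
# Sketch: translating physical mouse movement (dx, dy) into a new
# on-screen cursor position, clamped to an assumed display resolution.

SCREEN_W, SCREEN_H = 1920, 1080  # assumed display size

def move_cursor(x, y, dx, dy, sensitivity=2.0):
    """Return the new (x, y) cursor position, clamped to the screen."""
    new_x = min(max(x + dx * sensitivity, 0), SCREEN_W - 1)
    new_y = min(max(y + dy * sensitivity, 0), SCREEN_H - 1)
    return new_x, new_y

# Moving right and down from the centre of the screen:
print(move_cursor(960, 540, 10, 5))   # -> (980.0, 550.0)
```

The clamping step matters for usability: without it, a large physical movement could push the on-screen pointer "off" the display and leave the user with no visible feedback.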

Speech also may be used, through voice input devices, to deliver system commands and data
directly into a system. It is a complex multi-stage process that involves digital sampling of the
acoustic signal, recognition of the elements of speech, and even machine knowledge of the
language.

All of these methods and devices can be categorized as human-to-computer communication
elements. Prompts, messages, and light and sound signals are examples of computer-to-human
interaction. A prompt is a signal on a screen indicating that the computer is waiting for a command
or data. In response to users' usually brief commands, computers may provide them with reports,
graphics, music, links to Internet web sites, and other digitized information or entertainment
resources (Encyclopedia.com, 2018).

2.3 The Scope of HCI

HCI is an interdisciplinary area concerned with the design, implementation and evaluation of
products. It is also concerned with the interaction of people and technology and the user
experience derived from such interaction. The main goal of HCI is oriented toward users, especially
non-computer-professional users, and toward improving the human-computer relationship for them.
In addition, HCI is concerned with human interactions with all technology, including traditional
PCs, ubiquitous computation, tangibles, collaborative systems and hypertext (Hooper & Dix,
2012).
Figure 3 below illustrates four important features of HCI in terms of scope.
Figure 3: Scope of HCI

Broadly, the four features are listed and described below:


1. Use and context
2. The human
3. The computer
4. Development process

2.3.1 Use and context:


The context of use is the actual conditions under which a given artifact/software product is used,
or will be used, in a normal day-to-day working situation. One of the key focal points of
user-centered design is the context in which designs will be used. For technology products and
services, contexts of use include a potentially broad array of factors: physical and social
environments, human abilities and disabilities, and cultural issues.

In HCI, the context of use comprises a combination of users, goals, tasks, resources, and the
technical, physical, social, cultural and organizational environments in which a system, product
or service is used. This is a much broader definition of context than is typically used in practice.
When referring to a complete system, the context of use includes all users along with their
respective goals, tasks and required resources, as well as the environmental contexts across all
of those factors. These include:
• Technical environment – equipment and applications, including hardware, furniture;
information (data the users have access to) relevant to the tasks; support services, either human-
or system-based (such as assistive technology).
• Physical environment – where the system will be used and what the environmental factors
would be (consider the warehouse, farm or cold storage examples earlier).
• Social, cultural and organizational environment – other people involved (such as stakeholders)
and the relationships between them, the organizational structure, language, legislation, cultural
norms and values, work practices, group working and privacy.
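To make this definition concrete, the elements above can be recorded as structured data during user research. The sketch below is illustrative only; the field names and the warehouse example values are assumptions for this module, not a standard HCI notation.

```python
# Illustrative sketch: capturing a context of use as structured data.
# Field names follow the definition above; all values are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ContextOfUse:
    users: list                 # who uses the system
    goals: list                 # what they are trying to achieve
    tasks: list                 # the activities they perform
    technical_env: dict = field(default_factory=dict)  # equipment, applications, support
    physical_env: dict = field(default_factory=dict)   # where the system is used
    social_env: dict = field(default_factory=dict)     # people, norms, work practices


# The unheated-warehouse example from the text, recorded in this form:
warehouse = ContextOfUse(
    users=["stock controller"],
    goals=["keep inventory records accurate"],
    tasks=["scan incoming stock"],
    technical_env={"hardware": "portable reader and tablet"},
    physical_env={"location": "unheated warehouse"},
    social_env={"work_practice": "shift-based teams"},
)
```

Writing contexts down in an explicit form like this makes it easier to spot which environmental factors are relevant to a given design and which are not.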

Contexts describe the actual circumstances of use. It is important to consider what is and is not
relevant. For example, environmental and physical contexts may not be particularly relevant to
most websites, but this can change dramatically for systems used outside the typical home or
office. Examples include external Automated Teller Machines (“cash machines” or “cash
dispensers”), computing systems used in farming, some of which must be steam cleaned, and
systems used for stock control in unheated warehouses or, more challenging still, cold stores.
The contexts of use of mobile devices are very different from those of desktops. Researchers Savio
and Braiterman introduced overlapping spheres of context for a mobile user that include
(InteractionDesignFoundation, 2022):
• Personal goals (such as identity, status, and social interaction).
• Attention levels (complete or partial, intermittent or continuous).
• Tasks (for example, make calls, send a video, or get directions).
• Device constraints (including, among other things, software, sensors, battery and network).
• Secondary activities (such as walking, eating, etc.).
• Environment (sound, light, space, etc.).
• Culture (economics, religion, law, etc.).

Example 1

Figure 4: Cash Dispenser

Cash dispensers are a common example of an unusual context of use. They have to cater for a wide
range of users – short, tall and in wheelchairs – as well as variations in lighting and weather.

Example 2

Figure 5: warehouse worker scans stock using a portable reader and tablet
Desktop computers are not always an appropriate solution. In Figure 5, a warehouse worker scans
stock using a portable reader and tablet.

The context describes the actual conditions under which the computer system is used. For example,
when you design a mobile app, you need to consider how the visual design will look both in dim
lighting and in sun glare, and whether the app still works over a poor internet connection. These are
just two of many aspects that can have an impact on user experience.
Good interaction design always results from rigorous user testing and constant refinement of
individual design details, especially details that can be affected by the context of use. That is why
the human computer interface should always be tested with real users who represent the target
audience.

How Can You Define Contexts of Use?


User research and observation are essential to determine the context of use. The goal of these processes,
which include contextual interviews, user visits, etc., is to answer key questions such as the ones
suggested by the leading UX consultancy Experience Dynamics:
• Where do your users engage with your product or service? (Physically, environmentally,
device-specific)
• What is happening to the user when they are using it? (Social or emotional influences)
• What is physically or socially preventing users from completing their tasks? (e.g., another
party or person has to act first)
• When does usage happen and what triggers it? (Timing and coordination)
• What expectations do users bring to the task? (Mental model)
• Why do users want to do this before that? (Workflow, motivation, flow)
• What makes sense to users and why does that differ from how you think about it? (Content,
labeling, problem-solving)

2.3.2 The Human:


HCI studies the psychological and physiological aspects of humans; for example, how a user learns to
use a new product, or how fast a user can type.

2.3.3 Computer:
This concerns the hardware and software offered, e.g., input and output devices, speed, interaction
styles, and computer graphics.

2.3.4 Development Process:


This is concerned with the design, development, implementation and evaluation of a product.

Conclusion
HCI enables a two-way dialog between human and machine. Such effective communication makes
users feel they are interacting with a human persona rather than a complex computing system.
Hence, it is crucial to build a strong foundation in HCI that can shape future applications. This
session has introduced you to the components of HCI, the goal of HCI and its importance to stakeholders
in system development. Thus, HCI is the study of how people interact with computers and of how to design
better user interfaces. It has four main components: the user, the task, the tools/interface, and the context.
The user is the center of the HCI universe. The task is what the user is trying to accomplish. The
tools or interface is how the user interacts with the computer. The context is the environment in
which the user is working.
Remember that cleverly designed computer interfaces, built with usability and user experience goals
in mind, motivate users to use digital devices effectively and efficiently in this modern technological
age.

Self-Assessment Questions
Exercise 1.1
10. Describe in detail the components of HCI.
11. How do we make computers communicate with humans?
12. Mention at least six examples of computers in our daily life.
13. With examples, differentiate between GUI and command line interfaces.
14. Are there any differences between Human Computer interface and Human Computer
Interaction?
15. Describe in detail the major scope of HCI.
SESSION 3: INTERACTION MODEL
Welcome to another interesting session. In your journey to understand what Human Computer
Interaction (HCI) is all about, there is the need to understand some HCI models. In this session we
are going to introduce you to types of interaction model (Donald Norman’s Model of interaction
and Abowd and Beale Framework). We hope you will enjoy it.

Objectives
By the end of this session, you should be able to:
a. describe interaction model in HCI;
b. describe Donald Norman’s Model of interaction;
c. describe Abowd and Beale Framework;
d. list and explain the seven stages of Donald Norman’s Model of interaction;
e. demonstrate knowledge of execution and evaluation loop;
f. use Donald Norman’s Model of interaction or Abowd and Beale Framework in designing
and evaluating a product;
g. give examples of gulf of execution and gulf of evaluation;

Read on…

3.1 What is an Interaction Model?


An interaction model, sometimes called a task model, is not just how users navigate or move
from one page or section of an application to another. An interaction model can be viewed briefly
as the communication between human and computer systems. According to Kiran Jot Singh
(2021), the core spirit of an interaction model is communication between the sender and receiver,
which depends on the spatial distance between them.

An interaction model describes how to represent an application to users: for example, what the
layout should look like, the tasks the system allows users to perform, and the widgets and
controls that support interactivity. It is a design model that binds an application together in a way
that supports the conceptual models of its target users. It defines how all of the objects and actions
that are part of an application interrelate, in ways that mirror and support real-life user interactions
(Nieters, 2012).

An interaction model also represents an abstract description of the user's perception of a system's
functions and purpose from his or her point of view. Task models describe what users do
(activities) with systems and how they do it (tasks); they are about what is being done at any time. This
definition makes interaction models synonymous with task models and differentiates them from
interaction scenarios. Interaction scenarios are more like stories that show users performing tasks
within their environment using technology as a medium (ArticleHpal, 2022).

What are the types of interaction model? There are different types, as presented below.
3.2 Types of Interaction Model
There are two types of interaction model: (a) Donald Norman's model of interaction, and (b) the Abowd
and Beale framework. Let's quickly look at them one after the other.
3.2.1 Donald Norman’s model of interaction
Donald Norman's model of interaction has been a notable model in HCI since 1988. Norman's model
concentrates on the user's view of the interface, and it is used to demonstrate why some interfaces
cause problems for the user. Norman (2013), in his book "The Design of Everyday Things", stated
that the model is about how people interact with the real world. This model is also referred to as
the seven stages of action, Norman's action cycle, or the execution-evaluation cycle.

Donald Norman's model of interaction is divided into seven stages as shown in Figure 1
below. The model proposes that a user first establishes a goal and then performs actions by using
the system to achieve that goal. A system then reflects the output of those actions on the interface.
A user observes the interface and evaluates if their goal has been met. If not, a new goal is
established, and the cycle is repeated.

Figure 1: Don Norman's Interaction Cycle. Retrieved from (Prototyping)

The seven steps in the action cycle or interaction model can further be divided into two categories
called gulfs:
1. Gulf of Execution: this concerns discovering the properties of objects and acting on the
system; the user is asking, for example, "How do I use the system?" To use a system,
you go through the following stages to achieve your goal:
1. Forming a goal about something that you want to accomplish.
2. Forming an intention to act.
3. Selecting a sequence of actions that will lead you to your goal.
4. Executing the action.
However, if you are interacting with a system and you need additional information to be able
to operate it to achieve your goal, then there is a gulf of execution, since you are missing
clear indicators of the system's functionality. This leads to user frustration and avoidance
of such a system.
2. Gulf of Evaluation: this concerns getting feedback related to the interaction; the user is
asking, for example, "What is the current state of the system?" after interacting with a
product. If clear feedback from the interaction is lacking, then there is a gulf of
evaluation. The stages are:
5. Perceiving the state of the world resulting from the action.
6. Interpreting this perception based on your expectations.
7. Evaluating whether or not the goal was reached.
Execution and Evaluation Loop
The seven stages can be viewed in terms of an execution and evaluation loop, as shown in Figure 2
below. Stage 1 is forming the goal. Execution involves stages 2, 3 and 4 in the listing above.
Evaluation involves stages 5, 6 and 7.

Figure 2: Execution and Evaluation Loop

This model helps us to understand where things go wrong in our designs. The main points of
conflict are:
• Gulf of execution: There is a difference between user actions and those that the system
can perform. An effective interface allows a user to perform an action without system
limitations.
• Gulf of evaluation: There is a difference between the presentation of an output and the
user’s expectations. An effective interface can be easily evaluated by a user.
• Human error: The system is performing correctly, but the user has inputted an error.
Errors can be avoided by improving interface design or providing better user support.
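The execution-evaluation loop can also be sketched as a simple program loop. The sketch below is illustrative only and is not Norman's own formulation: the stage functions (`goal_met`, `plan`, `execute`, `perceive`, `interpret`) are hypothetical placeholders supplied by the caller.

```python
# Illustrative sketch of Norman's execution-evaluation cycle.
# The stage functions are hypothetical placeholders supplied by the caller.

def norman_cycle(goal_met, plan, execute, perceive, interpret, state, max_rounds=10):
    """Repeat the seven stages until the goal is met or we give up."""
    for _ in range(max_rounds):
        actions = plan(state)             # stages 2-3: intention and action sequence
        state = execute(state, actions)   # stage 4: execution
        observed = perceive(state)        # stage 5: perceive the system state
        meaning = interpret(observed)     # stage 6: interpret the perception
        if goal_met(meaning):             # stage 7: compare with the goal (stage 1)
            return state
    # Persistent failure to finish corresponds to a gulf of execution
    # (cannot act effectively) or a gulf of evaluation (cannot tell).
    return None
```

For example, with a toy "counter" system whose goal is a count of 3 and whose plan always returns one increment action, the loop terminates after three rounds.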

Examples of Donald Norman's Model


Below, we present application examples of Donald Norman’s model

Example 1: Purchasing a Book


Example 2: Bored at home and want to kill boredom
Goal: you want to kill your boredom.
Plan: a movie seems to be a good idea.
Specify: you check for the nearest theatre and show time.
Perform: purchase a ticket, go there and sit in the theatre.
Perceive: listen to and watch the movie.
Reflect: interpret the movie according to your understanding.
Compare: what is your mood after the movie?

Example 3:

Figure 3: Execution and Evaluation Loop

3.2.2 Abowd and Beale framework

The second type of interaction model is termed the Abowd and Beale framework. The framework
is an extension of Norman's model. It actually consists of four parts (user, input, system and output)
and their relationships.
Abowd and Beale’s interaction framework identifies system and user components which
communicate via the input and output components of an interface. This communication follows a
similar cyclic sequence of steps from the user’s articulation of a task, through the system’s
performance and presentation of the task, to the user’s observation of this task’s results, upon
which the user can formulate further tasks.
The framework introduces languages for input and output in addition to the core and task
languages. By concentrating on the language translations, the interaction framework allows us to
determine if the concepts are being communicated correctly.

Interaction Steps:
The interaction framework identifies four steps in the interaction cycle:
i. the user formulates the goal and a task to achieve that goal
ii. the interface translates the input language into the core language
iii. the system transforms itself into a new state
iv. the system renders the new state in the output language and sends it to the user

Translations:
The four translations involved in the interaction framework are:
a. articulation - the user articulates the task in the input language
b. performance - the interface translates the input language into stimuli for the system
c. presentation - the system presents the results in the output language
d. observation - the user translates the output language into personal understanding.

Figure 4: Interaction Model: Abowd and Beale Framework. Retrieved from (Abowd & Beale, 1991)

The nodes represent the four major components in an interactive system – i.e., the System, the
User, the Input and the Output. Each component has its own language. In addition to the User’s
task language and the System’s core language, which we have already introduced, there are
languages for both the Input and Output components. Input and Output together form the Interface.
Figure 5: Abowd and Beale Framework: Derived from - https://cio-wiki.org/wiki/File:Abowd-
beale1.png

As the interface sits between the User and the System, there are four steps in the interactive cycle,
each corresponding to a translation from one component to another, as shown by the labelled arcs
in Figure 4 and Figure 5.

The fact is, the User begins the interactive cycle with the formulation of a goal and a task to achieve
that goal. The only way the user can manipulate the machine is through the Input, and so the task
must be articulated within the input language. The input language is translated into the core
language as operations to be performed by the System. The System then transforms itself as
described by the operations; the execution phase of the cycle is complete and the evaluation phase
now begins. The System is in a new state, which must now be communicated to the User. The
current values of system attributes are rendered as concepts or features of the Output. It is then up
to the User to observe the Output and assess the results of the interaction relative to the original
goal, ending the evaluation phase and, hence, the interactive cycle. There are four main translations
involved in the interaction: articulation, performance, presentation and observation as described
above.
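The cycle just described can be sketched as a pipeline of the four translations. This is an illustrative sketch only; the string "languages" and the function bodies are assumptions made for demonstration and are not part of the framework itself.

```python
# Illustrative sketch of one Abowd and Beale interaction cycle.
# Each "language" is modeled as a plain string for demonstration.

def articulation(task):        # User -> Input: task language to input language
    return f"input({task})"

def performance(input_expr):   # Input -> System: input language to core language
    return f"core({input_expr})"

def presentation(core_state):  # System -> Output: core language to output language
    return f"output({core_state})"

def observation(output_expr):  # Output -> User: output language to understanding
    return f"seen({output_expr})"

def interactive_cycle(task):
    """Execution phase (articulation, performance) followed by the
    evaluation phase (presentation, observation)."""
    return observation(presentation(performance(articulation(task))))

# interactive_cycle("save file") -> "seen(output(core(input(save file))))"
```

Composing the four functions in order mirrors the labelled arcs in Figures 4 and 5: each translation hands its result to the next component in the cycle.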

3.3 Why an Interaction Model?


An interaction model allows users to evaluate a product vis-à-vis its design goal. For example, if your
goal is to design an e-collaboration system, the model of interaction would consist of features for
a lecturer to post a discussion, monitor the trends and give final feedback. To evaluate
such a system, you match its performance and tasks against the stated goal, using Donald
Norman's model of interaction.
An interaction model will enable you to identify what is going on in the system interaction so that
you can resolve problems as soon as possible. It serves as the basis for comparing interaction styles
and selecting an appropriate style for your system.
Furthermore, since some systems are harder to use than others, a model helps identify the gulf of
execution and the gulf of evaluation in their usage, which in turn guides us in addressing them.
Also, interaction models help us to understand what is going on in the interaction between user
and system, and they address the translations between what the user wants and what the
system does.
Conclusion
In this session, we introduced you to the interaction model, followed by the two types of
interaction model (i.e., Donald Norman's model and the Abowd and Beale framework). We then
concluded the session by looking at the importance of interaction models.

Self-Assessment Questions
Exercise 1.1
1. What is an interaction model?
2. What is the importance of an interaction model?
3. Compare and contrast the two types of interaction models.
4. Describe with examples Norman's seven stages of action.
5. Describe with examples the Abowd and Beale framework.
SESSION 4: CONCEPTUAL MODEL
You are welcome to this interesting session. Have you ever heard or read about the conceptual
model? This session is dedicated to describing conceptual design and the conceptual model.

Objectives
By the end of this session, you will be able to:
a. describe conceptual models with examples;
b. describe the basic components of a conceptual model;
c. differentiate between a conceptual model and a mental model;
d. identify when to use mental models and conceptual models in UX design;
e. describe the importance of conceptual design;
f. describe the following with examples:
i. metaphors
ii. concepts
iii. relationships among concepts
iv. mapping from concepts to the user experience envisioned
v. interface types
vi. interaction types

Read on...

4.1 What is a Conceptual Model?


A conceptual model is "a high-level description of how a system is organized and operates".
It is a structure that outlines the concepts and their connections rather than a description of the user
interface. This enables designers to straighten out their thinking before they start laying out their
widgets. It offers a practical approach as well as a framework of fundamental ideas and how they
relate to one another.

A conceptual model is a simple and helpful explanation of how something works. For example, a
website or application onboarding experience demonstrates how to use the product or service.
According to Norman, a good conceptual model conveys all the information needed to use a system
well, leading to understanding and a feeling of control.

Thus, it helps to describe the proposed system in terms of a collection of cohesive notions and
ideas about what it ought to do, how it should behave, and what it should look like, which will be
perceptible to users in the intended way. Following the identification of a set of potential interactions
with an interactive system, the design of the conceptual model must be considered in terms of
actual, practical solutions. This requires figuring out the interface's behavior, the specific
interaction techniques that will be utilized, and its "look and feel".
A conceptual model is the designer's intended model for the user of the system. Norman (1986)
referred to it as the design model.

A conceptual model for a debit machine, using a diagrammatic approach, is illustrated in Figure 1,
showing concepts, relationships and terminology.
Figure 1: Conceptual Model for a Debit Machine

4.1.1 Components of Conceptual Model


A conceptual model can include the following:
1. Metaphors and analogies, e.g., the "desktop metaphor".
Metaphors are sets of concepts employed in graphical user interfaces to help users understand and
easily interact with a computer. A metaphor can be described as the transference of the relation
between one set of objects to another set for the purpose of brief explanation; the word is originally
Greek and means "carrying across". A metaphor is an analogy, for example a picture, symbol or
concept, that we use to make interaction simpler and more intuitive. In the user interface, we give
information and operations a physical (graphical) appearance. Examples are: icons, "drag-and-drop",
the desktop metaphor, the trash can, the desktop, files, folders, etc.

Metaphors are used on different levels, for example as a basic building block or as a principle for
grouping information on the screen. Metaphors are used as concepts and building blocks when we
create the user interface. It is important to select metaphors that make sense to the users and that
support the users' tasks.
The essence of metaphor is to give an idea of some unknown thing or concept, or to create images
that facilitate the user's understanding, learning, performance, information organization, etc.

2. Concepts: these are objects and actions, and what you can do to them; they include user roles
and attributes. E.g., files and folders; both can be opened, have names, etc.

3. Relationships among concepts: what actions or attributes are shared between objects? For
example, a song, a podcast and an audiobook all have timelines that users want to navigate (i.e.,
fast forward, rewind, etc.).
Another example concerns containment and hierarchy. For example:
o A song is contained by an album
o Files are contained in folders
4. Mappings from concepts to the user experience envisioned: how do the concepts map to
what people will actually do? For example, users can browse files and mark favorites.
Consistent terminology ties it all together, as shown in Figure 2.

Figure 2: Debit Machine Example


Activity:
• Are these the right objects?
• Can I do all the operations?
• Do they match what people want to do?
• Can I do them in a consistent way?

5. Interaction types: this is about how users interact with a system, e.g., giving commands,
performing operations, or exploring; it is what the user is doing when interacting with a system.
Examples include command line (how you talk to it), intelligent (function), gestural (hardware),
and touch (both hardware and interaction type).

Interaction types include:


1. Instructing
Users instruct a system and tell it what to do by issuing commands and selecting
options (e.g., print a file, save a file). It is a very prevalent conceptual model
that underpins a variety of gadgets and systems; for instance, issuing
commands using the keyboard and function keys, and selecting options via menus. Its main
benefit is that instructing supports quick and efficient interaction. This model is good
for repetitive kinds of actions performed on multiple objects.

2. Conversing
Interacting with a system as if having a conversation; the underlying model is having a
conversation with another human. This ranges from straightforward menu-driven voice
recognition systems to more complicated 'natural language' dialogs. It also includes virtual
agents, advice-giving systems, toys, help systems, search engines and pet robots designed
to converse with the computer user.
3. Manipulating
Interacting with objects in a virtual or physical space by manipulating them, e.g.,
dragging, selecting, opening, closing and zooming actions on virtual objects. It
involves exploiting users' knowledge of how they move and manipulate things in the
physical world; for instance, using gestures and controllers to control the movement
of an on-screen avatar. Virtual objects can be moved, selected, opened, closed, and
zoomed in and out of. Extensions to these actions are also possible, such as controlling
virtual environments or manipulating items in ways that are not feasible in the real world.
• Direct Manipulation
This involves continuous representation of objects and actions of interest. Instead of
issuing commands with complicated syntax, physical actions like button pressing are used.
This model supports rapid, reversible actions with immediate feedback on the object of
interest.

4. Exploring
Moving through a virtual environment or a physical space (e.g., Google Maps, GPS).
It involves individuals navigating around environments, whether virtual or real,
finding and learning things; for instance, real-world settings with integrated sensor
technology.

6. Interface types: the kinds of interfaces that can be used to support the model are shown in
Figure 3. These include: mobile, GUI, touch, tangible, haptic, desktop, command line, data
visualizations, speech, menu-based, and gesture. One may ask: should it be constrained? How
would different interfaces affect the result?

Figure 3: Interface Types
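The concepts and relationships described earlier (for example, files contained in folders, which users can browse and mark as favorites) can be sketched as simple data structures. This is an illustrative sketch only; the class and method names are assumptions for this example.

```python
# Illustrative sketch of concepts (objects, actions, attributes) and
# a containment relationship. All names are hypothetical.

class File:
    def __init__(self, name):
        self.name = name          # attribute: every file has a name
        self.favorite = False     # attribute users can change

    def mark_favorite(self):      # action mapped from "mark favorites"
        self.favorite = True


class Folder:
    def __init__(self, name):
        self.name = name
        self.files = []           # relationship: files are contained in folders

    def add(self, file):
        self.files.append(file)

    def browse(self):             # action mapped from "browse files"
        return [f.name for f in self.files]
```

Sketching the concepts this way makes the earlier checklist questions concrete: are these the right objects, can users perform all the operations, and can they do so in a consistent way?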


4.2 Mental Model Versus Conceptual Model
According to Chandrashekar (2022), mental models are important for understanding things and
explaining to others how they work. Our mental models are based on what we know about the world
from past experiences and from what others have told us. They are different from conceptual
models, which are the models we create by thinking about something in a rational, organized and
structured way. In short, conceptual models are blueprints for how we intend something to work;
mental models are how we think something will work.
Mental models play a vital role in User Experience (UX) and design because they relate to how
users perceive how a system works based on belief instead of factual concepts.

Figure 4: Mental Model versus Conceptual Model. Retrieved from (Chandrashekar, 2022)

As UX Designers, we can create products or solutions to simulate these models and make them
more usable and intuitive. User personas help us to understand the users we are designing for, to
put ourselves in their shoes and better understand their context.

When we look at understanding how several unique users solve complex problems, mental models
provide us with a bigger picture of the decision-making process through their perceptions.
Mental models are developed in the brain to cognitively map life situations, leading to assumptions
about similar future situations. Senge (1990) viewed mental models as our understanding of how the
world operates. This understanding gears us towards typical ways of thinking and acting.

Conceptual models are the model of an application that the designers want the users to understand.
They are abstract, psychological representations of how the users should carry out tasks. Generally,
conceptual models are identified at the beginning of the design process and are referred to for
direction and inspiration for the whole design process.
In most cases, companies and designers want mental and conceptual models to match. Why?
Because that is how they can provide an intuitive experience. It is essential to keep in mind that
mental models differ from person to person. Still, when studied carefully, more common patterns
emerge, which can be used as a reference to build more intuitive conceptual models. When early
user interfaces were built, designers tried to base their conceptual models around familiar
real-world mental models.

Figure 5: Home Desktop Analogy. Retrieved from (Chandrashekar, 2022)

The famous desktop metaphor is an example of a real-world concept being used to introduce a new
idea. The desktop graphical user interface was modelled around the actual objects on the top of a
user's desk, like documents and folders, and other things around the desk, like a trash bin. This
helped people adopt and transition to the GUI (graphical user interface) over the more accustomed
command-line interface.

Figure 6: Early GUI Analogy. Retrieved from (Chandrashekar, 2022)

4.3 When to Use Mental Models in UX Design


Mental models should be used when trying to understand user behavior. When designing an interface, it
is important to understand why users do what they do. Mental models can help you with this by
providing a framework for thinking about user behavior and how it will impact your design. Mental
models also work well in situations where the solution is not known, but there is an issue that needs
to be solved.

Mental models are also useful when users are having problems with a task in your design. By
discovering how your users think about the task they are trying to accomplish, you can better fit
your User Interface (UI) and workflow to match their mental model. These should be real users,
not internal stakeholders, unless they are your actual users (Hunt, 23).

4.4 When to Use Conceptual Models in UX Design


Conceptual models are used when designing interactions between components of systems. For
example, a conceptual model would be useful if we were building a chatbot or some other system
that had more than one component (i.e., multiple modules). A conceptual model helps us think
through all of the different flows and possible paths while keeping each part separate from the
others, which makes things easier to maintain and update.

Conceptual models are a good tool when dealing with a complex system that has "a lot of moving
parts" or interactions between components. The model can provide a nice shorthand for quickly
understanding what requirements are needed in a project and for finding gaps. If you are helping design
a conceptual model, be sure to involve stakeholders and other team members who are knowledge
experts across the model. They will help catch gaps or misunderstandings in the model (Hunt, 23).
In short, mental models are used to understand objects, people, or situations, while conceptual
models are used to understand the interactions between components of a system.

4.5 Importance of Conceptual Design


1. Quality assurance: conceptual models provide a solution model for baseline evaluation.
2. Formal example: team members can utilize their strengths in contributing to the design
process with a clear understanding of design targets.
3. Conceptual models improve people's understanding of the subject being modelled and
communicate details between people.
4. They provide a point of reference for design work and useful documentation for the future.
5. A conceptual model has four fundamental objectives:
a. Enhance an individual's understanding of the representative system.
b. Facilitate efficient conveyance of system details between stakeholders.
c. Provide a point of reference for system designers to extract system specifications.
d. Document the system for future reference and provide a means for collaboration.
Although conceptual models are not meant to be implemented, they play an important part in
the system's development: a conceptual model can clarify system properties, requirements,
and events that need to be addressed.

4.6 Conclusion
It's crucial to have a solid grasp of the problem domain. The creation of a conceptual model is the
fundamental part of interface design. It should be noted that the interaction modes offer a
framework for deciding what kind of conceptual model to create. Specific interface types known
as interaction styles are instantiated as a part of the conceptual model. A conceptual model can be
shaped by paradigms, theories, models, and frameworks as well.

To sum it all up, mental models cover everything from looking up the model number of your TV
to asking someone at the hardware store how to mix epoxy glue. Mental models are essentially
advanced affinity diagrams of behaviours, built from data gathered from representatives of the
target users. Designing something requires us to completely understand what the user wants to
get done. This can be achieved through extensive user interviews and/or ethnographic studies. The
findings are then curated and mapped as a mental model diagram. We will learn how to map
mental models in an upcoming session.
Self-Assessment Questions
Exercise 1.1

1. What is the best description of a conceptual model?


a. a high-level description of how a system is organized and how it operates
b. interaction paradigms and interaction modes
c. the problem space faced by the designer when gathering user requirements
d. none of the above
2. In understanding the problem space, a designer has to understand
a. What do you want to create?
b. What are your assumptions
c. Will it achieve what you hope it will
d. All of the above
3. A conceptual model is
a. a high-level description of how a system is organized and operates
b. a detailed description of how a system is organized and operates
c. a low-level description of how a system is organized and operates
d. none of the above
4. The four main interaction types are:
a. Instructing, conversing, manipulating and exploring
b. Instructing, conversing, manipulating and exploiting
c. Instructing, conversing, maintaining and exploring
d. Introducing, conversing, manipulating and exploring
5. Schneiderman (1983) describes Direct Manipulation as
a. Continuous representation of objects and actions of interest
b. Physical actions and button pressing instead of issuing commands with complex
syntax
c. Rapid reversible actions with immediate feedback on object of interest
d. All of the above.
Exercise 1.2

1. Describe what is meant by conceptual model


2. What is a mental model?
3. Compare and contrast mental model and conceptual model.
4. List and explain five reasons why conceptual design is important
5. Describe Interaction types
6. Mention six interface types
SESSION 5: USER COGNITION AND COGNITIVE MODEL

Welcome to session 5 of unit 1. Have you developed a website or an app before? Also, think
about using a computer to solve problems, or creating a game app to be played by others. What kinds
of cognitive processes go through your mind? In order to make human-computer interactions that
are easy to learn, easy to remember, and easy to apply to new problems, computer scientists must
understand something about human cognitive processes. Therefore, in this session we are going
to introduce you to user cognition and its application in Human Computer Interaction (HCI).

Objectives
By the end of this session, you will be able to:
a. describe the term cognition;
b. list and explain cognitive modes or types;
c. describe the impact of cognition on HCI;
d. explain the three main types of memory;
e. describe the cognitive process;

Now read on…

1.3. What is Cognition?


The term cognition refers to the mental processes involved in gaining knowledge and
comprehension: the process of acquiring knowledge and understanding through thought,
experience, and the senses. It includes all the mental processes that allow us to perceive, think,
remember, and learn.

In human-computer interaction (HCI), cognition is the study of how people interact with
computers and how to design better systems that take into account the way people think and
process information. It includes research on how people use computers for tasks such as problem
solving, decision making, and learning, as well as on how to design user interfaces that are
effective and easy to use.

Jung’s theory of cognitive functions is a starting point for understanding how the mind works. He
identified four different ways of processing information: sensation, intuition, thinking, and
feeling. Each of these functions operates in a different way, and they can be used together to help
us understand and make sense of the world around us.

In practice, cognitive functions can be used to help us improve our thinking and decision-making
skills. By understanding how each function works, we can learn to use them more effectively to
solve problems and make better choices.

The four cognitive functions can be helpful in different ways. Sensation helps us to focus on the
details of our environment. Intuition helps us to see the big picture and make connections between
ideas. Thinking helps us to analyze and understand information. Feeling helps us to make
decisions based on our values and emotions.

Each person relies on different cognitive functions to different degrees. Some people may prefer
to use one function more than others, and some may use all four functions equally. By
understanding the different strengths and weaknesses of each function, we can learn to use all of
them more effectively. Also, cognitive skills are essential for a person’s overall development. They
help us think, learn, remember and pay attention. They also allow us to make decisions and solve
problems.

1.4. Cognition Modes or Types of Cognition


Cognition, according to Norman (1993), has two general modes: experiential and reflective.
a) Experiential cognition:
This mode is more instinctive: it includes the state of mind in which we perceive, act, and react to
events around us effectively and effortlessly. It requires reaching a certain level of expertise and
engagement. Examples include driving a car, reading a book, having a conversation, and playing a
video game.

Experiential cognition allows us to quickly and automatically respond to the many stimuli we
encounter in our environment. It is the mode of cognition that is responsible for our ability to drive
a car or ride a bike without having to consciously think about each and every step involved.

b) Reflective cognition:
Reflective cognition involves our ability to think, compare, and make decisions. This kind of
cognition is what leads to new ideas and creativity. Examples include designing, learning, and
writing a book. Reflective cognition is slower and more deliberative. It is the mode of cognition
that allows us to reflect on our past experiences and plan for the future.

Norman points out that both modes are essential for everyday life but that each requires different
kinds of technological support. We need experiential cognition to quickly and automatically
respond to the many stimuli we encounter in our environment. We need reflective cognition to
reflect on our past experiences and plan for the future.

1.5. The impact of cognition on HCI


Cognitive Psychology is a major contributor to research in Human-Computer Interaction (HCI).
By applying psychological principles to understand and help develop models that explain and
predict human performance, Cognitive Psychology can help improve the usability of computers
and other technological devices.

Cognition is becoming increasingly important for interaction design as humans become more
reliant on technology for everyday tasks. Interaction involves many cognitive processes, such as
thinking, remembering, learning, and decision making. Designers need to take into account how
humans will interact with their designs, and how to make the user experience as smooth and
enjoyable as possible.

1.6. The three Components of Cognition


There are three main components of successful memory performance: encoding, storage,
and retrieval. All of these are necessary for a person to be able to effectively remember something.
In each of these domains, there are multiple important features required for understanding the
processes involved.
Encoding includes the process of getting information into memory in the first place. This can be
done through various means, such as paying attention to the information, rehearsal, and making
associations.

Storage is the second component, and refers to keeping information in memory over time. This
can be done through a variety of means as well, such as consolidation and elaboration.

Finally, retrieval is the process of accessing information that is stored in memory. This can be
done through different techniques, such as retrieval cues and associative retrieval.
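The three components can be pictured with a small sketch. The class, cue names, and stored items below are illustrative assumptions, not a real psychological model: encoding puts information in, the object's internal dictionary stands in for storage over time, and retrieval pulls information back out via a cue.

```python
class MemoryStore:
    """Toy sketch of the three memory components: encoding,
    storage, and retrieval. Purely illustrative; human memory
    is far richer than a dictionary."""

    def __init__(self):
        self._store = {}  # storage: keeps encoded items over time

    def encode(self, cue, information):
        # Encoding: get information into memory, associated with
        # a retrieval cue (e.g., an attended-to detail).
        self._store[cue] = information

    def retrieve(self, cue):
        # Retrieval: access succeeds only when a matching cue exists.
        return self._store.get(cue, "not recalled")


memory = MemoryStore()
memory.encode("smell of epoxy", "the hardware store")
print(memory.retrieve("smell of epoxy"))   # cue matches -> recalled
print(memory.retrieve("unrelated cue"))    # no matching cue -> "not recalled"
```

Note how retrieval fails gracefully when no cue matches: this mirrors the point above that successful remembering depends on retrieval cues, not just on the information having been stored.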

1.7. What are Cognitive Processes?


Cognitive development generally refers to the development of the mental process and capabilities
needed for thought, learning and perception. Three aspects of cognitive development that are often
studied are:
1. Knowledge and understanding of the world
2. Ability to acquire new information and learn
3. Processes underlying developmental changes in the first two areas

There is often a focus on understanding differences in cognitive development between individuals,
and how these differences arise. Such differences are typically sought in three domains: existing
knowledge about the problems, the ability to acquire new information about them, and process-level
differences underlying developmental changes in the first two areas.

Cognitive processes are a series of chemical and electrical signals that occur in the brain that allow
you to comprehend your environment and gain knowledge.
The common types of cognitive processes that humans often display are listed in Figure 1. These
include thinking, knowing, remembering, judging, problem-solving, perceiving, recognizing,
conceiving, reasoning, reading, speaking, listening, decision-making, acquiring skills, and creating
new ideas. Cognition is important for understanding and responding to the world around us
because it involves the ability to process and understand information.

Figure 1: Cognition Processes

It is important to note that many of these cognitive processes are interdependent. Thus, several of
them may be involved for a given activity. It is rare for one to occur in isolation. For example,
when you try to learn material for an exam, you need to attend to the material, perceive and
recognize it, read it, think about it, and try to remember it.
Below we describe six of the cognitive processes in detail, this is followed by a summary
highlighting core design implication for each. Most relevant for interaction design are attention
and memory which we describe in greatest detail.

i. Attention
The first process of cognition to discuss is attention. Attention means selecting things to
concentrate on at a point in time from the mass of stimuli around us. Attention allows us to focus
on information that is relevant to what we are doing. It also involves audio and/or visual senses.

Focusing on stimuli in your environment often requires conscious effort. Note that focused and
divided attention enable us to be selective in terms of the mass of competing stimuli, but limit
our ability to keep track of all events.
Information at the user interface should be structured to capture users’ attention, e.g., use
perceptual boundaries (windows), colour, reverse video, sound and flashing lights.

Consider the question: is it possible to perform multiple tasks without one or more of them being
detrimentally affected? Ophir et al. (2009) compared heavy versus light multitaskers. They found
that heavy multitaskers were more prone to being distracted than those who multitask
infrequently: heavy multitaskers are easily distracted and find it difficult to filter out irrelevant
information.

Activities on Attention
Which of the tables below makes it easier to find information?

Table 2: Find the price of a double room at the Holiday Inn in Columbia
Table 3: Find the price for a double room at the Quality Inn in Pennsylvania

Design implications for attention


• Make information salient when it needs attending to
• Use techniques that make things stand out like colour, ordering, spacing, underlining,
sequencing and animation
• Avoid cluttering the interface with too much information
• Search engines and form fill-ins that have simple and clean interfaces are easier to use

ii. Perception
The second process of cognition to discuss is perception. Perception is concerned with how
information is acquired from the world and transformed into experiences. Human perception
occurs through the five senses: sight, taste, smell, hearing, and touch. Perception is a cognitive
process because we often consciously and unconsciously interpret information gained through our
senses, forming thoughts, opinions, and emotional reactions. For instance, the smell of a
particular flower may remind you of a specific person and bring back a pleasant memory.

Activity on Perception
Which of the following background and text color is easiest to read and why?
Design implications for Perception
• Icons should enable users to readily distinguish their meaning
• Bordering and spacing are effective visual ways of grouping information
• Sounds should be audible and distinguishable
• Speech output should enable users to distinguish between the set of spoken words
• Text should be legible and distinguishable from the background
• Tactile feedback should allow users to recognize and distinguish different meanings.
• Design representations that are easily perceivable; for example, text should be legible and
icons easy to distinguish and read.

iii. Memory
The third process of cognition to discuss is memory. Memory is a vital part of how we perceive
the world around us. Human beings have both short-term and long-term memory capacities, and
we can create better designs by understanding how memory works and how we can work with that
capacity rather than against it. This is important for all designers but particularly so for information
visualization designers who need to ensure that their work is readily understood by the viewer in
order for it to be immediately useful.

Human memory is a powerful mental process that has many implications for life and how you
experience things, from remembering meaningful events to enabling you to execute tasks and
achieve goals. In essence, human memory has three facets: sensory memory, short-term memory
and long-term memory. The designer is most concerned with the first two types and strategically
designs to appeal to short-term and sensory memory.

Perhaps one of the most useful bits of information about human memory is that humans have
trouble remembering and engaging with anything that has more than 7 (+ or - 2) task items.
Designers take this memory limitation into account when presenting information and wireframing
products, in order to provide the most memorable and efficient user experience.

Memory involves first encoding and then retrieving knowledge. It is believed that humans do not
remember everything; memory involves filtering and processing what is attended to in order to
remember it. Sometimes, context (i.e., where and when) is important in affecting our memory and
enables us to remember things easily. In addition, we tend to recognize things much better than we
are able to recall them. For instance, we remember less about objects we have photographed than
about objects we observe with the naked eye (Henkel, 2014).

Three Types of Memory


There are three main types of memory that are processed in the brain. These are listed and
described below:

Figure 4: Three main types of memory

i. Sensory Memories
ii. Short-term Memories
iii. Long-term Memories

Sensory Memories
Sensory memories are the memories which are stored for tiny time periods and which originate
from our sensory organs (such as our eyes or our nose). They are typically retained for less than
500 milliseconds.

Visual sensory memory is often known as iconic memory. Sensory visual memories are the raw
information that the brain receives (via the optic nerve) from the eye. We store and process sensory
memories automatically – that is without any conscious effort to do so.

The processing of this information is called pre-attentive processing (i.e., it happens prior to our
paying attention to the information). It is a limited form of processing which does not attempt to
make sense of the whole image received, but attends rather to a small set of features of the image,
such as colors, shapes, tilt, curvature, contrast, etc. It is sensory memory which draws your
attention to the strawberries in this graphic.

Figure 3: Sensory Memories


Short-Term Memories
Short-term memory is used to process sensory memories which are of interest to us – for whatever
reason. The sensory memory is transferred to the short-term memory where it may be processed
for up to a minute (though if the memory is rehearsed – e.g., repeated – it may remain in short-
term memory for a longer period up to a few hours in length).

Short-term memory is of limited capacity. Experiments conducted by the psychologist George A.
Miller, reported in his paper “The Magical Number Seven, Plus or Minus Two”, suggest that we
can store between 5 and 9 similar items in short-term memory at most.

This capacity can be increased by a process known as “chunking”, where we group items to form
larger items. So, for example, you can memorize a 12-digit phone number in short-term memory
by taking digits in pairs (35) rather than singly (3 and 5), which gives you 6 chunks to remember
(which falls between 5 and 9) rather than 12 digits (which exceeds the capacity of short-term
memory).
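The pairing idea can be illustrated with a short sketch (the helper function and example number are hypothetical, not from any particular library):

```python
def chunk(digits, size=2):
    """Group a string of digits into fixed-size chunks.

    Chunking reduces the number of items short-term memory must
    hold: 12 single digits become 6 pairs, which falls within
    Miller's 5-9 item range.
    """
    return [digits[i:i + size] for i in range(0, len(digits), size)]


number = "035982160874"      # 12 items: exceeds the 7 +/- 2 limit
pairs = chunk(number)        # 6 items: within the 5-9 range
print(pairs)                 # ['03', '59', '82', '16', '08', '74']
print(len(number), "items ->", len(pairs), "chunks")
```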

Chunking can occur visually as well as through combination of numeric or alpha-numeric


attributes. A common example of this would be in a bar chart where a single bar may represent a
chunk of information. This is useful to the visual designer because it allows a visual representation
of information to be easily processed in short-term memory and for that representation to offer
more complex insights than an initial examination of the capacity of short-term memory might
allow.

Figure 4: Recalling a sequence by age

This graph shows how recall from short-term memory is limited, and how recall becomes worse
when people are asked to recall a sequence backwards.

Long-Term Memories
In most instances the memories transferred to our short-term memory are quickly forgotten. This
is probably a good thing: if we did not forget the huge volumes of information that we perceive
on a daily basis, we could well become overloaded and find it impossible to process information
in a meaningful way.

For most memories to transfer from short-term to long-term memory, conscious effort must be
made to effect the transfer. This is why students review for examinations: the repeated application
or rehearsal of information enables the transfer of the material they are studying into long-term
memory.

Figure 5: Van de Graaf Generator which can be used to generate static electricity

The image above is of a Van de Graaf Generator which can be used to generate static electricity –
you can then touch the generator and another person to give them a static shock. It’s worth
remembering that they won’t come back for a 2nd attempt…

It is also possible for a long-term memory to form through a meaningful association in the brain.
For example, we know that a static shock is painful even if we are only shocked once; it does not
take repeated shocks to memorize that. The meaningful connection between the pain and the shock
allows us to commit the memory to long-term storage. In fact, strong emotional or physical
connections are often the easiest way for something to enter long-term memory.

It is worth noting that the majority of designs, and information visualizations in particular, will not
be committed to long-term memory. The conclusions or understanding they bring may be
transferred to long-term memory (usually through revision or application), but the design itself
will not. The vast majority of interaction between the user and an information visualization occurs
in sensory and short-term memory.

Encoding is the first stage of memory. It determines which information is attended to in the
environment and how it is interpreted. The more attention paid to something, the more it is
processed in terms of thinking about it and comparing it with other knowledge, and the more
likely it is to be remembered. For example, when learning about HCI, it is much better to reflect
upon it, carry out exercises, have discussions with others about it, and write notes than to just
passively read a book, listen to a lecture, or watch a video about it.
Note that context affects the extent to which information can be subsequently retrieved. Sometimes
it can be difficult for people to recall information that was encoded in a different context. E.g.,
“You are on a train and someone comes up to you and says hello. You don’t recognize him for a
few moments but then realize it is one of your neighbors. You are only used to seeing your
neighbor in the hallway of your apartment block and seeing him out of context makes him difficult
to recognize initially”(Preece, Sharp, & Rogers, 2015).

Activity on Memory

Is Apple’s Spotlight search tool in Figure 5 any good?

Figure 5: Apple’s Spotlight search tool

Design implications of Memory


• Don’t overload users’ memories with complicated procedures for carrying out tasks.
• Design interfaces that promote recognition rather than recall
• Provide users with various ways of encoding information to help them remember. e.g.,
categories, colour, flagging, time stamping

iv. Learning
The fourth process of cognition to discuss is learning. You can learn through many cognitive
processes, such as memory, thought and perception. Combining multiple processes can allow you
to learn more quickly and retain more information. For example, reading, writing, listening,
verbally communicating and thinking about a language can help you learn one much faster than
any of those processes alone.

Design Implications for Learning:


• Design interfaces that encourage exploration
• Design interfaces that constrain and guide learners
• Dynamically linking concepts and representations can facilitate the learning of complex
material

v. Reading, speaking, and listening


The fifth process of cognition to discuss is reading, speaking, and listening.
The ease with which people can read, listen, or speak differs:
• Many prefer listening to reading
• Reading can be quicker than speaking or listening
• Listening requires less cognitive effort than reading or speaking
• Dyslexics have difficulties understanding and recognizing written words

Applications to Support Reading, Speaking, and Listening


• Speech-recognition systems allow users to interact with them by asking questions. e.g.,
Google Voice, Siri.
• Speech-output systems use artificially generated speech. e.g., written-text-to-speech
systems for the blind
• Natural-language systems enable users to type in questions and give text-based responses.
e.g., Ask search engine

Design implications for Reading, Speaking and Listening


• Speech-based menus and instructions should be short
• Accentuate the intonation of artificially generated speech voices, since they are harder to
understand than human voices
• Provide opportunities for making text large on a screen

vi. Problem-solving, planning, reasoning and decision-making


The sixth process of cognition covers problem-solving, planning, reasoning, and decision-making,
all of which involve reflective cognition. For example, thinking about what to do, what the options
are, and what the consequences might be often involves conscious processes, discussion with
others (or oneself), and the use of artefacts such as maps, books, and pen and paper. It may involve
working through different scenarios and deciding which is the best option.

CONCLUSION

We have learnt in this session that cognition in human computer interaction is the study of how
humans interact with computers, and how humans process and understand the information
presented to them by computers. It is concerned with the ways in which people interact with
technology, and how technology can be designed to better meet the needs of users.
Self-Assessment Questions

Exercise 1.1

1. What is cognition?
2. Describe the two general modes of cognition.
3. Describe the cognition processes with example.
4. What are the design implications for each of the cognition processes?
5. What are implications for designing technologies to support how people will learn, and what
they learn?
6. Explain the three components of cognition.
7. Describe the following types of memory:
i. Sensory Memories
ii. Short-term Memories
iii. Long-term Memories

References

Preece, J., Sharp, H., & Rogers, Y. (2015). Interaction design: Beyond human-computer
interaction. John Wiley & Sons.
SESSION 6: COGNITIVE MODEL

Welcome to the last session of this unit. Congratulations! In the previous sessions, you were
introduced to users’ cognition. In this session, we will build upon the previous session by
introducing you to cognitive models. In addition, this session will emphasize two cognitive
model techniques: the GOMS model and the task analysis model. Thus, we will focus on the GOMS
model as a task analytic notation and how to construct GOMS models. We hope you will enjoy it.

Objectives
By the end of this session, you should be able to:
j. demonstrate basic knowledge of models;
k. demonstrate basic knowledge of cognitive models;
l. describe the types of cognitive models;
m. explain the concept of Goals, Operators, Methods and Selection (GOMS) with examples;
n. demonstrate Hierarchical Task Analysis (HTA) with examples.

Now read on…

6.1 What is a Model?
A model can be described as a simplified representation of a system at some particular point in
time or space intended to promote understanding of the real system (Bellinger, 2004). A model
can also be described as an abstraction of a system, aimed at understanding, communicating,
explaining, or designing aspects of interest of that system. The model is related to the system by
an explicit or implicit mapping (Sanford Friedenthal, 2010).

Models for user-interface interactions must simulate both the user and the interface. In HCI design,
models are used to guide the creation of an interface. One common model outside of HCI is the
blueprint used in the construction of a house, which serves as a guide for putting up the building
easily and appropriately.

6.2 What is a Cognitive Model?


Hello class, we have studied cognition in the previous session. In relation to that can anyone of
you describe what a cognitive model is all about? Because HCI is concerned with such issues as
users’ capabilities and preferences and the degree to which interfaces support their activities, this
calls for the understanding of the term cognitive model.

A cognitive model is an approximation of one or more cognitive processes in humans or other
animals for the purposes of comprehension and prediction. It is the modelling of some aspects of
the user, or a representation of users, as they interact with an interface. According to Ed (2023), a
cognitive model “is an area of computer science that deals with simulating human problem-solving
and mental processing in a computerized model. Such a model can be used to simulate or predict
human behavior or performance on tasks similar to the ones modeled and improve human-
computer interaction”. Cognitive modeling can also be described as a form of task analysis, which
can evolve into many areas of human factors.

More importantly, cognitive modeling is used in numerous artificial intelligence (AI) applications,
such as expert systems, natural language processing (NLP), robotics, virtual reality (VR)
applications, and neural networks. Cognitive models are also used to improve products in areas
such as human factors engineering and computer game and user interface design.

Research into cognitive modeling is currently being conducted by academic and industry groups,
including MIT, IBM and Sandia National Laboratories.
An advanced application of cognitive modeling is the creation of cognitive machines, which are
AI programs that approximate some areas of human cognition. One of the goals of Sandia's project
is to make human-computer interaction more like an interaction between two humans (Ed, 2023).

In terms of information processing, cognitive modeling is the modeling of human perception,
reasoning, memory, action, understanding, knowledge, intentions, and processing.

6.3 Types of Cognitive Models in HCI


There are many types of cognitive models. They can range from box-and-arrow diagrams to a
set of equations to software programs that interact with the same tools that humans use to complete
tasks (e.g., a computer mouse and keyboard). They can be categorized as follows:

i. Hierarchical models are used to represent a user’s task and goal structure. This type of
model is usually used in cases where the user needs to complete a task that has multiple
steps or goals.
ii. Linguistic models are used to represent the user-system grammar. This type of model is
usually used in cases where the user needs to interact with the system using natural
language.
iii. Physical and device models are used to represent human motor skills. This type of model
is usually used in cases where the user needs to interact with the system using some type
of physical device.
iv. Discrepancy detection models signal when there is a difference between an individual's
actual state or behavior and the expected state or behavior as predicted by the cognitive
model. That information is then used to increase the complexity of the model. Some highly
sophisticated programs model specific intellectual processes, and techniques such as
discrepancy detection are used to improve these complex models. According to Forsythe,
the cognitive machines they've created have the capacity to infer user intent (which is not
always consistent with behavior), store information from experiences similarly to human
memory, and call upon expert systems for advice when they need it.
v. Neural networks are another type of cognitive model. This model was first hypothesized
in the 1940s, but it has only recently become practical thanks to advancements in data
processing and the accumulation of large amounts of data to train algorithms.
Neural networks work similarly to the human brain by running training data through a large
number of computational nodes, called artificial neurons, which pass information back and
forth between each other. By accumulating information in this distributed way, applications
can make predictions about future inputs.

vi. Reinforcement learning is an increasingly prominent area of cognitive modeling. This
approach has algorithms run through many iterations of a task that takes multiple steps,
incentivizing actions that eventually produce positive outcomes while penalizing actions
that lead to negative ones. This is a primary part of the AI algorithm that Google's
DeepMind used for its AlphaGo application, which bested the top human Go players in
2016.

These models, which can also be used in NLP and smart assistant applications, have improved
human-computer interaction, making it possible for machines to have rudimentary conversations
with humans.
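As a rough illustration of the discrepancy detection idea from the list above, the sketch below (with hypothetical task steps, not taken from any real system) compares a user's observed actions against those a cognitive model predicts, and reports where they diverge. A real system would feed such signals back into the model to refine it.

```python
def detect_discrepancies(expected, observed):
    """Return the steps at which observed behavior departs from the
    behavior predicted by a cognitive model of the task."""
    discrepancies = []
    # Walk the two action sequences in parallel, step by step.
    for step, (exp, obs) in enumerate(zip(expected, observed), start=1):
        if exp != obs:
            discrepancies.append((step, exp, obs))
    return discrepancies


# The model predicts the user will save before closing; the user does not.
model_prediction = ["open_file", "edit", "save", "close"]
actual_behavior = ["open_file", "edit", "close", "close"]

for step, exp, obs in detect_discrepancies(model_prediction, actual_behavior):
    print(f"Step {step}: expected '{exp}' but observed '{obs}'")
# -> Step 3: expected 'save' but observed 'close'
```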

6.4 Cognitive Model Techniques


The cognitive model techniques are;
1. Goals, Operators, Methods and Selection (GOMS)
2. Hierarchical Task Analysis (HTA)
3. Cognitive Complexity Theory (CCT)
4. Linguistic and grammatical models
5. BNF (Backus-Naur Form)
6. Task-Action Grammar (TAG)
7. Physical and device-level models
8. Keystroke-Level Model
9. Three State Model

Three of these techniques are described below:

6.4.1 What is GOMS?


GOMS stands for Goals, Operators, Methods, and Selection. GOMS is a family of predictive
models of human performance that can be used to improve the efficiency of human-machine
interaction by identifying and eliminating unnecessary user actions.

It is a detailed description of knowledge required by an expert user to perform a specific task


without error. It is a useful approximation of the processes underlying human behavior. It is
therefore the description of the knowledge that a user must have in order to carry out tasks on a
device or system. It is the representation of the “how to do it” knowledge that is required by a
system in order to get the intended tasks accomplished.

The analysis of a task into Goals, Operators, Methods, and Selection rules (GOMS) is an
established method for characterizing a user's procedural knowledge. When combined with
additional theoretical mechanisms, the resulting GOMS model provides a way to quantitatively
predict human learning and performance for an interface design, in addition to serving as a useful
qualitative description of how the user will use a computer system to perform a task (Kieras, 2004).
In 1980, Card, Moran, and Newell developed two versions of GOMS – the CMN-GOMS and the
KLM-GOMS. The CMN-GOMS is the original GOMS; it is a top-level task analysis. KLM-GOMS, a keystroke-level model, is the simplest GOMS.
Other GOMS models are:

Critical-Path Method GOMS (CPM-GOMS), also called Cognitive Perceptual Motor GOMS, was developed by Bonnie John; it eliminates the restriction that actions are performed sequentially.

Natural GOMS Language (NGOMSL) was developed by David Kieras; it formalizes the description of tasks and enables automated calculation of task execution time and learning time.

The simplest and most frequently used GOMS variant is KLM-GOMS (Keystroke-Level Model),
where empirically derived values for basic operators like keystrokes, button presses, double clicks,
and pointer movement time, are used to estimate task times.
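As a sketch of how such an estimate is computed, the following Python snippet sums commonly cited approximate operator times from Card, Moran and Newell (these are averages; real users vary). The two methods for deleting a file are hypothetical, simplified operator sequences for illustration.

```python
# KLM operator unit times (approximate, commonly cited values):
# K = keystroke, P = point with mouse, B = press mouse button,
# H = home hands between keyboard and mouse, M = mental preparation.
OPERATOR_SECONDS = {"K": 0.2, "P": 1.1, "B": 0.1, "H": 0.4, "M": 1.35}

def klm_estimate(operators):
    """Sum the unit times for a sequence of KLM operators."""
    return sum(OPERATOR_SECONDS[op] for op in operators)

# Deleting a file via a menu: think, point at the file, click,
# think, point at the Delete option, click.
menu_method = ["M", "P", "B", "M", "P", "B"]
# Deleting via the keyboard: home to keyboard, think, press Delete.
key_method = ["H", "M", "K"]

print(round(klm_estimate(menu_method), 2))  # menu method estimate (seconds)
print(round(klm_estimate(key_method), 2))   # keyboard method estimate (seconds)
```

Comparing the two totals is how KLM-GOMS identifies the faster method and flags unnecessary user actions.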

Note that the other three major GOMS variants (CMN-GOMS, NGOMSL, and CPM-GOMS)
require extensive training and familiarity with Human-Computer Interaction principles to perform
an analysis.

Below we define each of the key terms in GOMS:


Goals - what the user wants to achieve
Operators - basic actions user performs
Methods - decomposition of a goal into subgoals/operators
Selection - means of choosing between competing methods

6.4.2 What does a GOMS Task Analysis Involve?


It involves defining and then describing the user’s
• Goals: Something that the user tries to accomplish (action-object pair, e.g., delete word).
• Methods: A learned sequence of steps that accomplish a task
– How do you do it on this system? (Could be long and tedious…)
• Selection Rules: Only when there are clear multiple methods for the same goal.
• Operators: Elementary perceptual, cognitive and motor acts that cause change (external vs.
mental). It also uses action-object pair (e.g., press key, select menu, make gesture, speak
command...).

GOMS analysis - Example 1: Closing an Internet Explorer window


GOAL: CLOSE-WINDOW
. [select GOAL: USE-MENU-METHOD
. . MOVE-MOUSE-TO-FILE-MENU
. . PULL-DOWN-FILE-MENU
. . CLICK-OVER-CLOSE-OPTION
GOAL: USE-CTRL-W-METHOD
. . PRESS-CONTROL-W-KEYS]
For a particular user, U1:
Rule 1: Select USE-MENU-METHOD unless another
rule applies
Rule 2: If the application is GAME,
select CTRL-W-METHOD
Example 1 above has one Goal with either of two Methods, one of which requires a sequence of three Operators; for user U1 we have two Selection rules.
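Example 1 can also be sketched as a small Python structure, with the selection rules expressed as a function. This is an illustrative encoding, not a formal GOMS notation.

```python
# The CLOSE-WINDOW goal: each Method is a list of Operators, and the
# Selection rules choose between the competing Methods.
goms = {
    "goal": "CLOSE-WINDOW",
    "methods": {
        "USE-MENU-METHOD": ["MOVE-MOUSE-TO-FILE-MENU",
                            "PULL-DOWN-FILE-MENU",
                            "CLICK-OVER-CLOSE-OPTION"],
        "USE-CTRL-W-METHOD": ["PRESS-CONTROL-W-KEYS"],
    },
}

def select_method(application):
    """Selection rules for user U1: use the menu unless the
    application is a game, in which case use the shortcut."""
    if application == "GAME":
        return "USE-CTRL-W-METHOD"
    return "USE-MENU-METHOD"

print(select_method("GAME"))
print(goms["methods"][select_method("EDITOR")])
```

Encoding a model this way makes it easy to count operators per method, which is the first step toward the quantitative predictions GOMS is used for.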

GOMS analysis - Example 2: Deleting a file using Windows Explorer


This can be done using three alternative methods: drag-to-trash, delete-key, or right-click as shown
in the GOMS analysis below. It is also clear that within the delete-key method we have alternative
sub-goals: confirm with keyboard or confirm with mouse.

GOAL: DELETE-FILE
. LOCATE-FILE
. MOVE-CURSOR-OVER-FILE
. [SELECT GOAL: DRAG-TO-TRASH-METHOD
. . HOLD-MOUSE-BUTTON-DOWN
. . LOCATE-TRASH-ICON
. . MOVE-CURSOR-TO-TRASH-ICON
. . VERIFY-TRASH-IS-REVERSE-VIDEO
. . RELEASE-MOUSE-BUTTON
. GOAL: USE-DELETE-KEY-METHOD
. . CLICK-ON-FILE
. . PRESS-DELETE-KEY
. . LOCATE-CONFIRM-YES
. . [SELECT GOAL: CONFIRM-YES-
KEYBOARD-METHOD
. . . PRESS-Y-KEY
. . GOAL: CONFIRM-YES-MOUSE-METHOD
. . . MOVE-CURSOR-OVER-YES-BUTTON
. . . CLICK-ON-YES-BUTTON]
. GOAL: USE-RIGHT-CLICK-OPTION-METHOD

. . RIGHT-CLICK-ON-FILE-AND-HOLD-DOWN
. . LOCATE-DELETE-OPTION
. . MOVE-CURSOR-OVER-DELETE-OPTION
. . RELEASE-MOUSE-BUTTON
. . LOCATE-CONFIRM-YES
. . …

In order to make human-computer interactions that are easy to learn, easy to remember, and easy to apply to new problems, computer scientists must understand something about human learning, memory, and problem solving. While designing the user interfaces of these systems, the cognitive processes whereby users interact with computers must be taken into account, because user attributes usually do not match computer attributes. We should also take into account that computer systems can have non-cognitive effects on the user, for example the user's response to virtual worlds.
The processes which contribute to cognition include:
a) Understanding,
b) Remembering
c) Reasoning
d) Perception and recognition
e) Memory
f) Learning
g) Attention
h) Being aware
i) Acquiring skills
j) Creating new ideas
k) Reading, speaking, and listening
l) Problem solving, decision-making.

GOMS analysis – Example 3: File & directory operations - a better version:


– Method for goal: delete an object.
• Step 1. drag object to trash.
• Step 2. Return with goal accomplished.
– Method for goal: move an object.
• Step 1. drag object to destination.
• Step 2. Return with goal accomplished.

GOMS analysis – Example 4: the drag operation


– Method for goal: drag item to destination.
• Step 1. Locate icon for item on screen.
• Step 2. Move cursor to item icon location.
• Step 3. Hold mouse button down.
• Step 4. Locate destination icon on screen.
• Step 5. Move cursor to destination icon.
• Step 6. Verify the destination icon.
• Step 7. Release mouse button.
• Step 8. Return with goal accomplished

6.4.2.1 Stages of Hierarchical Task Analysis (HTA)

1. Starting the analysis


a) Specify the main task.
b) Break down the main task into 4-8 subtasks, and specify them in terms of objectives. Cover the whole
area of interest.
c) Draw out as layered plans, logically & technically correct. None should be missing.

2. Progressing the analysis


a) Decide on level of detail and stop decomposition. Should be consistent between tasks. Can
range from detailed to high level description.
b) Decide if a depth first or breadth first decomposition should be done. Can alternate between
the two.
c) Label and number the HTA.
3. Finalizing the analysis.
a) Check that decomposition and numbering is consistent. May produce a written account of the
processes.
b) Have a second person look it over. They should know the tasks but not be involved in the
analysis.
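The stages above can be sketched in Python by holding an HTA as a nested structure and generating the hierarchical numbering from the structure itself, so the labelling stays consistent by construction, as the finalizing stage requires. The letter-preparation subtasks below are illustrative, loosely following the letter-printing task in Example 1.

```python
# Each task is a (name, subtasks) pair; subtasks nest recursively.
hta = ("Prepare and print a letter", [
    ("Open Microsoft Word", []),
    ("Type the letter", [
        ("Enter the address", []),
        ("Enter the body text", []),
    ]),
    ("Print the document", [
        ("Open the Print dialog", []),
        ("Choose a printer and click Print", []),
    ]),
])

def number_tasks(task, prefix="0"):
    """Walk the hierarchy depth first, yielding (number, name) pairs.
    The main task is 0; its subtasks are 1, 2, ... and deeper levels
    get dotted numbers such as 2.1, 2.2."""
    name, subtasks = task
    yield prefix, name
    for i, sub in enumerate(subtasks, start=1):
        child = f"{i}" if prefix == "0" else f"{prefix}.{i}"
        yield from number_tasks(sub, child)

for number, name in number_tasks(hta):
    print(number, name)
```

Because the numbers are derived rather than typed by hand, checking that "decomposition and numbering is consistent" becomes automatic.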

Practical Examples of Task Analysis

Example 1
Diagrammatic HTA – Prepare and Print a letter using Microsoft Word
Example 2
Diagrammatic HTA – for Using a Website

6.4.3 Limitations of Cognitive Modeling


Having seen the benefits of cognitive modelling, especially in relation to AI, research has shown that it is not without problems. Who can tell us some of these problems?
Well, despite advancements in applying cognitive models to artificial intelligence, it still falls short
of its true goal of simulating human thinking. In neural networks, for example, algorithms must
see thousands - or even millions - of examples of training data before they can make predictions
about similar data in the future. Even then, they can only make inferences about the narrow topic
area on which they trained.

This is very different from how human brains work. The human brain uses a combination of
context and more limited experience to make generalizations about new experiences, something
even the most advanced cognitive models can't do today.
The most advanced biological research into the human brain still lacks a complete picture of how
it works. Even if that baseline information is established, transposing human thought processes
onto computer programs is another leap entirely.
Self-Assessment Questions
Exercise 1.6
1. What is a Model?
2. Describe the term cognitive model with an example.
3. Explain four benefits of modelling.
4. Compare and contrast cognitive model and Hierarchical task analysis.
5. Briefly describe four techniques of modelling.
6. Demonstrate Hierarchical task analysis with example.
7. Demonstrate cognitive model with example.
SESSION 1: USER INTERFACES AND INTERACTION STYLES

Welcome to session 1, unit 2. In this session, we would look at user interfaces and interaction
techniques or styles. Both user interfaces and interaction techniques provide a useful focus for
human-computer interaction research because their importance cannot be over emphasized. Thus,
the need to further discuss these techniques. We hope you will enjoy the session.
Now let’s begin by looking at the objectives of this session.

Objectives
By the end of this session, you should be able to:
a. define human computer interface with examples;
b. list and describe the types and uses of user interfaces
c. describe in detail interaction techniques;
d. list and describe examples of interaction techniques;
e. differentiate between command line and graphic user interface;
f. mention the advantages of interaction techniques

Now read on…

1.1 What is a User Interface?

In computer science and human-computer interaction, the user interface (of a computer program)
refers to the graphical, textual and auditory information the program presents to the user, and the
control sequences (such as keystrokes with the computer keyboard, movements of the computer
mouse, and selections with the touchscreen) the user employs to interact with the program.

A user interface (UI) is a conduit between human and computer system. It is the space where a
user will interact with a computer or machine to complete tasks as illustrated in figure 1. The
purpose of a UI is to enable a user to effectively control a computer or machine they are interacting
with, and for feedback to be received in order to communicate the effective completion of tasks.
A successful user interface should be intuitive (not require training to operate), efficient (not create
additional or unnecessary friction) and user-friendly (be enjoyable to use) (Interaction, 2023).

Figure 6: Example user interfaces


There are different kinds of UI as shown in Table 1. The most common type is the Graphical User
Interface (GUI) which encompasses elements such as text, links, buttons and images to construct
a design system that form the user experience. When we refer to a UI in Software Design, Digital
Design, Web Design or User experience (UX) Design, we usually mean a graphical user interface.
In UX Design, a user interface and the resulting behaviours are an end output of the design process.

User interfaces can be visualised in many ways and in many degrees of fidelity. On the web user
interfaces are usually rendered as Hypertext Markup Language (HTML) and Cascading Style
Sheets (CSS), and in native applications using native or custom libraries. A design system is often
visualised as a collection of UI elements that define guidelines for building a user experience
within a given platform.

Table 4: Interface Types

Example interfaces in the real world that users can interact with include the VCR, wristwatch, phone, copier, running shoes, plane cockpit, and airline reservation system.

1.1.1 Elements of User Interface

The UI elements are the controls or widgets that represent the parts of the interface that the user
can directly interact with. They may include:
• Form Controls: buttons, text fields, checkboxes, radio buttons, dropdown lists, list boxes,
toggles, date field
• Navigational Components: breadcrumb, slider, search field, pagination, tags, icons.
• Informational Components: tooltips, icons, progress bar, notifications, message boxes,
modal windows.
• Icons: these are images used to communicate a variety of things to users. They can help to
better communicate content and trigger a specific action.
• Containers or accordion: Accordions let users expand and collapse sections of content. They
help users navigate material quickly and allow the UI designer to include large amounts of
information in limited space.
• Menus
• Windows
• Tabs
• Cursor
• Pointer
• Adjustment handles
• Virtual trash bin
• Buttons
• Form elements such as check box, radio button, text box, etc.

1.1.2 Best Practices for Designing an Interface


According to usability.gov (2023), once you know your users, make sure to consider the
following when designing your interface:
• Keep the interface simple. The best interfaces are almost invisible to the user. They avoid
unnecessary elements and are clear in the language they use on labels and in messaging.
• Create consistency and use common UI elements. By using common elements in your UI,
users feel more comfortable and are able to get things done more quickly. It is also important
to create patterns in language, layout and design throughout the site to help facilitate efficiency.
Once a user learns how to do something, they should be able to transfer that skill to other parts
of the site.
• Be purposeful in page layout. Consider the spatial relationships between items on the page
and structure the page based on importance. Careful placement of items can help draw attention
to the most important pieces of information and can aid scanning and readability.
• Strategically use color and texture. Make use of color, light, contrast, and texture to your
advantage.
• Use typography to create hierarchy and clarity. Carefully consider how you use typeface.
Different sizes, fonts, and arrangements of the text help increase legibility and readability.
• Make sure that the system communicates what’s happening. Always inform your users of
location, actions, changes in state, or errors. The use of various UI elements to communicate
status and, if necessary, next steps can reduce frustration for your user.
• Think about the defaults. By carefully thinking about and anticipating the goals people bring
to your site, you can create defaults that reduce the burden on the user. This becomes
particularly important when it comes to form design where you might have an opportunity to
have some fields pre-chosen or filled out.

1.1.3 User Interface Engineering

This is the design of computers, appliances, machines, mobile communication devices, software
applications, and websites with the focus on the user's experience and interaction. The goal of user
interface design is to make the user's interaction as intuitive as possible. The intuitiveness of an
interface may depend on symbology from an artistic perspective and functionality from a technical
engineering perspective. The user interface provides means of:
• Input – Allowing the users to manipulate a system.
• Output – Allowing the system to indicate the effects of the users’ manipulation.

1.2 What are Interaction Techniques/Styles?

An interaction technique (or interaction style) is a way of using a physical input/output device to perform a generic task in a human-computer dialogue. It represents an abstraction of some common class of interactive task, for example, choosing one of several objects shown on a display screen. It can also refer to how the user can communicate or otherwise interact with the computer system to carry out a specific task.
According to Wikipedia, “an interaction technique, user interface technique or input technique
is a combination of hardware and software elements that provides a way for computer users to
accomplish a single task. For example, one can go back to the previously visited page on a Web
browser by either clicking a button, pressing a key, performing a mouse gesture or uttering a speech
command. It is a widely used term in human-computer interaction. In particular, the term "new
interaction technique" is frequently used to introduce a novel user interface design idea”.

There are many types of interaction techniques or styles. Below we have listed and described nine of them:
1. Command-line interfaces
2. Form-fills and spreadsheets interfaces
3. Menu-based interfaces
4. Direct manipulation
5. Natural language interfaces
6. Questions/Answer Query dialog interfaces
7. Point-and-click interfaces
8. 3D interfaces
9. The WIMP interfaces
The WIMP interface is the most common and the most complex. Below we present their explanations and examples.

1. Command-Line Interface (or Command Entry)

This is a mechanism for interacting with a computer operating system or software by typing commands to perform specific tasks. This method of instructing a computer to perform a given task is referred to as entering a command. Inputs are accepted via the keyboard only. The command line interface, as shown in figure 2, is the earliest and first interactive dialog style to be commonly used and, in spite of the availability of menu-driven interfaces, it is still widely used, especially by administrators on Linux/Unix operating systems.

Figure 2: Command Line Interface

You should note that this type of user interface is not suitable for beginners. In a command-line interface, commands must be remembered and no hints are provided, so it is mostly used by expert users who type in the commands to be executed.
Advantages of Command Language
• Flexible.
• Appeals to expert users.
• Supports creation of user-defined “scripts” or macros.
• Is suitable for interacting with networked computers even with low bandwidth.
• Allow user initiative.
• Cheap
Disadvantages of Command Language
• Retention of commands is generally very poor. Require substantial training and
memorization.
• Not useful for novice users. Learnability of commands is very poor.
• Poor error handling. Error messages and assistance are hard to provide because of the
diversity of possibilities plus the complexity of mapping from tasks to interface concepts
and syntax.
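A minimal sketch of command entry in Python: the user's typed line is parsed into a command name and arguments and dispatched to a handler. The command names and handlers are hypothetical, not real shell commands.

```python
def cmd_list(args):
    """Hypothetical 'list' command: list files, optionally in a folder."""
    return "listing files" + (f" in {args[0]}" if args else "")

def cmd_del(args):
    """Hypothetical 'del' command: delete the named file."""
    return f"deleting {args[0]}"

COMMANDS = {"list": cmd_list, "del": cmd_del}

def execute(line):
    """Parse one typed line and run the matching command handler."""
    name, *args = line.split()
    handler = COMMANDS.get(name)
    if handler is None:
        # Poor error handling is a known weakness of CLIs; at minimum,
        # report the unknown command instead of failing silently.
        return f"unknown command: {name}"
    return handler(args)

print(execute("del report.txt"))
print(execute("copy a b"))
```

The sketch also shows why commands must be memorized: nothing on screen hints that `list` and `del` exist, which is exactly the learnability problem noted above.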

2. Natural language
Perhaps the most attractive means of communicating with computers, at least at first glance, is by
natural language. Users, unable to remember a command or lost in a hierarchy of menus, may long
for the computer that is able to understand instructions expressed in everyday words! Natural
language understanding, both of speech and written input, is the subject of much interest and
research.

Unfortunately, however, the ambiguity of natural language makes it very difficult for a machine
to understand. Language is ambiguous at a number of levels. First, the syntax, or structure, of a
phrase may not be clear. For example, a sentence such as "I saw the man with the telescope" can be parsed in two quite different ways.

3. Question/answer and query dialog


Question and answer dialog is a simple mechanism for providing input to an application in a
specific domain. The user is asked a series of questions (mainly with yes/no responses, multiple
choice, or codes) and so is led through the interaction step by step. An example of this would be
web questionnaires. These interfaces are easy to learn and use, but are limited in functionality and
power. As such, they are appropriate for restricted domains (particularly information systems) and
for novice or casual users.
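The step-by-step flow can be sketched in Python as follows. The questions are invented, and a list of answers stands in for typed user input so the example runs without an interactive terminal.

```python
# The system leads the user through fixed questions, one at a time,
# each with a restricted set of allowed responses.
QUESTIONS = [
    ("Are you over 18? (yes/no)", {"yes", "no"}),
    ("Do you agree to the terms? (yes/no)", {"yes", "no"}),
]

def run_dialog(answers):
    """Step through the questions, rejecting answers outside the
    allowed set, and return the collected responses."""
    responses = []
    for (question, allowed), answer in zip(QUESTIONS, answers):
        if answer not in allowed:
            return f"invalid answer to: {question}"
        responses.append((question, answer))
    return responses

print(run_dialog(["yes", "no"]))
```

Because every step constrains what the user may enter, the dialog is easy to learn but, as the text notes, limited in functionality and power.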

4. Form-fills
Form-fill interface, as shown in figure 3, is used primarily for data entry but can also be useful in data retrieval applications. The user is presented with a display resembling a paper form, with slots to fill in. Often the form display is based upon an actual form with which the user is familiar, which makes the interface easier to use.
Figure 3: Form Example

Advantages of Form Fill-in


• Simplifies data entry.
• Shortens learning in that the fields are predefined and need only be recognised.
• Guides the user via the predefined rules.
Disadvantages of Form Fill-in
• Consumes screen space.
• Usually sets the scene for rigid formalisation of the business processes.
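A sketch of the "guides the user via predefined rules" idea in Python: each field of a hypothetical form carries simple validation rules, and submitted values are checked against them before the form is accepted.

```python
# Predefined fields: the user only needs to recognise each slot and
# fill it in, rather than recall any command syntax.
FIELDS = {
    "name":  {"required": True},
    "email": {"required": True, "must_contain": "@"},
    "phone": {"required": False},
}

def validate(form):
    """Check submitted values against each field's predefined rules,
    returning a list of guidance messages (empty means the form is OK)."""
    errors = []
    for field, rules in FIELDS.items():
        value = form.get(field, "")
        if rules.get("required") and not value:
            errors.append(f"{field} is required")
        needle = rules.get("must_contain")
        if value and needle and needle not in value:
            errors.append(f"{field} must contain '{needle}'")
    return errors

print(validate({"name": "Ama", "email": "ama.example.com"}))
```

The error messages play the guiding role the advantages list describes: the form itself tells the user what is missing or malformed.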

5. Menus Based
In a menu-driven interface, as shown in figure 4, the set of options available to the user is displayed
on the screen, and selected using the mouse, or numeric or alphabetic keys.
Two Types of Menus:
• Pull-down menu
• Pop-up menu

Figure 4: Menu Driven Interface


Advantages of Menu Selection
• Ideal for novice or intermittent users.
• Can appeal to expert users if display and selection mechanisms are rapid and if appropriate
“shortcuts” are implemented.
• Affords exploration (users can “look around” in the menus for the appropriate command,
unlike having to remember the name of a command and its spelling when using command
language).
• Structures decision making.
• Allows easy support of error handling as the user’s input does not have to be parsed (as with
command language).

Disadvantages of Menu Selection


• Too many menus may lead to information overload or complexity of discouraging proportions.
• May be slow for frequent users.
• May not be suited for small graphic displays.
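A menu-driven dialog can be sketched in Python as follows: the options are displayed and chosen by number, so nothing has to be remembered or spelled, and error handling is easy because the input never needs to be parsed as a command. The menu labels are illustrative.

```python
MENU = ["Open file", "Save file", "Print", "Exit"]

def render_menu():
    """Display the available options so the user can 'look around'
    instead of recalling command names."""
    return "\n".join(f"{i}. {label}" for i, label in enumerate(MENU, 1))

def select(choice):
    """Map a typed number back to the menu option, rejecting anything
    outside the displayed range with a simple guidance message."""
    if not choice.isdigit() or not 1 <= int(choice) <= len(MENU):
        return f"Please enter a number between 1 and {len(MENU)}."
    return MENU[int(choice) - 1]

print(render_menu())
print(select("3"))
```

Contrast this with the command-line style: the valid inputs are visible on screen, which is why menus suit novice and intermittent users.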

6. Direct Manipulation (DM)


Manipulating means interacting with objects in a virtual or physical space by controlling and
navigating through them. For example, virtual objects can be manipulated by moving, selecting,
opening, closing, dragging, and zooming actions on virtual objects.
It can also involve actions using physical controllers (e.g., Wii) or air gestures (e.g., Kinect) to
control the movements of screen avatar. Or by using tagged physical objects (e.g., balls) that are
manipulated in a physical world result in physical/digital events (e.g., animation).

In 1983, Shneiderman coined the term Direct manipulation (DM), this came from his fascination
with computer games at the time. DM implies continuous representation of objects and actions of
interest. It means that objects of interest are represented as distinguishable objects in the UI and
are manipulated in a direct fashion. Again, it implies rapid reversible actions with immediate
feedback on object of interest. Example, the physical actions and button pressing instead of issuing
commands with complex syntax.

Direct manipulation systems have the following characteristics:


• Visibility of the object of interest.
• Rapid, reversible, incremental actions.
• Replacement of complex command language syntax by direct manipulation of the object
of interest.
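The "rapid, reversible, incremental actions" characteristic can be sketched in Python with a history stack, a common way of making each manipulation immediately undoable. The `Canvas` class and its `drag` action are hypothetical.

```python
class Canvas:
    """A single draggable object whose every move can be reversed."""

    def __init__(self):
        self.position = (0, 0)   # the object of interest stays visible
        self.history = []        # past positions, for reversal

    def drag(self, dx, dy):
        """Apply one incremental move and remember how to undo it."""
        self.history.append(self.position)
        x, y = self.position
        self.position = (x + dx, y + dy)

    def undo(self):
        """Reverse the most recent action, if there is one."""
        if self.history:
            self.position = self.history.pop()

c = Canvas()
c.drag(5, 0)    # incremental action with immediate feedback
c.drag(0, 3)
c.undo()        # rapid reversal of the last action
print(c.position)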

Figure 7 demonstrates the text-book example of direct manipulation, showing the Windows File Explorer, where files are dragged and dropped.
Figure 7: The text-book example of Direct Manipulation

Advantages of Direct Manipulation


• Visually presents task concepts.
• Easy to learn.
• Errors can be avoided more easily.
• Encourages exploration.
• High subjective satisfaction.
• Recognition memory (as opposed to cued or free recall memory).

Disadvantages of Direct Manipulation


• May be more difficult to program.
• Not suitable for small graphic displays.
• Spatial and visual representation is not always preferable.
• Metaphors can be misleading
• Compact notations may better suit expert users.

7. Point-and-click interfaces
In most multimedia systems and in web browsers, virtually all actions take only a single click of
the mouse button. You may point at a city on a map and when you click a window opens, showing
you tourist information about the city. You may point at a word in some text and when you click
you see a definition of the word. You may point at a recognizable iconic button and when you
click some action is performed.

This point-and-click interface style is obviously closely related to the WIMP style described below.

8. Graphical user interface (GUI)


This is a type of user interface which allows people to interact with the computer through images rather than text commands. It accepts input via keyboard and pointing devices and is easy to learn compared with the Command Line Interface. Its elements are presented below under the WIMP interface.

9. The WIMP interface / Graphical user interface

Currently many common environments for interactive computing are examples of the WIMP
interface style, often simply called windowing systems. WIMP stands for Windows, Icons, Menus
and Pointers (sometimes Windows, Icons, Mice and Pull-Down Menus), and it is the default
interface style for the
majority of interactive computer systems in use today, especially in the PC and desktop
workstation arena. Examples of WIMP interfaces include Microsoft Windows for IBM PC
compatibles, MacOS for Apple Macintosh compatibles.

Elements of WIMP interface / Graphical user interface


• Pointer.
It is a symbol that appears on the display screen and can be moved to select objects and commands.
Usually, the pointer appears as a small, angled arrow as shown in figure 5.

Figure 5: Pointer

• Desktop
The area on the display screen where icons are grouped is often referred to as the desktop because
the icons are intended to represent real objects on a real desktop. Example of desktop is shown in
figure 6.

Figure 6: Desktop

• Icons
They are small pictures that represent commands, files or windows. Example icons are shown in
figure 7. Tools for designing icon include: Figma and Sketch

Figure 7: Icons
• Windows
This is used to divide the screen into different areas as shown in Figure 8. In each window, you
can run a different program or display a different file.

Figure 8: Windows
• Menus
Most graphical user interfaces let you execute commands by selecting a choice from a menu as
shown in Figure 9.
There are two Types of Menus:
• Pull-down menu
• Pop-up menu

Figure 9: Menu

Table 5: CLI versus GUI


Self-Assessment Questions
Exercise 1.1
1. Give example of at least five types of user interface.
2. What are the elements of graphical user interface?
3. Mention at least two types of menus in GUI.
4. What are interaction techniques?
5. Describe at least five types of interaction techniques.
6. Compare and contrast CLI and GUI
7. List the elements of GUI
8. Describe the components of WIMP
9. Mention two advantages and two disadvantages of at least five interaction techniques.

Exercise 1.2
1. Which of the following is true regarding user interface components?
a. Vertically scrolling lists support single-item scrolling
b. A single row of tabs (property sheets) is a good user interface design
c. On the Macintosh, the trash can was used to eject a diskette
d. All of the above
2. ………………plays a role to bridge up the gap between the interfaces of machines and
human understanding.
a. Human
b. Computer
c. Human computer interaction
d. None of these
3. A ………………is usually a collection of icons those are reminiscent of the purpose of the
various modes.
a. Button
b. Pointer
c. Title bar
d. Palette
4. Which is the best definition of an interface metaphor?
a. In broad terms, the kind of technical and software frame work within which human
system interaction takes place (e.g., wimp, mobile, tangible)
b. An idea from the world that is used in the interface to help the user understand what to
do (e.g., click on tabs to change window contents, use shopping cart to store items to
purchase)
c. What the human does to make inputs and receive information from the system (e.g.,
click and drag an object, talk to an object, move self-closer to an object, converse with
an entity, etc)
d. The conceptual model used to guide the design of the interface.
5. Which of the following is the most likely interface metaphor used by a smart phone
calendar function?
a. Restaurant menu
b. Touch screen interface
c. A paper diary
d. Mobile technology
6. User personas that are not primary or secondary are _______personas.
a. Served b. Supplemental c. Customer d. Negative
7. A persona in the context of goal-oriented interaction design:
a. is used to role-play through an interface design.
b. is a real person.
c. represents a particular type of user.
d. should represent an average user.
8. During the modelling phase, usage and workflow patterns are discovered through
a. modelling
b. analysis
c. testing
d. error correction
9. It is used for contextual data representation and interaction design and is a hypothetical but
specific “character”
a. Persona
b. User Class
c. Candidate persona
d. Selected persona
10. It is interaction occurring not just on computers and laptops but potentially everywhere in
our environment.
a. Interaction design
b. User design
c. User experience
d. Ubiquitous interaction
SESSION 2: Ergonomics and Human Factors
Welcome to another interesting session. In your journey to understand what Human Computer
Interaction (HCI) is all about, there is the need to understand some ergonomics in relation to HCI.
In this session we are going to first describe the concept of ergonomics. Then we will look at the
thematic domains of ergonomics. This will be followed by the ergonomics improvement process,
then Human Factors and Ergonomics (HFE) principles and perspectives. In addition, a few of the
issues addressed by ergonomics will be considered. Finally, we shall look at the stakeholders of
HFE and the importance of ergonomics at the workplace.

Objectives
By the end of this session, you should be able to:
a. Explain the concept of ergonomics.
b. Describe the ergonomics domains.
c. Describe HFE principles and perspectives.
d. Explain the ergonomics improvement process.
e. Stipulate the issues or cases of ergonomics.
f. Describe the roles of stakeholders in relation to ergonomics.
g. Understand the arrangement of controls and displays, the physical environment,
health issues and the use of color.

Read on…

2.1 What is Ergonomics in HCI?


Ergonomics is a word derived from Greek, where ‘ergon’ means ‘work’ and ‘nomos’ means
‘natural laws’. In short, it is the science of designing a workstation in such a way that people can
work effectively, efficiently and satisfactorily. What do these mean to you? They mean:

• Effective means the work result meets the requirements.
• Efficient means the work task was completed with the available resources.
• Satisfactory means a healthy and safe work environment.
According to the International Ergonomics Association (IEA), ergonomics (or human factors) is the “scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data, and methods to design in order to optimize human well-being and overall system performance” (Karwowski, 2006). That is, it is the science of designing user interaction to fit the user in the performance of tasks so as to achieve their predefined goals.

Figure 8: Human Factors and Ergonomics

More importantly, ergonomics can be seen as the study of the physical characteristics of the
interaction: how the controls are designed, the physical environment in which the interaction takes
place, the layout and physical qualities of the screen.

A primary focus is on user performance and how the interface enhances or detracts from this. In
seeking to evaluate these aspects of the interaction, ergonomics will certainly also touch upon
human psychology and system constraints. The concept of ergonomics is further illustrated in
figure 2.

Figure 2: Ergonomics Concept

A concept often associated with ergonomics is Musculoskeletal Disorders (MSDs).


These are conditions that affect the body’s muscles, joints, tendons, ligaments, and nerves. MSDs
can develop over time or can occur immediately due to overload. The International Ergonomics
Association declared that “Ergonomics is employed to fulfil the two goals of health and
productivity” (Karwowski, 2006).

2.2 Domain of Ergonomics.

The terms ergonomics and human factors are often used interchangeably or as a unit (e.g., human
factors / ergonomics – HFE or EHF) a practice that is adopted by the IEA. Domains of HFE were
defined in 2000 to include:
Physical ergonomics is concerned with human anatomical, anthropometric, physiological and
biomechanical characteristics as they relate to physical activity. (Relevant topics include working
postures, materials handling, repetitive movements, work-related musculoskeletal disorders,
workplace layout, physical safety and health.)

Cognitive ergonomics is concerned with mental processes, such as perception, memory,
reasoning, and motor response, as they affect interactions among humans and other elements of a
system. (Relevant topics include mental workload, decision making, skilled performance,
human-computer interaction, human reliability, work stress, and training as these may relate to
human-system design.)

Figure 3: Domain of Ergonomics and Human Factors

Organizational ergonomics is concerned with the optimization of sociotechnical systems,


including their organizational structures, policies, and processes. (Relevant topics include
communication, crew resource management, work design, design of working times, teamwork,
participatory design, community ergonomics, cooperative work, new work paradigms, virtual
organizations, telework, and quality management.)

Although HFE practitioners often work within particular economic sectors, industries, or
application fields, the science and practice of HFE is not domain-specific. HFE is a multi-
disciplinary, user-centric integrating science. The issues HFE addresses are typically systemic in
nature; thus, HFE uses a holistic, systems approach to apply theory, principles, and data from many
relevant disciplines to the design and evaluation of tasks, jobs, products, environments, and
systems. HFE takes into account physical, cognitive, sociotechnical, organizational,
environmental and other relevant factors, as well as the complex interactions between the human
and other humans, the environment, tools, products, equipment, and technology.

2.3 ERGONOMIC IMPROVEMENT PROCESS

The following processes can help in the improvement of ergonomics.

Assess Risk: Conducting an ergonomic assessment is a foundational element of the ergonomics
process. Your ergonomic improvement efforts will never get off the ground without being able to
effectively assess jobs in your workplace for musculoskeletal disorder (MSD) risk factors.

Plan Improvements: The core goal of the ergonomics process is to make changes to your
workplace that reduce risk. Making changes at scale requires a significant planning effort that
includes prioritizing jobs to be improved, identifying effective improvement ideas, and cost-
justifying the improvement projects.

Measure Progress: Measurement is an important component of any successful continuous
improvement process. High-performing ergonomics programs are constantly measured using
both leading and lagging indicators.

Scale Solutions: By establishing a common set of tools to train your workforce, assess risk, plan
improvements, measure progress, and design new work processes, you’ll be able to scale
ergonomics best practices throughout your organization.

2.4 HUMAN FACTORS / ERGONOMICS (HFE) PRINCIPLES

HFE principles are rooted in socio-technical values. HFE participatory design principles and
methodologies apply across the design of tasks, jobs, products, environments, industries and types
of work. HFE principles are rooted in essential core values such as:
• humans as assets
• technology as a tool to assist humans,
• promotion of quality of life,
• respect for individual differences, and
• responsibility to all stakeholders.

3.5 Human Factors / Ergonomics (HFE) Perspectives

HFE encompasses not only physical safety and health but also the cognitive and psycho-social
aspects of living and working. Additionally, HFE can focus on micro-ergonomic aspects of design
– including design of the procedures, the context, and the equipment and tools used to perform
tasks – as well as macro-ergonomic aspects of design – including the work organization, types of
jobs, technology used, and work roles, communication and feedback. These various aspects cannot
be viewed in isolation. HFE reflects a holistic perspective toward the design of products and
systems, considering the interrelatedness of human, technical, and environmental components and
the potential effects of system design changes on all parts of the system.

Participation in system design

HFE contributes to safe and sustainable systems through a unique combination of three drivers for
intervention:

(1) HFE takes a systems approach, using a systematic, iterative, step-by-step process;

(2) HFE is design-driven; and

(3) HFE focuses on optimizing two closely related outcomes, performance and well-being.

HFE practitioners recognize the need for participation of all stakeholder groups (participatory
human factors and ergonomics) in system design. Effective HFE is indispensable to support our
life and work in the 21st century; without attention to HFE, system design will not support the
sustainability of work, organizations, or societies.

2.5 Stakeholders of Human Factors / Ergonomics (HFE)

Any person or group of people that can affect, be affected, or perceive themselves to be affected
by an HFE decision or activity is a stakeholder of HFE. Stakeholders are inter-related and include:

• System influencers – e.g., competent authorities such as governments, regulators, and
standardization organizations at national and regional levels;
• System decision makers – e.g., employers and managers, those who make decisions about
requirements for the system design, purchasing system, implementation and use;
• System experts – e.g., professional HFE specialists, professional engineers and
psychologists who contribute to the design of systems based on their specific professional
backgrounds;
• System actors – e.g., employees/workers, product/service users, who are part of the system
and who are directly or indirectly affected by its design and who, directly or indirectly,
affect its performance.

Stakeholders for HFE can represent many levels, domains, and types of influence and
investment, such as:
• International level – regulatory officials and policy makers, International NGOs
• National level – government, law and policy makers, regulators, national NGOs
• Educational level – universities, applied sciences programs, vocational education,
professors, teachers, students
• Practice level – CEOs and managers in companies, designers of work and work systems in
different fields, practitioners in domains relevant to HFE.
Value of HFE in the world of work

Figure 4: Value of HFE in the world of work:


Work systems are made up of humans, the tools, processes, and technologies they use, and the
work environment. HFE contributes to the creation of safe and sustainable work systems by
considering the interrelatedness of human, technical, and environmental components and the
potential effects of work system design changes on all parts of the system. Members of the HFE
community recognize the need for participation of all stakeholder groups in system design (i.e.,
Participatory HFE).

HFE simultaneously contributes to the economic health of organizations by enhancing worker
well-being, capability and sustainability, maximizing performance, and reducing direct costs as
well as indirect costs from productivity losses, quality deficiencies, and employee turnover.
Workplaces that are designed with HFE principles have better employee performance and produce
better business results. HFE design in work systems is simply and unquestionably good business.

2.6 Cases/Issues Addressed by Ergonomics

Issues of ergonomics are grouped into the following four areas, described below:
• the arrangement of controls and displays,
• the physical environment,
• the health issues and,
• the use of color.

i. Arrangement of controls and displays

Sets of controls and parts of the display should be grouped logically to allow rapid access by the
user: frequently used controls are arranged so that users can easily locate and operate them
effortlessly. This may not seem so important when we are considering a single user of a
spreadsheet on a PC, but it becomes vital when we turn to safety-critical applications such as
plant control, aviation and air traffic control.

The exact organization will depend on the domain and the application, but possible organizations
include the following:

• Functional controls and displays are organized so that those that are functionally related
are placed together;
• Sequential controls and displays are organized to reflect the order of their use in a
typical interaction (this may be especially appropriate in domains where a particular
task sequence is enforced, such as aviation);
• Frequency controls and displays are organized according to how frequently they
are used, with the most commonly used controls being the most easily accessible.
Examples of controls organized according to function, frequency of use, or sequence include:
• In MS Word, the Insert menu groups related commands by function: Insert -> Picture,
Table, Shape, Chart, Icon, etc.
• On mobile devices, all notifications are collected in a single notification window.
ii. The physical environment of the interaction

As well as addressing physical issues in the layout and arrangement of the machine interface,
ergonomics is concerned with the design of the work environment itself. It considers the
standing, moving and sitting positions in which we use an app or computer system. Examples:
seating arrangements adaptable to users of all sizes, and the portability of devices (laptop,
mobile, etc.).

The first consideration here is the size of the users. Obviously, this is going to vary considerably.
However, in any system the smallest user should be able to reach all the controls (this may include
a user in a wheelchair), and the largest user should not be cramped in the environment.

iii. Health issues

This includes physical postures, poor arrangement of devices, back pain, eye strain, etc. Perhaps
we do not immediately think of computer use as a hazardous activity but we should bear in mind
possible consequences of our designs on the health and safety of users. Leaving aside the obvious
safety risks of poorly designed safety-critical systems (aircraft crashing, nuclear plant leaks and
worse), there are a number of factors that may affect the use of more general computers.

Again, these are factors in the physical environment that directly affect the quality of the
interaction and the user’s performance:

• Physical position: Users should be able to reach all controls comfortably and see all displays.
Users should not be expected to stand for long periods and, if sitting, should be provided with back
support. If a particular position for a part of the body is to be adopted for long periods (for example,
in typing) support should be provided to allow rest.

• Temperature: Although most users can adapt to slight changes in temperature without adverse
effect, extremes of hot or cold will affect performance and, in excessive cases, health.
Experimental studies show that performance deteriorates at high or low temperatures, with users
being unable to concentrate efficiently.

• Lighting: The lighting level will again depend on the work environment. However, adequate
lighting should be provided to allow users to see the computer screen without discomfort or
eyestrain. The light source should also be positioned to avoid glare affecting the display.

• Noise: Excessive noise can be harmful to health, causing the user pain, and in acute cases, loss of
hearing. Noise levels should be maintained at a comfortable level in the work environment. This
does not necessarily mean no noise at all. Noise can be a stimulus to users and can provide needed
confirmation of system activity.

• Time: The time users spend using the system should also be controlled. It has been suggested that
excessive use of CRT displays can be harmful to users, particularly pregnant women.
Figure 5: Cases of Ergonomics

iv. The use of color

Every color carries its own cultural associations and identity. Colors used in the display should be as
distinct as possible, and the distinction should not be affected by changes in contrast. Blue should not be used
to display critical information. If color is used as an indicator, it should not be the only cue:
additional coding information should be included. The colors used should also correspond to
common conventions and user expectations. Red, green and yellow are colors frequently
associated with stop, go and standby respectively. Therefore, red may be used to indicate
emergency, warning and alarms; green, normal activity such as battery saver for laptop; and
yellow, standby and auxiliary function. These conventions should not be violated without very
good cause.

However, in some countries red means happiness and good fortune, while in others it signals
harm or danger. These differences should be considered when designing products for a specific
country.
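The advice above, conventional colors plus a redundant cue, can be made concrete in code. The following is a minimal, hypothetical Python sketch (the statuses, colors and symbols are illustrative, not taken from any real toolkit): each status maps to a color and to a textual symbol and label, so the meaning survives even when color cannot be perceived.

```python
# Hypothetical status-coding table: colors follow common conventions
# (red = emergency, green = normal, yellow = standby), but a symbol and
# text label are always shown too, so color is never the only cue.
STATUS_STYLES = {
    "emergency": {"color": "red",    "symbol": "!", "label": "Emergency"},
    "normal":    {"color": "green",  "symbol": "+", "label": "Normal"},
    "standby":   {"color": "yellow", "symbol": "~", "label": "Standby"},
}

def render_status(status: str) -> str:
    """Return a rendering that remains readable without color."""
    style = STATUS_STYLES[status]
    return f"[{style['color']}] {style['symbol']} {style['label']}"
```

A real interface would draw the colored indicator; here the color name stands in for it, while the symbol and label carry the same information redundantly.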

2.7 Benefits of Ergonomics in the Workplace


Here are ten impressive benefits of ergonomics in the workplace.

1. Health Benefits
People who work in ergonomic workplaces benefit from improved health. The effect of
ergonomics starts within the cardiovascular system and spreads to other areas. Your heart will be
healthier than it would be if you worked in a standard environment. Ergonomics can improve the
health of your employees by reducing work-related injuries, most often caused by strains and over-
exertion.
Ergonomic workstations can help you and your employees feel less tension in your body because
you’ll adjust the workstations to fit your height. A neutral position will prevent you from straining
your eyes, neck, and back. Your legs can also benefit from better blood flow.

2. Improved Productivity
Ergonomic workplaces are efficient workplaces. Ergonomic workstations combine different ideas
to improve workers’ ability to complete their tasks—from organizing items and supplies to
integrating computer equipment and monitors. An optimized workstation allows a worker to focus
on their task and not be distracted by discomfort or a lack of organization. The more focused your
employees are, the higher the level of productivity they can have. Workers come in all different
sizes. Ergonomics can help make the work more comfortable for the individual worker.

3. Improved Mental Clarity


Reducing physical discomfort and improving your posture can improve your mental clarity and
allow you to do your work more successfully. Ergonomics can also help you reduce stress and
improve your concentration.
When you’re feeling comfortable, you can focus better on your work. Ergonomics helps decrease
pain, strengthen muscles, and increase blood flow. Combined, this can improve mental insight.
The improved moods and focus will allow you and your employees to be more productive and
engaged in your work.

4. Decreased Pains
Ergonomics focuses on optimizing the design of the workplace, tools, and equipment to reduce
strain on employees, minimize fatigue, and improve overall comfort and safety.
Ergonomics can also help improve posture and reduce the risk of musculoskeletal injuries, such as
back pain, by providing ergonomically designed chairs, desks, other furniture, and adjustable
workstations. By creating an ergonomically designed work environment, employers can help to
ensure that employees remain healthy and productive.

5. Eliminates Hazards
Part of creating a more productive work environment is eliminating the daily hazards that can hurt
your employees. Ergonomics contributes to identifying and eliminating hazards in the workplace
by creating work environments that are tailored to fit the user.

Additionally, ergonomics helps to create a better environment by reducing distractions,
providing suitable lighting and ventilation, and providing adjustable furniture and equipment, all
of which help eliminate potential workplace hazards.

It is also a good idea to ask your employees about what hazards they see in their environment. By
asking for their input, you’re showing interest in them. Implementing the change will show them
they’ve been heard, which will further help with employee engagement.

6. Quality of Work Improves


The benefits of ergonomic workspaces range from employee well-being to the quality of work.
Aches, pains, fatigue, and other problems can affect a worker. Ergonomics can eliminate those
issues and help workers work.
Additionally, ergonomics can help ensure that employees are using the most effective tools and
equipment to do their job and that they can reach and use the tools and equipment safely. A
straightforward example is how proper lighting can reduce the number of mistakes that
happen in a work process just by ensuring the worker can adequately see all details.

7. Reduce Absenteeism
Ergonomics can improve absenteeism by helping prevent workplace injuries that lead to missed
work days and creating a more comfortable working environment for employees. Ergonomic
practices can also reduce employee fatigue and stress, which can be physical or psychological
factors contributing to absenteeism. Encourage your workers to take regular breaks, change
postures frequently, and adjust their workstations to fit their body size and shape better.
Finally, promote positive health behaviours by providing resources and encouraging your
employees to maintain a healthy diet and lifestyle.

8. Focus on Safety
Ergonomics will create a safer work environment and increase awareness. You’ll remove hazards,
improve workstations for less discomfort, and teach your employees to update their spaces with
safety in mind.
Not to mention, the health benefits that come with ergonomics keep employees healthy at work.
This will encourage safety on another level. You can keep your work consistent and stable by
providing employees with a safe environment they can thrive in.

9. Increased Employee Satisfaction


The more you lean into the ergonomics culture, the more positive your work environment will be.
Ergonomics in the workplace can help improve employee satisfaction by reducing physical and
mental stress. As a result, your employees will enjoy coming to work more than they did before,
affecting those around them.

10. Lower Insurance Costs


Ergonomics in the workplace can help lower your insurance costs by reducing the risk of
employees suffering from work-related injuries and illnesses. By reducing the number of Workers’
Compensation claims, you could be able to save on insurance premiums. Ergonomic
improvements such as adjustable workstations, ergonomic chairs and keyboards, and improved
lighting can help reduce employee fatigue and improve your overall working environment. In such
an ergonomic work environment, your employees are less likely to become injured on the job. In
addition, implementing ergonomic policies and programs can provide you with greater legal
protection in case of a worker injury.
Self-Assessment Questions
Exercise 1.1
16. What is ergonomics in HCI?
17. Describe the ergonomics domain.
18. What is the major goal of ergonomics according to the IEA?
19. Explain seven benefits of ergonomics.
20. Why is ergonomics important?
21. Compare ergonomics to human factors.
22. What is a musculoskeletal disorder?

Exercise 1.2

1-‘Ergonomics’ is related to human


a) Comfort
b) Safety
c) Both ‘a’ and ‘b’
d) None of the above
2-The following subject(s) is (are) related to ‘Ergonomics’
a) Anthropology
b) Physiology
c) Psychology
d) All of the above
3-Ergonomics principle suggests that
a) Monitoring displays should be placed outside peripheral limitations
b) Glow-in-the dark dials made of reflective substances are good for viewing in the nights
c) Visual systems should be preferred over auditory systems in noisy locations
d) All of the above
SESSION 3: GUIDELINES, PRINCIPLES AND THEORIES

Welcome to unit two, session three. In your own words, mention some guidance that user interface
designers should follow when designing user interfaces. In this session, we shall distinguish among
three forms of guidance for designing user-friendly interfaces in human computer interaction. This
guidance forms the basis for designing an intuitive system. Now let’s begin by looking at the
objectives of this session.

OBJECTIVES
By the end of this session, you will be able to:
a. Distinguish between guidelines, principles and theories
b. Apply guidelines, principles and theories to system design.
c. Mention the advantages of following specific guidelines, principles and theories.

Now read on…

3.1 INTRODUCTION

In order to design an intuitive and user-friendly website or system, user interface designers have
to follow certain guidance. To avoid dealing with frustrated, discouraged or even angry users, you
need to overcome pure intuitive judgment by relying on some sort of guidance. Such guidance is
available in three forms - guidelines, principles and theories - as described in the following sub-
headings.

3.1.1 GUIDELINES

These are low-level, focused advice about good practices and cautions against dangers. Written
guidelines help to develop a shared language which helps to promote consistency among
multiple designers and designs/products. This includes issues such as terminology,
appearance, action sequences, input/output formats, and the dos and don’ts of graphic styles. They
are a good starting point but need management processes to facilitate their enforcement.

Examples of User Interface Design Guidelines


Four important HCI design guidelines are presented below.
1. General Interaction
Guidelines for general interaction are comprehensive advice covering general instructions such
as:
i. Be consistent.
ii. Offer significant feedback.
iii. Ask for authentication of any non-trivial critical action.
iv. Authorize easy reversal of most actions.
v. Lessen the amount of information that must be remembered in between actions.
vi. Seek competence in dialogue, motion and thought.
vii. Excuse mistakes.
viii. Classify activities by function and establish screen geography accordingly.
ix. Deliver help services that are context sensitive.
x. Use simple action verbs or short verb phrases to name commands.
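Several of these guidelines can be sketched in code. The hypothetical Python function below (its names and signatures are illustrative, not from any real framework) combines the guideline on confirming non-trivial critical actions with the guideline on authorizing easy reversal, applied to a delete operation:

```python
def delete_items(items, selection, confirm):
    """Delete the selected items only after explicit confirmation
    (confirm a non-trivial critical action), and return an undo
    function so the action is easily reversible."""
    if not confirm(f"Delete {len(selection)} item(s)?"):
        return items, None          # user declined: nothing changes
    remaining = [x for x in items if x not in selection]
    undo = lambda: list(items)      # restores the original list
    return remaining, undo
```

In a real interface, `confirm` would be routed through a dialog box; here it is any callable returning True or False, which also makes the sketch easy to exercise in tests.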

2. Information Display
Information provided by the HCI should not be incomplete or unclear, or else the application will
not meet the requirements of the user. To provide a better display, the following guidelines are
given:
i. Exhibit only that information that is applicable to the present context.
ii. Don't burden the user with data, use a presentation layout that allows rapid integration
of information.
iii. Use standard labels, standard abbreviations and probable colors.
iv. Permit the user to maintain visual context.
v. Generate meaningful error messages.
vi. Use upper and lower case, indentation and text grouping to aid in understanding.
vii. Use windows (if available) to classify different types of information.
viii. Use analog displays to characterize information that is more easily integrated with this
form of representation.
ix. Consider the available geography of the display screen and use it efficiently.

3. Data Entry
The following guidelines focus on data entry that is another important aspect of HCI:
i. Reduce the number of input actions required of the user.
ii. Uphold steadiness between information display and data input.
iii. Let the user customize the input.
iv. Interaction should be flexible but also tuned to the user's favored mode of input.
v. Disable commands that are unsuitable in the context of current actions.
vi. Allow the user to control the interactive flow.
vii. Offer help to assist with all input actions.
viii. Remove "mickey mouse" input.
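Guideline v (disable commands that are unsuitable in the context of current actions) can be illustrated with a small sketch. The command names and context flags below are hypothetical, chosen purely to show the pattern of computing which commands should be enabled (rather than letting unsuitable ones fail with an error):

```python
def enabled_commands(has_selection: bool, clipboard_has_data: bool) -> set:
    """Commands unsuitable in the current context are removed
    (i.e., greyed out) instead of producing errors when invoked."""
    commands = {"copy", "cut", "paste", "select_all"}
    if not has_selection:            # nothing to copy or cut
        commands -= {"copy", "cut"}
    if not clipboard_has_data:       # nothing to paste
        commands -= {"paste"}
    return commands
```

A menu renderer would grey out any command absent from the returned set, so the user can see at a glance which actions currently apply.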

4. Shneiderman’s Eight Golden Rules


Ben Shneiderman, an American computer scientist, consolidated some implicit facts about
designing and came up with the following eight general guidelines:
i. Strive for Consistency.
ii. Cater to Universal Usability.
iii. Offer Informative feedback.
iv. Design Dialogs to yield closure.
v. Prevent Errors.
vi. Permit easy reversal of actions.
vii. Support internal locus of control.
viii. Reduce short term memory load.
These guidelines are beneficial to general designers as well as interface designers. Using these
eight guidelines, it is possible to differentiate a good interface design from a bad one, and they
are useful in the experimental assessment of GUIs.
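Rule vi (permit easy reversal of actions) is commonly implemented with a history stack. The minimal Python sketch below uses hypothetical class and method names to show the idea: every change records the previous state, so any action can be undone in one step.

```python
class UndoableEditor:
    """Each change is pushed onto a history stack, so any action
    can be reversed in one step (golden rule vi)."""

    def __init__(self, text: str = ""):
        self.text = text
        self._history = []

    def replace_text(self, new_text: str) -> None:
        self._history.append(self.text)   # remember state before change
        self.text = new_text

    def undo(self) -> None:
        if self._history:                 # no-op when nothing to undo
            self.text = self._history.pop()
```

Because reversal is cheap, users can explore the interface without anxiety, which is exactly the behavioural benefit Shneiderman attributes to this rule.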
3.1.2 PRINCIPLES

These are mid-level strategies or rules to analyse and compare design alternatives. Principles are
more fundamental, widely applicable and enduring than guidelines (i.e., guidelines often need to
be individually specified for every project/organization, while principles are project-
independent). Principles help to facilitate a structured design process; they are more abstract and
widely applicable, so clarification is important in order to ensure consistent interpretation.
Five tasks/principles that may be performed/followed as part of most UI projects are:

1. Determine your target audience (in particular, user skill level)
There are three design goals based on skill level: novice or first-time users, knowledgeable
intermittent users, and expert frequent users.
2. Identify the tasks that users perform
This process often involves interviewing and observing the user, which also helps to
understand the task frequencies and sequences.
3. Choose an appropriate interaction style along the spectrum of directness (e.g., direct
manipulation, menu selection, form fill-in, command language).
4. Apply the “8 Golden Rules” of UI design:
i. Strive for consistency
ii. Cater for universal usability
iii. Offer informative feedback
iv. Design dialogs to yield closure
v. Prevent errors
vi. Permit easy reversal of actions
vii. Support internal locus of control
viii. Reduce short term memory loads.
5. Always try to prevent user errors. General issues in order to prevent users error include:
i. Design menu choices and commands to be distinctive
ii. Use functionally organized screens and menu
iii. Make it difficult for users to perform irreversible actions
iv. Provide feedback about the state of the UI
v. Design for consistency of actions
vi. Consider universal usability
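Item iii above (make it difficult for users to perform irreversible actions) is often realized by requiring the user to retype the name of the resource before a destructive command runs, a pattern used by several hosting services for deleting projects. A minimal, hypothetical sketch (function names are illustrative):

```python
def confirm_irreversible(resource_name: str, typed_name: str) -> bool:
    """Allow an irreversible action only if the user deliberately
    retyped the exact resource name; a stray click cannot trigger it."""
    return typed_name.strip() == resource_name

def delete_project(name: str, typed: str, do_delete) -> bool:
    """Run the destructive callback only after the retype check passes."""
    if not confirm_irreversible(name, typed):
        return False        # refuse: confirmation text did not match
    do_delete(name)
    return True
```

The friction is intentional: unlike a plain OK/Cancel dialog, retyping forces the user to read and think about exactly what will be destroyed.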

Example Design Principles


a) Norman’s Seven Principles
To assess the interaction between humans and computers, Donald Norman in 1988 proposed
seven principles that can be used to transform difficult tasks into simple ones. Following are the
seven principles of Norman:
i. Use both knowledge in world & knowledge in the head.
ii. Simplify task structures.
iii. Make things visible.
iv. Get the mapping right (User mental model = Conceptual model = Designed model).
v. Convert constraints into advantages (physical constraints, cultural constraints,
technological constraints).
vi. Design for Error.
vii. When all else fails − Standardize.
b) Nielsen's Ten Heuristic Principles
Heuristic evaluation is a methodical procedure to check a user interface for usability
problems. Once a usability problem is detected in a design, it is attended to as an integral
part of the ongoing design process. The heuristic evaluation method relies on usability
principles such as Nielsen’s ten usability principles:
i. Visibility of system status.
ii. Match between system and real world.
iii. User control and freedom.
iv. Consistency and standards.
v. Error prevention.
vi. Recognition rather than Recall.
vii. Flexibility and efficiency of use.
viii. Aesthetic and minimalist design.
ix. Help, diagnosis and recovery from errors.
x. Documentation and Help
The above-mentioned ten principles of Nielsen serve as a checklist for the heuristic evaluator in
evaluating and explaining problems while auditing an interface or a product.

3.1.3 THEORIES

Theories are even more high-level than principles, largely very abstract. They describe objects and
actions with consistent terminologies, help in analyzing and comparing design alternatives, predict
reading, typing or pointing times, etc.
A theory can also be described as a high-level, widely applicable framework to draw on during
design and evaluation, as well as to support communication and teaching. Theories can also be
predictive, such as those for pointing times by individuals or posting rates for community
discussions.
There are two types of theories:
a) Explanatory theories as in the case of:
▪ Observing behavior
▪ Describing activity
▪ Conceiving of designs
▪ Comparing high-level concepts of two designs
▪ Training

b) Predictive theories: Enable designers to compare proposed designs for execution time or
error rates.
An example of a theory is the stages-of-action model by Norman, consisting of the following:

1. Forming the goal
2. Forming the intention
3. Specifying the action
4. Executing the action
5. Perceiving the system state
6. Interpreting the system state
7. Evaluating the outcome
Conclusion
In this session, we have discussed with examples, the various HCI guidelines, principles and
theories. I hope you enjoyed it.

Self-Assessment Questions
Exercise 4.1

23. Distinguish between guidelines, principles and theories


24. How would you apply guidelines, principles and theories to system design?
25. Mention the advantages of following; guidelines, principles and theories in user interface design.
26. Describe the three categories of Interface Design Guidelines.
27. Mention the stages in Jakob Nielsen’s usability principles.
28. Mention the stages in Shneiderman’s design.
29. Give a brief explanation of the Eight Golden Rules of Interface Design.
SESSION 4: SHNEIDERMAN’S EIGHT GOLDEN RULES

Welcome to session four, unit two. In this session, we are going to introduce you to Ben
Shneiderman’s eight golden rules of interface design, used by successful companies such as Apple,
Google, and Microsoft to design different kinds of interfaces. To some, user-interface (UI) design
or Web design might seem like work that relies solely on creativity and seeking innovative ideas.
However, you should always base your design solutions on a few rules that optimize the entire
design process, such as Ben Shneiderman’s eight golden rules of user-interface design.

Objectives
By the end of this session, you will be able to:
a. Describe in detail Shneiderman’s 8 golden rules of interface design;
b. Demonstrate with examples Shneiderman’s 8 golden rules of interface design;
c. Apply the eight golden rules to interface designs.

Now read on…

4.1 The Eight Golden Rules of Interface Design

Shneiderman pioneered the concepts behind the eight golden rules after conducting fundamental
research in the field of human-computer interaction. Although Shneiderman defined the eight
golden rules back in 1985, their timelessness has ensured that they are still in use by application
and Web designers all around the world. Organisations such as Apple, Google, and Microsoft
follow these rules in designing different kinds of interfaces. These rules are described below with
examples.

1. Strive for consistency.


This means using the same design patterns and sequences of actions in similar situations
throughout an application user’s workflows, and includes the proper use of colors, typography, and
terminology. Identical terminology should be used in prompts, menus, and help screens; and
consistent color, layout, capitalization, fonts, and so on should be employed throughout.
Exceptions, such as required confirmation of the delete command or no echoing of passwords,
should be comprehensible and limited in number.

Consistency in the design and its visual elements is one of the most important factors to keep in
mind. You must use the same design patterns and sequences of actions in similar situations to
maintain this consistency. This includes the use of color, typography, terminology, menu
hierarchy, and even calls to action, all of which have to be consistent within the design. For
example, the look of the Windows operating system stays consistent over time.

According to Jakob’s Law, users spend most of their time on other sites, which means that they
prefer consistency among all the sites they come across. Because of this, you should design your
site the way other sites are designed: navigation bars, breadcrumbs, forms, and even the layout of
a web page should stick to the same foundation.
If you fail to maintain consistency, you increase the users’ cognitive load because they are forced
to learn something new every time. It is always best to maintain consistency throughout a design
because it allows users to complete their tasks and achieve their end goals easily.

Figure 1: Using Consistent and Simple Icons

Consistency helps users achieve their goals and navigate through your app easily. When a UI
works consistently, it becomes predictable (in a good way), which means users can understand
how to use certain functions intuitively and without instruction. As an interface designer, you
should remember that your users are not using only your product; they are getting ideas,
expectations, and intuition from lots of different products.

How You Can Apply this Rule to Design


According to Malviya (2020), the best way to keep your user-interface designs consistent is to
create a design system that collects all the graphic user-interface and design guidelines for a brand,
ensuring their consistent use. A design system should include the following:
• Style guide: the source of knowledge about the use of colors, fonts, and logos and the ways
in which an organization communicates with users.
• Pattern library: documents specific examples of user-interface elements and their
behaviors (for example, contact forms) and describes their use in applications.
• Component library: a library of components and templates implemented in code for
developers to use when coding the individual elements of an application.
A design system organizes all of this information in a way that enables its users to understand and
use it. Plus, it provides clear guidelines that you can use in creating your designs to maintain that
golden rule of consistency.
• Maintain consistency within a single product or a family of products (internal consistency).
• Follow established industry conventions (external consistency).
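One way to picture a design system's style guide is as a set of shared tokens that every component reads from, so no component hard-codes its own colors or fonts. The sketch below, in TypeScript, is purely illustrative: the token values, and the names `tokens`, `primaryButton`, and `dangerButton`, are our own assumptions, not part of any real design system.

```typescript
// A minimal sketch of a style guide encoded as shared design tokens.
// All names and values here are illustrative assumptions.
const tokens = {
  color: { primary: "#1a73e8", danger: "#d93025", text: "#202124" },
  font: { family: "Roboto, sans-serif", sizePx: 14 },
  spacingPx: 8,
} as const;

// Component factories pull styling from the tokens instead of
// hard-coding values, which is what keeps the UI internally consistent.
function primaryButton(label: string) {
  return {
    label,
    background: tokens.color.primary,
    fontFamily: tokens.font.family,
    paddingPx: tokens.spacingPx,
  };
}

// A variant reuses the same base, changing only what must differ.
function dangerButton(label: string) {
  return { ...primaryButton(label), background: tokens.color.danger };
}
```

Because every button is built from the same tokens, changing a brand color in one place updates it everywhere, which is exactly the internal consistency this rule asks for.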

2. Seek universal usability


The second principle “Seek universal usability” means you recognize the needs of diverse users
and design for plasticity, facilitating transformation of content. Novice to expert differences, age
ranges, disabilities, international variations, and technological diversity each enrich the spectrum
of requirements that guides design. Adding features for novices, such as explanations, and features
for experts, such as shortcuts and faster pacing, enriches the interface design and improves
perceived quality.
When you think about users, there are mainly two types: experienced users and inexperienced
users. Your design must therefore cater to the needs of both. One way to do so is the use of
shortcuts: experienced users can interact with the design and save time by using shortcuts, while
inexperienced users will not struggle either. Shortcuts can thus satisfy the requirements of both
experienced and inexperienced users simultaneously.

Figure 2: Gmail Shortcuts
A user-interface design can be considered good only when experienced, inexperienced, and new
users can find their way through the application without any problems and can quickly achieve
their goals. Keep in mind that these users may be of different ages or from different cultures.
Another way to assist both types of users is through customization of features and settings. Users
can make their own decisions about how they use the product when customization is available to
them. Users can at first be provided with default actions to follow, but if those feel too hard to
follow, customization allows users to change them to something they are more comfortable with.
If you feel the actions you provide are too hard for any user, you can add simple instructions or
visual cues to assist them.

How You Can Apply this Rule to Design


According to Siarkiewicz and Sobolewski (2022), it is good practice for designers to add tooltips
to user interfaces. A tooltip provides a description or explanation of a user-interface element,
making it easy for new users to find their way around an application or site and navigate the user
interface appropriately.

Figure 3: Tooltip on a User Interface

3. Offer informative feedback


A well-designed user interface should keep users informed about the results of their actions. The
feedback should take into account even seemingly insignificant or infrequent actions. Users want
to be sure that they understand what has happened once they’ve performed a given action.
Figure 4: A homepage with clear breadcrumb navigation

For every user action, there should be interface feedback. For frequent and minor actions, the
response can be modest, whereas for infrequent and major actions, the response should be more
substantial. Visual presentation of the objects of interest provides a convenient environment for
showing changes explicitly.

How You Can Apply this Rule to Design


• The system should respond to every action the user performs in a way that the user can
understand. If users must wait for something, inform them how long and why. If users do
something wrong, display an alert. If a site map is complex, help users to find their way around
the site by providing breadcrumb navigation.
• Figure 4 above provides a good example of breadcrumb navigation. Notice how each
subsequent element narrows down the categories for users: Main > Staff Students > etc.
• No action with consequences should be taken without informing the users.
• Present feedback to the user as quickly as possible (preferably immediately).
• Feedback should be relevant, comprehensible, and meaningful.
• Communicate with the user clearly to build trust.
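The breadcrumb idea above can be sketched in a few lines of TypeScript. This is an illustrative sketch only: the `Crumb` type, the URL scheme, and the " > " separator are assumptions we have chosen for the example, not a prescribed implementation.

```typescript
// A minimal sketch of breadcrumb navigation: each crumb shows where
// the user is and links back to the page at that depth.
interface Crumb { label: string; href: string; }

function breadcrumbs(path: string[]): Crumb[] {
  return path.map((label, i) => ({
    label,
    // Each crumb links to the page at its depth, e.g. "/main/staff".
    href: "/" + path.slice(0, i + 1).map(p => p.toLowerCase()).join("/"),
  }));
}

// Render the trail as visible feedback, e.g. "Main > Staff".
function renderTrail(crumbs: Crumb[]): string {
  return crumbs.map(c => c.label).join(" > ");
}
```

Rendering the trail on every page gives users constant feedback about where they are in the site hierarchy and a one-click way back to any earlier level.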

4. Design dialogs to yield closure.


If the user must complete a specific task, especially one that consists of several steps, make sure
that the user sees an appropriate message as they progress through the steps. Sequences of actions should
be organized into groups with a beginning, middle, and end. Informative feedback at the
completion of a group of actions gives users the satisfaction of accomplishment, a sense of relief,
a signal to drop contingency plans from their minds, and an indicator to prepare for the next group
of actions. For example, e-commerce websites move users from selecting products to the checkout,
ending with a clear confirmation page that completes the transaction. This ensures that users know
they’ve completed a process and gives them a sense of satisfaction on completing the task.

How You Can Apply this Rule to Design


Figure 4 shows an example of a message that informs users that they’ve successfully connected
their account and how to complete their registration.
Different types of messages that we can include in the design are thank-you messages, validation
messages, and summary messages. You can display these messages as a group, so users know
when they’re at the beginning, in the middle, and at the end of a process. Good messages should
be as short as possible, devoid of technical jargon, and clear. Also, remember to avoid blaming the
user for mistakes or using negative words. So, rather than displaying the message, “You entered
the wrong address”, display the message, “Please enter the correct address”.
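A beginning-middle-end sequence like the e-commerce checkout above can be sketched as a small step tracker. The step names and the wording of the closure message below are our own illustrative assumptions, not from any real product.

```typescript
// A sketch of a checkout flow organized as a sequence with a clear
// beginning, middle, and end. The final step shows an explicit
// confirmation, which is the "closure" this rule asks for.
const steps = ["cart", "shipping", "payment", "confirmation"] as const;

function statusMessage(stepIndex: number): string {
  const step = steps[stepIndex];
  if (step === "confirmation") {
    // Closure: the user knows the transaction is complete.
    return "Thank you! Your order is complete.";
  }
  // Progress feedback tells the user where they are in the group.
  return `Step ${stepIndex + 1} of ${steps.length}: ${step}`;
}
```

Showing "Step 2 of 4" in the middle and a clear thank-you message at the end is what gives users the sense of accomplishment and the signal to drop their contingency plans.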

5. Prevent errors.
The ideal application user interface is one in which the user makes no mistakes. As much as
possible, design the interface so that users cannot make serious errors. If users make an error, the
interface should offer simple, constructive, and specific instructions for recovery. For example,
users should not have to retype an entire name-address form if they enter an invalid zip code, but
rather should be guided to repair only the faulty part. Erroneous actions should leave the interface
state unchanged, or the interface should give instructions about restoring the state.

How You Can Apply this Rule to Design


• Prioritize: prevent bigger errors first, then little frustrations.
• Avoid slips by providing helpful constraints and good defaults.
• Prevent mistakes by removing memory burdens, supporting undo, and warning your users.
• Offer solutions for problems.
• Try to make tasks easier for users to reduce the risk of errors. For example, if users must
provide a date, the interface should be designed so that users can select it from a date
picker instead of typing it in a textbox. Figure 5 shows an example of a calendar that
allows users to easily select a date.

Figure 5:
Figure 6: Date Picker
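The name-address example above, where the user repairs only the faulty part, can be sketched as field-level validation: the validator returns errors per field rather than rejecting the whole form. The field names, validation rules, and messages below are illustrative assumptions chosen for this sketch.

```typescript
// A sketch of field-level validation: only faulty fields are reported,
// so the user is guided to repair just those, not retype the form.
interface AddressForm { name: string; street: string; zip: string; }

function validate(form: AddressForm): Record<string, string> {
  const errors: Record<string, string> = {};
  if (form.name.trim() === "") errors.name = "Please enter your name.";
  if (form.street.trim() === "") errors.street = "Please enter a street.";
  // Constrain the zip field rather than letting a bad value through.
  if (!/^\d{5}$/.test(form.zip)) errors.zip = "Please enter a 5-digit zip code.";
  return errors; // an empty object means the form is valid
}
```

Note that the messages follow the earlier advice too: they say what to do ("Please enter...") instead of blaming the user.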

6. Permit easy reversal of actions.


As much as possible, actions should be reversible. This feature relieves anxiety, since users know
that errors can be undone, and encourages exploration of unfamiliar options. The units of
reversibility may be a single action, a data-entry task, or a complete group of actions, such as entry
of a name-address block.
When using your application or Web site, users should feel at ease and have a sense of control
over their actions. This includes the ability to cancel a current action or to go back and undo and
redo their actions. Actions that users can’t undo are very frustrating and may lead them to stop
using the application.

Figure 7: Undo and Redo icons

How You Can Apply this Rule to Design


• Support Undo and Redo.
• Show a clear way to exit the current interaction, like a Cancel button.
• Make sure it doesn’t interfere with workflow.
• Single-action undo and action history.
Listed below are some buttons that should be included in a design to help users control their
actions:
• Back—Returns the user to a previous page or screen.
• Cancel—Lets the user quit a task or a multistep process.
• Close—Lets the user close the current view.
• Undo—Lets the user undo the most recent action.
• Redo—Lets the user redo the corresponding action. Together, Undo and Redo let the user
successively backtrack on their changes, then if they go too far, redo them.
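The Undo/Redo behavior described in the list above is classically implemented with two stacks. The sketch below is one minimal way to do it, assuming a simple model where each action replaces the whole state; the class and method names are our own.

```typescript
// A two-stack sketch of single-action Undo/Redo with action history.
// Each new action pushes the previous state onto the undo stack;
// undo moves states to the redo stack, and redo moves them back.
class History<T> {
  private undoStack: T[] = [];
  private redoStack: T[] = [];
  constructor(private state: T) {}

  apply(next: T): void {
    this.undoStack.push(this.state);
    this.state = next;
    this.redoStack = []; // a brand-new action invalidates the redo trail
  }
  undo(): T {
    const prev = this.undoStack.pop();
    if (prev !== undefined) {
      this.redoStack.push(this.state);
      this.state = prev;
    }
    return this.state;
  }
  redo(): T {
    const next = this.redoStack.pop();
    if (next !== undefined) {
      this.undoStack.push(this.state);
      this.state = next;
    }
    return this.state;
  }
  current(): T { return this.state; }
}
```

Because undoing never destroys the later states (they wait on the redo stack), users can backtrack freely and then change their minds, which is exactly what relieves the anxiety this rule describes.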

7. Keep users in control.


Give users the ability to use your site or application in whatever way they want to use it and enable
them to customize the user interface to suit their needs. Experienced users strongly desire the sense
that they are in charge of the interface and that the interface responds to their actions. They don’t
want surprises or changes in familiar behavior, and they are annoyed by tedious data-entry
sequences, difficulty in obtaining necessary information, and inability to produce their desired
result.

Figure 8
How You Can Apply this Rule to Design
• If a user-interface element is not absolutely necessary to the use of an application, let the
user disable it temporarily or remove it completely. A good example of this is the ability
to mute or eliminate pop-up notifications.
• Keep the content and visual design of the UI focused on the essentials.
• Don’t let unnecessary elements distract users from the information they really need.
• Prioritize the content and features to support primary goals.

8. Reduce short-term memory load.


Humans’ limited capacity for information processing in short-term memory (the rule of thumb is
that people can remember “seven plus or minus two chunks” of information) requires that
designers avoid interfaces in which users must remember information from one display and then
use that information on another display. It means that cell phones should not require re-entry of
phone numbers, website locations should remain visible, and lengthy forms should be compacted
to fit a single display.

These underlying principles must be interpreted, refined, and extended for each environment. They
have their limitations, but they provide a good starting point for mobile, desktop, and web
designers. The principles presented in the ensuing sections focus on increasing users’ productivity
by providing simplified data-entry procedures, comprehensible displays, and rapid informative
feedback to increase feelings of competence, mastery, and control over the system.

Figure 9
Recognizing something is easier than remembering it. Minimize the user’s memory load by
making objects, actions, and options visible. The user should not have to remember information
from one part of the dialogue to another. Instructions should be visible.
Use iconography and other visual aids, such as themed coloring and consistent placement of items,
to help returning users find functionality. Humans have limited short-term memory, so
interfaces that promote recognition reduce the amount of cognitive effort required from users and
are thus more successful.

How You Can Apply this Rule to Design


A great way to relieve the user’s memory load is to textually and visually prompt specific
behaviors, as follows:
• Provide implicit help. For example, prompts could indicate what the user should type in a
given field.
• Use visual aids. Use arrows or other signs to catch the user’s attention and help users
perform a given action.
• Fill out form fields for the user. Don’t make the user enter the same data twice in different
forms. Once you already have the data, fill in the corresponding fields automatically for
the user.
• Rely on recognition over recall. Don’t make users enter their password every time they log
in. Provide a Remember me check box.
• Offer contextual help instead of a long tutorial to memorize.
• Reduce the information that users have to remember.
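The "fill out form fields for the user" tip above can be sketched as a small prefill step: data the system already knows is copied into a new form so the user never types it twice. The `Form` type and field names below are illustrative assumptions.

```typescript
// A sketch of automatic prefilling to reduce short-term memory load:
// known data is copied into any empty fields of a new form.
type Form = Record<string, string>;

function prefill(blankForm: Form, known: Form): Form {
  const filled: Form = { ...blankForm };
  for (const field of Object.keys(filled)) {
    // Only fill fields that are empty and that we already know.
    if (filled[field] === "" && known[field] !== undefined) {
      filled[field] = known[field];
    }
  }
  return filled;
}
```

The user only completes what the system genuinely cannot know, which keeps recall demands to a minimum.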

Conclusion
In this session, we have described Shneiderman’s 8 golden rules for user interface design. These
are theoretical principles that should be adapted to the context you are designing for. We hope
that the practical tips and examples provided in this session will help you apply Shneiderman’s
principles in your daily work as a user-interface, Web, or UX designer.

Self-Assessment Questions
Exercise 4.1

1. Describe Shneiderman’s 8 golden rules for user interface design.


2. How can Shneiderman’s 8 golden rules for user interface design be applied in the design process?
3. Mention four advantages of Shneiderman’s 8 golden rules for user interface design.
SESSION 5: NORMAN'S 7 DESIGN PRINCIPLES

Welcome to session five, unit two. As we digest all that we need to know about interface and
interaction design, we shouldn’t forget that in Human Computer Interaction (HCI), we
should take cognisance of design principles. In this session, we are going to discuss Donald
Norman’s principles of design and their application in HCI. Now let’s begin by looking at the
objectives of this session.

Objectives
By the end of this session, you will be able to:
g. describe in detail Donald Norman’s Principles of Design;
h. describe the application of Donald Norman’s Principles of Design;
i. mention the advantages of Donald Norman’s Principles of Design.

Now read on…

5.1 Donald Norman’s Principles of Design

Donald Norman is one of the leading thinkers and notable researchers in the field of human-
computer interaction and user-centered design. He has written books that are required reading for
every system designer. Designers are encouraged to follow these principles daily when designing
computer systems, during product development, and when evaluating products.
Donald Norman provides six key design principles to keep in mind while designing any interface.
Norman’s idea is that devices, computers, and interfaces should function correctly and be intuitive
and easy to use. The six principles of designing interactive products are clearly stated in Don
Norman’s book, The Design of Everyday Things (Tenner, 2015).

Norman's main idea is that devices, things, computers, and interfaces should be functional, easy to
use, and intuitive. His idea is that there are two gulfs to avoid: the gulf of execution and the gulf
of evaluation. Say you want to delete a photo from Facebook. Your goal is to delete it, and the end
result is it being deleted.

What happens in between is the gulf of execution, for example, clicking a button that says 'delete'.
This gulf is small where there are only a few roadblocks (like when you're deleting a photo). It's
much larger when there are lots of roadblocks, like having lots of fields in a contact form.
The gulf of evaluation arises when a user expects feedback from a system, and the system either
doesn’t provide the feedback at all or doesn’t give the feedback the user is expecting.
Think of an ecommerce site. You’ve entered your credit card details and clicked Check Out. You
expect a message to pop up saying ‘Well done! Now please continue to shop.’ If nothing happens,
you don’t know what to do because the feedback you expected didn’t arrive; in fact, no
feedback happened at all. This confusion (which often results in panicky button pushing) is called
the gulf of evaluation: you have nothing to evaluate!

So, how do we avoid the twin gulfs?


The six principles that revolve around this idea are described below.
1. Visibility
Users should know, just by looking at an interface, what their options are and how to access them.
Visibility is the basic principle that the more discoverable an element is, the more likely users are
to know about it and how to use it; conversely, when something is out of sight, it is difficult to
know about and use.
The more visible functions are, the more likely users will be able to know what to do next. In
contrast, when functions are out of sight, it makes them more difficult to find and know how to
use. Users need to know what all the options are, and know straight away how to access them. In
the case of websites, this is an easy win.
This is particularly important in mobile applications because it is a challenge to make everything
visible within the limited screen space; hence, it is essential to include only the options that are
needed. For example, a log-in screen only needs information about logging in or signing up, so
cluttering it with other information would go against the visibility principle. For example, use
intuitive iconography that clearly indicates there are more options hiding deeper down.

2. Feedback
The user must receive feedback after every action they perform to let them know whether or not
their action was successful. For example, changing the icon on the tab to a spinner to indicate that
a webpage is loading.
Feedback is about designing a system to send back information about what action has been
performed by the user and what has been accomplished, allowing the person to continue with the
activity. Various kinds of feedback are available for interaction design-audio, tactile, verbal, and
combinations of these.
Every action needs a reaction. There needs to be some indication, like a sound, a moving dial, a
spinning rainbow wheel, that the user’s action caused something. A simple and effective feedback
of this is implemented in Google Chrome when it is loading pages. The little spinning circle starts
as soon as you hit enter, so you know something's happening, and goes faster when the page is
about to load, so you know you're about to do something again.

3. Constraints
The design concept of constraining refers to determining ways of restricting the kind of user
interaction that can take place at a given moment. Constraints are the limits to an interaction or an
interface. Some are really obvious and physical, for example the screen size on a phone. Others
are more nuanced, like a single, continuous page website having an image peeking onto the main
page. It is logical for the user to scroll down to see the next image, and thus the rest of the website.

4. Mapping
This refers to the relationship between controls and their effects in the world. Nearly all artifacts
need some kind of mapping between controls and effects, whether it is a flashlight, car, power
plant, or cockpit. An example of a good mapping between control and effect is the up and down
arrows used to represent the up and down movement of the cursor, respectively, on a computer
keyboard. Another example of mapping is the vertical scroll bar. It tells you where you are in a
page, and as you drag it down, the page moves down at the same rate; control and effect are closely
mapped.
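The scroll-bar example above is a linear (proportional) mapping between control and effect, and we can sketch it in a few lines. This is an illustrative sketch only; the function name and parameters are our own assumptions.

```typescript
// A sketch of the scroll-bar mapping: the thumb's position on the
// track is a proportional map of how far the viewport has scrolled,
// so control and effect move together at the same rate.
function thumbFraction(
  scrollOffset: number, // pixels the page is scrolled down
  pageHeight: number,   // total document height in pixels
  viewHeight: number    // visible window height in pixels
): number {
  const maxScroll = pageHeight - viewHeight;
  if (maxScroll <= 0) return 0;     // page fits; nothing to scroll
  return scrollOffset / maxScroll;  // 0 = top of page, 1 = bottom
}
```

Because the mapping is linear, dragging the thumb halfway down the track moves the viewport halfway down the page, which is why users find the control intuitive.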

5. Consistency
This refers to designing interfaces to have similar operations and use similar elements for
achieving similar tasks. In particular, a consistent interface is one that follows rules, such as using
the same operation to select all objects. For example, a consistent operation is using the same input
action to highlight any graphical object at the interface, such as always clicking the left mouse
button. Inconsistent interfaces, on the other hand, allow exceptions to a rule.

6. Affordance
Affordance is the relationship between what something looks like and how it is used. The term
refers to an attribute of an object that allows people to know how to use it. For example, a
mouse button invites pushing (and, in so doing, clicking) by the way it is physically constrained
in its plastic shell. At a very simple level, to afford means to give a clue (Norman, 1988). When
the affordances of a physical object are perceptually obvious, it is easy to know how to interact
with it.

For designers, it means that as soon as someone sees something, they have to know how to use it.
For example, a mug has high affordance: it's easy to figure out intuitively how to use it. For web
designers, affordance is even more important. Users need to be able to tell how to access
information they want from a website, or else they’ll just leave.

Conclusion
In this session, we have described Donald Norman's design principles. These six guidelines
provide the basic outline for a great user experience and an awesome website design.

Self-Assessment Questions
Exercise 5.1

4. Describe Donald Norman’s Principles of Design.


5. How can Donald Norman’s Principles of Design be applied in the design process?
6. Mention four advantages of Donald Norman’s Principles of Design.

References
Tenner, E. (2015). The design of everyday things by Donald Norman. Technology and Culture,
56(3), 785-787.
SESSION 6: NIELSEN’S 10 DESIGN PRINCIPLES

Welcome to the final session of unit two. In this session, we are going to look at Nielsen’s 10
design principles for user interfaces. You can think of these as rules of thumb for designing user
interfaces that are intuitive for users to interact with. These principles should guide you in creating
a user experience that feels intuitive and clear to people visiting your website or using your
products. The goal is to create an interface that is delightful for users while clearly orienting them
on their journey. Now let’s begin by looking at the objectives of this session.

Objectives
By the end of this session, you will be able to:
j. Describe in detail Nielsen’s design principles;
k. Apply Nielsen’s design principles in system design;
l. Describe the advantages of employing Nielsen’s usability design principles.

Now read on…

6.1 INTRODUCTION

In 1990, Jakob Nielsen and Rolf Molich proposed ten guidelines to help develop UIs. Jakob
Nielsen's 10 principles for interaction design are generally called "heuristics" because they are
broad rules of thumb and not specific usability guidelines.
To work with UI means finding ways to develop interactions that allow the user to have a better
experience. The UI cannot be confusing, demanding, or cause stress to the visitors. Instead, user
journeys should be so fluid that their navigation becomes intuitive and effortless. Therefore, one
of the roles of designers is to prevent the user from needing external assistance to interact with a
product.

How can you ensure that interactions are fluid and guarantee a good experience? Let’s go through
Jakob Nielsen’s famous heuristics to find out. These principles are indispensable for
designers, so keep them in mind, put them on your walls, memorize them! As described
below, you will realise that they help you verify the usability of your interfaces.

6.2 What are Jakob Nielsen’s heuristics for interaction design?

Nielsen’s heuristics are general principles, meaning that they do not determine specific usability
rules. Instead, the heuristics are general rules of thumb you can follow to help create more
accessible, user-friendly, and intuitive digital products. Nielsen and Molich created these heuristics
through observations and the expertise acquired during their years of work experience.
Here are the 10 Nielsen heuristics:
1. Visibility of system status;
2. Match between system and the real world;
3. User control and freedom;
4. Consistency and standards;
5. Error prevention;
6. Recognition rather than recall;
7. Flexibility and efficiency of use;
8. Aesthetic and minimalist design;
9. Help users recognize, diagnose, and recover from errors;
10. Help and documentation.

Below, we explain these one after the other, with examples.


1. Visibility of system status
The first principle is about keeping users informed about their actions and what’s happening
during a given interaction.

When users are informed of the current system status, they learn the results of their past
interactions and can better determine what their next steps will be. Remember: when a design
is predictable, it builds trust in the product. It is therefore important to provide instant feedback
that informs the user of the status of the interaction; this feedback also serves as guidance, leading
the user to the next steps.

Figure 1: Indicators on mall maps show people where they currently are and where to go next.

It’s important for users to understand what’s happening when they take an action. Giving
immediate feedback on actions is vital to achieving this end. When users take an action and expect
something to happen, they need to be informed of the status of that action. Feedback should be
immediate and could be done via graphics, animations, or sounds.

There are four questions that a good system’s feedback should answer:


i. What has just happened?
ii. Where am I?
iii. What is happening?
iv. What will happen next?

2. Match between system and the real world


This principle claims that a system should always speak the user’s language and follow real-world
conventions. This means avoiding marketing jargon or other expressions that might be familiar to
whoever is building the product but not to its audience. So, use words, phrases, and concepts that
are familiar to your target audience.

Also, to establish a connection with the real world, components should appear in a logical order
that will make sense to the users according to their life experiences.
Remember: people’s mental model of technology is based on their offline experiences combined
with their prior digital interactions. With that in mind, always use icons and other illustrations that
resonate with the real world, so users instantly recognize and understand what you’re trying to say.

As shown in Figure 2, the following icons illustrate a match between the system and the real
world.

Figure 2: Examples of a match between the system and the real world

Skeuomorphism is a famous term used in graphical user interface design to describe objects that
mimic the physical world in their appearance and behavior.
Also, when you read on a Kindle, for example, the pages turn with a swipe, which imitates the
experience of reading a physical book.

3. User control and freedom


A good UI design should never impose an action on the user or make decisions for them. Instead,
the system should only suggest which paths the users can take.

The interactions you build must give users the freedom to decide and take the actions they see fit
— except for rules that go against the system or interfere with some functionality.

However, don’t forget to consider that users may regret their decision or make an error. Therefore,
it is necessary to think of how the system can allow users to undo and redo their actions according
to their needs.

Figure 3: Example User Control and Freedom

4. Consistency and standards


This heuristic is about keeping the same language throughout the system to avoid confusing the
user. Thus, when users interact with a product, they should have no doubts about the meaning of
words, icons, or symbols used.
Therefore, designers should create a consistent design that speaks the same language and treats
similar things in the same way, as shown in Figure 4. An interface must follow the system’s
conventions, maintaining interaction patterns across different contexts.

Figure 4: Example on Consistency and Standard

5. Error prevention
This heuristic proposes that a good design should always prevent problems from occurring. Think
of a delete-files button, for example. We must assume that users might accidentally click this
button or that they might expect a different result from it.

To prevent users from getting frustrated by deleting files by mistake, it is essential
to show a warning message that confirms the decision before going through with it.

Example 1.

Figure 5: Error Prevention


Example 2.

Figure 6: Error Prevention
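The warning-before-delete pattern in the examples above can be sketched as a confirmation gate: the destructive action runs only after an explicit yes. The `confirm` callback below stands in for a real dialog, and the function name and message wording are illustrative assumptions.

```typescript
// A sketch of error prevention via confirmation: an accidental click
// cannot destroy data, because the delete runs only after the user
// explicitly confirms the warning message.
function deleteFiles(
  files: string[],
  confirm: (message: string) => boolean
): string[] {
  const ok = confirm(`Delete ${files.length} file(s)? This cannot be undone.`);
  return ok ? [] : files; // files survive unless the user confirms
}
```

Note that the warning states the consequence ("cannot be undone"), so the user makes an informed decision rather than a reflexive click.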

6. Recognition rather than recall


Nielsen’s heuristics aim to reduce users’ cognitive load, and this also includes their memory
capacity. So, it’s essential to think of ways to make options and actionable components visible;
this is important because it’s easier for us to recognize something rather than remember it.

The user should not have to remember all the actions or functions of the system. Therefore, always
leave small reminders of information that can assist users in navigating your designs. For example,
menu items should be easy to access and recognize when needed.

Figure 7: Example Recognition Rather Than Recall

7. Flexibility and efficiency of use


Your designs should benefit both inexperienced and experienced users. Inexperienced users need
more detailed information, but as they keep using a product, they become experienced users.
Allowing them to customize processes, such as creating keyboard shortcuts, is therefore good
practice. Also, try enabling personalization by tailoring content and functionality to individual
users.
Figure 8: Allowing Flexibility and Efficiency of Use

8. Aesthetic and minimalist design


The aesthetic and minimalist design principle means interfaces should not contain irrelevant
information or elements that don’t add value. As Steve Jobs said, “Design is not just what it
looks like and feels like. Design is how it works.” That’s true, but apart from its visual appeal, an
aesthetic and minimalist design helps prioritize content, making the most relevant parts stand out.
Clean up interfaces by getting rid of everything that doesn’t help users complete their tasks or,
worse, prevents them from achieving their goals.

As a designer, you should not consider aesthetics above functionality. You should therefore
create interactions that contain only essential information, and avoid unnecessary visual elements
that can overwhelm and distract users.

Figure 9: aesthetic and minimalist design

9. Help users recognize, diagnose, and recover from errors;


Your designs should help users identify and find solutions to any problems and errors that occur.
Therefore, express error messages in plain language: clear and free of error codes. Moreover, don't
forget to tell users what the problem was and suggest a solution.

Figure 10: Example Help and Diagnoses


To help users recognize and recover from errors, the system should alert them in concise and
simple language, clearly indicate the problem, and suggest some solutions.
Don't leave users wondering what to do when they encounter an error or make a mistake. Good error
messages explicitly tell users what happened, how to fix it, and how to move forward. Give users
constructive advice, and feel free to make error messages friendly and empathetic.
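The advice above can be sketched as a small helper that maps raw error codes to plain-language messages stating what happened and how to recover. This is a hypothetical example (the `FRIENDLY_ERRORS` table and the specific wording are illustrative, not from the text):

```python
# Map raw error codes to (what happened, how to fix it) pairs
# expressed in plain, code-free language.
FRIENDLY_ERRORS = {
    "ENOSPC": ("Your disk is full, so the file could not be saved.",
               "Free up some space or save to a different drive, then try again."),
    "ECONNREFUSED": ("We couldn't reach the server.",
                     "Check your internet connection and try again in a moment."),
}

def friendly_message(code):
    """Say what happened and suggest a fix, never exposing the raw code."""
    problem, fix = FRIENDLY_ERRORS.get(
        code, ("Something went wrong.", "Please try again or contact support."))
    return f"{problem} {fix}"
```

Note that the user never sees `ENOSPC` itself; the message states the problem and a constructive next step, and unknown codes still fall back to a helpful, empathetic default.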

10. Help and documentation.


The last of Nielsen’s heuristics concerns documentation that helps users understand how to
perform their tasks. Although all the heuristics listed above are meant to help users avoid errors
and navigate without assistance, it is still essential to provide further assistance at any given
time.
The help and documentation heuristic means that the system should make it easy to find additional
information when needed. Good design is familiar and easy to understand, but sometimes your
users will need a little nudge. The best approach is to be proactive and offer help before users
have to ask for it, for example through contextual tips or onboarding screens.
However, many users skip this educational content, so it is crucial to ensure that help
documentation and tutorials are easy to find when users get stuck and need some help. Ultimately,
design to remove the need for help documentation, but include it nonetheless.

Figure 11: Information Kiosk at the Airport
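Easy-to-find contextual help can be sketched as follows: each screen registers its own help text, so assistance is one lookup away when users get stuck. This is a hypothetical illustration (the `HELP_TOPICS` table and screen names are invented for the example):

```python
# Each screen registers its own help text so assistance is
# always one lookup away, with a safe fallback for unknown screens.
HELP_TOPICS = {
    "checkout": "Review your cart, then press 'Pay now' to complete the order.",
    "search": "Type a keyword and press Enter; use quotes for exact phrases.",
}

def contextual_help(screen):
    """Return help for the current screen, or point to general help."""
    return HELP_TOPICS.get(screen, "Visit the Help Center for more assistance.")
```

Because the help is keyed to the user's current screen, it appears where the user actually needs it instead of forcing a search through a separate manual.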

Conclusion
In this session, we have described Nielsen’s usability principles, or heuristics. You should note
that these 10 heuristics are indispensable for user experience (UX) design and will help you design
better. The ideal scenario is to work with the heuristics in mind from the beginning of a project
in order to avoid future adjustments. Keep in mind that an intuitive, minimalist design that is
easy to understand engages users both online and in the physical world. By following the ten
heuristics of Nielsen and Molich, designers can create user-friendly, accessible, and intuitive
products.
Self-Assessment Questions
Exercise 6.1
31. What are Nielsen’s usability design principles?
32. Describe, with examples, the ten usability design principles by Jakob Nielsen.
33. What are the implications for design of Nielsen’s usability design principles?
34. What are the advantages of employing Nielsen’s usability design principles?
