
Interface Design

Pratheeba Jeyananthan

Faculty of Engineering
University of Jaffna

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 1 / 89


Lecture plan – Chapter 4

Lecture hours: 6 hours


Teaching method: PowerPoint slides and the conventional teaching method
Assignments: No assignment
Practical: 6 hours
References: Human–Computer Interaction: Fundamentals and
Practice by Gerard Jounghyun Kim

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 2 / 89


The overall design process
So far we have covered principles, guidelines and theories for the design of
interfaces for HCI.
The design process includes all the preparatory activities required to
develop an interactive software product that will provide a high level
of usability and a good user experience when it is actually
implemented.
The following four iterative steps are very important in designing an
HCI:

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 3 / 89


Requirement analysis:
Any software design starts with a careful analysis of the functional
requirements.
For interactive software with a focus on the user experience, both
functional task requirements and functional UI requirements should be
considered carefully.
Functional-task requirements: functions that are activated directly
by the user through interaction.
Functional-UI requirements: functions that are important in
realizing certain aspects of the user experience but are not directly
activated by the user.
One example is the automatic feature that adjusts the display
resolution of a streamed video based on the network traffic.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 4 / 89
Requirement analysis . . .

It is not always possible to computationally separate major functions
from those for the user interface – certain functions actually have
direct UI objectives.
Nonfunctional UI requirements: UI features that are not directly
related to accomplishing the main application task.
For example, requiring a certain font size or type according to a
corporate guideline may not be a critical functional requirement but a
purely HCI-related requirement.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 5 / 89


User analysis:

User analysis is an essential step in HCI design.
The results of the user analysis are reflected back into the requirements,
and this could identify additional UI requirements.
It is simply a process of reinforcing the original requirements analysis to
further accommodate the potential users in a more complete way.
For example, a particular age group might have preferred features
such as a large font size and high contrast, or perhaps functional UI
features to adjust the scrolling speed.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 6 / 89


Scenario and task modeling:
This has equal importance to user analysis.
This is the crux of the interaction modeling: identifying the
application task structure and the sequential relationship between
different elements.
We can also start with a drawing of a more detailed scenario or
storyboard to imagine how the system would be used and to assess
both the appropriateness of the task model and the feasibility of the
given requirements.
Again this can be used to refine the original rough requirements.
Through the process of storyboarding, a rough visual profile of the
interface can be sketched.
It can be used as another helpful medium in selecting the actual
software or hardware interface.
It will also serve as a starting point for drawing the object-class
diagram, message diagram and the use cases for preliminary
implementation and programming.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 7 / 89
Interface selection and consolidation:
For each of the subtasks and scenes in the storyboard, choices will be
made among software interface components (widgets), interaction
techniques (e.g., voice recognition) and hardware (sensors, actuators,
buttons, displays, etc.).
Chosen individual interface components need to be consolidated into
a practical package, because not all of these interface components
may be available on a working platform.
Certain choices will have to be retracted in the interest of employing
a particular interaction platform.
For instance, for a particular subtask and application context, the
designer might have chosen voice recognition as the most fitting
interaction technique. If the target platform does not support a voice
sensor or network access to remote recognition, then an
alternative will have to be devised.
Such arrangements can be made for many reasons such as constraints
in budget, time, personnel, etc.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 8 / 89
Interface selection options

There are two aspects to consider here: the hardware platform and the
software platform.
Hardware Platforms:
Different individual devices such as sensors and displays may be required
by different interactions and tasks.
The choice of a design configuration for the hardware interaction
platform is largely determined by the characteristics of the
task/application.
Different platforms for various operating environments:
Desktop (stationary): Monitor, keyboard, mouse,
speaker/headphone.
Suited for: Office-related tasks, time-consuming/serious tasks,
multitasking.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 9 / 89


Hardware Platforms . . .
Smartphones/handhelds (mobile): LCD screen, buttons, touch
screen, speaker/headphones, microphone, camera, sensors, vibrators,
mini QWERTY keyboard.
Suited for: Simple and short tasks, special-purpose tasks.
Tablets/pads (mobile): LCD screen, buttons, touch screen,
speaker/headphones, microphone, camera, vibrators, sensors
(acceleration, tilt, light, gyro, proximity, compass, barometer).
Suited for: Simple, mobile and short tasks that require a relatively
large screen.
Embedded (Stationary/mobile): LCD/LED screen, buttons, special
sensors and output devices (touch screen, speaker, microphone,
special sensors). Embedded devices may be mobile or stationary and
offer only very limited interaction for a few simple functionalities.
Suited for: Special tasks and situations where interaction and
computations are needed on the spot (printer, rice cooker, mp3
player, personal media player).
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 10 / 89
Hardware Platforms . . .
TV/consoles (stationary): LCD/LED screen, button-based remote
control, speaker, microphone, game controller, special sensors,
peripherals.
Suitable for: TV-centric tasks, limited interaction, tasks that need
privacy.
Kiosks/ installations (stationary): LCD screens, buttons, speaker,
touch screen, special sensors and peripherals.
Suited for: Public users and installations, limited interaction, short
series of selection tasks and monitoring tasks.
Virtual reality (stationary): Large-surround and high-resolution
screen/ head-mounted display/stereoscopic display, 3-D tracking
sensors, 3-D sound system, haptic/tactile display, special sensors.
Suited for: Spatial training, tele-experience and tele-presence,
immersive entertainment.
Free form (stationary and mobile): Special purpose hardware
platforms consisting of a customized configuration of individual
devices best suited for a given task. Cost is a big factor here.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 11 / 89
Software interface components
Windows/layers: The best-known components; most modern computer
desktops are designed around windows.
Icons: Interactable objects may be visually represented as compact
pictograms, i.e., icons. Icons should be as informative and
distinctive as possible despite their size and compactness.
Menus: Allow activation of commands and tasks through selection
rather than recall. They are generally arranged as 1-D or 2-D arrays.
There is a variety of styles and mechanisms, such as pull-down,
pop-up, toolbars, tabs, scroll menus, 2-D arrays of icons, buttons, check
boxes, hot keys, etc.
The menu items are generally subtasks or the target interaction objects
for a certain task.
Menus must be organized, categorized and structured according to the
task – typically hierarchically.
It is better to keep the number of items below 8.
If it is a long list, then it should be ordered systematically, e.g., in
order of frequency, importance, alphabetical order, etc.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 12 / 89
Best use of different types of menus

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 13 / 89


Software interface components. . .
Direct interaction: The concept of direct and visual interaction is
strongly connected to mouse/touch-pad interaction. Keyboard
input was used in HCI before the mouse era. The mouse made it possible
for users to apply a direct, metaphoric touch upon the target object
rather than indirect commanding. Examples of such interactions:
”dragging and dropping”, ”cutting and pasting”.
GUI components: Software interaction objects are mostly visual.
A few essential elements of the GUI were already discussed.
WIMP: windows, icons, mouse and pointer.
WIMP interfaces have greatly contributed to the mass proliferation of
computer technologies.
A few more GUI components for receiving input from users:
Text box: For short/medium alphanumeric input.
Toolbar: Small group of frequently used icons/functions for quick
direct access.
Forms: Mixture of menus, buttons, text boxes, etc for long thematic
input.
Dialog/combo boxes: Mixture of menus, buttons and text boxes for
short mixed-mode input.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 14 / 89
Software interface components. . .
3-D interface (in 2-D interaction input space):

Standard GUI elements that are operated and presented in 2-D space.

However, 2-D control in a 3-D application is often not sufficient.
This mismatch makes the environment inconvenient.
For this reason, non-WIMP-based interfaces such as 3-D motion
gestures are gaining popularity.
Aside from tasks such as 3-D games and navigation, it is also possible
to organize the 2-D operated GUI elements in 3-D virtual space.
The problem is that, on top of the added dimension, occlusion due to
overlap will remain.
It is a real burden for users to place or manipulate
GUI objects in three dimensions.
Nevertheless, it is used in 3-D games for aesthetic reasons.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 15 / 89
Software interface components. . .

Other (non-WIMP) interfaces:

The WIMP interface is practically synonymous with the GUI.
It has been a huge success since its introduction in the early 1980s.
It continues to advance alongside revolutionary interface
technologies such as voice recognition, language understanding, gesture
recognition, etc., and in various computing environments.
New interfaces are on their way into our everyday lives.
The cloud-computing environment has enabled running computationally
expensive interface algorithms, which non-WIMP interfaces often
require, on behalf of less powerful devices and for a large service population.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 16 / 89


Wire-Framing

The interaction modeling and interface options can be put together
concurrently using the so-called wire-framing process.
It originated from making rough specifications for website page design
and resembles scenarios or storyboards.
Wireframes look like page schematics or screen blueprints, which serve as
a visual guide representing the skeletal framework of a website
interface.
A wireframe represents the page layout or arrangement of the UI objects
and how they respond to one another.
Wireframes can be pencil drawings or sketches on a whiteboard, or they
can be produced by means of a broad array of free or commercial software
applications.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 17 / 89


Wire-Framing. . .

Wireframes produced by these tools can be simulated to show
interface behavior.
Depending on the tools, the interface logic can be exported for actual
code implementation.
There are tools that allow the user to visually specify UI elements and
their configuration and then automatically generate code.
Regardless of the type of tool, it is important that the design and
implementation stages remain separated.
Through wire-framing, the developer can specify and flesh out the
kinds of information displayed, the range of functions available and
their priorities, alternatives and interaction flow.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 18 / 89


Naive design example: No sheets

To illustrate the HCI design process more concretely, we will go
through the design of a simple interactive Android smartphone
application called No Sheets.
Main purpose: use the smartphone to present sheet music, so there is no
need for paper sheet music.
No more flying pages.
No more awkward flipping and page searching.
Here we focus on the HCI-related requirements of this task.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 19 / 89


Requirement analysis

Initial requirements for No Sheets:

1. Use the smartphone to present written music like ”sheet music”. The
transcription includes only basic supporting information such as chord
information and beat information.
2. Avoid the necessity to carry and manage physical sheet music. Store
music transcription files using a simple file format.
3. Help the user by presenting the musical information in a timed and
effective manner (e.g., paced according to a preset tempo).
4. Help the user effectively practice the accompaniment and sing along
through flexible control (e.g., forward, review, home buttons).
5. Help the user sing along by showing the lyrics and beats in a timed fashion.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 20 / 89


User Analysis

Typical user of No Sheets: a smartphone owner and
novice/intermediate piano player.
Since the device is a smartphone, we can expect reasonable eyesight
for typical usage (a viewing distance of about 50 cm).
No particular gender or age group is considered here.
However, there might be a consensus on how the chord/music
information should be displayed (e.g., portrait vs. landscape, up-down
scrolling vs. left-right paging).
A very minimal user analysis (of the developer himself) resulted in the
interface requirements shown on the next slide.
Most of the requirements or choices presented here are rather
arbitrary, without clear justification.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 21 / 89


User interface requirements

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 22 / 89


Simple task model

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 23 / 89


Making a Scenario and Task Modeling

Based on the user requirements, we derived a simple hierarchical task
model.
Each task is to be activated directly by the user through an interface.
Select song: Select the song to view.
Select tempo: Set the tempo of the paging.
Show timed music information: Show the current/next chord/
beat/lyric.
Play/Pause: Activate/deactivate the paging.
Fast-forward: Manually move forward to a particular point in the song.
Review: Manually move backward to a particular point in the song.
Show instructions: Instructions on how to use the system.
Set preferences: Set preferences for information display and others.
Show software information: Show the version number and information
about the developer.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 24 / 89


State transition diagram

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 25 / 89


State transition diagram . . .

The subtasks, as actions to be taken by the user, can be viewed
computationally as action events or, conversely, as states that are
activated according to the action events.
A possible state transition diagram for No Sheets is provided on the
previous slide.
Through this, one can identify the precedence relationships among the
subtasks.
From the top menu, the user is allowed six actions.
However, the user is allowed to play and view the timed display of the
musical information only after selecting a song.
During the timed music information display, the user is allowed to change
among four states concurrently.
The above model can serve as a rough starting point for defining the
overall software architecture for No Sheets.
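As a rough illustration (not the actual No Sheets code), such precedence relationships could be captured in Java with an enum of states and a table of allowed transitions. The state names below are hypothetical and only follow the subtasks listed above:

import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

enum NoSheetsState { TOP_MENU, SONG_SELECTED, PLAYING, PAUSED }

public class NoSheetsStateMachine {
    // Allowed transitions derived from the precedence relations:
    // e.g., PLAYING is reachable only after a song has been selected.
    private static final Map<NoSheetsState, Set<NoSheetsState>> TRANSITIONS =
            new EnumMap<>(NoSheetsState.class);
    static {
        TRANSITIONS.put(NoSheetsState.TOP_MENU, EnumSet.of(NoSheetsState.SONG_SELECTED));
        TRANSITIONS.put(NoSheetsState.SONG_SELECTED,
                EnumSet.of(NoSheetsState.PLAYING, NoSheetsState.TOP_MENU));
        TRANSITIONS.put(NoSheetsState.PLAYING,
                EnumSet.of(NoSheetsState.PAUSED, NoSheetsState.TOP_MENU));
        TRANSITIONS.put(NoSheetsState.PAUSED,
                EnumSet.of(NoSheetsState.PLAYING, NoSheetsState.TOP_MENU));
    }

    private NoSheetsState current = NoSheetsState.TOP_MENU;

    boolean transitionTo(NoSheetsState next) {
        if (TRANSITIONS.getOrDefault(current, EnumSet.noneOf(NoSheetsState.class))
                       .contains(next)) {
            current = next;
            return true;
        }
        return false;   // illegal transition, e.g., PLAYING before a song is selected
    }

    public static void main(String[] args) {
        NoSheetsStateMachine m = new NoSheetsStateMachine();
        System.out.println(m.transitionTo(NoSheetsState.PLAYING));        // false: no song yet
        System.out.println(m.transitionTo(NoSheetsState.SONG_SELECTED));  // true
        System.out.println(m.transitionTo(NoSheetsState.PLAYING));        // true
    }
}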

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 26 / 89


Story board

The next step is to draw the storyboard based on the task model to show
its usage and possible interface choices in detail.
A storyboard is basically a sequence of graphical illustrations and is
often used to previsualize motion pictures, animations
and interactive experiences.
There is no fixed format.
Each illustration usually includes a depiction of the important steps in
the interaction, annotated with a description of important aspects
(e.g., possible interface choices, operational constraints, any special
considerations needed, and usage contexts).
The following slides illustrate the initial storyboards for No Sheets,
showing the motivational context, a typical usage scenario and
sequences, and rough mobile interface sketches.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 27 / 89


Motivational context

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 28 / 89


Typical usage scene 1: The top-level menu

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 29 / 89


Scene 2: Interface looks for three subtasks

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 30 / 89


Scene 3: During ”play” – the look of the three concurrently
available subtasks

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 31 / 89


Scene 4: Moving between views/stages and quitting the
application

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 32 / 89


Interface selection and consolidation

Finally, particular interfaces are chosen for the individual subtasks.
This is a very important step, where we try to adhere to HCI principles,
guidelines and theories to justify and prioritize our decisions.
Starting requirement: this is a smartphone application.
Later, we will see the problems in our initial choices.
This is done purposely to show that naive and hurried choices would
put the application at high risk in terms of usability and user
experience, even though it might computationally satisfy the required
functionalities.
When we evaluate the initial prototype, this will become apparent
and we will need to revise our requirements and design.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 33 / 89


Finalization of the interface design choice for No Sheets

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 34 / 89


Finalization of the interface design choice for No Sheets . . .

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 35 / 89


Initial design wireframe

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 36 / 89


Summary

So far, we have seen the design process for interactive applications, focusing
on modeling of the interaction and selection of the interface.
It started with the requirement analysis and continued through the
user research and application-task modeling.
Then a storyboard was drawn while considering different options for particular
interfaces by applying relevant HCI principles, guidelines and theories.
We used a simple example to illustrate all these processes.
It was done in a simple and hurried manner, leaving much potential for later
improvement.
This emphasizes that the design process is going to be unavoidably
iterative, because it is usually not possible to anticipate all the usage
possibilities at once.
This is why we need evaluation, which is the topic of the next chapter.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 37 / 89


Understanding the UI Layer
Interactive applications are implemented and executed using user
interface software layers, collectively called the UI layer.
The UI layer refers to a set of software that operates above the core
operating system and below the application.
It encapsulates and exposes system functions for:
Fluid input and output.
Facilitation of the development of I/O functionalities (in the form of an
application programming interface/library [API] or toolkit).
Run-time management of graphical applications and UI elements, often
manifested as windows or graphical user interface (GUI) elements (in
the form of a separate application often called the window manager).
Since most interfaces are graphical, the UI layer uses a 2-D or 3-D
graphical system on which the GUI elements are implemented.
Thus the UI layer is largely composed of:
an API for creating and managing the user interface elements
(windows, buttons, menus, etc.);
a window manager to allow users to operate and manage the
applications through its own interface.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 38 / 89
UI software layer

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 39 / 89


Understanding the UI Layer . . .
The above figure illustrates the UI layer as part of the system software in many
computing platforms.
The user interacts with the window/GUI-based applications using various
input and output devices.
Apart from the general applications, the user interacts with the
computer and manages multiple application windows/tasks using the
window manager.
The window manager can be considered both an application and an API.
User applications are developed using the APIs that represent
abstracted I/O-related functionalities of the UI layer, such as those for
window management (resizing, iconifying, dragging, copy and paste, etc.),
GUI elements and widgets (windows, menus, buttons, etc.) and basic
windowing (creating/destroying a window, deactivating a window, etc.).
These APIs are in turn abstracted from the even lower-level APIs for
2-D/3-D graphics and the operating system.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 40 / 89
Input and output at the Low Level

At the lowest level, inputs and outputs are handled by the interrupt
mechanism of the system software (OS).
An interrupt is a signal to the processor indicating that an event
(usually I/O) has occurred and must be handled.
An interrupt signal is interpreted so that the address of its handler
procedure can be looked up and executed while suspending the
ongoing process for a moment.
After the handler procedure, the suspended process resumes again.
The arrival of an interrupt is checked very quickly as part of the
processor's execution cycle.
This shows that the processor is always listening for incoming
events, ready to serve them as needed.
The interrupt mechanism is contrasted with polling.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 41 / 89


Input and output at the Low Level . . .
In polling, the processor initiates input or output.
In order to carry out I/O tasks, the processor enters a loop,
continually checking the I/O device status to see whether it is ready,
and incrementally accomplishes the I/O task.
This form of I/O is deficient in supporting asynchronous, user-driven
I/O and wastes CPU time by blocking other non-I/O processes from
moving on.
At a higher level, the I/O operation is often described in terms of
events and event handlers.
These are an abstraction of the lower-level interrupt mechanism.
This is generally called the event-driven architecture, where programs
are developed in terms of events, such as mouse clicks and keyboard
input, and their corresponding handlers.
Such information can be captured in the form of a table and used for
efficient execution.
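As a small sketch of this idea (not OS-level code), such an event-handler table can be represented in Java as a map from event types to handler procedures, so that dispatching an event becomes a simple table lookup. The class name, event names and handlers below are illustrative only:

import java.util.HashMap;
import java.util.Map;

public class HandlerTable {
    interface Handler { void handle(Object eventData); }

    private final Map<String, Handler> table = new HashMap<>();

    void register(String eventType, Handler handler) {
        table.put(eventType, handler);           // fill the event-handler table
    }

    void dispatch(String eventType, Object eventData) {
        Handler h = table.get(eventType);        // table lookup
        if (h != null) {
            h.handle(eventData);                 // analogous to jumping to the handler procedure
        }
    }

    public static void main(String[] args) {
        HandlerTable t = new HandlerTable();
        t.register("MOUSE_CLICK", data -> System.out.println("click at " + data));
        t.register("KEY_PRESS",  data -> System.out.println("key " + data));
        t.dispatch("MOUSE_CLICK", "(120, 45)");
    }
}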
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 42 / 89
Complex interrupt mechanism abstracted as an event
handler table

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 43 / 89


Events, UI objects and Event handlers
How is user/device input processed, and how does the application
generate output?
Central to this overall working are events, UI objects and event
handlers.
The most basic UI object in today's visually oriented UI systems would
be the window (or layer).
A window is a rectangular portion of the screen associated with a
given application that is used as a space and channel for interacting
with the application.
A few other UI objects include buttons, menus, icons, forms, dialog
boxes, text boxes and so forth.
These are often referred to as GUI objects or widgets.
Generally, GUI-based interactive applications would have a top
window that includes all other UI objects or widgets that are logically
and/or spatially subordinate to it.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 44 / 89
Events, UI objects and Event handlers . . .
With today's concurrent operating systems, separate
windows/widgets for concurrent applications can coexist, overlapping
with one another, and can be switched into the current focus.
That is, when there are multiple windows the user carries out an
action to designate the active or current window in focus to which the
event will be channeled.
Two major methods for focusing are:
click-to-type
move-to-type
Click-to-type: User has to explicitly click on the window before
making input into it.
Move-to-type: The window over which the mouse cursor hovers
becomes the focus.
Move-to-type method is generally regarded as less convenient because
of the likelihood of unintended focus change due to accidental mouse
movements.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 45 / 89
Events, UI objects and Event handlers . . .
Most UI layers – though not all – are implemented in an object-oriented
fashion.
Thus we can think of generic or abstract object classes for a window
and other UI objects and widgets as being organized hierarchically.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 46 / 89


Events, UI objects and Event handlers . . .

We can designate the background screen space as the default root
system window (automatically activated upon system start) onto
which child application windows and GUI elements are placed.
The background naturally becomes the top window for the window
manager process.
Whether it is the root window or a GUI widget serving as an interaction
channel or object, it will receive input from the user through input devices
such as the keyboard, mouse, etc.
The physical input from the user/devices is converted into an event
which is simply data containing information about the user’s intent or
action.
Usually an event contains additional information such as its type, a
time stamp, the window to which it was directed and screen
coordinates.
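A rough sketch of the kind of data such an event object might carry is shown below; the class and field names are illustrative only and do not belong to any real toolkit API:

public class InputEvent {
    enum Type { MOUSE_DOWN, MOUSE_UP, KEY_DOWN, KEY_UP }

    final Type type;             // kind of event
    final long timestamp;        // when the event occurred (ms)
    final String targetWindow;   // window to which the event is directed
    final int x, y;              // screen coordinates

    InputEvent(Type type, long timestamp, String targetWindow, int x, int y) {
        this.type = type;
        this.timestamp = timestamp;
        this.targetWindow = targetWindow;
        this.x = x;
        this.y = y;
    }

    public static void main(String[] args) {
        InputEvent e = new InputEvent(Type.MOUSE_DOWN, System.currentTimeMillis(),
                "main window", 120, 45);
        System.out.println(e.type + " at (" + e.x + ", " + e.y + ")");
    }
}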

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 47 / 89


Events, UI objects and Event handlers . . .

These events are put into a queue by the OS and dispatched,
e.g., according to the current focus, to invoke the corresponding
handler.
An event does not necessarily correspond exactly to an individual
physical input.
The stream of raw inputs may be filtered and processed so that a
meaningful sequence of raw inputs forms a higher-level event
such as a double-click, a keyboard command or a mouse
enter/exit event.
The following figure shows the two-tier event-queuing system in detail.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 48 / 89


Event queuing

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 49 / 89


Event queuing . . .

There is a system-level event-queuing system that dispatches
events at the top application level.
Every application or process also typically manages its own event
queue, dispatching events to its own UI objects.
The proper event is captured by a UI object as it traverses down the
application's hierarchical UI structure, typically from top to bottom.
Then the event handler (also called a callback function) associated
with the UI object is activated in response to the event that is
captured.
The events do not necessarily have to be generated externally by the
interaction devices; indeed, sometimes they are generated internally
for special purposes.
These are also called pseudo-events.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 50 / 89


pseudo-events
For example, during window resizing, in addition to the resizing event,
the internal content of the window must be redrawn, and the same
goes for the other windows occluded or newly revealed by the resized
window.
Special pseudo-events are enqueued and conveyed to the respective
applications/windows.
In the case of resizing/hiding/activating and redrawing of windows, it
is the individual application's responsibility to update its display
contents, rather than the window manager's.
This is because only that application has the knowledge of how to update
its contents.
Hence a special redraw pseudo-event is sent to the application with
information about which region is to be updated.
The window content might also be redrawn due to the needs of the
application itself.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 51 / 89
Redraw a window
UI objects can generate pseudo-events for creating chain effects.
For example, when a scroll bar is moved, both the window content
and the scroll bar position have to be updated.
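A minimal Java AWT sketch of this chain effect is given below: the scroll bar's adjustment listener updates the scroll position and then calls repaint(), which enqueues a paint pseudo-event rather than drawing immediately. The class name and the drawn content are illustrative only:

import java.awt.BorderLayout;
import java.awt.Frame;
import java.awt.Graphics;
import java.awt.Scrollbar;

public class ScrollRedrawDemo extends Frame {
    private int offset = 0;

    ScrollRedrawDemo() {
        Scrollbar bar = new Scrollbar(Scrollbar.VERTICAL, 0, 10, 0, 100);
        // The listener reacts to the scroll event ...
        bar.addAdjustmentListener(e -> {
            offset = e.getValue();
            repaint();          // ... and posts a redraw pseudo-event for the content
        });
        add(bar, BorderLayout.EAST);
        setSize(300, 200);
        setVisible(true);
    }

    @Override
    public void paint(Graphics g) {
        g.drawString("content scrolled to " + offset, 50, 100);
    }

    public static void main(String[] args) {
        new ScrollRedrawDemo();
    }
}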

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 52 / 89


Event-driven program structure

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 53 / 89


Event-driven program structure . . .

The above figure shows the general structure of an event-driven program.
The first, initialization part of the program creates the necessary UI
objects for the application and declares the event-handler functions
and procedures for the created UI objects.
As the next step, the program enters a loop that automatically keeps
removing an event from the application event queue and invoking the
corresponding handler.
This is a system-level implementation part, which is generally hidden
from the developer.
Depending on the development toolkit, developers might sometimes have to
explicitly program this part as well.
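A minimal Java AWT sketch of this structure is shown below: the initialization part creates the UI objects and declares their handlers, while the event-removal loop is run by the toolkit's event-dispatch thread and stays hidden. The class name and handler bodies are illustrative only:

import java.awt.Button;
import java.awt.FlowLayout;
import java.awt.Frame;

public class EventDrivenSkeleton {
    public static void main(String[] args) {
        // 1. Initialization: create the UI objects
        Frame frame = new Frame("Skeleton");
        frame.setLayout(new FlowLayout());
        Button hello = new Button("Say hello");

        // 2. Declare event handlers for the created UI objects
        hello.addActionListener(e -> System.out.println("Hello, event loop!"));
        frame.addWindowListener(new java.awt.event.WindowAdapter() {
            @Override
            public void windowClosing(java.awt.event.WindowEvent e) {
                frame.dispose();   // handler for the window-close event
            }
        });

        frame.add(hello);
        frame.setSize(250, 120);
        frame.setVisible(true);
        // 3. No explicit loop here: the AWT event-dispatch thread keeps removing
        //    events from the queue and invoking the corresponding handlers.
    }
}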

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 54 / 89


Output
Interactive behavior which is purely computational will simply be carried out by executing
the event-handler procedure.
However, the response of an application to an event is often manifested by explicit visual,
aural and haptic/tactile output as well.
Refreshing the display based on the changed screen data is the last part of the event
processing loop.
Analogous processes will be called for sending commands to output devices of other
modalities as well.
Sometimes with the multimodal output, the outputs in different modalities need to be
synchronized.
Generally, provisions for this will be offered by the toolkit.
Internal computations take relatively little time, but processing and sending the new/changed
data to the display devices can take a significant amount of time.
This can become a bottleneck in the event-processing loop, thereby reducing the
interactivity.
Separating rendering and sensing parts into independent threads and processing at
different rates can enhance the real-time interactivity.
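As a rough sketch of this idea (not from the textbook), the Java fragment below lets input events merely update shared state while an independent thread refreshes the display at its own fixed rate; the class name and the refresh rate are arbitrary choices:

import java.awt.Canvas;
import java.awt.Frame;
import java.awt.Graphics;

public class RenderLoopSketch {
    private static volatile int clicks = 0;   // state changed by input events

    public static void main(String[] args) {
        Frame frame = new Frame("Render loop");
        Canvas canvas = new Canvas() {
            @Override
            public void paint(Graphics g) {
                g.drawString("clicks: " + clicks, 40, 60);
            }
        };
        canvas.addMouseListener(new java.awt.event.MouseAdapter() {
            @Override
            public void mousePressed(java.awt.event.MouseEvent e) {
                clicks++;   // sensing/input side: only update state, no drawing here
            }
        });
        frame.add(canvas);
        frame.setSize(300, 200);
        frame.setVisible(true);

        // Rendering side: repaint at roughly 30 Hz, independent of the input rate.
        new Thread(() -> {
            while (true) {
                canvas.repaint();
                try { Thread.sleep(33); } catch (InterruptedException e) { return; }
            }
        }).start();
    }
}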
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 55 / 89
Summary

So far, we have covered the inner workings of the general underlying
software structure on which interactive programs operate.
Most UI frameworks operate in similar ways according to an
event-driven structure.
The hardware input devices generate events that are conveyed to the
software interfaces and processed by the event-handling code to
produce output.
The UI layer sitting above the operating system provides the
computational framework for such an event-driven processing model
and makes useful abstractions of the lower OS details for easier and
more intuitive interactive software and interface development.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 56 / 89


UI development toolkit

As we have already seen, interfaces are often developed using UI
development toolkits.
For this purpose, UI toolkits are used together with the core application
logic written in conventional programming languages.
We can also think of a UI development framework as a methodology
for interactive program development.
An example is a framework in which the core computational and interface
parts are developed separately and combined in a flexible manner.
This allows the concept of plugging in different interfaces for the
same model computation and thus easier maintenance of the overall
program.
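A small sketch of this plug-in idea, under hypothetical names (Counter, CounterView), is given below: the core computation talks only to an interface, so a console view or a GUI view can be substituted without changing the model.

interface CounterView {                 // the replaceable interface part
    void show(int value);
}

class Counter {                         // the core computational part
    private int value = 0;
    private final CounterView view;

    Counter(CounterView view) { this.view = view; }

    void increment() {
        value++;
        view.show(value);               // the model notifies whichever view is plugged in
    }
}

public class PluggableInterfaceDemo {
    public static void main(String[] args) {
        CounterView console = v -> System.out.println("count = " + v);
        Counter counter = new Counter(console);   // same model, console interface
        counter.increment();
        // A GUI implementation of CounterView could be substituted here
        // without touching the Counter class.
    }
}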

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 57 / 89


UI toolkit

The UI toolkit is a
library of precomposed UI objects (typically with event handlers) and
a predefined set of events that are defined and composed from the
lower-level UI software layer or the UI execution framework.
The UI toolkit abstracts the system details of handling events, and as
such, programming interactive software becomes easier and more
convenient.
The UI object often takes the form of a manipulable graphical object,
often called a widget (i.e., window gadget).
A typical widget set includes menus, buttons, text boxes, images, etc.
A widget may be singular or composite (made up of several UI
objects).
The use of a toolkit also promotes the creation of an interface with a
consistent look, feel and mechanism.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 58 / 89


UI toolkit . . .

Here, we will study:

how events are defined,

how UI objects are created,

how event handlers are specified and

how the interface is combined with the core functional part of the
application.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 59 / 89


Java AWT UI Toolkit
Java, an object-oriented language, offers a library of object classes
called the AWT (Abstract Window Toolkit).
The AWT consists of classes useful for creating 2-D UI and graphical
objects.
Component: The most bare and abstract UI class, from which other
variant UI objects derive.
Descendants (subclasses) of the Component class include Window,
Button, Canvas, Label, etc.
The Window class has further subclasses such as Frame and Dialog.
Each class has basic methods.
For example, a window has methods for resizing, adding subelements,
setting its layout, moving to a new location, showing or hiding, etc.
The overall UI object hierarchy and an example of the code for
creating a frame and setting some of its properties by calling
such methods are shown below.
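As a rough preview of what such code might look like (the figure on the next slide may differ in detail), the following sketch creates an AWT Frame and sets some of its properties; the title and label text are illustrative only:

import java.awt.Frame;
import java.awt.Label;

public class FrameDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("My first frame");   // a top-level window
        frame.setSize(400, 300);                     // resize
        frame.setLocation(100, 100);                 // move to a new location
        frame.add(new Label("Hello, AWT"));          // add a subelement
        frame.setVisible(true);                      // show the window
    }
}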
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 60 / 89
Class hierarchy of UI objects

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 61 / 89


Creating a window and setting some properties

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 62 / 89


Java AWT UI Toolkit . . .

The Java AWT is not only a library of object classes for programming,
but also part of the UI execution framework for Java that handles a
subset of the UI execution events (called AWTEvents).
These AWTEvents are descendants of EventObject and cover
most of the useful UI events such as mouse clicks, keyboard input, etc.
The AWT framework will map an AWTEvent to the corresponding
AWT UI object.
There are two ways for the UI object to handle the events:
The first is to override the predefined callback methods of the interactive
applet object for the events.
The second is to register the individual AWT UI object with an event listener
that waits for and responds to the corresponding event, thereby
implementing reactive behaviors to the various events.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 63 / 89


AWTEvent Types and corresponding overridable callback
functions

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 64 / 89


An interactive applet with callbacks for mouse events

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 65 / 89


Java AWT UI Toolkit . . .

The above figure shows an example of reacting to various mouse events
and redrawing the given interactive applet.
Each event handler must return a Boolean value indicating whether
the event should be made available to other event handlers.
Try returning false and see what happens (during lab hours).
The event handlers in Java AWT are known as listeners.
A listener is a background process that listens for the events associated
with a given UI object and responds to them.
It is an abstraction of the event-processing loop we have already
discussed, assigned to a UI component.
A listener is different from a simple callback function: a listener is a
process that waits for and reacts to the associated event, while a callback
function is just the procedure that reacts to the event.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 66 / 89


Event listener interface hierarchy

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 67 / 89


Java AWT UI Toolkit . . .
Listeners must be registered for various events that can be taken up
by the given UI object.
As a single UI object may be composed of several basic components
and potentially receive many different types of input events, listeners
for each of them will have to be coded and registered (above figure).
All the events derive from an abstract EventObject and offer basic
methods for retrieving the object associated with the event and
accessing the event type and id.
Descendant event classes possess additional specific attributes and
associated methods for accessing their values.
Examples:
the KeyEvent has a method getKeyChar() to return the value of the
keyboard input.
the MouseEvent has methods called getPoint() and getClickCount() to
return the screen position data at which the mouse event occurred and
the number of clicks.
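A minimal sketch using these accessor methods inside registered listeners is shown below; the class name and the printed messages are illustrative only:

import java.awt.Frame;
import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;

public class EventAccessorDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("Event accessors");
        frame.addMouseListener(new MouseAdapter() {
            @Override
            public void mouseClicked(MouseEvent e) {
                // position and click count carried by the MouseEvent
                System.out.println("clicked at " + e.getPoint()
                        + " (" + e.getClickCount() + " clicks)");
            }
        });
        frame.addKeyListener(new KeyAdapter() {
            @Override
            public void keyTyped(KeyEvent e) {
                // character value carried by the KeyEvent
                System.out.println("typed: " + e.getKeyChar());
            }
        });
        frame.setSize(300, 200);
        frame.setVisible(true);
    }
}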
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 68 / 89
Event-component hierarchy

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 69 / 89


Event descriptions and examples

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 70 / 89


Java AWT UI Toolkit . . .

A UI object, possibly composed of several basic UI components, is
designated to react to different events by associating the
corresponding listeners with the UI object through the
interface-implementation construct.
The UI object is declared to implement the various necessary listeners,
and in the object-initialization phase, the specific components are
created and the listeners are registered.
The class definition will thus include the implementation of the
methods for the registered listeners.
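A hedged sketch of this interface-implementation construct is given below, using hypothetical component names: the panel declares that it implements ActionListener, registers itself with its button during initialization, and provides the required actionPerformed() method in its class body.

import java.awt.Button;
import java.awt.Panel;
import java.awt.TextField;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class GreetingPanel extends Panel implements ActionListener {
    private final TextField name = new TextField(15);
    private final Button greet = new Button("Greet");

    public GreetingPanel() {
        add(name);
        add(greet);
        greet.addActionListener(this);   // register this object as the listener
    }

    @Override
    public void actionPerformed(ActionEvent e) {   // required by ActionListener
        System.out.println("Hello, " + name.getText() + "!");
    }

    public static void main(String[] args) {
        java.awt.Frame f = new java.awt.Frame("Greeting");
        f.add(new GreetingPanel());
        f.setSize(300, 100);
        f.setVisible(true);
    }
}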

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 71 / 89


Events, corresponding listener interface and derived
methods

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 72 / 89


UI object specification with event listeners using the Java
AWT

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 73 / 89


Android TM UI Execution Framework and Toolkit

The user programming environment and execution model for Android
(even though at the low level the OS is derived from Linux) is based
on Java.
Events in Android can take a variety of different forms.
But they are usually generated in response to bare and raw external
actions such as touch and button input.
Multiple or composite higher-level events may also be internally
recognized and generated such as touch gestures (e.g., flick, swipe) or
virtual keyboard inputs.
The Android framework maintains an event queue into which events
are placed as they occur.
Events are then removed from the queue on a first-in, first-out (FIFO)
basis.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 74 / 89


Android TM UI Execution Framework and Toolkit . . .
In the case of an input event such as a touch on the screen, the event
is passed to the view object either by the location on the screen
where the touch took place or by the current focus.
View object: the class from which the UI object classes in Android are derived.
In addition to the event notification, the view is also passed a range
of information about the nature of the event such as the coordinates
of the point of contact between the user’s fingertip and the screen.
Similar to the case of Java AWT, there are two major ways to define
the reactive behavior to these events.
The first is to override the default callback methods.
The second is to associate an event listener with the view object.
The Android View class, from which all UI components are derived,
contains a range of event-listener interfaces.
Each of these interfaces contains an abstract declaration for a
callback method.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 75 / 89
Android TM UI Execution Framework and Toolkit . . .

In order to respond to an event of a particular type, a view must
register the appropriate event listener and implement the
corresponding callback.

For example, if a button is to respond to a click
event, it must both register the View.OnClickListener event listener
and implement the corresponding onClick() callback method.

When a click event is detected on the screen at the location of the
button view, the Android framework will call the onClick() method of
that view when that event is removed from the event queue.

Different ways of registering and implementing an event listener are
illustrated below:
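As a rough sketch of the first approach (not the textbook's code), the Java fragment below registers a listener object with a view; the layout resource, the view id play_button and the startPlayback() method are hypothetical, while setContentView(), findViewById(), setOnClickListener() and View.OnClickListener are the Android APIs involved.

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);           // hypothetical layout resource

        Button playButton = (Button) findViewById(R.id.play_button);  // hypothetical id
        playButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {                 // callback declared by the listener
                startPlayback();                          // hypothetical application method
            }
        });
    }

    private void startPlayback() { /* application-specific */ }
}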

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 76 / 89


Implementing the event listener itself and associating it
with the View object

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 77 / 89


Having the View object implement the event listener

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 78 / 89


Having the topmost Activity object implement the
event listener

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 79 / 89


Android TM UI Execution Framework and Toolkit . . .

The Android UI framework also provides a declarative method for
specifying the UI – the form of the UI can be declared using a markup
language.
Through a development tool such as Eclipse, the UI can be built
through direct graphical manipulation as well.

In summary, there are three methods of UI development:

the usual programmatic way

a declarative way

a graphical way

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 80 / 89


Example of a declarative specification of the UI for No
Sheets application

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 81 / 89


Graphical specification of the UI for No Sheets application

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 82 / 89


Programmatic specification of the UI for No Sheets

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 83 / 89


iOS UIKit Framework and Toolkit
There are three major types of discrete events in iOS: multitouch,
motion and remote control.
iOS generates low-level events when users touch the ”views” of an
application.
The application sends these discrete events as UIEvent objects, as
defined by the UIKit framework, to the view on which the touches
occurred.
The view analyzes the touch events and responds to them.
Touch events can be combined to represent higher-level gestures such
as flicks and swipes.
The given application uses the UIKit classes for gesture recognition
and responding to such recognized events.
For continuous streams of sensor data such as those from
accelerometers or gyroscopes, a separate Core Motion framework is
used.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 84 / 89
iOS UIKit: A UI object hierarchy example

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 85 / 89


iOS UIKit Framework and Toolkit . . .
When users touch the screen of a device, iOS recognizes the set of
touches and packages them in a UIEvent object and places it in the
active application’s event queue.
If the system interprets the shaking of the device as a motion event,
an object representing that event is also placed in the application's
event queue.
The UIApplication object managing the application takes an event
from the top of the queue and dispatches it for handling.
The object is different for touch events and motion events.
A responder object is an object that can respond to events and
handle them.
UIResponder is the base class for all responder objects, also known
simply as responders.
It defines the programmatic interface not only for event handling, but
also for common responder behavior.
Pratheeba J (UOJ) EC9540 - Human Computer Interaction 86 / 89
iOS UIKit Framework and Toolkit . . .

UIApplication, UIView and all UIKit classes that descend from
UIView inherit directly or indirectly from UIResponder, and thus their
instances are responder objects.
The first responder is the responder object in an application that is
designated to be the first recipient of events other than touch events.
A UIWindow object sends the first responder these events in a
message, giving it the first shot at handling them.
If the first responder does not handle an event, UIKit may pass the
event to the next responder in the responder chain to see if it can
handle it.
An event proceeds up the responder chain as the application looks for
an object capable of handling the event.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 87 / 89


The event-processing flow and the event-driven object
behavior structure

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 88 / 89


Summary

So far we have reviewed three examples of UI toolkits, namely those for
Java AWT, Android and iOS.
There are many other UI toolkits; however, most of them are
similar in their structure and basic underlying mechanisms.
Some UI toolkits also include visual prototyping tools and a declarative
specification syntax, which make it even more convenient for
developers to implement user interfaces.
In general, the use of a UI toolkit promotes standardization, familiarity,
ease of use, fast implementation and consistency for a given platform.

Pratheeba J (UOJ) EC9540 - Human Computer Interaction 89 / 89
