
Implementation

Based on Chapter 19 of Bennett, McRobb


and Farmer:
Object-Oriented Systems Analysis and
Design Using UML (2nd Edition), McGraw-
Hill, 2002.
03/12/2001 © Bennett, McRobb and Farmer 2002 1
In This Lecture You Will
Learn:
 About tools used in software
implementation
 How to draw component diagrams
 How to draw deployment diagrams
 The tasks involved in testing a system
 How to plan for data conversion from an
old system
 Ways of introducing a new system into an
organization
 Tasks in review and maintenance
© Bennett, McRobb and Farmer 2002 2
Software Implementation
Tools
 In an iterative project, planning for
implementation will begin in the inception
phase
 The implementation workflow includes tasks
to set up the environment for implementation
 Some tools, particularly CASE tools and
configuration management systems, will carry
through from analysis and design activities
 A wide range of software tools will be
used

© Bennett, McRobb and Farmer 2002 3


Software Implementation
Tools
 CASE Tools
– Many tools now support UML
– Make it possible to generate code (in Java,
C++ and VB) from the models
– May make reverse engineering of code
possible, to provide round-trip engineering
– May map classes to a relational database
– Link to configuration management tools

© Bennett, McRobb and Farmer 2002 4


Software Implementation
Tools
 Compilers, Interpreters and Run-times
– Different languages require different tools
– C++ requires a compiler and a linker to
build executables
– Java requires a compiler and a run-time
program and libraries to run the byte-
code produced by the compiler
– C# is like Java and is compiled into MSIL
(Microsoft Intermediate Language)

© Bennett, McRobb and Farmer 2002 5


Software Implementation
Tools
 Visual Editors
– Provide a way of designing GUI
interfaces by dragging and dropping
buttons, text fields etc. onto a window
– Can often also handle controls or
objects that represent non-visual
components such as links to a
database or communications
processes
© Bennett, McRobb and Farmer 2002 6
Software Implementation
Tools
 IDEs (Integrated Development
Environments)
– Manage the many files in a project and the
dependencies among them
– Link to configuration management tools
– Use compilers to build the project, only
recompiling what has changed
– Provide debugging facilities
– May include a visual editor
– Can be configured to link in third party tools
© Bennett, McRobb and Farmer 2002 7
Software Implementation
Tools
 Configuration Management Tools
– Also called version control tools,
although configuration management
is more than just version control
– Maintain a record of file versions and
the changes from one version to the
next
– Record all the versions of software
and tools that are required to produce
a repeatable software build
© Bennett, McRobb and Farmer 2002 8
Software Implementation
Tools
 Class Browsers
– May be part of IDEs or visual editors
– Originally provided as the way of
browsing through available classes in
Smalltalk
– Java API documentation is provided in
a browseable hypertext format
generated by Javadoc

© Bennett, McRobb and Farmer 2002 9


Software Implementation
Tools
 Component Managers
– New kind of tool to manage
components
– Provide mechanisms to
 add components
 search for components
 browse for components
 maintain versions of components

© Bennett, McRobb and Farmer 2002 10


Software Implementation
Tools
 DBMS (Database Management
Systems)
– Server system
– Client software (administration
interfaces, ODBC and JDBC drivers; see
the sketch below)
– Tools to manage the database and
carry out performance tuning
– Large DBMSs, such as Oracle, come with
many tools, even their own application
server
© Bennett, McRobb and Farmer 2002 11
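As a small illustration of the client software listed above, the following Java sketch uses JDBC to open a connection and run a query. It is a minimal sketch only: the driver URL, credentials, table and column names are hypothetical placeholders, not taken from the chapter.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CampaignQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL and credentials - substitute those of your own DBMS
        String url = "jdbc:oracle:thin:@dbhost:1521:agency";
        try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT title FROM campaign")) {
            while (rs.next()) {
                System.out.println(rs.getString("title"));  // print each row's title column
            }
        }
    }
}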
Software Implementation
Tools
 CORBA
– CORBA ORB to handle the marshalling
and unmarshalling of requests and
objects
– IDL compiler
– Registry service

© Bennett, McRobb and Farmer 2002 12


Software Implementation
Tools
 Testing Tools
– Tools written by developers as test
harnesses
– Automated test tools to run repeated
or multiple simultaneous tests
– May allow user to run through test
once manually, then generate a script
that can be edited to provide
variations
© Bennett, McRobb and Farmer 2002 13
Software Implementation
Tools
 Installation Tools
– Automate the extraction of files from an
archive and the setting up of configuration
files and registry entries
– Some maintain information about
dependencies on other pieces of software
and will install all necessary packages (e.g.
Red Hat RPM)
– Uninstall software, removing files, directories
and registry entries (if you are lucky!)
© Bennett, McRobb and Farmer 2002 14
Software Implementation
Tools
 Conversion Tools
– Extract data from existing systems
– Reformat the data for the new system
– Insert it into the database for the new
system
– May require manual intervention to
‘clean up’ the data—removing
duplication or invalid values in fields
© Bennett, McRobb and Farmer 2002 15
Software Implementation
Tools
 Documentation Generators
– Document models and code
– Extract standard information or user-
defined information into document
templates
– Produce HTML to document the API of
classes in the application

© Bennett, McRobb and Farmer 2002 16


Coding and
Documentation Standards
 Naming standards are agreed early in a
project
 A typical object-oriented standard (see the sketch below):
– classes with capital letters: Campaign
– attributes and operations with initial lower
case letters: title, recordPayment()
– words are concatenated together with
capital letters to show where they are
joined: InternationalCampaign,
campaignFinishDate, getNotes()

© Bennett, McRobb and Farmer 2002 17
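A minimal Java sketch of the naming standard above, using the class and member names from the slide; the attribute types and method bodies are assumptions added only for illustration.

import java.util.Date;

// Class names begin with a capital letter
public class Campaign {
    // Attribute names begin with a lower-case letter
    private String title;
    private Date campaignFinishDate;

    // Operation names also begin with a lower-case letter; words are
    // concatenated with capital letters to show where they join
    public void recordPayment(double amount) {
        // body assumed - e.g. reduce the outstanding balance
    }

    public String getNotes() {
        return "";  // placeholder body
    }
}

// A multi-word class name concatenated with capital letters
class InternationalCampaign extends Campaign {
}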


Coding and
Documentation Standards
 Hungarian Notation
 Used in C and C++
 Names prefixed by an abbreviation
to show the type of the member
variable (see the sketch below)
– b for boolean: bOrderClosed
– i for integer: iOrderLineNumber
– btn for button: btnCloseOrder
© Bennett, McRobb and Farmer 2002 18
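Hungarian notation comes from C and C++, but the prefixes can be illustrated just as well as Java field declarations. The enclosing class and field types below are assumptions for illustration.

import javax.swing.JButton;

public class OrderForm {
    private boolean bOrderClosed;     // b   prefix: boolean
    private int iOrderLineNumber;     // i   prefix: integer
    private JButton btnCloseOrder;    // btn prefix: button control
}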
Coding and
Documentation Standards
 One other standard:
– using underscores to separate parts
of a name instead of capital letters
– Order_Closed
– often used for column names in
databases, as it is easier to replace
the underscores with spaces to
produce meaningful column headings
in reports than trying to find the word
breaks in concatenated names
© Bennett, McRobb and Farmer 2002 19
Coding and
Documentation Standards
 Document code
– Think of the people who will maintain your code
– Others may be able to use your code to learn
good practice, but only if it is clearly
documented
– No language is self-documenting; conventions
and standards help
– Comply with Java documentation standards
(Javadoc) if coding in Java
– You can take advantage of tools that automate
the production of documentation from
comments (see the sketch below)
© Bennett, McRobb and Farmer 2002 20
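A short sketch of Javadoc documentation comments, from which the javadoc tool can generate browsable HTML for a class's API. Only the recordPayment() name comes from the earlier naming examples; the attribute and the behaviour described are assumptions.

public class Campaign {

    private double outstandingBalance;  // assumed attribute for illustration

    /**
     * Records a payment received against this campaign and reduces the
     * outstanding balance accordingly.
     *
     * @param amount the payment amount received
     * @throws IllegalArgumentException if the amount is negative
     */
    public void recordPayment(double amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("amount must not be negative");
        }
        outstandingBalance -= amount;
    }
}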
Implementation Diagrams
 Component Diagrams
– used to document dependencies between
components, typically files, either
compilation dependencies or run-time
dependencies
 Deployment Diagrams
– used to show the configuration of run-time
processing elements and the software
components and processes that are located
on them
© Bennett, McRobb and Farmer 2002 21
Notation of
Component Diagrams
 Components
– rectangles with two small rectangles
superimposed at one end
– may implement interfaces, shown as
circles connected by a line
– can be stereotyped, for example to
represent files
 Dependencies – shown as dashed arrows between components
© Bennett, McRobb and Farmer 2002 22
Notation of
Component Diagrams
 Compilation dependencies
[Diagram: the «Body» SalesOrder.cpp «includes» the «Header» SalesOrder.h;
the «Object Code» SalesOrder.o is produced from SalesOrder.cpp and is
linked into the «Executable» PrintOrder.exe]
© Bennett, McRobb and Farmer 2002 23


Notation of
Component Diagrams
 Dependency of a component on
the interface of another component
[Diagram: a Dependent Component with a dependency on the interface
offered by a Specification Component]
© Bennett, McRobb and Farmer 2002 24


Notation of
Component Diagrams
 Dependency between high-level
components
[Diagram: two high-level components, Production scheduler and
Staff planner, linked by a dependency]
© Bennett, McRobb and Farmer 2002 25


Components
 Components should be physical
components of a system
 Packages can be used to manage
the grouping of physical
components into sub-systems
 Components can be shown on
deployment diagrams to document
their deployment on different
processors
© Bennett, McRobb and Farmer 2002 26
Stereotyped Components

[Diagram: stereotyped components - the Production Scheduler component
with a help file Scheduler.hlp and a settings file Scheduler.ini]

© Bennett, McRobb and Farmer 2002 27


Notation of
Deployment Diagrams
 Nodes
– rectangular prisms
– represent processors, devices or
other resources
 Communication Associations
– lines between nodes
– represent communication between
nodes
– can be stereotyped
© Bennett, McRobb and Farmer 2002 28
Notation of
Deployment Diagrams

[Diagram: two nodes, swift:PC and aardvark:DECAlpha, joined by a
communication association stereotyped «TCP/IP»]

© Bennett, McRobb and Farmer 2002 29


Notation of
Deployment Diagrams
 Can be shown with active objects
or components located on the
nodes
 In a component diagram,
components are component types;
in a deployment diagram, they are
component instances

© Bennett, McRobb and Farmer 2002 30


Notation of
Deployment Diagrams

[Diagram: a PC Client node and the aardvark:DECAlpha node joined by a
«JDBC» connection (sun.jdbc); the component instance SalesDB:Sales
Database system is located on the aardvark node]

© Bennett, McRobb and Farmer 2002 31


Use of Implementation
Diagrams
 Can be used architecturally to show how
elements of the system will work together
 Typically used for simple diagrams
 Full documentation of dependencies
and location of all components may be
better handled by configuration
management software or in a
spreadsheet or database

© Bennett, McRobb and Farmer 2002 32


Software Testing
 Who tests?
– Ideally specialist test teams with access to
software to build and execute test scripts
– Often the analysts who have carried out the
initial requirements gathering and analysis
– In eXtreme Programming (XP) programmers
are expected to write test harnesses for
classes before they write the code
– Users of the system, who will test against
requirements and do user acceptance testing
© Bennett, McRobb and Farmer 2002 33
Software Testing
 What kinds of tests?
– Black box testing
 Does it do what it’s meant to do?
 Does it do it as fast as it should?
– White box testing
 Is it not just a solution to the problem,
but a good solution?

© Bennett, McRobb and Farmer 2002 34


Purpose of Testing
 The purpose of testing is to try to find
errors, not to prove the software is
correct
– Test data should test the software at its
limits and test business rules (see the
sketch below)
 extreme values (very large numbers, long strings)
 borderline values (0, -1, 0.999)
 invalid combinations of values (age = 3, marital
status = married)
 nonsensical values (negative order line quantities)
 heavy loads (are performance requirements met?)
© Bennett, McRobb and Farmer 2002 35
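A JUnit-style sketch of test data at the limits. The OrderLine class shown alongside the tests, its constructor and the exception it throws for invalid quantities are all hypothetical, written here only so the example is self-contained.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Minimal class under test, assumed for illustration
class OrderLine {
    private final String product;
    private final int quantity;

    OrderLine(String product, int quantity) {
        if (quantity < 1) {
            throw new IllegalArgumentException("quantity must be at least 1");
        }
        this.product = product;
        this.quantity = quantity;
    }

    int getQuantity() {
        return quantity;
    }
}

public class OrderLineTest {

    @Test
    public void acceptsBorderlineQuantityOfOne() {
        // borderline valid value: the smallest sensible quantity
        assertEquals(1, new OrderLine("Widget", 1).getQuantity());
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsZeroQuantity() {
        // borderline invalid value
        new OrderLine("Widget", 0);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeQuantity() {
        // nonsensical value: a negative order line quantity
        new OrderLine("Widget", -1);
    }
}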
Levels of Testing
 Levels of testing
– unit testing (individual classes)
– integration testing (classes work correctly
together)
– sub-system testing (sub-system works correctly
and delivers required functionality)
– system testing (whole system works together
with no unwanted interaction between sub-
systems)
– acceptance testing (the system works as
required by the users and according to
specification)
© Bennett, McRobb and Farmer 2002 36
Outside-in or Inside-out?
 Test harnesses
– Write programs that create instances
of classes and send them messages
to test that operations execute
correctly (see the sketch below)
 Mock objects (Endo-testing)
– Write mock objects that implement
the interface of real objects and
check the signature of messages sent
to them and the state of the objects
© Bennett, McRobb and Farmer 2002 37
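A minimal hand-written test harness and mock object in the spirit of this slide. The CreditChecker interface, the Order class and the behaviour being checked are assumptions, written only so the sketch compiles on its own.

// Collaborator interface assumed for illustration
interface CreditChecker {
    boolean approve(String customerId, double amount);
}

// Mock object: implements the real interface and records the message it receives
class MockCreditChecker implements CreditChecker {
    String lastCustomerId;
    double lastAmount;

    public boolean approve(String customerId, double amount) {
        lastCustomerId = customerId;  // record the message for later checking
        lastAmount = amount;
        return true;                  // canned answer so the test can proceed
    }
}

// Minimal class under test, assumed for illustration
class Order {
    private final String customerId;
    private final CreditChecker checker;

    Order(String customerId, CreditChecker checker) {
        this.customerId = customerId;
        this.checker = checker;
    }

    boolean close(double total) {
        return checker.approve(customerId, total);
    }
}

// Hand-written test harness: create an instance, send it messages, check the outcome
public class OrderTestHarness {
    public static void main(String[] args) {
        MockCreditChecker checker = new MockCreditChecker();
        Order order = new Order("C42", checker);

        boolean closed = order.close(29.97);

        if (!closed || !"C42".equals(checker.lastCustomerId) || checker.lastAmount != 29.97) {
            throw new AssertionError("Order.close() did not use the credit checker as expected");
        }
        System.out.println("Order test passed");
    }
}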
Levels of Testing
 Level 1
– Test modules (classes), then programs (use
cases) then suites (application)
 Level 2 (Alpha Testing or Verification)
– Execute programs in a simulated
environment and test inputs and outputs
 Level 3 (Beta Testing or Validation)
– Test in a live user environment and test for
response times, performance under load
and recovery from failure

© Bennett, McRobb and Farmer 2002 38


Test Documentation
 Test plans
– written before the tests are carried out!
– written, in fact, before the code is written
– contains Test Cases
 description of test
 test environment and configuration
 test data
 expected outcomes

© Bennett, McRobb and Farmer 2002 39


Test Documentation
 Test results
– ideally in a spreadsheet or database
– record when tests are failed or passed
– allow reporting of percentage passed
– ideally should be linked to
requirements
– error results should be recorded in a
fault reporting package with enough
details for developers to reproduce
them with a view to fixing the bugs
© Bennett, McRobb and Farmer 2002 40
Data Conversion
 Data from manual systems needs collating
and putting into a standard format
 Incomplete data may need to be chased
up
 There is a cost associated with keying data
into a new system
 There may be a requirement for data entry
screens that will only be used to get data
into the system to start it up
© Bennett, McRobb and Farmer 2002 41
Data Conversion
 Data from existing computer systems
– existing data must be checked for
correctness
– specially written programs may be
required to check and convert the data
(see the sketch below)
– data may be loaded into a staging area to
be ‘cleaned up’
– data is imported into the new system
– data must be verified after being imported

© Bennett, McRobb and Farmer 2002 42
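A skeletal sketch of a specially written conversion program along the lines above: it extracts rows from the old system over JDBC, 'cleans up' obviously invalid values, and inserts the result into the new system's database. The connection URLs, table and column names are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class CustomerDataConversion {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URLs for the old and new systems
        try (Connection oldDb = DriverManager.getConnection("jdbc:odbc:legacy");
             Connection newDb = DriverManager.getConnection("jdbc:oracle:thin:@dbhost:1521:agency");
             Statement extract = oldDb.createStatement();
             ResultSet rs = extract.executeQuery("SELECT cust_name, cust_phone FROM old_customer");
             PreparedStatement insert = newDb.prepareStatement(
                     "INSERT INTO customer (name, phone) VALUES (?, ?)")) {

            while (rs.next()) {
                // 'Clean up' step: trim stray whitespace and skip rows with no name
                String name = rs.getString("cust_name");
                if (name == null || name.trim().isEmpty()) {
                    continue;  // invalid row - in practice it would be logged for manual follow-up
                }
                insert.setString(1, name.trim());
                insert.setString(2, rs.getString("cust_phone"));
                insert.executeUpdate();
            }
        }
    }
}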


User Documentation
and Training
 User manuals
– Training manuals organized around the
tasks the users carry out
– On-line computer-based training (CBT)
that can be delivered when the users
need it
– Reference manuals to provide
complete description of the system in
terms the users can understand
– On-line help replicating the manuals
© Bennett, McRobb and Farmer 2002 43
User Documentation
and Training
 User Training
– Set clear learning objectives for trainees
– Training should be practical and geared to
the tasks the users will carry out
– Training should be delivered ‘just in time’
not weeks before the users need it
– CBT can deliver ‘just in time’ training
– Follow up after the introduction of the
system to make sure users haven’t got into
bad habits through lack of training or having
forgotten what they had been told
© Bennett, McRobb and Farmer 2002 44
Implementation Strategies

[Diagram: timelines for the changeover strategies -
Direct changeover: the old system stops and the new system
starts at a single point in time.
Parallel running: the old and new systems run side by side
for a period before the old system is retired.
Phased changeover: the old system is replaced by the new
system in stages (Phase 1, Phase 2, Phase 3) over time.]

© Bennett, McRobb and Farmer 2002 45


Implementation Strategies
 Direct Changeover
– On a date the old system stops and the new
system starts
+ brings immediate benefits
+ forces users to use the new system
+ simple to plan
– no fallback if problems occur
– contingency plans required for the unexpected
– the plan must work without difficulties
Suitable for small-scale, low-risk systems
© Bennett, McRobb and Farmer 2002 46
Implementation Strategies
 Parallel Running
– Old system runs alongside the new system for
a period of time
+ provides fallback if there are problems
+ outputs of the two systems can be compared, so
testing continues into the live environment
– high running cost including staff for dual data entry
– cost associated with comparing outputs of two
systems
– users may not be committed to the new system
Suitable for business-critical, high-risk systems
© Bennett, McRobb and Farmer 2002 47
Implementation Strategies
 Phased Changeover
– The new system is introduced in stages,
department by department or geographically
+ attention can be paid to each sub-system in turn
+ sub-systems with a high return on investment can
be introduced first
+ thorough testing of each stage as it is introduced
– if there are problems rumours can spread ahead of
the implementation
– there can be a long wait for benefits from later
stages
Suitable for large systems with independent
sub-systems
© Bennett, McRobb and Farmer 2002 48
Implementation Strategies
 Pilot Project
– Complete system is tried out in one department
or at one site
+ can be used as a learning experience
+ can feed back into design before system is launched
organization-wide
+ decision on whether to go ahead across the whole
organization can depend on the pilot outcome
+ reduces risk
– there is an initial cost without benefits across the
whole organization
Suitable for smaller systems and packaged
software
© Bennett, McRobb and Farmer 2002 49
Review
 Post-implementation review
 Review the system
– whether it is delivering the benefits expected
– whether it meets the requirements
 Review the development project
– record lessons learned
– use actual time spent on project to improve
estimating process
 Plan actions for any maintenance or
enhancements
© Bennett, McRobb and Farmer 2002 50
Evaluation Report
 Cost benefit analysis—Has it delivered?
Compare actuals with projections
 Functional requirements—Have they been
met? Any further work needed?
 Non-functional requirements—Assess whether
measurable objectives have been met
 User satisfaction—Quantitative and qualitative
assessments of satisfaction with the product
 Problems and issues—Problems during the
project and solutions so lessons can be learned
© Bennett, McRobb and Farmer 2002 51
Evaluation Report
 Positive experiences—What went well? Who
deserves credit?
 Quantitative data for planning—How close were
time estimates to actuals? How can we use this
data?
 Candidate components for reuse—Are there
components that could be reused in other projects
in the future?
 Future developments—Were requirements left out
of the project due to time pressure? When should
they be developed?
 Actions—Summary list of actions, responsibilities
and deadlines
© Bennett, McRobb and Farmer 2002 52
Maintenance Activities
 Systems need maintaining after
they have gone live
 Bugs will appear and need fixing
 Enhancements to the system may
be requested
 Maintenance needs to be controlled
so that bugs are not introduced and
unnecessary changes are not made

© Bennett, McRobb and Farmer 2002 53


Maintenance Activities
 Helpdesk, operations and support
staff need training to take on these
tasks
 A Change Control System is required
to manage requests for bug fixes and
enhancements
 Changes need to be evaluated for
their cost and their impact on other
parts of the system, and then planned
© Bennett, McRobb and Farmer 2002 54
Maintenance
Documentation
 Bug reporting database
 Requests for enhancements
 Feedback to users
 Implementation plans for changes
 Updated technical and user
documentation
 Records of changes made
© Bennett, McRobb and Farmer 2002 55
Summary
In this lecture you have learned about:
 Tools used in software implementation
 How to draw component diagrams
 How to draw deployment diagrams
 The tasks involved in testing a system
 How to plan for data conversion
 Ways of introducing a new system into
an organization
 Tasks in review and maintenance
© Bennett, McRobb and Farmer 2002 56
References
 Deitel and Deitel (1997)
(For full bibliographic details, see
Bennett, McRobb and Farmer)

© Bennett, McRobb and Farmer 2002 57
