Exploring Innovation
David Smith

London Boston Burr Ridge, IL Dubuque, IA Madison, WI New York San Francisco
St. Louis Bangkok Bogotá Caracas Kuala Lumpur Lisbon Madrid Mexico City
Milan Montreal New Delhi Santiago Seoul Singapore Sydney Taipei Toronto
Exploring Innovation
David Smith
ISBN-13 9780077158392
ISBN-10 0077158393

Published by McGraw-Hill Education


Shoppenhangers Road
Maidenhead
Berkshire
SL6 2QL
Telephone: +44 (0) 1628 502 500
Fax: +44 (0) 1628 770 224
Website: www.mheducation.co.uk
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloguing in Publication Data
The Library of Congress data for this book has been applied for from the Library of
Congress
Content Acquisitions Manager: Natalie Jacobs
Product Developer: Nina Smith
Content Product Manager: Alison Davis
Marketing Manager: Geeta Kumar
Text Design by HL Studios
Original Cover design by Ego Creative
Published by McGraw-Hill Education. Copyright © 2015 by McGraw-Hill Education.
All rights reserved. No part of this publication may be reproduced or distributed in
any form or by any means, or stored in a database or retrieval system, without the
prior written consent of McGraw-Hill Education, including, but not limited to, in any
network or other electronic storage or transmission, or broadcast for distance
learning.
Fictitious names of companies, products, people, characters and/or data that may be
used herein (in case studies or in examples) are not intended to represent any real
individual, company, product or event.
ISBN-13 9780077158392
ISBN-10 0077158393
© 2015. Exclusive rights by McGraw-Hill Education for manufacture and export.
This book cannot be re-exported from the country to which it is sold by McGraw-Hill
Education.
Dedication
For G, E and M
Part 1 What Is Innovation?
1 Introduction
2 Types of innovation
3 Theories of innovation
Part 2 What Does Innovation Involve?
4 Sources of innovation
5 The process of innovation
6 Value capture
7 Intellectual property rights
Part 3 How Do You Manage Innovation?
8 Innovation strategy
9 Technological entrepreneurs
10 Funding innovation
11 Managing innovation
Part 4 Developments in Innovation
12 Innovation offshoring
13 Green innovation
14 National innovation systems
References
Resources
Index
About the Author
Preface
Acknowledgements
Guided Tour
Online Learning Centre
Part 1 What Is Innovation?
1 Introduction
Innovation – what’s the big deal?
The phases of innovation
Exploration – exploitation – diffusion
Exploration versus exploitation: a matter of balance
Exploration
Idea generation
Mini Case: Workmate
Scientific discovery
Mini Case: Compound: UK92480
Technological breakthrough
Invention
Mini Case: Menlo Park
Exploitation
Business models
Mini Case: HP Sauce
Mini Case: ARM Holdings
Diffusion
What does innovation involve? The attributes of innovation
What next?
Case Study: Twitter
Further reading

2 Types of Innovation
Introduction
Making sense of innovation
Forms of innovation
Product innovation
Service innovation
Mini Case: NHS Direct
Mini Case: Netflix
Process innovation
Mini Case: Federal Express
Types of innovation
Radical innovation
Mini Case: The Intel 4004 Microprocessor
Incremental innovation
Mini Case: Automatic Washing Machine
Modular innovation
Architectural innovation
The value of an innovation typology
Case Study: Power-by-the-Hour
Further reading

3 Theories of Innovation
Introduction
The macro perspective: theories of technological change
Mini Case: Thanks, Gutenberg – but we’re too pressed for time
Technological change
Technological paradigms
Mini Case: 15 November 1971
Creative destruction
Mini Case: Creative Destruction in Photography
Kondratiev’s long wave theory
The Information Age: our very own long wave
The implications of a macro perspective
The micro perspective: theories of innovation
Technology S-curve
Mini Case: Screen Wars – CRT versus LCD
Punctuated equilibrium
Mini Case: Carbon Fibre in Formula One
Dominant design
Mini Case: Blu-ray
Absorptive capacity
Case Study: High Fidelity
Further reading
Part 2 What Does Innovation Involve?
4 Sources of Innovation
Introduction
Classifying the sources of innovation
Individuals
Corporations
Mini Case: 3D Printing
Users
The State
Mini Case: My Trusty Little Sunflower Cream
Other sources
Employees
Outsiders
Spillovers
Process needs
Case Study: The Mountain Bike
Further reading

5 The Process of Innovation


Introduction
The steps in the innovation process
Ideas/insight
Mini Case: PARC
Development
Mini Case: Prototyping Public Services
Design
Mini Case: Jonathan Ive
Market evaluation
Production engineering
Market/pilot testing
Full-scale manufacture and launch
Models of the innovation process
Technology push
Demand pull
Mini Case: Iridium
Coupling
Integrated
Mini Case: Lessons from Apple
Network
Mini Case: Connect & Develop
Open innovation
Case Study: The Chilled Meals Revolution
Further reading
6 Value Capture
Mini Case: With the Beatles – EMI and the CT Scanner
Introduction
Appropriability
Appropriability mechanisms
Institutional protection
Nature of knowledge
Human resource management
Mini Case: Motor Sport Valley
Practical and technical means
Mini Case: F117-A Stealth Fighter
Lead time
Complementary assets
Mini Case: Pixar
Revenue generating mechanisms
Outright sale
Renting/leasing
Advertising
Subscription
Usage fee
Brokerage fee
Licensing
Razor and razor blades
Freemium
Case Study: VisiCalc – the ‘Killer App’
Further reading
7 Intellectual Property Rights
Introduction
Intellectual property and intellectual property rights
Intellectual property rights through registration
Patents
Mini Case: BTG Sues Amazon over Tracking Software
Obtaining a patent
Patent agents
What protection does a patent provide?
Mini Case: ‘Floor Wars’
Registered designs
What is a registered design?
Trademarks
Mini Case: Chocs Away
Registration of a trademark
Mini Case: Redwell Sounds Like Red Bull
Intellectual property rights that are inherent
Design right
Copyright
Mini Case: Beatles for Sale
Passing off
Mini Case: Rihanna Wins T-shirt Face-off
Licensing
Case Study: The Anywayup Cup
Further reading

Part 3 How Do You Manage Innovation?


8 Innovation Strategy
Introduction
Mini Case: PJB-100 Digital Music Player
The nature of strategy
Innovation strategy
External routes to innovation
Licensing
Spin-offs
Divestment/demerger
Mini Case: Vodafone
Internal routes to innovation: innovation strategies
First-mover/pioneer strategy
Mini Case: eBay
Follower/latecomer strategy
Mini Case: Google
Niche strategy
Mini Case: JCB
Derivative strategy
Case Study: Route 76
Further reading
9 Technological Entrepreneurs
Closed versus open innovation
Mini Case: Elon Musk – Technological Entrepreneur
Entrepreneurship
Economic
Psychological
Behavioural/processual
Mini Case: Nest Labs
The nature of technological entrepreneurship
Definitions
Industrial sectors
Mini Case: Mark Shuttleworth
Clusters
Academic enterprise – the triple helix
Mini Case: SSTL
Incubators and science parks
Occupational background of technological entrepreneurs
Research technological entrepreneur
Producer technological entrepreneur
User technological entrepreneur
Opportunist technological entrepreneur
Towards a synthesis
Types of technological entrepreneur
Application innovator
Market innovator
Technology innovator
Paradigm innovator
New venture creation
Start-ups
Mini Case: Mike Lynch – Technological Entrepreneur
Spin-offs
The drivers of technological entrepreneurship
Diffusion of knowledge
Staff churn
Improved institutional support
Availability of and access to venture capital
The rise of open innovation
The nature of new technologies
Role models
Mini Case: XenoGesis Ltd
Case Study: Angry Birds
Further reading
10 Funding Innovation
Introduction
Innovation cashflow
The cashflow gap
Founder, family and friends
Mini Case: Cybersense Biosystems
Financial bootstrapping
Mini Case: Lotus
Government funding
SMART awards
Banks
Mini Case: Dragons’ Den
Venture capital
Business angels
Mini Case: Amazon.com
Venture capital firms
Corporate venturing
Initial public offering (IPO): the Alternative Investment Market (AIM)
Accessibility
Admission process
Regulatory regime
Case Study: Oxford Instruments
Further reading
11 Managing Innovation
Introduction
Mini Case: Failed Innovation: The Apple Newton MessagePad PDA
The functions of management
Planning
Project management
Development funnel
Organising
Innovation specific structures
Corporate venturing
Direct integration
Dedicated business unit
New-venture department
Mini Case: Skunk Works
Independent-venture unit
Leading
Leadership roles
Project leader
Product champion
Godfather
Gatekeeper
Mini Case: Akio Morita and the Sony Walkman
Motivational schemes
Bootlegging
Ideas programmes
Mini Case: Dollond & Aitchison
Research clubs
Corporate culture and creativity
Mini Case: Lunar Design Inc.
Mini Case: W. L. Gore and Associates Inc.
Controlling
The stage-gate process of innovation
Conclusion
Case Study: The Advanced Passenger Train (APT)
Further reading
Part 4 Developments in Innovation
12 Innovation Offshoring
Introduction
Outsourcing and offshoring
Mini Case: Les Garagistes
Innovation offshoring
Drivers of innovation offshoring
Changes in corporate innovation management
Mini Case: Project Dulcimer
Increasing specialisation through modularisation
Increased availability and mobility of knowledge
Globalisation of markets for technology
Global innovation networks
Structure
Linkages
Geography
Hi-tech sectors
Mini Case: The ‘Super Guppy’
Governance
Implications
A caveat
Case Study: The Rolls-Royce Trent 1000 Engine
Further reading
13 Green Innovation
Mini Case: PlantBottle™
Introduction
Mini Case: When Nudge Comes to Shove
Definitions
Types of green innovation
Drivers of green innovation
Technology push
Mini Case: The Environmental Cost of Washing
Market pull
Mini Case: Vehicle Excise Duty (VED)
Regulatory push
Barriers to green innovation
Economic barriers
Technological barriers
Institutional barriers
Mini Case: Nottingham’s New Car Sharing Scheme
Business strategies for green innovation
Niche market strategy
Endorsement strategy
Partnership strategy
Case Study: Toyota Prius
Further reading
14 National Innovation Systems
Introduction
The public nature of innovation
National innovation systems
Governance
Mini Case: The National Literacy Strategy
Institutions
Industrial institutions
Mini Case: CERN and the World Wide Web
Mini Case: Oxford Instruments
Science and technology institutions
Financial institutions
Educational institutions
Case Study: Backing Australia’s Ability
Further reading

References
Resources
Index
David Smith
David is Professor of Innovation Management in Nottingham Business School at
Nottingham Trent University. He studied Economics at Lancaster University before
gaining a PhD from Nottingham University. Having previously worked for
Courtaulds and Hitachi, he has been a full-time academic since 1974. Prior to joining
Nottingham Trent University he worked at the universities of Derby and
Northampton.
He currently teaches innovation on a number of modules, including Competitive Strategy & Innovation; Creativity; Innovation & Design; and Exploring Global Markets, as well as two postgraduate courses: Managing Creativity & Design and Contemporary Issues in International Business. David also teaches on Corporate Education programmes, and the organisations he has worked with include Experian, Rolls-Royce, Goodrich and the European Space Agency. He is also
actively involved in supervising research students and has supervised more than 25
PhDs through to successful completion.
David’s specialist field is innovation, and his current research interests include innovation offshoring, green innovation, technology strategy and technological entrepreneurship. He has published in a range of innovation journals, including R&D
Management, Technology Analysis and Strategic Management, Technovation, Local
Economy, Prometheus and International Journal of Entrepreneurship and
Innovation Management.
It is now nearly ten years since the first edition of Exploring Innovation appeared.
In terms of innovation much has changed in that time, so it is timely to provide a
revised and updated edition. The second edition has continued to be popular with
both students and academics, so the basic structure of the text divided into four
sections has been retained. Significantly the fourth section has been renamed since it
now focuses on developments in innovation. This provides an opportunity to look at
some of the biggest changes in the innovation landscape and as a result there are
two completely new chapters dealing with global and green aspects of innovation in
this section.
In terms of specific changes incorporated into this new edition, almost every
chapter is different in some way. Chapter 1 has been extensively revised to make it
both simpler and more focused. There is now greater emphasis on what innovation
involves, with each of the main phases now clearly identified and explained. Slightly different terminology has been used to aid clarity, especially for those new to the subject. Chapter 2 includes more emphasis on both service and process
innovations backed up by up-to-date examples and illustrations. Chapter 3 looks
quite different. This is because the old Chapters 3 and 4 have been combined into
one chapter entitled ‘Theories of innovation’. The first part looks at technological
change as a force for innovation in general, while the second part looks at specific
theories designed to explain particular innovations. Chapter 4 has been simplified and now includes a new source of innovation – the State – reflecting recent research that has highlighted the importance of the State in supporting the development of many of the technologies underpinning well-known recent innovations.
Chapter 6 is the first of the completely new chapters. It focuses on value capture
and highlights the importance of two concepts – appropriability and
complementary assets – that are critical in ensuring effective value capture. The
chapter also explores a range of revenue generating mechanisms noting how the
Internet has led to the emergence of new ways of generating revenue. The main
change to Chapter 7 has been the inclusion of several new Mini Cases drawn from
recent high profile cases in which intellectual property rights have been infringed.
Chapter 8 presents a simplified treatment of innovation strategy with external
strategies revised to include more on open innovation. Chapter 9 on technological
entrepreneurship has been updated and now includes a major new Case Study
featuring a Finnish start-up company set up by three students that developed the
well-known game Angry Birds. Chapter 11 features an expanded section on
innovation cultures that includes greater emphasis on creativity as an important
ingredient in the innovation process. Finally Chapters 12 and 13 are completely new
and as noted earlier they focus on recent developments in innovation associated
with global and green trends.
Probably the biggest changes to strike the casual browser flicking through the
pages of this new edition are the new Case Studies. There are more than 40 new Mini
Cases representing some two-thirds of the total. They have been carefully selected
and written up to be relevant, current, and as far as possible topical. They provide
genuine insights into a range of different aspects of innovation. Many of the main
Case Studies at the end of each chapter are also new. They include Twitter, Power-by-the-Hour, VisiCalc, the Anywayup Cup, Route 76, Angry Birds, Rolls-Royce’s Trent 1000 engine and the Toyota Prius. Many of the diagrams are also new; these have been chosen or specially created to help illustrate and explain many of the concepts associated with innovation. Hopefully they, along with the new Case Studies, will help to rejuvenate and reinvigorate the text.
As ever, my biggest thanks must go to my students, especially those
undergraduate students taking my module: Competitive Strategy and Innovation.
You have not only provided me with the inspiration to revise the text, you have also
contributed through critical comments, valuable insights and not a few interesting
ideas. Thank you. My thanks, too, to my colleagues, especially Paul Garratt, Elaine
Arici and Rupert Matthews. You have not only put up with me, you have also
provided many valuable insights and much needed support. My thanks also to
Maurice Starkey, who continues to draw my attention to potentially useful case
materials. Finally, my particular thanks to Alice Aldous, product developer at
McGraw-Hill, who has done a great job of new product development in making this
third edition of Exploring Innovation a reality.
As in the past, Case Studies that were in the 2nd edition but have been left out of
this one will be made available in the Online Learning Centre (OLC) together with
lecture notes, PowerPoint slides, suggested answers to the case questions and a
bank of multiple choice questions. Finally, I am always pleased to receive feedback
about what are or are not valuable features of the book and can be contacted at:
[email protected].
David Smith
Nottingham, UK
2014
Our thanks go to the following reviewers for their comments at various stages in the
text’s development:

Neil Alderman, Newcastle University


Dr Jean-Malik Dumas, Tilburg University
Dr Hiro Izushi, Aston University
Yvonne Kirkels, Fontys University of Applied Science
Dr Jonathan Lean, University of Plymouth
Fiona Lettice, University of East Anglia
Gideon Maas, Plymouth University
René Pellissier, University of South Africa
Hans Eibe Sørensen, University of Southern Denmark
Ludmila Striukova, University College London

Every effort has been made to trace and acknowledge ownership of copyright and
to clear permission for material reproduced in this book. The publishers will be
pleased to make suitable arrangements to clear permission with any copyright
holders whom it has not been possible to contact.
Objectives
Each chapter opens with a set of objectives summarising what you will learn in it.
Figures and Tables
Each chapter provides a number of figures and tables to help you to visualise
various models of management, and to illustrate and summarise important
concepts.

Mini Case Studies


Throughout each chapter these real life examples will help you to understand
the concepts of innovation more easily and enable you to relate abstract ideas to
actual products.
Case Studies
Each chapter contains a full-length Case Study with questions. These
comprehensive studies demonstrate the material from each chapter and test
your understanding of the key theories and principles covered.

Questions for Discussion


These questions encourage you to review and apply the knowledge that you
have acquired from each chapter.
Exercises
A selection of more detailed questions is designed to cover the material in more
depth.

Further Reading
A selection of further reading is discussed at the end of each chapter, including web links, books and articles.
www.mheducation.co.uk/textbooks/innovation3

Students – Helping you to Connect, Learn and Succeed


We understand that studying for your module is not just about reading this textbook.
It’s also about researching online, revising key terms, preparing for assignments,
and passing the exam. The website above provides you with a number of FREE
resources to help you succeed on your module, including:
▮ Resources appendix: to prepare you for research assignments.
▮ Multiple choice questions: to test your knowledge.

Lecturer support – Helping you to help your students


The Online Learning Centre also offers lecturers adopting this book a range of
resources designed to offer:
▮ Faster course preparation – time-saving support for your module
▮ High-calibre content to support your students – resources written by your
academic peers, who understand your need for rigorous and reliable content
The materials created specifically for lecturers adopting this textbook include:

▮ Tutor Notes to support your module preparation


▮ PowerPoint presentations to use in lecture presentations
▮ Image library of artwork from the textbook
▮ Case Notes with guide answers to case questions, written to help support your
students in understanding and analysing the cases in the textbook

To request your password to access these resources, contact your McGraw-Hill


Education representative or visit www.mcgraw-hill.co.uk/textbooks/innovation
Let us help make our content your solution
At McGraw-Hill Education our aim is to help lecturers to find the most suitable
content for their needs delivered to their students in the most appropriate way. Our
custom publishing solutions offer the ideal combination of content delivered in
the way which best suits lecturers and students.

Our custom publishing programme offers lecturers the opportunity to select just the
chapters or sections of material they wish to deliver to their students from a
database called CREATE™ at

http://create.mheducation.com/uk
CREATE™ contains over two million pages of content from:

▮ textbooks
▮ professional books
▮ case books – Harvard Articles, Insead, Ivey, Darden, Thunderbird and
BusinessWeek
▮ Taking Sides – debate materials
Across the following imprints:

▮ McGraw-Hill Education
▮ Open University Press
▮ Harvard Business Publishing
▮ US and European material

There is also the option to include additional material authored by lecturers in the
custom product – this does not necessarily have to be in English.
We will take care of everything from start to finish in the process of developing and
delivering a custom product to ensure that lecturers and students receive exactly the
material needed in the most suitable way.
With a Custom Publishing Solution, students enjoy the best selection of material
deemed to be the most suitable for learning everything they need for their courses –
something of real value to support their learning. Teachers are able to use exactly
the material they want, in the way they want, to support their teaching on the
course.
Please contact your local McGraw-Hill Education representative with any questions
or alternatively contact Warren Eels e: [email protected]
What Is Innovation?

This part contains the following 3 chapters:


1 Introduction
2 Types of innovation
3 Theories of innovation
Introduction

Objectives
When you have completed this chapter you should be able to:
appreciate the importance of innovation for business and the national
economy
describe the principal phases of innovation
understand the nature of innovation and be able to distinguish between the
exploration and exploitation phases
analyse the factors that trigger exploration
appreciate the part business models play in exploitation
understand the link between innovation and diffusion.

Innovation – what’s the big deal?


We hear a lot about innovation. Innovations, especially technology-based ones like
the iPhone – the first mobile phone to offer a multi-touch interface – attract a great
deal of media attention and public interest. Indeed Atkinson and Ezell (2009: 129)
suggest that most people believe that innovations only comprise ‘shiny new
products’ produced by companies like Apple. Media attention often occurs when the
innovation first appears and the number of users is comparatively modest. Driven
by a heady cocktail of novelty, scarcity and uncertainty a new product or service
becomes newsworthy and much is said and written about it. Perez (2002: 3)
describes this as ‘technological euphoria’. Typically it involves speculation about the
potentially rapid take-up of the innovation and the dramatic impact it is likely to
have on our lives. This is not unconnected to what Naughton (2008a) refers to as the ‘first law of technology’, whereby we, that is the media and the public, tend to overstate the short-term impact of new technology (although he goes on to note that we often understate its long-term impact).
Nor is interest in innovation confined to the general public. Politicians and
policymakers often take a keen interest too. Harold Wilson, Britain’s Prime Minister
in the 1960s and early 1970s, captured the excitement and inherent possibilities of
technological innovation when he famously spoke of the ‘white heat’ of
technological innovation as a powerful modernising force in society. Similarly in the
economic downturn that followed the financial crisis of the late 2000s, politicians,
from Cameron to Miliband in the UK and from Obama to Romney in the US, extolled
the virtues of innovation as some kind of ‘elixir’ (Chakrabortty, 2013) in which
designing and developing new products and services would generate new sources of
wealth and drag Western economies out of the slump. Though this interest is typically fuelled by the same forces of media attention, there is a more logical explanation for politicians’ interest, since, as economists like William Baumol (Dodgson and Gann, 2010: 18) and others have
shown, much of the economic growth that has brought increased prosperity and
rising living standards since the industrial revolution is attributable to innovation,
particularly technological innovation.
Not to be outdone, the financial markets too frequently show much interest in
innovation, so much so that in the right circumstances the value of shares of
innovating companies can power to dizzying heights, making millionaires and even
billionaires of those who backed the innovation (and sometimes those who actually
developed it, too!). For instance, when Twitter, the San Francisco-based firm that developed the now well-known microblogging service, was floated on the New York Stock Exchange (NYSE) in November 2013, its shares, which were initially offered at $26 each, rose on the first day to $44.90, valuing the company, which had never made a profit, at a cool $31 billion. Hence in financial terms some innovations are quite literally a very big deal.
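To see where a headline valuation like this comes from, note that market capitalisation is simply the share price multiplied by the number of shares in issue. As a rough illustration (the share count below is merely what the quoted figures imply, not an official figure):

\[
\text{market capitalisation} = \text{share price} \times \text{shares in issue}
\quad\Rightarrow\quad
\text{shares in issue} \approx \frac{\$31{,}000\text{m}}{\$44.90} \approx 690\ \text{million}
\]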
So if innovation is something that attracts a lot of attention and that lots of
people are interested in and regard as important – what exactly is it? Newness and
novelty are key aspects of innovation, indeed the term is derived from the Latin
word novus, meaning new or novel. However, as Figure 1.1 shows, ‘newness’ can
take many different forms ranging from a completely new product to new uses for
an existing product or simply a product that is new for a firm. This is captured in
Rogers’ (2003) definition of innovation which refers to differences in ‘newness’:

‘An innovation is an idea, practice or object that is perceived as new by an individual or other unit of adoption.’
Rogers (2003: 12)

Figure 1.1 Different forms of ‘newness’

However, there is more to innovation than newness and novelty. An invention is something new, but it is not the same thing as innovation. Atkinson and Ezell (2009:
129) note that notions of newness and novelty are limiting, because innovation is
about much more. In particular an innovation, as well as being novel and new, has to
be ‘a viable business concept’. In other words, innovation is about the development
of something new and its implementation into a viable product that one can
purchase. Only when something new appears on the market so that it can be bought
and sold, can the idea be said to be an innovation. This is captured in the OECD
definition of innovation which defines it as:

‘the implementation of a new or significantly improved product (good or service), or process, a new marketing method, or a new organisational method in business practices, workplace organisation or external relations.’
OECD (2005)
Complicated though this definition may appear to be, in this context the word
‘implementation’ is critically important, because unlike invention, which is primarily
a creative process involving the creation of something new, innovation also
involves turning the idea into a viable product and getting it onto the market so that
consumers can acquire it.
The phases of innovation

Exploration – exploitation – diffusion


If innovation is about the development of new products and services just what does
this involve? To answer this we have to stand back and place innovation in a
broader perspective. Essentially innovation comprises three key elements or phases
of which the first two are much the most important and lie at the heart of what
innovation is about. The phases (see Figure 1.2) that make up the process of
innovation are:
1 Exploration
2 Exploitation
3 Diffusion

Figure 1.2 The phases of innovation: exploration, exploitation and diffusion

Which comes first and what happens in each phase? We will begin by taking an overview of the three phases and their relationship to one another and to the innovation process as a whole. We will then take an in-depth look at what happens in each
phase.
Of the three phases it is exploration that comes first. It is followed by exploitation
and finally diffusion. As its name implies, exploration is an exploratory phase. It is
where innovation begins and it is the most creative of all three phases, one where a mix of qualities is required, ranging from originality, openness, creativity and vision to inquisitiveness, ingenuity, intuition and the ability to improvise. To use a very
clichéd term, exploration demands a capacity for ‘thinking outside the box’, in the
sense that it is vital to break away from the ‘conventional wisdom’ (Galbraith, 1958)
of what the customer wants or one thinks the customer wants. What is needed is to
approach the problem with a broad and open perspective. As Henry Ford is reputed
to have said, ‘If I’d asked people what they wanted they would have said faster
horses.’ There appears to be no evidence that Ford actually said this, but there is plenty of evidence that he did think along these lines (Vlaskovits, 2011).
Exploration involves a search for new ways of doing things, trying out new or
certainly different technologies and finding new ways to meet customer needs.
Exploration tends to be associated with the research part of research and
development (R&D).
The next phase of innovation, exploitation, is concerned not with the search for
things that are new and different but with the commercialisation of potential new
products and services that have been developed into inventions as part of the
exploration phase. In the words of Levinthal and March (1993: 105), exploitation is
‘the use and development of things already known’. Compared to exploration,
exploitation can be regarded as a less creative (though this is still a valuable and
useful quality), and more pragmatic phase. It is very much about aligning the new
product with the requirements of the market and the consumer. What may seem
highly attractive with obvious benefits to technology-minded inventors in the
exploration phase may not be regarded so favourably by bemused potential
consumers and users. Similarly, it is in the exploitation phase that difficult decisions
about how the new product or service will be made and delivered have to be agreed
in order to ensure that at least some kind of profit is the end result.
Diffusion does not actually involve innovation directly. It is concerned with the
rate at which an innovation, once launched onto the market, is taken up and adopted
by consumers. Quite literally, it is about the rate at which an innovation ‘catches on’.
Where new technologies are concerned diffusion is also about the way the
technology spreads to other sectors. Some innovations never catch on. Launched in
a blaze of publicity they fail to appeal to consumers. In the late 1950s Ford’s Edsel
car famously suffered precisely this fate. Despite a great deal of publicity and
marketing effort, when launched in 1958 it failed to live up to customer
expectations. By the time the model was dropped two years later, less than half the number required to break even had been sold, and Ford ended up losing in excess of $350 million on the ill-fated venture, an unprecedented sum at the time.
innovations catch the mood of the time and quickly become a huge commercial
success. Apple’s entry into the mobile phone market with the first iPhone would be a
good example. Hence diffusion describes the rate at which an innovation captures
market share.
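The break-even volume mentioned in the Edsel example comes from the standard relation between fixed costs and the contribution earned on each unit sold. As a rough illustration (the figures below are hypothetical, not Ford’s actual Edsel costs):

\[
N_{\text{break-even}} = \frac{\text{fixed costs}}{\text{unit price} - \text{unit variable cost}},
\qquad \text{e.g.}\ \frac{\$200\text{m}}{\$2{,}800 - \$2{,}300} = 400{,}000\ \text{cars}
\]

Selling fewer than half the break-even volume leaves most of the fixed investment unrecovered, which is broadly the position Ford found itself in.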

Exploration versus exploitation: a matter of balance


A critical issue with regard to the first two phases of the innovation process is the
balance between exploration and exploitation (Chen and Katila, 2008). Too much
emphasis upon exploration presents a real risk that not enough new products and
services will actually reach the market and the enterprise will fail to get a return on
the knowledge and know-how it has created (Levinthal and March, 1993). Failure to
devote sufficient resources to exploitation is likely to lead to the enterprise being a
‘cash burner’. That is to say, it eats up cash spent on R&D, without a sufficient
number of new products reaching the market and generating revenue.
An example of a company that perhaps devoted too much effort to the
exploration phase was Xerox in the 1970s. At its Palo Alto Research Centre (known as PARC), established in Silicon Valley in 1970, Xerox assembled some of the world’s best computer engineers and programmers (Gladwell, 2011), and over the next ten years
they developed what were to become some of the building blocks of the Internet age
in the form of a personal computer with a graphical user interface (i.e. ‘windows’),
control via a mouse instead of a keyboard and the capability to exchange e-mails
through the world’s first Ethernet network. But Xerox failed to capitalise on these
developments. Its first personal computer the ‘Alto’ proved too expensive and only
sold in modest numbers, before Xerox pulled out of personal computing altogether.
As Michael Hiltzik (2000) noted, research projects at Xerox PARC just never seemed
to come to an end.
However, too much emphasis on exploitation can be equally dangerous and is
probably more common. Companies that do comparatively little R&D are likely to
produce innovations that incorporate only very minor improvements. In the process
they may well miss out on the bigger changes taking place, resulting in their being
left behind in the market as their product portfolio becomes increasingly obsolete.
This is especially true of dynamic business environments. Research shows (Benner
and Tushman, 2003) that firms, especially large well-established ones, tend to lean
towards too much emphasis on exploitation. As a result they over-exploit the
characteristics of existing products, leading to innovations that are no more than
enhanced versions of existing products which incorporate only very modest
improvements and enhancements. This is often manifest in increased spending on
marketing and a lack of investment in R&D. In time this leads to obsolescence as a
company’s product portfolio becomes outdated. Failure to adopt new technologies
and new developments quickly leads to them being left behind (Christensen, 1997).
Recently some observers (Baptiste, 2013; Chakrabortty, 2013) have suggested
(perhaps unfairly) that Apple, a company with an outstanding record of innovation,
has begun to go down this route, with a shift in the company’s efforts away from
exploration in favour of exploitation. Some of its more recent tablet computers, like the 4th generation iPad Air, have offered little in terms of increased functionality.
Similarly, criticisms were levelled at its iPhone 5. At the same time the company’s
spending on R&D, which a decade earlier amounted to 10 per cent of sales, had
fallen to nearer 2 per cent. In terms of innovation the company has been described
as ‘treading water’ and ‘concentrating on refining products’ (Chakrabortty, 2013),
rather than launching new products that are noted for their originality and novelty.
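The R&D figures quoted here are ratios of spending to sales, usually referred to as R&D intensity. As a rough illustration (the numbers below are purely illustrative, not Apple’s reported accounts):

\[
\text{R\&D intensity} = \frac{\text{R\&D expenditure}}{\text{sales revenue}} \times 100\%,
\qquad \text{e.g.}\ \frac{\$4\text{bn}}{\$180\text{bn}} \approx 2.2\%
\]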
Why do some firms tend to ‘over-exploit’ and get the balance wrong by
emphasising exploitation at the expense of exploration? Chen and Katila (2008: 199)
suggest that over time there is a natural organisational tendency towards
exploitation. They put forward three main reasons for this tendency:
▮ firms become set in their ways with well-established routines tending to favour
the familiar rather than the unknown
▮ the application and implementation of process management techniques aimed at
improving efficiency and quality (e.g. TQM) can be at the expense of
exploratory work
▮ emphasis upon short-term financial performance.

The result can all too easily be an imbalance with exploitation taking priority and
resources at the expense of exploration. Large, established firms in mature
industries are particularly prone to this. Indeed research by Christensen (1997) has
shown that this leads to large incumbent firms being poor at adopting
transformative new technologies (there are some notable exceptions), compared to
small start-up companies, with the result that they get left behind and ultimately exit
the industry.

Exploration
Exploration is closely associated with invention – that is, the development of new
artefacts that perform a specific function. Inventions and the inventors who develop
them get a lot of attention. We hear a lot about inventors in particular and what is
often portrayed as their ‘heroic’ struggle to build and refine their invention.
Typically this process involves much experimenting with variations and
refinements, some of which prove ultimately to be blind alleys. Notable examples of
famous inventors who struggled in this way would include Frank Whittle, the inventor of the jet engine, and Thomas Edison, the inventor of, among other things, the light bulb and the phonograph (an early music player). And yet invention is only
part of the innovation story. In the exploration phase invention is typically preceded
by some kind of trigger event that fires the inventor to start experimenting. The
three main types of trigger event (see Figure 1.3) are:
1 Idea generation
2 Scientific discovery
3 Technological breakthrough.
Figure 1.3 Three triggers to innovation

Idea generation
Idea generation is essentially a cerebral process whereby individuals conceive
something new. It is often described as involving a ‘eureka’ moment or an ‘aha’
moment, where an individual suddenly has a moment of inspiration leading to an
idea for a potential innovation. In reality the moment of inspiration may well be a
collective one when a group working together conceives something new.
Idea generation is typically the product either of a very deliberate and intentional
process designed to initiate something new, or of a much less deliberate and
informal process, where the idea emerges in a much more haphazard, even
accidental, way. Where it is the former, then idea generation is likely to be a formal
and relatively structured process, involving not one individual but a group
comprising a number of individuals, and involving the application of one or more so-called ‘creativity’ techniques that can be used to assist in generating new
ideas. Some of the better-known techniques are brainstorming, nominal group
technique and mind-mapping. Brainstorming is widely used. It relies on bringing a
group of people together in order to try and think up new ideas. The basic notion is
to be non-judgemental and produce a stream of possible ideas in a short space of
time without any kind of assessment of their value. It was precisely this technique
that gave rise to the microblogging site Twitter. In March 2006 the board members of
the San Francisco-based podcasting company Odeo held a day-long brainstorming
session at which they came up with the idea of a microblogging service in the form
of a social network based on text messages of no more than 140 characters. Thus
Twitter was born.
However, while formal processes for idea generation are often a feature of large
corporations, elsewhere idea generation tends to be more informal and more
personal. Where this occurs, idea generation tends to take place as part of one of
three possible scenarios in which the process may be described as:

▮ problem-related
▮ associative
▮ serendipitous.

With the first of these, some kind of problem or bottleneck tends to be the trigger
that generates an idea for a potential new product or service that will help in solving
or ameliorating a problem. There is rarely anything formal about this. Individuals
may be confronted with a problem that has persisted for a long time, and it is often a sense of frustration that motivates them to think of possible ways around the
problem, leading to a moment of inspiration when they conceptualise a potential
solution. A film about the American inventor Bob Kearns, entitled Flash of Genius,
tells the story of just this kind of sudden moment of inspiration. In 1962 Kearns was
driving his Ford Galaxie through the streets of Detroit. It began to rain, not heavily
but just enough to make it difficult to see through the windscreen. In those days
wipers had just one setting, they were either on or off, and the noise of the wipers
screeching across the screen led Kearns to have his moment of inspiration. He
suddenly thought, ‘Why can’t a wiper work like an eyelid? Why can’t it blink?’
(Seabrook, 1993). Thus was born the idea for a wiper that mimics the way the human
eye blinks intermittently. This led him to develop, in the basement of his house, the
world’s first intermittent windscreen wiper, one that wipes the glass not
continuously but about every 4–5 seconds. Today such wipers are standard on
virtually every car.

Mini Case: Workmate


Ron Hickman, who came originally from South Africa, was a design engineer.
In the 1960s he worked first for Ford designing cars like the Ford Anglia and
latterly for Lotus where he designed the iconic Lotus Elan sports car. The
latter featured a unique steel backbone chassis, which was originally used to
test the suspension but which became part of the final product because its
rigidity gave the car outstanding handling. However, Hickman’s spare time
was devoted to do-it-yourself (DIY) activities which were popular in the post-
war era. It was this hobby that gave him the idea for his invention of the
‘Workmate’, a portable workbench that also served as a sawhorse. The idea
came to him when he cut up a large piece of plywood, using a chair for
support, only to find to his irritation that he had damaged the chair beneath.
Frustrated he set about designing, for his own use, a compact foldable and
portable bench, which could be set up anywhere and taken easily from room
to room.
His design drew on his automotive experience. The work surface
comprised two lengths of wood some 70 centimetres long, which could be
moved up to 25 centimetres apart as on a vice, to grip large objects when
sawing or drilling. This was mounted on a lightweight frame made of
aluminium alloy and steel. Ever resourceful, Hickman at one stage even
raided the Lotus parts bin for bits of Elan suspension to construct a prototype
(Mallett, 2011). A key feature was that when not in use the workbench could
be folded away so that it would fit in the boot of a car or be placed out of the
way by hanging it on a wall.
Faced with the high cost of manufacturing and marketing his design,
Hickman tried to interest the American power tool manufacturer Black &
Decker in licensing it and selling it under their brand name. They turned him
down, as did seven other companies. Hickman wisely patented his idea and
decided to manufacture the bench himself. In 1967 he left Lotus, remortgaged
his house (Mallett, 2011) and started trading as Mate Tools Ltd. Over the next
four years about 25,000 units were made and sold. Then, in 1972, Black &
Decker came back, requesting that Hickman grant them a licence to the
production rights of a new slightly modified design, the Mark II Workmate,
which they would manufacture in the UK. Black & Decker finally started
production in the US in 1975, after a great struggle to convince the parent
company of its potential. The rest, as they say, is history, with over 70 million
Workmates sold around the world (Buckley, 2011) by the time of Hickman’s
death in 2011.
Source: Buckley (2011); Mallett (2011).

Associative processes are similar in as much as they too tend to revolve around an
individual rather than a group. Again they are informal rather than structured. In
this case, however, it is the sight of one device performing a particular function in a
particular way that provides the trigger. Seeing one device at work leads an
individual to consider how the operating principles or the technology could be
transferred to a different context. This is the moment of inspiration when the
individual concerned realises that the technology can be adapted to serve a different
purpose. There have been many instances of this kind of process at work. A good
example is Karl Dahlman. Dahlman was a Swedish engineer whose insight came
from seeing a commercial hovercraft ferrying passengers about. It led him to
wonder if the hover principle could be applied not to something for ferrying people
about, but to a humble lawnmower! Dahlman could see that potentially hovering
could have many advantages for cutting grass. It would be easier to cut right up to
the edge of a lawn and it would be much easier to go up and down slopes. Hence he
had a go at building a small lightweight mower that dispensed with the conventional
roller, wheels and cutter, and instead utilised a small fan located above spinning
cutting blades. The fan blades generated a cushion of air that lifted the mower above
the surface of a lawn. The resulting hover-mower glided on a cushion of air that
made it easy to push around and to access narrow spaces and steep gradients.
Dahlman called his new mower a ‘Flymo’ and it very quickly proved popular both
with homeowners and landscape contractors.
Finally, serendipitous processes are even more haphazard and informal. They
rely entirely on chance, especially chance events that have nothing to do with
innovation. Quite literally a chance event gives rise to a spontaneous moment of
inspiration. There is no planning, no structure and normally no connection to the
context in which the resulting new product will be used. An example is the Swiss
engineer, Georges de Mestral. His moment of inspiration came from taking his dog
for a walk in the woods. The dog came back with many tiny seed pods stuck to its
fur. Curious, de Mestral looked at the fur under a microscope and discovered that
the pods were covered in microscopic loops that hooked on to the fur. This led him
to devise a fabric made up of tiny hooks and loops and thus was born the fastener
Velcro.
In all three instances idea generation relies heavily on the creativity of the person
or persons who have the idea. Creativity is not a function of qualifications. Training
may help. It is no coincidence that many of those who came up with ideas are
engineers, since they are trained to look for and solve problems. They do tend to be
curious individuals, willing to challenge and question existing practices and they are
typically receptive to new ideas and new ways of doing things. It is worth noting,
however, that idea generation is also a function of circumstances. It is often the
context in which individuals find themselves that motivates them to come up with something new.

Scientific discovery
Science is concerned with the systematic acquisition of knowledge, specifically knowledge acquired through observation and experimentation in order to understand and explain natural phenomena. Advances in scientific knowledge lead
to discoveries about the properties of natural phenomena and these can have uses
and applications in a wide range of fields. The most obvious tend to be in medicine and healthcare, but applications in everything from agriculture to manufacturing and services can be just as important. Indeed, recent advances in information and
communications technologies (ICT) owe much to scientific advances in relation to
materials that have facilitated the miniaturisation of everyday devices like the
phone, the music player and of course the computer. Since these new applications
are effectively innovations, scientific discoveries can be a very important trigger for
innovation.

Mini Case: Compound: UK92480


A good example of a scientific discovery that led to an innovation is a
chemical compound that began life as Compound: UK92480. In time it went on
to become the fastest-selling drug of all time (Jay, 2010). Intended as a new treatment for angina, a heart condition that constricts the blood vessels supplying the heart, the drug, whose clinical name was sildenafil citrate, proved disappointing in trials. The scientists at Pfizer’s laboratories
in Kent (an area to the south of London) were about to abandon further trials
when men among the trial volunteers reported an unusual side effect. Senior
scientist at Pfizer, Chris Wayman, was asked to investigate what was
happening (Jay, 2010). He found that in men Compound: UK92480 had the
effect of relaxing the muscles of the penis, leading to a restoration of erectile
response. With no effective treatment for erectile dysfunction available
hitherto, here quite by accident was a way of treating impotence in men for
the first time. Launched in 1998, Compound: UK92480, renamed Viagra, very
quickly became one of the world’s most prescribed drugs.
Source: Jay (2010).

Although science is systematic and methodical, scientific discoveries and the resulting applications are often serendipitous, with innovations occurring by chance
rather than through a deliberate and deterministic quest. The drug Viagra is a case in
point, but it is not alone. Antibiotics which were to transform the treatment of
bacterial infections in the second half of the twentieth century, were the result of the
chance discovery of penicillin by the Scottish biologist Alexander Fleming. When he
returned from a summer holiday, Fleming noticed that the spores of a fungus accidentally blown in by the wind had killed off bacteria in a culture dish, and this led to one of the most significant medical advances of all time. The microwave
oven had a similar origin. Percy Spencer at Raytheon discovered the heating
properties of microwaves when he noticed that a chocolate bar in his pocket had
melted while he was standing next to a magnetron, a vacuum tube used to generate
microwaves.

Technological breakthrough
A breakthrough is an advance or improvement resulting in a product or service that
enjoys significantly improved performance. A technical or technological
breakthrough is one that involves the application or development of technology to
create something that advances capability or technique leading to improved
performance. Technological breakthroughs tend to be the product of human
ingenuity, where someone devises a better and more effective way of performing a
particular action or activity. Historically, ingenuity has been exercised by ‘tinkerers’,
men or women who liked to examine the working of a particular product in the hope
of finding a way of improving its performance. Quite often the product of tinkering
has been a very small improvement in performance, but this can be a cumulative
process leading over time to very big improvements in performance. There is
significant overlap between ideas and breakthroughs. However, breakthroughs tend
to be the product of purposive activity where there is the deliberate intention of
achieving a breakthrough that will dramatically improve performance. As the name
implies, this typically involves the application of technology.
An example would be the development of the flint axe or Henry Ford’s
development of the moving assembly line. The former, through the addition of a
wooden handle, greatly improved the capability and proficiency of sharpened flints
as instruments for cutting. The latter, through the use of overhead conveyors
modelled on those found in Chicago’s slaughterhouses and abattoirs, greatly
reduced the time taken to assemble a car thereby dramatically reducing the cost.
Neither of the examples used above relied on advances in science. Rather it was
the application or combination of things that were already known and already
available. This tends to be a feature of technology breakthroughs and one of the
things that differentiates such breakthroughs from scientific discoveries.

Invention
Ideas, no matter how great, have to be turned into something that actually works.
This is where invention comes in. Invention is a key part of the exploration phase: it is where ideas are turned into workable artefacts and made to work in practice. As noted earlier, if the product or service in question is technological, this is a
phase that is typically characterised by much experimentation. The function of the
experimentation is to prove the concept and arrive at something that is workable. If
the innovation is a modest one, with comparatively little novelty, there may be little
or no experimentation; nonetheless there may be a considerable amount of technical
work to undertake in order to arrive at a product with the desired attributes.
There are three models that describe how invention can be undertaken. The first
is what one might describe as the classic model of invention, where a lone inventor
toils away on his or her own. In this instance the inventor is portrayed as a ‘heroic’
figure, battling against the odds, isolated, lacking support and short of resources.
Although examples regularly appear on the BBC TV programme Dragons’ Den, this
model is actually comparatively rare. However, there continue to be a small number
that have a big impact and attract a high public profile. The Internet search engine
Google, for instance, was established on the basis of data mining software
developed by its founders, Larry Page and Sergey Brin (Vise, 2005), while they were
postgraduate students at Stanford University in California. Similarly, the bagless
vacuum cleaner was developed by James Dyson (Dyson, 1997) working on his own
in the garage of his home near Bath. In both cases the invention was the product of
individual effort.

Mini Case: Menlo Park


The American inventor Thomas Edison (1847–1931) was probably the most
prolific inventor of all time. At one point he held over 1,000 patents and some
of the things he invented did not merely become very successful innovations,
they led to the creation of major new industries including electric light,
electric power generation, sound recording and motion pictures. In a variety
of different ways they might be said to have transformed the lives of millions
during the course of the twentieth century.
It is therefore perhaps not surprising that Edison is typically portrayed as
the archetypal inventor, that is, an individual who heroically struggles
through successive experiments and trials to eventually come up with a
wondrous new product. This perspective of Edison owes much to his own
pronouncements, not least a quote from an interview he gave to the magazine
Harper’s Monthly, in which he claimed that ‘genius is one per cent inspiration
and ninety-nine per cent perspiration’.
Though often portrayed as a heroic lone inventor, in fact one of Edison’s
most important and valuable innovations could not be patented and had more
to do with collective rather than individual effort. This was the creation of the
world’s first industrial research laboratory. Established in 1876 on a 30-acre
site at what was then the tiny village of Menlo Park in New Jersey about 50
miles south of New York, it comprised a main laboratory linked to a machine
shop, together with a glasshouse, carpenter’s shop, a carbon shed and even a
blacksmith’s shop. An office and library were added later along with a house
for Edison and his young family. Here in the space of six years Edison
patented some 400 inventions, including the incandescent electric light bulb
and the phonograph (the world’s first audio/music player). This prodigious
burst of creative output was the product not of one individual but of a close-
knit team. At Menlo Park for the first time, Edison brought together specialists
with a range of knowledge, skills and expertise including scientists, engineers,
machinists, designers and others. It was their collective effort, rather than that
of one heroic individual, combined with a process of systematic
experimentation, that produced a succession of inventions, patents and
ultimately innovations. All of these specialists contributed in different ways to
the technological breakthroughs that were eventually patented in Edison’s
name.
Edison himself referred to Menlo Park as his ‘invention factory’. It became
a model for large business corporations throughout most of the twentieth
century, where technological breakthroughs became the source of many of
the most important innovations of the period. Among the most spectacular
examples of this institutional innovation was AT&T’s Bell Labs. At its peak it
employed some 25,000 scientists, engineers and technicians. Gertner (2013),
harking back to Edison’s Menlo Park, called Bell Labs the ‘Idea Factory’, not
least because of its outstanding output of scientific discoveries and
technological breakthroughs which included the transistor, the laser, the
UNIX operating system and the programming languages C and C++, along
with seven Nobel prizes.

Despite these high-profile examples of individual inventions, this model has
generally given way to a corporate model of invention, where corporate research
and development (R&D) facilities in the form of R&D laboratories are the main
engine of invention. By the mid-twentieth century most large corporations like
DuPont, IBM and AT&T were using this model (Chesbrough, 2003: 35), which
originated with Thomas Edison and his Menlo Park industrial research lab, as the
basis for developing new products. These firms possessed massive R&D facilities
and they competed by the simple expedient of doing more R&D than anybody else in
their industry.
Because most of the activities associated with invention take place within a
single vertically integrated organisation, as do the associated exploitation activities,
the corporate model has in recent years come to be termed the closed model of
innovation (Chesbrough, 2006a).
Latterly, however, we have come to see a new model emerge. This stands in
marked contrast to the closed model of innovation. The open model of innovation
(Chesbrough, 2006a) recognises that invention is not only the product of corporate
research labs. While this internal source remains important, there are external
sources which can be equally important today. These external sources include other large
corporations which, having developed new technologies, decide not to
commercialise (i.e. exploit) them. Having no immediate and obvious use for the
technology themselves, they license the technology to others who are willing to turn
the technology into new products. Another external source is small, entrepreneurial,
hi-tech companies. Perhaps formed as spin-offs of universities or other companies,
these highly specialised companies possess knowledge and expertise in very
narrowly defined fields. These enterprises may not have the facilities to achieve
outstanding technical breakthroughs, but they do have what much innovation
demands, namely the capability to adapt the technology to a particular, highly
specialised application. This specialisation, combined with a high degree of
flexibility, enables these companies to produce potential commercial applications
that can then be exploited in collaboration with large corporations.
It is important to note that with the open model of innovation, while inventions
can come from outside the organisation, nonetheless much innovation activity is
normally carried out internally. Such is the flexibility of open innovation, in contrast
to closed innovation, that the invention may come about via the internal route, only
for the external route to be used for the exploitation phase as a third party exploits
and commercialises the invention.

Exploitation
Inventions, whether they are the product of ideas, discoveries or breakthroughs, and
however much interest they attract from the technological community or on occasion
the public, are actually of only limited value in themselves. They may be dramatic,
they may be exciting, they may be the product of a great deal of hard work, but they
only release value when consumers start buying them. A way has to be found to
transform the ‘technological potential’ of an
invention into economic value. The essence of the exploitation element of
innovation is to find an appropriate way to unlock what Chesbrough (2006a: 64)
describes as the ‘latent value’ of a technology in order to generate real value.

Business models
There are potentially many ways to exploit an idea, a discovery or a technology,
though in reality only a very small number of them are likely to succeed.
Exploitation mechanisms are increasingly described as business models. The rise of
the Internet and the so-called ‘dot-com bubble’ has fuelled interest in business
models, partly because a number of new models have emerged and partly because
the business model is perceived to be the key to unlocking the new opportunities
created by the Internet.
What do we mean by a business model? Essentially a business model is an
enabling device, that is, a tool that allows inventors to profit from their ideas and
inventions. How does a business model enable innovation? According to Chesbrough
(2006b: 108) a business model performs two important functions as far as the
exploitation of an invention is concerned. These functions are: value creation and
value capture (Figure 1.4). Value creation refers to a series of activities that enable
the user to recognise the benefit and hence the value that he or she can gain from
the invention. With new technologies in particular, this is a vitally important
function, because the technology may be the product of a scientific breakthrough
rather than a specific quest to meet a user need or solve a user problem. What the
business model has to do is first identify the users to whom the innovation is going
to be of use and then articulate the value proposition so that users are aware of its
purpose and the benefit they can expect to derive. Only when the user recognises
the benefit to be gained from a new offering is he or she likely to be willing to
consider purchasing it. No matter how enthusiastic the inventor, unless the
articulation of value is effective, potential users will not be interested.

Figure 1.4 Exploitation: the function of business models

An example of an innovation that suffered from poor value creation was the
Sinclair C5. Launched in a blaze of publicity, its creator, Clive Sinclair, heralded it as
an electric car that would transform urban transportation. When it appeared on the
streets, however, it was clearly not a car. It was in fact a single seat electric tricycle,
powered by a modified washing machine motor and car battery. With an effective
range of 6.5 miles and a top speed of 12 miles per hour it bore little resemblance to a
conventional car (Anderson and Kennedy, 1986). It was not clear at whom the C5
was aimed or what purpose it was designed to fulfil: it lacked the range required for
all but the shortest of journeys, while as a leisure vehicle it lacked the capacity to
take passengers. In short, its business model suffered from poorly articulated value
creation and, unsurprisingly, it proved a commercial flop.

Mini Case: HP Sauce


Today, some 28 million bottles of HP sauce are sold every year, making it
Britain’s best-selling brown sauce, with a UK market share approaching 75 per
cent. The vinegar-based sauce was first invented in
the last decade of the nineteenth century and made use of exotic spices, such
as cayenne pepper and tamarind, which had only recently begun to be
imported from India. The spicy mix proved popular with Britons used to a diet
of meat and two veg, and HP sauce bottles became a permanent fixture on the
dining tables of both homes and greasy spoon cafés throughout the country. It
was even exported, mainly to expatriate communities across the empire,
although some found its way to the United States, where reputedly American
actor Tom Hanks is a fan. The American connection is not quite as surprising
as it might seem for something so distinctly British, as the sauce, which is
actually produced in the Netherlands, is today owned and marketed by H J
Heinz, the American baked bean multinational, who bought it in 2005 from the
French food giant Danone.
And yet its inventor, Frederick Garton, a Nottingham grocer, did not share
in the sauce’s success. Garton developed the recipe for the sauce in his
workshop at the back of his home at 49 Sandon Street in Basford, a suburb of
Nottingham, in 1894. Originally marketed as ‘F C Garton’s Sauce’, Garton
shrewdly registered the more memorable name HP as a trademark in 1895,
having apparently heard that his sauce was being used in the restaurant of the
Houses of Parliament. But four years later in 1899, he agreed to sell the name
and the recipe to Edwin Moore, a travelling salesman, for £150, apparently in
settlement of a debt. Renamed ‘Gartons HP Sauce’ in 1903, its new owners, the
Midland Vinegar Company of Aston in Birmingham, not only built large new
production facilities, they also invested heavily in advertising and promotion.
As a result, HP sauce established a firm place in the market, a position that
despite changing tastes it has retained to this day. Garton meanwhile
remained a small provincial grocer.

The second function of a business model is value capture. This involves
appropriating value from the activities undertaken by the innovator. In this context
the term ‘appropriating’ means extracting or obtaining. The value that the innovator
typically hopes to gain is revenue (i.e. money), though there could be other gains as
well. The most obvious way of generating revenue is through outright sale where the
consumer exchanges money in return for ownership of the product or service, but
there are a variety of other methods of generating revenue including: renting,
charging by transaction, advertising, subscription and charging for after-sales
support (Chesbrough and Rosenbloom, 2002). However, it is rarely straightforward.
If the innovation is very different from existing product offerings, consumers may
be reluctant to pay using conventional revenue generation mechanisms used in the
industry. Furthermore those working in the industry may be reluctant to break from
the revenue generation mechanisms with which they are familiar. In Chesbrough
and Rosenbloom’s (2002) terminology they have a cognitive bias to existing
mechanisms. Yet in order to capture value effectively it may be essential to break
with the familiar and provide a different revenue generation mechanism. There is
also the danger that some other party will appropriate the value by copying the
innovation. Taking appropriate steps to protect the inventor’s intellectual property
(e.g. through patents) is essential to avoid this. It is precisely these sorts of issues
that exploitation has to tackle and that the selection of an appropriate business
model aims to solve. For this reason revenue generation mechanisms are a key
feature of business models.
Apple’s iPhone provides an example that illustrates some of the issues surrounding
the choice of revenue generation mechanism. When Apple brought out the iPhone in
January 2007, it was applauded as a highly innovative product and much praised for
the originality of its design. However, the
iPhone is not just another mobile phone. According to Naughton (2008b) the iPhone
is essentially a handheld computer which supports the powerful and widely used
Unix operating system. This transforms the phone from a specialised gadget into
something much more versatile. An important factor in boosting iPhone sales, and
therefore Apple’s revenues, is the wealth of software applications written and
developed by third parties. Anyone can write programs for the iPhone and once
approved by Apple they are available from the ‘Apps’ branch of the iTunes store.
The result has been what Naughton describes as an ‘explosion of iPhone
applications’ (Naughton, 2008b). The App Store supplied more than 60 million
downloads in its first month. Not only did this earn the developers of these
applications $70 million, it also provided Apple with a useful $30 million (Apple
retains 30 per cent of revenues, with the remaining 70 per cent going to
developers). More importantly, however, the presence of this
wealth of additional, complementary material greatly boosted the ‘value proposition’
that the iPhone represents. Hence as a revenue generation mechanism, the iPhone
has become a platform that other firms are prepared to invest in. This costs Apple
nothing, but from a user’s perspective it boosts the value proposition that the
product represents. And it would be wrong to see this as just a happy accident.
Apple’s revenue generation mechanism is predicated on taking deliberate steps to
help and assist external organisations in developing new software applications – and
therefore new uses – for the iPhone.
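A quick arithmetic check, inferred from the figures above rather than taken from the source, helps put these numbers in context: $70 million to developers plus $30 million to Apple implies gross App Store revenues of roughly $100 million in that first month, split 70 per cent/30 per cent in the developers’ favour.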

Mini Case: ARM Holdings


Acorn Computer was a British computer firm that was among the first to
develop a commercial Reduced Instruction Set Computer (RISC) processor or
chip. The RISC chip is a central processing unit (CPU) that exchanges
versatility for processor speed. Essentially the CPU executes a reduced
number (i.e. set) of commonly used instructions very fast, thereby enhancing
the overall speed of the processor. Hitherto CPUs had employed a Complex
Instruction Set Computer (CISC) chip that got the hardware of the CPU to do
as much as possible per instruction. RISC technology operates on a quite
different basis with simple instructions that get the CPU to do less per
instruction.
To develop its RISC technology Acorn decided to create a spin-off
company by forming a joint venture, ARM Holdings, with Apple Computer of
the US, which was keen to use the new technology in its Newton notepad.
Unlike other chip manufacturers such as Intel and Motorola, ARM Holdings
chose to exploit the new technology in a very particular way. It became, in the
words of its managing director Robin Saxby, ‘a chipless chip company’
(Garnsey et al., 2008: 217). By licensing, rather than manufacturing and
selling, RISC chip technology, the company established a new business model
that redefined the way in which microprocessors were designed, built and
sold. Licensing meant that ARM Holdings could focus on design work as a
core activity, leaving others to undertake manufacturing. It also enabled ARM
Holdings to quickly establish a market presence that in turn enabled the
company to exercise a very powerful influence over the sorts of
microprocessor used in a variety of consumer products including automotive,
entertainment, imaging, security and wireless applications. Among the
everyday items using ARM Holdings’ RISC technology are mobile phones,
digital cameras, DVD players, smart cards, set-top boxes, SIM cards, scanners
and desktop printers. Some 80 per cent of the mobile phones shipped
worldwide utilise ARM technology. All this from a company that makes
nothing, preferring instead to license its technology.
Among the companies who are licensees of ARM technology are such
household names as Motorola, Philips, Sharp, Sony and Texas Instruments, as
well as a large number of specialist manufacturers of computer peripherals
and similar devices. ARM Holdings now has a turnover of £250 million and
employs more than 1,650 people in design centres in the UK, France and the
US.
Sources: Afuah (2003); Garnsey et al. (2008).

Value creation and value capture are key aspects of business models. Just as there
are a number of revenue generation mechanisms to facilitate value capture, so there
are a number of generic business models. The three business models (see Figure 1.5)
identified by Chesbrough (2006a: 63) to enable firms to convert technological
potential (i.e. inventions) into economic value are:

1 incorporate the technology into the current business
2 license the technology to a third party (i.e. another firm)
3 launch a new venture to exploit the technology in new business arenas.
Figure 1.5 Three types of business model

These models determine who undertakes the exploitation phase of innovation and
how it is conducted. They are each in their own way mechanisms for bringing an
invention to market. Examples of the first model are very numerous, because
incorporating a new technology into the existing business is the normal and logical
step for a company that has taken the time and effort to develop something new.
When Apple Computer developed
the iPod in 2001, for instance, the company was very clear that, although audio
players were not part of the company’s existing product portfolio, nonetheless this
innovation would be incorporated into the existing business and sold as an Apple
product.
Examples of the second business model are probably much more numerous than
most people imagine, simply because many large companies receive a significant
income from royalties resulting from the licensing of technology.
An example of an innovator who initially went down this route is James Dyson,
who, having developed his dual cyclone technology for vacuum cleaners, decided
that the best way to exploit his invention was a business model that entailed
licensing it to an existing vacuum-cleaner manufacturer. Unfortunately this proved
extremely difficult. While some vacuum-cleaner manufacturers appeared at least to
see the technological potential in Dyson’s dual cyclone technology, they were too
firmly committed to a business model in which revenue from replacement vacuum
cleaner bags played an important part in capturing value to switch to one based on
incorporating Dyson’s new technology into their own business (Dyson, 1997: 134).
Hence Dyson, having generated some income from a small-scale licensing
agreement with a Japanese firm, eventually opted for a business model in which he
created a new venture. Called Dyson Appliances Ltd, the new venture eventually
became a multi-million pound business (i.e. the third business model). Another
example of a firm that went down this route was the British automotive component
manufacturer LucasVarity. Having developed a revolutionary new electric power
steering system for cars, it chose to form a new venture, a joint venture with the
US-based automotive component supplier TRW, in order to market the new
technology to the world’s leading car manufacturers.
The significance of business models for the innovation process can be gauged by
the fact that the same technology taken to market through different business models
will yield different amounts of value, a fact that many would-be innovators have
failed to appreciate.

Diffusion
A key aspect of innovation is the way in which new products and services come into
use, that is to say are adopted by potential users. Indeed innovations that fail to
catch on effectively fail to be innovations. Diffusion is the term used to describe the
rate at which innovations are adopted by consumers/users and come into general
use. Successful innovations are generally judged to be ones where the process of
diffusion is relatively rapid and widespread with the innovation proving popular and
widely used. Unsuccessful innovations suffer from very limited diffusion, being slow
to catch on and never gaining a wide following (see Figure 1.6).

Figure 1.6 Possible paths of diffusion

Source: Leonard-Barton, D. (1982).

One of the leading figures in research into diffusion has been Rogers (2003) who
postulated a model of diffusion in which the rate of diffusion follows an S-curve
(Geroski, 2000: 604). This has diffusion entering a trajectory that begins very slowly,
then enters a period of rapid acceleration as the innovation becomes popular, before
levelling off as saturation occurs, leading finally to decline as maturity sets in.
Rogers (2003), whose research focused on innovations in agriculture, especially the
introduction of new strains of wheat and other crops, highlighted ‘social’ factors as
particularly important in explaining the shape of the diffusion trajectory. Hence
things like the psychological attitudes of would-be users (Dodgson et al., 2008) can
exert a powerful influence on their willingness to adopt a particular innovation.
Rogers (2003) highlighted the importance of social networks and word-of-mouth as
influencing decisions to adopt, as well as things like peer pressure and fashions.
Hence one finds that, contrary to what one might expect, technological factors are
not necessarily of prime importance in determining the rate of diffusion. They can
all too easily be outweighed by social factors, as has become apparent in the
Internet age where social factors, such as those outlined above, have often been
critical especially in the take-up/adoption of innovations by young people.
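The S-curve that Rogers describes is often approximated mathematically by a logistic function. The short Python sketch below is purely illustrative rather than being drawn from Rogers’s work; the function name and the parameters m (market potential), k (steepness) and t0 (inflection point) are assumptions chosen for clarity. It shows cumulative adoption starting slowly, accelerating rapidly around the inflection point, and then levelling off as saturation approaches.

import math

def cumulative_adoption(t, m=1.0, k=0.5, t0=10.0):
    # Logistic S-curve: m is the saturation level (market potential),
    # k controls how quickly adoption accelerates, and t0 is the
    # inflection point at which adoption is growing fastest.
    return m / (1.0 + math.exp(-k * (t - t0)))

# Illustrative trajectory: slow start, rapid acceleration, levelling off.
for t in range(0, 21, 5):
    print(t, round(cumulative_adoption(t), 3))

Running the sketch prints adoption shares close to zero at the start, 0.5 at the inflection point and values approaching 1 thereafter, tracing out the characteristic S shape.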
Rogers (2003) also noted that the characteristics of innovations themselves can be
highly influential in determining the rate of diffusion. These characteristics help to
explain why some innovations spread more quickly than others, and why others fail
altogether. The characteristics of an innovation that are important in this regard
include its

▮ relative advantage
▮ compatibility
▮ complexity
▮ trialability
▮ observability.

Relative advantage refers to the perceived improvement that an innovation offers
over existing products. Compatibility is the extent to which the innovation is
perceived as being compatible with existing experience and practices. Complexity is
the degree to which an innovation is perceived by potential users as difficult to
understand and use, while trialability is the scope for trying out the innovation on a
limited basis,
thereby reducing uncertainty. Finally, observability is the extent to which potential
users of an innovation can actually see clearly defined benefits arising from the use
of an innovation.
Finally, Rogers (2003) focused on the characteristics of the users themselves as
an important and powerful influence on the rate of diffusion. He identified five
different types of user each of whom played a distinctive part in shaping the
diffusion trajectory of an innovation. The five types are:
▮ innovators
▮ early adopters
▮ early majority
▮ late majority
▮ laggards.
The innovators comprise a very small number of highly enthusiastic individuals.
Idealistic and visionary, they not only have the time and resources to lavish on new
gadgets, they are also eager to pursue things that are new. They are followed by the
early adopters, fashion-conscious trendsetters who like to be seen as leaders. Well
connected, they exert a lot of influence over others and cause the rate
of adoption to accelerate rapidly. Following them are the early majority – more
pragmatic individuals who require solid evidence of benefits. It is the actions of the
early majority that lead to sales peaking. Finally, the late majority and the laggards
make an appearance after saturation has been reached and in both instances are
reluctant converts.
Knowledge of these different groups can be valuable to would-be innovators.
Thus, early adopters are a particularly important group and knowing something
about them can be extremely useful. Not only do they participate at a crucial point
in the diffusion cycle (i.e. when the rate of diffusion is accelerating rapidly and the
S-curve is rising sharply), they are highly influential and very well connected so that
they are likely to exert a powerful influence in persuading others to adopt an
innovation.

What does innovation involve? The attributes of innovation


While concepts like newness and novelty help in explaining what innovation is, and
exploration and exploitation give some sense of the activities involved in
innovating, they do not help us when it comes to explaining how individuals and
companies actually innovate, that is, the practical realities of carrying out
innovation. One could explain this by referring to the systems and procedures that
companies use to turn ideas into marketable products and services, but though such
approaches are an important part of innovation they fail to convey many of the
more general attributes or characteristics of the process of innovation. It is because
of the presence of these attributes that innovation is difficult, and why companies
try to be systematic.
Some of the important features or attributes of the process of innovation are:

1 Uncertainty and risk. All innovations involve uncertainty. In developing
anything that is new one simply cannot be certain whether or not the product
will work or just how well it will perform (i.e. technical risk). The use of
techniques like computer-aided design (CAD) and simulation has helped reduce
some of the technical uncertainty, but one cannot remove uncertainty
completely. And it is just the same where the market is concerned. Consumers
are notoriously fickle. One can predict, but no amount of market research can
guarantee successful sales of a new product. Furthermore, where innovations
are concerned, consumers often do not know they need a product or service
until it appears.
2 Trial and error. Turning an idea into a workable product typically involves
experimentation. One has to experiment with different materials, different
designs and different processes in order to come up with effective solutions.
This means working on a trial-and-error basis, where one only makes progress
by trying out something different. Altering the product very slightly and then
observing the results helps to create a better product. This means not only that
innovation can be a lengthy process, but also that it doesn’t move forward in a
smooth and orderly way.
3 Failure. Dodgson and Gann (2010: 16) observe that ‘most attempts at
innovation fail’. Indeed they go so far as to note that history is littered with
unsuccessful attempts at turning what were often good ideas into new products
and services. Consequently failure, and the disappointment that comes with
failure, is something that stalks all innovations. However, James Dyson, one of
the most successful innovators of modern times, is much more upbeat. In his
view, ‘It’s when something fails you learn. If it doesn’t fail you don’t learn’
(Berger, 2012). Hence failure has a valuable part to play in increasing and
expanding the stock of knowledge.
4 Fits and starts. Textbook accounts of new product development (NPD), which
is how companies typically innovate, present it as a rational and systematic
process, but as Levinson (2006: xii) observes, innovation often occurs in ‘fits
and starts’. New technologies and breakthroughs often lie dormant for a period
and it is only later that someone else comes along and thinks up an application
and adapts the technology. Alternatively, someone takes an existing concept
and works out a completely new use for it. Either way the process moves
forward, but in fits and starts rather than in smooth progression.
5 Perseverance. Given that innovation tends to move forward in fits and starts,
and that failure and disappointments are common, it is perhaps not surprising
that perseverance is a very important quality for those involved in innovation.
Creativity, passion and vision are all important, but probably the most
important thing is not to be put off by setbacks, in other words, perseverance.
Studies of individual innovators very often show that they were individuals who
weren’t easily put off. James Dyson with his bagless vacuum cleaner, Walt
Disney with the world’s first full length animated feature film Snow White and
the Seven Dwarfs (Whiteley, 2012), and Ron Hickman with the Workmate, all
faced a stream of setbacks, delays, humiliations and lack of interest stretching
over many years, but they persevered and eventually produced very profitable
innovations.
6 Collaboration. Many accounts of innovation and even more accounts of
invention present a narrative in which an individual struggles heroically against
tremendous odds. While there is no doubt that individuals often do have to
strive against many hurdles and many setbacks, it should be stressed that
innovation is normally a team effort. Many of the most successful innovations
are the product of two very different individuals working closely together.
Boulton and Watt, Rolls and Royce, and Jobs and Wozniak were all highly
successful partnerships that led to important innovations. What made them
successful was that each partner supplied strengths that the other lacked.
These attributes do not feature in every innovation, but they do convey much of the
general character of the process of innovation. Whether innovation is carried out by
individuals or by large corporations, these attributes are part of the process. They
vary in importance according to the nature of the innovation, but they are usually in
there somewhere, even if only in the background. They are characteristics that
innovators ignore at their peril.

What next?
This chapter has provided a brief overview of innovation. It is clear that innovation
is a complex topic that draws on not only several academic disciplines, including
economics, marketing and engineering, but also different fields of professional
expertise (e.g. scientists, managers and lawyers). Each of the facets of innovation
that this chapter has touched on is now considered in more depth in the chapters
that follow.
Chapter 2 seeks to categorise innovations. It builds on the part of the current
chapter that explored the scope of innovation. However, while this focused on the
newness and novelty of innovations, the categorisation in Chapter 2 is more broadly
based. Two well-known categorisations are presented. One is based primarily on
novelty and complexity while the other is based on functionality. By presenting
different, one might almost say alternative, categorisations of innovation, the reader
hopefully comes both to understand the nature of innovation rather better and to
appreciate the range of things covered by innovation. Chapter 3 presents
frameworks that provide scope for analysing the course of innovation. These
frameworks take the form of a number of theories of innovation. These, it is argued,
can be used, if not to predict the course of innovation, then at least to help explain
it: in other words, why some innovations occur when they do and why some have a
much more profound impact than others.
Chapter 4 focuses on what one might term the creative dimension – that is, where
and how innovations originate, in particular the different sources of innovation.
Chapter 5 provides an overview of the process of innovation. As well as identifying
the wide range of activities associated with innovation (i.e. what you have to do in
order to innovate), it also explores different models of the process. This latter aspect
shows that while there are various activities that have to be undertaken, how they
are undertaken, by whom, and when, can vary enormously.
One of the key messages coming out of the current chapter is that the second
phase of innovation, exploitation, is crucial if an invention is to stand any chance of
commercial success. Exploitation, it was noted, normally requires the use of a
business model that performs two key functions: value creation and value capture.
Value capture is the focus of Chapter 6, while Chapter 7 provides detailed coverage
of intellectual property rights. These rights are crucial if an innovator is to
appropriate and therefore profit from his/her creativity. The other function of a
business model is value creation, which is the focus of Chapter 8 on innovation
strategy. By looking at how organisations and individuals position an
innovation in relation to the market, innovation strategy is closely bound up with
understanding the nature of the value that an innovation creates and for whom.
While value creation and value capture are clearly a vital part of exploitation,
there are other aspects to this phase of innovation. Earlier in the current chapter it
was noted that exploitation covers a range of business activities including
marketing, organisation and finance. With aspects of marketing covered in Chapter
8 as part of innovation strategy, Chapter 11 explores the managerial aspects of
innovation, while Chapter 10 highlights the difficulty of funding innovation and
presents a range of funding mechanisms that can help to bridge the temporal gap
that all too often exists between investment outlay on the development of a new
product or service and the income stream that a successful innovation eventually
brings in. Chapter 10 particularly focuses on the funding of innovation by new start-
up businesses and to complement this, Chapter 9 examines the role of technological
entrepreneurs, those individuals who found technology-based businesses.
Finally, the remaining three chapters provide a policy perspective on current
issues (i.e. developments) in innovation. Chapters 12 and 13 examine global and
green aspects of innovation, while the concluding chapter rounds off by considering
national innovation systems, which comprise a country’s infrastructure, in the
broadest sense, for supporting and stimulating innovation.

Case Study: Twitter


When trading on the New York Stock Exchange closed on 7 November 2013,
shares in Twitter, which were being traded for the first time, closed at $44.90,
some 70 per cent above their initial offering price of $26. This valued the
company at over $31 billion, making Twitter the second largest Internet-
related IPO (initial public offering) by any US company. The IPO valued the
company as worth almost as much as Yahoo Inc., an Internet icon from the
dot.com era, and just below Kraft Foods, the vast grocery conglomerate
founded more than a century earlier. The IPO made instant billionaires of
Evan Williams and Jack Dorsey (Walker, 2013), two of the people who had
played a key role in the project, though the biggest winner was Twitter’s
largest shareholder, the private equity group Rizvi Traverse. Two other key
players in the early stages of the innovation process, Biz Stone and Noah
Glass, meanwhile received nothing, having left the company prior to the float.
The idea for Twitter, the online microblogging/social network site that uses
short messages (i.e. up to 140 characters) called ‘tweets’, sent as text
messages from a phone or via the web, to be posted on an individual’s profile
and sent to ‘followers’, first emerged from a brainstorming session at the
podcasting company Odeo. Faced with a decline
in sales of existing products, key staff at Odeo undertook a day-long
brainstorming session at the company’s offices in San Francisco in 2006, in an
attempt to generate ideas for potential new products and services that the
company could develop. Reputedly it was Jack Dorsey who originally put
forward the idea of a text-based communications platform – by which groups
of friends could keep track of what each other was doing based on their status
updates – to Odeo’s co-founders Evan Williams and Biz Stone. They gave
Dorsey the green light to spend more time on the project and develop it
further, although much of the early development work was carried out by
software developer Noah Glass who is generally credited with coming up with
the name Twitter.
It was Dorsey who sent the first message on Twitter on 21 March 2006. It
read, ‘just setting up my twitter’ (Walker, 2013). However, while the initial
concept of Twitter was being tested at Odeo, the company faced very difficult
trading conditions as a result of Apple releasing its own podcasting platform.
This seriously undermined Odeo’s business model and the founders decided to
buy their company back from the investors. By doing this, they acquired the
rights to the Twitter platform. However, some key members of the Twitter
development team were not brought across to the new company, most notably
Noah Glass, the software developer who had led much of the early development
work. By now Twitter was on the cusp of commercial acceptance and a
big spurt in its growth. The Twitter team had a big presence at the South By
Southwest Interactive Conference in early 2007, and took advantage of the
viral nature of the conference to communicate directly with attendees. The
result was an explosion of Twitter usage, with more than 60,000 tweets sent at
the event every day.
By 2013 Twitter had become what one commentator described as ‘an
indispensable Internet utility’ and had some 230 million users worldwide. It
was against this background that the company decided to go public with an
IPO designed to raise $1.8 billion through the sale of its shares. And yet
amazingly Twitter had never made a profit since it was founded. Indeed, it
had positively bled cash having lost $80 million in 2012 and posted a net loss
of $134 million for the first nine months of 2013.
Source: Walker (2013).

Questions
1 Where was the novelty in this innovation (i.e. what was new about it)?
2 Which financial institutions were interested in the innovation of Twitter
and why?
3 Why was Twitter able to raise a huge amount of money via the IPO
despite having never made a profit?
4 What triggered the start of the exploration phase of the innovation
process?
5 How was the initial phase of exploration conducted and who participated
in this process and why?
6 What was the business model employed to facilitate exploitation of the
idea for Twitter?
7 Who captured a huge amount of value from this innovation and why?
8 Who does not appear to have captured any value from the innovation and
why?
9 What happened in early 2007 to indicate the value created by Twitter?
10 What were some of the key events in the diffusion of Twitter as an
innovation?

Questions for discussion


1 Why do so many biographies of people associated with successful innovations
appear more preoccupied with inventions?
2 What is an innovation?
3 Explain the difference between exploration and exploitation. Which do you
consider the more important and why?
4 Outline the main phases in the process of innovation.
5 Explain what is meant by the term ‘business model’ in the context of innovation.
6 Why is value creation likely to be a problem with innovations that involve a
high degree of novelty?
7 Why is value capture an important issue for innovators, particularly individual
innovators?
8 What personal qualities do you consider are essential for successful innovators?
9 Why is it that innovation is often now a collective activity?
10 Internet-related innovations have been associated with much discussion about
business models – why?

Exercises
1 Prepare a briefing document for an innovation of your choice. This should be no
more than six pages in length and should provide the reader with a clear
understanding of the nature of the innovation, as well as the factors that
account for its success and the important lessons you feel it holds for would-be
innovators. As part of the briefing prepare some slides that will enable you to
make a presentation in class.
To carry out this task you will need to classify the innovation, identify
prospective purchasers/consumers, distinguish product/service features,
identify competitor products/services, etc. Your aim should be to show what it is
about this particular innovation that makes it a good example.
You will need to carry out some basic fact-finding research. The websites of
newspapers such as The Independent and The Guardian can also be used to
locate articles about innovation. Similarly you may find Van Dulken (2000)
helpful as it gives details of a large number of innovations.
2 Prepare a profile of a company that you feel has a strong record of innovation.
As well as providing background details on the company, you will need to
identify examples of successful innovations they have produced. Indicate why
these innovations have been successful and try to identify the expertise and
experience that you feel enabled the company to innovate.
You will probably find that biographies, industry studies and similar sources
will be useful. You will find details of some of these books in the bibliography.
Internet searches may well enable you to find short profiles of the innovations
you are interested in. However, be warned that such profiles often lack detail
and tend to treat the subject matter in an unsophisticated and uncritical manner.

Further reading
1 Dyson, J. (1997) Against the Odds, Orion Business, London.
If you only ever read one book that is in any way about innovation, this is the
one. It describes how James Dyson developed the dual cyclone vacuum cleaner.
It covers both the exploration and exploitation phases of innovation. The
exploration phase of innovation is described at length with plenty of coverage
of prototype building and testing. But most important of all there is also plenty
about the exploitation stage of innovation, in particular Dyson’s struggle to
bring his innovation to market. Above all, though, it gives a valuable insight
into just what innovation entails.
2 Gertner, J. (2013) The Idea Factory: Bell Labs and the Great Age of American
Innovation, Penguin Books, London.
An unusual book about innovation. It is the story of Bell Labs, the most
successful industrial research lab of all time. It not only traces the story of some
very important innovations like the transistor and the laser, but also provides a
fascinating insight into what until recently was probably the dominant form of
innovation, namely that carried out by large corporations.
3 Chesbrough, H. W. (2006a) Open Innovation: The New Imperative for
Creating and Profiting from Technology, Harvard Business School Press,
Boston, MA.
A highly influential text. In many respects it is the antidote to the previous book
(Gertner, 2013). In contrast to Gertner (2013), Chesbrough extols the virtues of a
more open approach to innovation where ideas, discoveries and breakthroughs
come not just from internal sources but also external sources. In the process it
shows how the context of innovation has changed dramatically in recent years.
It also highlights some of the key issues in innovation, most notably through
giving very explicit consideration to business models.
4 Van Dulken, S. (2000) Inventing the 20th Century: 100 Inventions that Shaped
the World, British Library, London.
Titles can be deceptive. All of the inventions covered in this book are also
innovations. It provides useful background information on many well-known
innovations. It does not provide a great deal of data on each one, but it is a good
starting point. Certainly this is a reference work that is worth consulting to
build up quickly a picture of when and how a particular innovation occurred.
Types of Innovation

Objectives
When you have completed this chapter you will be able to:
distinguish the different forms that innovation can take, such as product,
service and process innovations
analyse the characteristics of these different forms of innovation
differentiate between the different types of innovation, such
as radical and incremental innovation
analyse the impact of the different types of innovation on human
behaviour, business activity and society as a whole.

Introduction
The notion that innovation is essentially about the commercialisation of ideas and
inventions suggests that it is relatively straightforward and simple. Far from it: not
only is the step from invention to commercially successful innovation often a large
one that takes much effort and time, but as Figure 1.1 previously indicated,
innovations can and do vary enormously. Some involve a high degree of novelty,
some a very modest degree of novelty. In addition the term ‘innovation’ is widely
used, and is often applied to things that really have little to do with innovation,
certainly in the sense of technological innovation. This chapter builds on the
discussion of the scope of innovation in the previous chapter, and tries to produce
some order from the apparent chaos and confusion surrounding the term
‘innovation’. Hopefully better informed, the reader can then proceed to more
detailed analysis of innovation.

Making sense of innovation


If innovation comes in a variety of shapes and sizes and is used by different people
to mean different things then making coherent sense of the subject is not an easy
task. Grouping innovations into categories can help. Essentially, categorising
innovations should make it easier to make sense of innovation as a whole, simply
because one can then take each category in turn and subject it to detailed scrutiny.
If it is easier to make sense of a small group than a large one then we should be on
the way to making sense of innovation.
Innovations can be categorised in a number of different ways. In this chapter two
of several potential methods of categorisation are used. One focuses on the form or
application of the innovation (i.e. what it is used for), the other focuses on the
degree of novelty associated with the innovation. Neither categorisation is
exhaustive, but the difference between the two is illuminating and helps to shed
further light on the nature of innovation.

Forms of innovation
The first categorisation, based on the form of innovation, distinguishes three
principal applications for innovation: products, services and processes. Consumers
use products and services. Products are tangible physical objects like mobile
phones, audio players or cars, which consumers acquire and then use as part of the
act of consumption. Product innovations take the form of new tangible objects.
Services on the other hand are typically intangible things like healthcare or
education, where the consumer benefits from the service but does not actually
acquire an object. Service innovations are therefore intangible. Both product and
service innovations are typically aimed at consumers. In contrast, producers
produce products and deliver services, and in order to do so they utilise processes.
Typically these processes require equipment such as machinery which we refer to as
‘capital goods’. An innovation in the form of new equipment or new methods and
systems would be a process innovation.
This distinction is actually a simplification of what one finds in the real world,
because companies, as well as individual consumers, clearly buy products and
services. Similarly, we could perfectly well argue that a washing machine, while
being a consumer product bought on the high street, is also used for a process,
namely washing clothes. In general, when consumers acquire things it is part of
consumption and when companies acquire things it is part of production or service
delivery. However, while there is undoubtedly some overlap, the distinction between
product, service and process innovations is a useful one. This is because of the
distinction in terms of the benefit that results from innovation. Product and service
innovations will typically benefit consumers giving them more functional products
or faster and more effective services. Process innovations on the other hand
typically benefit the corporate sector by improving the efficiency of their production
or service delivery processes, thereby lowering their costs. Such innovations should
also benefit consumers indirectly as lower costs eventually (but not inevitably) feed
through into lower prices.

Product innovation
Product innovations loom large in the public imagination. Products, especially
consumer products, are probably the most obvious innovation application. The
Dyson bagless vacuum cleaner is an example of a product innovation. James Dyson
developed what he terms ‘dual-cyclone’ technology (Dyson, 1997) and used it to
create a new, more efficient, vacuum cleaner. As a vacuum cleaner it is a consumer
product and what makes it an innovation – that is, what is ‘innovative’ about it – is
that it functions in a quite different way from a conventional vacuum cleaner. It is
still a vacuum cleaner and it does what vacuum cleaners have always done – it
extracts dust and other items of household debris from carpets and upholstery – but
the innovation lies in the way in which it functions. Instead of employing a fan to
suck dust into a bag, it dispenses with the bag and uses Dyson’s patented dual-
cyclone technology to extract dust and place it in a clear plastic container, resulting
in a more effective cleaner. It is a good example of a product innovation because it
is an everyday household product where you can actually see the innovation at
work, a fact that James Dyson, an experienced industrial designer and entrepreneur,
no doubt had in mind when he designed his first bagless vacuum cleaner, the Dyson
001.
From a commercial perspective the attraction of product innovations is that the
novelty of a new product will often persuade consumers to make a purchase. Nor is
it purely a matter of novelty. The introduction of a new technology into an existing
product may similarly attract much consumer interest. It is no coincidence that
‘product development’ is one of the four business strategies put forward by Ansoff
(1988) for the future development of a business.

Service innovation
Often overlooked because they lagged behind product innovations (Miles, 1993), but
equally important, are innovations in the service sector. These service innovations
are essentially new or updated and improved services. What do we mean by
services? Typically services are regarded as the provision of anything that does not
involve supplying a physical artefact. Unfortunately this is a somewhat negative
way of portraying services. A more positive way is to describe a service as placing a
bundle of capabilities at the disposal of the consumer. These capabilities would be
organised in such a way as to enable consumers to satisfy particular needs they may
have or provide solutions to problems. Thus if my car has broken down I need the
services of a garage to fix it, or if my hair is too long I need the services of a
hairdresser to get it cut. Nor are services confined to relatively mundane needs. In
order to provide for our old age we rely on a range of financial services to protect
and invest our savings and thereby ensure an appropriate income when we retire.
Financial institutions can be very innovative when it comes to financial services; in
fact, as the recent financial crisis has shown, sometimes they can be much too
innovative!
One reason why service innovations have hitherto failed to attract as much
attention as product innovations is that they are often less spectacular and less eye-
catching. This probably has something to do with the fact that, where innovation is
concerned, the public imagination has always tended to identify with inventions and
the people who create them, namely inventors. While the former often attract much
interest because they are very visible and easy to identify, the latter often score
highly in the human interest stakes. Despite this, service innovations can actually
have a huge impact on consumers and the way in which we live our lives.
Reflecting this lack of attention given to service innovations, until the late 1980s
and early 1990s there was little research into service innovations (Miles, 2005).
However, the arrival of the personal computer and other developments in
information technology saw this begin to change. The last 20 years have seen massive
growth in research and development (R&D) linked to computer software, so that
services R&D has grown rapidly. The arrival and subsequent diffusion of the Internet
caused this trend to accelerate. As a result the number of service innovations has
grown rapidly. What began with Amazon and other dot-com pioneers in the last
decade of the twentieth century accelerated in the first decade of the new century as
Facebook, Twitter, PayPal and a host of other service innovations became
household names.
One of the features of service innovations is that they can be extraordinarily
varied. Den Hertog (2000) provides a very valuable typology for making sense of
this variety through an intuitively simple yet informative system of classification.
He suggests that service innovations can be differentiated on the basis of four
dimensions to produce essentially four different types of service innovation. These
four dimensions are shown in Figure 2.1.
Figure 2.1 Den Hertog’s four dimensions of service innovation

Source: Adapted from ‘Knowledge-Intensive Business Services as co-producers of innovation’, Den Hertog,
International Journal of Innovation Management, Vol. 4, No. 4, Copyright © 2000 Imperial College Press.

In this schema the first dimension is an innovation that takes the form of a new
service concept. This involves the creation and development of an entirely new kind
of service, with the customer being provided with a service that has not hitherto
been available. It may well involve meeting a previously unmet need, or the
provision of a solution to a problem that has in the past gone unresolved. When first
introduced the call centre was an entirely new concept in terms of the provision of
services. The first call centre was reputedly created by the credit card company
Barclaycard, at its operational centre in Northampton in the 1970s as a means of
offering a centralised service handling telephone enquiries from its credit card
customers, rather than such enquiries being dealt with at the level of bank branches
on the high street. The idea of bringing staff together into one location in order to
handle telephone enquiries and providing computer technology to enable them to
access customer account data, while seemingly obvious today, was novel when first
introduced. Similarly Facebook was a new service concept when it first appeared, as
were eBay and PayPal.
Mini Case: NHS Direct
NHS Direct is a 24-hour telephone helpline providing advice and guidance on
a range of health issues throughout England (similar facilities are available in
Wales and Scotland). In line with normal practice in the UK’s National Health
Service, the service is free. It was set up in March 1998 in three pilot areas,
Newcastle, Milton Keynes and Preston. Following success with these pilot
schemes it was rolled out across the UK. Staffed by nurses and paramedics,
NHS Direct is a new service designed to provide members of the public with
access to healthcare professionals who can give advice and guidance about
the treatment of minor ailments and advise when conditions are likely to be
more serious, directing users to sources of specialist help and treatment. One
of the aims of setting up this new service was to take some of the pressure off
hard-pressed GPs and Accident and Emergency (A&E) departments in
hospitals. By 2008 NHS Direct was receiving some 8 million calls per year and
had a staff of over 2000.

The second dimension of service innovation is a new client interface (see Figure
2.1). This is where the source of the innovation lies in major changes taking place in
the point of interaction between the service provider and the client. One of the main
distinguishing features of services is that unlike products, clients are often a key
part of the production of the service (DTI, 2007). Hence a change in the way that the
client interacts with the service, by taking on a new or different role in some way,
can be a significant innovation. Most innovations of this type require the client to
behave differently. Very often this involves a switch from a relatively passive role to
a much more active one. In retailing, a change in the role of the client has occurred
with the introduction of the self-service checkout, or to give it its full title, the Semi-
Attended Customer Activated Terminal (SACAT), in many supermarkets. This
replaces the conventional human-operated checkout where the person manning the
checkout scans the barcodes, with a machine that requires the customer to scan in
the barcodes, and count and weigh items like fruit and vegetables and place them in
the ‘bagging area’. Similar service innovations have occurred in banking and in
libraries. In the case of libraries the insertion of RFID (radio frequency
identification) chips into books has led to the introduction of a much greater element
of self-service, with library users checking books out and in via a machine instead of
via a librarian.
The third dimension of service innovation (see Figure 2.1) according to Den
Hertog (2000) is service delivery. In this case the innovation lies in finding a new
way to deliver the service to the customer. Typically this sort of innovation involves
the introduction of new systems and new forms of organisation. Good examples
would be fast-food outlets such as drive-ins and takeaways. Here the conventional
restaurant as a way of delivering meals to customers is replaced by a new system
that involves packaging the meal in such a way that the consumer collects the meal
from the outlet rather than consuming it on the premises.
The creation of the ‘Direct Line’ telephone insurance business is a good example
of this third dimension of service innovation (Channon, 1996). For years the
insurance business had been transacted via high street outlets or through
intermediaries known as insurance brokers. Peter Wood, the creator of the Direct
Line telephone insurance business, realised that with appropriate online computer
services, it would be possible to cut out these expensive and unproductive ways of
dealing with the public and deal direct with the customer via the telephone. Thus a
service delivered via high street stores was replaced by one delivered via the
telephone. This was not only a much easier and more convenient way for consumers
to access insurance services, it was also much more efficient.
The fourth and final dimension of service innovation in the Den Hertog (2000)
typology involves technology, where the emergence of new technologies makes new
forms of service possible. The service may well be one that has been around for a
long time but a new technology makes innovation possible so that the nature of the
service changes in some way. Very often it is new technology that works with the
other dimensions to create innovation.
The growth of electronic technologies associated with information and
communications technologies (ICT), such as the Internet, has led to a sharp rise in
the number of service innovations of this type. Indeed Den Hertog (2000) describes
ICT as a very powerful enabling and facilitating factor driving service innovations.
A good example would be Internet shopping. Retailing is a classic service activity.
But the arrival of Internet technology has led to many innovations. Internet
shopping means it is no longer necessary to visit a retail store in order to shop.
Shopping can take place online with the goods then being delivered to the
consumer’s home by mail order or via a delivery service. Amazon.com is a prime
example. Bookshops have been around a long time, but online bookshops can offer
a much greater range of books often at lower prices because they don’t pay the cost
of retail premises.
While these four dimensions help us to understand the nature of service
innovations, it has to be pointed out that there is a significant amount of interaction
and overlap between them. Thus service innovations involving new forms of service
delivery are often made possible by developments in technology and new forms of
client interface arise in the same way. Entirely new service concepts often come
about for the same reasons. Facebook, for example, is a very good illustration of a
new concept, since until Mark Zuckerberg came up with the idea there really was
nothing like it. But Facebook would not have been possible without the new
technology of the Internet.
Mini Case: Netflix
Netflix was founded in 1997 by Marc Randolph and Reed Hastings. At the time
the video rental market was dominated by Blockbuster which rented out
video tapes of movies for video cassette recorders (VCRs) from high street
video rental stores. Randolph and Hastings’ new business aimed to take
advantage of the recently introduced DVD-video format technology by
offering a different kind of service. The new technology made movies
available on DVDs that were much smaller and lighter than bulky video
cassette tapes. This in turn meant it was now possible for movies to be mailed
as DVDs direct to customers. Randolph and Hastings’ new service combined
the advantages offered by this new technology with the increasing availability
of Internet technology, by allowing customers to select the movies they
wanted to rent online. This not only did away with the need to operate and
staff a costly network of rental stores, it also meant that Netflix could offer a
very much greater choice of movie titles. The initial business model employed
by Netflix involved charging customers a rental fee for each DVD borrowed,
but by 1999 the company had ditched this in favour of a subscription model
that allowed customers to rent as many DVDs as they wanted on a monthly
basis.
The new business model offered valuable benefits for customers. At a
stroke it eliminated the hassle surrounding due dates and late return fees.
However, it also provided important advantages to Netflix, since it allowed
the company to differentiate and profile its customers by how many movies
they watched. By February 2003 Netflix’s subscriber network had passed the
one million subscribers mark and the company was the leading provider of
movie rentals. But the technology did not stand still and neither did Netflix. In
2007 it began to move away from DVD rentals and into video on demand via
video streaming. As a result, while DVD sales fell between 2006 and 2011
Netflix’s revenue continued to grow.
In 2011 Netflix announced a new major addition to its service offering in
the form of original content for video streaming. The first manifestation of
this was the production of an hour-long political drama series starring Kevin
Spacey entitled House of Cards which debuted in February 2013. By April 2014
Netflix had 50 million subscribers worldwide, 32.3 per cent of the video
streaming market in the US and operations in 41 countries.
Source: Afuah (2009).

Process innovation
New products can be a powerful force for economic growth as demand for them
creates new jobs and new industries. However, this is not the only way that
innovations can make us better off. There are also those innovations that result in
the development of ways of producing things more cheaply. More efficient
production methods, or simply the removal of bottlenecks that hold back
production, serve to lower costs, enabling firms to cut prices, which in turn generates greater
demand and, once again, more jobs and higher incomes.
The impact of these efficiency gains resulting from improved manufacturing
methods and processes can be both impressive and far reaching. Leung and Voth
(2011) cite the case of Henry Ford and his introduction of the moving assembly line.
This reduced the time taken to assemble a car from 12.5 hours in the spring of 1913
to 1 hour and 33 minutes a year later. These efficiency gains led Ford to reduce the
price of his Model T car, from $960 in 1909 to $360 in 1916. Sales increased
dramatically and so too did the output of Model Ts. They became so commonplace
that in his novel Cannery Row, John Steinbeck (2000) wrote: ‘Most of the babies of
the period were conceived in Model T Fords and not a few were born in them.’
Leung and Voth (2011) estimate that Ford's innovation was worth
around 1.8 per cent of GDP. This may look like a relatively small number, but US
GDP even in the first half of the twentieth century was huge. Leung and Voth (2011)
note that in the process Ford made himself rich and created thousands of new jobs
but most of the benefits of his innovation went to the people who bought his cars.
Such is the power of process innovations.
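
To put these gains in perspective, the short sketch below simply reworks the figures quoted above. It is illustrative arithmetic only and forms no part of Leung and Voth's (2011) analysis.

# Illustrative arithmetic using the figures quoted in the text
assembly_before_hours = 12.5          # spring 1913
assembly_after_hours = 1 + 33 / 60    # a year later: 1 hour 33 minutes
speed_up = assembly_before_hours / assembly_after_hours
print(f"Assembly time cut by a factor of roughly {speed_up:.1f}")   # about 8x

price_1909, price_1916 = 960, 360
price_cut = 1 - price_1916 / price_1909
print(f"Model T price reduced by {price_cut:.1%}")                  # 62.5%
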
What are process innovations? Essentially process innovations are innovations
that take place within the production or service delivery process. This is the process
by which products are manufactured and services created and delivered. As Figure
2.2 shows, this is the process by which inputs are converted into outputs. The inputs
are what economists term factors of production: materials and components, labour,
and capital, which may be physical (machinery and equipment) or financial. The
outputs are products and services which can then be sold to consumers. Conversion involves a multitude of very
different activities that process (i.e. convert) material in a variety of different forms
into tangible objects. With more complex products, production may well involve the
assembly of components and modules into finished products. Process innovations
involve the transformation of this process typically to make it more efficient, more
reliable and capable of producing outputs of high quality.
Figure 2.2 Innovating the production/service delivery process

In reality, process innovations are even more diverse than service innovations. It
is significant that of the five types of innovation identified by Schumpeter (1934), at
least three, namely, new processes, new sources of supply and new ways of
organising business, actually come within the remit of process innovation. In
Schumpeter’s terms, new processes cover new ways of making and producing
products. This implies changes to the production equipment, methods and systems of
firms that manufacture products. However, it can be useful to portray process
innovations as occurring on a number of different levels. This allows for a much
broader interpretation of what is meant by process innovation, and provides scope
for embracing some of Schumpeter’s ideas. This is shown in Figure 2.3.
Figure 2.3 Levels of process innovation

At the lowest level we have changes in the equipment that forms part of a
production system. Good examples would be the introduction of numerically
controlled (NC) and later computer numerically controlled (CNC) machine tools,
which transformed metal-cutting processes by reducing the level of skill required of
machine tool operators, while both speeding up machining operations and
increasing their accuracy. At this level, the
new equipment introduced as a process innovation is frequently designed to make
labour-intensive processes more efficient through the introduction of capital-
intensive equipment (Swann, 2009).
At the next level we have improvements in sub-processes within manufacturing
systems. Examples here would include things like modular design and
manufacturing cells. The latter involves grouping together items of manufacturing
equipment, such as machine tools, to enable them to focus on the production of a
particular class or type of product. It is basically a way of redesigning and
reorganising a part of a production system in order to make it more efficient.
At the third level one has the introduction of entirely new production processes.
These are likely to involve not merely the introduction of new equipment, but a
wholly redesigned process, probably involving a new and perhaps very different
technology. A classic example of a redesigned process also involving the
introduction of a new technology is the 'float glass' process developed by Alastair
Pilkington at the British glassmakers Pilkington Bros. (Quinn, 1991). Prior to the
introduction of this process innovation, plate glass used for shop windows and
office windows was expensive and of poor quality. The only way of getting a flat
surface was to grind and then polish finished sheets of glass. This was a slow and
laborious process. At a stroke the float glass process dispensed with grinding and
polishing equipment. In its place came an entirely new technology, with plate glass
manufactured by drawing molten glass out of a furnace and across a bed of molten
tin in order to yield a perfectly flat surface. It led to a dramatic fall in the cost of
making plate glass. Architects and property developers could now afford to specify
large sheets of plate glass when constructing new buildings. The result can be seen
in building construction in the past 30 years, where everything from office blocks
and hotels to airports and shopping malls now employs large expanses of glass.

Mini Case: Federal Express


Federal Express was the brainchild of Frederick W. Smith, who pioneered the
idea of an overnight parcel delivery system and in the process revolutionised
the way many businesses operate. He first began to explore alternative
distribution systems as an academic interest while studying at Yale
University. However, it was not until after he had completed a tour of duty in
Vietnam that he began to seriously investigate the practicalities of creating a
more efficient parcel delivery system. He was particularly concerned about
the difficulty of getting small high-value items, such as spare parts and
medical items, to their destination quickly. At the time airfreight was the
fastest method but despite the fact that aircraft were much faster than any
other equivalent form of transport, it often took as long as two days for an
item even to reach destinations within the US.
Smith identified that the heart of the problem lay in the way in which
airfreight and parcel businesses were organised. Most delivery systems at the
time were organised on a point-to-point basis, making it uneconomic to send
small items direct. Instead items shuttled through a network often taking
several flights. Smith’s big idea was to utilise a system akin to the cheque
clearing system used by banks, where all items cleared through a central
point. Smith devised a unique ‘hub and spoke’ system where the hub formed
the central point through which all items were cleared (see Figure 2.4), using
Memphis in Tennessee as his hub, because it was geographically fairly central
to the area he hoped to serve.

Figure 2.4 Hub-and-spoke operations at Federal Express


Smith used a combination of both trucks and aircraft to create an
integrated transport system. Parcels were collected during the day and then
flown to the company’s central hub where they were sorted and put on an
overnight flight to the nearest destination city, ready for delivery by truck to
the recipient the following day. The result is a much faster and more efficient
parcel delivery system.Such was the success of organising parcel delivery this
way that in time hub-and-spoke operations became the industry standard. In
time Federal Express was able to justify the use of larger planes including
McDonnell-Douglas MD-11and Airbus A300 freighters. By 2004 expansion had
created a huge worldwide network, still organised on a hub-and-spoke basis,
delivering packages to customers in 200+ countries and handling some 3
million packages per day.
Source: Nayak and Ketteringham (1993).
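
A simple count of routes shows why clearing every item through a single hub is so much more economical than point-to-point operation. The sketch below uses hypothetical numbers of cities purely for illustration; it is not drawn from Federal Express data.

def point_to_point_routes(n_cities: int) -> int:
    """Direct routes needed to link every pair of cities."""
    return n_cities * (n_cities - 1) // 2

def hub_and_spoke_routes(n_cities: int) -> int:
    """Routes needed when every city connects only to a central hub."""
    return n_cities - 1

for n in (10, 50, 100):
    print(n, point_to_point_routes(n), hub_and_spoke_routes(n))
# With 100 cities: 4,950 direct routes versus just 99 spokes to the hub.

Concentrating volumes onto far fewer routes is also what later allowed Federal Express to justify larger freighters on each leg.
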

An example of a service business being transformed through the introduction of a redesigned process that utilises new technology would be SABRE, the computerised
airline reservation system introduced by American Airlines (Campbell-Kelly, 2003),
which provides an example of a service innovation brought about by improved
equipment, namely the computer. When first introduced in the late 1960s SABRE
transformed the experience of air travel, because for the first time it became
possible for travel agents booking passengers onto flights to ascertain accurately
whether an airline actually had vacant seats on a particular flight. Prior to SABRE,
airlines could only estimate how many people had booked to fly. Passengers were
required to ‘confirm’ their booking by telephone before the flight and airlines left a
certain proportion of seats empty to cover bookings still being processed by the
unwieldy paper-based systems then being used. Not only did the uncertainty mean a
poor quality of service for passengers, it often led to low load factors on many
flights, inevitably making flying more expensive. SABRE paved the way for the
massive growth in air travel that has occurred over the last three decades.
The fourth level of process innovation involves what might be described as
organisational innovations. These have little to do with the introduction of new
technology and more to do with how the business is organised and run. These
innovations are more about methods, particularly methods of organisation, than
about technologies. Examples would include things like F. W. Taylor’s ‘scientific
management’ (sometimes referred to as work study or organisation and methods),
which in its time was a way of organising work that led to big increases in
productivity as work activities were reorganised using Taylor’s principles of
scientific management. Other examples would include just-in-time production, total
quality management (TQM), and lean production (Table 2.1). These are all ways of
organising production. However, they are not confined to the factory floor. They are
ways of organising a business and require commitment at every level of the
organisation. As a result they can be classified as ways in which one can redesign a
business.

Table 2.1 Technological versus organisational forms of process innovation

As noted earlier, process innovation is something of a ‘Cinderella’ topic, but this does not mean it should be underestimated. One of the key concepts that Joseph
Schumpeter introduced was the notion of ‘creative destruction’ where innovation
leads to the rise of new industries and the demise of old established ones. Such
dramatic changes which can sweep away whole industries relatively quickly do not
normally come about as a result of product or service innovations but through
process innovations.
In the early years of the nineteenth century there were frequent outbreaks of
rioting and civil disorder as workers broke into local textile mills to destroy textile
machinery. The Luddites, as they were known, were particularly prevalent in the
Midlands region of the UK, especially in and around Nottingham (Chapman, 2002).
Here it was stocking knitters, who traditionally worked on knitting frames in
their homes, who took to rioting and breaking the new, more efficient machines
housed in factories. They feared ‘creative destruction’ as the new factory-based
machines made their knitting frames redundant and destroyed their livelihoods, a
testimony to the power of process innovation.

Types of innovation
As well as focusing on applications to differentiate innovations, there are other
approaches to analysing the extent of innovation through some form of
categorisation (Dodgson et al., 2008). One widely used approach is to focus on the
degree of novelty. This was broadly the approach taken in Chapter 1 on the scope of
innovation. However, here a more systematic approach is used. The advantage of
using this sort of categorisation is that it brings the extent of the change involved in
an innovation into sharp relief. One focuses on just how new an innovation is, which
in turn highlights the technological effort associated with it. In effect one is
examining the ‘innovativeness’ of an innovation. In an era when lots
of things are described as innovative, this kind of analysis can help to qualify such
terms and enable judgements to be made about the degree of change embodied in an
innovation.
It has long been noted that innovations vary greatly, from those which are
completely new and different from anything that has gone before, to those that
involve little more than ‘cosmetic’ changes to an existing design. In the first instance
the degree of novelty would be high while in the latter it would be very low. This
distinction between big-change and small-change innovations has led some to a
categorisation of innovation that differentiates between radical and incremental
(Freeman, 1974) innovations. In this categorisation, innovations involving major
breakthroughs, new technologies and major scientific advances would be in the
radical category. More modest innovations involving product improvements that
result in changes to product attributes, such as small improvements in performance
or greater functionality, rather than new products, would be in the incremental
category.
However, differentiating innovations using just two classes in this way is rather
limited and does not bring out the subtle but important differences between
innovations. In particular it often fails to show where the novelty really lies. To cater
for this Henderson and Clark (1990) have developed a more complex and more
sophisticated analysis. This incorporates the concepts of radical and incremental
innovation within a broader framework. Henderson and Clark’s (1990) analytical
framework provides a typology that allows us to analyse a range of innovations in
more detail than simply classifying them as radical or incremental, and at the same
time to predict their impact in terms of both competition and the marketplace. The
analysis does have its limitations, most notably that it is very product oriented,
though it can be used for service and process innovations. But it does at least help to
show the sheer range of things that can be covered by the term innovation, and
importantly it helps to focus on where the novelty in an innovation really lies.
At the heart of Henderson and Clark’s (1990) analytical framework is the
recognition that products, services and processes are actually systems. As systems
they are made up of components that fit together in a particular way in order to
carry out a given function.
Henderson and Clark (1990) point out that to make a product, service or process
normally requires two distinct types of knowledge:

▮ Component knowledge, that is, knowledge of each of the components that perform a well-defined function within a broader system that makes up the
product. This knowledge forms part of the ‘core design concepts’ (Henderson
and Clark, 1990) embedded in the components.
▮ System knowledge, that is, knowledge about the way the components are
integrated and linked together. This is knowledge about how the system works
and how the various components are configured and work together. Henderson
and Clark (1990) refer to this as ‘architectural’ knowledge.
Henderson and Clark (1990) use the distinction between component and system
knowledge to differentiate four categories or types of innovation (Figure 2.5). They
use a two-dimensional matrix where one axis relates to components and component
changes, while the other relates to linkages between components (i.e. system
architecture) and changes in those linkages.

Figure 2.5 Typology of innovations

In this analysis radical and incremental innovation are polarised as being at opposite extremes, where the former involves changes in components and system
architecture while the latter involves small changes in components that enhance
component performance. Against this background, the analysis introduces two
intermediate types of innovation between these two extremes (Table 2.2), namely
modular innovation and architectural innovation.
Table 2.2 Changes associated with types of innovation
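
Because the framework turns on just two yes/no questions – are the core component concepts overturned, and are the linkages between components changed? – the classification can be summarised very compactly. The short sketch below is a minimal illustration of that logic, not code taken from Henderson and Clark (1990).

def classify_innovation(core_concepts_overturned: bool,
                        linkages_changed: bool) -> str:
    """Return the innovation type for a given combination of changes."""
    if not core_concepts_overturned and not linkages_changed:
        return "incremental"    # components refined, architecture unchanged
    if core_concepts_overturned and not linkages_changed:
        return "modular"        # new components, same architecture
    if not core_concepts_overturned and linkages_changed:
        return "architectural"  # same components, reconfigured system
    return "radical"            # new components linked in a new architecture

# A refinement of one component within an unchanged architecture:
print(classify_innovation(core_concepts_overturned=False,
                          linkages_changed=False))   # -> "incremental"
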

Radical innovation
Radical innovation is normally the result of a major technological breakthrough or
the application of a new technology. Unlike incremental innovation where each
innovation typically draws heavily on what has preceded it, radical innovation is
non-linear and discontinuous involving a step change from what has gone before.
Hence radical innovation is about much more than improving an existing design. A
radical innovation calls for a whole new design. In Henderson and Clark’s (1990)
terminology: ‘Radical innovation establishes a new dominant design, and hence a
new set of core design concepts embodied in components that are linked together in
a new architecture’. This new architecture, with new components linked together in
a different way, often results from the introduction of a new technology. In some
cases this will be a transforming technology, which brings a different set of
priorities into play both in the market and in the industry. Thus in terms of the
degree of novelty, radical innovations involve a high level of novelty because they
employ a new design with new components integrated into a new system
architecture. A radical innovation may well involve the use of a new business model,
as when Haloid (later Xerox) introduced the electrostatic copier. In short, with
radical innovation just about everything changes.
The flat-screen TV is an example of a radical innovation. What makes the flat-
screen TV a radical innovation? Prior to its introduction, TVs and computer monitors
utilised a cathode ray tube (CRT) to display an image. Compared to a TV utilising a
CRT display, a flat-screen TV incorporates a completely different technology,
namely a liquid crystal display (LCD). LCD technology, which has its origins back in
the 1970s, operates on entirely different principles. The LCD uses liquid chemicals
whose molecules can be aligned precisely when subjected to an electric field.
First used for pocket calculator and wristwatch displays, LCD technology owes
nothing to CRT technology. Compared to a CRT-based TV, the system architecture is
different as are the components. Thus the flat-screen TV represents a discontinuous
change rather than a linear one. CRT displays benefited from a string of linear
innovations over many years that improved their display characteristics, but the
introduction of the flat-screen TV represented a break with CRT technology. The net
result is a product that has to be manufactured in a completely different way,
rendering CRT manufacturing facilities and the knowledge surrounding them
redundant.
Most radical innovations employ a new technology. Typically they involve much
more than mere tinkering to improve performance. Radical innovations
represent a radical break from the past, with a new technology working on new
principles to give new product characteristics. To bring the new technology to
market is a huge task, with a high degree of uncertainty. Will the technology work?
Will it provide products or services with characteristics that consumers want? Thus
radical innovation is both difficult and risky.
Radical innovations are, however, comparatively rare. Rothwell and Gardiner
(1989a) estimated that at most about 10 per cent of innovations are radical.
However, they tend to have more dramatic consequences than other types of
innovation for the organisations that develop them. Typically a radical innovation
will require them to ask a new set of questions, to draw on new technical and
commercial skills and to employ new problem-solving approaches (Henderson and
Clarke, 1990). The jet engine provides a good example of a radical innovation that
had far-reaching consequences in terms of organisational capabilities. Compared to
its predecessor, the piston engine, the jet operates on quite different principles.
Among the problems it presented was the need for new materials that could
withstand very high temperatures. In terms of technical skills it required a
knowledge of aerodynamics. Nor did it stop there, for the jet had very different
things to offer potential customers (i.e. commercial airlines), namely speed and
smoothness.

Mini Case: The Intel 4004 Microprocessor


Unlike 1066 or 1492, the year 1971 is not usually recognised as one of history’s
most significant years. But it was in 1971 near San Francisco, California that
the American semiconductor company Intel launched what was to prove one of the
most radical innovations of all time – namely, the 4004 microprocessor.
The story had begun a couple of years earlier when the Nippon Calculating
Machine Corporation approached Intel to design 12 custom chips for the new
electronic calculator it was developing, the Busicom 141-PF. Intel’s designers
suggested that it might be possible to combine a number of the chips to reduce
the total number required from 12 to 4. This would not only reduce the number
of chips that had to be produced, but one of the four chips would be capable
of being programmed so that it could be used in other products. The
programmable chip would form a central processing unit (CPU). This was the
first time anyone had combined all the functions required for a CPU on a
single chip. Called the Intel 4004, it was the world’s first microprocessor. This
revolutionary microprocessor was the size of a fingernail and contained 2,300
transistors when first shown to the public in November 1971. It possessed the same
computing power as the first electronic computer, which had filled an entire room.
The 4004 chip found uses well beyond calculators. It unleashed a
technological revolution. It formed the basis of the first personal computers
(PCs) and its descendants power a significant proportion of the millions of
PCs in use around the world today. Nor is its influence confined to computers.
Microprocessors like the 4004 form the basis of MP3 players, smartphones,
tablets and a host of other electronic devices. However, microprocessors have
become steadily more powerful. The Intel Core processors of today, which are
direct descendants of the 4004, contain more than 500 million transistors.
Source: Intel (2014).

Because different organisational capabilities are often required with radical innovations, it is not unusual for them to be launched, not by existing players in an
industry, but by new entrants. The iPod is an example. Working on different
principles from earlier audio players, it provided an opportunity for a new entrant,
Apple Computer, to enter the market. Apple was not at a disadvantage because the
technology of MP3 was new and the existing firms did not have many years of
accumulated experience to draw on. Nor is this a one-off example: we saw in the
previous chapter how a radical innovation, electrostatic copying, provided an
opportunity for Haloid (later renamed Xerox) to enter the market very successfully.
The concept of radical innovation is closely linked to Christensen’s (1997) notion
of ‘disruptive technologies’. By ‘disruptive’ he means inducing significant changes in
markets and industries, often leading to high levels of uncertainty. In terms of
markets these changes might mean completely new markets or new customers or
new products/services. In industry terms the changes often mean the arrival of new
entrant firms better able to marshall the necessary organisational capabilities now
required, and the departure of existing firms.
Thus radical innovation typically has much more far-reaching consequences than
any other type of innovation. The changes that accompany radical innovation often
lead to periods of considerable uncertainty, perhaps with competing designs and
increased competition. Eventually, however, as we shall see in the next chapter this
state of uncertainty subsides and radical innovation is followed by successive
incremental innovations.

Incremental innovation
Incremental innovation involves modest changes to existing products/services (or
processes) to exploit the potential of an existing design. The changes are typically
improvements to components, possibly the introduction of new components, but
always within the confines of an existing design. However, it is important to stress
that these are improvements, not major changes. In other words, the level of novelty
is low. Christensen (1997) defines incremental innovation as ‘a change that builds on
a firm’s expertise in component technology within an established architecture’, and
this highlights an important feature of incremental innovation, namely that it is
typically the product of existing practice and expertise associated with an existing
technology rather than the introduction of a new technology.
Incremental innovations are the commonest type of innovation. Gradual
improvements in knowledge and materials associated with a particular technology
lead to most products and services being enhanced over time. These enhancements
typically take the form of refinements in components rather than changes in the
system. The technology is improved rather than replaced. Thus, incremental
innovation is something that occurs quite frequently to create an essentially linear
process of continuous change. The changes exploit the potential of an existing
design using an existing technology. Thus, a new model of an existing and
established product (perhaps described as a ‘mark 2’ or new and improved version)
is likely to leave the architecture of the system unchanged and instead involve
refinements to particular components. In the case of the automatic washing machine
(see mini case), incremental innovation describes the way in which manufacturers
have improved the efficiency of the machine by fitting more powerful motors to
give faster spin speeds. With the system and the linkages between components
unchanged and the design of the components reinforced (through refinements and
performance improvements) this places such innovations in the top-left-hand
quadrant of Figure 2.5, where they are designated ‘incremental innovations’.
New models of the iPod provide another example of incremental innovation.
Originally introduced in October 2001, there was nothing incremental about the iPod
when it first appeared. It was a radical innovation. However, since then Apple has
produced a steady stream of new versions of the iPod – different sizes (e.g. Mini and
Nano), different storage capacities and different colours. Throughout, though, the
technology and how it is configured have remained the same.
The impact of incremental innovation in terms of markets and industries is likely
to be quite different from radical innovation. Incremental innovation, by using
existing technology and the knowledge and expertise associated with it, tends to
reinforce the position of incumbent firms. Similarly in terms of markets one is
typically talking about increasing market penetration or entering new market
segments rather than the creation of new markets. Thus, incremental innovation
favours existing players. They are likely to be the ones with an established stock of
knowledge and expertise in a given technology. In that sense they will probably be
the ones best placed to generate a steady (i.e. linear) stream of incremental
innovations.
Mini Case: Automatic Washing Machine
The modern automatic washing machine is the product of a variety of
innovations. The washing machine is a system for washing clothes. The
components comprise: motor, pump, drum, programmer, chassis, door and
body. These components are linked together into an overall system.
Component knowledge is the knowledge that relates to each of the
components. System knowledge, on the other hand, is about the way in which
the components interact. The interaction is determined by the way in which
the system is configured. Responsible for the design and development of the
system, washing machine manufacturers frequently buy in component
knowledge by buying components and then assembling them into a finished
product.
Washing machines have been greatly affected by incremental innovations.
Changes in the spin speed are an example of incremental innovation. The spin
speed determines how dry the clothes will be when they come out of the
machine. In the mid-1970s automatic washing machines typically had a
maximum spin speed of 700 rpm. In 1976 Hoover launched its A3058 model
which boasted a maximum spin speed of 800 rpm. Two years later
Hoover’s A3060 introduced a further innovation in the form of a maximum
spin speed of 1,100 rpm. Since then a steady stream of innovations has seen
the speed rise to 1,200, 1,400, 1,600 and now 1,800 rpm. Although these
advances have resulted in improved performance (i.e. drier clothes), the
system has remained unchanged.
Among a host of other incremental innovations to have affected automatic
washing machines have been changes in the washing temperatures, the
amount of water they use, the amount of electricity they consume, and the
range of programmes they are able to offer. All this in a machine that looks
pretty much the same as it did 40 years ago!

Modular innovation
Modular innovation uses the architecture and configuration associated with the
existing system of an established product, but employs new components with
different design concepts. In terms of Henderson and Clark’s (1990) framework,
modular innovation is in the top-right quadrant.
As with incremental innovation, modular innovation does not involve a whole
new design. Modular innovation does, however, involve new or at least significantly
different components. In the case of the clockwork radio, where a hand-wound spring
drives a small generator in place of batteries, it is the power source that is new. The
radio otherwise operates in much the same way as any other radio.
The use of new or different components is the key feature of modular innovation,
especially if the new components embrace a new technology. New technology can
transform the way in which one or more components within the overall system can
operate, but the system and its configuration/architecture remain unchanged.
Clearly the impact of modular innovation is usually less dramatic than is the case
with radical innovation. The clockwork radio illustrates this well. People still listen
to the radio in the way they always have; but the fact that it does not need an
external power source means that new groups of users, often living in relatively poor
countries without access to a stable and reliable supply of electricity, can get the
benefit of radio. Clockwork radio has also opened up new markets in affluent
countries – for example, hikers who want a radio to keep in touch with the outside
world. It has also provided an important ‘demonstration’ effect as it has led to other
products, such as torches, being fitted with this ingenious and environmentally
friendly source of power.

Architectural innovation
With architectural innovation, the components and associated design concepts
remain unchanged but the configuration of the system changes as new linkages are
instituted. As Henderson and Clark (1990: 12) point out, ‘the essence of an
architectural innovation is the reconfiguration of an established system to link
together existing components in a new way’. This is not to say that there will not be
some changes to components. Manufacturers may well take the opportunity to
refine and improve some components, but essentially the changes will be minor,
leaving the components to function as they have in the past but within a new
redesigned and reconfigured system.
An example of an architectural innovation would be the Sony Walkman. Amazing
as it may seem, when it appeared on the market it meant that for the first time one
could listen to music on the move. However, the significance of the Walkman is not
just that it met a hitherto unmet need. It also provides an excellent example of an
architectural innovation. At the time tape players were not new. But what the Sony
Walkman did was to repackage the components. In so doing Sony changed the way
in which the system was configured (i.e. the way in which the various components
fitted together). The result was a much smaller and lighter machine and one that
could be operated on relatively small torch batteries, thereby making it highly
mobile. At the time there were many in Sony who claimed that there was no market
for such a machine. But Sony’s boss, Akio Morita, was adamant. And he was proved
right. It was a huge commercial success, selling 1.5 million units in just two years
(Sanderson and Uzumeri, 1995). It was so successful that it was soon copied by other
manufacturers. More significantly, it changed the behaviour of consumers. Young
people found they could combine a healthy lifestyle with continuing to listen to
music, so the Walkman may be said to have helped promote a whole range of
activities like jogging, walking and use of the gym.
The value of an innovation typology
As was the case with the forms of innovation, none of the types of innovation
outlined using this framework is entirely watertight. Inevitably there is overlap and
there will be many occasions when it is a matter of judgement regarding in which
category an innovation should be placed. However, this is not really the issue. What
matters is the general value that comes from attempting a categorisation of
innovations. Categorisation helps to show that innovations are not homogeneous.
Innovations vary. Consequently, any analysis of innovation needs a degree of
sophistication that can isolate exactly where the novelty of an innovation lies. In the
process this should enable the more discerning analysts to cast a more critical eye
upon some of the wilder claims surrounding objects that are described using that
much over-used adjective: ‘innovative’.
Categorising innovations into types ranging from radical to incremental can also
help to show that the influence of technology and technological change can vary
considerably. Technology works in a variety of ways. However, its impact will differ
enormously depending on whether it is applied to whole systems or only to
individual components. Hence, this form of categorisation has a predictive power,
such that those who use it can much more effectively evaluate the potential impact
of a particular innovation.
Distinguishing four different types of innovation can also help to explain why the
responses of firms to the introduction of new technologies will often vary. The
analysis means that perhaps we should not be surprised that some firms do not
respond positively to some new technologies. If the technology affects components
we can expect a rapid take-up of a new technology, because it is likely to reinforce
the competitive position of incumbent manufacturers. On the other hand, if the
technology leads to system changes and the introduction of new architectures, the
incumbents are less likely to be happy about the changes, as their position may be
eroded. In Schumpeter’s (1934) words, we are likely to see ‘creative destruction’ at
work.
This typology can also help in understanding the evolutionary process associated
with technological change. When a new technology appears, it frequently leads to a
proliferation of competing system designs, each with a different architecture. One
could see exactly this happening when the first cars were developed – there was a
multiplicity of competing architectures, and again when the first video recorders
appeared. Eventually through a process of ‘shake-out’ a common system
architecture or ‘dominant design’ evolved and was adopted by all manufacturers.
This kind of evolutionary process is in fact very common and carries major
implications for would-be innovators and entrepreneurs. They will need to recognise
that, if they enter the industry during its early years, they can expect there to be a
period of shake-out eventually. It is even more important that they recognise that a
dominant design is likely to emerge and that it is not always technically superior to
its rivals. The QWERTY keyboard is evidence that sometimes technically inferior
designs emerge as the dominant design.
However, this typology does have its limitations. First, it is very product oriented.
While most products are assembled from components configured through a product
architecture that describes the way they fit together, this is often much less apparent
with services. Services therefore do not lend themselves to this sort of analysis
nearly as easily. Secondly there are some products that are not assembled from
components and therefore do not possess an architecture; chemicals and
pharmaceuticals are good examples. Hence the
typology is not universally applicable even to products. Thirdly, the typology is
technologically oriented and while it may be good at differentiating the degree of
innovation in technological terms it does not necessarily differentiate in terms of the
wider impact of an innovation on society. There are examples of architectural and
even modular innovations that have had what one might describe as a radical
impact on society. The personal computer provides a good
example. One has only to consider that 30 years ago virtually no one had a personal
computer on their desk at work – whereas today virtually everyone does (certainly
among those who work in offices). And yet in technological terms the personal computer
is an example of architectural innovation. The mobile phone and the Sony Walkman
provide further examples.

Case Study: Power-by-the-Hour


The manufacture of jet engines to power commercial airliners is dominated by
just three firms: General Electric and Pratt & Whitney of the US and Britain’s
Rolls-Royce. Of the three, Rolls-Royce currently holds the number two slot in
terms of market share behind General Electric. Jet engines are highly complex
and sophisticated products, containing well over 100,000 parts. The larger
engines such as those powering the largest airliners can cost $20 million each.
Not surprisingly, the development of a new engine is a very costly and
lengthy process.
One of Rolls-Royce’s latest engines, the Trent 900 that powers the new
Airbus A380 ‘superjumbo’, includes a number of innovations. The most visible
are special curved fan blades made from titanium (Rolls-Royce, 2008) fitted to
the huge 116-inch diameter fan at the front of the engine. The fan generates 75
per cent of the engine’s thrust. The innovative fan blades are designed to
provide greater aerodynamic efficiency, lower noise and greater tolerance to
foreign object damage. Another novel feature of this innovative engine is the
use of a contra-rotating high pressure (HP) module behind the fan that rotates
in the opposite direction (clockwise) to the intermediate pressure (IP) module
to give a much more efficient gas stream through the engine. Like the swept
fan, the contra-rotating concept is a new feature in jet engine design.
However, innovation at Rolls-Royce is not confined to product innovations
like those outlined above. The company has recently been active in the field
of service innovations.
The jet engine business is highly competitive, with orders for engines
from the world’s airlines bitterly fought over by the ‘Big Three’ engine makers,
so margins on new engines are often very thin. This, combined with the fact
that the market for replacement engines is limited since they typically have a
life of 25–30 years, means that engine makers have in the past always looked
to the supply of spare parts, where margins are very much higher, for
profitability. This was fine when engines were somewhat less reliable than
they are today. Thirty years ago one could expect an engine to require spares
to a value equivalent to the purchase price of the engine about every 8 years
(Smith, 2013). But a steady stream of innovations and technological advances
has meant that modern large engines are very much more reliable than their
predecessors and require spares equivalent to the purchase price of the engine
not every 8 years but every 25 years. This has seriously eroded the engine
manufacturers’ business model.
However, one of the technological advances that has had a big impact on
jet engines has been the replacement of hydro-mechanical control systems
with digital ones, a change that is akin to the introduction of fly-by-wire
control systems for aircraft. Digital controls, known as full authority digital
electronic control or FADEC for short, not only help to make engines more
reliable by ensuring that engines operate within optimal parameters, but
combined with advances in telemetry they provide engine manufacturers with
enormous amounts of data on the health and current condition of each engine.
Consequently Rolls-Royce and the other manufacturers increasingly provide
not merely spares but a range of maintenance, repair and overhaul services.
However, with access to an unprecedented amount of data about the engines
it has produced, Rolls-Royce has been able to offer a very different kind of
service from that offered by traditional independent maintenance, repair and
overhaul (MRO) providers that look after and maintain engines for airlines.
MRO firms typically provided maintenance services on a ‘time and materials’
basis, where the price charged was based on the actual cost in terms of staff time
and spare parts used. Rolls-Royce broke new ground by offering its customers
an ‘integrated solution’ (Smith, 2013) in the form of fixed price maintenance
that guaranteed a given level of engine availability. This new type of
performance-based contract Rolls-Royce marketed as a ‘power-by-the-hour’
engine maintenance service, since customers are charged a fixed rate for
every hour that an engine is actually in use.
One of Rolls-Royce’s first customers for ‘power-by-the-hour’ was the US
Navy. Rolls-Royce signed a contract with the US Navy in September 2003
under the terms of which it agreed to provide maintenance and logistical
support for the F405 Adour engines that powered the Navy’s 200-strong fleet
of Boeing/BAE Systems T-45 Goshawk advanced naval jet trainer aircraft.
Under the contract Rolls-Royce received a fixed price for each hour the
engines were in the air. This meant Rolls-Royce providing all the engine
maintenance, support, trouble-shooting, parts supply and logistics support for
the aircraft at three naval air stations in Meridian in Mississippi, Kingsville in
Texas and Patuxent River in Maryland. Performance was measured almost
exclusively against the fleet metric of providing a minimum level of ready-for-
issue (RFI) engine availability. This had previously averaged 70 per cent,
meaning that aircraft were out of action for nearly one-third of the available
time. As part of the new contract Rolls-Royce guaranteed an improved RFI
engine availability rate of 80 per cent.
For the US Navy, the switch to performance-based maintenance contracts
of this type offered three potential benefits. First, it meant that as the aircraft
operator it avoided the uncertainty of unpredictable breakdowns and repair
costs. Instead, maintenance became a known and certain fixed cost against
which it could plan. The second potential benefit was an improved level of
service, manifest in increased RFI engine availability and therefore flying
time. Finally, these services were now provided at a lower cost (see Figure
2.6).
Figure 2.6 T-45 engine costs: power-by-the-hour versus original

The success of the initial one-year contract led to a significant improvement in maintenance quality and performance reliability, with all the
performance metrics being met. As a result the US Navy exercised its option
to renew the contract for a further four years. Two years into the contract in
2005, the programme manager for the US Navy’s Undergraduate Flight
Training Systems at Patuxent River, Maryland, commented that RFI engine
availability on the T-45 Goshawk trainers had risen above the target rate of 80
per cent in the initial year reaching 85 per cent, while the average time
between engine removals had increased from 700 hours to over 900 hours and
the expected engine removal rate had fallen by 15 per cent (Smith, 2013). As
Captain Daniel Ouimette, Commodore of Training Air Wing ONE, commented,
‘Before signing the contract with Rolls-Royce we had aircraft on the ground
because of engine availability, but this has never happened under the new
regime’ (Smith, 2013).
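
Expressed as downtime rather than availability, the scale of the improvement is easier to see: the figures quoted above imply that engines went from being unavailable roughly 30 per cent of the time to only 15 per cent. The sketch below simply reworks those quoted figures and adds nothing to them.

baseline, guaranteed, achieved = 0.70, 0.80, 0.85   # RFI engine availability
for label, avail in (("baseline", baseline),
                     ("guaranteed", guaranteed),
                     ("achieved", achieved)):
    print(f"{label}: engines unavailable {1 - avail:.0%} of the time")

removals_before, removals_after = 700, 900          # hours between engine removals
gain = removals_after / removals_before - 1
print(f"Time between engine removals up by about {gain:.0%}")   # ~29%
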
It was not only in operational terms that Rolls-Royce’s power-by-the-hour
contract proved beneficial to the US Navy; there were financial gains as well.
The contract brought significant cost savings over the previous
arrangements. As Figure 2.6 shows, in the contract’s first three years the
Navy’s savings amounted to $15 million, $18 million and $5 million
respectively, with total savings over the five-year life of the contract
projected to total $61 million. In 2008, upon the conclusion of the fifth year of
the contract, a new expanded five-year contract was signed, worth $90 million
per year.
The impact of the switch to power-by-the-hour type contracts is evident in
the growth of services as a proportion of Rolls-Royce’s total turnover. Twenty
years ago services represented just 25 per cent of the company’s total
revenue. By 2000 the proportion of revenue derived from services had risen to
over a third, representing 37.5 per cent of its total revenue of £5.56 billion.
Although part of the growth was accounted for by finance and leasing
activity, nonetheless a significant proportion was accounted for by repair and
overhaul work (i.e. power-by-the-hour contracts). This had doubled in value
over the previous five years (Rolls-Royce, 2001: 12). By 2011, revenue from
services had continued to grow, reaching £6.02 billion, and amounting to more
than half (53.4 per cent) of Rolls-Royce’s total revenue of £11.28 billion.
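
The shift in Rolls-Royce’s revenue mix can be checked directly from the figures quoted above. The sketch below is a simple arithmetic restatement of those figures, not additional data from the company’s accounts.

total_2000, services_share_2000 = 5.56, 0.375   # £bn, and services' share of revenue
total_2011, services_2011 = 11.28, 6.02         # £bn

services_2000 = total_2000 * services_share_2000
print(f"Services revenue in 2000: about £{services_2000:.1f}bn")        # ~£2.1bn
print(f"Services share in 2011: {services_2011 / total_2011:.1%}")      # 53.4%
print(f"Services revenue grew roughly {services_2011 / services_2000:.1f}-fold")  # ~2.9x
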

Questions
1 Identify some of the innovations incorporated into Rolls-Royce’s Trent
900 jet engine. What form of innovation do they represent?
2 Why are profit margins on sales of new engines typically very low?
3 What was Rolls-Royce’s old business model?
4 In what ways did Rolls-Royce’s switch to digital technology (i.e. the
introduction of FADECs) assist or impede its efforts to develop a new
kind of service?
5 What was the service innovation in this case?
6 Where would you locate power-by-the-hour on Den Hertog’s four
dimensions of service innovation and why?
7 Why was Rolls-Royce able to offer a fixed price contract? What risk was
it running by so doing?
8 Explain what is meant by an ‘integrated solution’ and why this was
innovative.
9 Who was adversely affected by what Schumpeter terms ‘creative
destruction’ in this case and why?
10 What was the attraction of power-by-the-hour for (a) the US Navy and (b)
Rolls-Royce?

Questions for discussion


1 What is the value of being able to categorise innovations?
2 Why might large established firms be wary of radical (disruptive) innovations?
3 Why do product innovations tend to attract more public attention than service
or process innovations?
4 Why do process innovations sometimes have wide-ranging consequences for
society?
5 Identify two process innovations which have had a big impact on society.
6 Differentiate between component knowledge and system knowledge.
7 Choose an example of an everyday household object (e.g. an electric kettle) and
identify some of the incremental innovations that have taken place.
8 Why are only a small proportion of innovations typically radical?
9 Why is the Sony Walkman an example of architectural innovation?
10 What type of innovation is Apple’s iPod?

Exercises
1 Using any household object of your choice (e.g. vacuum cleaner, hairdryer,
etc.) identify and analyse the following:
▮ system function
▮ components
▮ system linkages
▮ incremental innovation.
Outline what you consider to be the rationale behind ONE recent incremental
innovation.

2 Identify a product that has been the subject of modular innovation. Analyse
where the innovation has occurred and the impact this has had on the product.
Explain why you think this is a case of modular innovation, noting how the
system architecture has remained unchanged.
3 What is a system? Take an example of a system and analyse it using a diagram
to show the components and the linkages between them. Indicate where there
have been examples of (a) incremental innovation, and (b) architectural
innovation.
4 What is meant by radical innovation? Take an example of radical innovation
and analyse the impact it has had on society. Take care to differentiate between
the different groups within society that have been affected.
5 What is meant by the term ‘creative destruction’? Explain, using appropriate
examples, the link between creative destruction and radical innovation.
Further reading
1 Henderson, R. M. and Clark, K. B. (1990) ‘Architectural Innovation: The
Reconfiguration of Existing Product Technologies and the Failure of
Established Firms’, Administrative Science Quarterly, 35, pp. 9–30.
This paper is an excellent starting point. It gives a clear overview of the
different types of innovation. Not only is there a rationale for the typology but
each type is explained in detail.
2 Christensen, C. M. (1997) The Innovator’s Dilemma: When New Technologies
Cause Great Firms to Fail, Harvard Business School Press, Boston, MA.
This provides a detailed examination of radical innovation, although the term
that Christensen uses is ‘disruptive technology’. Several extensive and highly
detailed case studies of radical innovations are provided.
3 Dahlin, K. and Behrens, D. (2005) ‘When is an Invention Really Radical?
Defining and Measuring Technological Radicalness’, Research Policy, 34 (5), pp.
717–734.
Another useful paper that discusses the nature of radical innovation at length.
4 Smith, D. J. (2013) ‘Power-by-the-hour: The Role of Technology in Reshaping
Business Strategy at Rolls-Royce’, Technology Analysis and Strategic
Management, 25 (8), pp. 987–1007.
A more detailed perspective on service innovation at Rolls-Royce.
3 Theories of Innovation

Objectives
When you have completed this chapter you will be able to:
▮ distinguish and analyse the concepts of technological paradigms and technological change
▮ analyse the long wave theory of innovation
▮ identify and analyse innovation theories
▮ evaluate the most appropriate theories for the analysis of innovations
▮ demonstrate practical benefits that can be derived from the application of innovation theories.

Introduction
Innovation is complex. It takes many different forms (as demonstrated in the previous chapter), it takes place at widely differing rates, and it varies enormously in its impact on the economy and on society. Innovation also attracts a great deal of attention: because it is about things that are new and different, businesses are typically keen to extol their innovations and, for the same reason, the media are usually keen to report on them. Finally, innovation does not just happen, nor is it inevitable. Though it sometimes seems otherwise, it does not take place on an entirely ad hoc and unpredictable basis.
For these reasons, theories, and the models and frameworks associated with
them, can help us in analysing and studying innovation. Theories help make sense of
innovation. In particular, theories can do a lot to provide a critical perspective on
innovation. Using theories one can identify patterns of innovation, make
comparisons between innovations and predict possible outcomes from the process
of innovation.
This chapter aims to introduce a number of theories of innovation. They are divided into two classes. First come those that take what one might term a ‘macro’ perspective on innovation. These are not theories designed to explain individual innovations; rather they aim to explain the broad pattern of innovation over time. They focus particularly on technologies and on why and how new technologies arise. Since new technologies often play an important part in innovation, they help to explain some of the underlying factors that give rise to innovation. Second comes a class of theories that offer a ‘micro’ perspective, focusing much more directly on specific innovations. These theories aim to provide explanations for why and how particular innovations arise.
Taken together these two perspectives provide a range of theoretical tools that
should enable the reader to come to a much clearer appreciation of the nature of
innovation.

The macro perspective: theories of technological change


The macro perspective is very much about explaining and analysing aspects of
technology, especially those associated with the impact on innovation of advances
in technology. That said, new technologies are not always a necessary part of
innovation. There are sometimes innovations that do not rely on the application and
use of new technologies. These are innovations that take existing technologies and
in a very creative and imaginative way reconfigure or redesign them to create new
solutions to problems or satisfy as yet unmet demands. The various generations of
Sony ‘Walkman’ provide a good example. The Walkman was a highly successful
innovation. In many respects it changed the way in which people listen to music.
However, Swann (2009: 27) argues that this was not because it embodied new
technology. Rather the innovation in the Walkman involved existing technology (i.e.
audio cassette players) but packaged and designed in ways that made it attractive to
new groups of consumers, especially young people who wanted to listen to music on
the move. But, despite the presence of some notable exceptions like Sony’s
Walkman, one has to recognise that very often there is a link between advances in
technology and the emergence of new technologies and innovation, particularly the
rate at which new innovations appear. Indeed new technologies are very often a key
aspect of many, if not most, innovations.
What do we mean by technology? The term ‘technology’ is defined by Simon
(1972) as:
‘knowledge that is stored in millions of books, in hundreds of millions or
billions of human heads, and, to an important extent, in the artifacts
themselves.’
This definition probably strikes a chord with most of us, since we generally
associate technology with artefacts, that is tangible items, especially machines and
equipment whether for the direct use of consumers (i.e. gadgets and gizmos) or for
use as part of manufacturing processes. Thus, technology is concerned with
practical knowledge of how to do things and how to make things.
Science, on the other hand, is all about understanding, in particular
understanding of the natural world (i.e. fauna and flora). Science involves the use of
systematic rigorous methods of enquiry in order to develop logical, self-consistent
explanations of natural phenomena (Littler, 1988). A critical aspect is the application
of the scientific method involving observation, the development of hypotheses and
the systematic collection of data in order to arrive at explanations. The rigorous
application of scientific method results in knowledge about the natural world being
recorded and codified as a formal body of knowledge that can be relatively easily
communicated between individuals through books and papers.
Technology, on the other hand, is about the application and use of knowledge
(often this is scientific knowledge) so that it becomes embedded in ‘artefacts’ – that
is, equipment and machines – which form the most obvious examples and readily
identifiable forms of technology. However, as Forbes and Wield (2002) note,
technology is not only embedded in artefacts, but also in people and organisations.
This form of knowledge is proprietary, that is, firm-specific. Some is explicit: that is
to say, it is codified in documents such as patents, drawings, manuals, standard operating
procedures and databases. On the other hand much of this proprietary knowledge is
tacit. That is to say, it is knowledge that resides within the individual, known but
extremely difficult or in some cases impossible to articulate or communicate
adequately (Newell et al., 2002).
As an example take microchip technology. This is not so much the finished chips
(i.e. integrated circuits or ICs containing large numbers of transistors) themselves,
though they do embody this technology, but rather the materials, components and
skills required to make chips, in particular silicon crystals, etching techniques and
the know-how and skills embodied in designing and fabricating them. Though a full understanding of the scientific principles behind the phenomena is useful, it is not necessary in order to develop technologies such as this.
While science and technology are different, they are nonetheless connected. An
understanding of phenomena can assist the development of technology. Sometimes
scientific breakthroughs lead to advances in technology. Occasionally there are
instances when it is the other way round. However, in recent times not only has the
relationship between science and technology become closer, but developments and
advances in science have increasingly led to developments in technology.

Mini Case: Thanks, Gutenberg – but we’re too pressed for time
The First Law of Technology says we invariably overestimate the short-term
impact of new technologies while underestimating their longer-term effects.
The invention of printing in the 15th century had an extraordinary short-term
impact: though scholars argue about the precise number, within 40 years of
the first Gutenberg bible between eight and 40 million books, representing
30,000 titles, had been printed and published. To those around at the time, it
seemed like a pretty big deal.
‘In our time’, wrote the German humanist Sebastian Brandt in 1500, ‘...
books have emerged in lavish numbers. A book that once would’ve belonged
only to the rich – nay, to a king – can now be seen under a modest roof ... .
There is nothing nowadays that our children ... fail to know.’ They didn’t know
the half of it.
They didn’t know, for example, that Gutenberg’s technology, which enables
lay people to read and interpret the Bible for themselves, would undermine
the authority of the Catholic church and fuel the Reformation. Or that it would
enable the rise of modern science by facilitating the rapid and accurate
dissemination of ideas. Or create new social classes of clerks, teachers and
intellectuals. Or alter our conception of ‘childhood’ as a protected early stage
in the lives of young people. In an oral culture, childhood effectively ended at
the age when an individual could be regarded as a competent communicator,
ie, about seven – which is why the Vatican defined that as ‘the age of reason’,
after which individuals could be held accountable for their sins.
In a print-based culture, communicative competence took longer to achieve
and required schooling, so ‘childhood’ was extended to 12 or 14. All these long-
term repercussions were not – indeed, could not have been – foreseen. Yet
they represent the profound ways in which Gutenberg’s technology
transformed society.
Today’s Gutenberg is Sir Tim Berners-Lee, inventor of the web. In the 17
years since he launched his technology on an unsuspecting world, he has
transformed it. Nobody knows how big the web is now, but estimates of the
indexed part hover around 40 billion pages, and the ‘deep web’ hidden from
the search engines is between 400 and 750 times bigger than that. These
numbers seem as remarkable to us as the avalanche of printed books seemed
to Brandt. But the First Law holds we don’t know the half of it, and it will be
decades before we have any real understanding of what Berners-Lee hath
wrought.
Occasionally, we get a fleeting glimpse of what is happening. One was
provided ... by the report of a study by the British Library and researchers at
University College London. The study ... combined a review of published
literature on the information-seeking behaviour of young people more than 30
years ago with a five-year analysis of the logs of the British Library website
and another popular research site that documents people’s behaviour in
finding and reading information online.
The findings describe ‘a new form of information-seeking behaviour’
characterised as being ‘horizontal, bouncing, checking and viewing in nature.
Users are promiscuous, diverse and volatile.’ ‘Horizontal’ information-seeking
means ‘a form of skimming activity, where people view just one or two pages
from an academic site then “bounce” out, perhaps never to return’. The
average times users spend on e-book and e-journal sites are very short:
typically four and eight minutes respectively.
‘It is clear’, says the study, ‘that users are not reading online in the
traditional sense, indeed there are signs that new forms of “reading” are
emerging as users “power browse” horizontally through titles, contents pages
and abstracts, going for quick wins. It almost seems that they go online to
avoid reading in the traditional sense.’ These findings apply to online
information seekers of all ages.
The study confirms what many are beginning to suspect: that the web is
having a profound impact on how we conceptualise, seek, evaluate and use
information. What Marshall McLuhan called ‘the Gutenberg galaxy’ – that
universe of linear exposition, quiet contemplation, disciplined reading and
study – is imploding, and we don’t know if what will replace it will be better or
worse. But at least you can find the Wikipedia entry for ‘Gutenberg galaxy’ in
0.34 seconds.
Source: Naughton, J. (27th January 2008) ‘Thanks, Gutenberg – But We’re Too Pressed for Time to
Read’, Copyright © Guardian News & Media Ltd 2008.

Technological change
Advances and improvements in technology leading to the emergence of new
technologies are especially important where innovation is concerned. Advances in
technology give rise to what is termed ‘technological change’. Technological change
is a broad term that encompasses both advances in technology and the impact of
such advances. Taken in the long term, technological change is a powerful and
important driver of innovation. New and improved technologies that form part of
technological change act as the source of many innovations. As new technologies
become available so innovators apply them to create new products, new services
and new processes.
While new products are often one of the most prominent manifestations of
technological change, it should be noted that technological change also gives rise to
improvements in product quality and valuable efficiency gains. The latter are particularly important, if often underrated, aspects of technological change.
Improvements in efficiency (i.e. the efficiency of manufacturing processes), brought
on by advances in technology, facilitate higher productivity and lower costs for
firms. This in turn means lower prices and bigger sales volumes, making more
products available to more people. Hence, as well as driving innovation,
technological change can also give rise to new patterns of consumption and new
behaviours. New processes lead to new ways of working. The outcome is significant
changes in the economic and social facets of human existence.

Technological paradigms
The path of technological change does not for the most part see technologies
advancing in a simple linear fashion. As far back as the 1930s, Schumpeter noted
that innovations driven by technological change were not evenly distributed over
time (Dodgson et al., 2008). Rather, technological change occurs in something close
to fits and starts with periods of relatively modest change broken by bursts of more
sustained change when technological advances take place quickly and cluster
together in so-called ‘leading sectors’. In explaining this pattern Dosi (1982)
introduced the concept of the technological paradigm, based on Kuhn’s (1970 [1957]) notion of scientific paradigms or schools of thought. According to Kuhn, scientific advance proceeds through a series of periodic revolutions (see Figure 3.1), such as those initiated by the work of Copernicus, Newton or Einstein, which led to paradigm shifts in which new schools of thought replaced existing ones.
Figure 3.1 Kuhn’s scientific paradigm

Source: Cvetnavić et al. (2012: 151): Permission granted by The Facta Universitatis, Series: Economics and
Organization.

Mini Case: 15 November 1971


15 November 1971 is not a famous date in history, but it was on this day in
Santa Clara, California that an event occurred that was to change the course
of history (Perez, 2002). Bob Noyce and Gordon Moore, the founders of a then
little known electronics start-up company known as Intel, advertised for
general sale in Electronic News the world’s first microprocessor, the 4004. For
the first time a computer’s central processing unit (CPU) was contained within
a single integrated circuit. Comprising 2,300 transistors, the 4004 was made possible by a new silicon-gate manufacturing process. Up to
this point, designers had implemented small computers and similar control
devices using large numbers of integrated circuits or ICs (i.e. chips). They
configured the physical arrangement of individual ICs (each with a dedicated
function) in order to perform a specific task. The microprocessor was very
different. For one thing the principal ICs making up a CPU were contained in
a single chip. But more importantly the microprocessor required a different
way of thinking. Instead of moving physical objects (i.e. chips) to perform a
task, all it required was reprogramming the instructions stored in program
memory (Berlin, 2007). Hence the introduction of the microprocessor brought
a decisive shift from hardware to software. Microprocessors not only made
the personal computer possible, they formed the basis of everything from
handheld devices (e.g. mobile phones) to supercomputers. The information
age had begun.
Sources: Berlin (2007), Perez (2002).
A technological paradigm describes a group of technologies that represent a general
area or field of technology. Such a field forms what Von Tunzelmann (1995)
describes as ‘the technological domain’ within which technological advances take
place. This domain forms the focus for what Perez (2002) terms ‘a strong interrelated
constellation of technical innovations’. Dosi (1982) cites nuclear technologies,
semiconductor technologies and organic technologies as examples of technological
paradigms. The case of microchip technologies cited earlier in this chapter would
also be an example of a technological paradigm as it has spawned a host of related
innovations including personal computers, smartphones, digital cameras and MP3
players.
The significance of a technological paradigm is that, much like a scientific
paradigm, it tends to be based on a selected set of ideas. These ideas in turn exert a
powerful influence on the process of innovation in terms of:

▮ the knowledge base
▮ principles and properties
▮ materials
▮ generic tasks
▮ skills.

Hence a technological paradigm plays a big part in defining ‘the rules of the game’
for the process of innovation (i.e. how it is carried out). Indeed Dosi (1982: 153)
notes that technological paradigms tend to have a powerful ‘exclusion effect’ that
confines the efforts and technological imagination of designers, engineers and
technicians, so that innovations, for a time at least, are often confined to the application of a given technology to a relatively limited range of products.
However, just as new scientific paradigms emerge as the product of a scientific
revolution (see Figure 3.1) that overturns existing ideas, so too new technological
paradigms emerge as the product of technological advances that produce a paradigm shift introducing a revolutionary new technology. This new technology represents a major discontinuity in the path of technological change,
heralding what Christensen (1997) terms disruptive innovations. Examples of
paradigm shifts leading to disruptive innovations include new technologies such as
the internal combustion engine, the microchip, the jet engine, antibiotics and latterly
electric vehicles. Hence, technological paradigms are of great significance for
technological change as they result in an uneven path, with periods of relatively
modest change being interrupted from time to time by paradigm shifts leading to
rapid changes that often have a huge impact on many aspects of both the economy
and society.
The change brought on by a paradigm shift, which induces a move to a new and often very different technology, typically brings with it new knowledge, new
principles, new materials and new tasks. Indeed a paradigm shift represents in
Perez’s (2002: 8) words, the emergence of ‘new and dynamic technologies, products
and industries capable of bringing about an upheaval in the whole fabric of the
economy and of propelling a long term upsurge of development’. The new products
and new industries are typically the result of new actors (i.e. new firms) appearing
to take up the new technology while existing firms often struggle to adapt, as the
new paradigm ‘breaks the existing organizational habits not merely with regard to
technology but also management of the economy and social institutions’ (Perez,
2002: 7).
New technologies that form the basis of a paradigm shift very often find new
uses quite different from those intended by their creators. Similarly, there is often a
significant gap in time between the development of new technology and the actual
onset of the paradigm shift. Semiconductors, in the form of the transistor, were developed by AT&T’s Bell Labs in the 1940s (Gertner, 2013) as a replacement for the bulky, unreliable and inefficient thermionic valves used by telephone companies to amplify signals on long-distance telephone lines. But the transistor went on to revolutionise radio, television and gramophones by making sets much smaller and portable. One of the first to see the potential of the transistor to miniaturise electronic products was a small Japanese start-up company called Sony, which used the transistor to create one of the world’s first transistor radios. Such was the
impact of this technology that small portable radios were known as transistor
radios or simply transistors. Nor did semiconductor technology stop there. As
Gertner (2013: 250) points out in his history of Bell Labs, ‘the transistor was the ideal
digital tool’. Hence in time semiconductors in the form of microchips (i.e.
microprocessors) came to transform the computer from a machine that filled a large
room to one that did not even fill a small briefcase and formed the basis of today’s
digital age and the myriad innovations associated with it.

Creative destruction
The dramatic nature of the changes associated with a paradigm shift, led
Schumpeter (1950) to introduce the concept of ‘creative destruction’, that is, the
reconfiguration of the economy as old industries fall by the wayside and new ones
emerge in the face of dramatic changes brought on by the emergence of new
technology.
Christensen (1997) noted how, when what he called ‘disruptive’ technologies appear, well-established incumbent firms often find it hard to adapt, a failure that frequently leads to their demise. When the new technology of digital photography arrived during the 1990s and early 2000s, leading brands such as Kodak and Polaroid failed to adapt effectively and within a very short space of time exited the industry.
The destruction phase of creative destruction can be a painful and difficult
process. Many people, indeed sometimes whole communities, find that very rapidly
their livelihood and even their way of life becomes blighted. Industries decline,
factories close, thousands of jobs are lost and towns and cities become economically
depressed. Although the new technology creates new jobs it is often difficult for
employees to transfer, given the need for new knowledge and skills.

Mini Case: Creative Destruction in Photography


Kodak was once the Google of its day. Founded in 1888, in the early years it
was a pioneer of the new technology of photography. In the 1930s it pioneered
colour pictures with its ‘Kodachrome’ film. It also made use of innovative
marketing techniques. ‘You press the button, we do the rest’, was its strap line
in 1888.
For the best part of a century Eastman Kodak had a virtual monopoly of
the film and camera market in the US. By 1976 its share of the market for
photographic film was 90 per cent while it also enjoyed an 85 per cent share of
camera sales. Until the 1990s it was regularly rated one of the world’s five
most valuable brands. Kodak planned a move into digital technology in the
early 1990s. Apple’s pioneering QuickTake digital cameras launched in 1994,
for instance, had the Apple label but were produced by Kodak. But
implementation of the new digital strategy was slow. Kodak’s core business
faced no pressure from competing technologies, and as Kodak executives
could not fathom a world without traditional film there was little incentive to
deviate from that course. Kodak’s revenues peaked at nearly $16 billion in
1996 and its profits at $2.5 billion in 1999. Meanwhile digital technology was
gaining ground as digital files replaced film, and latterly smartphones
replaced cameras. Consumers gradually switched to the digital offerings from
companies such as Sony. By 2011 Kodak’s revenues were down to $6.2 billion.
In the same year it reported a third-quarter loss of $222 million, the ninth
quarterly loss in three years. Back in 1988, Kodak had employed over 145,000
workers worldwide; by 2011 this had fallen to barely one-tenth as many. At the
same time the company’s share price crashed, falling dramatically in the first
decade of the new century, to give it a market capitalisation of just $220
million.
Yet Kodak had built one of the first digital cameras back in 1975. The new technology, followed by the development of smartphones that double as cameras, battered Kodak’s old film- and camera-making business almost to death.
Kodak was slow to change. In an age of digital technology it struggled to
adapt its long-standing and highly successful ‘razor and razor blade’ business
model by which it sold cheap cameras and relied on customers buying lots of
expensive film. According to Harvard Business School’s Rosabeth Moss
Kanter, its executives ‘suffered from a mentality of perfect products, rather
than the high-tech mindset of make it, launch it, fix it’. The company did try to
diversify but took years to make its first acquisition. Similarly, it developed a
venture-capital arm that was widely admired, but the businesses it invested in
never made big enough bets to create breakthroughs. Kodak did eventually
build a hefty business out of digital cameras. By 2005, it ranked No. 1 in the
US in digital camera sales as they surged 40 per cent to $5.7 billion. But this
success proved to be short-lived, eventually being scuppered by the switch to
camera-phones.
Latterly Kodak has focused on the related field of imaging and digital
printing and has also tried to make money from its vast portfolio of
intellectual property, but this looks very much like too little too late. Kodak,
like many great companies before it, appears to have been unable to adapt in
the face of a paradigm shift to new technologies. After 132 years it appears to
have yielded to the forces of Schumpeter’s creative destruction.
Source: The Economist (2012).

Kondratiev’s long wave theory


The nature of technological paradigms and their capacity to induce an uneven path
of technological change fits well with the notion of innovations following a cyclical
pattern known as the ‘long wave’. ‘Long’ in this instance is not the 5–10 year span of
the conventional business cycle, but half a century, that is, 45–55 years. The idea of
such a long cycle of activity was pioneered by Kondratiev, a Russian economist who
founded and directed the Institute of Conjuncture in Moscow in the 1920s (Freeman
and Louçã, 2001). Kondratiev noted that economic activity exhibited significant
discontinuities resulting in a cycle that stretched from one major downturn through
a period of recovery until another major downturn over a period of approximately
50 years. Compared to the conventional business cycle this was a very much longer
span, hence the term long wave.
The concept of a long wave cycle was taken up by Schumpeter, one of the
pioneers of the study of innovation, who utilised Kondratiev’s long wave as a feature
of his analysis of the course of innovation over time. According to Schumpeter
(1939) each new long wave was the product of a set of technological innovations
(i.e. a new technological paradigm) that profoundly reshaped the pattern of
consumption and production. Each wave represents a new set of
enabling/transforming technologies (see Table 3.1). The mid-twentieth century, for
example, saw the onset of innovations that facilitated a switch to mass production,
resulting in dramatic changes in the nature and availability of consumer goods, such as cars and electrical goods (e.g. radios, TVs). We are currently enjoying a
long wave that has transformed the availability and use of information. The next
wave, which may well begin within the next decade or so, may well be based on new
developments in biotechnology and nanotechnology (Nefiedow and Nefiedow,
2014).

Table 3.1 Kondratiev’s long wave cycles

Among the most significant features of the long wave is that it follows a regular course. A long climb up from depression, through recovery and prosperity phases, leads to a maturity phase in which a shallow decline gives way to a steep fall in the depression phase (Figure 3.2). Each of these phases has implications for the pattern
of innovation.
Figure 3.2 The long wave cycle

In the recovery phase new discoveries and scientific breakthroughs feed through
into new technologies that in turn are developed into new innovations that create
entirely new opportunities for investment, growth and employment. Often these new
opportunities will be created by newcomers (i.e. new firms) who view technology in
a new way. At this point in the cycle there is often a high degree of uncertainty that
produces a variety of competing product configurations. This was very much the
case in the 1980s when the diffusion of the new microchip technology saw a host of
new personal computer designs (including one called the Apple II) hit the market,
often with very different features. Offering high novelty value and better performance, these innovations often command a price premium and sell to individuals and firms with an appetite for gadgets and gizmos who like to be seen as being at the forefront of advances in technology. Under these conditions
there will often not be particularly strong pressures for production efficiency. As
yet, these innovations are finding only a specialist market. Higher prices mean
higher profits from these innovations; these act as a decisive impulse for new surges
of growth which in turn act as a signal for imitators (Freeman, 1986) to enter the
fray.
In the prosperity phase the innovations begin to diffuse to a wider range of
applications through finding a broader market. As the innovations and their
associated new technologies reach a wider market they become better known and
imitations frequently appear. They may well catch the mood of the popular
imagination. Often there will be a bandwagon effect as others try to cash in on the
new technology. The combination of appropriate financial conditions and a large
number of would-be imitators can easily lead to a speculative boom (e.g. the ‘dot.com’ boom of the late 1990s) as investors try to cash in on the technological
advances. Over-ambitious and unrealistic plans combined with the ever-increasing
cost of capital lead to the inevitable financial crash. Such a crash typically heralds
the onset of the third phase.
In the third phase, with surplus capacity and diminishing returns as the limits of
technological advance become evident, price competition becomes intense. It is at
this point that the focus of innovation shifts. The new technology which has hitherto
been applied to create new products now begins to spill over into process
applications. This may well be where the transformative capacity of the new
technology is at its greatest. New production processes can sweep away old working
practices, leading to dramatic improvements in productivity.
Finally, market saturation leads to ever-greater price competition and declining
profitability which are the features of the depression phase of the long wave cycle.
So too are the decreasing returns that set in as the technology reaches its limits. The result is mergers and acquisitions in pursuit of greater
efficiency. This is the ‘shake-out’ phase. Despite the depressed and difficult trading
conditions, this phase is also the point at which the discoveries, breakthroughs and
inventions that will form the basis of the next long wave begin to take place. In
Schumpeter’s analysis the innovations that occur in the recovery phase will tend to
cluster or swarm. Hence, the early stages of each long wave are associated with a
‘swarm’ of new technology-based innovations appearing on the market,
accompanied very often by a sense in the public imagination that technological
change is speeding up.
The same phenomena are to be found in the work of Mensch (1979). He showed
that, as Schumpeter predicted, the rate of innovation over time tends to vary.
According to Mensch, innovation is subject to a ‘wagon-train’ effect (Hall, 1981: 534).
Thus, while inventions and discoveries can occur at almost any time, innovations
tend to bunch together at the end of one long wave and the beginning of another.
This is one reason why the transforming effect of technological change can be so
dramatic.

The Information Age: our very own long wave


Freeman and Louçã (2001) argue that we are currently witnessing a fifth Kondratiev
long wave. Beginning in the 1980s, this long wave is associated with new digital
technologies, especially in computing, telecommunications and the Internet. These
technologies have between them begun to transform many aspects of how we
communicate and how we organise our daily lives.
As with previous long waves the origins of the many transforming technologies
lie in the breakthroughs and inventions that occurred in the downward phase of the
previous Kondratiev. Developments in semiconductors underpin developments in all
three of the transforming technologies. Jack Kilby’s idea that, instead of transistors
being linked together and mounted on a circuit board, they could simply be
manufactured from a single piece of silicon, led to the production of the first
integrated circuit (IC) as far back as 1958 (Campbell-Kelly, 2004). However, ICs
were initially very expensive and used for specialist defence applications, such as
the guidance system of the Minuteman missile. The development of the first
microprocessor by Intel in 1971 was another decisive event. Though it still required
successive incremental improvements to deliver appropriate performance, these
advances paved the way for big changes in other technologies.
One of these technologies was computing, where the availability of microchips
(i.e. ICs that comprised a complete microprocessor) led to the development of the
first personal computers, though it was to be another decade before they began to
challenge computing orthodoxy based on large mainframe applications.
Developments in semiconductors also fed through to telecommunications.
Transistors were first used for switching in the 1960s. With the introduction of digital technology in the form of pulse code modulation (PCM) and of packet switching, the move to digital communication gathered pace. These changes
paved the way for the development of the Internet and a host of new developments
linked to it in the last 20 years.
As with previous long waves, the transformation does not stop at the technology
itself. The transformation is changing working lives, business models, leisure
patterns and the structure and shape of business organisations themselves.

The implications of a macro perspective


This macro perspective on innovation provides a much broader and more long-term
view of innovation. It helps to provide a context to the various different types of
innovations introduced in the previous chapter. At the same time it is particularly
valuable in highlighting how individual innovations often do not take place in
isolation but form part of bigger trends and patterns.
It shows how technology, in particular advances in technology, exert a very
powerful influence over both the type and rate of innovation. Hence, it is overly
simplistic to assume, as is very often claimed, that the rate of technological change
and the rate of innovation are advancing exponentially. At the same time analysis at
this level refutes the claims of those researchers (Huebner, 2005) who suggest that
the rate of technological advance, far from speeding up, is in fact slowing down. The
perspective presented here shows that, if anything, the rate of technological change
is cyclical, but over a cycle that runs for several decades.
As a result of this cyclical pattern, innovations can to some extent be predicted,
or at least one can predict that certain types of innovation are more likely at certain
times. Radical innovations, for example, are more likely to occur as part of a
paradigm shift, when we might expect to see a cluster of significant innovations
occurring if not simultaneously then in close proximity.
Finally, the macro perspective provides some insights into some of the powerful
driving forces at work, such as technological change and shifts in technological
paradigms, and the sort of impact they can exert on the rate at which innovation
takes place.

The micro perspective: theories of innovation


The previous section has focused on the bigger picture, in particular the driving
forces behind innovation. This section in contrast is more focused, concentrating on
specific innovations and factors that influence the nature and the success or failure
of particular innovations. A number of theories are presented as part of this more
focused perspective. The value of such theories lies in their use as analytical tools
that can help in understanding and making sense of innovation. As such they have
considerable explanatory power. They can help in explaining and even predicting
the outcome of the innovation process. They also have value in helping to categorise
and classify innovations. At the same time these theories can help in identifying
some of the factors that bring about successful innovation.
There are many theories associated with innovation, and just four are presented
here. The selection of four theories is fairly arbitrary. One could very easily have
included many more. However, the four selected here offer scope for applying a
number of different theories in different contexts without unduly complicating the
picture. There are other texts that can provide the reader with further theories
should these be required (Ettlie, 2006).
The four theories are:

▮ technology S-curve
▮ punctuated equilibrium
▮ dominant design
▮ absorptive capacity.
These theories are associated primarily with technological innovation. Despite the
emphasis on technology within the process of innovation, they all provide adequate
scope for analysing innovations in general.

Technology S-curve
One of the central ideas behind the theory of the technology S-curve is the notion of
a technology life cycle. This implies that over time the capability of a technology to
deliver improved performance will vary. Early in the life cycle the potential for a
given investment of engineering effort to deliver improved performance will be
high. Successive amounts of additional engineering effort produce ever greater
improvements in performance. This is the well-known ‘learning curve’ effect, which
results from a new technology becoming better understood, better controlled and
more widely diffused. Eventually the relationship between effort and performance begins to weaken, until a point is reached where additional engineering effort produces diminishing returns in performance improvement. This implies that a given technology eventually reaches some kind of ‘natural limit’ as it matures (Figure 3.3).

Figure 3.3 Technology S-curve

Source: Foster (1986).
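A simple way to see the logic of the S-curve is with a logistic function. The short Python sketch below is purely illustrative (the logistic form, the parameter values and the function name are assumptions made for this example, not taken from Foster): it shows how equal increments of engineering effort yield first growing and then sharply diminishing performance gains as a technology approaches its natural limit.

import math

def performance(effort, limit=100.0, k=0.1, midpoint=50.0):
    # Illustrative logistic S-curve (an assumed model, not Foster's own data):
    # performance rises slowly at first, then rapidly, then flattens out
    # as it approaches a natural limit.
    return limit / (1.0 + math.exp(-k * (effort - midpoint)))

# Equal increments of engineering effort deliver unequal performance gains:
# large around the middle of the curve, small as the technology nears its limit.
for effort in range(0, 100, 20):
    gain = performance(effort + 20) - performance(effort)
    print(f"effort {effort:3d} -> {effort + 20:3d}: performance gain {gain:5.1f}")

The point of inflection (here at the assumed midpoint of the curve) is where successive gains stop growing and start to shrink, which is precisely the point at which, as discussed below, one might start looking for a successor technology.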

Foster (1986) argues that technologies simply have physical limits. He cites the
technology of sailing ships whose speed was limited by the physics of wind and
water. The tea clippers that traded between China and Europe in the second half of the nineteenth century were among the most efficient sailing ships ever built, representing the high point of sailing-ship technology. They were eclipsed by steamships, which substituted the new technology of steam power for wind. As a technology matures, performance improvements become smaller and smaller, and further progress requires a radical innovation based on a new technology. The piston-engined aircraft of the Second World War represented the physical limit of propeller technology in
terms of speed. To go any faster required a new technology. That technology took
the form of the jet engine pioneered by Whittle in the UK (Golley, 1996) and Von
Ohain in Germany (Conner, 2001). The jet engine worked in a completely different
way from the piston engine, requiring new skills, new knowledge and new materials.
Yet it had the potential to power aircraft at much higher speeds.
Sahal (1981) suggests that technological maturity is also a matter of scale and
complexity. Scale is a matter of the product getting impossibly large in the pursuit of
improved performance, while complexity is a matter of there being more and more
components. In order to avoid the problems caused by scale and complexity a
radical innovation is required to break free of the limits of the technology.
Christensen (1997: 34) suggests that the technology S-curve can be useful not
only in terms of its descriptive value (i.e. the extent to which it represents what
happens in real life) but also in terms of its predictive value. By this is meant the
ability of the technology S-curve to predict the course or developmental path of
innovation. The predictive power of the technology S-curve lies in the point of inflection, which shows where the existing technology has reached maturity and is going into relative decline. The technology may still be making an effective
contribution, but the point has been reached where one might expect to find a
successor technology (i.e. another technology S-curve) to arise. The challenge is to
identify and develop the successor technology. The inability to anticipate new
technologies has been cited as the reason why incumbent firms often fail and as the
source of advantage for new entrants.

Mini Case: Screen Wars – CRT versus LCD


TV sets became consumer products in the years after World War II. There
were a mere 15,000 in the UK in 1947, but by the time of the coronation of
Queen Elizabeth in 1953 there were 1.5 million and by 1968 numbers had risen
tenfold to 15.1 million. The early sets used thermionic valves with a cathode
ray tube (CRT) for the display screen. CRTs were bulky and heavy and
initially TV screens were small, varying in size between 10 inches and 14
inches. They were often contained within large wooden cabinets with doors at
the front so that when not in use they looked like a piece of furniture.
Over the course of the 1950s and 1960s TVs gradually got bigger. By the 1970s, when colour TV came in, the average screen size had increased from 14 inches to 21 inches. Although TV screens had got even bigger by the mid-1990s,
most were still fitted with a CRT screen. However, CRTs were rapidly
reaching their limits. Although it was possible to make large screen TVs using
CRT technology, the resulting sets were large, bulky and very heavy, making
them impractical.
The mid-1990s saw the introduction of so-called flat screen TVs. These
utilised a completely new technology in the form of liquid crystal display
(LCD) screens. These were both much thinner and much lighter than sets
using the old CRT technology and were more energy efficient. LCD
technology had been invented in the 1960s and in the 1970s was used for
electronic devices like watches and calculators. The first TVs to utilise LCD
technology were introduced by Sharp in 1988. However, they gave a very
poor quality picture and attracted little interest. As a result few expected LCD
to catch on. More promising was plasma technology first introduced for TVs
in 1997. However, plasma technology proved short-lived. Although it gave
excellent picture quality it was only available as large screen displays of 42
inches or more, was not particularly energy efficient and was expensive. It
was to remain a niche product confined to specialist applications (e.g. high
quality home cinema applications).
Meanwhile, despite its deficiencies, CRT continued to dominate the market.
By 2006 almost 80 per cent of the TVs sold still used CRT technology.
However, at this point developments in LCD technology began to yield
dramatic performance improvements in terms of picture quality and lower
costs. More important, however, was the fact that LCD technology was very
flexible. It was capable of being scaled down to small sets in the 14–30 inch
range as well as being scaled up for sets in the 40–50 inch range. This feature
was to prove decisive. It meant LCD technology could be used for other
applications such as PC screens, resulting in high volume production and
lower costs. Even so it was not until the fourth quarter of 2007 that LCD sets
outsold CRT ones for the first time. From this point onwards changes came
rapidly. Sony, one of the leading TV makers, closed its last CRT plant early in
2008. Sales of flat screen LCD sets meanwhile rose from 105 million in 2008 to
187.9 million in 2010.

Punctuated equilibrium
Abernathy and Utterback (1978) used the distinction between incremental and
radical innovation introduced in Chapter 2 to show how established industries go
through periods of stability with changes confined to incremental innovations.
Eventually, however, the stability is broken by a radical innovation that, in
Christensen’s (1997) terminology, is highly disruptive, bringing the period of
stability and equilibrium to an end.
The idea of an interrupted, stepped pattern of change comes from biology where
Eldredge and Gould (1972) introduced the concept of punctuated equilibrium as a
theory of evolution in the early 1970s. Tushman and Anderson (1986) drew on the
concept to describe a pattern of innovation that proceeds via a succession of fits
and starts. In this pattern, major technological breakthroughs that form
technological discontinuities are relatively rare with most innovations being
incremental (see Figure 3.4). Tushman and Anderson (1986) cite the cases of Chester
Carlson and the development of xerography (photocopying) and Alastair Pilkington and the development of the float glass process for manufacturing plate glass as examples of discontinuities. They observe (Tushman and Anderson, 1986: 441) that technological change is: ‘a bit-by-bit cumulative process until it is punctuated by a major advance’. Both xerography and float glass were major advances. Each in its own way was highly disruptive, but in both instances the breakthrough was followed by a period of relative stability, in which the innovations that followed were incremental ones resulting in modest product improvements rather than significant changes.

Figure 3.4 Punctuated equilibrium

The discontinuities that punctuate periods of equilibrium are linked to major technological advances. These step changes Tushman and Anderson (1986: 441)
describe as: ‘so significant that no increase in scale, efficiency or design can make
older technologies competitive with the new technology’. No amount of incremental
change for the existing technology will render it competitive.
A key feature of technological discontinuities is that they require new skills, new
abilities and new knowledge in both the development and the manufacture of the
product. As a result such innovations can be ‘competence destroying’. This means
that existing firms are unable to use the knowledge and experience they have
accumulated during the period of equilibrium. Given that the existing knowledge
represents a big investment made over a long period of time, they are likely to want
to make use of this knowledge. Hence they are more likely to opt for incremental
innovations.
Nor is it just that ‘incumbent’ firms possess knowledge and expertise linked to the
old technology that gives rise to inertia; other factors include:
▮ traditions
▮ sunk costs
▮ internal political constraints
▮ commitment to outmoded technology.
If working practices have changed very little in a long time, they may have become
so widely accepted and so deeply ingrained in the organisation that they have
become traditions. As traditions, the rationale and reasoning behind the working
practices in question may have long since disappeared, but because this is the way
things have been done for a long time, traditions are very hard to break. Nor are
traditions confined to working practices: they can cover all aspects of a business.
Sunk costs are costs associated with prior investments. These could cover
equipment, buildings, systems or even training. Firms make investments all the time,
but where the investment is technology-specific the cost of the investment is a sunk
cost. The key aspect is that the investment cannot be transferred to the new
technology – it is sunk in the old technology. If the organisation has invested
heavily, then it may be expecting to spread the cost over future output. As a result
the organisation may be reluctant to break from the old technology.
Internal political constraints can arise for all sorts of reasons. If managers have a
strong commitment to the old technology – perhaps by virtue of their training or
their knowledge – they may well be reluctant to embrace a new technology they
know little about. Not only will this lead to a reluctance to innovate on their part, it
may even stop others pursuing innovation.
All of these factors serve to constrain or limit the responsiveness of existing
firms. This helps to account for the existence of periods of equilibrium. Under these
conditions existing firms may confine themselves to incremental innovations,
thereby prolonging the period of equilibrium. Eventually, however, radical
innovations, usually driven by a major technological advance such as the
introduction of a new technology, lead to discontinuities that punctuate the
equilibrium. A period of ‘ferment’ then ensues. Very often new firms, or outsider
firms on the fringes of the industry, are the first to apply the new technology and
develop radical innovations.

Mini Case: Carbon Fibre in Formula One


Ever wondered how it is that drivers in Formula One are able to survive
crashes at speeds of up to 200 mph? The answer lies in the material used to
build their cars. The chassis (the internal frame in which the driver sits and to
which the wheels and engine are fitted) of a modern racing car is constructed
not of metal (as with a conventional road car), but of a form of plastic called
carbon fibre. Carbon fibre is both extremely strong and extremely light. It is
also very rigid, which was the main attraction of the material when designers
first came to use it.
Racing cars have used a variety of materials for chassis construction. In
the 1950s racing cars used steel, with lengths of steel tubing welded together
to create what was termed a ‘spaceframe’ chassis. Although cheap and easy to
build, a spaceframe chassis was relatively heavy because it required the
bodywork to be built separately and then attached to the car. Spaceframe
chassis also gave relatively poor handling, which made cars slow through the
corners. Then in the 1960s Colin Chapman at Lotus came up with an entirely
new design, the ‘monocoque’ chassis. Built of aluminium sheets riveted
together to form a strong tub, monocoque chassis were lighter and stronger
than spaceframe ones. However, the key difference was that there was no
longer separate bodywork, which made the monocoque much more rigid
giving better handling. Lotus cars were soon winning races and within three
years the aluminium monocoque had become standard in Formula One.

Table 3.2 Evolution of chassis technology in Formula One

In the 1980s cars got faster and faster as manufacturers produced ever
more powerful engines and designers found other ways of improving
performance. The emphasis turned to aerodynamics, particularly ‘ground
effects’ where the shape of the car is used to create an aerofoil shape that
creates down force which improves both traction and handling. In the quest
for better aerodynamics designers made the chassis narrower. Unfortunately
a narrower chassis was less rigid, cancelling out the improvement in handling.
It was to combat this that John Barnard at McLaren came up with an entirely
new idea: a car with the chassis made entirely of carbon fibre. A form of
plastic, carbon fibre is lighter than aluminium and stronger than steel. Since the material was already widely used in the aerospace industry, Barnard subcontracted the construction of the new carbon fibre chassis to Hercules Aerospace, a US company that made
carbon fibre parts for the AV8B jump-jet and the F-18 fighter. The McLaren
MP4 was soon winning races and in a very short time all the Formula One
teams had followed suit and were building cars using carbon fibre.
Source: Smith (2012: 337).

During a period of equilibrium, new entrants would normally find they were at a
disadvantage to incumbents, but when technological discontinuities arise and a
process of ferment occurs, the tables may be turned. While incumbents may be stuck
with ‘legacy’ problems such as sunk costs, unwanted skills and obsolete plant, new
entrants can respond more effectively to the new conditions precisely because they
are unencumbered by the baggage of an old technology, traditional ways of doing
things and an outdated view of the world.
As a theory that helps to explain the pattern of innovation exhibited in real life,
punctuated equilibrium has its limitations. It is a theory in which technology plays a
central role. Some might argue that it is a theory of technological evolution.
Similarly, it is an external theory in the sense that it tells us little about how
innovation is carried out inside the firm. However, despite this, punctuated
equilibrium is of value as an innovation theory. It helps to explain inertia, in
particular the reluctance of existing firms to adopt a new technology, which in turn
explains why some innovators find it hard to interest existing firms in new ideas.
The case of James Dyson (Dyson, 1997) and his attempts to get established vacuum
cleaner manufacturers to embrace his new, more efficient dual-cyclone technology, which eliminated the need for a dust bag, is a good example. What was happening
was that incumbent firms were comfortable with the equilibrium that existed in the
vacuum cleaner industry. Dyson’s ideas posed a technological discontinuity that
threatened their investments in know-how and manufacturing capability.
Consequently, they were not keen to embrace the new technology. Another merit of
punctuated equilibrium is that it integrates the typology of innovation that
distinguishes radical and incremental forms of innovation. Finally, punctuated
equilibrium provides a good fit with reality where technological discontinuities can
be very disruptive.

Dominant design
A dominant design is a design or product configuration that comprises ‘the one that
wins the allegiance of the marketplace, the one that competitors and innovators
must adhere to if they hope to command a significant market following’ (Nordström
and Biström, 2002: 713). Quite literally it is a design that all or most firms eventually
copy.
The theory of a ‘dominant design’ is linked to ideas about the evolutionary
development (Teece, 1986) of science. In science ideas are constantly evolving. The
process of evolution comprises two stages: a pre-paradigmatic phase, when many
ideas are circulating and no one explanation of a phenomenon holds sway, and a
paradigmatic phase, when a single explanation or theory becomes widely accepted.
The switch to the latter phase and the emergence of a dominant paradigm signals
scientific maturity. This paradigm remains the accepted view until perhaps it too is eventually overturned by another paradigm, as in the sixteenth and seventeenth centuries when Copernicus’s theories of astronomy overturned those of Ptolemy (Teece, 1986).
The early stages of innovation involving the introduction of new technologies are
often very similar. They tend to be characterised, according to Anderson and
Tushman (1990), by an era of ‘ferment’ (see Figure 3.5) where designs and
configurations are fluid. Initially it is simply a matter of new designs being
substituted for old ones. Following substitution there is a phase (often overlapping)
of intense design competition (see Figure 3.5). Competitors soon appear, with rivalry
based on competing designs, each of which is markedly different. In this pre-
paradigmatic phase no one design or configuration stands out. Typically several
design variants emerge, each embodying the fundamental breakthrough technology,
but applied in a different way to give a different product architecture. The evolution
of the bicycle (Rosen, 2002), which generated a proliferation of competing designs
or architectures in the late nineteenth century, including the famous ‘penny-
farthing’, illustrates this pre-paradigmatic period of ferment.

Figure 3.5 Dominant design

Source: ‘Innovation Management: Context, strategies, systems and processes’, Ahmed and Shepherd,
Pearson Education Limited, Copyright © Financial Times Press 2010.

A process of technological evolution (Srinivasan et al., 2006) involving variation, selection and retention (Basalla, 1988) leads over time to the emergence of a single design that forms the accepted market standard (see Figure 3.5). Teece (1986) likens this to a game of ‘musical chairs’ in which competing designs
gradually fade away to leave a dominant design. Others (Bonaccorsi and Giuri,
2000) have used the term ‘shakeout’ to describe the way in which most of the early
designs fall away. When a dominant design emerges, competition then shifts away
from design and towards incremental innovations, possibly associated with
variables such as branding and promotion. Under these new circumstances new
factors, such as scale and learning, become more important and specialised capital
assets begin to replace general-purpose capital assets as firms seek lower unit costs
through economies of scale and learning. In the case of the bicycle, the dominant
design was the ‘safety bicycle’ that emerged at the end of the nineteenth century
from the proliferation of different forms, to include a variety of features we would
all recognise today, including a triangular all-metal frame, bearings, chain drive to
the rear wheel and pneumatic tyres (Anderson and Tushman, 1990).
Utterback (1993) has shown that dominant designs are more likely to appear in
mass markets, such as typewriters, bicycles, sewing machines, televisions and cars
(Freeman and Louçã, 2001), which can justify the investment in specialised capital
assets. This is borne out by well-known examples of dominant designs that include
the QWERTY typewriter keyboard, Ford’s Model T car, Boeing’s 707 airliner, JVC’s
VHS video recorders and Microsoft’s Windows operating system.
How do dominant designs arise? Abernathy and Utterback (1978) suggest three
possible factors that can give rise to dominant designs. First, consumer preference,
where a particular package of factors present in one design finds favour with
consumers in meeting their requirements. These factors will not all be technical. In
fact, dominant designs are rarely technically superior to rival designs, but the
particular package of features appeals to consumers. Second, it is sometimes the market power of a dominant producer that is the key factor: Abernathy and Utterback (1978) cite the case of IBM and its 360 series mainframe computer, which became the de facto industry standard. Finally, regulation, by either government or some form of industry body, may be instrumental in a dominant design appearing.
The theory of dominant design contributes to our understanding of innovation in
a number of ways. It highlights the importance of the user, who may be less
interested in technical features and more interested in usability. It also highlights the
importance of standards, particularly where compatibility is an issue for the user.
Finally, it shows the importance of business strategy within innovation. In the Blu-ray case, for instance, Toshiba lost out to Sony because its business strategy was geared to being first to market rather than to securing customer acceptance.

Mini Case: Blu-ray


The home entertainment industry is emerging from a period of flux brought
on by the arrival of high definition (HD) television. Nowhere has this flux
been more apparent than in the DVD market. The home DVD market is worth
£12.3 billion a year, but has lately contracted in the face of uncertainty
surrounding the format for the new generation of high definition DVDs. There
has been intense competition between two competing new formats, Toshiba’s
HD DVD and Sony’s Blu-ray.
Toshiba was first into the market and initially seemed to have the upper
hand. Its HD DVD appeared to have a number of advantages. Its discs were
cheaper to produce and sales were initially strong in Japan. In the movie field
Toshiba was quick to sign up DreamWorks, while in the computer games field it signed up Microsoft, maker of the best-selling Xbox 360 videogames console.
However, Sony’s Blu-ray now appears to have the upper hand. Its discs,
though more expensive, have 25 gigabytes of storage compared to Toshiba’s
15 gigabytes. Sony held back the launch of its own videogames console,
PlayStation 3, and picked up much criticism from consumers at the time,
precisely because it wanted to ensure that it came with Blu-ray installed. The
Microsoft X-box 360 on the other hand, while it supports the HD DVD format,
requires a separate plug-in HD DVD player. As sales of PlayStation 3 have
now passed the 10 million mark this has helped to ensure a substantial base
for Blu-ray among videogame users. By contrast, only about 1 million HD DVD players have been sold, and those mainly in Japan.
With the two formats competing neck and neck, Toshiba was dealt two
severe blows in the early months of 2008. First, Warner Bros, the world's largest DVD producer, decided to stop releasing the new-style DVDs in both formats, opting instead for Blu-ray alone. Warner, which accounts for about a
fifth of the lucrative US DVD market, was the last big Hollywood studio
producing discs in both formats. MGM, Fox, Walt Disney and Sony Pictures
had already signed up to the Blu-ray format. The second major blow was the
decision by Wal-Mart, the world’s largest retailer, to dump HD DVD across its
4,000 stores in the US. Wal-Mart’s move followed a similar decision by
consumer electronics retailer Best Buy and online video rental firm Netflix.
These twin blows effectively sealed the fate of Toshiba’s HD DVD and
confirmed the place of Blu-ray as the dominant design for high definition
DVDs. Sony’s success was in sharp contrast to its experience with VCRs where
its Betamax system lost out to the rival VHS system produced by arch rival
JVC.
Source: Wray and McCurry (2008).

Absorptive capacity
The theory of absorptive capacity differs from the previous theories in that it integrates both the external dimension of innovation, which is concerned with the evolution of technology, and the internal dimension, which is concerned with learning and the knowledge transfer process within the innovating organisation. The emphasis that absorptive capacity places upon learning marks it out as providing a very different analytical framework from the S-curve or punctuated equilibrium. This is reflected in the words of Cohen and Levinthal (1990: 128), who, in the seminal article that first expounded the notion, defined absorptive capacity as 'the ability of a firm to recognise the value of new, external information, assimilate it and apply it to commercial ends'.
Figure 3.6 provides a schema in which one can see recognition, assimilation and
application at work. In this schema the external environment where technological
evolution takes place is at the top and the internal environment of the firm is at the
bottom. The process of recognising external trends and technological opportunities
is represented by the arrow linked to the box on the left. This box represents the part
of an organisation’s absorptive capacity that focuses on assimilation. Hence the link
from the external environment to this box represents a conduit or channel through
which external ideas and opportunities are fed into the organisation. The capacity of
the conduit is a crucial feature of organisations with a strong absorptive capacity. If
it is effective, the organisation will be good at recognising external ideas.

Figure 3.6 Absorptive capacity

Source: Ettlie (2006: 83).

While Cohen and Levinthal (1990) acknowledge that ‘external influences’ are vital
for innovation, recognition of such influences is only one feature of absorptive
capacity. For effective innovation there has also to be a capability to assimilate
ideas within the organisation. As Figure 3.6 shows, assimilation is dependent upon
an ability to bring external ideas in and to absorb such ideas within an organisation.
Termed ‘knowledge diffusion’ this has to extend right across an organisation and to
all levels within it. The ‘silo’ mentality has no place within the theory of absorptive
capacity. Accordingly, the theory of absorptive capacity sets considerable store by
internal communication systems that effectively transfer knowledge across the
different parts of the organisation. In this context Cohen and Levinthal note that
shared knowledge and expertise is necessary for good communication.
Nor is this the end of the story since, for effective innovation, the ideas once
absorbed within an organisation have to be applied. An organisation’s capability to
apply ideas it has absorbed is represented by the box in the centre of Figure 3.6.
What determines an organisation’s ability to recognise, absorb and apply new ideas
so as to achieve effective innovation? Ettlie (2006) reminds us that there is a
constant tension between inward-looking (the bottom channel in Figure 3.6) and
outward-looking (the top channel in Figure 3.6) absorptive capacity. Effective
absorptive capacity requires the organisation to maintain a balance between the two
if it is to lead to effective innovation.
The theory of absorptive capacity places a great deal of emphasis on an
organisation’s ability to learn. Three factors are identified by Cohen and Levinthal
(1990) as being critical in developing and extending an organisation’s capacity to
learn and thence its ability to assimilate and apply new ideas:
▮ exposure to relevant knowledge
▮ presence of prior related knowledge
▮ diversity of experience.
Exposure to relevant knowledge means that the organisation and its staff need to
utilise appropriate networks in order to ensure that they keep abreast of
developments in the field. The importance attached to prior related knowledge is
linked to the assimilation process. Cohen and Levinthal (1990) argue that the ability
to recognise the value of new knowledge and assimilate it into the organisation is a
function of the accumulated prior knowledge within the organisation. This in turn
emphasises the importance of knowledge within this theoretical model. Assimilation
requires that knowledge be evaluated, which in turn requires prior knowledge. It is
because of the learning process that Cohen and Levinthal also stress diversity of
experience. The greater the range of experience within the organisation, the greater
the scope for recognising external ideas and stimuli.
The mini case of the carbon fibre chassis in Formula One provides a powerful
illustration of the theory of absorptive capacity (see also page 60). Carbon fibre had
been known about for some time (Henry, 1988). Designers had used small amounts
of carbon fibre in non-structural applications such as wings, but it took a relative
outsider to the Formula One community, like John Barnard, to attempt to build an
entire chassis out of carbon fibre. Significantly it was his external contacts,
particularly in the aerospace industry and the US, that gave the McLaren team
access to the necessary knowledge and expertise (Smith, 2012).
In this context Bruce and Moger (1999) observe that organisations which place excessive emphasis on increasing efficiency and reducing costs through greater specialisation and mass-production-type activities risk impairing their absorptive capacity. The division of labour required for repetitive, mass-production-type work reduces the diversity of experience of those working within the organisation, and as a result limits the scope to build up absorptive capacity, which Cohen and Levinthal argue is a cumulative process.
Absorptive capacity is more sophisticated than some of the other theories of
innovation. It highlights the importance of external knowledge as a critical
component in innovation. It helps to explain why some organisations, even where
they are exposed to external knowledge, may be poor innovators, because they
cannot absorb and make use of the knowledge, and it serves to show why networks
and networking can be so important to innovation. However, probably the greatest
strength of absorptive capacity and the reason why it has been widely used by those
researching the field is that it integrates and brings together a number of ideas.
These include ideas about technological evolution, the learning process and
networking. Absorptive capacity offers a synthesis that draws these different
strands together. In the process it offers a powerful tool for analysing innovation.

Case Study: High Fidelity


In June 1948 Columbia Records, a division of the giant US media corporation
CBS, transformed listening to music by launching the long playing (LP)
record. Its 12-inch vinyl disc utilised microgrooves and rotated at 33 1/3 rpm.
This was much slower than the 78 rpm records which had been the dominant
form of music recording since the introduction of the gramophone in the early
twentieth century. The 78 rpm records lasted just 4 minutes, meaning that with
classical music it was necessary to change the record before the piece had
finished. Engineers at Columbia had in fact calculated that some 90 per cent
of classical pieces lasted for 17 minutes or less. Columbia's new long-playing record provided a previously unheard-of, uninterrupted 25 minutes of playing
time. For the first time it was possible to hear an ‘album’ comprising several
songs or a major classical piece without having to stop to change records.
Columbia Records was keen to see its new music recording system become
the new standard format. So keen was it that it did not even patent the
technology which had been invented by Ed Wallerstein and his research team.
Columbia Records hoped that by letting other record companies use the
technology without paying a royalty, it would be quickly adopted and become
the de facto standard. This did in fact happen, but not before CBS's great rival, RCA Victor, refusing to admit that it had been beaten by a competitor, had devoted enormous research effort and resources to developing its own 'new and improved' system. Quite simply, the senior management of RCA, the parent of both the Victor record label and the NBC broadcast network, thought the company was a sufficiently powerful force in the marketplace to force its technology on the consumer. But they underestimated Columbia's first-mover advantage. RCA, in fact, would be the last of the major labels to finally release its own 33 1/3 rpm albums, three years later in 1951.
Although LPs appeared first in the classical market, with the rising
disposable income of young people they quickly caught on in the market for
pop music. Thus throughout the 1950s, 1960s and 1970s the vinyl LP was the
dominant format for recording and playing music.
However, a breakthrough appeared in 1962 when the Dutch electronics firm Philips launched the first compact audio cassette tape. This utilised a different recording medium: magnetic tape. Magnetic tape had been in use for some time, but Philips perfected its packaging into a cassette, using a much narrower tape than previously. The result was a standardised cassette that was much more compact and easier to use.
Unfortunately while the cassette was easy to use it could not offer the quality
of recording inherent in the high fidelity vinyl LPs then on the market.
Consequently the first audio cassette applications were in dictating machines
rather than as pre-recorded cassettes that could be used to listen to music.
This situation changed when Ray Dolby, an American working in England, patented the Dolby A noise reduction system in 1966.
This improved the sound quality of audio cassette tapes by reducing the ‘hiss’
inherent in the use of a relatively narrow magnetic tape. Dolby’s innovation
proved crucial in the popularising and commercial success of audio cassette
tape, which became a key feature of the ‘hi-fi’ market of the 1970s. The
cassette was also helped by Sony’s launch of the Walkman in the late 1970s,
which popularised listening to music on the move. Even though a rival eight-
track cartridge system appeared, offering superior sound quality, the small
size and durability of the audio cassette won out and it was the dominant
medium of the later 1970s and 1980s.
The 1980s were to see the appearance of an entirely new recording
medium: the digital compact disc (CD). The CD was the product of a huge
research effort by Sony and Philips working together. They set up a joint
taskforce in 1979 that brought together two research teams with different
technologies and expertise. The CD worked on entirely different principles
from magnetic tape. Music was stored digitally on an optical disc and a laser
was then used to read the disc. Although the laser dated back to the late 1950s, it had primarily been used as a piece of expensive laboratory
equipment. Making it sufficiently robust to operate in a domestic environment
presented many challenges as did the problem of simplifying the technology
so that it could be mass produced and sold at a price consumers could afford.
By working together rather than in competition Sony and Philips hoped to
avoid some of the pitfalls that had affected the earlier and relatively
unsuccessful laser disc. The format finally agreed was a compromise in terms
of overall performance. Compared with the vinyl LP, the CD offered greater capacity and far greater durability, though in the view of some listeners at the expense of slightly reduced sound quality. These benefits were not lost on consumers, who were
quick to adopt the new technology. The first commercial music offered on a
CD was Abba’s The Visitors album released in 1982. By 1990 CD sales in the
US amounted to 288 million annually. By 2007 more than 200 billion CDs had
been sold worldwide.
While the CD was the dominant format for listening to music during the
late 1980s and 1990s, by the end of the latter decade a new technology had
appeared: MP3. MP3 stands for Moving Picture Experts Group Layer-3. The Moving Picture Experts Group, as its name implies, had nothing to do with the music industry. Rather it was a committee of technical experts set up by the International Organization for Standardization (ISO), an international body that tries to
create common standards for products and technologies in the hope of
improving compatibility and safety. The remit of the group mainly covered video, and its aim was to establish common standards for encoding video and audio using compression techniques that reduce the amount of storage space required without noticeably compromising quality. Applying compression to digital audio produced a format that allowed music to be turned into computer files which could be transferred relatively easily from one device to another and which required much less storage space. In the early 1990s the Moving Picture Experts Group (MPEG) agreed a standard with three audio coding layers, of which MP3 was the third (hence 'MP3'). It did not take long for
the manufacturers of audio products to take advantage of the new standard.
In 1998 the first MP3 players appeared. Diamond Multimedia’s Rio 100 MP3
player was launched in the US at much the same time as the Korean
manufacturer Saehan introduced its MPman. Three years later the American
computer giant Apple burst onto the scene with its iPod. It combined MP3
technology with a very small high-capacity hard disk drive, making it possible
to store 1,000 songs on a portable audio player.
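The significance of compression for portable players can be gauged with some rough arithmetic. The figures below are illustrative assumptions rather than values from the case (CD-quality audio at 16-bit, 44.1 kHz stereo, an MP3 bit rate of 128 kbit/s and a four-minute song), but they show why an MP3 file occupies only a few megabytes and why a small 5 GB drive could plausibly hold around 1,000 songs.

# Illustrative arithmetic only: none of these figures comes from the case study.
# Assumptions: CD audio is 16-bit, 44.1 kHz, stereo; MP3 is encoded at 128 kbit/s;
# a typical song lasts four minutes; 1 MB is taken as 1,000,000 bytes.

SONG_SECONDS = 4 * 60

cd_bits_per_second = 44_100 * 16 * 2      # roughly 1.4 Mbit/s uncompressed
mp3_bits_per_second = 128_000             # a common MP3 bit rate

cd_mb = cd_bits_per_second * SONG_SECONDS / 8 / 1_000_000
mp3_mb = mp3_bits_per_second * SONG_SECONDS / 8 / 1_000_000

print(f"Uncompressed CD audio: about {cd_mb:.0f} MB per song")      # ~42 MB
print(f"MP3 at 128 kbit/s: about {mp3_mb:.1f} MB per song")         # ~3.8 MB
print(f"Compression ratio: roughly {cd_mb / mp3_mb:.0f}:1")         # ~11:1
print(f"Songs on a 5 GB drive: roughly {5_000 / mp3_mb:.0f}")       # ~1,300

Even on these rough figures an MP3 file takes up roughly a tenth of the space of the equivalent uncompressed CD audio, which is what made both portable players and downloading over the relatively slow Internet connections of the period practical.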
The popularity of MP3 players meant that, instead of buying CDs, consumers increasingly chose to download music files from the Internet. Thanks to
Apple’s iTunes and other legal download providers, sales of downloaded
music surged. By 2006 music downloads accounted for 78 per cent of all
singles sales, up from 24 per cent in 2004 (Hickman, 2006). As downloading
increased in popularity so inevitably CD sales declined and MP3 became
increasingly the norm.
Source: Hickman (2006); Levy (2006).

Questions
1 What were the major technological advances in this case?
2 What were the technological paradigms and when and why did paradigm
shifts occur?
3 What, if any, were the instances of creative destruction in this case?
4 What were the implications for the music industry of the paradigm shift
from CDs to the downloading of MP3 files?
5 Which of the four theories of innovation do you think best explains
innovation in the music industry?
6 If the technological advances described in this case study resulted in
radical innovations, what sorts of innovations would you expect to find in
the intervening periods?
7 Apply the theory of punctuated equilibrium by drawing a diagram
showing innovations in music recording over the last 60 years.
8 How effectively does the theory of punctuated equilibrium explain
innovations in the music industry?
9 What evidence is there that at times of technological breakthrough one
has a number of new competing designs?
10 Why was it beneficial for Sony and Philips to collaborate to develop the
CD format?

Questions for discussion


1 What is technological change?
2 What is tacit knowledge and why is it an important aspect of technology?
3 What is a technological paradigm shift? Use examples to illustrate your answer.
4 How can the Kondratiev long wave cycle contribute to our understanding of
innovation?
5 How can a theory of innovation have a predictive capability (use appropriate
examples to support your case)?
6 Why do innovations often come from outside the industry?
7 Explain how the concept of a paradigm shift is linked to the first three theories
of innovation.
8 Use the theory of the technology S-curve to distinguish between radical and
incremental types of innovation.
9 According to the theory of punctuated equilibrium, why is the rate of innovation
not constant?
10 How might outsourcing reduce a firm’s absorptive capacity?

Exercises
1 Take an account of innovation (this could be from a biography of an innovator,
or a television programme or a film) and use any one theory of innovation to
explain why and how innovation occurred.
2 Provide a detailed critique of any one of the theories of innovation.
3 What is the connection between notions of evolution, particularly technological
evolution, and theories of innovation?
4 What is the value of theories of innovation for (a) would-be innovators, and (b)
policymakers?

Further reading
1 Huebner, J. (2005) ‘A Possible Declining Trend for Worldwide Innovation’,
Technological Forecasting and Social Change, 72, pp. 980–986.
A short but provocative article that presents a compelling case to show that far
from increasing exponentially, technological change is in fact slowing down.
While there are some potential flaws in the analysis this thought-provoking
article is well worth reading. It provides valuable insights that can help the
reader to develop a critical appreciation of the nature of technological change
and innovation.
2 Tushman, M. L. and Anderson, P. (2004) Managing Strategic Innovation and
Change, 2nd edn, Oxford University Press, New York.
One of the few readers on innovation. The collection of papers includes some useful
ones on innovation theory. These include Anderson and Tushman on punctuated
equilibrium and Christensen and Bower on the technology S-curve.
3 Christensen, C. M. (1997) The Innovator’s Dilemma: When New Technologies
Cause Great Firms to Fail, Harvard Business School Press, Boston, MA.
This book provides an excellent opportunity to see how the theories of
innovation can be applied to analysing and understanding why and how
innovation occurs. The theories of dominant design, technology S-curve and punctuated equilibrium are much in evidence in Christensen's analysis of innovations in mechanical excavators and hard disk drives.
4 Florida, R. (2002) The Rise of the Creative Class: And How It’s Transforming
Work, Leisure, Community and Everyday Life, Basic Books, New York.
Another provocative look at technological change, though this time the
emphasis is not on the nature of such change but rather its impact on many
aspects of society. It is highly readable. Its value lies in its ability to help the
reader develop a critical appreciation of technological change and innovation.
What Does Innovation Involve?

This part contains the following 4 chapters:


4 Sources of innovation
5 The process of innovation
6 Value capture
7 Intellectual property rights
Sources of Innovation

Objectives
When you have completed this chapter you will be able to:
▮ review the innovation process
▮ distinguish the different ways in which the innovation process can commence
▮ analyse the diverse sources of innovation
▮ identify recent changes in the relative importance of particular sources of innovation
▮ evaluate the relative importance of different sources of innovation.

Introduction
This chapter focuses on the exploration phase of the innovation process rather than the later phases, which are concerned with exploitation and getting an invention ready for market. It is concerned with where the new ideas, discoveries and breakthroughs for innovation that were introduced in Chapter 1 come from. In a
sense it is concerned with the ‘Eureka’ moment, when an individual or team begins
the process that will ultimately lead to a successful innovation. But these starting points, which we termed 'trigger events' in Chapter 1, have to have some kind of
context. Just who are the individuals or teams involved in these trigger events? Or to
put it another way, who are the originators of innovations?
In this chapter we will attempt to show that innovations can have very different
origins. Media portrayals of innovation tend to highlight individuals, in part because
we all relate to individuals and in part because this often makes a better story. In
reality, while individuals can be very important, they are not the only players.
Teams, organisations and institutions also have very important parts to play in the
origins of innovation. If one asks the question 'Where do innovations come from?', the answer is that they come from a number of different sources, and this chapter aims to
analyse these sources and provide a categorisation of them. As well as examining
the various sources of innovation, the chapter also aims to show how these sources
vary in importance, both between different contexts (i.e. different industries or
sectors) and over time.

Classifying the sources of innovation


Alongside lone individuals like James Dyson and Ron Hickman, who quite independently had ideas and developed them into successful products and services, are a number of other important sources of innovation. These were described earlier as teams, organisations and institutions, and this is a useful categorisation. The organisations involved as sources of innovation are typically business corporations, while the main institution involved tends to be the State. Given that there is also a particular class of individual with a distinctive role to play as a source of innovation, namely users, this gives four basic categories of sources of innovation:

▮ individuals
▮ corporations
▮ users
▮ the State.

It is tempting to see these four groups as some kind of historical progression, but
that would be a mistake. Certainly individuals came first, but they continue to be important. They represent the classic type of innovation where an individual has an idea and struggles to develop it into an innovation, often creating a significant business enterprise along the way. Many nineteenth-century innovations came from this source. However, well before the end of the nineteenth century, corporations
were beginning to have an impact as a source of innovation. Thomas Edison is often
portrayed as the archetypal individual innovator, but as we saw in Chapter 1, many
of his innovations were of the corporate variety. His Menlo Park industrial research
lab was very much an example of the corporate model of innovation with
innovations sourced from a team. This source came to dominate during the course
of the twentieth century as innovations came to rely more and more on advances in
science and technology. It was against this background that Von Hippel identified
users as an important source of innovation. At the time this seemed a somewhat
novel approach to innovation, but Von Hippel showed how in particular
circumstances ideas from users could be very much the source from which certain
kinds of innovation originated, though, as we shall see, this model also required the
cooperation of corporations. Finally, it has come to be recognised that institutions
like the State have, and nearly always have had, an important part to play as a
source of innovation. Recent work (Mazzucato, 2013) has shown how many of the
latest innovations in computing and communications technology, while associated
with well-known hi-tech firms, are actually based on technological developments
that originated in State agencies or were funded at some point by the State.

Individuals
In the popular media, derived from biographies, television documentaries, films (e.g.
Marc Abraham’s 2008 film, Flash of Genius), press reports and business magazines,
the individual in the form of the lone inventor (Seabrook, 2008) reigns supreme, hence this is sometimes referred to as the 'heroic' model of the sources of innovation. According to this model, individuals have ideas for potential new
products and services typically quite independent of any third party or business
corporation. Where do the ideas come from? In short they come via a number of
different routes (see Table 4.1). Sometimes the ideas are derived from work (e.g. Bill
Gore and the development of Gore-tex (Parsons and Rose, 2003)), sometimes from
involvement in hobbies and leisure activities (e.g. Ron Hickman and the Workmate
portable work bench (Landis, 1987)), sometimes from a sense of frustration at the
poor quality and performance of an existing product (e.g. James Dyson and the bagless
vacuum cleaner (Dyson, 1997)), sometimes from a desire to find a ‘better way’ of
doing something (e.g. Dan Bricklin and the development of VisiCalc, the world’s first
spreadsheet (Campbell-Kelly, 2003)). And sometimes ideas arise quite by chance (serendipity), as when Bob Kearns, driving through the rain in Detroit, suddenly wondered why a windscreen wiper could not behave like an eye and blink (Seabrook, 2008).

Table 4.1 Individuals and their innovations

Where an individual is the inventor/innovator this is often described as the 'garage' model of innovation (Audia and Rider, 2005). In this model an individual, perhaps working with a partner, develops an innovation at home in his or her garage or bedroom or somewhere similar. The garage becomes the location for innovation because the individual is innovating on his or her own and therefore has access only to personal resources such as a home or garage. The
garage model typically implies extensive reliance on improvisation. In this context
improvisation is likely to involve elements of ‘bootstrapping’ (i.e. scavenging
resources (Jones et al., 2014: 152)) and ‘bricolage’. Baker and Nelson (2005: 33)
define bricolage as ‘making do by applying combinations of resources at hand to
new problems and opportunities’. In the context of innovation, Miner et al. (2001)
note how bricolage can help innovators explore new opportunities that would
otherwise be too expensive using conventional methods such as research and
development in established companies. Among the most famous innovators who
have used the garage model and quite literally developed an invention in a garage are William Hewlett and David Packard, who developed a prototype audio oscillator in the garage at 367 Addison Avenue in Palo Alto, California, and James Dyson, who developed his bagless vacuum cleaner in the garage/coach-house of his home near Bath (Dyson, 1997).
In fact it was the individual as innovator that was the model first used by Joseph
Schumpeter (Pavitt, 2005). In his early writing (sometimes referred to as Schumpeter
Mark 1 (Fagerberg, 2005)), Schumpeter stressed the role of individuals as the source
of innovation. Schumpeter particularly highlighted the character and determination
of outstanding individuals who had not only the ingenuity to develop technical
inventions, but more importantly the perseverance to see them through the long and
demanding exploitation phase required to bring them to market successfully.
A study by Jewkes et al. (1969) showed that, despite much speculation that only large firms now had the resources necessary to undertake technology-based innovation, the individual inventor/innovator was alive and well. In a study that covered some
70 important innovations that occurred during the twentieth century, Jewkes et al.
(1969) found that in around half the cases the source of the innovation was a single
person either working on his or her own or at least independently of a corporate
undertaking. Only one-third of the innovations had as their source the research
laboratory of a corporate undertaking, the remainder being simply difficult to
classify. More recent studies (Amesse et al., 1991) support the general pattern
identified by Jewkes et al. (1969).
The resilience of the individual inventor is linked to a number of factors. First,
there is the growth in the small-firm sector that has taken place during recent years.
Second, a variety of organisational devices, such as strategic alliances, have
enabled small firms to work with large firms. Third, innovation is associated with
the applications of technology and, while large firms may be proficient where the
development of technology is concerned, small firms will often have greater
knowledge of applications. A further factor has been the increased popularity of
spin-off companies. Not only do these provide a means for individual inventors to
leave the corporate sector and set up on their own in order to develop an innovation,
they also, as in the case of Silicon Valley, provide a powerful role model for would-
be innovators. In this context the growth of the venture capital industry over the last
three decades has provided a powerful force to support this kind of trend. Finally, as
Christensen (1997) has pointed out, the emergence of disruptive new technologies is
something to which the corporate sector often finds it difficult to adapt.
Consequently, some of the new technologies of the last quarter of the twentieth
century have helped promote the cause of the individual inventor/innovator. Not
least of these have been some of the technologies associated with computing. The
examples of Steve Jobs and Steve Wozniak and the personal computer, and Dan
Bricklin and the spreadsheet, stand as testimony to the success of the individual
inventor/innovator in this context.

Corporations
Although Schumpeter originally identified individuals as the primary source of
innovation, in later life he underwent something of a conversion and his later work
identified large business corporations as the chief source of innovation (the so-
called Schumpeter Mark 2 (Fagerberg, 2005)). Schumpeter’s reasoning was that as
innovation became increasingly technology based it required extensive research and
development (R&D). Only large firms had the resources to operate industrial
research laboratories in which such R&D could be undertaken. The use of industrial
R&D laboratories first emerged in the chemical industry in Germany but was taken
up by other sectors such as electrical products where electrical technologies became
an increasingly important source of innovation. In the US this was pioneered by
Edison with his Menlo Park laboratory.
Pavitt (2005) notes that industrial R&D became increasingly integrated into large
manufacturing firms during the course of the twentieth century. Large, vertically
integrated business corporations invested in research laboratories whose R&D
activities became a source of technical breakthroughs that led to innovations across
the business world. This is what Chesbrough (2006a) refers to as the 'closed' model of innovation because research activities, along with most other aspects of innovation, take place in-house.
Outstanding examples of this approach were to be found in the US where, in the
telecommunications field, AT&T’s Bell Labs earned six Nobel Prizes for inventions
such as the laser and the transistor (The Economist, 2007a), while in computing IBM
picked up three. Nor was the model confined to high-technology industries like
aerospace, automotive and electronics; in food processing, detergents, cosmetics
and industrial materials one finds a similar pattern. Companies like Procter &
Gamble, Unilever and 3M used their R&D capability to produce a steady stream of
innovations across the full range of their product portfolios.
While there is no doubt that the corporate model of innovation is still the primary
source of innovation, it has increasingly been recognised that the R&D undertaken
in industrial research labs is not the only way in which corporations can innovate.
According to Chesbrough (2006b: 48) the proportion of R&D undertaken by large
firms (i.e. with more than 25,000 employees) fell from 70.7 per cent in 1981 to 41.3
per cent in 1999. Increasingly R&D is being done by smaller firms (fewer than 1,000 employees), whose share of R&D rose from 4.4 per cent in 1981 to 22.5 per cent
in 1999. This change reflects the increasing use of ‘open innovation’ (Chesbrough,
2006a) where large corporations increasingly buy in new technology by licensing it
from other firms. The greater flexibility that open innovation provides has enabled
large corporations to continue to play a very important part in innovation, only now
they sometimes innovate with someone else’s technology rather than their own.

Mini Case: 3D Printing


Chuck Hull, the Chief Technology Officer of 3D Systems, a company he co-founded, is enjoying some minor celebrity 31 years after he first printed a small black eyewash cup using a new manufacturing method called stereolithography or, as it is better known today, 3D printing.
At the time Hull was working for a company that used UV light to put thin
layers of plastic veneers on tabletops and furniture. Like others within the
industry he was frustrated that the production of small plastic parts for
prototyping new product designs could take up to two months. The bottleneck
was caused by the demands of plastic injection moulding technology which
required the construction of a pattern and a mould, a time consuming and
expensive process.
Hull had the idea that if he could place thousands of thin layers of plastic on top of each other and then etch their shape using light, he would be able to form three-dimensional objects. After a year of tinkering with ideas in a backroom lab after hours, he developed a system in which light is shone into a vat of photopolymer (a material which changes from liquid to a plastic-like solid when light shines on it) and traces the shape of one layer of the object. Subsequent layers are then built up until the object is complete.
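The layer-by-layer principle can be illustrated with a small sketch. The code below is purely illustrative and not drawn from the case: it 'slices' a simple solid (a sphere) into thin horizontal layers and works out the circular cross-section that would have to be traced at each height, which is essentially the geometric step a stereolithography machine performs before curing each layer.

import math

# Purely illustrative sketch (not from the case study): slicing a 20 mm radius
# sphere into 0.1 mm horizontal layers and computing the circular cross-section
# that would be traced and cured at each layer height.

SPHERE_RADIUS_MM = 20.0
LAYER_HEIGHT_MM = 0.1

num_layers = int(2 * SPHERE_RADIUS_MM / LAYER_HEIGHT_MM)   # 400 layers here

for layer in range(0, num_layers + 1, 50):                 # sample every 50th layer
    z = -SPHERE_RADIUS_MM + layer * LAYER_HEIGHT_MM        # height from the centre
    slice_radius = math.sqrt(max(SPHERE_RADIUS_MM**2 - z**2, 0.0))
    print(f"layer {layer:3d}: trace a circle of radius {slice_radius:5.2f} mm")

In a real machine each cross-section is traced by the light source and cured before the build platform moves by one layer height and the next slice is added.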
After patenting the invention in 1986, he set up 3D Systems to
commercialise the new method of production and went on the road to secure
both funding – eventually getting $6 million (£3.5 million) from a Canadian
investor – and customers. The first commercial product came out in 1988 and
proved a hit among car manufacturers, in the aerospace sector and with
companies designing medical equipment.
When Hull originally came up with the invention, he told his wife that it
would take 25–30 years for the technology to find its way into the home. That
prediction proved correct: the realistic prospect of widespread use of
commercial 3D printers has only emerged in recent years.
Source: Adapted from Hickey (2014).

Users
The idea that users are a source of innovation is unsurprising since one might
reasonably assume that they are best placed to know what they need. However, it
needs to be stressed that what we mean by users as innovators is not users telling
manufacturers what to make, rather like some form of market research, but users
being actively involved – indeed they may well initiate and oversee the innovation
process.
The idea of users as an important source of innovation is particularly associated with the
pioneering work of Von Hippel (1976). He was among the first to show that in certain
industry sectors users play a critical role not only in generating ideas for
innovations, but also in their subsequent development. Von Hippel’s work focused
on the scientific equipment industry, particularly the sectors producing instruments
used for gas chromatography, nuclear magnetic resonance spectrometry, ultraviolet absorption spectrometry and transmission electron microscopy. Von Hippel was
able to show that in each case the idea that formed the basis of the instrument came
from, and was initially developed by, a user who was a member of the scientific
community; only when the idea had been developed into a working prototype was it
transferred to a manufacturing company for commercial production, and the user
still remained actively involved in the programme.
It is significant that Von Hippel focused on scientific equipment as an industry
sector. This sort of equipment is widely used in scientific research and, as Rothwell
(1986) points out, in this field scientific researchers form the focal point of state-of-
the-art expertise. In addition, the nature of their work – research – means that they
often have to construct new kinds of equipment in order to push forward the frontiers of knowledge. Experimentation by its nature requires
monitoring and measuring equipment, and new forms of experimentation may well
require new forms of equipment. Similarly, in chemistry, chemists working in
government laboratories and universities often need to devise new forms of
analytical equipment in order to further their research. As Von Hippel points out,
manufacturers of scientific instruments are simply not sufficiently closely involved
in scientific research to perceive or predict the new requirements in the field which
would enable them to make the initial invention. Similarly, users do not possess the
capability to manufacture scientific instruments so that once they have developed a
working prototype, they then turn to manufacturers to produce the equipment in
quantity.
More recently, Von Hippel (2005) has described what he calls the ‘democratising’
of innovation as the involvement of users as a source of innovation has extended to
many more industries. Among the industries where users are now an important
source of innovation are:
▮ software development
▮ library information systems
▮ mountain bikes
▮ outdoor clothing and equipment (e.g. jackets, sleeping bags, etc.)
▮ extreme sporting equipment (e.g. skateboards, windsurfers, etc.).

Over the last decade there has been a steady stream of research studies detailing
user involvement in these industry sectors. In software development Hertel et al.
(2003) looked at user involvement in the development of the Linux operating
system. Morrison et al. (2000) did the same for library information systems. Parsons
and Rose’s (2003) recent history of the outdoor clothing industry shows how
climbers and walkers played a crucial role in developing innovations in this sector in
the 1980s and 1990s. Lüthje et al. (2005) describe how users in Marin County,
California, developed the first mountain bikes, while Shah (2000) did the same thing
for sports equipment, highlighting the importance of ‘learning by doing’ in
developing innovations in skateboarding, snowboarding and windsurfing.
Why are users in these industries increasingly becoming the source of
innovation? A variety of factors have been put forward as facilitating the
involvement of users in new product development. First, improvements in
communication such as the appearance of the Internet, cheaper and better
telecommunications and the growth of the media (especially media catering for
special interests, e.g. specialist magazines and websites) have given users much
improved access to each other and to manufacturers and suppliers, as well as much
improved access to knowledge.
Second, improvements in computing, particularly things like CAD, spreadsheets and project-management software, have enabled users to develop their own new product development (NPD) capability. Third, greater levels of education,
particularly greatly increased rates of participation in higher education, have given
users both an increased knowledge capability and greater access to knowledge.
Finally, the growth of open innovation has provided user innovators with another
route to market (i.e. via large companies). These factors, combined with much
greater flexibility of technology and lower costs in many instances, have enabled
users to play a much more active role in innovation in certain industry sectors. No
longer forced to be passive recipients of what manufacturers conceive as
appropriate products, users have been ‘democratised’ and given access to an active
role in innovation. In many sectors enthusiastic users have leapt at the opportunities
presented by their democratisation and developed new products. This is particularly
the case in areas like sport, outdoor pursuits and cycling where enthusiasts are able
to acquire and build up specialist knowledge that enables them to produce better
products and services.
Users have also been helped to play a very active role in innovation by new institutional arrangements. New financial institutions, notably the growing venture capital sector, have provided a means for users to set up and grow their own start-up ventures, while the increased use of forms of collaboration such as alliances and joint ventures has provided them with a means to work with manufacturers and distributors on equal terms.

The State
In many respects the State is not an obvious source of innovation. And yet the
technologies that lie at the heart of many modern consumer products, including
many that have gone a long way to change our lives in recent years, rely on
technologies developed either by agencies of the State or by organisations whose
research was largely funded by the State.
It has long been recognised that there is a role for the State in funding certain
activities that would not otherwise be undertaken because of market failure. Basic
scientific research, what is often described as ‘blue skies’ research, is not something
that commercial companies normally undertake because of uncertainty about what
the potential returns, if any, might be. Thus when the social return on investing in
research is higher than the private return, private firms are unlikely to invest
(Mazzucato, 2013). Major technical advances like the creation of the infrastructure
for the Internet are simply too big and too uncertain for one firm. Hence one finds
that many scientific discoveries are the product of research funded by the State
through universities, hospitals, defence establishments and other similar research
bodies. Eventually these advances feed through into everyday items as the
technologies formed by these scientific advances diffuse into wider applications and
use.
Mazzucato (2013) gives the example of well-known recent innovations from
Apple, which have drawn heavily on technologies whose development was funded in
large measure by the State. Figure 4.1 shows some of the technologies and the
funding agencies involved. If we take the case of the iPod, two examples will serve
to provide a valuable perspective on the nature of the support provided by the State.
Figure 4.1 Sources of technologies in Apple products

Source: Adapted from Mazzucato (2013: 109).

A key feature of the Apple iPod when it first appeared in 2001 was its storage
capacity relative to its small size, a feature made possible by its 1.8-inch Microdrive which, with a 5GB capacity, could store 1,000 songs. This microdrive was
one of the first hard disc drives to utilise GMR (giant magnetoresistance), one of the
first of a new generation of nanotechnologies. GMR, or spintronics as it is also
known, is a phenomenon whereby information is stored and processed by
manipulating the spins of electrons (Overbye, 2007). It was discovered by Fert and
Grünberg, working independently in France and Germany in the late 1980s, a
discovery for which they later shared the Nobel Prize for Physics. The discovery
paved the way for dramatic increases in the storage capacity of computer disc
drives (McCray, 2009). Significantly it was the US government that played a critical
role in the innovation process both in terms of basic research and commercial
exploitation of the technology. During the 1990s its Defense Advanced Research
Projects Agency (DARPA) used its Technology Reinvestment Program (TRP) to
invest millions of dollars in university-based spintronics research (McCray, 2009),
funding both basic and applications oriented projects. The latter helped to reduce
the uncertainty surrounding the new technology, in particular the way in which
storage devices using the technology could be manufactured. The result was a
relatively swift ‘discovery-to-device’ path (McCray, 2009) that encouraged
manufacturers of hard disk drives to take the new technology seriously, leading to
the rapid exploitation of GMR/spintronics technology and enabling storage of nearly
ten times more data than conventional hard disk drives while remaining smaller in
size. Music lovers, through the efforts of Apple, were in due course provided with
gigabytes of storage for their music files on iPods and similar handheld gadgets
(McCray, 2009). Nor was this the only innovative feature of the iPod and later Apple
products that owed its existence to the efforts of the State. Mazzucato (2013) notes
how the distinctive iPod click-wheel for navigating content which represented
Apple’s first attempt at utilising capacitive sensing technology, originated in the
work of a State-funded agency in Britain, having originally been discovered back in
the late 1960s by E.A. Johnson working at the Royal Radar Establishment at
Malvern.
Specialist agencies like DARPA utilise a variety of policy instruments to provide
State support for innovation. One such mechanism is the use of demonstrator
projects. These are typically State-funded projects where applications of radical new
technologies can be developed and tested on potential new products (Harborne
et al., 2007). The purpose of demonstrator projects is to explore the feasibility and
the benefits to be derived from the application of radical new technologies. An
example of a demonstrator project is the ‘Flybus’. This is a project funded by the UK
government’s Technology Strategy Board as part of its low carbon initiative and led
by gearbox specialist Torotrak, together with busmaker Optare and engineering
consultancy Ricardo. The purpose of the project is to develop a hybrid bus that recovers kinetic energy generated under braking that would otherwise be lost. The aim is to demonstrate the viability of flywheel technology as an alternative to battery hybrid vehicles. Demonstrator projects like this allow developers of new technologies to reduce some of the uncertainty surrounding radical innovations by learning how to apply a new technology and testing possible consumer reaction.
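The scale of the energy such a flywheel system might capture can be gauged with some simple, purely illustrative arithmetic; the bus mass, speed and recovery efficiency used below are assumptions, not figures from the Flybus project.

# Illustrative only: the mass, speed and recovery efficiency are assumptions,
# not figures from the Flybus project.

BUS_MASS_KG = 12_000          # a laden single-deck bus, very roughly
SPEED_KMH = 50.0
RECOVERY_EFFICIENCY = 0.5     # assume half the braking energy is captured

speed_ms = SPEED_KMH / 3.6
kinetic_energy_j = 0.5 * BUS_MASS_KG * speed_ms ** 2       # KE = 1/2 * m * v^2
recovered_kwh = kinetic_energy_j * RECOVERY_EFFICIENCY / 3.6e6

print(f"Kinetic energy at {SPEED_KMH:.0f} km/h: {kinetic_energy_j / 1e6:.2f} MJ")
print(f"Energy captured per stop (assumed): {recovered_kwh:.2f} kWh")

Repeated over the frequent stops of an urban route, even this modest amount of energy per stop accumulates quickly, which helps to explain why a bus makes an attractive demonstrator vehicle for the technology.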

Mini Case: My Trusty Little Sunflower Cream


People across the country now have the opportunity to buy a moisturising
cream developed by staff at Salisbury District Hospital in Wiltshire. The
cream was first developed at the hospital in Salisbury more than 20 years ago,
by pharmacists and clinical scientists to aid post-operative skin care for burns
and plastic surgery. Over the years users have found it calming to use as a
moisturiser on irritated or dry skin. Highly regarded by patients and staff
alike, it was used in many clinical areas and sold on-site to patients who asked
for it to be made available after their discharge.
However, it proved so popular that in 2012 the Trust decided to make the
cream more widely available to the general public. So it was that ‘My Trusty
Little Sunflower Cream’ was born. Repackaged in attractive new 100 ml tubes,
the cream is now available for general sale through a new website,
www.sunflowercream.com, and over the counter in the hospital’s pharmacy,
League of Friends shop, the staff club and the Cashier’s Office.
Malcolm Cassells, Director of Finance and Procurement, commented: ‘We
are sure that people will find it a very attractive product for many reasons.
The cream contains 5% pure sunflower oil and has no parabens. These are
chemicals used as preservatives, and are man-made and not naturally
occurring. The cream has not been tested on animals and comes in two
varieties, unscented and lavender scented cream.’ He went on to note that,
‘The launch of My Trusty Little Sunflower Cream is another exciting
opportunity for Salisbury District Hospital to share one of its innovations with
a much wider population. We know the cream is really liked and is beneficial
to patients and staff, so making this available to more people is great news. At
the same time the income generated will be very helpful in supporting patient
care in the hospital.’
Source: Salisbury (2014).

Other sources
Alongside these four main sources of innovation are a number of other sub-
categories that overlap with the ones shown here. For example, individuals who
come up with innovations can also be classified as outsiders in the sense that they
are not closely associated with the mainstream field in which the innovation occurs.
Similarly, some of these other sources are not individuals, organisations or institutions
in the way that the four main sources are. Rather they involve particular contexts
which can act as a source of innovation.

Employees
Employees are a much underestimated source of innovation, and yet Adam Smith
(1776) in The Wealth of Nations noted that employees, by virtue of their close
involvement in work, often find ‘easier and readier methods of performing it’.
Today a small number of companies have actively recognised the value of Adam
Smith’s observation. Some companies operate ‘suggestion schemes’ that encourage
and reward employees who come up with ideas for new products and services or
devise improvements in production processes. Other companies take the suggestion
scheme a stage further and allow their employees to devote a modest proportion of
their time at work to developing new ideas. WL Gore & Associates, the manufacturer
of the waterproof fabric Gore-tex, for instance, actively encourages its employees to
spend around 10 per cent of their time working on speculative ideas (Deutschman,
2004). 3M operates a similar scheme, referring to this type of activity as
‘bootlegging’.
As companies increasingly come to recognise the importance of innovation in providing them with competitive advantage, we may well see more of them devising ways of encouraging employee innovation.

Outsiders
A consistent feature of innovation over many years has been the substantial
proportion derived not from those working within a given technological paradigm,
but from outsiders who have hitherto had little to do with it. There is a case for
arguing that outsiders provide an important source of innovation. Table 4.2 provides
some examples of innovations developed by outsiders.

Table 4.2 Outsiders as innovators

To what extent were they outsiders? Chester Carlson, the inventor of the
photocopier, worked for P.R. Mallory & Company, a manufacturer of electrical and
electronic components (later better known for its Duracell batteries), analysing
patents (Owen, 2004). Steve Jobs and Steve Wozniak, the pioneering Apple computer
innovators (Campbell-Kelly, 2003), were college drop-outs and although Wozniak
worked for Hewlett-Packard, he worked on calculators, not computers. John
Barnard, the designer of the McLaren MP4, the world’s first carbon fibre racing car,
was new to Formula One, having previously worked in the US on Indycars (Cooper,
1999). Finally, Jeff Bezos, who pioneered Internet-based retailing through the
creation of Amazon.com, was a fund manager in the financial services industry
(Cassidy, 2002).
None of these individuals worked in the field in which they were to achieve
success as innovators. They were not part of the community in which their
innovation was based and in that sense they were outsiders.
What do outsiders have that industry insiders lack? In analysing why outsiders
are important as a source of innovation it is worth exploring the role of industry
insiders. Within any industry there will tend to be what Galbraith (1958: 35)
describes as the ‘conventional wisdom’, which comprises ‘the ideas which are
esteemed for their acceptability’. In more technologically based industries this may
amount to what Dosi (1982) terms a ‘technological paradigm’ where the domain of
what is technologically possible is informally prescribed by industry insiders. In
short, assumptions may well be deeply embedded and as such go unquestioned.
Where people work in a group of like-minded specialists or belong to such groups,
the phenomenon may be even more pronounced. Similarly, groups can be insulated
from the world around them, sharing a collective perspective that leads all too easily
to assumptions and ideas going unquestioned. The insularity may be even worse where
firms have close relationships with their customers. Christensen (1997) noted
that, in some circumstances, paying close attention to customers and customer
needs may actually be counter-productive for innovation, in so far as the firm's
outlook becomes more specialised and ignores wider trends in technology and
potential new customers.
Outsiders may be able to avoid at least some of these pitfalls. Since they are not
part of an established community, they are likely to have fewer inhibitions when it
comes to challenging accepted ideas. Similarly, outsiders may be more willing to try
unorthodox ideas precisely because they are not familiar with the ‘conventional
wisdom’. The case of Chester Carlson illustrates this well. When trying to develop a
means of copying he found that the conventional wisdom prescribed chemical
methods for reproducing photographs, a field closely related to his experimental
work on document copying. Lacking expertise in chemistry, Carlson was obliged to
pursue a completely different direction involving the use of an electrical method
which he later called ‘electrophotography’ (Van Dulken, 2000). As well as being
willing to try unorthodox ideas and approaches, outsiders may also be more willing
to try simple ideas.
However, one of the biggest advantages enjoyed by outsiders is that they often
have external contacts in fields which may be unrelated but nonetheless prove
useful. A feature of these contacts is likely to be their diversity, enabling the
innovator to draw from a relatively wide knowledge base. This was true of John
Barnard when he designed the McLaren MP4 racing car. The established practice
was for the chassis of a racing car to be constructed from sheets of aluminium
riveted together to form a tub. This was light, strong and relatively easy to build.
While carbon fibre had been used on racing cars, it had only been used for single
components, such as the aerodynamic wing at the rear of the car. The idea of
building the whole chassis (which was the main part of the car) from carbon fibre
was revolutionary. However, Barnard had a friend who worked for British
Aerospace in Weybridge where the Harrier jet was built. The friend explained how
carbon fibre was being used in the aerospace industry. In this way Barnard was able
to confirm that his idea was feasible. Unfortunately, it still proved impossible to get
anyone in the UK to undertake the construction of anything as big as a car chassis
from carbon fibre. At this point Barnard was again able to call on one of his contacts
outside Formula One, in this case a former colleague from the US, who suggested
that Hercules Aerospace of Salt Lake City, Utah, who built guided missile
components using carbon fibre, might be able to help (Henry, 1988). Hercules
Aerospace not only had the expertise and the facilities, they also had an R&D section
willing to undertake one-off jobs.
Thus outsiders may possess a range of advantages over industry insiders – not
only are they likely to be more open to new approaches and willing to challenge
existing ideas, but the range of external contacts they can draw on means their
absorptive capacity, as far as external linkages are concerned, is likely to be greater
too.

Spillovers
Spillovers typically occur when one firm benefits from another firm’s investment in
R&D. Afuah (2003) gives the example of a firm conducting research on cholesterol
drugs, where the knowledge that it gains about how the body makes cholesterol
spills over to its competitors. The nature of spillovers can vary, but they might for
instance result from one firm making an investment in R&D that leads to a scientific
discovery or development of a new product that other firms are able to imitate or
copy. Alternatively, if the firm that has developed the new product chooses not to
exploit/commercialise it, it might license it to others. Either way a firm, other than
the one that made the initial investment, is able to bring an innovative new product
to market. Two examples illustrate this source of innovation. Dan Bricklin and his
company, Software Arts, developed VisiCalc, the world’s first spreadsheet in the late
1970s. However, the spreadsheet idea was soon copied by other firms including
Lotus with its 1-2-3 spreadsheet, Borland with Quattro and Microsoft with Multiplan
(later superseded by Excel). Although Software Arts was the first mover (being the first to
get a product to market), it was ultimately overtaken first by Lotus and latterly by
Microsoft. In effect, once Software Arts had developed the spreadsheet and
proved the concept in the marketplace, the idea spilled over and became public
knowledge to be taken up by others. (At the time software could not be patented in
the US, unlike the situation today.) Similarly, when Du Pont developed Teflon,
it chose not to become heavily involved in applications on the grounds
that this was not its area of expertise. Instead it was left to Bill Gore, a researcher at
Du Pont, to go it alone and develop fabrics using Teflon, in particular the high-
performance, weatherproof fabric Gore-tex.
Spillovers are likely to occur in situations where it is difficult to prevent others
from appropriating the benefits from an invention. Intellectual property rights are
the means by which inventors normally endeavour to prevent others appropriating
benefits. Success depends on being able to establish a tight appropriability
regime. Sometimes, despite best endeavours, this proves difficult. Software Arts was
hampered by the fact that at the time it developed VisiCalc, software could not be
patented and the Software Arts’ founders had to rely on copyright. A similar thing
happened with EMI’s CAT scanner, where the principle was relatively easily
understood and complementary assets such as training, product support and
servicing proved key features of competitive advantage (Teece, 1986). Under these
circumstances, even though patents were employed, it was not possible to prevent
knowledge spilling over into the public domain for other firms to take up.
Spillovers are also more likely in situations where staff move around a lot. This is
referred to as staff 'churn': the greater the mobility of staff, the more knowledge
moves around with them. Similarly, where there is a lot of contact between
staff in different companies one can expect the movement of knowledge, and
therefore spillovers.

Process needs
Sometimes the demands of a manufacturing process will act as a stimulus to
innovation. Abernathy and Utterback (1978) note that this is most likely to occur in
industries which have reached a point of maturity in terms of industry evolution,
particularly those producing established commodity products in large volumes, such
as light bulbs, paper, glass, steel and chemicals. Here pressures for cost reduction
are likely to be at their most intense, acting as a spur to process innovations that can
make an already efficient production process even more efficient.
Alistair Pilkington's development of the 'float glass' process in the 1950s and
1960s exemplifies just such conditions. At that time the manufacture of plate glass,
which was increasingly being used in construction, was expensive and time
consuming because it required sheets of glass to be subjected to grinding and
polishing in order to obtain a flat surface. The float glass process developed by
Pilkington did away with these stages in the production process. Instead glass was
drawn directly from the furnace over a bed of molten tin. Although it took several
years to perfect this innovation, it did eventually result in plate glass being produced
much more quickly and at much lower cost. So great was the improvement that
Pilkington’s were able to license the process to other glass-makers and it is still
widely used today. The significance of float glass as an innovation is that it is a clear
case of a process need – in this case effectively a production bottleneck – being the
source of the innovation. As is often the case it was the existence of a bottleneck
that provided the stimulus to innovate.

Case Study: The Mountain Bike


A transformation in sales of bicycles across North America and Europe has
taken place over the last 20 years. This change has seen consumers switch
from road bikes to mountain bikes. Introduced into the UK in the late 1980s,
mountain bikes quickly caught the imagination not only of the emerging
‘yuppie’ culture, but of commuters as well, attracted to this stylish but sturdy
new type of bicycle (Rosen, 2002: 133). The introduction of this new type of
bike led to a new boom in the cycle industry and the emergence of a new
dominant design. This was the biggest change in bicycle design since the so-
called ‘safety bicycle’ challenged the ‘penny farthing’ back in the 1890s (Berto,
1999: 11). Just how big an impact mountain bikes had on the bicycle can be
gauged by sales figures for the UK market. In 1988 sales of mountain bikes
made up just 15 per cent of the 2.2 million unit UK market. Two years later,
sales of mountain bikes had not only risen dramatically to 50–60 per cent of
the bikes sold in the UK, they had helped to push the overall market to 2.8
million units (Rosen, 2002: 133).
Among the changes brought in by the new mountain bikes were a switch to
fat ‘balloon’ tyres instead of thin ones, the adoption of an erect riding position
instead of a crouched one, the substitution of flat handlebars for dropped
ones, the use of front and rear derailleur gears to provide at least 15 speeds,
cantilever brakes and thumb-operated gear shifters (Berto, 1999: 20).
Dramatic though the change was, its origins lie not with the big bicycle
manufacturers but with riders themselves. The particular riders all came from
Marin County, California on the Pacific coast of the US.
Marin County is located to the north of the Golden Gate Bridge in San
Francisco, California. It is quite literally on the opposite side of San Francisco
Bay to Silicon Valley. Marin County is a hilly area of dense woodland.
These woodland areas cover the slopes of Mount Tamalpais, and in the early
1970s a group of young cyclists, mostly men in their teens and twenties, made
up of high school and college students, firemen, bike-shop mechanics and
members of the general public, began to use them for off-road racing. They
particularly liked the rough fire roads that ran through the forests. These dirt
tracks were steep and ideal for downhill racing. One run in particular was the
infamous ‘repack run’, a steep descent down Mount Tamalpais that dropped
1,300 feet in less than 2 miles. Riders would make the ascent in the back of a
truck and then race each other down. The ride got its name from the effect it
had on the bikes taking part. The bikes were single-speed models with old-
fashioned coaster brakes which would overheat with the excessive use to
which they were subjected, forcing the rider to repack the hub with grease
before making another descent (Rosen, 2002: 135).
Existing commercial road bikes were not suited to the rough conditions
(Lüthje et al., 2005). Hence the young downhill racers turned instead to older
more robust models. Among the bikes used for downhill racing were old
Schwinn models with fat ‘balloon’ tyres. Particularly prized was the Schwinn
Excelsior, the classic ‘newsboy’ bike of the 1930s and 1940s. Though the
frames of these bikes were heavy, they were sturdy enough to withstand the
rough treatment meted out in downhill racing and the extra strength more
than compensated for the extra weight. The use of old-fashioned models like
the Schwinn, bought second-hand from backyards for no more than a few
dollars, led to the bikes being nicknamed ‘clunkers’.
Though the early clunkers were comparatively unsophisticated, their riders
gradually found ways of adding features to improve a clunker bike’s
performance for downhill racing. Among the modern components added as
part of the ‘standard Marin County conversion’ (Rosen, 2002: 135) were
derailleur gears, front and rear drum brakes, motorcycle gear levers, wide
motocross handlebars, handlebar-mounted gear shift levers, and big balloon
tyres with knobbly tread patterns. As downhill racing became more popular in
Marin County, so riders added modern components to their clunker bikes in
order to enhance performance.
A cottage industry (Lüthje et al., 2005: 954) developed in Marin County as
clunker riders built bikes not only for themselves but also for friends and
fellow riders. By the late 1970s half a dozen small assemblers existed in Marin
County. However, the core of each bike was still an old Schwinn frame.
Unfortunately the supply of old Schwinns was limited and newer Schwinn
models were of no use because they were lighter and not as strong. Riders
scoured the state of California for old Schwinn bikes that they could convert
into clunkers. Old-fashioned cycle repair businesses proved to be a valuable
source of supply. When they found a cycle repair business that had old
Schwinn models, usually piled up as scrap, riders would buy them up and take
them back to Marin County.
Such was the interest generated by clunker bikes that in time the supply of
old Schwinn models began to dry up. Some riders then began to build their
own frames. These frames improved on the Schwinn design and provided the
additional components as standard. Among the first to build his own frame was
Joe Breeze (Berto, 1999: 43). Not only was Breeze an experienced downhill racer,
he was also an experienced frame-builder. His frame was lighter and stronger than
the original Schwinn design. Although Breeze built only a handful of frames, the
custom bikes constructed around them were widely seen and much admired.
They came to be known
as ‘Breezers’ and they were the first modern mountain bikes (Berto, 1999: 45).
They quickly acquired a reputation. The Breezers began to expand the market
beyond Marin County.
Breeze produced only a very small number of custom-built mountain bikes, and
other riders were soon building frames too, though always in very small
quantities for downhill racing in and around Marin County. In 1979, however,
two riders, Gary Fisher and Charlie Kelly, teamed up to
form a company that would produce and sell mountain bikes on a commercial
basis. They called their company MountainBikes (Rosen, 2002: 136). Fisher
and Kelly needed someone who could build frames, not on a custom basis ‘but
in quantity' and they teamed up with frame-builder Tom Ritchey, who was
based in Palo Alto in Silicon Valley on the other side of San Francisco Bay.
Using frames produced by Ritchey (and occasionally other local frame-
builders), MountainBikes built, equipped and marketed the first commercial
mountain bikes. Within a couple of years there were more than a dozen firms
making mountain bikes, but in each case the quantities produced were
relatively small, and the market for mountain bikes was generally confined to
the West Coast of the US.
At the same time as the first commercial mountain bikes were appearing on
the market, cycle-component manufacturers such as Shimano and Sun Tour
began producing and distributing components such as derailleurs, crank sets,
tyres and handlebars that were specially designed for off-road use (Lüthje
et al., 2005: 954). Not only that, companies like Shimano continued to develop
componentry that helped to make cycling, particularly using mountain bikes,
more ‘user friendly’. Shimano developed index shifting (for gear changing),
integrated gearing and improved braking systems that not only enhanced
performance but also improved functionality and reliability (Rosen, 2002:
138), making the use of mountain bikes more straightforward and less
problematic for inexperienced users.
In 1982 another Californian cycle company, Specialized, a bike- and bike-
parts importer that supplied Marin County bike assemblers, took the next step
and brought out the first mass-produced mountain bike (Berto, 1999). They
had a Fisher-Kelly-Ritchey design mass-produced in Japan. Marketed as the
‘Stumpjumper’ it represented the general public’s introduction to the mountain
bike. Major cycle manufacturers soon followed with similar designs which
were retailed through conventional cycle outlets first across the US and then
across Europe.
By the end of the 1980s the mountain bike was fully integrated into the
mainstream cycle market. By 2000 total retail sales of cycles in the US
amounted to $5.89 billion, of which some 65 per cent were sales of mountain
bikes. However, the process of innovation did not stop here. As mountain
biking dramatically increased in popularity, so enthusiasts
found new uses for their machines and the demand for improvements in
performance continued. This demand was met by a steady flow of innovations
derived from riders.
Sources: Berto (1999), Lüthje et al. (2005), Rosen (2002).
Questions
1 Who were the innovators in this case?
2 From which of the sources identified in this chapter did the innovation of
mountain bikes come?
3 Which model of the innovation process best describes the way in which
mountain bikes were developed?
4 Draw a diagram to describe the innovation process and the various
parties involved in this instance.
5 What do you understand by the term ‘cottage’ industry, and why were the
early producers of ‘clunkers’ thus described?
6 What do you consider is the significance of mountain biking having
originated in a narrowly defined geographical area, i.e. Marin County?
7 Which academic writer has been a leading proponent of the notion of
user-innovators?
8 Why, according to Christensen, are incumbent firms often relatively slow
to innovate?
9 Why were incumbent cycle manufacturers relatively slow to introduce
mountain bikes?
10 Which of the four theories of innovation gives prominence to ‘outsiders’
in initiating innovation?

Questions for discussion


1 What factors have led to a resurgence of innovation by individuals?
2 Why does innovation require a ‘flash of genius’?
3 Give examples of how ‘chance’ can lead to innovation.
4 Account for the apparent decline of ‘corporate labs’ as a source of innovation.
5 Why do large firms often have a poor record of innovation?
6 What advantages do users have as a source of innovation?
7 What does Von Hippel mean when he talks about the ‘democratisation of
innovation’?
8 Why are outsiders often an important source of innovation? Give examples of
innovations by outsiders.
9 What do we mean by spillovers and how can spillovers contribute to
innovation? Use an example to illustrate your answer.
10 How can process needs lead to innovation?
Exercises
1 Take two examples of innovation where in each case the source of innovation is
different. Compare and contrast the innovation process.
2 Research an example of user innovation. Draw a diagram to show the various
parties involved in the process of innovation. Comment upon the different roles
taken by these parties and the nature of the links between them.
3 Take an example of user innovation and show how user communities have
facilitated user engagement in innovation.
4 Why has innovation become ‘democratised’?
5 Read The Economist article ‘Out of the Dusty Labs’ (The Economist, 2007a) and
discuss why in spite of advances in technology there has been a relative decline
in big industrial R&D laboratories like Bell Labs.

Further reading
1 Von Hippel, E. (1976) ‘The Dominant Role of Users in the Scientific Instrument
Innovation Process’, Research Policy, 5 (3), pp. 212–219.
A path-breaking piece of research that first highlighted the importance of users
as a source of innovation. A sequel to this work is in Von Hippel (2005), a book
entitled Democratising Innovation, which brings Von Hippel’s work right up to
date. It represents a powerful alternative perspective on the process of
innovation that plots the spread of user-initiated innovations, and it provides
detailed accounts of several recent studies of sectors where user innovators
are prevalent.
2 Leadbeater, C. (2006) The User Innovation Revolution: How Business Can
Unlock the Value of Customers’ Ideas, National Consumer Council, London.
This report provides an excellent overview of user innovation. It is well
structured and written in a clear and readable style. User innovation is
contrasted with more traditional forms of innovation. Examples are widely
used. Although the emphasis is on the practicalities of user innovation, there are
plenty of informative and up-to-date references.
3 Parsons, M.C. and Rose, M. B. (2003) Invisible on Everest: Innovation and
the Gear Makers, Northern Liberties Press, Philadelphia, PA.
A study of the development of the outdoor clothing and equipment industry
written by a practitioner and a business historian. Chapter 8 in particular is an
invaluable source that provides accounts of innovations in outdoor clothing
(e.g. Gore-tex jackets) initiated by users in the form of climbers and walkers,
particularly in the UK in the 1970s, 1980s and 1990s.
4 Mazzucato, M. (2013) The Entrepreneurial State: Debunking Public v. Private
Sector Myths, Anthem Press, London.
The Process of Innovation

Objectives
When you have completed this chapter you will be able to:
differentiate exploitation from invention
distinguish the steps in the innovation process
identify the different activities associated with the
process of innovation
evaluate the techniques available to facilitate the process of innovation
differentiate and evaluate different models of the innovation process
compare and contrast the open and closed forms of innovation.

Introduction
The innovation process is concerned with the various activities necessary to turn an
idea or discovery into a commercial product or service which consumers, be they
individuals or firms, will purchase. This chapter explores the nature of this process.
In particular, the various activities involved in exploiting inventions and turning
them into commercially viable products and services are examined. In this way
the chapter explains what is involved in carrying out
innovation. One might say it looks at how firms innovate.
As well as looking at the activities associated with innovation the chapter also
looks at different ways of organising these activities to create a process. Several
different models of the innovation process are examined. The existence of a number
of models of the process reflects the fact that there are distinct and different ways in
which firms approach or carry out innovation. It also reflects the changing nature of
innovation, especially the increased importance of knowledge and the various
different ways in which that knowledge can be channelled into innovation.

The steps in the innovation process


To progress from an idea to a product or service that is on the market and available
for consumers to purchase involves a number of activities that are linked together
to form a process. Figure 5.1 presents a generic model of the innovation process. It
highlights the main steps that have to be undertaken, shown as a particular
sequence starting with the generation of an idea or research leading to new
discovery at one end, and the finished product going onto the market at the other. It
is important to stress that this is an idealised model designed to highlight the
activities that have to be undertaken. In real life innovation is rarely as neatly
packaged as this. The steps or stages will often not be as clearly differentiated as
shown in the model, some may even be absent, and some may not come in quite this
sequence. However, for the purposes of explaining the nature of the various
activities associated with innovation, it is helpful to use an idealised model and
portray innovation as taking place as a generic process comprising well-defined
steps or stages.

Figure 5.1 A generic model of the innovation process

Figure 5.1 distinguishes a total of seven steps in the innovation process. All seven
are associated with innovation and can be said to be part of the innovation process.
However, as the diagram makes clear, the first two steps (ideas/insight and
development) are particularly associated with invention. The remaining five
steps turn an invention into an innovation. Labelled ‘commercialisation’ they were
briefly touched on as part of the discussion of exploitation in Chapter 1 and are the
activities involved in transforming an invention into a commercially viable product.
Whereas an invention is usually produced as a one-off, it requires innovation to
transform it into something that can be produced in quantity and with the standards
of reliability that consumers have come to expect.

Ideas/insight
This generic model of the innovation process begins with an idea or insight that acts
as the trigger to commence the innovation process. The idea itself may stem from a
problem, an event or an activity. The idea for Twitter, for example, stemmed from a
brainstorming activity. The idea for the Velcro fastener came when Georges de
Mestral took his dog for a walk and he became fascinated by the way in which seeds
and burrs stuck to the dog's fur. The idea for the float glass process, which radically
transformed the making of glass, came to Alistair Pilkington while he was doing the
washing up and noticed articles floating on the water. Often the triggers that
give rise to such ideas and insights occur quite by chance.
However, not all innovations arise from individuals having ideas and insights.
Much innovation arises from scientific discoveries resulting from research that is
typically carried out over many years, often by large teams of people, in
laboratories located in universities, medical facilities or large business corporations. A
good example would be the drug Viagra which was developed in the laboratories of
the pharmaceutical giant, Pfizer, based in Kent.
A third possibility is an idea or insight resulting from a technological
breakthrough. In this case the idea or insight usually occurs in a company.
Sometimes the company is a large one with extensive research and development
(R&D) facilities. Technological breakthroughs that have been developed in this way
would include the CD, developed jointly by Sony and Philips, or the Post-it Note developed by 3M.
However, as Christensen (1997) notes, quite often the companies that achieve
technological breakthroughs are not big ones, but very small ones. As outsiders or
‘outliers’ in Gladwell’s (2009) terminology, these small firms are often the first to
spot the potential of new technologies. Examples would include the construction
equipment manufacturer JCB, which as a small start-up company was among the
first to apply hydraulic technology to mechanical excavators, or Dyson applying
cyclone technology to create the world’s first bagless vacuum cleaner. Although
both of these companies are very big today, it was when they were small start-ups
that they pioneered the application of new technologies to create what ultimately
became very significant innovations.

Mini Case: PARC


Xerox PARC (short for Palo Alto Research Center) was set up in the early
1970s as the Xerox Corporation’s second major research lab. It is located at
3333 Coyote Hill Road in Palo Alto, California, in the heart of what were then
apricot orchards but what is now Silicon Valley. It is housed on land leased
from Stanford University and to the northwest lies Stanford’s Hoover Tower,
while to the north lies the headquarters of a company created by two ex-
Stanford students, Hewlett-Packard.
Being located on the West Coast, PARC was 3,000 miles from Xerox’s
headquarters in Rochester, New York. The choice of a distant location was
deliberate. Xerox had pioneered the introduction of the photocopier, in its
time a new technology that had transformed offices and office work around
the world. Despite dominating the copier market Xerox was facing increasing
competition and was anxious to develop new technologies for the office. To
come up with the technological breakthroughs that would form the basis of
these new technologies, Xerox’s bosses felt their new research lab had to be
located away from its existing facilities and close to where new developments
in computing were taking place, and that meant California. It was here in
1970, that Xerox began to assemble some of the world’s brightest and best
computer scientists and engineers. None of them joined PARC with the
thought of becoming rich. Rather they were attracted to PARC ‘by the thrill of
pioneering’ (Hiltzik, 2000).
Over the next decade PARC’s researchers produced a string of
technological breakthroughs described by Malcolm Gladwell (2011) as ‘an
unparalleled run of innovation and invention’. Among the technological
breakthroughs that the PARC team came up with were laser printing, the
ethernet network protocol, the computer mouse, and perhaps most
significantly of all the graphical user interface – the system of overlapping
windows, pull-down menus and point and click commands that forms the
interface by which we all interact with the modern computer.
Xerox utilised all these developments to produce what at the time was an
entirely new kind of computer, one that served an individual and sat on
his/her desk, rather than serving a whole business and being housed in its own room. In the
words of Alan Kay, the PARC scientist who was its chief architect, it was to be
a 'personal computer' (Hiltzik, 2000). The Alto was never sold in volume, and when
its commercial successor, the Xerox Star, was launched in the early 1980s it was
not a success. However, others were able to capitalise
on PARC’s technological breakthroughs, among them a 24-year-old visitor to
the Coyote Hill Road research centre who in 1979 was given a tour of the facility
and a demonstration of some of its new creations. The visitor was impressed.
His name? Steve Jobs.
Sources: Gladwell (2011), Hiltzik (2000).

Development
Development is about turning ideas and insights into products. The product that
results from the development stage will not be ready to sell to consumers, but it will
have many of the operational characteristics of the final product. In short, the
product will work, and so will demonstrate the feasibility of placing it on the market,
even if there is still more to do before it is produced in the volumes and to the
standards of reliability demanded by consumers.
Central to the development stage of the innovation process is the construction of
models and prototypes (see Figure 5.2). The purpose of models is to convey the
form, style and ‘feel’ of an object (Leonard-Barton, 1991). Models serve to
communicate the appearance of the proposed product. They are typically used to
give an impression of what the product will actually look like. 3D examples, whether
they are mock-ups, white models or computer-aided design (CAD) models, enable
people to visualise the form of the product. Computer-generated 3D models can be
particularly useful with complex products, where potential problems such as
incompatibility, clashes or lack of space, can be highlighted by ‘walking through’ the
model long before the real thing is built.

Figure 5.2 Models and prototypes

Source: Adapted from Leonard-Barton (1991).

Prototypes in contrast usually have little to do with form and instead are all about
function. Hence the term ‘functional prototypes’ in Figure 5.2. Unlike the final
product, a prototype is a version of the product constructed as a one-off. Prototypes
are typically constructed on a ‘jobbing’ basis using general-purpose equipment
rather than specialist purpose-built equipment. They are often made with different
materials from those that will go into the final product. This is normally because the
materials used for prototypes are easier to work with and more flexible. James
Dyson explains how he made prototypes:

‘And all the while I was making cyclones. Acrylic cyclones, rolled brass
cyclones, machined aluminium cyclones (which looked like prosthetic limbs
for the Tin Man in the Wizard of Oz – whose life was changed by a cyclone).
For three years I did this alone.’
Dyson (1997: 122)
Functional prototypes usually form the basis of the experimentation and testing that
lies at the heart of development. Experimentation and testing has to take place to
ensure that the product works in the way intended, in particular that it will work
consistently and with the kind of performance that consumers are likely to demand.
This means that development is usually a slow and laborious process. Accounts of
new inventions frequently dwell on just how slow and painstaking the development
stage can be. James Dyson’s account of the development stage of his dual-cyclone
vacuum cleaner is typical:

‘This is what development is all about. Empirical testing demands that you
only ever make one change at a time. It is the Edison principle, and it is
bloody slow. It is a thing that takes me ages to explain to my graduate
employees at Dyson Appliances, but it is important. They tend to leap into
tests, making dozens of radical changes and then stepping back to test their
new masterpiece. How do they know which change has improved it and which
hasn’t?’
Dyson (1997: 124)
Prototypes do have other uses too. They are used to facilitate the integration of
components and sub-systems. This is particularly the case with complex products
which have a large number of components and sub-systems that have to interact and
work together. Another function of prototypes is to facilitate learning. The process
of testing enables those who have developed a new technology to learn about its
properties, through the acquisition of formal technical knowledge. Where innovation
is concerned, knowledge is cumulative and hours spent testing prototypes can help
developers to learn about the properties of a new technology. However, learning in
terms of the acquisition of tacit or informal knowledge can be just as important.
User-test prototypes, in the form of working prototypes, are often used to enable
firms to learn about users and user behaviour.
Finally, prototypes have a part to play in risk reduction. Tests carried out with
prototypes can help to identify potential risks. It is technological rather than market
risks that will be identified in this way. With products that generate significant
safety issues if they do not function correctly, as with many mechanical products,
this can be extremely important. Of course, having identified these sorts of risks,
firms may occasionally choose, for whatever reason, to ignore them.
Mini Case: Prototyping Public Services
Barnet Council were looking to develop new and innovative ways of
delivering services, at a time when community need was growing in scale and
complexity but resources were increasingly limited. They wanted to
test a radically different approach and involve residents in designing and
delivering local solutions.
To do this, they asked the social design agency thinkpublic to help them
develop prototyping capacity within the council and community.

What is prototyping?
The design technique of prototyping takes ideas rapidly into action for the
purpose of testing, while managing the risk by starting small and designing
out problems early before significant public finances need to be committed. It
is both a process and a mindset. In its simplest form most of us prototype
different things every day; for example, when we are cooking we experiment
with different versions of a recipe until we find the one we like, or when we
are trying to find the fastest and easiest route to work. Prototyping can be
applied to testing and developing anything from new products and services to IT
systems and methods of communication. 'What's been great is the ability to
experience a new approach, take calculated risks and to use all the experience
and talents available to make decisions at an early stage,’ says Lesley
Holland, change manager in major projects at Barnet. ‘It really brought
partnership working and sharing resources alive.’
The benefits of prototyping for the public sector include:

▮ turning ideas into real and tangible things


▮ involving citizens and frontline staff in a meaningful way
▮ saving time, effort and money
▮ managing risk by rapidly testing and developing ideas
▮ producing better results as outputs are improved by ongoing feedback
▮ encouraging citizens and staff to be sustainably involved with the
creation and delivery of new ideas.

Prototyping in action
At Barnet they have applied prototyping to help them radically rethink how
they look at the challenges surrounding families with the greatest needs.
Previous research highlighted that out of 35 meetings one family had with
the state in a single year, only five asked for any new
information. They co-designed, with a range of stakeholders, a community-led
service called Barnet Community Coaches (BCC). The service aims to help
families develop and become more resilient, reach their goals and reduce their
dependence on the state. This is now being rapidly tested with volunteers
and families within the Grahame Park estate in Barnet. This rapid live testing
will last for six weeks to help them learn quickly what works and what
doesn’t, before they invest in a pilot. Over this period they are measuring a
number of factors, including the increase in well-being of the families and
coaches, along with measuring the cost of running the service.
‘A few weeks ago Community Coaching sounded like a fairy tale magic
genie that might transform the lives of families,’ says Cephas Akuklu, a lead
volunteer community coach. ‘Many of the people I shared the prototype with
were sceptical. Now some of these people are asking me lots of questions and
I feel honoured to be part of a process that is going to get families to explore
more options and bring out their potential.’
Alongside this they have been exploring different social business models
for how the BCC service should be developed, funded and run. They are
currently looking at the franchise model where key people in the community
will be responsible for running the service, with information and support from
the council.

What they have learnt so far


Prototyping has been a good way to test new ideas that aim to solve
challenging problems quickly and cheaply. It has provided a safe space to fail
and adapt. It has helped them secure early buy-in from all the key
stakeholders needed for success; they have generated a range of innovative
service options and have learnt quickly what does and doesn’t work. The
prototyping process has helped open up the redesign of local services and
involve local people.
Source: Adapted from Lambeth and Szebeko (2011).

By the time the development process is complete, fully functioning prototypes
should be in operation. They will not necessarily look like the final product but they
will have the final product's operating characteristics. By this point there should be
a reasonable degree of certainty that the product will work in the way intended.

Design
Once development is complete, designers are required to determine the attributes and features of the final
product that will go on sale in the marketplace. This is likely to involve specifying:

▮ the precise shape of the product


▮ the tolerances to which it will be manufactured
▮ the materials to be used in manufacture
▮ the process by which the product will be manufactured.
This is a very important part of the innovation process. Walsh (1996) notes that
there is actually considerable variation in what firms mean and do when it comes to
design. Similarly Verganti (2009) reminds us that while for many design is primarily
about ‘styling’, that is to say aesthetic aspects, in fact design is about much more; in
particular it is about ensuring a product genuinely meets consumers’ needs.
So just what is design? And within the innovation process what is its function?
Walsh (1996: 513) explains the role of design as ‘the creative realization of concepts,
plans and ideas and the representation of those ideas as sketches, blueprints [or]
models’. We should perhaps also add that today those representations are likely to
be on a computer screen rather than paper. However, the notion of representation is
a sound one, since in essence the task of designers is to take technical details and
represent them as a product that can be manufactured and sold to customers.
Walsh (1996: 514) makes the very important point that design is a crucial part of
innovation not just because it involves a very strong creative element but because
design involves the coupling of ‘technical possibilities and market demands’. And it
does not stop there, the integrative aspects of the designer’s role are not confined to
matching inventions and markets; designers also have to take account of
manufacturing and production techniques and capabilities. Designs have to be
aesthetically pleasing, functional and manufacturable.
Hence in producing a design, the designer has to factor in a number of constraints
and come up with a design that will both appeal to the customer and enable the firm
to make money. These sorts of constraints are all likely to form part of the design
brief. Most designers or design teams will have a design brief to work to that
incorporates the requirements of the various stakeholders (e.g. R&D, engineering
and marketing staff) responsible for the product. Accordingly, the design brief will
outline what is required of the design and note a number of constraints. These
constraints will typically come from other functional areas of the business such as
marketing, manufacturing and finance. The design brief will also include constraints
derived from the work that has been done with the prototypes.
The designer’s task is to take the design brief and translate it into a design that
meets the requirements of the team responsible for the product. This means that it
has to operate effectively, appeal to consumers and be capable of being
manufactured at a cost that consumers can afford and that generates
a return for the firm.
In reality there may be no single designer. Instead there may well be a design
team that brings together different types of designer including: technical designers
able to design systems; industrial designers able to ensure functionality; and more
traditional designers able to create a form that will have appeal for consumers.
Often some or all of these design skills will not be available in-house and will need to
be bought in, probably from a design house that has expertise and experience in the
field.

Mini Case: Jonathan Ive


There is no question that design drives Apple. Successive Apple products,
certainly from the iMac onwards, have been distinctive by virtue of their
aesthetic appeal and their functionality. Both were qualities that Steve Jobs
constantly strived for, yet the person who ensured they were embedded in
Apple products was not so much Jobs himself as British-born designer,
Jonathan Ive. It was Ive who translated Jobs’ ideas, beliefs and even some of
his obsessions, into sought-after products.
Ive, who joined Apple some 20 years ago and cut his teeth by redesigning
the ill-fated Newton MessagePad PDA, is now responsible for Apple’s
Industrial Design as well as Human Interface (i.e. hardware and software
design). In this role he gets to design, and oversee the design of, the products
of one of the world’s most valuable brands. To help him he has a loyal team of
designers to work with and he has been adding to those ranks with carefully
chosen newcomers, including recently some with experience in health
monitoring, high-end fashion and even so-called ‘wearables’.
The team are based at Apple’s headquarters in Cupertino in Silicon Valley,
in a subterranean design studio, from which the shape and form of all
Apple’s hi-tech products emerges. It has been described as a playground for
designers, but it is the source of all that is distinctive about Apple. Before the
Ive era at Apple, it was a company run by engineers. But according to Kahney
(2013), engineering now reports to design in Cupertino. This is the position all
designers wish they were in, to have input on new projects from the start and
then involvement to follow through on a project to the very end. As Kosner
(2013) puts it, ‘What great designers want is a seat at the table’. This was
something Steve Jobs in his unique position at Apple could provide. Jonathan
Ive repaid the favour with an obsessiveness that matched Jobs’ own.
Source: Kosner (2013).

Market evaluation
Inventions may be technically very sophisticated and result in what are technically
great products but, as Chesbrough (2006a: 64) reminds us:

‘There is no inherent value in a technology per se.’


Indeed he goes on to point out that:

'the economic value of a technology remains latent until it is commercialized in some way.'
By latent he means untapped and unrealised. Thus commercialisation has a critical
role to play in the innovation process. For its part commercialisation requires the
implementation of a ‘business model’. A business model, as we learnt in Chapter 1,
serves the dual function of enabling value capture and value creation (Chesbrough,
2006b: 108) to take place and it is at the market evaluation stage that decisions about
the business model are made.
Value creation as we saw in Chapter 1 involves articulating the ‘value
proposition’. This determines the actual value (i.e. benefit) that the new
product/service provides for the customer, while at the same time specifying who
the customer is. This in turn will mean specifying the particular markets and market
segments in which the new product/service can be sold. The extent of the value
proposition will not necessarily be related to the sophistication of the technology.
Seemingly modest advances in technology can result in products that deliver very
powerful value propositions as far as customers are concerned. In the case of
photocopiers, the small desk-top copiers developed by Japanese manufacturers in
the late 1970s were not technically very sophisticated, but they offered a much more
flexible, personalised service (Chesbrough, 2006a), something that customers valued
highly. By the same token technical staff often overvalue technical sophistication.
Also, some products require 'complementary assets' such as product support,
after-sales service and training, and it is essential that this is recognised and
arrangements made to ensure such assets will be available. Without these aspects of
an innovation being clearly defined there is a real danger that the innovation will
appeal to no one other than the inventor.
Value capture, on the other hand, involves figuring out how to make money from
the innovation, in particular the type of revenue generation mechanism that will
produce the highest possible return. A variety of revenue generation mechanisms
are possible, including outright sale, leasing and the razor and razor blades model
where the product is sold at or below cost while consumables are charged at a high
margin. Selecting the most appropriate revenue generating mechanism can be
critical. This was the case with Xerox and the introduction of the world’s first plain
paper copier, the Model 914 (Owen, 2004). Recognising it had limited marketing
expertise, Xerox initially tried to license its novel plain paper copier technology to
big firms like IBM, Kodak and General Electric but, given the sophisticated
technology associated with plain paper copying, these firms judged that the new
copiers would be too expensive and declined. In the end Xerox hit on the idea of
leasing the copier and charging a royalty for any copies beyond normal monthly
usage. It was an almost instant success because leasing offered a low-cost way of
acquiring the machine, and once acquired companies found the new technology was
so easy to use and gave such good results, that staff made far more use of the new
copiers than had been expected.
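To make the choice of revenue generation mechanism more concrete, the short Python sketch below compares three of the mechanisms mentioned above. It is purely illustrative: the prices, volumes, contract lengths and margins are invented assumptions for the purposes of the example, not figures from the Xerox case.

# Illustrative comparison of three revenue generation mechanisms.
# All figures used below are hypothetical assumptions.

def outright_sale(units, unit_price, unit_cost):
    """Profit from selling the product outright."""
    return units * (unit_price - unit_cost)

def leasing(units, monthly_fee, months, unit_cost):
    """Profit from leasing the product over a contract period."""
    return units * monthly_fee * months - units * unit_cost

def razor_and_blades(units, unit_price, unit_cost,
                     consumables_per_year, consumable_margin, years):
    """Product sold at (or below) cost; profit comes from consumables."""
    product_profit = units * (unit_price - unit_cost)   # may be negative
    consumable_profit = units * consumables_per_year * consumable_margin * years
    return product_profit + consumable_profit

if __name__ == "__main__":
    print(outright_sale(units=1_000, unit_price=500, unit_cost=300))
    print(leasing(units=1_000, monthly_fee=25, months=36, unit_cost=300))
    print(razor_and_blades(units=1_000, unit_price=300, unit_cost=300,
                           consumables_per_year=50, consumable_margin=2, years=3))

On these invented figures leasing generates the highest return, but a different usage pattern would change the ranking, which is precisely why the revenue generation mechanism has to be evaluated rather than assumed.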
Thus market evaluation has a vital role to play in presenting an accurate picture
of the potential value of a new product/service. Without a clear perspective on the
value proposition, for instance, there is a risk that the value of a new
product/service remains unquantified. This in turn raises the prospect that those
developing the new technology may seriously overvalue it, resulting in poor sales
and ultimately a failed innovation.

Production engineering
Essentially production engineering is concerned with how the product will be
manufactured. Whereas prototypes are usually made on a one-off basis, the final
product is likely to be manufactured in substantial quantities and this calls for quite
different processes and skills.
The initial decisions surrounding production engineering concern who is to
undertake manufacture. Will the product be made in-house or will it be outsourced
to subcontractors? Developments in IT combined with improvements in
communication mean that it is now possible to design a product on one side of the
world and produce it on the other. In a whole range of industries it is now
commonplace for manufacturing to be contracted out.
Assuming manufacture is going to take place in-house, production engineering
involves a raft of decisions surrounding the way in which the product is made. Many
of these decisions will in turn be linked to the type of manufacturing system (see
Figure 5.3) being used. There are very often big differences between a functional
prototype used for development and the final product, and this reflects the fact that
at the production engineering stage the product is often revised to make it easier
and cheaper to manufacture. Such work typically centres on a close examination of
assembly operations. With careful preparation and effective design it is usually
possible to simplify the design, thereby eliminating a number of assembly
operations. Ulrich and Eppinger (2003) provide a number of examples of design
changes of this type:
▮ reducing the parts count
▮ using standardised components
▮ using self-aligning parts
▮ using assembly operations that require a single, linear motion.
Figure 5.3 A manufacturing system

Source: ‘Towards the Fifth-generation Innovation Process’, Rothwell, R, International Marketing Review,
Volume: 11, No.1, Copyright © 1994, Emerald Group Publishing Limited.

Changes such as these reflect the volume of production that is anticipated. With the
prospect of volume production it is worthwhile redesigning and revising the product
in order to make assembly quicker, easier and cheaper.
Trott (2002: 150) gives the example of a toolbox. As a prototype it is initially
produced on a one-off jobbing basis, with the various components held together
by industrial fasteners. With a satisfactory prototype, the
production-engineering function then proceeds, with the help of the designers, to
simplify the design to make it suitable for manufacturing in large batches. This
entails eliminating the need for fasteners and investing in specialist machinery that
will shape the individual components. The result is a big drop in the number of parts
required and an assembly process that involves all the parts being held in place via
‘push and snap’ operations undertaken by assembly workers. Hence one gets a
product that has the external characteristics and functionality of the original design
and yet is capable of being assembled more quickly and more cheaply. Not only
does this indicate the sort of activities carried out at the production-engineering
stage, it also illustrates an important principle, highlighted by Trott (2002): namely
that, as the volume of production increases so the most appropriate method of
manufacture also changes.

Market/pilot testing
Having ensured that the product can be made in a way that will ensure it appeals to
consumers while at the same time making money for the company, further testing
has to be carried out to ensure that it can go into the marketplace. The testing is
likely to be of two types. There will be market testing to elicit customer reaction to
the product, while at the same time there may well be a need for statutory testing to
ensure that the product meets appropriate safety requirements or to accredit the
product so that it can be sold.
Market testing involves launching the product on a trial basis, usually within a
limited geographical area. This kind of testing typically has two objectives,
described by Baker and Hart (1999) as mechanical and commercial. Mechanical
testing is designed to ensure that the distribution systems that deliver the product to
the customer are functioning effectively. With innovations this can be particularly
important. By launching the product in a limited and controlled fashion it should be
possible to identify potential problems and rectify them. Commercial testing, on the
other hand, is designed to gather data from which to construct sales forecasts and
budgets and to see the reaction of competitors.
As well as market testing there may well be a requirement for further physical
testing, this time with the final product or something that is very close to the final
product. Testing at this stage will have less to do with developing the product and
more to do with ensuring it will be safe in the hands of consumers. Consequently,
much of the testing at this stage will involve interaction with consumers. Testing
may be a statutory requirement or necessary for the product to gain type approval
or certification before it can be put on sale or enter service (e.g. crash tests for
cars and blade-off and bird-strike tests for aero-engines).

Full-scale manufacture and launch


Before full-scale manufacture can actually take place the equipment that forms part
of the manufacturing system (see Figure 5.3) has to be commissioned. This is
designed to ensure that not only are the individual items of equipment that comprise
the manufacturing system functioning as they should, but that they are also
interacting effectively. With a complex manufacturing system and a sophisticated
control system this can be a demanding task. Pieces of equipment will often work
perfectly in isolation, yet prove quite ineffective once put together. Consequently,
the commissioning process is intended to prove the system and ensure it is
functioning as planned.
Even with a fully effective manufacturing system, those who are going to operate
it have to be recruited and trained. This is something that has changed over the past
20 years as companies in the West have learnt from Japanese companies the
importance of careful planning and preparation as far as the labour force is
concerned. Not only is it important to select people with appropriate aptitude and
skills, they have to be trained to use the equipment.
Finally manufacturing can begin. Even this is not likely to be full-scale
manufacture to begin with. Typically, firms will deliberately plan their production so
that initially they are producing at perhaps 20 per cent or 30 per cent capacity. This
allows those operating the system to move up the ‘learning curve’ as they become
more familiar with the system. The learning curve concerns the way in which,
particularly in batch operations, it will often take less time to manufacture the
hundredth item than the first. This can be an important feature of manufacturing in
some industries. Aerospace is a good example. Firms like Airbus and Boeing find
that, even with sophisticated production systems, it will typically take much less
time to produce an airliner as output expands. This reflects the fact that this sort of
learning relies heavily on tacit knowledge and is a cumulative process.
Consequently, as output expands, learning increases and it can take less time to
manufacture the product.
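The learning curve can also be expressed quantitatively. One common formulation, Wright's learning-curve model (introduced here as an illustration rather than taken from the text), assumes that each doubling of cumulative output reduces the labour hours per unit by a fixed percentage. A minimal Python sketch with invented figures:

import math

def unit_time(first_unit_hours, cumulative_unit, learning_rate=0.8):
    """Hours to build the nth unit under Wright's learning-curve model.

    learning_rate=0.8 means each doubling of cumulative output cuts
    unit time by 20 per cent (an assumed, illustrative figure).
    """
    exponent = math.log(learning_rate) / math.log(2)   # negative exponent
    return first_unit_hours * cumulative_unit ** exponent

if __name__ == "__main__":
    for n in (1, 2, 10, 100):
        print(f"unit {n:>3}: {unit_time(1_000, n):7.1f} hours")

With an assumed 80 per cent learning rate the hundredth unit takes less than a quarter of the time of the first, which is the kind of effect described above for firms such as Airbus and Boeing.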
There are other reasons why firms will typically ‘ramp up’ (Ulrich and Eppinger,
2003) production gradually from a relatively low base. Products produced during the
ramp-up can be evaluated to spot potential flaws. They can also be supplied to
preferential customers who will not only evaluate the product but also provide
valuable marketing data in terms of their perception of the product. In addition,
ramping up gradually can also help with stock building. It can be disastrous to raise
customer expectations by promoting a new product, only to deny customers access
to it because the product has not yet reached retail outlets. Gradually building up
production can allow time for distributors to build up stocks prior to the product
being formally launched. Then, eventually full-scale production can get under way.
The market-launch phase brings another round of potential problems. The
activities involved are likely to be quite different from those encountered earlier,
however. Since the market-launch phase is mainly concerned with marketing it will
not be covered in detail here.
The market-launch phase essentially requires the coordination of a whole range
of different activities. Some of the activities include:
▮ ensuring that retail outlets have appropriate stocks
▮ booking advertising space
▮ designing and producing advertisements
▮ booking exhibition space
▮ ensuring that literature about the product has been designed, written and
printed
▮ informing the press and ensuring that they have had time to familiarise
themselves with the product.

The list is at best indicative of the sorts of activities that have to be undertaken.
They all form part of the process of introducing the product to the public. While it
may require a different set of skills, getting this phase of the innovation process
right is just as important as all the others.
This reinforces one of the central features of the innovation process, and one that researchers have increasingly come to recognise. It is very easy to see innovation in heroic terms: the pursuit of new ways of doing things and the struggle to get something to work are usually what people think of when they think of innovation. Yet there is actually a lot more to innovation. It is a lengthy process, even when activities are carried out concurrently, and it includes many different activities, every one of which needs to be carried out effectively if the innovation is to succeed.

Models of the innovation process


While the generic model of the innovation process enables the various activities
associated with innovation to be identified, it does not reflect the range of different
approaches to innovation that are available. In particular, it fails to take account of
some of the newer models of the innovation process that have been introduced in
recent years.
Rothwell (1994) identifies no fewer than five models of the innovation process.
Significantly, he suggests that these models form part of a continuum that has seen
new models of innovation introduced over the last half-century. All five of these
models are presented here.

Technology push
The technology push model (Figure 5.4) is very much the traditional perspective on
the process of innovation. It is effectively the research-led version of the generic
model presented earlier, since one of the features of this model is that it is driven by
developments in science and technology. It assumes that more technology, brought
about by additional expenditure on R&D, will lead inexorably to more innovation.
The process is entirely linear and sequential, each stage following on from the
completion of the previous one. The model virtually ignores the marketplace, which
is portrayed as being passive and simply taking what technology has to offer. The
model is naïve as far as the process itself is concerned. We are told very little about
the nature of the process. Having said that, there are industries where the innovation
process does take place in very much this way – for example, the pharmaceutical
industry.

Figure 5.4 Technology push process

Source: ‘Towards the Fifth-generation Innovation Process’, Rothwell, R, International Marketing Review,
Volume: 11, No.1, Copyright © 1994, Emerald Group Publishing Limited.

Demand pull
Recognising the passive role given to marketing, theorists in the late 1960s and early
1970s came up with a new perspective on the process of innovation. In the demand
pull model, the role of the market is central. According to Rothwell (1994), the move
to a more market-centred type of innovation process reflected the maturing of many
technology-based industries and a growing realisation that consumer requirements
were becoming more sophisticated.
In the demand pull model (Figure 5.5), the market forms the source of ideas for
new innovations. Knowledge of consumer requirements is seen as driving research
and development rather than the other way around. This is a variant on the generic
model, if one sees consumer needs as the source of new ideas that lead to
innovation.

Figure 5.5 Demand pull process

This model of the innovation process is appropriate for mature technologies/industries where firms’ innovation effort is devoted to minor improvements that are better at meeting consumers’ requirements.

Mini Case: Iridium


Barry Bertinger, an engineer with the electronics company Motorola, first had
the idea for Iridium, a highly sophisticated satellite phone, in 1985 after his
wife, who worked as a real estate executive, complained that she could not
reach clients via her cell phone when they were on holiday in the Bahamas.
Barry and two other engineers working at Motorola’s Satellite
Communications Group in Arizona developed the concept behind Iridium – a
constellation of 66 low-Earth-orbiting (LEO) satellites that would allow
subscribers to make phone calls from any location on the planet. Although his
superiors at Motorola actually rejected the Iridium concept, it was no less a
person than Robert Galvin, Motorola’s chairman at the time, who gave
Bertinger approval to go ahead with the project. Robert Galvin, and his
successor and son Christopher Galvin, viewed Iridium as a potential symbol
of Motorola’s technological prowess.
Communications satellites, which had been operational since the 1960s,
were typically geostationary satellites that orbited at a relatively high altitude
of some 22,000 miles. At this altitude, large phones were required and were
subject to an annoying quarter-second voice delay. Iridium’s innovation was to use a large constellation of low-orbiting satellites which, because they orbited much closer to the earth (400–450 miles), allowed the phones to be much smaller and made the voice delay virtually imperceptible.
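A rough back-of-the-envelope calculation (illustrative only; the figures are not part of the original case) shows why altitude matters so much. Radio signals travel at about 300,000 km per second, so the time for a signal to go up to a satellite and back down is

$$t_{GEO} \approx \frac{2 \times 35{,}000 \text{ km}}{300{,}000 \text{ km/s}} \approx 0.23 \text{ s} \qquad t_{LEO} \approx \frac{2 \times 700 \text{ km}}{300{,}000 \text{ km/s}} \approx 0.005 \text{ s}$$

A hop via a geostationary satellite at roughly 22,000 miles (about 35,000 km) therefore adds close to a quarter of a second, whereas the same hop via an Iridium satellite at 400–450 miles (about 700 km) adds only a few milliseconds, which is imperceptible in conversation.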
On 1 November 1998, after launching a $180 million advertising campaign
and an opening ceremony where Vice President Al Gore made the first phone
call using Iridium, the company launched its satellite phone service charging
$3,000 for a handset and $3–$8 per minute for calls. The results were
devastating. The Iridium satellite phone system was plagued by problems with
suppliers, batteries that needed to be recharged 2–3 times a day, calls that
suffered from interference and frequent cut-offs, gaps in global coverage and
competition from cell phones whose coverage and quality were rapidly
improving. By April 1999, the company had a mere 10,000 subscribers. Facing
negligible revenues and debt interest amounting to $40 million per month, the
company came under tremendous pressure. In April, CEO Dr Edward Staiano
quit. In June 1999, Iridium fired 15 per cent of its staff. By August, Iridium’s
subscriber base had crept up to 20,000 customers, significantly less than the
52,000 necessary to meet loan covenants. Two days after defaulting on $1.5
billion in loans, Iridium filed for Chapter 11 bankruptcy on Friday, 13 August
1999, making it one of the 20 largest bankruptcies in US corporate history. As Iridium Interim CEO John A. Richardson put it in August 1999,

‘We’re a classic MBA case study in how not to introduce a product. First we created a marvellous technological achievement. Then we asked how to make money on it.’
As a postscript, Iridium did in fact emerge from Chapter 11 and in a
restructured form is in business today, but with better handsets, more reliable
technology and competing in what is effectively a different market.
Source: Finkelstein and Sanford (2000).

Coupling
For many industries both technology push and demand pull innovation processes
are flawed. They both rely on innovation being a linear, sequential process.
Unfortunately, processes like this are said by some to encourage what is often
described as ‘over the wall’ behaviour (Trott, 2002: 215), where the departments
responsible for each stage carry out their task in isolation, providing little in the way
of guidance and help to the next department. It was to overcome precisely these
kinds of problems that the coupling model (Figure 5.6) evolved.

Figure 5.6 Coupling model process

A crucial difference between this model and the earlier ones is the presence of
‘feedback loops’. The lines of communication between the various functions carry a
two-way traffic. No longer can functions operate on an ‘over the wall’ basis,
forgetting about the process once their immediate tasks have been completed. In the
coupling model one has a series of distinct functions or stages, but they are
interacting and interdependent. Each phase is also linked or coupled (hence the
name) to the marketplace (see top of diagram) and the state of technology (see
bottom of diagram).

Integrated
The 1980s were characterised by powerful forces for change. In many fields the old
order and the old certainties that had prevailed in the years since the Second World
War gave way to new and much more intense competitive pressures. Developments
in technology, in both the computing and the communications fields, led to the
introduction of IT-based manufacturing systems that shortened product life cycles.
In parallel with changes in manufacturing technology came new ideas about
manufacturing management. Many of these new ideas, such as just-in-time
production and set-up reduction, came from Japan. Among the most powerful ideas
were notions of concurrent or parallel development. Applied to new product
development, this implies an end to the strictly linear and sequential processes
prevalent in the three models of the innovation process presented so far. Japanese
companies rely on project teams that integrate the various functions. Under such
arrangements the functions are brought into the new product development process
from the start, and joint group meetings ensure that issues such as manufacturability
are considered early in the process rather than near the end. Team-based new
product development therefore represents a much more integrated process (Figure
5.7).

Mini Case: Lessons from Apple


Not invented here, and very welcome
Innovation can come from without as well as within. Apple is generally
perceived to be an innovator in the tradition of Thomas Edison or Bell
Laboratories, employing gifted scientists and engineers who come up with
new ideas and new products that are the result of their moments of
inspiration. In fact, its real skill lies in bringing together its own ideas with
technologies from outside and then wrapping the results in elegant software
and stylish design. The idea for the iPod, for example, was originally dreamt
up by a consultant whom Apple hired to run the project. It was assembled by
combining off-the-shelf parts with in-house ingredients such as its distinctive,
easily used system of controls. And it was designed to work closely with
Apple’s iTunes jukebox software, which was also bought in and then
overhauled and improved. Apple is, in short, an orchestrator and integrator of
technologies, unafraid to bring in ideas from outside but always adding its
own twists.
This approach, known as ‘network innovation’, is not limited to electronics.
It has also been embraced by companies such as Nike, Rolls-Royce, Marks and
Spencer and several drugs giants, who now appreciate the value of admitting
that not all good ideas start at home. Making network innovation work
involves cultivating contacts with start-ups and academic researchers,
constantly scouting for new ideas and ensuring that engineers do not fall prey
to ‘not invented here’ syndrome, which always values in-house ideas over
those from outside.
Source: Adapted from The Economist (2007b).
Figure 5.7 Integrated model process

Network
Finally, the 1990s saw the advent of what Rothwell (1994) describes as a ‘fifth-
generation’ innovation process. Termed the network model (Figure 5.8), this reflects
the way in which some organisations increasingly rely not only on their own internal
resources for innovation but also draw on external resources, either for the
development of major sub-systems and components or to undertake specific phases
of the innovation process (see Table 5.1). This is normally achieved through
alliances, agreements and contracts with third-party organisations. The use of
networks reflects continuing developments in computing and communications
which have facilitated information transfer and outsourcing. The resulting vertical
disintegration has led to organisations ceasing certain activities such as research
and some forms of development, preferring instead to buy them in as and when
needed. Companies that utilise the network model of innovation increasingly take on
the role of systems integrator where they manage the innovation process and the
integration of the development activities carried out by partners. An example would
be Apple’s role in developing the iPod (see mini case).
Table 5.1 Examples of network innovation

Sources: Cox et al. (2003); Sherman (2002).

Network innovation is by no means universally practised. Only in certain industry sectors has it proved popular. Good examples are pharmaceuticals,
aerospace and computing. In pharmaceuticals, developments in biotechnology have
fostered the growth of small specialist biotechnology companies on which large
pharmaceutical companies increasingly rely as a source of innovation. In
computing, the growth of specialist companies making computer peripherals has
helped to make Silicon Valley in California viable. Finally, in aerospace, the
prohibitive cost of developing a new airliner or a new engine has led many
aerospace giants to put together joint ventures and partnerships that bring together
a number of suppliers to engage jointly in new product development.
While developments in IT and communications have helped to make the sort of
‘collaborative innovation’ implied by the network model viable (i.e. through the
electronic transfer of CAD-generated design data to the manufacturing function),
the use of this approach is not entirely the result of facilitating factors. Rising
consumer expectations and increasing emphasis on choice and variety have also
helped to fuel an increasing emphasis on innovation and new-product development.
Thus, companies anxious to provide consumers with ever greater choice have
increasingly sought to look outside their own organisation for ideas and
technologies. By looking outside they have access to a greater range of capabilities.
Again the Apple iPod provides a very good example of this type of network
innovation, for although Apple can undoubtedly claim the credit for this highly
successful innovation, in reality, as Figure 5.8 shows, the company relied on a
number of key partners for many aspects of the innovation.
Figure 5.8 Network model: the iPod – Apple and its partners

Source: Sherman (2002).

Mini Case: Connect & Develop


Procter & Gamble is a multinational company well known for its wide range
of consumer products, covering everything from snacks to hygiene products
and detergents. It has a massive research capability employing some 7,500
scientists whose task is to generate a steady stream of new products. A
measure of this research capability is Procter & Gamble’s $5 million annual
spend on R&D and the eight patents per day that its scientists notch up
(Dodgson et al., 2006).
However, by 2000 there was concern about the ability of the company’s
conventional invent-it-ourselves model of innovation to deliver a sufficient
number of new products. The company’s senior management realised that
changes in the business environment meant that increasingly innovation was
being carried out by small and medium-sized companies. Recognising that
simply spending more on R&D was unlikely to produce the number of new
products required, Procter & Gamble opted for a new strategy.
Called ‘Connect & Develop’, the new strategy was based on research into
the performance of a small number of innovations derived not from the
company’s own labs but from external sources (i.e. other companies,
universities, etc.), which showed that many of these innovations had been
very successful. The goal of the Connect & Develop strategy was that
ultimately some 50 per cent of the company’s innovations should come from
external sources. Retaining its commitment to its existing labs as a source of innovation, the company hoped that by using external sources as well it could significantly increase its overall level of innovation. As part of the strategy Procter &
Gamble systematically searches throughout the world for promising ideas and
technologies, and then applies its own manufacturing, marketing and
procurement capabilities to turn them into successful innovations. Examples
of products that have come through this route are: Olay Regenerist, Swiffer
Dusters and Crest SpinBrush.
How does it work? Every year Procter & Gamble produces a top-ten needs
list for each of its businesses, showing those consumer needs which, if
addressed, would generate the highest sales growth. The needs lists are then
developed into scientific problems to be solved. The company uses its own
proprietary networks as well as a range of open networks, to seek out
potential technologies that can address these needs. The proprietary networks
include some 70 technology entrepreneurs working out of six regional
connect-and-develop hubs focusing on technologies that are the speciality of
the region, who seek out potential new technologies by meeting with
university and industry researchers around the world. Procter & Gamble also
uses its top 15 suppliers as a network. The open networks used comprise a
range of independent technology-seeking networks, including NineSigma and InnoCentive, commercial enterprises that specialise in connecting companies
seeking solutions to technology problems with companies, universities,
government and private labs and consultants with the capability to solve the
problems. Once a potential technology has been identified through one of the
networks, then it is given an initial screening by one of the technology
entrepreneurs before being sent to the business concerned for more detailed
evaluation by its R&D staff and brand managers. If the technology is
evaluated positively then the company’s external business development
(EBD) group will begin negotiations to acquire a licence or set up an
appropriate form of collaboration. Procter & Gamble hasn’t yet reached its
target for externally sourced innovations, but at 35 per cent it is well on the
way. More significantly, the Connect & Develop strategy has effected a
cultural change within the company in terms of how it goes about innovation.
Source: Huston and Sakkab (2006).

Open innovation
The first four of Rothwell’s five models are all effectively closed models of
innovation, in which a single firm uses its own internal resources and capabilities to
undertake all the activities that form part of the generic innovation process. Such
companies are typically vertically integrated. In contrast, much attention has in
recent years come to focus on what Chesbrough (2006a) terms ‘open innovation’,
where innovating companies increasingly utilise external sources in order to carry
out innovation.
The network model is effectively a form of open innovation, because it relies on a
measure of externalisation in order to complete the activities required for
innovation. The ideas/discoveries will come from inside the company, but when it
comes to developing them into innovations, external organisations may be
used for certain activities. In the development of chilled ready-meals for example,
Marks & Spencer relied upon a number of external partner organisations to
implement its ideas for innovation (Cox et al., 2003). Other examples are shown in
Table 5.1.
Open innovation carries the logic of the network model a stage further. With open
innovation firms utilise external resources for innovation in one of two possible
ways:

▮ either, taking internally generated ideas/discoveries and using an external route to market via a third-party organisation (perhaps through a licensing agreement), so that the latter then develops the ideas/discoveries into marketable products/services which it then markets
▮ or, actually sourcing ideas/discoveries themselves from external organisations,
with subsequent development taking place internally using the firm’s own
resources/facilities.
Each of these possible paths to innovation is portrayed in Figure 5.9, which also
shows the route followed for closed innovation.
Figure 5.9 Open innovation

The rise of open innovation has been matched by the decline of the ‘corporate’
form of innovation which Chesbrough (2006a) terms ‘closed innovation’. Under this
system innovation was a technology-led affair mainly undertaken by large,
vertically integrated companies with huge corporate R&D labs, like Xerox’s PARC and AT&T’s Bell Labs (Vaitheeswaran, 2007), which were the source of new
discoveries driving the innovation process, and where success relied on retaining
tight control of intellectual property and being first to market. This was the model of
innovation that was extolled by Schumpeter and became dominant during the
second half of the twentieth century. However, the rise of first network-based
models of innovation and now open innovation has challenged (but by no means
eliminated) the closed innovation model.
Why? What has led to this change? The reasons for this are changes in the
external environment, or what Chesbrough (2006a: xxv) refers to as the ‘landscape’.
Of these changes two have been of particular significance:

▮ greater mobility of knowledge, brought about by the growth of universities and their research capability, a massive increase in the proportion of graduates in the labour force and greatly increased mobility of staff who are no longer willing to stay with the same company for life
▮ greater mobility of capital which, through the rise of venture capital in all its
forms, has led to a big growth in technology-based spin-off, spin-out and new
start companies.
Open innovation implies a clear move away from an approach to innovation based
on vertical integration (the old corporate model) to one based on vertical
disintegration, where innovation instead becomes much more flexible both in
sourcing new ideas/discoveries and in realising the commercial potential of
internally generated ideas/discoveries. Businesses that take the open innovation
route become very much more fluid and flexible with ideas, discoveries and
inventions increasingly flowing both in and out of the organisation (Dodgson et al.,
2008).
However, there is considerably more to open innovation than simply increased
flexibility. Some of the key features of open innovation are:

▮ it explicitly recognises that no one firm can hire all the best brains, hence the
importance of accessing external knowledge/expertise
▮ networking in various forms can provide the means of linking to external
knowledge/expertise
▮ it recognises that there are innovation strategies other than a first mover
strategy
▮ the management of intellectual property is vitally important in order to maximise its value, but this can be achieved in a variety of ways.
Under these circumstances it becomes vitally important to extract as much
knowledge from the external environment as possible.
How is open innovation conducted? If we think of open innovation as having two
forms:

▮ external sources
▮ external routes
then for each there is a range of options available. Among the external sources are
large companies, start-up companies, universities and technology brokers. Large
companies will tend to be ones with significant research facilities that may have
intellectual property that falls outside their normal product portfolio; start-up
companies are likely to be small, highly specialised enterprises with a research
capability but without the resources to bring new products to market; universities
are likely to have intellectual property derived from their research activities; and
technology brokers are in the business of linking owners of intellectual property
with users of intellectual property. External routes on the other hand will typically
be some form of licensing agreement or new venture creation, perhaps through a
joint venture or a spin-off company.
It would be wrong to imagine that open and closed innovation are mutually
exclusive choices. They are not. Most companies that use open innovation are likely
to use closed innovation as well. Procter & Gamble, one of the best-known users of open innovation, for example, currently obtains about 35 per cent of its innovations (Huston and Sakkab, 2006) from external sources. Impressive though this figure is, it still means that Procter & Gamble maintains a very substantial research capability,
though one that is now complemented by externally sourced technologies.

Case Study: The Chilled Meals Revolution


What did you have for dinner last night? We are increasingly eating a range of exotic meals, not in restaurants or collected from the takeaway, but prepared in our own homes in a matter of minutes. Chicken tikka, chicken
Madras, mango chicken curry, chicken chow mein, these are just a few from
the wide range of ready-meals that are available in the chiller cabinets of our
supermarkets these days. They offer high-quality, ready-prepared meals at
reasonable prices. But it was not always so. Chilled ready-meals such as
chicken tikka are a relatively recent innovation that only began to appear on
our supermarket shelves in the early 1990s, pioneered initially by multiple
retailer Marks & Spencer.
Prior to the introduction of chilled food, ready-made meals were available
in our supermarkets, but they came as frozen foods. Unilever’s frozen food
subsidiary Birds Eye introduced the first ‘TV dinners’, as they were known, in 1969. Over the years, the freezer cabinets of Britain’s supermarkets became
home to a range of ready-meals. Social changes in the 1970s and 1980s, such
as the increasing number of women working full-time and the increasing
number of single-person households, meant a steady increase in the
popularity of these kinds of products. Though the range of meals became
steadily more sophisticated, the products themselves did not. They might look
attractive, but when it came to eating them, most were nothing like the ready-
meals we have today. As frozen foods, they were hampered by the fact that freezing food and then thawing it to reconstitute it had an adverse effect on both the texture and the flavour of the food, inevitably making the meals less palatable. This was a major drawback that constrained the growth of the
market for ready-prepared meals sold in supermarkets. The solution was not
to freeze the food but simply to chill it. Chilling involves lowering the
temperature of the food to about 5 degrees centigrade but not actually
freezing it. Keeping food at a low temperature helps to preserve it (for a time
at least), while not actually freezing it avoids the problem of damaging the
texture and flavour of the food. However, as Cox et al. (1999) point out,
chilling presents formidable logistical difficulties, since the meals are highly
perishable and have a very limited shelf-life, with the result that the maximum
period of time that can elapse between production and final consumption of
such products is a few days, rather than the weeks or months possible with frozen
foods. Without very careful and precise coordination of supply and demand,
the premium price associated with delivering to the consumer a superior
product would be more than absorbed by high wastage rates.
By the late 1980s Marks & Spencer felt recent developments in technology,
combined with their proven and long-standing skills in relational contracting
in the textile and clothing sector, offered scope for introducing a range of chilled products in the form of ready-meals.
The technological developments that formed part of this innovation
covered both consumption and production. On the consumption side, the
introduction of microwave ovens in the early 1970s and their widespread use
in domestic households meant that a means of quickly and easily preparing
and heating chilled ready-meals was readily available. On the production side,
developments in IT systems, especially in the field of communications
technology and data management, helped to provide retailers with an
unprecedented degree of control over their operations.
The IT developments centred on electronic point of sale (EPOS) systems
based on laser-scanning technology introduced in the 1980s. These helped to
transform retailers’ ability to exercise detailed operational control over the
goods going through their stores. EPOS systems, which scanned all the goods
going through the check-outs, enabled retailers to link their inventory
replenishment to consumer requirements. No longer did they have to estimate
demand, since EPOS systems enabled them to link their purchases of
replacement inventory directly to consumer purchases. A key feature of this
was the use of electronic data interchange (EDI) systems. EDI in particular
enabled retailers to manage inter-firm coordination, between themselves,
manufacturers and distributors, in real time. Working in real time brought an
unprecedented degree of precision, both to inventory management and
purchasing. No longer was it necessary to estimate demand by laboriously
checking the stock on the shelves to see which were empty or at least needed
restocking. EPOS systems using barcodes on all items of stock did this
automatically. The systems were also highly efficient as EDI replaced paper-
based administrative systems with computer links. Effective inter-firm
coordination required a high degree of systems compatibility in order to
provide links that would facilitate data transfer between retailers,
manufacturers and logistics/distribution companies. The achievement of the
necessary compatibility can be directly attributed to the work of a trade
association (Bamfield, 1994), the Institute of Grocery Distribution (IGD). The
IGD brought together retailers, manufacturers and distributors to establish a
set of common standards for barcoding. Barcoding was an essential element
in ensuring rapid and easy data transfer.
To add to the wealth of data that retailers now possessed in relation to the
goods going through their stores came developments in data warehousing and
data mining. Retailers introduced store loyalty cards in the 1990s so that they
could link the data coming from their EPOS/EDI systems to individual
consumers. This in turn permitted retailers to record the activities of
consumers on a regular basis. Then, using data mining they could establish
consumer buying patterns and trends. Armed with this sort of data, retailers
were in a position to exercise a high level of coordination between all the
parties involved in producing and selling goods to consumers. This was
particularly significant for food items where the perishable nature of the
goods was an important issue.
The introduction of these various technologies created an infrastructure
that provided scope for innovation in ready-meals, driven not by
manufacturers but by retailers. Marks & Spencer began with a range of meat
pies and quiches marketed under its St Michael brand name. Their strategy for
chilled ready-meals was that they should be promoted as a substitute for
takeaway meals or even restaurant meals, which had been increasing both in popularity and in the range of dishes available. The key elements in promoting
these products were variety, novelty and quality (Cox et al., 2003). As such,
chilled ready-meals were marketed as high-quality, premium-priced products.
Marks & Spencer’s strategy was that as quality substitutes for restaurant
meals, their range of chilled ready-meals should offer an extensive and
constantly changing array of new products that mirrored customer eating
trends. To provide them with the necessary variety and choice, as well as new
offerings, they turned to small specialist food manufacturers. These ranged
from micro-kitchens employing fewer than five people to larger concerns
such as Hazlewood Foods, though most were relatively small concerns. For
retailers like Marks & Spencer the advantage of using several small
manufacturers was the flexibility offered by small suppliers (Cox et al., 1999).
These small firms manufacture in small batches, which is highly desirable
given the relatively short shelf-life of the product. Being small, these firms are
also highly specialised. Many specialise in particular product bases (e.g.
poultry, fish, etc.) and ‘ethnic’ recipes (e.g. Indian, Thai, Italian, etc.).
Specialisation provides scope for retailers to offer a very broad product range and also facilitates the rapid development of new products. Given the
access to customer behaviour provided by customer loyalty schemes (like
Tesco’s Clubcard), retailers were anxious to be able to identify new market
niches and fill them with new products as quickly as possible. Of the small
specialist food manufacturers used by the major retailers, S & A Foods of
Derby is typical. The company was started in 1986 by Perween Warsi after she
despaired of ever finding a decent samosa in her local supermarket in Derby.
S & A Foods began as a micro-kitchen supplying a range of chilled Indian
ready-meals sold as own label products for retailers like Marks & Spencer and
has grown to the point where it has two factories manufacturing Indian meals.
As a specialist food manufacturer S & A Foods does not engage in marketing,
branding or distribution, focusing its efforts instead on developing new
products.
In developing chilled ready-meals, firms like Marks & Spencer assembled new-product-development teams that brought together employees from the specialist food
manufacturers with employees from the packaging companies and their own
staff drawn from their food technology and hygiene departments. The new
product teams formed a network for pooling knowledge and drawing on data
from a variety of sources in order to facilitate innovation. From the retailer
came data on purchasing patterns and trends derived from its IT system.
From the specialist food manufacturers came ideas for new dishes and
guidance on which dishes would be more suitable for chilling and reheating.
From the packaging companies came guidance on packaging materials
suitable for use in microwave ovens. This was particularly important as
microwavable meals necessitated the development of ‘active’ forms of
packaging designed specially for use in microwave ovens. Working on a
collaborative basis the new product development teams devised a range of
ready-meals suitable for chilling.
Figure 5.10 The innovation network for chilled ready-meals

Source: ‘New Product Development and Product Supply within a Network Setting: The Chilled Ready-
meal Industry in the UK’, Cox et al, Industry and Innovation, Vol. 10, No. 2, Copyright © 2003
Routledge.

While the innovation network based on new product development teams generated new products, retailers like Marks & Spencer were able to use their IT systems to coordinate production and distribution to ensure that the right
amount of stock got to the right store at the right time at the right temperature
so that it was available when consumers wanted it.
To gauge the success of chilled ready-meals as an innovation one has only
to travel a few miles on a motorway and count the number of large trucks
with the words ‘Chilled Distribution’ painted on the side. Similarly, a visit to
any supermarket will reveal rows of chiller cabinets offering a wide range of
Indian, Chinese, Thai, Italian and traditional British ready-meals. A more
conventional evaluation reveals that sales of chilled ready-meals almost
doubled between 1993 and 1999.

Table 5.2 UK retail sales of chilled ready-meals 1993–1999

Source: Cox et al. (2003).

Questions
1 Why did retailers like Marks & Spencer choose to use small firms as their
suppliers?
2 What aspect of Marks & Spencer’s prior knowledge and experience
proved particularly useful in terms of the innovation process used to
develop chilled ready-meals?
3 Which model of the innovation process did Marks & Spencer adopt in
order to bring about the innovation of chilled ready-meals?
4 What benefits did Marks & Spencer (and the other retailers) obtain from
the particular innovation process they used?
5 What alternative models of the innovation process might Marks &
Spencer have used?
6 What enabling factors permitted firms like Marks & Spencer to use their
chosen model of the innovation process?
7 What do you think were the critical factors in achieving successful
innovation in this case?
8 Why was coordination vitally important and how was it achieved?
9 Using an appropriate series of market research reports, such as Mintel or
Key Note reports, show:
▮ how the market for chilled ready-meals has grown in the last decade
▮ how the shares of the chilled ready-meal market have changed over
the last decade.

Questions for discussion


1 Why is innovation often a lengthy process?
2 Where do innovations come from?
3 What are the relative merits of technological change and the market as sources
of innovation?
4 Why can it be problematic portraying innovation as a series of phases?
5 Distinguish between research and development.
6 Why is testing such an important part of the development process?
7 Which personal qualities do you think are required of those engaged in
development work?
8 What do you consider to be the most important feature of the network model of
the innovation process and why?
9 Which do you consider provides the better explanation of innovation –
technology push or demand pull?
10 Why has the network model of innovation become popular in recent years?
11 What kinds of organisation are likely to be associated with closed innovation?
12 What factors have led to the decline in closed innovation in recent years?
13 What are the implications of open innovation as far as individual lone
innovators are concerned?
14 What are the implications for policymakers of the increased popularity of open
innovation?

Exercises
1 Using an account of an innovation of your choice, prepare a report that
describes the innovation process. The report should:
a make clear the type of innovation
b use one of the models of the innovation process to make clear how the
innovation was conducted
c identify and describe the various steps or stages in the innovation process.
2 What is open innovation? What factors have led to this way of undertaking
innovation becoming much more popular in recent years?
3 Using one example of an innovation with which you are familiar, explain what
is meant by the term network innovation.

Further reading
1 Dyson, J. (1997) Against the Odds, Orion Business, London.
James Dyson’s autobiography is one of the best studies of the innovation
process. Covering the 15-year period from conceiving the idea for a cyclone
vacuum cleaner to his finally getting one into production and onto the market, it
explains in detail the steps involved including: how the idea arose, the building
of prototypes, testing them, design of product, the acquisition of manufacturing
facilities and the problems he faced in getting his new cleaner into the shops.
2 Owen, D. (2004) Copies in Seconds: How a Lone Inventor and an Unknown
Company Created the Biggest Communications Breakthrough since Gutenberg
– Chester Carlson and the Birth of the Xerox Machine, Simon & Schuster, New
York.
Another tale of how an invention found its way to market. The Xerox plain
paper copier was an extraordinarily successful innovation. This book outlines
just how hard it was to get the invention to market. It also provides very useful
insights into business models and how important they can be.
3 Chesbrough, H. W. (2006a) Open Innovation: The New Imperative for
Creating and Profiting from Technology, Harvard Business School Press,
Boston, MA.
Although many of the ideas of open innovation have been around for years, this
is the book that put it all together in a logical and coherent way. Well worth spending time on. It not only provides a very coherent explanation of the
nature of open innovation, it also provides some first-class case studies.
Perhaps it is a bit Silicon Valley-centric, but some very valuable insights and
details nonetheless.
4 Bruce, M. and Bessant, J. (2001) Design in Business: Strategic Innovation
through Design, FT Prentice Hall, Harlow.
A textbook that looks at the innovation process. As the title indicates, this is a
book that focuses primarily on design and the management of the design
function. However, it nonetheless provides a valuable insight into how
innovation comes about. It provides very specific input on key design aspects of the innovation process.
5 Ulrich, K. T. and Eppinger, S. D. (2003) Product Design and Development,
3rd edn, McGraw-Hill, New York.
Another textbook, this time one that explains the technical aspects of the innovation
process. For those without a technology background this book provides an
excellent account of the various activities that make up the innovation process.
One might almost say this is a ‘how to do it’ book.
Value Capture

Objectives
When you have completed this chapter you will be able to:
understand the concept of appropriability
appreciate the importance of value capture in the innovation process
analyse the appropriability mechanisms that are available to innovators
explain what is meant by complementary assets
analyse the different forms that complementary assets can take
identify the range of revenue generating mechanisms that are available to
capture value.

Mini Case: With the Beatles – EMI and the CT Scanner


EMI, or Electrical and Musical Industries to give the company its full title, is
probably best known as the owner of several well-known record labels,
including one that in the 1960s signed a then little known band called the
Beatles. However, knowing that entertainment could be a fickle business, EMI
used some of the cash generated by its success in the music business to fund
research being carried out by its relatively small electronics division. Among
the first projects to be funded was one proposed by Godfrey Hounsfield, a
senior researcher in the company’s Central Research Laboratory (CRL).
Hounsfield’s proposal offered the company an opportunity to diversify into
the fast-growing medical electronics field.
Hounsfield’s research linked together X-ray technology and developments
in computing and cathode ray displays. Conventional X-ray equipment was
used to generate a succession of images, taken by moving in a 160° arc
around a patient’s head, which were then stored on a computer. Hounsfield
and his team then used pattern recognition software to process the data and
display an integrated picture of the cross-section of the human brain. The resulting three-dimensional image of the brain was a big advance on the conventional two-dimensional X-ray image. A prototype head scanner was
installed at Atkinson Morley Hospital in London in 1971 and the first
successful diagnosis of a brain tumour occurred the following year.
EMI now recognised that it had a significant ‘invention’ on its hands and
steps were taken to obtain appropriate patent protection. However, in
exploiting its CT scanner technology, EMI faced some big challenges. It had
very limited experience of the medical equipment market and no exposure to the US, which was by far the biggest healthcare market, especially for advanced diagnostic equipment. While it had manufacturing facilities, they produced defence products, not medical equipment. Moreover, developing a commercially viable CT scanner was going to be very expensive. But EMI was
determined to exploit the technology itself by investing directly in
manufacturing and marketing. It enjoyed early success. By 1976 EMI’s
medical electronics division had expanded dramatically, having sold 300 units
at $0.5 million each, including large numbers in the US and Japan. However,
from this point onwards things quickly began to unravel.
Despite extensive patent protection, rival products began to appear. General Electric, the leading manufacturer of X-ray equipment, entered the market with a scanner of its own, followed by other X-ray manufacturers such
as Siemens and Toshiba. These companies not only had large sales forces in
place, they also possessed well-established product support teams which were
vital in ensuring that medical equipment worked reliably. EMI, in contrast,
had to create these facilities from scratch at short notice in a market with
which it was not familiar. By 1978 EMI’s medical electronics division had lost
more than $50 million. A year later, although Godfrey Hounsfield had been
awarded a Nobel prize for his work, financial weakness forced EMI to merge
with Thorn Electronics. In the same year the company began to withdraw
from the medical equipment market and eventually sold its scanner interests
to General Electric of the US.
Source: Bates et al. (2012).

Introduction
The case of the EMI CT scanner shows how it is possible for a company that is a first
mover/pioneer with an outstanding technological breakthrough to fail when it comes
to capturing value. In terms of Figure 6.1, where CT scanners are concerned the
innovator captured an initial slice of value but this proved to be a very short-term
gain because it was very rapidly deprived of value as competitors/imitators like
General Electric moved in to capture a very big slice of value in the long term.
Hence this chapter is devoted to the thorny issue of just how innovating companies ensure they capture value – that is to say, ensure they make money out
of their innovation. EMI was not unusual. Instances of innovators failing to make
money – that is to say, failing to appropriate the benefits of their technical prowess
and ingenuity – are actually remarkably common. Examples include VisiCalc in
spreadsheets, Osborne Computers in portable PCs and even Apple with its personal
digital assistant (PDA) – the Newton. It is not that these innovations were technical failures – somebody did make money from them, but it was the imitators/copiers rather than the innovators themselves. What is abundantly clear from the EMI case is that neither
technological capability, patent protection, nor even a first mover advantage is
sufficient to ensure successful innovation. Innovators have to pay careful attention
to the business model they apply to their innovation. As we saw in Chapter 1 it is the
business model that is crucial in unlocking the ‘latent’ value of an innovation in
order to ensure value is captured, or as Chesbrough and Rosenbloom (2002: 259) put
it, ‘the realization of economic value’.

Figure 6.1 Who captures value from innovation?

Source: ‘Profiting from technological innovation: Implications for integration, collaboration, licensing and
public policy’, Teece, Research Policy, Copyright © 1986 Published by Elsevier B.V.

In the past capturing value was often somewhat easier than today. In the
industrial era of product standardisation and mass production, innovations typically
took the form of new products. In these circumstances the innovators’ task was to
pack new technology into the product, protect it through patents, and then attach a
price ticket so that ownership could be transferred when it was sold. In an age of
mass production, scale was king. It implied lower costs which in turn were a key
source of profits (hence value for the innovator).
Today it is more complicated. Customers often want solutions, requiring the
application of a lot of tacit knowledge, rather than just products. Hence the
service/support element is particularly important. At the same time the coming of
the digital age has, as Teece (2010) notes, brought with it significant changes. There
is more emphasis on knowledge and information (rather than physical artefacts and
objects), intellectual property is often more intangible, and there are new and very
different channels of distribution (i.e. ways of getting the product/service to the
consumer). As Dahlinder (2005) points out, knowledge and information, unlike
physical objects, can be used by many people without diminishing their productivity,
thereby making charging for use much more difficult, because nothing has actually
been ‘used up’. As a result it is increasingly difficult for innovators to capture value.
Industries like music recording and newspaper publishing are at the forefront of
these changes. When there was only one way to acquire what had been produced
(i.e. through buying a physical artefact/product) value capture was relatively
straightforward. Now with much easier and many more ways of accessing ‘content’
it is much harder for producers of content to capture value and gain an appropriate
return for their efforts at innovation.
As Pisano and Teece (2007) note, figuring out how to capture value from
innovations is not just a matter for innovators. It is also an important issue for public
policy. If innovators increasingly see value (i.e. their private returns and profits)
being syphoned off by imitators, competitors, suppliers or even customers (i.e. in
terms of Figure 6.1 they see others getting a bigger slice and themselves a smaller
slice of the pie), then they will conclude that the hard work and effort of innovating
simply isn’t justified.
It is against this background that this chapter will explore exactly what is
involved in trying to capture value from an innovation. In so doing there are three
key factors that we have to consider:

▮ appropriability
▮ complementary assets
▮ revenue generating mechanisms.
With a better understanding of these factors, innovators are more likely to ensure
that they capture value, thereby ensuring that they profit from innovation.

Appropriability
To appropriate is to acquire. Thus appropriability is the ability of an innovator to
gain or acquire a return for his/her investment in intellectual effort and hard work in
innovating a new product or service (Hurmelinna-Laukkanen et al., 2008). Hence
when we talk about appropriating a return from an innovation, ‘it’s all about
generating rents from proprietary knowledge’ to use Liebeskind’s (1997: 623) turn of
phrase. This proprietary knowledge is the intellectual input (i.e. creativity) of the
innovator that produces the innovation. And innovators, not unreasonably, normally expect to be rewarded for their endeavour through what is effectively a rent.
Appropriability is linked to an innovation’s uniqueness which in turn is a function
of the extent to which the innovation or the knowledge on which it rests can be
protected from imitation (Hurmelinna-Laukkanen and Puumalainen, 2007). Being
able to maintain uniqueness is potentially a source of significant bargaining power
and thus appropriability.
Appropriability is typically described in terms of there being either a strong or
weak ‘appropriability regime’. If the appropriability regime is strong, then the
innovator is likely to be able to exercise a high degree of control over what he or she
has created, and therefore prevent others from copying or imitating the innovation,
thereby excluding them from accessing value. However, if it is relatively easy to
imitate or copy the innovation, then it will be hard to exclude others and under these
conditions the appropriability regime would be described as weak. Weak
appropriability places the innovator at the mercy of the other parties shown in
Figure 6.1 (e.g. imitators, competitors, suppliers etc.) and ‘the slice of the pie one
gets to keep’ (Afuah, 2014: 156) may, as far as the innovator is concerned, prove to
be disappointingly small or even negligible.

Appropriability mechanisms
A variety of factors determine the extent to which the appropriability regime of an
innovation is strong or weak. These include technological and marketing
capabilities, the existing knowledge base and the ability to learn (Hurmelinna-
Laukkanen and Puumalainen, 2007: 95). Thus, for example, if the technology is well
known and well understood, imitation, perhaps through reverse engineering, may be
relatively easy. Similarly digital technologies (e.g. music files) often make it
relatively easy to make copies (Swann, 2009). On the other hand, if the technology is
new and not well understood imitation will probably be difficult and appropriability
strong.
If appropriability is relatively weak then firms can take action to obstruct
imitation and strengthen the appropriability regime. Intellectual property rights such
as patents are the most obvious means available. However Hurmelinna-Laukkanen
and Puumalainen (2007: 96) note that intellectual property rights represent just one
of many ways of taking control of a firm’s intangible resources in order to
strengthen appropriability. These other ways are often overlooked, because they are not all equally advantageous and some have other, often better, uses. Hurmelinna-Laukkanen and Puumalainen (2007: 96) refer to these various ways of defending innovations against imitation as ‘appropriability mechanisms’.
The range of appropriability mechanisms that are available is shown in Figure
6.2, divided into five broad categories: institutional protection; nature of knowledge;
human resource management; practical/technical means; and lead times.

Figure 6.2 Appropriability mechanisms

Source: Hurmelinna-Laukkanen and Puumalainen (2007: 98).

Institutional protection
Intellectual property rights (IPRs) are the most evident and best known form of
institutional protection (Swann, 2009). They comprise a range of rights provided by
the State in the interests of promoting creativity and innovation. Outlined in more
detail in the next chapter, they include patents, design rights, trademarks and
copyright. They provide protection for intellectual property embodied in various
different forms including artefacts, designs, symbols and the written word. They are
often combined to provide more effective protection from imitation, as when patents
and trademarks are used together, with the patent protecting a particular product
and the trademark protecting the brand. Hurmelinna and Puumalainen (2005: 4) note
that the very act of registering an IPR serves to increase and strengthen
appropriability, by sending a powerful signal with regard to what they term
‘proprietary intent’, signifying a determination on the part of the innovator to
actively defend proprietary knowledge, that is, through litigation. This in itself
strengthens appropriability, making attacks on an innovator’s exclusivity less likely.
However, IPRs have their limitations. They can be expensive and time consuming to
acquire, are costly to enforce and require the intellectual property to be made public.
However, IPRs are not the only form of institutional protection. Legally binding
contracts represent another increasingly important form of institutional protection.
These can take various forms. Among the more widely used forms of contract are
non-disclosure and confidentiality agreements (Liebeskind, 1997). These are legal
contracts in which two parties agree not to share confidential information (i.e.
intellectual property) with third parties (Swann, 2009). They are specifically
designed to stem the flow of knowledge (i.e. codified and explicit knowledge) out of
an organisation. The increased use of such agreements reflects a greater awareness
on the part of business organisations of the potential value of knowledge in general
and proprietary knowledge in particular.

Nature of knowledge
Knowledge can be divided into two forms: codified, or explicit, knowledge and tacit
knowledge. Codified knowledge, as its name implies, is knowledge that can be
structured and turned into forms that can be transferred and analysed with relative
ease. Just the fact that knowledge is written down means that it is effectively
codified. The significance of codified knowledge is that it tends to facilitate
knowledge sharing/transfer (Liebeskind, 1997). Consequently it is more easily
appropriated by rivals than tacit knowledge.
Tacit knowledge, on the other hand, is implicit and idiosyncratic (Hurmelinna-
Laukkanen and Puumalainen, 2007: 96). It is typically acquired through experience
(i.e. learning-by-doing) rather than formal learning. A feature of tacit knowledge is
that it is cumulative, often being built up over a long period of time. It also tends to
be embedded in individuals and organisations and their routines. These distinctive
features of tacit knowledge, as Hurmelinna and Puumalainen (2005: 4) identify,
mean that it is not easily transferred and is often hard to access or acquire; by its
very nature, therefore, it provides a defence against imitation.
Human resource management
We saw in the previous section that tacit knowledge is typically embedded in people
and one way for would-be imitators to access such knowledge is to ‘poach’ staff.
Indeed, in some industry sectors the movement of staff, known as ‘staff churn’, is
endemic as firms compete to acquire the best brains. Clearly, as people move so
tacit knowledge will tend to move with them.
In the light of this firms may rely on the human resource management function to
exercise control over staff (Hurmelinna and Puumalainen, 2005) and thereby restrict
the transfer of information in order to limit what Liebeskind (1997: 629) describes as
‘the potential for leakage of knowledge from one firm to another’. In this context,
that knowledge will tend to be primarily tacit knowledge. One way in which human
resource management can do this is through employment contracts. These can be
drafted to include conditions that will deter the movement of staff and thereby
strengthen appropriability. Such contracts may also be used to fully exploit an
individual’s duty of loyalty in order to limit the communication and transfer of
information. Similarly, hiring practices and personnel rotation may be carefully
monitored and reviewed in the light of the transfer of tacit knowledge, particularly if
the organisation is involved in any collaborative agreements with other firms.
Hurmelinna-Laukkanen and Puumalainen (2007) perceptively note that where
human resource management is concerned, strengthening appropriability is
typically less a matter of employees' contractual obligations and more a matter of
'softer' psychological issues linked to employee commitment.

Mini Case: Motor Sport Valley


The Oxfordshire/Northamptonshire border is home to a regional
agglomeration of small firms specialising in the construction of racing cars,
car components and a range of related specialised services. This regional
agglomeration or cluster is recognised as the world’s leading centre for racing
car production with three-quarters of the world’s single seat racing cars
designed and assembled there, including most of the leading teams in Formula
One.
It has been described as a ‘knowledge community’ (Henry and Pinch,
2000a: 191). A feature of this community is the continuous movement of staff
between companies. Recent research revealed that designers and engineers
typically move eight times during the course of their careers, with the average
stay at any one firm lasting a fraction under four years. This high degree of
labour mobility or ‘staff churn’ as it is known, results in knowledge being
continuously reconfigured and advanced. The continuous churn of staff
means that knowledge, especially tacit knowledge embedded in individuals,
which is normally less easily transferred than codified knowledge, moves
from one firm to another.
Source: Henry and Pinch (2000a).

Practical and technical means


Alongside policy measures associated with human resource management are a
range of more practical steps that firms can take to protect their intellectual capital.
These include the identification and protection of trade secrets such as formulae,
recipes and specifications. A good example is the formula for Coca-Cola, which has
been described as 'the World's best kept trade secret' (Prendergast, 2013: 487) and
is apparently kept locked in a bank vault in Atlanta. The case of Coca-Cola
illustrates a feature of trade secrets, namely that keeping them secret tends to be
much easier for process innovations than for product innovations, mainly because
the latter can be subjected to reverse engineering.
Other practical measures include the use of security routines such as passwords,
digital signatures, copy prevention, entry codes, ID cards, restricted areas and
signing-in procedures (Hurmelinna and Puumalainen, 2005) designed to restrict
access to knowledge. In a similar vein there is what Liebeskind (1997: 650) terms
‘structural isolation’, where an organisation locates its activities in such a way as to
make access by outsiders very difficult. One way of doing this is through
‘geographical separation’, where a firm’s activities are located in relatively isolated
places. For example, when Rolls-Royce developed its first jet engines during World
War II it did so not in Derby which was home to its existing R&D facilities, but 100
miles away on the other side of the country in a former textile mill in the isolated
Lancashire town of Barnoldswick.
Hurmelinna-Laukkanen and Puumalainen (2007) note that keeping information
secret (especially as far as competitors are concerned) through these sorts of
practical measures is likely to run in parallel with the use of institutional protection
such as IPRs, because, in the UK at least, patents cannot be obtained for inventions
that are already in the public domain.

Mini Case: F117-A Stealth Fighter


In the 1980s the Pentagon funded an ultra-secret aerospace programme to
develop a new generation of fighter aircraft that would utilise so-called
'stealth' technology, designed to make the aircraft almost undetectable to
increasingly accurate Soviet surface-to-air missiles (SAMs). Aircraft
manufacturer Lockheed was given a contract to develop such an aircraft
known as the F117-A ‘stealth fighter’. However, unlike normal military aircraft
development programmes, the existence of such an aircraft was kept secret
and at the time its existence was not acknowledged by the US government.
Lockheed was not allowed to test the aircraft using its normal test facilities
at Burbank in California. Instead the Pentagon insisted that the flight test
centre be located far from prying eyes on a remote desert airstrip, which had
originally been used for nuclear warhead testing, in the middle of the Nevada
desert. It was part of Nellis Air Force Base, 140 miles from Las Vegas, located in an
uninhabited area of undulating plains and scrub that formed one of the most
desolate spots in North America.
As a result, although the F117-A stealth fighter went into service with the
US Air Force in 1983, few people even within the air force knew of it, and the
US government did not publicly acknowledge the aircraft until 1988.
Source: Rich and Janos (1994).

Lead time
The fifth appropriability mechanism is the use of the lead time achieved by getting
an innovation to market before anyone else. This effectively means the use of a ‘first
mover’ strategy where innovation takes place well ahead of any potential rivals. The
aim is essentially to utilise intellectual property/proprietary knowledge as quickly as
possible in order to gain what James et al. (2013) describe as ‘preemptive
competitive advantages’ that result in the innovator getting economic value (i.e.
rent) from the innovation. This is one of the most obvious ways of gaining and
maintaining appropriability. Being first into the market is an opportunity to build
brand awareness ahead of rivals, thereby establishing a strong presence within a
market. It is also an opportunity to master a new technology and establish
technological leadership (Lieberman and Montgomery, 1988).
Both of these features are linked to appropriability. By getting an innovation to
market ahead of rivals, innovators can work from a position of strength. Not only
may an established market presence serve to deter imitators; the innovator can also
shape customer expectations about the nature of the product/service, lock in
customers so that they find it expensive to switch (Lieberman and Montgomery,
1988) and accumulate manufacturing and distribution capabilities (Dodgson et al.,
2008). Furthermore, as Dodgson et al. (2008) note, being
first can be less costly and easier than institutional protection through IPRs such as
patents.
Hurmelinna-Laukkanen and Puumalainen (2007) note that many of the
appropriability mechanisms are overlapping and complementary. For example, non-
disclosure agreements (NDAs) not only restrict the outflow of knowledge, they also
serve to remind employees of their obligations with regard to information disclosure
and the preservation of trade secrets.
It is important to note that while seeking a lead time can be the basis of
appropriability, such a course of action is not without its pitfalls. These
centre principally on possible inflexibility
disadvantages (James et al., 2013) associated with early commitment to a particular
technology, product architecture or market segment, which can lead to significant
switching costs for the innovator through having to write off R&D expenditure and
the like.
Finally, it is worth reflecting on what actually happens. How do firms endeavour
to protect their proprietary knowledge and ensure appropriability in practice?
Research by Swann (2009) shows that informal appropriability mechanisms (i.e.
trade secrets and lead times) are more widely used than formal ones (i.e. patents,
trademarks and copyright). The reasons behind this, Swann (2009) suggests, are that
patents are time consuming and expensive, offering what in some instances at least
is only limited protection anyway. It is acknowledged that the picture varies from
industry to industry with patents being extremely important in certain fields such as
pharmaceuticals. Similarly, the preference for informal mechanisms applies more to
small companies than large ones, although it is worth noting that some innovators,
such as James Dyson and Ron Hickman, though small at the time of their initial
innovations, made very effective use of the patent system.

Complementary assets
Kastelle and Steen (2011: 199) observe that what they term ‘the most damaging of
several myths surrounding innovation’ is that it is all about ideas. Similarly, Teece
(2010: 183) points out that, ‘technological innovation by itself does not automatically
guarantee business or economic success’. One reason why both these statements are
true is that innovation requires that creative inputs and know-how be used in
conjunction with ‘other capabilities and assets’ (Teece, 1986: 288). These assets are
known as ‘complementary assets’ and they are ones that a firm uses to create value
(i.e. make money) in the commercialisation process (Sullivan, 1998). They are
defined by Dodgson et al. (2008) as ‘a bundle of know-how and activities’ associated
with the successful exploitation of an innovation. No matter how advanced the
technology of an innovation, if the consumer cannot access it to make a purchase,
or does not know how to use it, or cannot get it fixed when it goes wrong, then its
value for the consumer is likely to be limited. Hence complementary assets include a
range of supporting activities required to ensure that an innovation provides the
consumer with a service that effectively meets his/her needs. Typical
complementary assets are shown in Figure 6.3 and they normally constitute those
assets required to get the innovation to the consumer.
Figure 6.3 Complementary assets

Source: ‘Profiting from technological innovation: Implications for integration, collaboration, licensing and
public policy’, Teece, Research Policy, Copyright © 1986 Published by Elsevier B.V.

In the broadest sense complementary assets can be differentiated into:


(a) those assets that form part of what a firm can do (i.e. its capabilities)
(b) those assets that the firm owns.
Examples of the former would include manufacturing capabilities and sales and
service expertise. Examples of the latter would include brand names, distribution
channels and customer relationships. They tend to be assets that play an important
role both in getting the product/service to the consumer and in ensuring that the
consumer gets an appropriate level of service from the innovation. Small start-up
companies, by virtue of their size and newness, are typically weak on the capability
side, because they have not yet had time to accumulate capabilities in
manufacturing or product support. Similarly, they typically own few such assets. Hence, as
Dodgson et al. (2008: 275) observe, such enterprises are often poorly placed when it
comes to complementary assets, while large companies tend to be (but are not
always) better placed by virtue of the scale of the resources they have at their
disposal. Because of this difference in asset holdings, one often finds small firms
joining together with large firms in order to complete the innovation process. James
Dyson, for example, sought to license his dual-cyclone technology to larger
electrical goods manufacturers, as did Ron Hickman with his Workmate portable
workbench. Both men recognised that they lacked the necessary complementary
assets to go it alone and hence sought an alliance with a larger undertaking (though
initially at least both were unable to win the confidence of a larger firm).
An alternative way of differentiating complementary assets, and one that is more
widely recognised, is to differentiate between generic and specific complementary
assets. Generic complementary assets are general-purpose assets (Teece, 1986: 289)
that do not need to be tailored to the requirements of the particular innovation.
They are usually widely available and can be bought or contracted for in the open
market. In contrast, specific assets are specialised assets created in conjunction
with the innovation. They are bespoke assets that may well comprise techniques or
processes unique to the innovation. Specific assets may well be cumulative, having
been built up over time.
Their idiosyncratic (i.e. highly differentiated) nature makes them difficult to
replicate or imitate (Teece, 2006). They are likely to be tightly held by the innovator
and are potentially very valuable in terms of capturing value.
The significance of complementary assets is evident from the case of EMI and its
pioneering CT scanner. This was an innovation where complementary assets proved
crucial to capturing value. Specific
complementary assets associated with various forms of product support were vitally
important in delivering an appropriate level of service (e.g. training, maintenance,
updating etc.) to customers where this radically new and highly complex piece of
medical technology was concerned. However, complementary assets of this nature
were difficult for EMI, as a newcomer to the medical technology field, to acquire.
They were idiosyncratic and took many years to accumulate, being closely linked to
customer relationships (i.e. with hospitals in the US in particular). Such
complementary assets cannot easily be contracted for. In contrast, existing medical
equipment manufacturers like General Electric already possessed the necessary
complementary assets.

Mini Case: Pixar


Lucasfilm is probably best known as the film production company behind the
Star Wars and Indiana Jones films. At one time it included a computer
graphics division made up of a small team that produced computer hardware
and software used in computer animation. Having completed the Star Wars
trilogy, in the mid-1980s George Lucas decided to sell off the division as part of
a restructuring exercise. In the end Lucas did a deal to spin off the team as an
independent company in which Steve Jobs was the major shareholder with a
$10 million stake. Given the name Pixar, under Jobs’ leadership it continued to
produce highly sophisticated animation computers and software.
As a sideline the company also made a small number of short animation
films designed to promote the company’s products. Its products, though
technically brilliant, only sold in very small numbers and Jobs was forced to
put in more money and make major cutbacks. However, the management
team persuaded Jobs to fund Pixar's John Lasseter to make another short film
using the company’s latest computer animation software. Entitled Tin Toy, it
was effectively a promotional film for Pixar, to be shown at the annual
Siggraph conference. It was not only well received at the conference but it
went on to win an Oscar, the first time the award had gone to a film made
entirely by computer animation.
Great critical success, however, made little difference to Pixar’s finances.
But this did not put off John Lasseter, who now planned something bigger – a
half-hour computer-animated TV special. Given that Pixar was only a small
computer company with none of the resources required for TV production,
Lasseter approached Disney studio chief Jeffrey Katzenberg, who had seen Tin Toy
at the Oscars and been impressed. Lasseter hoped to persuade Disney not only
to fund production, but significantly to promote and distribute the resulting
animation as well. Katzenberg indicated that Disney, which had previously
produced everything in-house, would be prepared to fund not a TV special but
a feature-length film!
For Jobs the news came as a potential lifesaver, as Pixar had yet to make a
profit and had been reliant on injections of cash from him. Jobs and
Katzenberg thrashed out a deal for not one picture but three. Despite the fact
that this was the first time a Disney film had been made outside the
company, the creative and production aspects of Toy Story were under
Pixar’s control. Pixar soon had badly needed cash coming in and,
significantly, it now had the all-important means to promote, market and
distribute the film as well. Premiered in November 1995 Toy Story was a
stunning success. It was the highest-grossing film of the year, eventually
making more than $350 million. A week after the premiere, Pixar’s IPO took
place. Priced at $22 a share, the shares were selling for $39 by the end of the
first day, making Jobs a billionaire.
Source: Young and Simon (2005).

Revenue generating mechanisms


In capturing value it is not sufficient simply to ensure appropriability and, where
necessary, provide appropriate complementary assets. In order to ensure success the
innovator has to select an appropriate revenue generating mechanism that will turn
the new product or service into cash. As its name implies, a revenue generating
mechanism is the means by which the consumer acquires the new product or service
and the innovator gains revenue. Chesbrough and Rosenbloom (2002: 529) describe
the revenue generating mechanism as the ‘architecture of revenues’. Failure to pick
an appropriate revenue generating mechanism can either deter the consumer from
acquiring the product or service, or result in the innovator failing to get a sufficient
return.
In the past it was relatively simple; most innovations were new products. Because
they were physical artefacts, the consumer obtained exclusive access to them and
the innovator simply used outright sale as the revenue generating mechanism. But as more and more
innovations are services, including many for which there is no exclusive access, so it
has become harder to determine the most appropriate way to generate revenue. The
innovator can now choose from a wide range of revenue generating mechanisms:

▮ outright sale
▮ renting/leasing
▮ advertising
▮ subscription
▮ usage fee
▮ brokerage fee
▮ licensing
▮ razor and razor blades
▮ freemium.

Outright sale
Almost certainly the commonest revenue generating mechanism is where the
revenue comes from selling ownership rights to an asset (i.e. a physical product). It
is widely used because it is relatively easy to apply to physical products, where
there is clearly something to transfer. It is often more difficult with services, which
is why, as services have grown in importance, alternative ways of generating
revenue have become more popular.
One of the key points about this aspect of value capture is recognising that it is
an important issue and making a conscious choice of the most appropriate revenue
generating mechanism rather than simply following the industry norm.

Renting/leasing
In this instance revenue comes from temporarily granting the consumer exclusive
rights to use an asset for a fixed period in exchange for a fee. A good example would
be car rental, which is typically charged by the day, although in the US Zipcar
charges by the hour (Osterwalder and Pigneur, 2010). Similarly timeshare schemes
provide access to property at fixed times.

Advertising
This revenue stream is generated by selling advertising space in return for a fee.
It is widely used in the media industry by newspaper and magazine
publishers, who typically generate very significant amounts of revenue in this way;
sometimes, as with free newspapers given away to consumers, it is the
only source of revenue. Osterwalder and Pigneur (2010) note that though
this has traditionally been found in the media industry, it has increasingly spread to
software and services.

Subscription
Revenues are generated by providing continuous access to a facility or service for
a fixed period, with the consumer paying a monthly or yearly fee in return for that
access. Good examples include sporting facilities (e.g. golf clubs), gyms and
social facilities such as clubs. Subscriptions normally provide unlimited access.

Usage fee
Revenue is generated from the use of the service, normally on the basis of the period
of time involved. Examples include phone companies whose charges are based on
the duration of the call. In the UK users of fixed-line telephones also pay a line
rental fee in addition to a usage fee.

Brokerage fee
In this case the revenue stems from performing an intermediation function on behalf
of two or more parties. Examples would include estate agents who bring together
potential buyers and sellers, and credit card companies that take a small percentage
of the value of each transaction between consumers and the merchants who accept
the card (Osterwalder and Pigneur, 2010).

Licensing
Revenue is generated by granting permission to use protected intellectual property
(i.e. patented technologies) in exchange for a fee which is normally charged on a per
item basis (i.e. a royalty payment). For some companies this can be a very important
way of obtaining revenue. The chip designer ARM Holdings, for example, charges a
royalty fee for the use of its chip designs which are widely used in mobile phones
and audio products like the iPod. This is effectively ARM Holdings’ only source of
income.
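A minimal worked illustration may help to show why per-unit royalties can be so valuable; the figures below are purely hypothetical and are not ARM's actual terms. If a licensor charges a royalty of 2 per cent on chips with an average selling price of $1, and its licensees ship 1 billion chips in a year, then:

\[
\text{royalty revenue} = 1\,000\,000\,000 \times \$1 \times 0.02 = \$20\ \text{million}.
\]

The licensor incurs none of the manufacturing cost of those chips; its outlay is concentrated in the original design work, which is why licensing can be an attractive revenue generating mechanism for firms whose principal asset is intellectual property.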

Razor and razor blades


In many respects this is a special case of outright sale, where there is a charge to
obtain outright ownership of the item, but the price charged does not necessarily
reflect the cost of providing it, because the seller also sells associated consumable
items which have to be purchased frequently. Old-style razors are the classic example and they
gave their name to this revenue generating mechanism. In this case the consumable
item is the razor blade which unlike the razor has a relatively short life and has to be
frequently replaced. Other good examples are inkjet printers and jet engines. In the
case of the latter, the product is highly durable and lasts anywhere from 20 to 40
years, but it requires regular maintenance and the use of spare parts.
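A simple worked example, using purely hypothetical figures, shows why the mechanism works. Suppose an inkjet printer is sold for $50, at roughly the cost of producing it, while replacement cartridges carry a margin of $15 each and the typical customer buys six cartridges a year over a three-year life:

\[
\text{lifetime consumables margin} = 6 \times 3 \times \$15 = \$270,
\]

which dwarfs anything earned on the initial sale. The revenue, in other words, is generated not by the durable item itself but by the stream of consumables that its use requires.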

Freemium
This is a relatively recent addition to the repertoire of revenue generating
mechanisms. In this case the basic service is typically free and the consumer pays for
advanced features/services (e.g. LinkedIn, the social networking site for business
professionals).
Chesbrough and Rosenbloom (2002) show how in the case of Xerox and its
innovation of the plain paper copier, the Xerox 914, the choice of revenue generating
mechanism was a crucial aspect of innovation. Using the conventional outright sale
method would have resulted in the product being relatively expensive and beyond
the budgets of many prospective purchasers. Instead Xerox opted for leasing as the
revenue generating mechanism, which made the product much more affordable.

Case Study: VisiCalc – the ‘Killer App’


If the appearance of the Altair 8800 on the cover of Popular Electronics
magazine was one of the defining moments in the birth of the personal
computer, so too was the arrival of a software innovation – VisiCalc
(Campbell-Kelly, 2003), the world’s first spreadsheet. Dan Bricklin first came
up with the idea while a student at Harvard University, where he was required
to do financial modelling assignments (e.g. ratio analysis) with a pencil and
paper, a laborious and time-consuming task. Bricklin’s initial idea was simple.
The sheet would be displayed on screen and the user would be able to enter
data directly, just as if one were writing figures on a sheet of paper. The user
would then merely highlight the requisite figures with the cursor to get the
computer to perform whatever mathematical operation one wanted and
deliver the answer on screen. One of its outstanding features was that unlike
many computer programs at the time it would function in real time giving
instant results.
The first prototype was developed by Bricklin over a weekend using a
borrowed Apple II personal computer. The software was written in Apple
BASIC. As a prototype it did not have a scroll facility, but most of the other
recognisable features of a modern spreadsheet such as Excel were present.
Having proved the concept, Bricklin teamed up with a friend, Bob Frankston,
to develop a commercially viable product. They formed a start-up company,
Software Arts Inc., to produce the new software. They even consulted a patent
attorney about the possibility of patenting the software. However, they were
advised that it was unusual to attempt to patent software and that patents
were rarely granted for software programs because they were regarded as
mathematical algorithms. Instead, they relied on copyright and registered the
VisiCalc name as a trademark. As a small start-up company, Software Arts
Inc. was faced with the difficult task of actually getting the product into the
hands of potential users. To do this, Bricklin and Frankston then struck a deal
with Dan Fylstra, the founding editor of the well-known computing magazine
BYTE. As part of the deal Fylstra’s publishing company, Personal Software,
would publish the new software and Bricklin and Frankston were to get a
royalty of 35.7 per cent on gross sales.
The first application of the VisiCalc spreadsheet had been when Bricklin
used it for one of his class assignments, the Pepsi-Cola case study. Bricklin
used VisiCalc to do five-year financial projections that tested a variety of
different strategies. Not long afterwards VisiCalc was announced to the public
at the National Computer Conference in New York, but it attracted very little
attention. Also present at the conference was Bill Gates, whom Bricklin
described as ‘a young kid well known for his version of BASIC and speeding
tickets’. Another person at the New York conference was Ben Rosen, an
analyst at Morgan Stanley. In a short article in the Electronics Newsletter he
described VisiCalc as an ‘electronic blackboard’ where you write out what you
want to do, press a key and the software automatically does all the
calculations and displays the results. Updates and changes, he noted, could be
made quickly and easily. Prophetically Rosen wrote, ‘at $100 VisiCalc could
emerge as one of the bargains of our time’, adding, ‘VisiCalc could some day
become the software tail that wags (and sells) the personal computer dog’.
Most of the early versions of VisiCalc were for the Apple II computer. In
the first few months sales ran at a healthy 500 copies a month, and favourable
press comment combined with word-of-mouth recommendations propelled
sales to 12,000 copies per month in the second half of 1980, despite the price
being raised to $250 (Campbell-Kelly, 2003: 214). The beauty of VisiCalc was that for
the first time ordinary people, especially people with no knowledge of
computers or programming, could use a computer to complete business-
related tasks, such as drawing up a budget or a business plan, without the
need for computer specialists. Sales of Apple II computers, which were $2.7
million in the year before VisiCalc appeared, rose to $200 million in 1980 and
more than one-third of a billion dollars the following year.
Table 6.1 Best-selling software applications 1983

Source: Campbell-Kelly (2003: 215).

By 1983 VisiCalc was the world’s top-selling software. However, Bricklin


had not patented his creation. He was aware of the need to protect his
intellectual property but was advised that since one could not obtain patents
for computer software (at this time) there was no patent protection
available. Consequently VisiCalc soon had many imitators (see Table 6.1). In
time the imitators became better known than VisiCalc. One of the most
successful was Lotus 1-2-3, produced by the Lotus Development Corporation.
As the IBM PC became by far the dominant personal computer, VisiCalc –
which was primarily an application for the Apple II – was eclipsed. In time
Lotus bought out Software Arts. But eventually even Lotus 1-2-3 was eclipsed
and fell by the wayside. It was overtaken by Excel, the spreadsheet Microsoft
developed as the successor to its earlier Multiplan. Excel's big attraction was
that it offered compatibility with Microsoft's best-selling word processing
package, Word. When Microsoft's Windows operating system began to gain
dominance, the writing was on the wall. Excel came bundled with other
Microsoft software as part of the Microsoft Office package and in time the
other spreadsheets disappeared from the market, leaving Excel as the spreadsheet.
Ironically, Software Arts’ inability to capture value meant that the diffusion of
the spreadsheet as an innovation occurred more rapidly than would otherwise
have been the case. What Dan Bricklin lost in terms of his ability to
appropriate the benefits from his intellectual property, the rest of us gained in
terms of the speed of diffusion of this important innovation.
Source: Campbell-Kelly (2003).

Questions
1 What was Bricklin’s contribution to the innovation of the spreadsheet?
2 What is a ‘killer app’ and why has VisiCalc been described thus?
3 In what ways did appropriability prove to be an issue in this case?
4 How would you describe the appropriability regime in this instance and
why?
5 What were the two main problems that Bricklin and Frankston faced in
terms of capturing value from their software?
6 Which appropriability mechanisms did Bricklin and Frankston utilise and
how effective were they?
7 What did Dan Fylstra and his publishing house Personal Software
provide and why was this important in terms of the initial success of
VisiCalc?
8 What were the complementary assets that Microsoft was able to utilise
for its spreadsheet software?
9 What does this case tell us about the importance of complementary
assets?
10 Ultimately who captured the most value from this innovation and why?

Questions for discussion


1 What is meant by the term ‘latent’ value and what has latent value to do with
innovation?
2 Identify four examples of innovations where the innovator or innovating firm
has not been the main recipient of value and indicate who in fact did obtain the
bulk of the value.
3 What is tacit knowledge? Give three examples of everyday activities where tacit
knowledge is an important element.
4 Why is tacit knowledge likely to be cumulative?
5 What is a confidentiality agreement and why have such agreements become much more
widely used in recent years?
6 What is staff churn and why is it often a feature of Formula One racing teams?
7 What is a razor-and-razor-blades revenue generating mechanism? Give three
examples of products where this mechanism is used.
8 Why is product support a complementary asset and why does it tend to be
cumulative?
9 In the Pixar mini case identify the complementary assets that Pixar needed
when it came to developing Toy Story, its first full-length feature film.
10 What is a trade secret?
Exercises
1 Prepare a management report for the board of a small company explaining (i)
the nature of complementary assets; (ii) the forms that such assets typically
take; (iii) why and how the company which is developing an innovative new
product needs to consider such assets very carefully; (iv) how a small company
might acquire such assets.
2 Prepare a PowerPoint presentation to give to a group of first-year business
students explaining what a business model is and why the selection and
implementation of a business model is an important aspect of successful
innovation. Try to bear in mind that your audience probably knows very little
about innovation and that it is very important to convey to them the principal
concepts associated with the notion of a business model. Your presentation
should also indicate why business models are becoming increasingly important.

Further reading
1 Teece, D. J. (1986) ‘Profiting from Technological Innovation: Implications for
Integration, Collaboration, Licensing and Public Policy’, Research Policy, 15, pp.
285–305.
The definitive work. In many respects this is what started it all. This study
analysed some of the reasons for the failure of innovations and introduced the
concept of complementary assets. In so doing it showed that, however impressive a
technological breakthrough may be, new and better technology alone is not enough
to ensure success in innovation; something more is required, and Teece (1986) went
on to show in some detail what that something is.
2 Kastelle, T. and Steen, J. (2011) ‘Ideas Are Not Innovation’, Prometheus, 29
(2), pp. 199–205.
A provocative article that provides a valuable and all too rare critical
perspective. Worth reading because it looks at many of the big issues. It
highlights how innovation is about very much more than having good ideas. In
this case the focus is not so much on the technological as on the creative input to
innovation, and it shows how creativity alone is insufficient for innovation.
3 Hurmelinna-Laukkanen, P. and Puumalainen, K. (2007) ‘Nature and
Dynamics of Appropriability: Strategies for Appropriating Returns to
Innovation’, R&D Management, 37 (2), pp. 95–112.
One of very few studies to look specifically at how firms capture value. The
focus is primarily on appropriability mechanisms and the authors consider a
varied range of such mechanisms. It provides a valuable perspective on how
firms, in very practical terms, set about capturing value. It stands in contrast to
the often-made assumption that the only way to capture value is through patents.
4 Swann, G. M. P. (2009) The Economics of Innovation: An Introduction,
Edward Elgar, Cheltenham.
Titles can be deceptive! While there is no doubt that as the title implies this is a
book about economics, in fact the treatment of innovation is broader and more
wide-ranging than one might expect. In particular, there is some excellent
coverage of the practical side of value capture. The chapter on intellectual
property not only considers why firms have to give serious consideration to
value capture, it also reviews a range of ways in which this can be brought
about.
Intellectual Property Rights

Objectives
When you have completed this chapter you will be able to:
appreciate the rationale behind the various legal rights associated with
intellectual property
identify the various types of intellectual property right (IPR)
distinguish the benefits conferred by intellectual property rights
differentiate the remedies available to those whose intellectual property
rights have been infringed
show how intellectual property rights can be used to create value for their
creator.

Introduction
We saw in the previous chapter that value capture forms a key aspect of innovation.
We saw too that there are a variety of ways in which one can seek to appropriate
value. These include institutional protection – which comprises a series of legal
rights provided by an institution, namely the State. These rights are very specific
and relate to intellectual property.
This chapter is about intellectual property in general and intellectual property
rights (IPRs) in particular. The nature of intellectual property and how it arises is
considered. The different forms of intellectual property right are introduced and
explained in detail. So too are the various mechanisms and procedures for
registering these rights.
Having acquired an intellectual property right, what do you do with it? The
chapter goes on to explain the forms of protection associated with each
intellectual property right, and the various institutions and individuals
associated with the process of registration are also analysed. Finally, there is a
short section showing how intellectual property can be used, in particular how it can be
traded in much the same way as other forms of property (i.e. bought and sold in
various ways), in order to create value for its creator.

Intellectual property and intellectual property rights


Intellectual property is property of the mind or intellect. In the context of innovation
it particularly refers to the thoughts and ideas that lie behind the production of
something new and different. Where do the ideas come from in the first place? They
are essentially the product of the human intellect through the exercise of human
creativity and ingenuity.
According to Swann (2009) intellectual property involves three key attributes:

▮ It is intellectual – meaning that it is the product of the human mind or intellect.
So it is created by human beings and is the product of cognitive (thought)
processes. Hence it is the output of a creative process.
▮ It is typically intangible – even though it may be represented in a physical form
such as a drawing or a formula. But its value is not limited by the physical form
that it takes.
▮ It is treated in law as property – just like any other form of property such as
land or physical objects such as cars. As property, it carries ownership rights.
However, unlike physical forms of property that are tangible, ownership of
intellectual property does not normally carry with it the ability to deny access
to it.
Because of these three attributes, intellectual property has one very important
characteristic: it is often very easy to copy. Thus, as Swann (2009: 104) observes, it is
relatively easy to copy CDs, or copy musical tracks downloaded from the Internet,
or photocopy a confidential document that contains a description of an invention or
a new formula. This applies particularly when the intellectual property has been
codified in some way to turn what is normally tacit knowledge into explicit
knowledge (Newell et al., 2002).
For innovation this is where the problem lies, because copying has the potential
to deny the creator some or all of the commercial benefit they would normally
expect to derive from their creative endeavour. This in turn acts as a major
disincentive for potential creators. Why invest a lot of time, effort and resources in
something, only for someone else to profit from it? With such a disincentive
surrounding creative effort, society will be the poorer because those individuals
with the creative intellectual powers to originate new ideas will be less inclined to
bother with innovation.
To overcome this potential loss to society, as we saw in the previous chapter
various forms of institutional protection are provided. The institution that provides
them is the State. It does this by giving legal recognition, in the form of intellectual
property rights, to the ownership of the products of creative effort. In turn, the
proprietor can use this legal recognition to stop other people exploiting his or her
property. Hence intellectual property rights create for the innovator a system by
which he or she can benefit from their ingenuity. As with other forms of property,
the proprietor, as the owner, may choose to sell the intellectual property rights or
license them to others. However, since many intellectual property rights are
monopolistic in nature, the State for its part requires that certain rigorous tests be
met before such rights will be granted.
In the UK a government agency, the Intellectual Property Office, is responsible
for granting intellectual property rights. The main forms of intellectual property
right are shown in Figure 7.1.

Figure 7.1 Forms of intellectual property and associated rights

Source: Bainbridge (2007: 4).

Each of these rights provides legal recognition of ownership for the creator of a
new design, new product or written work. Ownership in turn gives the owner of the
IPR the right to stop others from exploiting his or her intellectual property, for a
time at least. In this way intellectual property rights provide the creator with a
system by which he or she can ensure that they benefit from their creative and
intellectual endeavour.
It is important to note that intellectual property rights apply to two different
forms of intellectual property (see Figure 7.1), namely creative and reputational
intellectual property. The creative form arises directly from creative activity, while
the reputational form is the product of a more indirect process, in the sense that it is
creative activity leading to reputational enhancement. Just as the nature of intellectual
property takes two forms, so intellectual property rights can arise in two different
ways. Some intellectual property rights are inherent in the sense that they arise
automatically without any kind of intervention taking place, while others are the
product of a formal registration process.

Intellectual property rights through registration

Patents
A patent is a 20-year monopoly right (in the UK) granted by the state to an inventor.
The object of a patent is to buy breathing space for the inventor. It is a reward for
invention, designed to provide the patent-holder with an exclusive right to benefit
from it, but for a limited period of time. Exclusivity gives the patent-holder the right
to prevent others from making or selling a patented product or using a patented
process. It is a ‘social bargain’ designed to promote innovation and the spread of
new ideas, and in return for exclusivity the patent-holder is obliged to provide the
Intellectual Property Office with full details of how the invention works (Figure 7.2).

Figure 7.2 Patents and the State

The origin of the patent system in the UK goes back to mediaeval times when the
monarch granted individuals monopolies for a variety of purposes. The actual term
‘patent’ is derived from the Latin litterae patentes, meaning an open letter intended
for public display. Over time it became abbreviated from ‘letters patent’ to just
‘patent’.
The function of patents is to stimulate and encourage innovation. Without the
assurance of a temporary monopoly there would arguably be no incentive to
innovate. Why invest time, effort and money in a breakthrough that someone else
can then copy and sell? As Levy (2012) notes, ‘patents have created an environment
that led to such landmark technologies as the cotton gin, Morse code, the Yale lock,
the Xerox machine, the laser and the hula hoop’. If the likelihood of copying can be
reduced, the chances of financial success for the innovator are greater. This is
actually a matter of public policy, as the State has to weigh the benefit to the public
interest of encouraging innovation against the cost (to the public) of a slower rate of
diffusion (i.e. take-up) of the innovation.
In practice, it has been found that a 20-year monopoly gives the innovator
sufficient incentive to invent, while ensuring that the resulting innovation does not
command a premium price for too long because of the lack of competition. The State
has to weigh the benefits of diffusion leading to the rapid spread of a new
technology against the benefits to creativity and innovation arising from granting
inventors exclusivity.
In the UK the body that deals with patents on behalf of the State is the Intellectual
Property Office. Before a patent is granted by the Intellectual Property Office, an
inventor has to show that the invention is new. In practice this means meeting three
conditions:
1 Novelty. An invention must be new. According to the Patents Act 1977 an
invention may be considered new if it ‘does not form part of the state of the art’.
State of the art is all about whether the invention has been made public prior to
the date at which the patent application is filed. Making something public is defined
very broadly. Using an invention on a single occasion in one location would
be sufficient for it to become part of the state of the art. This has important
implications for innovators. Demonstrating an invention, perhaps to potential
investors, prior to filing a patent could easily jeopardise the eventual granting of a
patent. Hence as Bainbridge (1999) advises, anyone contemplating field trials of a
prototype invention ought to file a patent application before conducting any such
trials, if there is a possibility of members of the public seeing the invention.
2 Inventive step. An invention must involve an ‘inventive step’. Essentially this
means that it must not be obvious. This raises the question of obvious to whom?
The answer is that it must not be obvious to someone skilled in the art, which is to
say a notional skilled worker. The notional skilled worker does not have to be an
expert: rather he or she is assumed to have a general knowledge of the subject.
Hence, the notion of an inventive step implies that the apparatus being patented
would strike someone with a reasonable general knowledge of the subject as
incorporating something that constitutes a genuine invention. In the ‘windsurfer’
case (Windsurfing International Inc v. Tabur Marine (Great Britain) Ltd), it
was held that someone who had assembled a surfboard powered by a freesail
system and sailed it at Hayling Island in Hampshire 10 years before Windsurfing
International filed its patent for a windsurfer, had effectively anticipated the later
patent (Bainbridge, 1999: 351). Consequently it was held that there was no
inventive step. In contrast, in the case of Dyson Appliances Ltd v. Hoover Ltd (2000)
it was held that, because the vacuum cleaner industry was firmly committed to the
use of bags in vacuum cleaners, a bagless cleaner was not obvious and therefore
involved a genuine inventive step.
3 Industrial application. Finally, an invention has to be capable of being used in
some kind of industry. This reflects both the history of patents, which were at one
time referred to as 'industrial' property rather than 'intellectual' property, and their
practical nature. The requirement for an industrial application rules out scientific
discoveries. The discovery has to be incorporated into some sort of apparatus,
device or product if it is to be patentable. Alternatively, if it is produced by an
industrial process it may be patentable. Thus, the discovery of the drug penicillin
was not patentable as such but it could be patented when it was produced through
an industrial process.
Provided it meets these three tests, an invention is patentable. However,
certain items are excluded (though this is effectively covered by the need for an
industrial application). Scientific theories, mathematical models and aesthetic
creations (e.g. literary or artistic works) are excluded, as are computer software and
business methods (at least outside the US). It is possible
to patent software-related products. If the software results in the introduction of a
‘technological innovation’ then it may be patentable. The critical point, in Europe at
least, is that as long as the software brings about a ‘technical effect’ leading to an
inventive step that goes beyond the normal physical interaction between the new
program and the hardware leading to a technical improvement in the running of the
computer or an attached device (such as making the computer memory usage more
efficient), then it may be patentable. In the US around 15 per cent of the 170,000
patents granted each year are for software (Gapper, 2005).

Mini Case: BTG Sues Amazon over Tracking Software


BTG, the British patent licensing company, is suing a group of American
retailers including Amazon.com, the largest online retailer in the world, for
allegedly infringing its rights over a technology to track customers’ use of the
Internet.
The company’s lawsuit, filed in a Delaware court, claims the retailers are
using a technique it has already patented, to monitor when customers move
from one site to another. The technology is important because retailers will
normally pay a fee to other sites that direct traffic their way.
BTG is claiming an undisclosed amount of damages from the group, which
also includes BarnesandNoble.com, the electronic version of America’s
pervasive bookshop chain. BTG buys patent rights to new technologies and
licenses them to manufacturers. It also sets up its own companies to develop
technology and is best known for its subsidiary Provensis, which is testing a
revolutionary varicose vein treatment. The treatment ran into trouble at the
end of last year when US regulators suspended its tests.
The company, originally set up by the government to protect and patent
the country’s inventions, has a history of taking on opponents much larger
than itself to protect its wide-ranging intellectual property rights.
Earlier this month BTG sued Microsoft and Apple for including patented
technology in their operating systems that allows users to obtain software
updates over the Internet.
In a statement about the latest lawsuit, BTG said: ‘The suit asks for
unspecified damages for past infringing activity and an injunction against
future use of the technology.’ It also said it had tried to reach an agreement
but had failed. Fighting the case in court could take three years, making it
possible that the parties will yet reach an agreement over the technology.
Ian Harvey, BTG’s chief executive, said the patents are ‘fundamental to the
tracking of users for online marketing programmes’, adding that the
technology’s commercial potential is ‘significant’. BTG’s shares moved ahead
4p to 140p.
Source: Griffiths (2004: 48).

Obtaining a patent
To obtain a patent in the UK, an inventor has to follow a procedure with a number of
clearly defined steps:
1 Making an application to the Intellectual Property Office: the application, on
forms supplied by the Intellectual Property Office, has to contain:
▮ a request for a patent
▮ identification of the applicant
▮ a description of the invention.
The description has to be sufficiently clear and complete; the level of protection
will otherwise be limited. Once the application has been received by the
Intellectual Property Office it is said to be ‘filed’.
2 Search and publication: once a claim for a patent has been filed and a search fee
paid (within a 12-month period) a preliminary search will be undertaken by an
Intellectual Property Office examiner who will go through the records of
previous patents to see if the invention meets the necessary conditions and is in
fact new. While this is taking place the invention is said to be 'patent pending'.
3 Full (‘substantive’) examination: this is the final stage in the process. The
applicant pays a further fee within six months of publication and detailed
examination of the description then takes place to see if it meets all the relevant
legal requirements of the Patents Act 1977. Attention will focus on whether
documents reported at the search stage, and any others which have come to
light since, indicate that the invention is not in fact new or is obvious. The
applicant gets a report and may make amendments at this stage. Once the
examiner is satisfied that it meets all the requirements the patent is issued.
There is no such thing as a worldwide patent. In general, an application for a patent
must be filed and a patent granted and enforced in each country where patent
protection for the invention is sought, in accordance with the law of that country. It
is possible to obtain a patent on a regional basis through regional bodies like the
European Patent Office (EPO) or the African Regional International Patent
Organization (ARIPO). It is also possible to file an international patent application
under the terms of the Patent Cooperation Treaty (PCT) with the World Intellectual
Property Organization (WIPO) in Geneva (Mostert, 2007). This has the effect of a national
patent application in all the PCT contracting states (currently 125). However, the PCT
application has to be followed by national applications in those countries where
patent rights are desired. (The advantages of using this route are that the inventor is
given more time and there are cost reductions.)

Patent agents
Anyone contemplating filing a patent should consider employing a patent agent (or
patent attorney as they are sometimes known). They are professionals who specialise in
intellectual property law. They are typically individuals who combine a scientific or
engineering background with extensive legal knowledge. They not only provide
advice on the protection of intellectual property (for a fee) but also give guidance
on whether an invention is patentable or not, and will draft patent applications on
behalf of clients and ensure the appropriate process is followed. Because they are
involved with patents every day, they can provide a wealth of valuable information
on what is involved in taking out a patent, as well as conducting appropriate
searches of patent records and ensuring that a patent application is drafted with
sufficient precision to give the inventor genuine protection from infringement.
Many large companies, especially those in hi-tech
industries like aerospace and pharmaceuticals, employ full-time patent agents to
oversee the protection of their portfolio of intellectual property.

What protection does a patent provide?


A patent will not, of itself, stop others from copying the invention. As with most forms of
intellectual property, a patent is a legal right that is enforceable by legal action. If a
direct infringement of the patent occurs – that is, someone brings out a very similar
product without the patent owner’s consent – then the inventor has to take legal
action through the courts in order to secure a remedy.
There are four main remedies (Bainbridge, 2007) that the courts provide:
1 an injunction restraining the defendant from carrying out activities that
infringe the patent
2 damages to compensate for the loss suffered as a consequence of the
infringement, or an account of profit where instead of damages the award is
based on the profits actually made and attributable to the infringement
3 an order that the infringing articles be destroyed or delivered up (i.e. handed
over to the patent-holder)
4 a declaration that the patent is valid and has indeed been infringed by the
defendant.
The last-named may seem a bit like stating the obvious, but is in fact very important.
It is quite common where a case of patent infringement is alleged, for the accused to
counter-petition alleging that the patent is not valid, should never have been granted
to the patent-holder and should therefore be rescinded. When James Dyson, for
instance, took action against Hoover, alleging that their Triple Vortex cleaner
infringed the patent on his dual-cyclone technology, Hoover promptly started a
counter-action claiming that Dyson’s patent was not valid in the first place. Under
these circumstances the court is obliged to consider the validity of the patent and if
it upholds it, then this is valuable reassurance for the patent-holder. An injunction, in
contrast, is a restraining order that halts production of the offending article
immediately, thereby preventing any further instances of infringement. Damages
and an account of profit are both designed to provide a measure of financial
compensation, while an order either removes the prospect of further infringement at
some later point or calls upon whoever has perpetrated the infringement to hand
over the counterfeit items so that the patent-holder can destroy them. Very
occasionally the copies are so good and so like the real thing that the patent-holder
may choose to sell them. In the case of the Workmate, the portable workbench
developed by Ron Hickman and licensed to the American DIY manufacturer Black
and Decker, the Japanese ‘Kinzo’ was such a good copy that when the
manufacturers were required to deliver up the offending counterfeit stock as part of
an out-of-court settlement, Black and Decker was able to relabel the confiscated
items and sell them (Landis, 1987).
Just which of the four remedies the court will apply varies according to the
circumstances. Sometimes, as was the case with Hoover’s infringement of James
Dyson’s patent, all four remedies are applied. It should be stressed that in the UK at
any rate, the financial penalties imposed are typically modest. They may well not
cover all the legal costs involved in prosecuting the infringement. For instance, Ron Hickman's legal costs in defending the patent on his Workmate workbench came to significantly more than he or Black and Decker ever received in damages. Despite this, the cumulative effect of more than 20 legal actions was eventually to achieve the desired result, namely discouraging counterfeit products (Landis, 1987). Of course, for this to happen it is essential that great care is taken in drafting the patent in the first place. If the patent has been properly drafted, then an inventor who does have to resort to the courts should be able to eliminate counterfeit copies, at least for the 20 years that the patent remains in force.
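To see how the two financial remedies differ in practice, consider a simple illustrative calculation (the figures are invented purely for illustration and are not drawn from any of the cases discussed in this chapter). Suppose an infringer sold 10,000 infringing units at a profit of £5 per unit, while the patent-holder can show that it lost 6,000 sales on which it would have made a profit of £12 per unit:

\[
\begin{aligned}
\text{Account of profits} &= 10{,}000 \text{ units} \times £5 = £50{,}000\\
\text{Damages (patent-holder's lost profit)} &= 6{,}000 \text{ units} \times £12 = £72{,}000
\end{aligned}
\]

Since damages and an account of profits are alternatives rather than cumulative awards, a successful claimant on these assumed figures would normally elect to take damages, as they produce the larger award.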

Mini Case: ‘Floor Wars’


James Dyson’s bagless vacuum cleaner was a breakthrough in vacuum
cleaner technology when it first appeared on the UK market in 1993. Prior to
then, virtually all vacuum cleaners worked on the principle of extracting dust
and dirt by passing the stream of dirty air through a bag which acted as
a filter in which the dirt collected. Vacuum cleaners employing a bag to filter
the dirt formed a dominant design and had done so since Hoover pioneered
the vacuum cleaner back in 1908.
The principle of cyclonic separation utilised in Dyson’s new cleaner was
not new and formed part of the ‘prior art’. What was new was the use of more
than one cyclonic separator in series to provide successively better filtering
(Van Dulken, 2000). This was the technology which Dyson developed and
patented. Unable to persuade any of the existing vacuum-cleaner
manufacturers to adopt his patented dual-cyclone technology, Dyson was
forced to set up his own company. His DC01 bagless vacuum-cleaner racked
up £2.4 million of sales in its first year and within two years it had become the
UK’s best-selling vacuum cleaner. Eventually some of the established vacuum-
cleaner manufacturers produced their own ‘bagless’ designs. One such
company was Hoover (the European subsidiary owned by the Italian firm
Candy), who brought out their own bagless vacuum cleaner, the Hoover
Triple Vortex (HTV) in 1999. It was similar to Dyson’s dual-cyclone cleaner. So
similar that Dyson brought an action against Hoover alleging infringement of
his European Patent (UK) No. 0042723 entitled ‘Vacuum Cleaning Appliance’.
Dyson sought an injunction, delivery up and either damages or an account of
profits. Dyson’s claim for patent infringement rested on Hoover’s use of three
cyclones. The first is cylindrical and of lower efficiency. The second does not
deposit particles but rather serves to create two air streams. The third is high
efficiency. Unique to the HTV is that air is recirculated through the second and
third cyclones.
Hoover in turn challenged the validity of Dyson’s patent, with a counter-
claim requesting that the patent be revoked. The first part of Hoover’s
challenge to Dyson’s patent was that ‘prior art’ anticipated Dyson’s patent.
The court rejected the claim that any of the prior art cited by Hoover anticipated
Dyson’s patent. In particular, the court looked at US Patent No. 2 768 707 for
an industrial unit entitled ‘Separator for use with Vacuum Cleaning’. The court
also looked at the Johnston/Donaldson (US Patent No. 4 204 849) appliances
and found that they could not be described as cleaning devices since in reality
they were dust-control apparatuses.
For the second part of Hoover’s challenge, the court considered the
‘windsurfer test’ (as set out in Windsurfing International v. Tarbur Marine
(Great Britain Ltd), and in particular looked in some detail at the relevant
skilled man and common general knowledge. The court held that it was not
obvious to move to the invention from the prior art and that the technology of
the prior art was a long way away from the patent.
With Dyson’s patent upheld, Hoover had to defend its alleged infringement.
Hoover’s case for non-infringement was that the first and second cyclones
were not connected in series as required by the patent. The court rejected this.
Hoover also argued that Dyson’s patent required the appliance to be
unidirectional, whereas Hoover’s appliance was not. The court said that
recirculation of the air was not relevant to the issue of infringement. Hoover
also tried to argue that its highest efficiency cyclone was not ‘frusto-conical’
shaped as the patent required, but was in fact trumpet-shaped. The court
rejected this very literal interpretation of frusto-conical. The High Court ruled
that Hoover had indeed infringed Dyson’s patent (Dyson, 2003). Although
Hoover appealed, the company was required to stop production of its Triple
Vortex cleaner, deliver up existing machines, and in due course pay Dyson £4
million in damages together with £2 million in costs.
Sources: Dyson (1997 and 2003); Tidd et al. (2001); Eaglesham (2001); Van Dulken (2000).

Registered designs
There are two forms of protection available for designs: registered designs and
design rights. Registered designs require registration; design rights do not. While there is considerable overlap between the two forms of protection, registered designs cover designs where outward appearance is important, while design rights cover designs that are more functional. It is important to note that in both cases the protection applies to the design and not to the product or article.
The purpose of a registered design is to provide protection for the look or appearance of products. Registered design protection tends to apply particularly to aspects of products such as shapes or surface patterns. As a form of protection a registered
design is likely to be particularly appropriate for products where appearance is a
key attribute, such as jewellery, glassware and furniture. However, the Designs
Registry, the part of the Intellectual Property Office that deals with this particular
intellectual property right, receives applications from every branch of technology
including cars, laptop computers, washing machines, tennis racquets and even such
mundane items as paperclips!

What is a registered design?


In essence it works in a similar way to a patent in that it too is a monopoly right that
can be bought and sold and which is used to stop copying. However, it is a monopoly
right that covers the outward appearance of an article. The main features of a
registered design are:
▮ the design must be new and materially different
▮ it covers appearance resulting from the lines, contours, colours, shape, texture or materials of a product
▮ there are two exceptions – must-fit and must-match
▮ both two- and three-dimensional designs are covered
▮ duration – 5 years initially, but this can be extended up to a maximum of 25 years.
As with patents, a design has to be new if it is to be registered. A design is regarded
as new if it has not been made public in the UK. There is more flexibility than with
patents, because a design can be shown for purposes such as marketing during the
12 months preceding registration. Just as patents involve disclosure of the details of
the invention to the public, so too designs that have been accepted as registered
designs are open to public inspection at the Intellectual Property Office.
The significance of appearance highlights the main distinction between this form
of intellectual property right and patents. For registered designs, outward
appearance is important – one cannot rely on function, operation, manufacture or
material of construction of an article, as in the case of a patent. What constitutes an
article has recently been extended by an EC Design Directive to include:

'... any industrial or handicraft item intended to be assembled into a complex product, packaging, get-up, graphic symbols, and typographical typefaces, but excluding computer programs.'
In terms of the two types of exception, the must-fit exception relates to function and
means that it is not possible to gain a registered design for a design that is purely a
matter of function. Therefore, one could not gain a registered design for the jaws of
a spanner, because the jaws form part of the function of a spanner. Similarly, the
must-match exception relates to parts of a design. One cannot gain a registered
design for part of a design where the part is determined by the shape of the whole. It
is not possible to gain a registered design for things like the front-wing panel of a car,
because the shape is determined by the overall shape of the car.
What does one gain from a registered design?
As with patents the main benefit is the exclusive right in the UK to make any article
to which the design has been applied. The significance of this right is that the owner
can then take legal action against anyone who infringes this exclusivity, which in turn acts as a deterrent to would-be copiers.

Trademarks
Intellectual property does not only apply to the products of creative effort but also
covers commercial reputations. Specifically, this refers to trademarks. A trademark
is a sign used to distinguish the goods or services of one trader from those of
another. The law defines a trademark as:

'... any sign capable of being represented graphically which is capable of distinguishing goods or services of one undertaking from those of other undertakings.'
Typically the term covers words, logos and pictures, although these days it has been
extended to include other forms used to identify particular goods or services.

Mini Case: Chocs Away


In a two-fingered rebuke to its biggest rival, Cadbury has delivered a major
snub to the Swiss confectioners Nestlé by blocking its attempt to trademark
the shape of its world-famous Kit Kat bar. It may be one of the most
recognisable chocolate bars in the world, but the Intellectual Property Office
has ruled that – while fans recognise the ‘fingers and grooves’ identity of the
Kit Kat – the style aspect of the design was not the primary reason they
bought it.
Siding with the argument brought by Cadbury, the ruling officer said the
snappable lengths of wafer were designed to aid portioning, while it was the
brand that drove global sales. The argument means that Cadbury could be
free to introduce a rival ‘breakable’ bar in the UK. It presents an unusual
complication for Nestlé, which has successfully gained a registered trademark
on the bar’s appearance at European Union level.
Despite attempts by the British makers of Dairy Milk to block that decision
earlier in the year, Nestlé successfully convinced EU officials that the public
associated the hotly contested two-fingered shape with Kit Kat exclusively.
Taking a break from proceedings, a representative for Nestlé said the Swiss-
based company was ‘perplexed and disappointed’ by the UK ruling. ‘Kit Kat
was launched over 75 years ago and is one of the most iconic shapes of any
chocolate bar, recognized around the world,’ he said. ‘We are perplexed and
disappointed with this decision by the UK Intellectual Property Office. The UK
is the birthplace of Kit Kat and we are assessing whether to appeal.’
The Kit Kat furore marks the latest in a series of bitter battles for chocolate
trademarks. Last year, Cadbury bested its rivals by successfully claiming
legal rights over the distinctive deep purple colour of its Dairy Milk
packaging. A partner at a legal firm that specialises in trademark law said that
Nestlé would probably contest the UK ruling, but voiced concerns that the
pro-Cadbury decision could be more legitimate than that taken at the EU.
Gary Johnston, a partner and trademark lawyer at Mathys & Squire, told The
Grocer trade magazine that he expected Nestlé to appeal, adding: ‘It is odd
that the IPO in the UK has refused registration of the Nestlé Kit Kat mark,
particularly when it is the subject of a Community Trademark registration
which is de facto valid and subsisting in the UK. The decision turned on the
evidence provided by the IPO as distinct from that filed at the Community
Trademarks Office. This saga may trundle on for a little longer. Personally, I
see no reason why Nestlé cannot secure registration of the iconic shape which
has been used extensively, spans generations, is heavily promoted and readily
recognized by the public.’
Fellow intellectual property lawyer Lee Curtis, of Harrison Goddard Foote,
added: ‘The hearing officer of the UK application suggested he came to a
different decision to the Community Trademarks Office as he had the benefit
of “better” evidence from experts. He also made the comment that although
the shape of the bar may have become associated with Nestlé...it did not act
as a trademark.’
Source: Duggan, O. (5th July 2013) ‘Chocs away. Cadbury’s two-fingered snub to Kit-Kat makers
Nestlé’, Copyright © The Independent.

Registration of a trademark
Trademarks have been an important feature of commercial life for a very long time.
Since mediaeval times they have been used by traders to differentiate their goods
from those of others. Trademarks have taken many different forms. In the
eighteenth century, as shops became more widespread, signs were used by
shopkeepers to denote the type of goods they were selling. In the nineteenth century
the appearance of manufactured and standardised consumer products led to
trademarks being used by manufacturers to differentiate their products. Among the
first products to use trademarks in this way were everyday household items like
soap, tea and chocolate.
As the means for communicating with customers have been extended and
become vastly more sophisticated (e.g. through television, animation, computer
graphics, simulation and special effects), so the scope for differentiating products
has expanded and trademarks have become more widely used. Developments in
marketing such as branding and relationship marketing have increasingly led
companies to enhance the image and reputation of their products and services, and
trademarks have normally formed an important part of this process. Similarly, the
increased emphasis on merchandising has also led to trademarks assuming greater
importance.
Although some protection is available for trademarks without registration
through common law by means of an action for ‘passing off’, registration of a
trademark provides the most comprehensive protection for a name, brand name,
logo or slogan. Registration via the Intellectual Property Office lasts for ten years
and can be renewed indefinitely. It was first introduced in the UK in 1875 through the
Trade Marks Registration Act of that year. The very first trademark to be registered was that of the brewing concern Bass, in the form of a red triangle symbol for one
of its pale ales. The trademark is still in use today and Bass reckons that over the
years it has had to deal with 1,900 cases of infringement (Bainbridge, 1999).
In the past only words or logos or combinations of the two were registrable, but
the Trade Marks Act 1994 was a landmark piece of legislation that greatly expanded
the range of things that could be registered as trademarks. Among the more recent
and more unusual registrations have been:

▮ the Coca-Cola bottle
▮ a Chanel perfume bottle
▮ Bach’s Air on a G-string
▮ the colour green
▮ the sound of a dog barking
▮ the colour yellow
▮ the slogan ‘exceedingly good cakes’.
These registrations reflect the new classes of trademark that were eligible for
registration under the 1994 Act. Specifically the items that can now be registered
include:

▮ domain names
▮ logos
▮ music
▮ slogans
▮ colours
▮ shapes.
Another change introduced by the Trade Marks Act 1994 is in relation to
infringement of trademarks. The Act places a statutory duty upon trading standards
officers to take action against those who trade in counterfeit goods using
unauthorised trademarks. Trading standards officers have the power to seize
counterfeit goods, thereby assisting in the enforcement of trademarks.
In order for a trademark to be registered, an application has to be made to the
Intellectual Property Office. This needs to satisfy a number of criteria:

▮ compliance with section 1(1) of the Trade Marks Act 1994, which specifies what can be registered and now includes colours, shapes and pieces of music
▮ distinctiveness – in the sense that the trademark singles out the company and its
product from its competitors
▮ non-deceptiveness – in the sense that it should not in any way mislead the public
or lead them to believe that the product has attributes that are not in fact
present
▮ no conflict with existing trademarks.

Mini Case: Redwell Sounds Like Red Bull


Energy drink giant Red Bull has accused a tiny Norfolk brewery of tarnishing
its trademark – claiming that customers could be confused by their ‘similar’
names. Redwell Brewing, a five-month-old business in Norwich that employs
just eight people, has been warned that it faces legal action unless it changes
its name.
The powerful soft drinks giant, which sold 5.2 billion cans last year, said
the brewery could ‘dilute’ its international brand. It was told to ‘immediately
withdraw [its] UK tradematk application’ or face Red Bull’s legal might, in a
letter drawn up by the firm’s brand enforcement manager, Handjörg
Jeserznick. In his letter Mr Jeserznick said that both names ‘consist solely of
English words and contain the common element “red”’. He added:
‘Furthermore the term B-U-L-L and the term W-E-L-L share the same ending
and just differ in two letters. The ending (L-L) is identical and therefore the
terms RED BULL and “redwell” are confusingly similar from a visual as well
as from a phonetic point of view.’
Benjamin Thompson, one of four directors at the brewery, said he was 'completely surprised' when the letter arrived. He told The Independent newspaper that the brewery, which supplies 3,000 litres of beer weekly, was
named after Redwell Street in Norwich and that the choice had nothing to do
with the Austrian-based firm. He said: ‘The first time it occurred our name was
even remotely similar was when we received the letter. All we want to do is
run our business in peace.’ He added that the brewery had offered to change
their trademark application so it did not include soft drinks, but that Red
Bull’s lawyers had taken a ‘very firm stance’. He said that they were told they
could not use the ‘Redwell’ name to make any branded merchandise for the
brewery, to make shandy, or to make any beer that differed from their
original ale.
Among Redwell’s supporters is copywriter Graham Lineham, who tweeted:
‘My friends at @Redwellbeer in Norwich are being threatened [with] legal
action because @redbull think they own the word “red” and the letters LL.’ As
for legal costs Mr Thompson said the brewery could not afford to go through
the courts. ‘We’re a tiny brewery, with eight employees. We generate a certain
amount of income which we’re trying to use to expand our operation, not
waste on solicitors’ bills. We really don’t want to [change our name]. We don’t
have the money; we would have to re-brand completely.’
Note: Shortly after this appeared in the press, Red Bull announced that they
would not proceed with a legal challenge against Redwell Brewery using the
name Redwell on its beer.
Source: Morrison, S. (15th August 2013) ‘Tiny brewer sued – because Redwell sounds like Red Bull’,
Copyright ©The Independent.

Intellectual property rights that are inherent


It is a common misconception that protection for intellectual property derived from the creative activity of individuals only arises when the registration of a particular piece of intellectual property is complete. In reality, as Figure 7.1 indicates, there are intellectual property rights that come into existence automatically upon creation, or which otherwise arise, without the need for any kind of registration process or the payment of accompanying fees. These are normally
termed ‘inherent’ rights, precisely because no registration is required. They
represent an important class of intellectual property rights.

Design right
Design rights protect the appearance of a product (or part of a product) in terms of
its lines, contours, colours, shape, texture and material. This right arises
automatically once the design is recorded in a design drawing. Beyond that there is
no formal registration process, although it is sensible to document the design in
terms of authorship and date. A design right for a particular design lasts for either 10
years from the first marketing of products that use the design, or 15 years from
creation of the design, whichever occurs first.
The owner of such a design has the right within the first five years to prevent
anyone from copying the design; however, for the remainder of the period, third
parties may apply for a licence of right in respect of the design. This means that they
are entitled to a licence to make and sell products copying the design. Again, from a
practical perspective, it is advisable to retain any working documents prepared in
creating the design, and sign and date any design documents.
Not all designs qualify for design right; the design must be the shape or
configuration of a product. The owner of an unregistered design right does retain
the right to take action in the event that an unauthorised person copies the design to
produce a design that is the same, or substantially similar. There are limits to the
scope of protection offered by unregistered design rights, so if a third party
independently creates a design that does not involve copying your design, this will
not be an infringement of an unregistered design right. In contrast, where the design
right is registered, if the effect of any new design is to create an impression that the
new design is the same as your design, this would be classed as an infringement of a
registered design right. This is an important distinction between unregistered and
registered design rights.

Copyright
Copyright is an inherent right that arises from creative effort and applies to a wide range of creative works, including literary, artistic and musical works. Other types of work are also covered, such as films, sound recordings, broadcasts and typographical layouts, though with these, unlike the others, there is no
requirement for originality. Copyright is automatic and comes into existence upon
creation of the work. The Berne Convention requires signatories to recognise the
copyright of authors of other signatory countries (currently 164) in the same way
that they recognise the copyright of their own nationals.
Copyright confers upon the owner an exclusive right to certain actions in relation to the work. The actions concern the exploitation of the work and include selling
copies, giving others permission to copy and the like. The significance of the
exclusive right is that if someone who is not the copyright holder and does not have
permission to copy makes copies and sells them, then the copyright owner can sue
for infringement and seek redress, such as an injunction to forbid the sale of copies,
together with damages.
The range of works covered by copyright is extensive. The category of literary works covers a great deal more than books. In the commercial field it extends to authorship of many
things of a technical or commercial nature that perhaps would not at first seem to be
literary works, such as technical reports, equipment manuals, databases and
customer lists as well as engineering and architectural drawings and plans. Of
particular significance these days is that copyright extends to computer software.
Given the growth of computer applications this is a rapidly expanding field. Thus,
for a wide range of new products and services, such as videogames, for instance,
intellectual property rights may be less a matter of patent protection and more a
matter of copyright.
The copyright owner is normally the author who created the work, and, as with
many other forms of intellectual property right, he or she can assign it or sell it to
another. However, there are circumstances where copyright may be conferred not
on the author but on others. For instance, if authorship occurs during the ordinary
course of employment, then copyright belongs to the employer. In contrast, a contractor who creates a work will retain copyright unless the terms of the contract specify otherwise.
The exclusive right conferred by copyright extends to a wide range of activities
that includes:

▮ copying or reproducing
▮ adapting
▮ distributing
▮ issuing and renting
▮ public performance
▮ broadcasting.

However, copyright legislation does provide for certain activities of an educational or academic rather than commercial nature to be undertaken without infringement.
For example, it is permitted to copy at least part of a work for the purpose of private
study or research. Similarly, reviews and other works of criticism can copy part of a
work.
The duration of copyright varies according to the nature of the work, and from
country to country. As Figure 7.3 shows, in the UK the longest period of copyright
protection relates to literary, musical, artistic and dramatic works where the
protection lasts for 70 years beyond the lifetime of the author.
Figure 7.3 Copyright timescales

Mini Case: Beatles for Sale


From The Guardian newspaper, 13 December 2013
With the absolute minimum of fanfare and the greatest reluctance, 59 Beatles
songs are being released next week. There won’t be the often talked about 28-
minute version of ‘Helter Skelter’, nor the ‘holy grail’ 1967 performance of
‘Carnival of Light’. But there will be four extra versions of ‘She Loves You’,
five ‘A Taste of Honeys’, three outtakes of ‘There’s a Place’ and two demos of
songs given to other artists.
The downloads of Beatles recordings, which have long been bootlegged
but never been legally available, include outtakes, demos and live BBC radio
performances. A spokeswoman for Apple would only confirm that the 59
tracks are being released. As to the company’s motivation: ‘No comment.’ Is it
because of the copyright laws? ‘No comment.’
The release is because of recent changes in European Union copyright laws. Previously artists would retain copyright for 50 years after a song was released. That was
increased to 70 years but another change makes unreleased material free of
copyright – and therefore in the public domain – 50 years after it has been
recorded. Industry observers say the Beatles release could be one of many
annual issuings of previously unreleased recordings.
The new – or old – Beatles recordings include 44 unreleased songs recorded
for BBC programmes in 1963. It includes ‘I Saw Her Standing There’, recorded
live for Saturday Club, presented by Brian Matthew, in March; ‘You Really
Got A Hold On Me’ recorded for Pop Goes the Beatles in September; and ‘Love
Me Do’ recorded for Easy Beat in October.
The release could well become a trend. Last January Sony released a four-
CD set of 86 Bob Dylan tracks and was unashamed about the reason, giving it
the subtitle The Copyright Extension Collection Vol 1. It was clearly not for
general consumption, however, because Sony released only 100 copies and
you would now have to pay more than £700 on eBay if you wanted one. Sony
followed that up with the release of the 50th Anniversary Collection: 1963.
Again it is only 100 copies – this time on six vinyl LPs – and it contains
unreleased recordings and outtakes which now benefit from copyright.
Source: Brown (2013), Copyright © Guardian News & Media Ltd 2008.

Passing off
Passing off is a common law tort. As such it was established not through legislation
but through judicial precedent (i.e. the product of judgements by individual judges).
With passing off there is no requirement for registration. Instead, passing off is
based on the principle that a trader must not sell goods under the pretence that they
are the goods of another. To do so is to commit passing off.
Passing off is a form of unfair competition, where one party misrepresents the
goods or services they are selling, so as to potentially confuse consumers into
thinking they are buying a product made by somebody else. Those adversely affected by such unfair competition can bring an action for passing off to stop or prevent others from copying their work. It is not confined to products but extends to things like
packaging, labelling, brand names or promotional materials.
For an action for passing off to succeed, the trader whose goods have been
passed off has to demonstrate three things known as the ‘holy trinity’ and first
enunciated in the so-called Jif Lemon case of Reckitt and Colman v. Borden:

▮ the claimants’ goods/services have acquired goodwill or reputation in the


marketplace that distinguishes them from competitors
▮ the defendant misrepresents his goods or services, whether intentionally or
unintentionally, so that the public have the impression that the goods offered
are those of the claimant
▮ the claimant’s trade is normally through loss of sales, though it could equally be
through damage to or dilution of reputation.
Although recent changes to trademark legislation have expanded what can now be registered as a trademark, nonetheless, as the Rihanna Mini Case shows, passing off remains a valuable common law right, still used by those who believe that their goods or services have been misrepresented in some way.
Mini Case: Rihanna Wins T-shirt Face-off
The singer Rihanna won her legal battle with billionaire businessman Phillip
Green yesterday when the High Court ruled that Topshop duped customers
into buying unauthorised T-shirts bearing her image.
In a case with millions of pounds in damages and legal costs at stake, Mr
Justice Birss found that the company had ‘passed off’ the T-shirts as being
authorised by the singer, imperilling her reputation. The dispute centred on
the sale of a shirt showing a photograph of Rihanna taken during a video
shoot for her ‘We Found Love’ hit single in 2011. The offending garment,
which went on sale in March last year, was initially promoted as the ‘Rihanna
Tank’ before Topshop dropped the mention of the singer. But it continued
selling the top until it sold out last August.
The singer, who has an exclusive deal to design clothing for Topshop’s
rival River Island, along with two Los Angeles companies, took the action
against Arcadia Group Brands Ltd and Topshop. Mr Justice Birss, who dubbed
Rihanna a ‘world famous pop star’ with a ‘cool, edgy image’, ruled in her
favour at a hearing in London. A ‘substantial number’ of buyers were likely to
have been ‘deceived’ into buying the T-shirt because of the ‘false belief’ that it
had been authorised by Rihanna, he said. He ruled that the ‘goodwill’ of people
towards the singer had been damaged, that her merchandising business had
lost sales, and the singer had suffered ‘loss of control over her reputation in
the fashion sphere’.
But the singer still has a fight on her hands, with the fashion chain now
planning an appeal. In a statement, Topshop said: ‘We robustly dispute the
judge’s conclusion.’ The company added that it was ‘surprised and
disappointed’ at the decision. But the judge warned that the ruling should not
be taken as a signal for a flurry of legal actions from celebrities. ‘This case is
not concerned with so-called “image rights” ... however much various
celebrities may wish there were, there is no general right by famous people to
control the reproduction of their image,’ he said.
Source: Owen, J. (1st August 2013) ‘Rihanna wins T-shirt face-off after court rules Topshop duped her
fans’, Copyright © The Independent

Licensing
One of the key features of intellectual property rights (IPR) is that they provide
scope for licensing, that is to say, where the IPR associated with an invention has
been legally established through a patent, the holder can then permit someone else
to produce the invention in return for a fee. Such an arrangement is usually known
as a ‘licensing agreement’.
One of the attractions of licensing agreements is that the inventor does not have
to complete the final stages of the innovation process, such as manufacturing and
distribution. Where the inventor is an individual or a small company this can be a
very important consideration. Licensing provides a means whereby small start-up
businesses, lacking financial resources and complementary assets (e.g. reputation
and brand name, marketing expertise, merchandising capability, product support
facilities, etc.), can commercialise their technological innovation. Licensing not only means that the innovator does not have to find the capital expenditure needed to build or buy these assets; it also reduces the risk (Teece, 1986).
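To put the financial attraction of licensing in concrete terms, consider a simple illustrative calculation (the royalty rate and sales figure below are assumptions chosen purely for illustration, not figures drawn from any of the cases in this chapter). A licensing agreement will typically specify a running royalty expressed as a percentage of the licensee's net sales of the licensed product:

\[
\text{royalty income} = \text{royalty rate} \times \text{net sales} = 0.05 \times £2{,}000{,}000 = £100{,}000 \text{ per year}
\]

On these assumed figures the inventor receives a steady income without having to fund manufacturing, distribution or marketing, which is precisely why licensing appeals to individual inventors and small firms that lack complementary assets.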
Both James Dyson, with his dual-cyclone vacuum cleaner, and Ron Hickman,
with his Workmate® portable workbench, were individual inventors. They did not
work for a company or have the backing of a company behind them. Consequently,
neither planned to produce their invention themselves. Both tried to interest large,
well-established consumer-product companies in their invention and persuade them
to take out a licence. Unfortunately both Dyson and Hickman, despite a great deal of
effort, found it extremely difficult to persuade a company to adopt their invention
and agree to purchase a licence, possibly because in both cases the invention was
unlike anything then on the market. What is significant is that both men felt that
licensing was the most sensible course of action. They recognised that they did not
have the expertise or the resources to undertake the final commercialisation phase
of the innovation process. Equally both men recognised the importance of asserting
and protecting their intellectual property (i.e. their inventions) through patents. As it
turned out both men did eventually find a company willing to take out a licence.
James Dyson persuaded a Japanese company, Apex, to take out a licence for his
dual-cyclone technology, while Ron Hickman had to start manufacturing and selling
his portable workbench on a small scale before Black & Decker agreed to take out a
licence.

Case Study: The Anywayup Cup


Mandy Haberman studied Graphic Design at St Martins School of Art, before
working for the Inner London Education Authority on an adult literacy
programme. She spent time as a mum with three young children and it was
then that she first dabbled with inventing. Her youngest child had severe
feeding difficulties as a baby and this spurred Mandy to develop a special
feeding bottle. She spent a long time researching feeding bottles and
eventually devised the Haberman feeder, a specialist feeding bottle supplied
to hospitals (Insley, 2012).
In the summer of 1990 Mandy was at the home of a friend when a visiting toddler, using a conventional trainer cup with a lid and a spout, left a trail of blackcurrant drink stains across an immaculate cream-coloured carpet. Aware from her own children of the difficulties infants had in switching from feeding bottles to conventional cups, and of the inadequacies of existing so-called trainer cups, Mandy began thinking about how trainer cups might be improved to solve this problem. There were trainer cups on the
market at this time which could be manually shut off for travelling, but none
that sealed automatically between sips. Her aim was a trainer cup that would
seal between sips, making it genuinely leak-proof, even if it was turned upside down and shaken vigorously or simply left in an upside-down position.

Figure 7.4 Patent GB-B-2266045

Source: Cole (2005).

Her idea was to replace the row of small holes in the spout of a
conventional trainer cup with one large opening. Into this opening a soft, slit-
cut membrane was moulded. The slit valve was of the same type as used on
feeding bottle teats. The valve controlled the flow of liquid. It would allow air
in between sucks but would only allow liquid out when suction was applied.
When the child wanted a drink, his or her suction would open the valve. When
not in use the valve would automatically return to the closed position. It was a
simple solution to a well-known problem that had been around for a long time.
Mandy experimented in the kitchen of her home in Hertfordshire. She built
several prototypes before finally coming up with one which would not leak
when it was lying on its side with the spout downmost, even if left overnight.
Mandy had learnt a lot from her early experience with the specialist
Haberman feeder. She employed a patent agent and she spent a lot more on
intellectual property rights, registering patents in all her possible strategic
markets. However, prior to proceeding with her initial patent application,
Mandy instructed her patent agent to conduct a search to see if there was any
‘prior art’. Nothing of significance was revealed and she duly filed a patent
with the UK’s Patent Office (now the Intellectual Property Office) in 1992. It
was granted as patent GB-B-2266045.
Once the patent had been finalised she approached a total of 18 companies
(Cole, 2005) that manufactured products for infants, in the hope that one of
them would be sufficiently interested to take out a licence. In each instance
she first got the company to sign a non-disclosure agreement. Several
companies expressed an interest including H.J. Heinz Company Ltd, Addis Ltd
(which sells under the ‘Maws’ trademark) and Jackel International Ltd (which
sells under the Tommee Tippee trademark). Despite this promising start, in the
end no licensing agreement was forthcoming with any of the companies.
Eventually Mandy found a small company (it had just five employees at the
time) located in Wales that was willing to help. V & A Marketing Ltd, run by Vic Davies and Adrian Llewellyn-Jones, had been established to exploit
intellectual property rights for new and innovative products, and Mandy
granted them manufacturing and distribution licences to enable them to
produce and market her design. Working on a shoestring budget they
marketed and distributed the cup under the trademark ‘The Anywayup Cup’.
They decided to launch the cup at a trade exhibition but unfortunately
chose the Nursery Trade Fair, which was aimed at nursery schools and crèches,
not trade buyers of baby products (Cole, 2005). Despite this, the new trainer
cup with its non-spill valve generated much interest. When they attended the
Baby and Toddler Fair they were overwhelmed. For the two events combined
they took a total of some 8,000 advance orders. And this was before the first
cup had even come off the production line. Sales commenced in March 1996.
By the end of that year the cups were selling at the rate of 20,000 per month.
Only 12 months after launch they were selling at the rate of 685,000 a year. By
this point orders were running well ahead of capacity. They were forced to
retool and took the opportunity to redesign the product. With both the old and
new designs on the market they achieved total sales of over three-quarters of
a million cups in 1997. In the first 9 months of 1998 they sold 2 million cups.
These sales were achieved despite an advertising budget of barely £2,000
(Cole, 2005). Instead sales came predominantly through word-of-mouth and
through recommendation from mother to mother. Although supermarkets are
generally reluctant to deal with one-product companies, they did succeed in
getting the product onto the shelves of major supermarket chains such as
Safeway and Tesco. Tesco had initially shown little interest, but when Adrian
Llewellyn-Jones mailed a cup filled with Ribena® to their buyer, and not a
drop was found to have leaked, the buyer very quickly adopted a much more
positive attitude.
But later on in 1998 sales plummeted. This coincided with the launch of a
similar product, the Super Seal Cup, under Jackel International’s well-known
brand name Tommee Tippee. Its non-spill valve bore a striking similarity to
that of the Anywayup Cup (Insley, 2012), which was now facing a massive
loss of market share. Despite huge misgivings about taking on a large
business corporation almost single-handed, Mandy Haberman together with V
& A Marketing decided to take Jackel International to court alleging
infringement of her patent.
As is very common in cases like this, Jackel International claimed that
Mandy’s patent was not valid and that consequently they were not guilty of
infringement. In court their lawyers argued that the design of the valve was
‘obvious’. They claimed that nothing in Mandy’s design was ‘outside normal
workshop modifications’ (Insley, 2012) and that ‘its operating principles had
been known of for a long time’. Jackel International argued that Mandy had
‘merely solved a known problem with simple and readily-available expedients’
(Cole, 2005: 653). Since Mandy’s design was ‘obvious’ it did not therefore
involve an inventive step and her patent was not valid and therefore could not
be infringed in the first place.
However, the judge, Mr Justice Laddie (Cole, 2005: 654), referring to the
case of Windsurfing International Inc. v. Tabur Marine (Great Britain) Ltd
[1985] RPC 59, noted,

‘Mrs Haberman has taken a very small and simple step, but it appears
to me to be a step which any one of the many people in the trade could
have taken at any time over at least the preceding ten years or more. In
view of the obvious benefits which would flow from it, I have come to the
conclusion that had it really been obvious to those in the art it would
have been found by others earlier, and possibly much earlier. It was
there under their very noses. As it was, it fell to a comparative outsider to see it. It is not obvious.'
In other words, Mandy genuinely had taken an inventive step. Her patent was
held to be valid and infringed.
Source: Cole (2005); Haberman (2014); Insley (2012).

Questions
1 What is a non-disclosure agreement and why did Mandy Haberman use
one?
2 Why do you think Mandy Haberman initially looked for a third party to
take up her invention and exploit it commercially?
3 What is a licensing agreement?
4 What is meant when it says, ‘sales came predominantly through word-of-
mouth’?
5 What is a patent agent and why did Mandy Haberman instruct him to
search for ‘prior art’?
6 What did Jackel International mean when they claimed that Mandy
Haberman’s design was obvious?
7 What was the significance of the claim that there was ‘no inventive step’?
8 What was the case of Windsurfing International Inc v. Tabur Marine
(Great Britain) Ltd and why did the judge refer to it?
9 Why did the judge feel that Mandy Haberman’s design was not obvious?
10 What was the judge referring to when he talked about ‘those skilled in the
art’?

Questions for discussion


1 Which of the following items would not meet the criteria required for a
registered design?

▮ a portable CD player
▮ a rubber sealing ring for the door of a washing machine
▮ a toilet disinfectant container
▮ a corkscrew.
2 Why is computer software not normally patentable? What other forms of
protection are available?
3 Why are trademarks an increasingly important piece of intellectual property?
4 Why do companies accused of infringing a patent often mount a defence based
on a counter-petition claiming that the patent is not valid?
5 What is meant by diffusion? What impact does the patent system have on the
rate at which new technological advances are diffused?
6 What is meant by ‘inventive step’ where patents are concerned and why is it
important?
7 What is the Windsurfer test?
8 Why do inventors need to take particular care before publicising their
inventions?
9 Which of the following can be registered as a trademark?
▮ a brand name
▮ the shape of a container
▮ a smell
▮ a colour
▮ a domain name.

10 What is meant by the term ‘passing off’?


11 Why has the Internet been a very significant development as far as copyright is
concerned?

Exercises
1 Why have trademarks become an increasingly important form of intellectual
property?
2 What is intellectual property and how can a firm or individual protect it?
3 What remedies are available to holders of patents who find that their patents
have been infringed?
4 Why do some patent-holders choose not to license their invention/innovation to
others?
5 Carly Fiorina, the former head of Hewlett-Packard, when asked whether the
company was living up to its creative traditions under her leadership, would
point out that the company filed 11 patents per day compared
with only three per day when she arrived. Comment critically on this statement.

Further reading
1 Bainbridge, D. (2007) Intellectual Property, 6th edn, Pearson Longman,
London.
A definitive legal textbook that provides a highly detailed technical explanation
of the various forms of intellectual property. However, it is a legal text and as
such is not recommended for use other than as a reference book.
2 Mostert, F. (2007) From Edison to iPod: Protect your Ideas and Make Money,
Dorling Kindersley, London.
And now for something completely different. This is not a textbook, rather a
popular introduction to the subject of intellectual property. It covers most
aspects of intellectual property in considerable detail. A big plus is the
illustrations which provide highly informative examples of the various
intellectual property rights. Strongly recommended as an introduction to the
subject.
3 Van Dulken, S. (2000) Inventing the 20th Century: 100 Inventions that Shaped
the World, British Library, London.
This is not a book about intellectual property rights, but it gives a valuable
insight into the subject. Using patent records from the Intellectual Property
Office, including many diagrams and illustrations, it provides an account of 100
UK patents of the twentieth century.
4 de Werra, J. (ed.) (2013) Research Handbook on Intellectual Property
Licensing, Edward Elgar, Cheltenham.
A really interesting and useful collection of papers written by both intellectual
property practitioners and academics. The book has two particular features that
make it valuable for those interested in the use of intellectual property as an
important business asset. First, the text covers the licensing of not merely
patents (as one finds in many books) but all types of intellectual property
including trademarks and copyright. Secondly, the text is not Anglocentric,
providing perspectives on aspects of licensing from around the world.
How Do You Manage Innovation?

This part contains the following 4 chapters:


8 Innovation strategy
9 Technological entrepreneurs
10 Funding innovation
11 Managing innovation
Innovation Strategy

Objectives
When you have completed this chapter you will be able to:
understand the nature of strategic management
appreciate the need to take a strategic perspective on innovation
explain the nature of innovation strategy
recognise the various innovation strategies that are available as a means of
exploiting innovations
evaluate innovation strategies and determine the appropriate
circumstances in which to use them.

Introduction
You can have fantastic ideas, you can be very clever at inventing, you can invest a
fortune in research and development (R&D) and you can be a recognised
technological leader, but you may still fail when it comes to innovating. How come?
Ideas, inventions and R&D are all concerned with value creation. They are part of
the creative process that leads to the development of a new product, service or
process, but as we saw in Chapter 1, there is more to innovation than value creation.
The other crucial ingredient in innovation is value capture and if you don’t get that
right, then no matter how good you are at all the other things, the innovation may
well fail, or at least you will fail to benefit from the innovation.
Value capture is about innovators appropriating (i.e. receiving) benefits from
their innovation, and this is dependent on what happens in the marketplace. It is in
the marketplace that innovations succeed or fail. It is often assumed that with
innovation what matters is winning the race to be first-to-market. This is, after all,
why innovation is often portrayed as a heroic endeavour, where the innovator
struggles against seemingly overwhelming obstacles to get the product on to the
market before anyone else. However, innovators who rush their innovations to
market frequently discover that winning is not enough.
As we saw in Chapter 4, in the state of flux and discontinuity that the theories of
punctuated equilibrium and dominant design predict will follow the introduction of a
major new innovation, a number of competing designs may well appear. The
innovator may then find that while being first-to-market is an advantage, it is not
enough of an advantage. Under these circumstances seemingly attractive and useful
new products can be upstaged by offerings from competitors. Sometimes these
competing designs are copies which are less sophisticated and less proficient
technically, but despite this they offer the consumer better value, perhaps because they come with better technical support or better availability. As a result the
innovator fails to capture value, which instead goes to a competitor or competitors.
Hence, having got something to work, having developed a prototype, perhaps
even having got as far as patenting it, the innovator, whether as an individual or an
organisation, is then faced with some major choices in terms of how best to exploit
the innovation. These choices are about how best to attack the market and they
form the focus of this chapter. Because the choices have long-term consequences in
terms of the life cycle of the innovation, they represent strategic decisions: that is to
say, decisions that affect the long-term future of the innovation. They may even
affect the long-term future of the organisation. Consequently, this chapter is all
about strategic management and strategic decisions connected with innovation: in
short it is about innovation strategy.
Why does strategy matter when it comes to innovation? Partly it is because
market dynamics, particularly where new products and services are concerned,
involve a degree of uncertainty. It can be very difficult to predict competitor
reaction. It is precisely to deal with this sort of uncertainty that it is necessary to
consider the bigger picture which includes not just consumers and their reaction to
an innovation, but a host of other parties, including competitors. Another factor is
that sometimes the resources required to bring an innovation to market are on such
a scale that they require the organisation concerned to ‘bet the company’. This
means that the investment associated with the innovation, in terms of time, effort
and money, is such that if the innovation fails, the future of the organisation may be
at risk. Finally, decisions about innovation tend to involve a relatively long
timescale. It takes time to get an innovation to market and it can take time for an
innovation to catch on and make money. Under these circumstances tactical
decision-making involving timescales of weeks and perhaps months is not sufficient.
Mini Case: PJB-100 Digital Music Player
‘The MP3 that changes everything.’ That was how the American magazine
Popular Mechanics described a pioneering new MP3 digital music player
which included an integrated hard drive that allowed you to store up to a
hundred CDs’ worth of music. You might think that the magazine was talking
about Apple’s iPod, but you would be wrong. This pioneering product was an
innovation developed by a big computer company in Silicon Valley, but it
wasn’t Apple. It was the computer manufacturer DEC (since taken over by
Compaq which in turn was acquired by Hewlett-Packard) and the digital
music player was the PJB-100.
Today, when almost everyone has heard of the iPod, few have heard of the
PJB-100. It was developed by a small team at DEC led by Cambridge-educated
computer scientist, Andrew Birrell, the man who first came up with the idea of
fitting the smallest available hard drive, a 2.5-inch drive from a notebook
computer, into an MP3 player to create what was effectively a personal
jukebox. Over the course of a year Birrell’s team solved the problems of
energy management, navigation, file transfer and integration with a PC, and
the PJB-100 went on the market almost two years before the iPod, in
November 1999.
Sadly, although the PJB-100 was the first product of this type to reach the
marketplace, it was not a commercial success. It proved to be just a little too
big, a little too expensive and a little too awkward to use. So it was that while
DEC pursued a first mover strategy to be first-to-market with this new type of
product, it was actually Apple with its follower/imitator strategy that struck
gold, and DEC’s engineers will ‘dwell forever in a destination they never
booked – the limbo populated by creators doomed to see their great ideas
realized, and hugely improved upon, by companies with more visionary
bosses’ (Levy, 2006: 51). They are in good company.
Source: Levy (2006).

The nature of strategy


Strategic decisions are big ones: they have long-term consequences, they often involve a lot of money and they affect a lot of people. Because their consequences can be so
significant, strategic decisions are normally left to the most senior managers within
an organisation, although particular individuals and groups lower down the
organisation may be highly influential.
Strategic decisions and their associated strategies can typically be ordered in
most organisations into a hierarchy (see Figure 8.1) that has business strategy at the
apex, functional strategy in the middle and product/service strategy at the base.

Figure 8.1 Hierarchy of strategies

Business strategy, as its name implies, is concerned with the strategy of the whole
business, in particular how it achieves its long-term objectives or goals, such as
growth or internationalisation. In meeting these objectives, business strategy is
mainly concerned with answering the question: how does the business compete?
Only by competing successfully and beating its competitors is a business likely to
grow.
Within any particular business strategy there will be associated functional
strategies (see Figure 8.1). These cover functional areas such as marketing, human
resources and operations. They operate at a level below business strategy and they
determine how each of the functional areas supports the business strategy. Hence a
marketing strategy is an example of a functional strategy. Typically a marketing
strategy would specify a range of goals, policies and actions designed to engage
with customers and competitors in pursuit of an overall business strategy. It would
aim to integrate a range of marketing activities covering advertising,
merchandising, promotion, market research, market planning, marketing
communications and the like, to be used in achieving the marketing goals. Strategies
for other functional areas would similarly aim to take a strategic approach to the
management of the function.
Among the functional areas covered in this way is technology and the relevant
functional strategy is a technology strategy. Dodgson (2000: 134) describes a
technology strategy as comprising ‘the definition, development and use of
technological competencies’. Thus technology strategy is concerned with decisions
about the technology that an organisation uses in order to deliver products and
services to customers. These decisions are likely to include: which technologies
should an organisation employ? How much money should the organisation invest in
technology? How should the technology be developed?
Burgelman et al. (2001) argue that technology strategy is to do with the set of
technological capabilities that the firm chooses to develop. Virtually all
organisations employ one or more technologies. However, only certain of these
technologies will be crucial to an organisation and capable of materially influencing
its competitive advantage. It is these core technologies that form the focus of
technology strategy. Technology strategy is concerned with the long-term
development of the core technologies that make up the technology base of the
organisation. In this context development has to address two issues: the breadth of
the technologies that are core technologies and their depth. Breadth refers to the
range of technologies, which may be set narrowly where the technology base is
highly specialised, or broadly if it encompasses a number of different technologies.
Similarly, depth refers to the level of expertise associated with a technology. This can range from comparatively modest expertise, where depth is limited, to extensive expertise, where depth is much greater. Where
technology is a crucial feature of a product or service, as in high technology
industries such as computing, aerospace or biotechnology, technology strategy is
likely to be critical to the competitiveness of that organisation.
Below functional strategy comes a third level, namely product strategy. This
essentially sets out the long-term development of the product or service. A product
strategy is rather like a roadmap in that it shows where the product or service is
going over the long term. In so doing a product strategy sets out a vision for the
product in terms of its long-term development and how this fits with the overall
direction of the organisation. This is likely to include the anticipated life cycle of the
product, details of how the product will compete (i.e. its competitive strategy), the
product platform in terms of the other products that may be derived from it, the
market segments in which it will compete, and the technologies to be employed. A
product strategy is also likely to say something about innovation and may well
include details of the relevant innovation strategy for the product. An innovation
strategy is therefore likely to be an integral part of a bigger and broader product
strategy.

Innovation strategy
What is an innovation strategy? At this point it is perhaps appropriate to draw a
military analogy, specifically the distinction between strategy and tactics. Taking
the example of the Battle of Waterloo in 1815, where an Anglo-Dutch army under
Wellington and a Prussian army under Blucher defeated Napoleon, Wellington’s
decision to form his infantry into squares to face the French cavalry was tactical. In
contrast Wellington’s decision to fight Napoleon at Waterloo, rather than retreat and
fight somewhere else, was a matter of strategy. Thus strategy in military terms is
about big decisions such as whether, where and when to fight. Similarly with
innovation, innovation strategy is about the big decisions surrounding innovation.
Decisions about the level of research and development (R&D), the type of
innovation, or the most appropriate intellectual property rights to employ, are
tactical decisions. Important though these decisions are, they are not matters for
innovation strategy. Innovation strategy is concerned with bigger, broader and
longer-term issues. If the innovation equivalent of the battlefield is the market, then
innovation strategy is concerned with questions of whether, where and when to
fight, which when translated from the battlefield to the market comes down to:
▮ whether to enter a market?
▮ when to enter the market?
▮ where to enter the market?

The first of these questions may seem rather drastic, but it is a question that should
be asked. It may be more appropriate to let another organisation, which by virtue of
financial resources, brand name or expertise is better equipped, carry out the
implementation of the innovation. With innovation being conducted by another
organisation, this is innovation via an external route, a form of open innovation (as
outlined in Chapter 1). The distinction between external routes to innovation and
innovation strategies is highlighted in Figure 8.2. The second question is one of the
most crucial and yet the timing of innovation is all too often not questioned, as it is
assumed that being first to market is the best strategy. As we shall find out shortly,
there is considerable evidence to suggest that this is by no means always the case.
Finally, where to innovate is not so much a matter of geography as of whereabouts in the market the innovation should be introduced. In particular, is there an alternative to a full-scale assault on the market? Are there particular market niches which it might be better to target?
Figure 8.2 Innovation strategies

These may seem innocuous questions but they are crucial. The market is no mere
receiver of innovations. Markets, particularly for innovations, are surrounded by
uncertainty and it is this aspect which makes these questions strategic. One has only
to contemplate what happens if the questions aren’t asked or if the wrong answers
are given. Not only is history littered with failed innovations which the market
rejected, but the consequences of failure can be catastrophic, precisely because in
some instances innovation strategy is about ‘betting the company’, with the result
that failure brings down the whole organisation. An example of betting the company
and an inappropriate choice of innovation strategy is provided by the example of
the De Havilland Comet, the world’s first commercial jet airliner. The British aircraft
manufacturer De Havilland was the first to develop and market a commercial jet
airliner, the Comet. The De Havilland Comet entered service almost six years before
its nearest rival from the American planemaker, Boeing. When the Comet first
entered service with the airlines, it transformed air travel by virtue of being much
faster and more comfortable than the piston-engined airliners then in service.
However, a series of crashes led to the Comet fleet being grounded. Investigations
showed that the aircraft suffered from a design flaw. Unfamiliar with the demands
of high-speed flight, the designers had underestimated the stresses imposed on the
airframe, resulting in the onset of metal fatigue. Although a modified design was
introduced, it was too late. Boeing’s engineers were able to learn from these
mistakes and produce a more robust and reliable aircraft with the result that the
Boeing 707 outsold the De Havilland Comet more than tenfold and became the
airline industry standard for commercial jet airliners.

External routes to innovation


In terms of strategic decisions about innovation the most crucial question is clearly:
should the company enter the market with its innovation?
Drastic though a negative answer to this question may seem, it could be
appropriate for a number of reasons (Ford and Ryan, 1981):
▮ Lack of resources. The company that developed the technology (i.e. the patent-
holder) may lack not just the necessary finance to exploit the technology, but
also the facilities or the staff. This may well occur with SMEs. Where finance is concerned, it is not simply a question of holding the necessary funds; access to finance may be just as important.
▮ Lack of knowledge. The company that developed the technology may not have
sufficient knowledge of manufacturing, marketing or distribution channels. This
can often occur where scientists and technologists have developed a new
technology, but lack the commercial background to exploit it. Given that
innovation is often initiated by outsiders, it is perhaps not surprising that lack of
knowledge can be a problem.
▮ A poor fit with the company’s strategy. The technology may be one that has
applications in markets that are too small, too remote or too specialised to be of
value for the company that has developed the technology. Under these
circumstances exploitation of the technology will not fit comfortably within the
strategy of the company.
▮ Lack of reach. If the technology has applications that span markets across the
world, then it may be that the company does not have the global reach to
market technology applications in all these markets. Under such circumstances
it may prefer to sell the technology, but on terms that confine it to very specific
markets.
If a firm decides, in the light of these factors, that it is not wise to exploit the innovation itself, it has the option of using some form of open innovation as a means of exploiting the technology externally. Under a regime of open innovation the aim will
be to transfer the technology to a third party (see Figure 8.3) in order that they may
complete the innovation process by bringing the innovation to market.
If open innovation is the preferred route, then as Figure 8.3 shows there are a
number of possible innovation strategies that are available, of which three of the
most common are:
▮ licensing
▮ spin-offs
▮ divestment/demerger.

Figure 8.3 Open innovation

Licensing
As we saw in the previous chapter, licensing is open to organisations that can
exercise control over their intellectual property rights. With patent protection in
place one firm can grant another a licence to manufacture products using its
technology. Under the terms of a licence the patent-holder normally retains
intellectual property rights over the technology but allows the licensee to use the
technology in the products or services it develops, in return for a royalty fee.
The licensing agreement normally provides for a royalty payment calculated as a percentage of the selling price of the product, typically between 3 per cent and 10 per cent. Licensing agreements will usually include a
minimum level of royalty that is not a function of sales, so that the inventor is at
least guaranteed a minimum return. But the exact financial arrangements will vary
according to circumstances. If the firm selling the technology thinks there is a high
level of uncertainty surrounding the products or services the licensee is planning, it
may seek a significant initial payment for the licence and a smaller royalty fee.
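To make the royalty arithmetic concrete, the sketch below works through a hypothetical arrangement of the kind just described. The 5 per cent rate, the £200 selling price and the £50,000 guaranteed minimum are illustrative assumptions only, not figures from any actual licence.

```python
# A minimal sketch of a royalty arrangement with a guaranteed minimum,
# as described above. All figures are hypothetical and purely illustrative.

def annual_royalty(units_sold: int, selling_price: float,
                   royalty_rate: float = 0.05,
                   minimum_royalty: float = 50_000.0) -> float:
    """Royalty due for a year: a percentage of sales revenue,
    but never less than the agreed minimum."""
    sales_based = units_sold * selling_price * royalty_rate
    return max(sales_based, minimum_royalty)

# In a poor year the licensor still receives the guaranteed minimum ...
print(annual_royalty(units_sold=2_000, selling_price=200.0))    # 50000.0
# ... while in a good year the royalty tracks sales.
print(annual_royalty(units_sold=20_000, selling_price=200.0))   # 200000.0
```

The guaranteed minimum protects the licensor in the early years, while the percentage element means the licensor shares in any later success.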
Licensees will typically be organisations that possess assets the owner of the
technology does not have, such as:
▮ Knowledge. This might be market knowledge, resulting from a substantial
market presence, or the result of experience working in a particular market.
Alternatively it might be production knowledge derived from years of working
in the trade. Whatever the field, it is likely to be ‘tacit’ knowledge based on
experience rather than formal knowledge resulting from qualifications. It is
precisely because tacit knowledge can take a long time to acquire and is not
easily codified or assimilated through conventional learning, that it may be
better to grant a licence to someone who does have this kind of knowledge.
▮ Access to finance. This might be cash, but is more likely to mean loan capital
or equity capital. Certainly it needs to be some form of long-term capital since
this is what the innovation is likely to require. Sometimes conventional sources
of finance will not be appropriate, especially if there is a high degree of risk. In
these circumstances what is required may well be 'patient capital': that is to say, the investor providing it must be willing to wait, since innovations frequently take a long time to generate a return. Of course, it is not so much a case of having such capital as
having access to individuals or organisations who themselves have access to it.
▮ Motivation. Finally, those seeking to exploit a technology have to be motivated to carry out the innovation themselves and to see a long and difficult process all the way through. This is particularly important when one considers that innovation requires considerable commercial acumen. Inventors, for their part, frequently like inventing and may not be much interested in what are often commercial decisions. Under circumstances such as these the exploitation of the technology through innovation is probably best left to someone else, that is, a licensee.

According to Cesaroni (2003), the case for licensing, as opposed to in-house development, as a means of exploiting a proprietary technology, rests on three factors:
▮ complementary assets in production and marketing
▮ transaction costs associated with acquiring complementary assets
▮ competition in the final product market.
Complementary assets are the assets required to support the production and sale of
products incorporating the technology and might include manufacturing expertise,
marketing expertise, product support or training. If a company does not have these
complementary assets, as was initially the case with James Dyson and his dual-
cyclone technology (Dyson, 1997), licensing is more appropriate than in-house
development. Transaction costs are the costs of the exchanges associated with either in-house development (i.e. the purchase of complementary assets) or licensing the technology. If the transaction costs of licensing the technology are lower than the costs involved in purchasing the required complementary assets, then licensing is the more logical strategy. Finally, licensing may also be appropriate depending on the extent of competition in the final product market.
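The logic of the transaction-cost comparison can be expressed as a simple decision rule. The sketch below is illustrative only, and the cost figures in the example are hypothetical.

```python
# A minimal sketch of the transaction-cost comparison described above.
# All figures are hypothetical and purely illustrative.

def preferred_route(licensing_transaction_costs: float,
                    complementary_asset_costs: float) -> str:
    """Return the lower-cost route for exploiting a proprietary technology."""
    if licensing_transaction_costs < complementary_asset_costs:
        return "license the technology"
    return "develop in-house"

# Example: negotiating and policing a licence is estimated at 0.5m, while
# acquiring the manufacturing and marketing assets needed for in-house
# development would cost an estimated 4m.
print(preferred_route(0.5e6, 4.0e6))   # -> license the technology
```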
Pilkingtons, the glass manufacturers who developed the revolutionary ‘float
glass’ process for manufacturing plate glass, for instance, relied heavily on
licensing. Their thinking was that licensing the technology would both provide the
company with an income and prevent other companies from developing an
alternative process. The strategy proved highly effective. The first foreign licence
was issued to the Pittsburgh Plate Glass Company in 1962 and by the 1990s the float
glass process had been licensed to 35 companies in 29 countries. This was in addition
to the 14 plants operated by the company itself (Henry and Walker, 1991).
In recent years there has been renewed interest in licensing as a strategy for the
exploitation of technology. One study (Kollmer and Dowling, 2004) noted that
licensing was no longer confined to small companies lacking the resources to exploit
a technology fully, with many large well-established concerns using it to exploit
their more peripheral technology assets while focusing their internal resources on
core activities.

Spin-offs
A spin-off is where one firm creates a new, independent company in
order to exploit the innovation. Typically the new independent company that is
spun-off in this way is a relatively small entity specialising in the application of the
new technology. It is likely to be an attractive option where the technology of the
innovation is not closely related to the core technology of the firm, because it avoids
unnecessary distractions.
In order to spin off the innovation through the sale of a subsidiary company, it is
necessary to ‘package’ the technology alongside the staff who have developed it and
the associated corporate resources (e.g. equipment, facilities, etc.). The normal way
of doing this is to locate the technology and the relevant human and other resources
in a separate company. As a separate company, the spin-off will have its own
management who will be able to set the direction for the new entity. Quite often the
spin-off company will then be sold although the parent company may retain an
equity stake in it. Among those likely to take a financial interest in the new company
are venture capital organisations, who are looking to fund promising new
technologies and promising new management teams. They will be hoping to
‘harvest’ their investment at a later date possibly through an initial public offering
(IPO) when the innovation begins to achieve success in the marketplace.
One of the main advantages of spinning off a company is that it enables the new
management to focus on the application and development of the new technology.
They are free to innovate without some of the distractions of corporate
life. Spin-offs have the attraction that they can generate a substantial lump sum,
rather than the future income stream associated with licensing. If the parent
company is anxious to reinvest the proceeds in other ventures (e.g. core business,
new ventures, etc.) then clearly a spin-off has attractions.
A good example of the use of a spin-off as an innovation strategy is the computer
graphics company Pixar. It was originally part of George Lucas’s film production
company, Lucasfilm, best known as the maker of Star Wars. It comprised some 40
people who designed computer software and hardware for use in the making of
computer graphics and animation. Lucas spun it off and sold it to Steve Jobs for $10
million (Young and Simon, 2005). As a separate entity it developed a string of
innovations including Toy Story, the world’s first full-length feature film made using
computer animation.

Divestment/demerger
Divestment is very similar to spinning off a company. There is actually very little
difference between the two. If anything, the difference is one of scale and timing.
While a spin-off is likely to occur at or near the start of the innovation process and
involve the creation of a small start-up company, divestment in contrast is likely to
take place later when the innovation has begun to show some prospect of success.
The new company that is created may well be quite large and possess quite
significant assets.
Divestment normally takes place when the parent company decides that the
major part of the business is no longer part of its core function. Hence divestment is
much more akin to ‘breaking up’ a business than in the case of a spin-off. It is
essentially a process of organisational reconfiguration in response to new opportunities. The aim is typically to free up the managerial resources necessary for the development of new technological capabilities that can be used to pursue new applications. It is both a recognition that new opportunities (i.e. innovations)
require managerial time and attention and a way of providing a better technological
focus.
A good example of divestment as an innovation strategy is provided by the
information services company, Experian. It was demerged from the retailer Great
Universal Stores, or GUS as it is better known, in 2005. Experian itself began life in
the 1970s as a small section of GUS responsible for keeping credit records of the
retailer's customers, many of whom bought goods on hire purchase. Over time this part of the business expanded and innovated, not only keeping credit records but also selling credit information to third parties. Trading as Experian, it saw both the range of its data and the range of organisations wanting access to it grow, the latter coming to include other retailers, car dealers and financial services organisations. In the process a range of new
services emerged, driven in particular by the growth of the financial services sector
and the Internet. As the services became steadily more specialised and innovative so
it eventually made sense for GUS to divest itself of Experian.

Mini Case: Vodafone


Vodafone is the world’s second largest mobile communications provider. It
has more than 400 million customers, 90,000 employees and operates in more
than 30 countries. All this has been achieved in less than 30 years. Vodafone
was created by Racal Electronics, a leading British defence electronics
company. Back in the 1980s in the days before there were mobile phones,
Racal produced specialist military radio equipment including radar,
microwave radios, avionics and electronic warfare devices as well as data
communications equipment such as modems. Its radio and microwave
expertise led it to apply for and be awarded the UK’s first private sector
cellular licence in 1983. Two years later Racal made the first cell-phone call in
the UK from St Katherine’s Dock in London to Newbury in Berkshire.
However, it was one thing to make such a call on an experimental basis
and quite another to provide a mobile phone service on a commercial basis.
But the mid-1980s saw a significant drop in Racal’s sales of IT equipment and
the company decided to take a gamble by launching a cell phone network. To
do this required a big investment in infrastructure (e.g. mobile phone masts)
that cut deep into Racal’s profits. But the UK appeared to offer a good
potential market because of the country’s small size and the density of its
population. At the time the UK was also very gadget-minded with the highest
number of personal computers and VCRs per household in Europe. So it was
that ‘Vodafone’ was born. At first the principal market for cell phones was
business users with handsets located in cars, hence the term ‘car phone’.
Before the end of the decade Racal’s new Vodafone network had 170,000
subscribers and more significantly was adding new subscribers at the rate of
15,000 a month. Very quickly Racal found a third of its profits were coming
from cell phones.
Racal’s gamble on new technology had paid off. By the start of the 1990s as
handsets became pocket-sized, the era of true mobile phones as we know
them today had begun to take off. To take advantage of the rapidly
developing mobile phone market Racal decided to demerge its mobile phone
network and in 1991 Vodafone became an independent company listed on
both the London and New York stock exchanges. The company continued to
innovate and in 1993 it introduced the UK’s first digital mobile phone service.
Meanwhile as a separate and independent entity, the company invested
heavily in developing the Vodafone brand. In 1997 it introduced its speech
mark logo and then invested heavily in sports sponsorship to improve brand
recognition, including a landmark deal to sponsor the McLaren-Mercedes
Formula One team. All the time it continued to add new services and to
expand, adding subscribers and opening new markets, until today it is among
the ten largest companies in the FTSE 100 with a market capitalisation of £50
billion.

Internal routes to innovation: innovation strategies


If the answer to the question about entering the market is affirmative then there are
a number of potential innovation strategies that can be employed to determine when
and where market entry occurs. Some of these innovation strategies, such as the first-mover/pioneer and follower/latecomer strategies, are relatively well known. Others are rather less familiar. Four such strategies are presented here:

▮ first-mover/pioneer strategy
▮ follower/latecomer strategy
▮ niche strategy
▮ derivative strategy.

The four selected innovation strategies provide an interesting contrast. There are
those such as the first-mover/pioneer and follower/latecomer strategies that relate
primarily to the timing of an innovation. They answer the question: when should
market entry occur? In contrast, the niche and derivative strategies, though equally strategic in that they involve major decisions about innovation with long-term consequences, are concerned not with when an innovation should enter the market, but rather with where (i.e. which part of the market).

First-mover/pioneer strategy
The first-mover strategy, as its name implies, is about being first to market with a
new product or service. It is the most obvious strategy and probably the most
appealing for innovation. Its intuitive appeal (Suarez and Lanzolla, 2005) lies in the
fact that most people probably picture innovation as being rather like a race, and a
first-mover, by being first to get an innovation to market, is the race winner. This
has been given renewed emphasis in recent years by the ‘dot-com’ era (Mellahi and
Johnson, 2000), where many new start-ups stressed the need to be first to market,
often at the expense of profitability.
There is no shortage of organisations that have successfully employed a first-
mover strategy for innovation. Famous examples include Coca-Cola, Xerox and
Nike. All three were first to market with their innovations establishing strong brands
that have continued to sell well and dominate their respective markets over many
years.
In fact there are a number of factors put forward as potential benefits of a first-
mover strategy. First, the first-mover, by being first, has an opportunity to establish
a technological lead, thereby becoming more familiar, more practised and more
competent as far as the technology is concerned. A headstart may enable a firm to
get further along the ‘learning curve’ (Lieberman and Montgomery, 1988), thereby
securing a cost advantage over rivals (see Figure 8.4). In fields where technology is
important this may indeed be plausible, though it is perhaps worth noting that the
ability to learn and acquire knowledge is not only a function of volume (i.e. units
produced over time). In addition the learning curve varies from industry to industry
and in some it is not significant. Some organisations simply have a greater capacity
to learn, and this may be more important than having a headstart.

Figure 8.4 First-mover advantage

Source: ‘Innovation Management: Context, strategies, systems and processes’, Ahmed and Shepherd,
Pearson Education Limited, Copyright ©Financial Times Press 2010.
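To see how a learning-curve head start might translate into a cost advantage, the sketch below applies the standard experience-curve relationship, in which unit cost falls by a fixed proportion each time cumulative output doubles. The 80 per cent learning rate and the cost and volume figures are illustrative assumptions, not data from the text or the figure.

```python
# A minimal sketch of the experience-curve effect behind first-mover
# cost advantage. All figures are hypothetical and purely illustrative.
import math

def unit_cost(first_unit_cost: float, cumulative_units: float,
              learning_rate: float = 0.8) -> float:
    """Unit cost on a classic experience curve: cost falls to `learning_rate`
    times its previous level each time cumulative output doubles."""
    b = -math.log2(learning_rate)            # progress exponent
    return first_unit_cost * cumulative_units ** (-b)

# A pioneer with 100,000 units of cumulative output versus a
# follower who has so far produced only 10,000 units.
print(f"pioneer unit cost:  {unit_cost(1000.0, 100_000):.2f}")
print(f"follower unit cost: {unit_cost(1000.0, 10_000):.2f}")
```

On these assumptions the pioneer's larger cumulative output gives it a markedly lower unit cost, though, as noted above, whether such an effect is significant varies from industry to industry.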

A second factor is linked more directly to technological leadership. Where technological advance is a function of in-house R&D, first-movers who can protect
and contain the technology, perhaps through patents or trade secrets, can deter
rivals for whom intellectual property rights form a barrier to entry (Lieberman and
Montgomery, 1988).
A third factor is the ability to acquire scarce resources, thereby pre-empting later
arrivals in the market (Lieberman and Montgomery, 1988). The scarce resources
might include locations, suppliers or distribution facilities. While the acquisition of
such resources may be important in some fields (e.g. retailing, where locations can
be critical), nonetheless it is by no means certain that there will be resources whose
acquisition is crucial for competitiveness. Fourthly, being first-to-market provides
an opportunity to build a customer base ahead of competitors. Building market
share in this way provides an opportunity to ‘lock in’ customers, who may find it
inconvenient or expensive to switch to other firms (Lieberman and Montgomery,
1988). Each of these factors represents a barrier to entry, so that one can see a first-
mover strategy as being clearly linked to creating barriers that deter would-be
competitors.
Other possible benefits to be derived from a first-mover strategy include: the scope for building brand recognition; the ability to shape consumer preferences and expectations, positioning the product in the minds of consumers and thereby helping to define standards; and the acquisition of patents and other intellectual property rights that may deter potential competitors.
Though the case for a first-mover strategy appears strong with a firm rationale
supported by significant potential benefits, in reality there are limits to its
effectiveness. Given that there are plenty of examples of first-movers who have won
the race to market, only for the innovation to ultimately fail in the marketplace, this
perhaps should not be a huge surprise. It has already been noted that there can be
big differences between industries; in some learning effects are crucial (e.g.
aerospace), while in others they are not, and the same goes for scarce resources and
standards. Suarez and Lanzolla (2005) suggest that two important factors affecting
the suitability of a first-mover strategy are the pace of technological change and the
rate at which the market is expanding. If rapid technological change is taking place,
they suggest that a first-mover advantage is unlikely because the rapid pace of
change will draw in new competitors. This is closely linked to the theory of
punctuated equilibrium which predicts that periods of relative stability will be
broken by technological breakthroughs that lead to disequilibrium with many
competing designs. The Osborne 1 portable computer provides a good example.
Osborne was the first company to produce and market a portable computer.
However, it weighed 24 lbs and this highly innovative computer was quickly
superseded by much lighter models as laptop technology rapidly evolved. The same
logic is likely to apply with rapid market changes which provide potential
competitors with an opportunity to enter the market.
Thus while a first-mover strategy has a number of potential benefits, whether or
not they are realised is far from certain and is contingent upon a range of contextual
factors. Hence, would-be innovators have to be well aware of the context within
which they are innovating if a first-mover strategy is to be successful.

Mini Case: eBay


eBay, the online auction website, was founded almost by accident in 1995 by Pierre Omidyar, a 28-year-old French-born Iranian American. At the time Omidyar had never attended an auction and had little idea how they worked. A computer programmer by profession, Omidyar had previously worked for Claris, an Apple subsidiary. Working out of the spare bedroom of his town house in San José, Omidyar wrote the initial code for the online auction site, which he named AuctionWeb, over a weekend. It was initially located on his own
personal website. It comprised no more than a listing of a single item. But
when that item began to attract bidders Omidyar realised he was on to
something. Even so the commercial opportunities were not apparent until his
Internet provider informed him he would have to upgrade to a business
account because of the sheer volume of traffic his site was attracting.
With the fee increased from $30 a month to $250 a month Omidyar made
the decision to start charging for the service, something that users did not
object to. The income allowed Omidyar to recruit his first employee.
Meanwhile business boomed. There were some 250,000 auction transactions in
1996. But in the first month of 1997 there were some 2 million.
With this sort of growth and the dot-com bubble getting underway, the
business attracted $6.7 million in venture capital from Benchmark Capital. At
this point, Omidyar decided to change the name of the company and register a
new domain name. He tried to register the domain name echobay.com after
his consulting company which was called Echo Bay Technology Group, but
this name was already taken so instead he opted for an abbreviated form:
eBay.com, thus eBay as we know it today was born.
Exponential growth continued. By 1998 eBay had attracted half a million
users, had 30 employees and had revenues of $4.7 million. At this point, with
the dot-com boom at its height, Omidyar decided to take the company public
with an initial public offering (IPO). The shares, which were priced at $18,
reached $53.50 on the first day of trading, making Omidyar an instant
billionaire. With a strong brand, eBay continued to thrive, despite the
downturn caused by the bursting of the dot-com bubble in 2000 that saw many
technology stocks crash. By 2008 worldwide revenues had reached $7.7
billion, and there were 15,000 employees on the payroll.
Today eBay continues to be the world's top online auction site, despite the appearance of rivals. Thanks to its early start, eBay has easily eclipsed other upstart web-based auction sites. These include Amazon and Webstore, along
with eBid and Online Auction. It has been a case of popularity breeds
popularity. Being much the biggest online auction site, buyers know they are
more likely to find what they want while sellers know they are more likely to
get the best price. In the process eBay has revolutionised person-to-person
trading, which has traditionally been conducted through things like garage
sales, car boot sales and flea markets.
Source: Cohen (2002).

Follower/latecomer strategy
Variously described as a follower or latecomer or sometimes even an imitator
strategy, this involves taking a ‘wait-and-see’ approach, rather than perceiving
innovation as a race in which being first to market is critical. The idea is to
deliberately hold back when a discontinuity occurs and technological advances mean that an innovation is imminent, in order to see how the market responds and how the technology evolves. When it becomes clear that there is a high level
of consumer acceptance in the market or the number of competing designs begins to
show signs of diminishing, then and only then does the latecomer enter the market.
Clearly it is not without risk as there is always the possibility of being completely
left behind and as a result shut out of the market. However, the risks may not be as
great as some imagine and may well be counter-balanced by advantages derived
from learning from the mistakes others have made.
Latecomer advantages can be derived in a variety of ways. The free rider effect
(Cho et al., 1998) is where a latecomer is able to utilise the benefit of investments
made by pioneer firms as they entered the market earlier. These investments might
include educating consumers to promote market acceptance, providing infrastructure to promote ease of use of or access to the innovation, or gaining regulatory approval, in each case with the intention of supporting the innovation. If the benefit from these investments cannot be contained or limited,
then there is the very real prospect that firms other than the pioneer will use or
easily copy these facilities. Hence latecomers may actually gain advantage from the
work of pioneers. Information spill-over effects are very similar. They arise where
the diffusion of technologies over time results in reduced research and development
(R&D) costs for latecomers. Over time pioneer firms may find it difficult to contain
in-house the knowledge and expertise that develops from working with a new
technology. As it spills out into the public domain, so latecomers can access it,
without having to undertake the underpinning R&D expenditure (Teece, 1986). In
terms of Porter’s (1980) generic strategies this (and the free rider effect) will place
latecomers/followers at a cost advantage relative to pioneers.
Closely related are so-called learning effects, where followers, like Amazon and
Google for example, are able not just to learn from the mistakes and failures of
others (Schnaars, 1994), but are better placed to understand the key features of a
fast developing technology. The level of uncertainty is also likely to be a factor here.
When the discontinuity is great, so too will be the level of uncertainty. However, as
the problems associated with the technology become more widely known and
uncertainty reduces, so the scope for learning becomes much greater.
Other potential benefits that can accrue to follower/latecomers include a better
understanding of customer requirements (Shankar et al., 1998), avoiding
unnecessary R&D and the provision of complementary assets. The first of these
arises where consumer requirements are initially unclear. Over time these
requirements are likely to be established, something that a follower can capitalise
on. Where R&D is concerned, then the advantage that a follower has is being able to
avoid committing research to technological paths that will not lead to successful
innovations. Followers can simply avoid 'duds', something that pioneers, facing a higher degree of uncertainty, find harder to do. Finally there is the matter of
complementary assets. Where innovation takes place in a market where services
such as marketing, manufacturing capability or after-sales service are important to
consumers, latecomers may have an advantage over pioneers by virtue of having
had more time to develop such complementary assets (Teece, 1986). This puts a
pioneer at a disadvantage in relation to a follower.
If these sorts of factors are in evidence, then the follower/latecomer strategy may
well prove the most appropriate. Despite the apparent attractions of being first,
when it comes to innovation, pioneering may well have its limitations. What matters
is being able to judge when these conditions are likely to apply.

Mini Case: Google


Google has become a household name. Indeed as one commentator (Vise,
2005) put it, ‘Not since Gutenberg invented the modern printing press more
than 500 years ago, making books and scientific tomes affordable and widely
available to the masses, has any new invention empowered individuals, and
transformed access to information, as profoundly as Google’. It has become
part of the fabric of daily life as millions use it every day to access information by 'googling' it on a computer or, increasingly, on a mobile phone. It has become synonymous not merely with search engines but with the Internet itself.
And yet Google was by no means the first search engine.
The grandfather of all search engines was a thing called Archie created
back in 1990. Created by Alan Emtage, Bill Heelan and Peter Deutsch,
computer science students at McGill University, it provided listings of files
located on public anonymous File Transfer Protocol sites, creating a
searchable database of file names. However, it was not until 1994 that
anything that we might recognise as a search engine appeared. One of the
first was Webcrawler, which for the first time allowed users to search for any
word in any webpage. It was not only the first search engine to be used by
members of the public rather than computer science specialists, it set the
standard for what a search engine should be able to do. In the same year a
number of other search engines appeared that were to become widely known
and widely used by the public. They included Lycos, which originated at
Carnegie-Mellon University and was one of the first to become a commercial
endeavour. Other popular search engines that appeared around this time included
AltaVista and Yahoo, the latter created by David Filo and Jerry Yang while
graduate students at Stanford University. By the end of 1994 Yahoo had
achieved more than one million hits and Filo and Yang began to realise
Yahoo’s commercial potential. Incorporated early in 1995 it gained venture
capital funding from Sequoia Capital, before making an IPO early in the
following year. Yahoo was one of the stars of the dot-com bubble of the late
1990s.
Meanwhile back at Stanford University graduate students Larry Page and
Sergei Brin, having followed developments in search engine technology, were
beginning to turn their attention to the design of search engines. In 1996
they developed BackRub, a search engine that for the first time utilised
backward links. In 1998 BackRub, renamed as Google, was launched. Its
unique design ranked web pages according to the number of links to them, a
feature that according to PC Magazine meant it produced extremely relevant
results. This feature and its ease of use saw Google rapidly rise to prominence
and by 2000 it was widely used. By 2014 it was by far the most widely used
search engine with almost 70 per cent of the search engine market.

Niche strategy
One of the difficulties that innovations, particularly those based on a new
technology, often face is that initially they are uncompetitive compared to existing
products, in terms of cost and sometimes even overall performance. The technology
S-curve outlined in Chapter 3 shows how in the early years of a new technology it
may actually deliver poorer performance than an existing technology on a well-
established if maturing S-curve. Geels (2002) notes how the first steamships provide
a classic example of this phenomenon. In terms of overall performance, wind-
powered sailing ships, such as the great tea and wool clippers like the Cutty Sark,
were for many years more efficient than steamships. The latter were hampered by
the fact that they used a lot of fuel (i.e. coal) which severely restricted the amount of
cargo they could carry. In situations like this, despite enthusiasm for a new
technology on the part of the technologically minded, there is no particular reason
for consumers to purchase an innovation that utilises a new technology, other than
perhaps the novelty factor. Established technologies may well exert what Dosi
(1982) refers to as a powerful ‘exclusion effect’. Consequently, a new technology
may not be readily accepted by consumers. New technologies often struggle against
well-established technological regimes and find it difficult to break into the market
and win consumer acceptance.
Hence with new technologies, especially those that are radically different from
what has gone before, when it comes to selecting an appropriate innovation
strategy, it may make sense not to mount a full-scale assault on the marketplace.
Instead, innovators may prefer to seek out narrowly defined market segments which
form niche markets where, as Kemp et al. (1998: 187) point out, ‘the advantages of
the technology are highly valued’. According to Geels (2002) a niche strategy of this
type offers a number of potential benefits:
▮ protection from full-scale competition
▮ the provision of opportunities for learning
▮ the provision of space to build supporting networks
▮ stimulating further development in terms of volume production and cost
effectiveness.
The first of these benefits is directly linked to the notion of avoiding mainstream
markets, in order to sidestep competition from large and well-established
competitors, something that is likely to be particularly important if the innovator is a
relatively small and inexperienced newcomer. Opportunities for learning refer to
scope for improved and enhanced knowledge and understanding of a new
technology. This is likely to take time and be cumulative since this sort of
knowledge is likely to be tacit and derived from experience. Hence a niche strategy
can provide the time for this sort of knowledge to be accumulated. Building
networks refers to the creation of what Kemp et al. (1998: 186) describe as a
‘constituency’, comprising a range of actors likely to be associated with supporting a
new technology. These actors may well include material and component suppliers,
customers, providers of research and testing services, technical experts and
providers of product support services. Further development, on the other hand,
refers to improving and enhancing a technology so that it works more reliably and
more effectively. According to Kemp et al. (1998: 186) this is one of the principal
benefits of an innovator pursuing a niche strategy since niches form ‘protected
spaces’, where ‘promising technologies’ can be developed into applications that
deliver the kind of services that offer real value to consumers.
The central idea behind the niche strategy is achieving market entry via one or
more small niches in the market. The rationale behind this is that in market niches
there may be groups of consumers with particular needs which are not being met, or
not being met very well, in the main market. Since innovations, particularly radical
innovations involving new technologies, often have new attributes such as
mobility/portability, reduced size, lower power consumption or greater efficiency,
innovators can use these attributes to differentiate the product and appeal to groups
of consumers in market niches. With the prospect of their needs now being rather
better catered for, the innovation may create value for them and they may be willing
to purchase the new product. Targeting groups of consumers in this way provides an
opportunity for the innovator to establish a bridgehead in specific market niches. In
this way the innovator gains market presence. Then as the technology matures and
costs come down and overall performance improves the innovation can be extended
to the main market.
The idea of employing a niche strategy as an innovation strategy is far from new.
It has been used many times, especially when the new technology represents a new
S-curve. The move to a new technology and a new S-curve may hold out the
prospect of significantly better performance. However, this improvement may only come in the long term, with the performance of the new technology in the short term actually being inferior. This is because the new technology is still at the bottom of its S-curve, where it sits below the S-curve of the old technology.
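The point at which a new technology overtakes the old one can be illustrated with a simple logistic model of the two S-curves. The ceilings, growth rates and timings below are illustrative assumptions chosen so that the new technology starts below the old one and only overtakes it later; they do not describe any particular technology.

```python
# A minimal sketch of two technology S-curves modelled as logistic functions.
# All parameters are hypothetical and purely illustrative.
import math

def s_curve(t: float, ceiling: float, growth: float, midpoint: float) -> float:
    """Performance of a technology at time t on a logistic S-curve."""
    return ceiling / (1 + math.exp(-growth * (t - midpoint)))

for t in range(0, 31, 5):
    old = s_curve(t, ceiling=100, growth=0.3, midpoint=5)    # mature technology
    new = s_curve(t, ceiling=250, growth=0.3, midpoint=20)   # emerging technology
    leader = "new" if new > old else "old"
    print(f"t={t:2d}  old={old:6.1f}  new={new:6.1f}  leader: {leader}")
```

In this toy model the emerging technology only overtakes the mature one well into the simulated period, which is precisely the interval during which a niche strategy can provide shelter.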
In this kind of situation it makes sense to follow a niche strategy. This is precisely
what happened when the new technology of steamships was introduced in the
nineteenth century. As we have already seen, when steam-powered ships first
appeared they could not compete in economic terms with existing sailing ships
because their cargo-carrying capacity was severely limited by the need to carry
large supplies of fuel. Faced with this, those building the new steamships followed a
niche strategy. The niche they targeted was ‘mail packets’, ships employed to carry
the mails. This was a service so important in the days before electronic communication
that most governments subsidised it. Not only that, mail represented a high value
cargo that took up little space. In addition, reliability was a key feature of the
service. This was an area where sailing ships found it hard to compete because they
were very dependent on the weather. Consequently, the first steamships were
relatively small mail packets employed to carry mails from one country to another.
Over time as the technology improved and complementary assets (e.g. coaling
stations for refuelling) became available so steamships moved into other niche
markets until eventually they took over the main market.
This is by no means an isolated example. The first hydraulic excavators could not
compete on economic terms with existing cable-operated excavators (Christensen,
1997). The first jet aircraft could not compete with existing piston-engined aircraft
because they used far too much fuel. In both cases it was obvious that the existing
technology had matured and was at the top of the S-curve with only limited scope
for further development. But despite this, at the point where the new technology
came on to the market, it was actually uncompetitive and in both cases the
innovators opted for a niche strategy when it came to selecting an appropriate
innovation strategy. In the case of hydraulic excavators the niche was housebuilders
and public utilities, and in the case of the jet it was military applications, especially fighter aircraft.
As we will see in Chapter 13, green innovations, such as electric vehicles, are
currently fighting very similar battles because the new technologies that they rely
on are relatively immature and require more development to make them fully
competitive. And under these circumstances many green innovators are using niche
strategies.

Mini Case: JCB


When it comes to digging holes, bright yellow excavators bearing the logo
‘JCB’ are a familiar sight. The letters JCB are the initials of the man who
developed the first hydraulic excavator (Christensen, 1997) back in the early
1950s, one Joseph Cyril Bamford, the founder of J C Bamford Excavators Ltd.
Described as a ‘backhoe’ because the first excavators were merely
attachments that fitted on the back of tractors, these machines were relatively
crude and offered only limited performance. Because the technology of
hydraulics was relatively new, and the seals and pumps available were
relatively weak, the power of these early hydraulic excavators was limited.
Compared to existing excavators on the market, which relied on cable technology to operate the excavating bucket, these early backhoe excavators were small (the bucket had a capacity of a mere ¼ cubic yard) and had only a very limited reach (180 degrees compared to 360 degrees on a traditional cable-operated excavator).
The new hydraulic excavators were simply not powerful enough to match
existing cable-operated excavators when it came to the primary function of
shifting earth. Customers for the cable-operated excavators then on the
market were mining, general excavation or sewer contractors and they
demanded a bucket size of between one and four cubic yards. However,
though they performed relatively poorly in terms of their capacity to move
earth, the new hydraulic excavators were more versatile and much more
manoeuvrable. Faced with this situation, JCB targeted new markets
(Christensen, 1997) where potential customers might appreciate the smaller
size and greater manoeuvrability of the backhoe excavator. These markets
included housebuilders, local authorities and utility companies. They had
previously not used excavators. Because most of their excavation work was
on a small scale, it had in the past been completed by manual labour, quite
literally men with shovels and wheelbarrows. JCB successfully gained a
foothold in this niche of the market. Thus the early users of hydraulic
excavators were very different from the mainstream customers of the cable-
operated excavator manufacturers (Christensen, 1997). Then, having
demonstrated the effectiveness of the new hydraulic technology and as the
technology itself improved, JCB were able to enter the mainstream market, which in time hydraulic excavators came to dominate.
JCB, which had begun life in a lock-up garage in Uttoxeter in rural
Staffordshire, became the world’s fifth largest manufacturer of construction
equipment.
Source: Christensen (1997).

Derivative strategy
A derivative strategy essentially involves applying a new technology to an existing
product to create a new product. The original product will already have a presence
in the market and the derivative strategy aims to capitalise on this existing market
position, in order to gain market entry for the ‘new’ product. Hence a derivative
strategy is something of a hybrid strategy. Clearly it is not a strategy that can be
used by new firms, as there has to be an existing product and it has to already be
positioned in the market. However, for established products with a reputation it can
be an attractive strategy. Keeble (1997) notes that the exploitation of new
technologies is not confined to new-product development. It can also occur through
what Rothwell and Gardiner (1989a) describe as ‘re-innovation’. Rothwell and
Gardiner note that in many industry sectors there are relatively few completely new
products entering the market. Instead, one often finds a large number of what they
(Rothwell and Gardiner, 1989a) describe as ‘post-launch improvements’.
In some cases these improvements extend to a complete redesign of the product.
Rothwell and Gardiner (1989b) give the example of the development of a heat gun
for paint-stripping by the consumer products manufacturer Black & Decker. In this
instance the product was new and Black & Decker were uncertain whether it would be accepted by consumers. Given the uncertainty, they utilised the outer casing, motor, fan and switch from an existing product, an electric drill, thereby
substantially lowering the development cost and the scale of the investment in
innovation. This was an example of a derivative strategy being used, though in this
case the technology was not particularly new.
Smith and Rogers (2004) show how in the aerospace industry manufacturers
often use derivative strategies by adding a new technology to an existing product.
Airframe manufacturers such as Boeing, for instance, will fit a new engine to an
airliner in order to improve its performance. Smith and Rogers (2004) cite the case
of Rolls-Royce’s Tay engine. This was a derivative of the company’s well-established
Spey engine which had been in service for some 20 years. By adding a new fan
utilising the advanced, wide-chord fan technology of the company’s much larger
RB211-535 engine, Rolls-Royce was able to develop an engine that was substantially
more fuel efficient, quieter, lighter and more reliable. The use of derivative strategy
in this way greatly reduced the development cost and the time taken to get the new
engine into service. In particular, the certification of the engine was simpler and
quicker because major systems from the old engine were used on the new one and
these had already been certificated years earlier. Not only that, the Spey engine
already had an established customer base, and airframe manufacturers such as the
American business jet manufacturer, Gulfstream, were keen to use what they saw as
a product with which they had some familiarity, though now with much improved
performance characteristics.

Case Study: Route 76


It may not be quite Route 66, running across the USA from Chicago, Illinois to
Los Angeles, California and made famous in song by Nat King Cole, Chuck
Berry and the Rolling Stones among others, but London’s route 76 is one of the
world’s busiest bus routes. Every six minutes a red double-decker bus makes
the 40-minute trip from Waterloo Station just south of the river Thames to
Tottenham in north London. Though they look like any other London bus, in
fact the buses on route 76 are different. They are quieter, less polluting, more
fuel efficient and smoother than other buses in London. How come? Well the
26 buses that service route 76 are special. They are all Volvo hybrid-electric
buses.
Volvo, based in Sweden, is Europe's second largest commercial vehicle
(truck and bus) manufacturer (Sushandoyo and Magnusson, 2014). Back in
the early 2000s Volvo signalled its ambition to take a lead in the development
of hybrid-electric powered heavy commercial vehicles, by starting research
into hybrid-electric technologies. Given that the commercial vehicle market is
long established and one that has been firmly committed to the incumbent
technology of the diesel engine for a very long time, this was a bold decision,
not least because of scepticism from some of its own engineers. However,
Volvo was a big company producing more than 250,000 trucks and buses a
year, making it one of the world’s largest producers of heavy vehicles.
Although Toyota had successfully introduced a hybrid electric vehicle into
the car market through its innovative Prius model, breaking into the market
for heavy commercial vehicles was a tough call. On the production side, Volvo
faced a high level of spending on R&D, the acquisition of much new
knowledge that had to be integrated with existing knowledge, a long payback
period and the risk of making a premature choice of technology (Sushandoyo
and Magnusson, 2014). In the market, diesel technology was well established,
and operating in a commercial environment vehicles had to be completely
reliable and familiar to the staff who used them. Not surprisingly, operators
were generally reluctant to risk new and untried technologies, given the need
to maintain a reliable and efficient service.
Against this background Volvo pressed ahead in developing a new kind of
powertrain for heavy commercial vehicles. Whereas conventional automotive
powertrains rely upon an internal combustion engine as the sole source of
power, a hybrid-electric powertrain utilises an internal combustion engine,
one or more electric motors and some form of battery. Hybrid technology
allows kinetic energy from braking to be converted into electric power that is
then stored until required by the electric motor. Vehicles using this kind of
technology offer a number of benefits including improved fuel efficiency,
lower emissions and a smoother ride. Hybrid technologies can follow one of two possible models: either a series configuration, where the vehicle is driven solely by the electric motor, with the battery topped up by a conventional diesel engine and by kinetic energy from braking, or a parallel configuration, where the vehicle can be driven by the electric motor, the diesel engine or a combination of the two.
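The distinction between the two configurations can be summarised in a very simplified sketch of which power sources can drive the wheels. It is an illustrative abstraction rather than an engineering model, and the class and method names are invented for the example.

```python
# A highly simplified sketch contrasting series and parallel hybrid
# configurations as described above. Purely illustrative, not an
# engineering model.
from dataclasses import dataclass

@dataclass
class HybridPowertrain:
    configuration: str  # "series" or "parallel"

    def wheel_drive_sources(self) -> list:
        if self.configuration == "series":
            # Only the electric motor drives the wheels; the diesel engine
            # acts as a generator topping up the battery, which is also
            # charged by energy recovered from braking.
            return ["electric motor"]
        # Parallel: the electric motor, the diesel engine or both
        # together can drive the wheels.
        return ["electric motor", "diesel engine"]

print(HybridPowertrain("series").wheel_drive_sources())
print(HybridPowertrain("parallel").wheel_drive_sources())
```

On this simplified view, a series configuration has a single electrical drive path, which helps to explain the sensitivity to electrical failure seen in the trials described below.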
Volvo was one of several automotive firms to explore hybrid technology in the early 2000s. Its early efforts included prototype hybrid trucks and buses based on a series configuration, developed as part of Sweden's Strategic Vehicle Research and Innovation programme. However, trials in Gothenburg were
disappointing. Although the prototypes exhibited low emissions they were not
as fuel efficient as the company had hoped. Particularly significant was the
vehicles’ poor reliability. Volvo’s engineers found that hybrid-electric systems
using a series configuration were very sensitive to failure. If any of the
electrical components failed for any reason the vehicle came to a complete
standstill. This poor performance resulted in a lack of interest on the part of
potential operators.
In the light of these setbacks Volvo managers initiated an advanced
engineering programme in 2002 designed to investigate the relative merits of
the two competing (i.e. series and parallel) hybrid configurations
(Zolfagharifard, 2010). This found that the parallel hybrid configuration was
the more versatile (see Table 8.1), although it was the more technologically
complex because of the need to integrate diesel and electric technologies. As a result, Volvo opted to develop parallel hybrid technology and in 2006
CEO Leif Johannson announced that the company planned to launch a hybrid
truck on the market by the end of the decade.
Table 8.1 Feasibility of hybrid configurations in heavy vehicle applications
Source: Sushandoyo and Magnusson (2014).

Having developed appropriate technology, Volvo was confronted with a
choice when it came to the latter stages of the innovation process. Of the
250,000 heavy vehicles it produced each year, the vast majority (some 238,391
in 2011) were trucks, while barely 12,000 were buses. However, rather than try
to break into the mainstream truck market, Volvo chose instead to utilise the
new technology in bus applications, in particular the citybus market segment.
Although this represented a relatively small specialist segment of the heavy
commercial vehicle market, it was felt that this was where the environmental
benefits of hybrid-electric technology were likely to be particularly apparent.
The biggest city bus fleet in Europe is in London, comprising some 8,500
buses that cover 700 bus routes and 19,500 bus stops. As the biggest potential
customer, London was an obvious choice when it came to trying out Volvo's
new hybrid citybus. In addition, London’s Mayor was keen to improve air
quality in the city, which earlier reports had identified as problematic.
Consequently, when Transport for London (TfL) announced plans to
evaluate a number of alternative-fuel vehicles for bus operation in the city,
Volvo agreed to participate in a large-scale field trial along with BAE Systems,
Enova, Siemens and Allison. Volvo provided six prototype citybuses. These
were the only vehicles to utilise a parallel configuration for their hybrid-
electric technology. As part of the trials, Volvo provided specially trained
technicians to maintain the vehicles and there were also regular visits from a
technical team from Volvo’s main R&D facility in Sweden. Some buses were
even recalled to Sweden from time to time for detailed technical evaluation. In
addition Volvo managers met with operating staff every three months to
review progress. Analysis of how the new buses had performed showed that
they had achieved a 50 per cent reduction in emissions and fuel savings
amounting to 34 per cent. What was particularly impressive to operators like
Arriva was their excellent level of reliability (one of the main problems in
the earlier trials in Gothenburg), with the new buses achieving 98 per cent
availability, better than any competitor (Volvo, 2011). In addition, passenger
surveys were positive, with passengers reporting that the new vehicles were
quieter and smoother.
Based on the positive results of these field trials, Volvo initiated series
production of the new hybrid bus in 2010. Arriva, one of five major citybus
operators in London, ordered more than 50 of the new hybrid-electric buses
from Volvo, and a total of 26 of them now operate on route 76. In fact, route
76 has proved something of a trailblazer: having proven its parallel
hybrid-electric technology in London, Volvo has seen orders for its new hybrid
flood in from all over the world, with more than 200 received in the first
year alone. Spain, Finland, Mexico, Germany and Brazil are just some of the
countries placing orders.
In 2012 Volvo announced that it would cease production of low-floor
inner-city buses (the largest city bus segment in Europe) with conventional
powertrains, offering only hybrid-electric versions from January 2014.
Source: Sushandoyo and Magnusson (2014).

Questions
1 What was previously the dominant design in terms of powertrain
technology for heavy commercial vehicles?
2 Why were operators of heavy commercial vehicles likely to be wary of
switching to a new technology?
3 Which was the main market and which the niche market?
4 What was Volvo’s innovation strategy for its hybrid-electric technology?
5 Why did Volvo utilise the innovation strategy identified in question 4?
6 How effective was the innovation strategy identified in question 4?
7 What does the case tell us about the problems that innovators face when
it comes to winning over customers to a new technology?
8 What were the main benefits of Volvo’s hybrid-electric technology?
9 Who benefited from TfL’s decision to use Volvo’s hybrid buses?
10 What does this case tell us about the problems facing green innovations?

Questions for discussion


1 What are strategic decisions?
2 Give examples of strategic decisions to do with innovation.
3 What is an innovation strategy?
4 Differentiate between first-mover and follower strategies.
5 Why do first-movers quite often fail?
6 What advantages, if any, do followers gain over first-movers?
7 What is a niche strategy and what advantages does it offer the would-be
innovator?
8 Why might a firm want to take an external route in order to develop a
technology or idea it has pioneered?
9 Why did many dot-com businesses adopt a first-mover strategy?
10 What risks does a firm expose itself to when adopting a follower/latecomer
strategy?

Exercises
1 Prepare a presentation outlining the case for an innovation strategy of your
choice. Assume that the presentation is being made either to the board of
directors of the company that has developed a new technology or to members
of the financial community who are planning to invest in the company.
2 Prepare a report that compares and contrasts the relative merits of first-
mover/pioneer and follower/latecomer innovation strategies.
3 Why do innovations fail? Choose an example of an unsuccessful innovation, and
write a report showing how the innovation strategy contributed to the failure of
the innovation.
4 As an employee of a company that has developed and patented a new
technology, prepare a report for the company’s senior management explaining
why external routes can be an effective way of exploiting new technologies.
Pay particular attention to the circumstances where external routes may be
appropriate and the benefits and pitfalls associated with them.

Further reading
1 Grant, R. M. (2008) Contemporary Strategy Analysis, 6th edn, Blackwell,
Oxford.
The field of strategy is extremely well catered for when it comes to texts,
but most place comparatively little emphasis on innovation. One of the
exceptions is Grant (2008). This text not only provides excellent coverage of
strategy in all its forms, it also provides detailed coverage of innovation
strategy. A whole chapter is devoted to the management of innovation,
incorporating a section on strategies to exploit innovation that outlines both
external routes to innovation and specific innovation strategies. It also
includes plenty of examples.
2 Chesbrough, H. W. (2006a) Open Innovation: The New Imperative for
Creating and Profiting from Technology, Harvard Business School Press,
Boston, MA.
Although this is not a book about strategy it does provide a very useful
perspective on external routes to innovation. A key element of Chesbrough’s
thesis is that recent changes have led to much more open forms of innovation.
External routes are a key element within this and are covered in considerable
detail.
3 Lieberman, M. B. and Montgomery, D. B. (1988) ‘First-Mover Advantages’,
Strategic Management Journal, 9, pp. 41–58.
Although now more than a quarter of a century old, this remains one of the defining papers on
innovation strategy. It focuses on first movers, explaining in detail the potential
benefits to be gained from this type of strategy. At the same time, because it
outlines the potential disadvantages attached to being a first mover, it also
provides a