THE ASSESSMENT PROCESS
Welcome! In this session we will talk about how we go about DevOps assessments. There are
4 assessment patterns – assessment of a single application and/or database stack, for
example Java with Oracle; assessment of specific application portfolios having multiple
stacks; assessment of production support and maintenance, or IT service management cycles;
and assessment of enterprise IT or a group of, say, business portfolios. The last case of
enterprise IT or business portfolios may start with prioritizing where DevOps assessments are
required, and then leads into one or more of the first three specific cases.
A recommended team mix is a minimum of two consultants for any assessment: one process
consultant, preferably conversant with people management, and one technical architect or
consultant. For multiple portfolios, more such consultants may be required based on the
assessment timelines. As a pre-requisite, get any earlier assessment reports and IT strategy
or plan documents the customer may already have. If an earlier assessment was done for the
same portfolios, ask why it was not sufficient. A pre-assessment may be done to get a
high-level understanding of the portfolios' criticality and DevOps readiness.
We have a pre-assessment questionnaire – we lovingly call it PAQ – with which you can
generate such a report. Identify the customer and vendor stakeholders who will participate
early on, before you start the assessment. Members may include representation from Dev, QA,
Ops, IT security, Agile or DevOps champions, architects and business. Ask the business what
current pain points they need IT to resolve.
Let us quickly look at the patterns in detail. Assessment of a single application stack
typically takes 2-4 weeks with 2 consultants. Both consultants work together on analysis
and report preparation.
Application portfolios with multiple stacks may take 4-6 weeks with, say, 4-6 consultants,
based on the variability of IT processes across stacks. Two important aspects need to be
considered: A. inter-stack process dependencies during analysis, and B. the objective of
arriving at unified target architectures across the stacks as far as possible.
IT service management assessment may cover process flows for incident, problem and change
management, include aspects of security or SRE, and cover the relevant monitoring metrics
and dashboards for DevOps with role-based access.
For enterprise IT or business portfolio assessments, you may start with a pre-assessment
followed by creating a heatmap of where the individual IT portfolios stand. You may further
draw up a criticality versus complexity matrix of the portfolios. This helps in prioritizing
which portfolios to pick up for further detailed assessments. In addition to the aspects that
matter for multiple-portfolio assessments, you may also want to give a business versus IT
metrics view, depicting how well IT maps to business portfolios and how well business
leverages IT.
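To make the matrix idea concrete, here is a minimal sketch of bucketing portfolios by criticality and complexity for prioritization. The portfolio names and ratings are illustrative, not taken from any actual assessment.

```python
# Hypothetical sketch: bucketing portfolios into a criticality vs complexity
# matrix for prioritization. Names and ratings are illustrative.
portfolios = [
    {"name": "Payments",  "criticality": "high",   "complexity": "low"},
    {"name": "CRM",       "criticality": "high",   "complexity": "high"},
    {"name": "Reporting", "criticality": "medium", "complexity": "low"},
    {"name": "Archival",  "criticality": "low",    "complexity": "high"},
]

# High-criticality, low-complexity portfolios are natural first picks for
# detailed assessment; high/high ones need planning; low/low can wait.
matrix = {}
for p in portfolios:
    matrix.setdefault((p["criticality"], p["complexity"]), []).append(p["name"])

for cell in [("high", "low"), ("high", "high"), ("medium", "low"),
             ("medium", "high"), ("low", "low"), ("low", "high")]:
    names = matrix.get(cell, [])
    print(f"criticality={cell[0]:<6} complexity={cell[1]:<5} -> {names}")
```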
ASSESSMENT PRESENTATION AND WORKSHOP
Welcome! Coming to how an assessment report should be presented: always get the
assessment sponsor into any assessment presentation – it may be the CIO, the chief architect,
the engineering or transformation lead, or the head of Dev, QA or Ops. In addition, get the
primary representation from the Dev, QA and Ops teams. Start with a brief explanation of the
methodology used. Then briefly talk about the scope – the portfolios or stacks covered – and
the team with whom you worked on data capture, and how. When you present the report,
articulate the key benefits first, along with data points or metrics. Then talk about the key
takeaways – what works and what can improve. Based on the stakeholder profiles, you may
delve deeper into the analysis, the architecture and other details of the outcomes.
We come across customers who would like to see how an assessment works and evaluate the
value it provides before getting into an actual assessment. We recommend a quick 4-hour
workshop to get started. Get any pre-assessment report done earlier as a reference. Keep a
whiteboard, markers, art paper and post-its handy. Now, Step 1 – probe the customer and
draw up a quick chart of the overall portfolio, including the application stacks, databases,
infrastructure and teams. Step 2 – pick one stack and draw the entire process network
diagram for that stack – not very detailed, but containing the IT activities in boxes connected
by arrows across the process. Probe and involve the customer while you draw it. Identify
the gaps and compute the overall cycle time. Step 3 – prioritize the gaps and how they can be
resolved, along with a 30-60-90 day plan. Check if there is a cycle time reduction as gaps are
closed. Validate against the pre-assessment report and get concurrence from the customer.
Finally, present the plan with key benefits, including IT cycle time reduction.
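To make the workshop arithmetic concrete, here is a minimal sketch, assuming a simple sequential process for one stack: cycle time as the sum of task times, and the reduction when a gap is closed. Task names and hours are illustrative.

```python
# A minimal sketch of the workshop arithmetic: overall cycle time for one
# stack as the sum of task times, and the reduction when a gap is closed.
# Task names and times are illustrative.
tasks = {
    "code review":       {"hours": 8,  "gap": False},
    "manual regression": {"hours": 24, "gap": True},   # gap: not automated
    "env provisioning":  {"hours": 16, "gap": True},   # gap: manual hand-off
    "deployment":        {"hours": 4,  "gap": False},
}

def cycle_time(tasks):
    return sum(t["hours"] for t in tasks.values())

print("current cycle time:", cycle_time(tasks), "hours")

# Closing a gap, e.g. automating regression, might cut its time sharply.
tasks["manual regression"]["hours"] = 4
print("after closing one gap:", cycle_time(tasks), "hours")
```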
PRE-ASSESSMENT QUESTIONNAIRE or PAQ
Welcome! In this session we will cover the pre-assessment questionnaire, or PAQ, which
enables us to generate a high-level DevOps readiness assessment for a given organization's or
portfolio's IT. Such a pre-assessment helps you understand how mature and well equipped
IT is on its DevOps journey. It also sets the stage for prioritizing further detailed assessments.
Let us look at the PAQ in a little more detail. At the top are brief instructions given in
expandable rows. You may take a look at them before you start. Apart from the brief portfolio
details that you need to fill in at the beginning, there are 4 categories of questions in the
questionnaire – scope and direction, practices, technology and metrics. There are 20
questions in total spanning the categories. Each question has one or more sub-questions
given across the columns. All responses are selected from given drop-down lists.
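As an aid to picturing that structure, here is a hypothetical sketch of how one PAQ entry could be modelled in code. The category names follow this session; the question text, sub-questions and drop-down options are illustrative, not the actual PAQ content.

```python
# Hypothetical model of a PAQ entry: 4 categories, each question carrying
# sub-questions answered from fixed drop-down options. Illustrative only.
from dataclasses import dataclass, field

CATEGORIES = ["scope and direction", "practices", "technology", "metrics"]

@dataclass
class PAQQuestion:
    category: str                # one of CATEGORIES
    text: str                    # the main question
    sub_questions: list[str]     # one or more, given across the columns
    options: list[str] = field(default_factory=lambda: ["yes", "partial", "no"])

q = PAQQuestion(
    category="practices",
    text="Is code development test driven?",
    sub_questions=["unit tests written first?", "coverage tracked?"],
)
assert q.category in CATEGORIES
print(q)
```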
Under scope and direction – marked yellow – we start with the business criticality of the
portfolio and the overall impact due to IT. This helps in prioritizing the portfolio for further
assessment. The next question is on the technology stack in terms of applications, databases
and environments. This is followed by IT pain areas across parameters such as cycle time,
maintainability, security, etc., which helps set the focus on what to improve. The next
question is on the organizational role driving DevOps initiatives. This is followed by a question
on the operating structure – essentially role-wise team organization. The last question is on
the process and architectural directions that shape the portfolio's IT landscape.
Coming to the questions under practices – marked green – we cover the engineering practices,
namely requirements, code and data, quality, deployment, environment, security and
reliability, then IT service management areas, and lastly the cultural and workflow practices
that cut across the former. Each sub-question may evaluate people, process and technology –
or weigh more towards one of those parameters – for the given practice question.
There are questions on AI or ML based considerations that may or may not be relevant for
the portfolio; AIOps is about incorporating AI-based capabilities into DevOps pipelines, while
MLOps is about applying DevOps to manage ML-based application development cycles.
The questions under technology – marked orange – cover specific areas of technology and
tool usage across the IT life cycle. The first three questions are on Dev-related, Ops-related
and process-related technologies and tools respectively, the last of these covering
requirements management, monitoring and IT service management workflows. The final
question in this category is about how mature the reporting and analytics capabilities are
across the practices.
The last category, metrics – marked blue – has questions on the development life cycle and
the IT service management life cycle respectively. The corresponding responses are based on
metrics data – data ranges, rather – reflecting actual IT performance for the portfolio.
It is important that all the questions are answered in order to generate a complete and
accurate DevOps readiness report. Leaving any of them blank, or filling in inappropriate or
invalid responses, may result in errors, inaccuracies or incompleteness in the report. If the
PAQ is filled in for multiple portfolios, you may want to pre-fill some of the common fields
across the portfolios – for example, most of the responses to questions on architecture
direction, culture or workflow may be the same across portfolios – and use this as a template,
saving the effort of filling them in every time.
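As a minimal illustration of that template idea – the field names, values and portfolio names below are hypothetical, not actual PAQ fields:

```python
# Hypothetical sketch of pre-filling common PAQ responses once and copying
# them per portfolio; field names and values are illustrative.
template = {
    "architecture direction": "API-first, containerized",
    "culture and workflow": "cross-functional squads",
}
portfolios = ["Payments", "CRM", "Reporting"]

# Copy the template per portfolio so later edits stay portfolio-specific.
paq_responses = {p: dict(template) for p in portfolios}
paq_responses["Reporting"]["architecture direction"] = "legacy monolith"

for p, answers in paq_responses.items():
    print(p, answers)
```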
HOW TO READ THE BASIC PRE-ASSESSMENT REPORTS
Welcome! There are 3 basic reports generated from our pre-assessment. The first one, which
we cover in this session, is the DevOps readiness report. Let us look at it in a little
more detail.
At the top left, we summarize the scope in terms of business criticality, pain areas, technology
stack and infrastructure for the portfolio.
On the top right, we provide a DevOps readiness score on a scale of 0 to 10, based on analysis
of the PAQ responses. Below the score lies a set of 5 foundational parameters – process
and architecture guidelines, architecture and security governance, people skilling, inter-role
collaboration, and work tracking and pipeline integration. For the first two, the primary inputs
are the responses on process and architectural directions. For the parameters on culture,
the responses on cultural and workflow practices provide the primary inputs. For the last one,
the primary inputs come from both sets of questions. The scoring is in terms of doughnuts –
green indicates mature and consistent practice, yellow indicates somewhat mature practice
that may need improvement, and red indicates a lack of maturity that definitely needs
improvement.
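As a sketch of how such colour-coding might work, assuming each parameter's responses are normalized to a 0-10 score: the thresholds below are illustrative, not the actual PAQ scoring rules.

```python
# A minimal sketch, assuming responses normalize to 0-10 per parameter:
# map each foundational parameter's score to a doughnut colour.
# Thresholds and scores are illustrative, not the actual scoring rules.
def doughnut(score: float) -> str:
    if score >= 7:
        return "green"    # mature and consistent practice
    if score >= 4:
        return "yellow"   # somewhat mature, may need improvement
    return "red"          # lacks maturity, definitely needs improvement

parameters = {
    "process and architecture guidelines": 8.0,
    "architecture and security governance": 5.5,
    "people skilling": 3.0,
    "inter-role collaboration": 6.0,
    "work tracking and pipeline integration": 7.5,
}
for name, score in parameters.items():
    print(f"{name}: {doughnut(score)}")
```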
Next come the operational maturity indicators across the DevOps practice areas over the
pipeline. The view shows 4 dimensions – people, process, technology and metrics. Maturity
is indicated by green, yellow and red discs. For instance, a red or yellow indicator for people
denotes that there may be a lack of collaboration, that an activity is not being reviewed in
parallel by a given role, or, say, that an activity is done without resolving a failure that can
impact its success. Similarly, a red or yellow indicator for process denotes a lack of defined
process for hand-offs, code development that is not test driven, or, say, certain production
issues having no traceability to requirements. A red or yellow indicator for technology denotes
a lack of automation, of tool-chain based pipeline integration, or, say, of automated
monitoring mechanisms. For metrics, maturity is based on how good the metrics look with
respect to IT performance; the key metrics being release frequency, release cycle time,
change success in production, mean time to repair incidents in production – or MTTR – along
with the prevalence of security and performance issues, and mean time between failures in
production – or MTBF. Such points are detailed in the boxes marked 'What Works' and
'What can Improve', with separate color coding for people, process and technology aspects.
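To make the two repair metrics concrete, here is a short sketch computing MTTR and MTBF from incident records. The timestamps are illustrative, not drawn from any actual portfolio data.

```python
# A sketch of MTTR and MTBF computed from illustrative incident records
# (opened, resolved timestamps), not from any real PAQ data.
from datetime import datetime, timedelta

incidents = [
    (datetime(2024, 1, 3, 9),   datetime(2024, 1, 3, 13)),   # (opened, resolved)
    (datetime(2024, 1, 10, 2),  datetime(2024, 1, 10, 11)),
    (datetime(2024, 1, 22, 15), datetime(2024, 1, 22, 18)),
]

# MTTR: average time from an incident opening to its resolution.
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

# MTBF: average time between the start of one failure and the next.
gaps = [incidents[i + 1][0] - incidents[i][0] for i in range(len(incidents) - 1)]
mtbf = sum(gaps, timedelta()) / len(gaps)

print(f"MTTR: {mttr}, MTBF: {mtbf}")
```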
Finally, on the left side, the maturity of 5 key outcomes is indicated. Green, yellow and
orange boxes indicate high, medium and low readiness respectively for each of the five
outcomes. Business alignment is based on factors such as criticality, IT impact and the
business's pain areas with respect to IT. Culture and collaboration indicates the maturity of
people and team behavior. IT life cycle automation is based on the extent of tool-based
automation, and the integration of such automation with the pipeline, monitoring and
tracking mechanisms. Security and reliability indicates maturity of the corresponding
practices across the pipeline. Metrics-led feedback is about the mechanisms that exist for
validation and progress tracking using suitable metrics, and how effectively such metrics are
used in feedback loops across the practices.
The report does provide a starting point on what needs to improve towards achieving a
desired state of DevOps-led IT. However, note that the report is not based on any in-depth
analysis; it does not visualize the target state or how to get there.
We will briefly cover the next two. The second report is the heatmap we create for
multiple-portfolio pre-assessments, to compare where each portfolio stands with respect
to DevOps. It contains the summarized indicators for people, process, technology and metrics,
along with areas of improvement and, optionally, portfolio-wise scores.
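As a sketch of the heatmap's shape as data – one row per portfolio with an indicator per dimension – the portfolio names and colours below are illustrative:

```python
# Sketch of the multi-portfolio heatmap as data: one row per portfolio with
# green/yellow/red indicators per dimension. Values are illustrative.
heatmap = {
    "Payments":  {"people": "green", "process": "yellow",
                  "technology": "green", "metrics": "yellow"},
    "Reporting": {"people": "red", "process": "red",
                  "technology": "yellow", "metrics": "red"},
}
for portfolio, dims in heatmap.items():
    # A simple worst-of rule for an overall colour; the real report may differ.
    worst = ("red" if "red" in dims.values()
             else "yellow" if "yellow" in dims.values() else "green")
    print(f"{portfolio}: {dims} -> overall {worst}")
```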
The third report also applies to multiple-portfolio assessments, say enterprise-wide. We first
summarize the overall score for each portfolio as a green, yellow or red disc. Then a graph is
plotted for the portfolios, with criticality for DevOps on the Y-axis and a categorization
parameter for the portfolios on the X-axis. Such categories may be by environment and
infrastructure type, as shown here, or by application stack, the groups owning the
portfolios, the perceived complexity of achieving DevOps, etc. Both this report and the
heatmap may be used for prioritizing transformation initiatives, or for prioritizing the need
for further detailed DevOps assessments.
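A hedged sketch of that plot follows, using matplotlib: portfolios placed by DevOps criticality (Y) against an environment/infrastructure categorization (X). The portfolio names, categories and scores are illustrative.

```python
# Sketch of the third report's plot: criticality for DevOps (Y) against a
# categorization parameter (X). Names, categories and scores are illustrative.
import matplotlib.pyplot as plt

portfolios = [
    ("Payments",  "cloud",      9),
    ("CRM",       "on-premise", 7),
    ("Reporting", "hybrid",     4),
    ("Archival",  "on-premise", 2),
]
categories = sorted({p[1] for p in portfolios})
xs = [categories.index(p[1]) for p in portfolios]
ys = [p[2] for p in portfolios]

fig, ax = plt.subplots()
ax.scatter(xs, ys)
for (name, _, _), x, y in zip(portfolios, xs, ys):
    ax.annotate(name, (x, y))
ax.set_xticks(range(len(categories)))
ax.set_xticklabels(categories)
ax.set_xlabel("environment / infrastructure type")
ax.set_ylabel("criticality for DevOps")
plt.show()
```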
EXERCISE ON QODE
Your exercise starts with a customer IT landscape view. You do not have to do anything here;
it is for you to read. You would typically start by collecting data on the customer's IT context
this way, which will help you understand and prioritize where DevOps fits in.
In our case, you will consider the Java based front office system. The data is already captured
and given to you in the standard QODE questionnaire format. Let us look at the data. There
are 11 columns. The first is the task category or epic, followed by the task id and
description. This is followed by the total time taken to execute the task and the manual time
spent. As you may already know, a task's total time comprises manual time, automation
time – the time taken by the tool – and possibly some idle or float time before a succeeding
task is executed. This is followed by criticality as high, medium or low; then predecessor
tasks, the role executing the task, the input and output of the task, and lastly the transition
type based on the input and output given. Note that there may be minor discrepancies in the
data, where you may make necessary corrections or assumptions.
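To fix the column layout in your mind, here is a sketch of one questionnaire row as a data structure, assuming the 11 columns described above. The field values are illustrative, not the exercise data; the arithmetic shown is the identity stated above: total time = manual + automation + idle/float time.

```python
# A sketch of one QODE questionnaire row, assuming the 11 columns described
# above. Values are illustrative, not the actual exercise data.
from dataclasses import dataclass

@dataclass
class TaskRow:
    epic: str                 # task category or epic
    task_id: str
    description: str
    total_time_hrs: float
    manual_time_hrs: float
    criticality: str          # high / medium / low
    predecessors: list[str]
    role: str
    inputs: str
    outputs: str
    transition_type: str      # based on the input and output given

row = TaskRow("Build", "T7", "Compile and package", 6.0, 2.0,
              "high", ["T5"], "Dev", "source", "artifact", "automated")

automation_time_hrs = 3.0     # illustrative tool time
# total = manual + automation + idle/float, so idle is the remainder.
idle_time_hrs = row.total_time_hrs - row.manual_time_hrs - automation_time_hrs
print(f"idle/float time: {idle_time_hrs} hours")
```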
Now let's talk about what you need to do. Based on the data, draw the process network
diagram. Keep it simple – you may only articulate the events – start and end state boxes
showing input or output – and the tasks interconnecting the events, along with time details.
Make sure there is only one initial task from which the entire process originates; use dummy
tasks if required. Once done, compute the relative start times for the process paths.
Next, find the critical path based on the total times of the possible paths. Then optimize the
tasks on the critical path, making corresponding adjustments to the other paths.
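If you want to check your hand computation, here is a minimal critical-path sketch under stated assumptions: tasks form a DAG with a single initial task, and durations are total times. The task ids and times are illustrative, not the exercise data.

```python
# Minimal critical-path sketch: tasks form a DAG from one initial task;
# the critical path is the longest path by total time. Illustrative data.
from functools import lru_cache

durations = {"T1": 2, "T2": 6, "T3": 4, "T4": 3}
predecessors = {"T1": [], "T2": ["T1"], "T3": ["T1"], "T4": ["T2", "T3"]}

@lru_cache(maxsize=None)
def earliest_finish(task: str) -> float:
    # Relative start time = latest earliest-finish among predecessors.
    start = max((earliest_finish(p) for p in predecessors[task]), default=0)
    return start + durations[task]

end_task = "T4"
print("cycle time:", earliest_finish(end_task))   # longest-path total

# Walk back along the binding predecessors to recover the critical path.
path, task = [end_task], end_task
while predecessors[task]:
    task = max(predecessors[task], key=earliest_finish)
    path.append(task)
print("critical path:", list(reversed(path)))
```

Re-running the same computation with optimized task durations gives the target cycle time for the next step.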
Next, you may draw the target process diagram and then compute the cycle time for the
target process.
Lastly, draw up a 30-60-90 day plan showing the DevOps stories that will be implemented to
arrive at the target state. Present or share the process diagrams, critical path and plan for our
evaluation. The entire exercise may take around 2-4 hours. This way, you can run an actual
quick workshop with your customer to get started on DevOps.