
BSIMM12
2021 FOUNDATIONS REPORT

BSIMM12 FOUNDATIONS REPORT AUTHORS
SAMMY MIGUES, Principal Scientist at Synopsys
ELI ERLIKHMAN, Managing Principal at Synopsys
JACOB EWERS, Principal Consultant at Synopsys
KEVIN NASSERY, Senior Principal Consultant at Synopsys

ACKNOWLEDGEMENTS
Our thanks to the 128 executives from the SSIs around the world that we studied to create BSIMM12, including those who chose to remain anonymous.

AARP, Adobe, Aetna, Alibaba, Ally Bank, Autodesk, Axway, Bank of America, Bell, Black Duck Software, Black Knight Financial Services, Canadian Imperial Bank of Commerce, Cisco, Citigroup, Depository Trust & Clearing Corporation, Eli Lilly, eMoney Advisor, EQ Bank, Equifax, F-Secure, Fannie Mae, Finastra, Freddie Mac, Genetec, Global Payments, HCA Healthcare, Highmark Health Solutions, Honeywell, HSBC, iPipeline, Johnson & Johnson, Landis+Gyr, Lenovo, MassMutual, McKesson, Medtronic, MediaTek, Morningstar, Navient, Navy Federal Credit Union, NCR, NEC Platforms, NetApp, NewsCorp, NVIDIA, Oppo, PayPal, Pegasystems, Principal Financial Group, RB, SambaSafety, ServiceNow, Synopsys, TD Ameritrade, Teradata, The Home Depot, The Vanguard Group, Trainline, Trane, U.S. Bank, Veritas, Verizon Media



BSIMM12 TABLE OF CONTENTS

ACKNOWLEDGEMENTS ... 2

PART ONE: WELCOME TO BSIMM12 ... 6
WELCOME TO BSIMM12 ... 6
THE BSIMM'S HISTORY ... 6
THE MODEL ... 6
• The BSIMM12 Framework ... 7
• The BSIMM12 Skeleton ... 8
• Understanding the Model ... 12
CREATING BSIMM12 FROM BSIMM11 ... 13
• Where Do Old Activities Go? ... 14
ROLES IN A SOFTWARE SECURITY INITIATIVE ... 16
• Executive Leadership ... 16
• SSI Leadership ... 16
• Software Security Group ... 18
• Satellite ... 20
• Everybody Else ... 21
• BSIMM Terminology ... 22
• More on Builders and Testers ... 23
INTERPRETING BSIMM MEASUREMENTS ... 24

PART TWO: USING THE BSIMM ... 28
USING THE BSIMM ... 28
• SSI Phases ... 28
• Traditional SSI Approaches ... 29
• The New Wave in Engineering Culture ... 29
• Convergence as a Goal ... 31
• A Tale of Two Journeys: Governance vs. Engineering ... 32
  The Governance-Led Journey ... 32
  Governance-Led Checklist for Getting Started ... 33
  Maturing Governance-Led SSIs ... 34
  Enabling Governance-Led SSIs ... 35
  The Engineering-Led Journey ... 36
  Engineering-Led Checklist for Getting Started ... 36
  Prioritizing In-Scope Software ... 37
  Maturing Engineering-Led Efforts ... 38
  Engineering-Led Heuristics ... 39
  Enabling Engineering-Led Efforts ... 40

PART THREE: BUILDING BLOCKS OF THE BSIMM ... 42
GOVERNANCE ... 44
• Governance: Strategy & Metrics (SM) ... 44
• Governance: Compliance & Policy (CP) ... 46
• Governance: Training (T) ... 49
INTELLIGENCE ... 51
• Intelligence: Attack Models (AM) ... 51
• Intelligence: Security Features & Design (SFD) ... 53
• Intelligence: Standards & Requirements (SR) ... 55
SSDL TOUCHPOINTS ... 57
• SSDL Touchpoints: Architecture Analysis (AA) ... 57
• SSDL Touchpoints: Code Review (CR) ... 59
• SSDL Touchpoints: Security Testing (ST) ... 60
DEPLOYMENT ... 62
• Deployment: Penetration Testing (PT) ... 62
• Deployment: Software Environment (SE) ... 63
• Deployment: Configuration Management & Vulnerability Management (CMVM) ... 65

APPENDIX ... 69
BUILDING A MODEL FOR SOFTWARE SECURITY ... 69
THE BSIMM AS A LONGITUDINAL STUDY ... 70
• Changes to Longitudinal Scorecard ... 73
CHARTS, GRAPHS, AND SCORECARDS ... 77
• The BSIMM12 Expanded Skeleton ... 80
• Comparing Verticals ... 85
  Vertical Comparison Scorecard ... 86
MODEL CHANGES OVER TIME ... 90
IMPLEMENTING AN SSI FOR THE FIRST TIME ... 92
• Create a Software Security Group ... 92
• Inventory All Software in the SSG's Purview ... 93
• Ensure Infrastructure Security Is Applied to the Software Environment ... 93
• Deploy Defect Discovery Against Highest Priority Applications ... 93
• Publish and Promote the Process ... 95
• Progress to the Next Step in Your Journey ... 95
• Summary ... 95
DATA TRENDS OVER TIME ... 97
BSIMM12 LIST OF TABLES
Table 1. New Activities ... 13
Table 2. The Software Security Group ... 20
Table 3. BSIMM12 ExampleFirm Scorecard ... 25
Table 4. Most Common Activities Per Practice ... 42
Table 5. Top 20 Activities by Observation Count ... 43
Table A. BSIMM Numbers Over Time ... 70
Table B. BSIMM Verticals Over Time ... 70
Table C. BSIMM12 Reassessments Scorecard Round 1 vs. Round 2 ... 71
Table D. BSIMM12 Reassessments Scorecard Round 1 vs. Round 3 ... 74
Table E. Observation Rate of Selected Level 3 Activities for 76 R1 and 52 R2 Firms ... 76
Table F. BSIMM12 Scorecard ... 78
Table G. BSIMM12 Skeleton ... 80
Table H. Vertical Comparison Scorecard ... 86
Table I. Activity Changes Over Time ... 90

BSIMM12 LIST OF FIGURES
Figure 1. The Software Security Framework ... 7
Figure 2. The BSIMM Skeleton ... 8
Figure 3. Number of Observations for [AA3.2] and [CR3.5] Over Time ... 14
Figure 4. Ongoing Use of the BSIMM in Driving Organizational Maturity ... 15
Figure 5. Nearest Executive to SSG ... 17
Figure 6. Percentage of SSGs with the CISO as Their Nearest Executive ... 18
Figure 7. Percentage of Firms That Have a Satellite Organized by BSIMM Score ... 21
Figure 8. AllFirms vs. ExampleFirm Spider Chart ... 26
Figure 9. SSG Evolution ... 30
Figure A. AllFirms Round 1 vs. AllFirms Round 2 Spider Chart ... 72
Figure B. Activity Increases Between First and Second Measurements ... 73
Figure C. AllFirms Round 1 vs. AllFirms Round 3 Spider Chart ... 75
Figure D. Longitudinal Increases Between 21 Firms Round 2 and Round 3 by Practice ... 76
Figure E. AllFirms Spider Chart ... 79
Figure F. BSIMM Score Distribution ... 85
Figure G. BSIMM Activity Roadmap by Organizational Approach ... 92
Figure H. A Sample Governance-Led Organization's Testing Program Over Time ... 94
Figure I. BSIMM Activity Roadmap by Organizational Approach with Cost ... 96
Figure J. Average BSIMM Participant Score ... 97
Figure K. Average and Median SSG Age for New Firms Entering the BSIMM ... 98
Figure L. Number of Firms That Received Their Second or Higher Assessment ... 99
Figure M. Number of Firms Aged Out of the BSIMM Data Pool ... 99
Figure N. Average Financial Services Firm Scores ... 100
Figure O. Statistics for Firms With and Without a Satellite Out of 128 BSIMM12 Participants ... 101
PART ONE
WELCOME TO BSIMM12
WELCOME TO BSIMM12
The BSIMM is the result of a multiyear study of real-world software security initiatives (SSIs). Each year, a variety of
firms in different industry verticals use the BSIMM to manage their SSI improvements because it provides a clear
picture of actual practices across the security landscape. Here, we present BSIMM12 as built directly out of the data
we observed in 128 firms. In this section, we talk about its history and the underlying model, as well as the changes
we’ve made for BSIMM12. We describe the roles we typically see in an SSI and some related terminology. We also offer
guidance on how to use the BSIMM to start, mature, and measure your own SSI.

THE BSIMM’S HISTORY

We built the first version of the BSIMM over a decade ago (late 2008) as follows:
• We relied on our own knowledge of software security practices to create the software security
framework (SSF).
• We conducted a series of in-person interviews with nine executives in charge of SSIs. From these interviews,
we identified a set of common activities that we organized according to the SSF.
• We then created scorecards for each of the nine initiatives, showing which activities each initiative carried out. To validate our work, we asked each participating firm to review the framework, the practices, and the scorecard we created for their initiative.
Today, we continue to evolve the model by looking for new activities as participants are added and as current
participants are remeasured. We also adjust the model according to observation rates for each of the activities.

THE MODEL
The BSIMM is a data-driven model that evolves over time. Over the years, we have added, deleted, and adjusted the
levels of various activities based on the data observed throughout the project’s evolution. To preserve backward
compatibility, we make all changes by adding new activity labels to the model, even when an activity has simply
changed levels (e.g., by adding a new CR#.# label for both new and moved activities in the Code Review practice).
When considering whether to add a new activity, we analyze whether the effort we’re observing is truly new to the
model or simply a variation on an existing activity. When considering whether to move an activity between levels, we
use the results of an intralevel standard deviation analysis and the trend in observation counts.
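
To make the level-placement logic concrete, here is a minimal illustrative sketch in Python. It is not the BSIMM authors' tooling; the observation counts, the chosen practice, and the one-standard-deviation flag are all hypothetical, and the fragment only shows the general shape of comparing an activity's observation rate against the other activities at its level.

# Illustrative sketch only (not the BSIMM authors' tooling): given hypothetical
# observation counts for the activities in one practice, compute per-level
# observation rates and the intralevel mean/standard deviation that might
# inform a decision to move an activity between levels.
from statistics import mean, stdev

TOTAL_FIRMS = 128  # size of the BSIMM12 data pool

# Hypothetical observation counts, keyed by activity label.
observations = {
    "SM1.1": 110, "SM1.3": 95, "SM1.4": 88,   # level 1 activities
    "SM2.1": 60,  "SM2.2": 55, "SM2.3": 48,   # level 2 activities
    "SM3.1": 20,  "SM3.2": 12, "SM3.3": 9,    # level 3 activities
}

def level_of(label):
    """Read the level digit embedded in a label such as 'SM2.1'."""
    return int(label.split(".")[0][-1])

# Group observation rates by level.
rates_by_level = {}
for label, count in observations.items():
    rates_by_level.setdefault(level_of(label), []).append(count / TOTAL_FIRMS)

for level, rates in sorted(rates_by_level.items()):
    print(f"level {level}: mean rate {mean(rates):.2f}, stdev {stdev(rates):.2f}")

# An activity whose rate falls well outside its level's mean (for instance, by
# more than one standard deviation, a threshold chosen arbitrarily here) would
# be a candidate for moving up or down a level in the next BSIMM iteration.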
Whenever possible, we use an in-person interview technique to conduct BSIMM assessments, done with a total of 231
firms so far. In addition, we’ve conducted assessments for 10 organizations that have rejoined the community after
aging out. In 40 cases, we assessed both the software security group (SSG) and one or more business units as part of
creating the corporate SSI view.
For most organizations, we create a single aggregated scorecard, whereas in others, we create individual scorecards
for the SSG and each business unit assessed. However, each firm is represented by only one set of data in the model
published here. (“Table A. BSIMM Numbers Over Time” in the appendix shows changes in the data pool over time.)
As a descriptive model, the only goal of the BSIMM is to observe and report. We like to say we visited a neighborhood
to see what was happening and observed that “there are robot vacuum cleaners in X of the Y houses we visited.” Note
that the BSIMM does not say, “all houses must have robot vacuum cleaners,” “robots are the only acceptable kind
of vacuum cleaners,” “vacuum cleaners must be used every day,” or any other value judgements. We offer simple
observations, simply reported.
Of course, during our assessment efforts across hundreds of organizations, we also make qualitative observations
about how SSIs are evolving and report many of those as key takeaways, themes, and other topical discussions.
Our “just the facts” approach is hardly novel in science and engineering, but in the realm of software security, it has
not previously been applied at this scale. Other work around SSI modeling has either described the experience of a
single organization or offered prescriptive guidance based on a combination of personal experience and opinion.



THE BSIMM12 FRAMEWORK
The BSIMM is organized as a set of 122 activities in an SSF, represented in Figure 1. The framework includes 12 practices
that are organized into four domains.

DOMAINS

GOVERNANCE. Practices that help organize, manage, and measure a software security initiative. Staff development is also a central governance practice.

INTELLIGENCE. Practices that result in collections of corporate knowledge used in carrying out software security activities throughout the organization. Collections include both proactive security guidance and organizational threat modeling.

SSDL TOUCHPOINTS. Practices associated with analysis and assurance of particular software development artifacts and processes. All software security methodologies include these practices.

DEPLOYMENT. Practices that interface with traditional network security and software maintenance organizations. Software configuration, maintenance, and other environment issues have direct impact on software security.

PRACTICES

GOVERNANCE: 1. Strategy & Metrics (SM); 2. Compliance & Policy (CP); 3. Training (T)
INTELLIGENCE: 4. Attack Models (AM); 5. Security Features & Design (SFD); 6. Standards & Requirements (SR)
SSDL TOUCHPOINTS: 7. Architecture Analysis (AA); 8. Code Review (CR); 9. Security Testing (ST)
DEPLOYMENT: 10. Penetration Testing (PT); 11. Software Environment (SE); 12. Configuration Management & Vulnerability Management (CMVM)

FIGURE 1. THE SOFTWARE SECURITY FRAMEWORK. This figure shows how the 12 practices align with the four
high-level domains.



THE BSIMM12 SKELETON
The BSIMM skeleton provides a way to view the model at a glance and is useful when assessing an SSI. The skeleton
is shown in Figure 2, organized by practices and levels. More complete descriptions of the activities and examples are
available in Part Three of this document.

GOVERNANCE

STRATEGY & METRICS (SM)
LEVEL 1
• [SM1.1] Publish process and evolve as necessary.
• [SM1.3] Educate executives on software security.
• [SM1.4] Implement lifecycle instrumentation and use to define governance.
LEVEL 2
• [SM2.1] Publish data about software security internally and drive change.
• [SM2.2] Verify release conditions with measurements and track exceptions.
• [SM2.3] Create or grow a satellite.
• [SM2.6] Require security sign-off prior to software release.
• [SM2.7] Create evangelism role and perform internal marketing.
LEVEL 3
• [SM3.1] Use an internal tracking application with portfolio view.
• [SM3.2] SSI efforts are part of external marketing.
• [SM3.3] Identify metrics and use them to drive resourcing.
• [SM3.4] Integrate software-defined lifecycle governance.

COMPLIANCE & POLICY (CP)
LEVEL 1
• [CP1.1] Unify regulatory pressures.
• [CP1.2] Identify PII obligations.
• [CP1.3] Create policy.
LEVEL 2
• [CP2.1] Build PII inventory.
• [CP2.2] Require security sign-off for compliance-related risk.
• [CP2.3] Implement and track controls for compliance.
• [CP2.4] Include software security SLAs in all vendor contracts.
• [CP2.5] Ensure executive awareness of compliance and privacy obligations.
LEVEL 3
• [CP3.1] Create a regulator compliance story.
• [CP3.2] Impose policy on vendors.
• [CP3.3] Drive feedback from software lifecycle data back to policy.

TRAINING (T)
LEVEL 1
• [T1.1] Conduct software security awareness training.
• [T1.7] Deliver on-demand individual training.
• [T1.8] Include security resources in onboarding.
LEVEL 2
• [T2.5] Enhance satellite through training and events.
• [T2.8] Create and use material specific to company history.
• [T2.9] Deliver role-specific advanced curriculum.
LEVEL 3
• [T3.1] Reward progression through curriculum.
• [T3.2] Provide training for vendors and outsourced workers.
• [T3.3] Host software security events.
• [T3.4] Require an annual refresher.
• [T3.5] Establish SSG office hours.
• [T3.6] Identify new satellite members through observation.

FIGURE 2. THE BSIMM SKELETON. Within the SSF, the 122 activities are organized across different levels.



INTELLIGENCE

ATTACK MODELS (AM)
LEVEL 1
• [AM1.2] Create a data classification scheme and inventory.
• [AM1.3] Identify potential attackers.
• [AM1.5] Gather and use attack intelligence.
LEVEL 2
• [AM2.1] Build attack patterns and abuse cases tied to potential attackers.
• [AM2.2] Create technology-specific attack patterns.
• [AM2.5] Maintain and use a top N possible attacks list.
• [AM2.6] Collect and publish attack stories.
• [AM2.7] Build an internal forum to discuss attacks.
LEVEL 3
• [AM3.1] Have a research group that develops new attack methods.
• [AM3.2] Create and use automation to mimic attackers.
• [AM3.3] Monitor automated asset creation.

SECURITY FEATURES & DESIGN (SFD)
LEVEL 1
• [SFD1.1] Integrate and deliver security features.
• [SFD1.2] Engage the SSG with architecture teams.
LEVEL 2
• [SFD2.1] Leverage secure-by-design components and services.
• [SFD2.2] Create capability to solve difficult design problems.
LEVEL 3
• [SFD3.1] Form a review board or central committee to approve and maintain secure design patterns.
• [SFD3.2] Require use of approved security features and frameworks.
• [SFD3.3] Find and publish secure design patterns from the organization.

STANDARDS & REQUIREMENTS (SR)
LEVEL 1
• [SR1.1] Create security standards.
• [SR1.2] Create a security portal.
• [SR1.3] Translate compliance constraints to requirements.
LEVEL 2
• [SR2.2] Create a standards review board.
• [SR2.4] Identify open source.
• [SR2.5] Create SLA boilerplate.
LEVEL 3
• [SR3.1] Control open source risk.
• [SR3.2] Communicate standards to vendors.
• [SR3.3] Use secure coding standards.
• [SR3.4] Create standards for technology stacks.

FIGURE 2. THE BSIMM SKELETON. Within the SSF, the 122 activities are organized across different levels.



SSDL TOUCHPOINTS

ARCHITECTURE ANALYSIS (AA)
LEVEL 1
• [AA1.1] Perform security feature review.
• [AA1.2] Perform design review for high-risk applications.
• [AA1.3] Have SSG lead design review efforts.
• [AA1.4] Use a risk methodology to rank applications.
LEVEL 2
• [AA2.1] Define and use AA process.
• [AA2.2] Standardize architectural descriptions.
LEVEL 3
• [AA3.1] Have engineering teams lead AA process.
• [AA3.2] Drive analysis results into standard design patterns.
• [AA3.3] Make the SSG available as an AA resource or mentor.

CODE REVIEW (CR)
LEVEL 1
• [CR1.2] Perform opportunistic code review.
• [CR1.4] Use automated tools.
• [CR1.5] Make code review mandatory for all projects.
• [CR1.6] Use centralized reporting to close the knowledge loop.
• [CR1.7] Assign tool mentors.
LEVEL 2
• [CR2.6] Use automated tools with tailored rules.
• [CR2.7] Use a top N bugs list (real data preferred).
LEVEL 3
• [CR3.2] Build a capability to combine assessment results.
• [CR3.3] Create capability to eradicate bugs.
• [CR3.4] Automate malicious code detection.
• [CR3.5] Enforce coding standards.

SECURITY TESTING (ST)
LEVEL 1
• [ST1.1] Ensure QA performs edge/boundary value condition testing.
• [ST1.3] Drive tests with security requirements and security features.
• [ST1.4] Integrate opaque-box security tools into the QA process.
LEVEL 2
• [ST2.4] Share security results with QA.
• [ST2.5] Include security tests in QA automation.
• [ST2.6] Perform fuzz testing customized to application APIs.
LEVEL 3
• [ST3.3] Drive tests with risk analysis results.
• [ST3.4] Leverage coverage analysis.
• [ST3.5] Begin to build and apply adversarial security tests (abuse cases).
• [ST3.6] Implement event-driven security testing in automation.

FIGURE 2. THE BSIMM SKELETON. Within the SSF, the 122 activities are organized across different levels.



DEPLOYMENT

PENETRATION TESTING (PT)
LEVEL 1
• [PT1.1] Use external penetration testers to find problems.
• [PT1.2] Feed results to the defect management and mitigation system.
• [PT1.3] Use penetration testing tools internally.
LEVEL 2
• [PT2.2] Penetration testers use all available information.
• [PT2.3] Schedule periodic penetration tests for application coverage.
LEVEL 3
• [PT3.1] Use external penetration testers to perform deep-dive analysis.
• [PT3.2] Customize penetration testing tools.

SOFTWARE ENVIRONMENT (SE)
LEVEL 1
• [SE1.1] Use application input monitoring.
• [SE1.2] Ensure host and network security basics are in place.
LEVEL 2
• [SE2.2] Define secure deployment parameters and configurations.
• [SE2.4] Protect code integrity.
• [SE2.5] Use application containers to support security goals.
• [SE2.6] Ensure cloud security basics.
• [SE2.7] Use orchestration for containers and virtualized environments.
LEVEL 3
• [SE3.2] Use code protection.
• [SE3.3] Use application behavior monitoring and diagnostics.
• [SE3.6] Enhance application inventory with operations bill of materials.

CONFIGURATION MANAGEMENT & VULNERABILITY MANAGEMENT (CMVM)
LEVEL 1
• [CMVM1.1] Create or interface with incident response.
• [CMVM1.2] Identify software defects found in operations monitoring and feed them back to development.
LEVEL 2
• [CMVM2.1] Have emergency response.
• [CMVM2.2] Track software bugs found in operations through the fix process.
• [CMVM2.3] Develop an operations inventory of software delivery value streams.
LEVEL 3
• [CMVM3.1] Fix all occurrences of software bugs found in operations.
• [CMVM3.2] Enhance the SSDL to prevent software bugs found in operations.
• [CMVM3.3] Simulate software crises.
• [CMVM3.4] Operate a bug bounty program.
• [CMVM3.5] Automate verification of operational infrastructure security.
• [CMVM3.6] Publish risk data for deployable artifacts.
• [CMVM3.7] Streamline incoming responsible vulnerability disclosure.

FIGURE 2. THE BSIMM SKELETON. Within the SSF, the 122 activities are organized across different levels.
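
As a reader aid only, the nested structure shown in Figure 2 (domains containing practices, and practices containing activities grouped by level) maps naturally onto a nested dictionary. The Python sketch below is not an official BSIMM artifact and includes only a handful of the 122 activities.

# A minimal sketch of the SSF/skeleton structure: domain -> practice -> level ->
# activities. Only a few activities are listed; the full model has 122.
SSF = {
    "Governance": {
        "Strategy & Metrics (SM)": {
            1: ["SM1.1 Publish process and evolve as necessary",
                "SM1.3 Educate executives on software security"],
            2: ["SM2.1 Publish data about software security internally and drive change"],
            3: ["SM3.1 Use an internal tracking application with portfolio view"],
        },
        "Compliance & Policy (CP)": {1: ["CP1.1 Unify regulatory pressures"]},
        "Training (T)": {1: ["T1.1 Conduct software security awareness training"]},
    },
    "Deployment": {
        "Penetration Testing (PT)": {1: ["PT1.1 Use external penetration testers to find problems"]},
        "Software Environment (SE)": {2: ["SE2.6 Ensure cloud security basics"]},
        "Configuration Management & Vulnerability Management (CMVM)": {
            3: ["CMVM3.7 Streamline incoming responsible vulnerability disclosure"],
        },
    },
    # "Intelligence" and "SSDL Touchpoints" follow the same shape.
}

# Count the activities present in this fragment, grouped by domain.
for domain, practices in SSF.items():
    count = sum(len(acts) for levels in practices.values() for acts in levels.values())
    print(f"{domain}: {count} activities in this fragment")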



UNDERSTANDING THE MODEL
Let’s walk through an overview of how the BSIMM is organized.

• Figure 1 shows the SSF, including the Governance domain, which contains the Strategy & Metrics, Compliance & Policy, and Training practices.
• Each practice (the Training practice, for example) expands to show its included activities. Each practice includes a set of activities separated into levels based on observation frequency. Each activity is unique; the levels do not contain different versions of the same activity.
• Each of the 122 BSIMM12 activities has a description, and they are all found in Part Three of this document.
• A Training practice box shows how activity observations might appear in a BSIMM assessment scorecard (where a "1" indicates the activity was observed).
An activity’s level is also embedded in the activity numeric identifier (e.g., SM2.1
is a Strategy & Metrics activity at level 2). For more detail on each activity, read
the BSIMM activities in Part Three of this document. Level assignment for each
activity stems from its frequency of occurrence in the BSIMM data pool. The most
frequently observed activities are placed in level 1, while those activities observed
infrequently are placed in level 3. Changes in the BSIMM data pool over time can
result in moving activities to other levels, such as moving from level 2 to level 3
or from level 2 to level 1.
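
The activity identifier convention described above (the digit before the dot is the level) and the scorecard convention (a 1 marks an observed activity) are simple enough to handle mechanically. The following Python fragment is a hypothetical illustration, not part of the BSIMM itself, and the observed set is made up:

# Hypothetical illustration of the naming and scorecard conventions described
# above; not part of the BSIMM itself.
import re

ACTIVITY_ID = re.compile(r"^([A-Z]+)(\d)\.(\d+)$")  # e.g., "SM2.1" -> ("SM", "2", "1")

def parse_activity(label):
    """Return (practice abbreviation, level) for a label such as 'SM2.1'."""
    match = ACTIVITY_ID.match(label)
    if match is None:
        raise ValueError(f"not a BSIMM activity label: {label}")
    practice, level, _ordinal = match.groups()
    return practice, int(level)

# A scorecard fragment for the Training (T) practice; the observed set is made up.
training_activities = ["T1.1", "T1.7", "T1.8", "T2.5", "T2.8", "T2.9"]
observed = {"T1.1", "T1.7", "T2.8"}

for label in training_activities:
    practice, level = parse_activity(label)
    mark = 1 if label in observed else 0
    print(f"{label} (practice {practice}, level {level}): {mark}")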



CREATING BSIMM12 FROM BSIMM11
BSIMM12 includes updated activity descriptions, data from 128 firms in multiple vertical markets, and a longitudinal
study. For BSIMM12, we added 20 firms and removed 22, resulting in a data pool of 128 firms. In addition, in the
time since we launched BSIMM11, 15 firms conducted reassessments to update their scorecards, and we assessed
additional business units for eight firms.
We used the resulting observation counts to refine activity placement in the framework, which resulted in moving
four activities to different levels. In addition, we added a newly observed activity, resulting in a total of 122 activities
in BSIMM12.

Changes made for BSIMM12:


• [SM1.2 Create evangelism role and perform internal marketing] became SM2.7
• [T1.5 Deliver role-specific advanced curriculum] became T2.9
• [ST2.1 Integrate opaque-box security tools into the QA process] became ST1.4
• [SE3.5 Use orchestration for containers and virtualized environments] became SE2.7
• [CMVM3.7 Streamline incoming responsible vulnerability disclosure] was added to the model

We also carefully considered but did not adjust [CMVM2.2 Track software bugs found in operations through the fix
process] and [SR2.4 Identify open source] at this time; we will do so if the observation rates continue to increase.
As concrete examples of how the BSIMM functions as an observational model, consider the activities that are now
SM3.3 and SR3.3, which both started as level 1 activities. The BSIMM1 activity [SM1.5 Identify metrics and use them to
drive resourcing] became SM2.5 in BSIMM3 and is now SM3.3 due to its observation rate remaining fairly static while
other activities in the practice became observed much more frequently. Similarly, the BSIMM1 activity [SR1.4 Use
coding standards] became SR2.6 in BSIMM6 and is now SR3.3 as its observation rate has decreased.
To date, no activity has migrated from level 3 to level 1, but we see significant observation increases in recently
added cloud- and DevOps-related activities, with [SE2.6 Ensure cloud security basics] being a probable candidate
after having quickly migrated from level 3 to level 2. See Table 1 for the observation growth in activities added
since BSIMM7.

Observation counts by BSIMM version:

ACTIVITY             BSIMM7   BSIMM8   BSIMM9   BSIMM10   BSIMM11   BSIMM12
SE3.4 (now SE2.5)       0        4       11        14        31        44
SE3.5 (now SE2.7)       –        –        0         5        22        33
SE3.6                   –        –        0         3        12        14
SE3.7 (now SE2.6)       –        –        0         9        36        59
SM3.4                   –        –        –         0         1         6
AM3.3                   –        –        –         0         4         6
CMVM3.5                 –        –        –         0         8        10
ST3.6                   –        –        –         –         0         2
CMVM3.6                 –        –        –         –         0         0
CMVM3.7                 –        –        –         –         –         0

(A dash indicates the activity had not yet been added to the model in that version.)

TABLE 1. NEW ACTIVITIES. Some of the most recently added activities have seen exceptional growth in observation counts, perhaps demonstrating their widespread utility.



[Figure 3 chart: observation counts for [AA3.2] and [CR3.5], plotted from BSIMM7 through BSIMM12.]

FIGURE 3. NUMBER OF OBSERVATIONS FOR [AA3.2] AND [CR3.5] OVER TIME. [AA3.2 Drive analysis results into
standard design patterns] had zero observations during BSIMM7 and BSIMM8, while [CR3.5 Enforce coding standards]
decreased to zero observations over the last five BSIMM iterations. Two other activities have zero observations in BSIMM12: [CMVM3.6 Publish risk data for deployable artifacts], which was added in BSIMM11, and [CMVM3.7 Streamline incoming responsible vulnerability disclosure], which was just added in BSIMM12.

WHERE DO OLD ACTIVITIES GO?


We continue to ponder the question, “Where do activities go when no one does them anymore?” In addition
to [CR3.5 Enforce coding standards], we’ve noticed that the observation rate for other seemingly useful
activities (shown in Figure 3) has decreased significantly in recent years:
• [T3.6 Identify new satellite members through observation] observed in 11 of 51 firms in BSIMM4 but
only four of 128 firms in BSIMM12
• [SFD3.3 Find and publish secure design patterns from the organization] observed in 14 of 51 firms
in BSIMM4 but only five of 128 firms in BSIMM12
• [SR3.3 Use secure coding standards] observed in 23 of 78 firms in BSIMM6 but only 9 of 128 firms
in BSIMM12
We believe there are two primary reasons why observations for some activities decrease toward zero over
time. First, some activities become part of the culture and drive different behavior. For example, the SSG
may not need to select satellite members (see [SM2.3 Create or grow a satellite]) if there is a good stream
of qualified volunteers from the engineering groups. Second, some activities don’t yet fit tightly with
the evolving engineering culture, and the activity effort currently causes too much friction. For example,
continuously going to the engineering teams to find secure design patterns (see [SFD3.3 Find and publish
secure design patterns from the organization]) might unacceptably delay key value streams.
It may also be the case that evolving SSI and DevOps architectures are changing the way some activities
are getting done. For example, if an organization’s use of purpose-built architectures, development kits, and
libraries is sufficiently consistent, perhaps it’s less necessary to lean on prescriptive coding standards (see
[CR3.5 Enforce coding standards]) as a measure of acceptable code.
Just as a point of culture-driven contrast, we see significant increases in observation counts for activities such
as [SE2.5 Use application containers to support security goals], [SE2.7 Use orchestration for containers and
virtualized environments], and [SE2.6 Ensure cloud security basics], likely for similar reasons that we see lower
counts for the activities above. The engineering culture has shifted to be more self-service and to include
increased telemetry that produces more data for everyone to use. We, of course, keep a close watch on the
BSIMM data pool and will make adjustments if and when the time comes, which might include dropping an
activity from the model.



Fifty-two of the current participating firms have been through at least two assessments, allowing us to study how
their initiatives have changed over time. Twenty-one firms have undertaken three BSIMM assessments, seven have
done four, and three have had five assessments (see Figure 4). One North American firm has undertaken its sixth
assessment, continuing its use of the BSIMM as an SSI planning and management tool.

[Figure 4 chart: for North America, United Kingdom/European Union, and Asia/Pacific firms, the percentage of firms with one measurement, two measurements, and three or more measurements.]

FIGURE 4. ONGOING USE OF THE BSIMM IN DRIVING ORGANIZATIONAL MATURITY. Organizations are continuing to
do remeasurements to show that their efforts are achieving the desired results.



BSIMM10 was our first study to formally reflect software security changes driven by engineering-led efforts, meaning efforts originating bottom-up in the development and operations teams (often while they evolve into a DevOps group), rather than originating top-down from a centralized SSG. These results show up here in the form of new activities, in new examples of the way existing activities are conducted, as well as in discussion of the paths organizations might follow to maturation over time. We expanded that analysis for BSIMM12.

This BSIMM12 report also includes a brief "Trends" section (Data Trends Over Time, p. 97) that describes shifts in SSI behavior affecting activity implementation across multiple practices. Larger in scope than an activity, or even a capability that combines multiple activities within a workflow, we believe these trends inform the way that initiatives execute groups of activities within their evolving culture. For example, it's clear that—within engineering-led initiatives particularly—there's a trend toward collecting event-driven security telemetry in addition to or sometimes even rather than conducting point-in-time scans that produce reports. For example, in early iterations of the BSIMM, we referred to the actions of secure configuration tuning of software being deployed as "publish install guides," as this task was most often accomplished with humans reading technical standards. This evolved over time to [SE2.2 Define secure deployment parameters and configurations] due to the overwhelming adoption of orchestration and deployment automation to achieve these same goals with greater reliability. Similarly, the BSIMM previously recognized efforts toward cloud security as an extension of "do host and network security basics"; however, cloud security tooling, best practices, and critical skillsets diverged to the extent that a distinct activity was added as [SE2.6 Ensure cloud security basics]. This trend toward automation fundamentally changes the way organizations interpret and implement multiple activities within the Governance, SSDL Touchpoints, and Deployment domains.

SSI LEADERSHIP
Individuals in charge of day-to-day efforts in the BSIMM12 SSIs we studied have a variety of titles. Examples include:
• Director Application Security
• Director Application Security Architecture
• Director Cybersecurity
• Director Enterprise Security Architecture
• Director InfoSec
• Director Product Assurance
• Director Security Assurance
• Director Security Assessment Services
• Information Security Director
• Security Assurance Manager
• Manager Software Security Engineering
• Manager Vulnerability Management
• Senior Director Application Security
• Senior Director GRC Programs
• Senior Director Product Security
• Senior Information Security Leader
• VP Application Security
• VP Engineering
• VP InfoSec
• VP Product and Application Security
• VP Software

ROLES IN A SOFTWARE SECURITY INITIATIVE
Determining the right activities to focus on and clarifying who is responsible for their implementation are important parts of making any SSI work.
EXECUTIVE LEADERSHIP
Historically, security initiatives that achieve firm-wide impact are
sponsored by a senior executive who creates an SSG where software
security testing and operations are distinctly separate from software delivery. The BSIMM empowers these individuals
to garner resources and provide political support while maturing their groups. Security initiatives without that executive
sponsorship and led solely by development management, by comparison, have historically had little lasting impact
across the firm. Likewise, initiatives spearheaded by resources from an existing infrastructure security group usually
generate too much new friction when it comes time to interface with development groups.
By identifying a senior executive and putting them in charge of software security directly, the organization can
address two “Management 101” concerns: accountability and empowerment. It also creates a place where software
security can take root and begin to thrive. Whether a firm’s current SSI exists primarily in an engineering group or is
centrally managed, the BSIMM serves a common purpose by providing leaders insight into the activities firms such as
their own have adopted and institutionalized. While vendors’ marketing and conference submission cycles generate
a wealth of new ideas to try, the BSIMM study serves to reduce the guesswork necessary to separate durable activities
from clever fads.



As shown in Figure 5, we observe a wide spread in exactly where the BSIMM12 SSGs are situated.

NEAREST EXECUTIVE TO SSG (number of firms):
• CISO (Chief Information Security Officer): 67
• TO (tech org: eng, appsec, apprisk, etc.): 19
• CTO (Chief Technology Officer): 18
• CSO (Chief Security Officer): 11
• CIO (Chief Information Officer): 4
• COO (Chief Operating Officer): 3
• CRO (Chief Risk Officer): 2
• CFO (Chief Financial Officer): 2
• CAO (Chief Assurance Officer): 1
• CPO (Chief Privacy Officer): 1

FIGURE 5. NEAREST EXECUTIVE TO SSG. Although many SSGs seem to be gravitating toward having a CISO as their nearest
executive, we see a variety of executives overseeing software security efforts.

Of course, not all people with the same title perform, prioritize, enforce, or otherwise provide resources for the same
efforts the same way across various organizations.
The significant number of SSGs reporting through a technology organization, in addition to those reporting through
the CTO, remains relatively flat. A future increase here might reflect a growth of engineering-led initiatives chartered
with “building security in” to the software delivery process, rather than existing within a compliance-centric mandate.
Organizational alignment for software security is evolving rapidly, and there are constant natural reorganizations over
time, so we look forward to next year’s data.
In BSIMM-V, we saw CISOs as the nearest executive in 21 of 67 firms, which grew in BSIMM6 to 31 of 78, and again for
BSIMM7 with 52 of 95. Since then, the percentage has remained relatively flat, as shown in Figure 6.



[Figure 6 chart: percentage of SSGs (0% to 70%) with the CISO as their nearest executive, plotted from BSIMM-V through BSIMM12.]

FIGURE 6. PERCENTAGE OF SSGS WITH THE CISO AS THEIR NEAREST EXECUTIVE. Assuming new CISOs generally
receive responsibilities for SSIs, these data suggest that CISO role creation is also flattening out.

SOFTWARE SECURITY GROUP (SSG)


The second most important role in an SSI after the senior executive is the SSG leader and their team. Each of the 128
initiatives we describe in BSIMM12 has an SSG—a true organizational group dedicated full time to software security.
In fact, without an SSG, successfully carrying out BSIMM activities across a software portfolio is very unlikely, so the
creation of an SSG is a crucial first step. The best SSG members are software security people, but they are often hard
to find. If an organization must create a software security team from scratch, it seems best to start with developers
and teach them about security. Starting with IT security engineers and attempting to teach them about software,
software development lifecycles (SDLCs), defect prioritization, and everything else in the software universe usually fails
to produce the desired results.
SSGs come in a variety of shapes and sizes, but SSGs in the more mature SSIs appear to include people with deep
coding experience, architectural skill, and scripting expertise. Code review is an important best practice, but to
perform code review, the team must actually understand code (not to mention the huge piles of security bugs
therein). That said, the best code reviewers sometimes make poor software architects, so asking them to perform
an architecture analysis will likely fail to produce useful findings. As organizations migrate toward their view of
DevSecOps, SSGs are starting to build their own software in the form of security automation, defect discovery in CI/CD
pipelines, and infrastructure- and governance-as-code.

The people on the SSG team


SSGs are often asked to mentor, train, and work directly with hundreds of developers, so communication skills,
teaching ability, and practical knowledge are must-haves for at least some of the SSG staff. As the technology
landscape changes, leading SSGs make a point of maintaining their depth in evolving disciplines such as digital
transformation, cloud infrastructure, CI/CD, DevOps, DevSecOps, privacy, supply chains, and so on. Essentially, SSGs
are groups of people—whether one person, 10, or 100—who must improve the security posture of the software
portfolio, so management skills, risk management perspectives, an ability to contribute to engineering value streams,
and an ability to break silos are critical success factors.



Although no two of the 128 firms we examined had exactly the same SSG structure, we did observe some
commonalities that are worth mentioning. At the highest level, SSGs seem to come in five flavors:
• Organized to provide software security services
• Organized around setting and verifying adherence to policy
• Designed to mirror business unit organizations
• Organized with a hybrid policy and services approach
• Structured around managing a matrixed team of experts doing software security work across the development or
engineering organizations

Some SSGs are highly distributed across a firm whereas others are centralized. Even within the most distributed
organizations, we find that software security activities are coordinated by an SSG. This is true even if that SSG is staffed
by a single leader with a title such as Security Program Manager or Product Security Manager, or execution of specific
tasks is delegated to security capability owners (e.g., penetration testing teams, security testing teams, software
security architects).

SSG team dynamics and structures


When we look across all the SSGs in our study, we do see several common SSG teams or communities:
• People dedicated to policy, strategy, and metrics
• Internal services groups that (often separately) cover tools and penetration testing
• Development teams focusing on security frameworks
• SDLC owners and integrators
• Incident response groups
• Groups responsible for training development and delivery
• Externally-facing marketing and communications groups
• Software vendor and supply chain risk management groups
• Groups responsible for security efforts within CI/CD pipelines, clouds, and DevOps teams

As more firms emphasize software delivery speed and agility, whether under the cultural aegis of DevOps or not,
we’re increasingly seeing SSG structures manifest organically within software teams themselves. Within these teams,
the individuals focused on security conduct activities along the critical path to delivering value to customers. Whether
staff are borrowed from corporate security or employed by the engineering team directly, we see individuals taking
on roles such as product security engineer or security architect, or possessing functional titles such as Site Reliability
Engineer, DevOps Engineer, or similar.

SSG team responsibilities


Within these roles, some individuals bear responsibilities that are often much greater than those of security champions,
including comparison and selection of security tools, definition of secure design guidelines and acceptable
remediation actions, and so on. In the spirit of agility, these individuals are often contributing significant amounts
of code to delivery, including code that operates continuous build and integration and that can go beyond simply
adding defect discovery steps to the pipeline. They could even implement security features or author design patterns
as part of the delivery team, as well as build orchestration such as infrastructure-as-code for secure packaging,
delivery, and operations. These engineers might also contribute monitoring and logging code that aids operations
security and incident response. Whereas traditional security champions typically act tactically to execute activities
defined by a central SSG, these embedded product security engineers (regardless of moniker) tend to establish and
implement, then align other development teams to their improvements, acting in a sense as a distributed SSG.
Table 2 shows some SSG-related statistics across the 128 BSIMM12 firms, but note that a handful of large outliers affect the numbers this year.



THE SOFTWARE SECURITY GROUP
• SSG size for the 128 BSIMM12 firms: average is 22.2, largest is 892, smallest is 1, median is 7.0.
• SSG member-to-developer ratio for the 128 BSIMM12 firms: average of 2.59% (1 SSG member for each 39 developers), median of 0.74% (1 SSG member for each 135 developers).
• SSG member-to-developer ratio for the 50 BSIMM12 firms with 500 or fewer developers: largest is 51.4%, smallest is 0.33%, median is 1.7%.
• SSG member-to-developer ratio for the 78 BSIMM12 firms with more than 500 developers: largest is 14.9%, smallest is 0.08%, median is 0.51%.

TABLE 2. THE SOFTWARE SECURITY GROUP. We calculated the ratio of full-time SSG members to developers by averaging
the ratios of each participating firm. Looking at average SSG size, it seems that while SSGs can scale with development size in
smaller organizations, the ability to scale quickly drops off in larger organizations.
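
The averaging described in the caption (a per-firm ratio computed first, then averaged across firms) helps explain why the average and median can diverge sharply when a few outliers are present. A small Python sketch with made-up firm data, not the BSIMM data set, shows the calculation:

# Sketch of the Table 2 calculation with made-up data: compute each firm's
# SSG-member-to-developer ratio, then take the average and median of those
# ratios (an average of ratios, not a ratio of totals).
from statistics import mean, median

# (SSG size, developer count) for a handful of hypothetical firms.
firms = [(3, 150), (10, 1200), (1, 40), (25, 9000), (7, 600)]

ratios = [ssg / devs for ssg, devs in firms]

avg = mean(ratios)
med = median(ratios)

print(f"average ratio: {avg:.2%} (about 1 SSG member per {1 / avg:.0f} developers)")
print(f"median ratio:  {med:.2%} (about 1 SSG member per {1 / med:.0f} developers)")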

SATELLITE
In addition to the SSG, many SSIs have identified individuals (often developers, testers, architects, and DevOps
engineers) who are a driving force in improving software security but are not directly employed in the SSG. When
these individuals carry out software security activities, we collectively refer to them as the satellite. Many organizations
refer to this group as their software security champions.
Satellite members are sometimes chosen for software portfolio coverage (with one or two members in each
engineering group) but are sometimes chosen for other reasons, such as technology stack coverage or geographical
reach. They’re also sometimes more focused on specific issues, such as cloud migration and IoT architecture. We’re
even beginning to see some organizations use satellite members to bootstrap the “Sec” functions they require for
transforming a given engineering team from DevOps to DevSecOps.
One of the most critical roles that a satellite can play is to act as a sounding board for the feasibility and practicality
of proposed lifecycle governance changes (e.g., new gates, tools, guardrails) or the expansion of software security
activities (e.g., broader coverage, additional rules, new automation). Understanding how SSI governance might affect
project timelines and budgets helps the SSG proactively identify potential frictions and minimize them.
In many organizations, satellite members are likely to self-select into the group to bring their particular expertise to a
broader audience. These individuals often have (usually informal) titles such as CloudSec, OpsSec, ContainerSec, and
so on, with a role that actively contributes security solutions into engineering processes. These solutions are often
infrastructure- and governance-as-code in the form of scripts, sensors, telemetry, and other friction-reducing efforts.
In any case, successful satellite groups get together regularly to compare notes, learn new technologies, and expand
stakeholder understanding of the organization’s software security challenges. Similar to a satellite—and mirroring the
community and culture of open source software—we’re seeing an increase in motivated individuals in engineering-
led organizations sharing digital work products, such as sensors, code, scripts, tools, and security features, rather than,
for example, getting together to discuss enacting a new policy. Specifically, these proactive engineers are working
bottom-up and delivering software security features and awareness through implementation regardless of whether
guidance is coming top-down from a traditional SSG.
To achieve scale and coverage, identifying and fostering a strong satellite is important to the success of many SSIs
(but not all of them). Some BSIMM activities target the satellite explicitly, as shown in Figure 7.



• Top 1/3 of firms by BSIMM score: 76% have a satellite
• Middle 1/3 of firms: 54% have a satellite
• Bottom 1/3 of firms: 23% have a satellite

FIGURE 7. PERCENTAGE OF FIRMS THAT HAVE A SATELLITE ORGANIZED BY BSIMM SCORE. Presence of the satellite
and average score appear to be correlated, but we don’t have enough data to say which is the cause and which is the effect. Of
the 10 BSIMM12 firms with the lowest score, only one has a satellite.

Sixty-nine percent of the 52 BSIMM12 firms that have been assessed more than once have a satellite, while 62% of the
firms on their first assessment do not. Many firms that are new to software security take some time to identify and
develop a satellite.
These data suggest that as an SSI matures, its activities become distributed and institutionalized into the
organizational structure, and perhaps even into engineering automation as well, requiring an expanded satellite to
provide expertise and be the local voice of the SSG. Among our population of 128 firms, initiatives tend to evolve from
centralized and specialized to decentralized and distributed (with an SSG at the core orchestrating things).

EVERYBODY ELSE
SSIs are truly cross-departmental efforts that involve a variety of stakeholders:
• Builders, including developers, architects, and their managers, must practice security engineering, taking at least
some responsibility for both the definition of “secure enough” as well as ensuring what’s delivered achieves the
desired posture. Increasingly, an SSI reflects collaboration between the SSG and these engineering-led teams
coordinating to carry out the activities described in the BSIMM.
• Testers typically conduct functional and feature testing, but some move on to include security testing in their test
plans. More recently, some testers are beginning to anticipate how software architectures and infrastructures
can be attacked and are working to find an appropriate balance between implementing automation and manual
testing to ensure adequate security testing coverage. While the term “testers” usually refers to quality assurance
(QA) teams, note that some development teams create many of the functional tests (and perhaps some of the
security tests) that are applied to software.
• Operations teams must continue to design, defend, and maintain resilient environments. As you will see in the
Deployment domain of the SSF, software security doesn’t end when software is “shipped.” In accelerating trends,
development and operations are collapsing into one or more DevOps teams, and the business functionality
delivered is becoming very dynamic in the operational environment. This means an increasing amount of their
security effort, including infrastructure controls and security configuration, is becoming software-defined.
• Administrators must understand the distributed nature of modern systems, create and maintain secure builds,
and begin to practice the principle of least privilege, especially when it comes to the applications they host or
attach to as services in the cloud.



• Executives and middle management, including business owners and product managers, must understand how
early investment in security design and security analysis affects the degree to which users will trust their products.
Business requirements should explicitly address security needs, including security-related compliance needs. Any
sizable business today depends on software to work; thus, software security is a business necessity. Executives are
also the group that must provide resources for new digital transformation efforts that directly improve software
security and must actively support digital transformation efforts related to infrastructure and governance-as-
code. For a study of CISOs and their organizations, see https://www.synopsys.com/software-integrity/resources/analyst-reports/ciso.html.
• Vendors, including those who supply on-premise products, custom software, and software-as-a-service, are
increasingly subjected to service-level agreements (SLAs) and reviews (such as the upcoming Payment Card
Industry [PCI] Secure Software Lifecycle Standard and the BSIMMsc) that help ensure that products are the result
of an SSDL. Of course, vendor or not, the open source management process also requires close attention.

BSIMM TERMINOLOGY
Nomenclature has always been a problem in computer security, and software security is no
exception. Several terms used in the BSIMM have particular meaning for us. The following list
highlights some of the most important terms used throughout this document:

• ACTIVITY. Actions or efforts carried out or facilitated by the SSG as part of a practice. Activities
are divided into three levels in the BSIMM based on observation rates.

• CAPABILITY. A set of BSIMM activities spanning one or more practices working together to
serve a cohesive security function.

• CHAMPION. Interested and engaged developers, cloud security engineers, deployment engineers,
architects, software managers, testers, and people in similar roles who have a natural affinity for
software security and contribute to the security posture of the organization and its software.

• DOMAIN. One of the four categories our framework is divided into, i.e., Governance, Intelligence,
SSDL Touchpoints, and Deployment.

• PRACTICE. BSIMM activities are organized into 12 categories or practices. Each of the four
domains in the SSF has three practices.

• SATELLITE. A group of individuals, often called champions, that is organized and leveraged by an SSG.

• SECURE SDL (SSDL). Any software lifecycle with integrated software security release conditions
and activities.

• SOFTWARE SECURITY FRAMEWORK (SSF). The basic structure underlying the BSIMM,
comprising 12 practices divided into four domains.

• SOFTWARE SECURITY GROUP (SSG). The internal group charged with carrying out and
facilitating software security. As SSIs evolve, the SSG might be entirely a corporate team, entirely
an engineering team, or an appropriate hybrid. The team’s name might also have an appropriate
organizational focus, such as application security group or product security group. According to
our observations, the first step of an SSI is to form an SSG. (Note that in BSIMM11, we expanded
the definition of the SSG, a fundamental term in the BSIMM world, from implying that the
group is always centralized in corporate to specifically acknowledging that the group may be a
federated collection of people in corporate, engineering, and perhaps elsewhere. When reading
this document and especially when adapting the activities for use in a given organization, use
this expanded definition.)

• SOFTWARE SECURITY INITIATIVE (SSI). An organization-wide program to instill, measure, manage, and evolve software security activities in a coordinated fashion. Also known in some literature as an Enterprise Software Security Program.



MORE ON BUILDERS AND TESTERS
Builders
• Traditionally, as an organization matures, the SSG works to empower builders so they can carry
out most BSIMM activities themselves, with the SSG helping in special cases and providing
oversight, such as with integrating various defect discovery methods into CI/CD toolchains. In
some engineering-led efforts, the SSG role has changed dramatically, serving to publish and
spread the activity that engineering-led stakeholders plan, implement, and improve of their
own accord.
• Today, especially in engineering-led organizations, builders often invite SSG members to
participate directly in engineering processes by delivering software and other empowering
value rather than written policy and rules. In some cases, the builders are creating their own
security microcosms and pressing forward when SSG involvement might be too cumbersome.
We often don’t explicitly point out whether a given activity is to be carried out by the SSG,
builders, or testers. Each organization should come up with an approach that makes sense and
accounts for its own workload and software lifecycles.

Testers
• Whether it's DAST tools in QA, access control testing of new cloud native platforms, test cases driven by a framework such as Cucumber or Selenium, or more strategic testing of failure through frameworks like Chaos Monkey, some testing regimes are beginning to incorporate nontrivial security test cases. Facilities provided by cloud service providers actively encourage consideration of failure and abuse across the full stack of deployment, such as a microservice component, a data center, or even an entire region going dark.
• Similarly, QA practices will have to consider how systems are configured and deployed by, for
example, testing configurations for virtualization and cloud components. In many organizations
today, software is built in anticipation of failure, and the associated test cases go directly into
regression suites run by QA groups or run directly through automation. Increasingly, testers
participate in feedback loops with operations teams to understand runtime failures and
continuously improve test coverage.



INTERPRETING BSIMM MEASUREMENTS
The most important use of the BSIMM is as a measuring stick to determine where your SSI currently stands relative
to other firms. A direct comparison of all 122 activities is perhaps the most obvious use of the BSIMM. You can simply
note which activities you already have in place, find them in the SSF, and then build your scorecard and compare it to
the BSIMM12 scorecard.
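To illustrate, here is a minimal scorecard-comparison sketch in Python. It is not part of the BSIMM itself; the handful of activity counts shown are taken from the BSIMM12 scorecard, and the data structures and function names are hypothetical placeholders.

# Minimal sketch: compare your observed activities against BSIMM12 observation counts.
# Only a few activities are listed; extend the dictionary to cover all 122 for real use.
BSIMM12_OBSERVATIONS = {   # activity ID -> firms observed performing it (out of 128)
    "SM1.1": 91, "SM1.4": 118, "CP1.2": 114,
    "AA1.1": 113, "CR1.4": 102, "PT1.2": 98,
}
OUR_ACTIVITIES = {"SM1.1", "AA1.1", "PT1.2"}   # activities your SSI performs today

def build_scorecard(our_activities, observations):
    # Pair each activity with whether it was observed here and its BSIMM12 popularity.
    rows = [(act, act in our_activities, count) for act, count in observations.items()]
    return sorted(rows, key=lambda row: row[2], reverse=True)

for activity, observed, count in build_scorecard(OUR_ACTIVITIES, BSIMM12_OBSERVATIONS):
    marker = "1" if observed else "-"
    print(f"{activity:8} {marker}  observed in {count}/128 BSIMM12 firms")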
Table 3 (below) depicts an example firm that performs 41 BSIMM activities (noted as 1s in its scorecard columns), including nine activities that are the most common in their respective practices (marked with an asterisk in the table). Note the firm does not perform the most commonly observed activities in the other three practices and should take some time to determine whether these are necessary or useful to its overall SSI. The BSIMM12 columns show the number of observations (currently out of 128) for each activity, allowing the firm to understand the general popularity of an activity among the 128 BSIMM12 firms.
Once you have determined where you stand with activities, you can devise a plan to enhance practices with other
activities or perhaps scale current activities across more of the software portfolio. By providing actual measurement data
from the field, the BSIMM makes it possible to build a long-term plan for an SSI and track progress against that plan.
Note that there’s no inherent reason to adopt all activities in every level for each practice. Adopt the ones that make
sense for your organization and ignore those that don’t—but it’s a wise idea to revisit those choices periodically. Once
they’ve adopted an activity set, most organizations then begin to work on the depth, breadth, and cost-effectiveness
(e.g., via automation) of each activity in accordance with their view of the associated risk.
In our work using the BSIMM to assess initiatives, we found that creating a spider chart yielding a high-water mark
(based on three activity levels per practice) is sufficient to obtain a low-resolution feel for maturity, especially when
working with data from a particular vertical market. We assign the high-water mark with a simple algorithm: if we
observed a level 3 activity in a given practice, we assign a high-water mark of “3” without regard for whether any level
2 or 1 activities were also observed. We assign a high-water mark of 2, 1, or 0 similarly.
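As a minimal sketch of that high-water mark algorithm (the practice-to-activity mapping shown is only an illustrative subset, not the full BSIMM12 activity list):

# Minimal sketch: derive per-practice high-water marks from observed activities.
OBSERVED = {"SM1.1", "SM2.7", "AA1.1", "AA3.3", "PT1.2"}   # example observations

PRACTICE_ACTIVITIES = {   # illustrative subset; a full mapping covers all 122 activities
    "Strategy & Metrics":    ["SM1.1", "SM2.7", "SM3.3"],
    "Architecture Analysis": ["AA1.1", "AA2.2", "AA3.3"],
    "Penetration Testing":   ["PT1.2", "PT2.2", "PT3.1"],
}

def high_water_mark(observed, activities):
    # The level is the digit before the period, e.g., "AA3.3" -> 3; 0 if nothing observed.
    levels = [int(act.split(".")[0][-1]) for act in activities if act in observed]
    return max(levels, default=0)

for practice, acts in PRACTICE_ACTIVITIES.items():
    print(f"{practice:22} high-water mark = {high_water_mark(OBSERVED, acts)}")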
One meaningful use of a spider chart comparison is to chart your own high-water mark against the graphs we’ve
published to see how your initiative stacks up. In Figure 8, we have plotted data from the example firm against the
BSIMM12 AllFirms data.
Recall that the breakdown of activities into levels for each practice reflects only the observation frequency for the
activities. As such, the levels might illustrate a natural progression through the activities associated with each
practice, but it isn’t necessary to carry out all activities in a given level before moving on to activities at a different
level in the same practice. That said, the levels we use hold water under statistical scrutiny. Level 1 activities (often
straightforward and universally applicable) are those that are most commonly observed, level 2 (often more difficult to
implement and requiring more coordination) are less frequently observed, and level 3 activities (usually more difficult
to implement and not always applicable) are more rarely observed. For many organizations, maturity improves
directly as a result of scaling activity efforts across more of the software portfolio and the stakeholders as opposed to
aiming specifically at implementing level 3 activities just because they’re level 3.



GOVERNANCE | INTELLIGENCE | SSDL TOUCHPOINTS | DEPLOYMENT
(Each cell: activity, count of BSIMM12 firms [out of 128] observed performing it, and a 1 if the example firm performs it.)

STRATEGY & METRICS | ATTACK MODELS | ARCHITECTURE ANALYSIS | PENETRATION TESTING
[SM1.1] 91 1 | [AM1.2]* 77 | [AA1.1]* 113 1 | [PT1.1]* 111
[SM1.3] 81 | [AM1.3] 41 | [AA1.2] 49 1 | [PT1.2] 98 1
[SM1.4]* 118 | [AM1.5] 61 1 | [AA1.3] 37 1 | [PT1.3] 88 1
[SM2.1] 63 | [AM2.1] 14 | [AA1.4] 62 | [PT2.2] 33
[SM2.2] 60 | [AM2.2] 11 1 | [AA2.1] 29 | [PT2.3] 34
[SM2.3] 60 | [AM2.5] 13 1 | [AA2.2] 28 1 | [PT3.1] 23 1
[SM2.6] 62 | [AM2.6] 10 | [AA3.1] 16 | [PT3.2] 12
[SM2.7] 64 1 | [AM2.7] 16 | [AA3.2] 2 |
[SM3.1] 22 | [AM3.1] 5 | [AA3.3] 11 |
[SM3.2] 10 | [AM3.2] 4 | |
[SM3.3] 21 | [AM3.3] 6 | |
[SM3.4] 6 | | |

COMPLIANCE & POLICY | SECURITY FEATURES & DESIGN | CODE REVIEW | SOFTWARE ENVIRONMENT
[CP1.1] 98 1 | [SFD1.1]* 102 1 | [CR1.2] 80 1 | [SE1.1] 80
[CP1.2]* 114 1 | [SFD1.2] 83 1 | [CR1.4]* 102 1 | [SE1.2]* 117 1
[CP1.3] 88 1 | [SFD2.1] 33 | [CR1.5] 49 | [SE2.2] 48 1
[CP2.1] 55 | [SFD2.2] 55 | [CR1.6] 32 1 | [SE2.4] 32
[CP2.2] 49 | [SFD3.1] 16 | [CR1.7] 51 | [SE2.5] 44 1
[CP2.3] 67 | [SFD3.2] 15 | [CR2.6] 25 1 | [SE2.6] 59 1
[CP2.4] 54 | [SFD3.3] 5 | [CR2.7] 17 | [SE2.7] 33 1
[CP2.5] 74 1 | | [CR3.2] 9 | [SE3.2] 13
[CP3.1] 24 | | [CR3.3] 4 | [SE3.3] 9
[CP3.2] 18 | | [CR3.4] 1 | [SE3.6] 14
[CP3.3] 6 | | [CR3.5] 0 |

TRAINING | STANDARDS & REQUIREMENTS | SECURITY TESTING | CONFIG. MGMT. & VULN. MGMT.
[T1.1]* 76 1 | [SR1.1] 90 1 | [ST1.1]* 100 1 | [CMVM1.1]* 108 1
[T1.7] 53 1 | [SR1.2] 88 | [ST1.3] 87 1 | [CMVM1.2] 96
[T1.8] 46 | [SR1.3]* 99 1 | [ST1.4] 50 | [CMVM2.1] 92 1
[T2.5] 39 | [SR2.2] 64 1 | [ST2.4] 19 | [CMVM2.2] 93
[T2.8] 27 1 | [SR2.4] 74 | [ST2.5] 21 | [CMVM2.3] 61
[T2.9] 35 1 | [SR2.5] 55 1 | [ST2.6] 15 | [CMVM3.1] 4
[T3.1] 6 | [SR3.1] 35 | [ST3.3] 8 | [CMVM3.2] 11
[T3.2] 23 | [SR3.2] 13 | [ST3.4] 2 | [CMVM3.3] 14
[T3.3] 23 | [SR3.3] 9 | [ST3.5] 2 | [CMVM3.4] 20 1
[T3.4] 24 | [SR3.4] 20 | [ST3.6] 2 | [CMVM3.5] 10 1
[T3.5] 9 | | | [CMVM3.6] 0
[T3.6] 4 | | | [CMVM3.7] 0

ACTIVITY: the 122 BSIMM12 activities, shown in 4 domains and 12 practices. BSIMM12 FIRMS: count of firms (out of 128) observed performing each activity. EXAMPLE FIRM: 1 indicates the activity was observed in this assessment; * marks the most common activity within each practice.

TABLE 3. BSIMM12 EXAMPLEFIRM SCORECARD. A scorecard is helpful for understanding efforts currently underway. It also helps visualize the activities observed by practice and by level to serve as a guide on where to focus next.



[Figure 8: a spider chart plotting the high-water mark (0.0 to 3.0) for each of the 12 practices, comparing AllFirms (128) against ExampleFirm.]

FIGURE 8. ALLFIRMS VS. EXAMPLEFIRM SPIDER CHART. Charting high-water mark values provides a low-resolution view of maturity that can be useful for comparisons between firms, between business units, and within the same firm over time.

By identifying activities from each practice that are useful for you, and by ensuring proper balance with respect to
domains, you can create a strategic plan for your SSI moving forward. Note that most SSIs are multiyear efforts with
real budget, mandate, and ownership behind them. In addition, while all initiatives look different and are tailored to
fit a particular organization, all initiatives share common core activities (see “Table 4. Most Common Activities Per
Practice” in Part Three).



PART TWO: USING THE BSIMM

USING THE BSIMM
The BSIMM is not a single-purpose SSI benchmarking tool—it also eases management and evolution for anyone in
charge of software security, whether that person is currently in a central governance-focused position or in a more local
engineering-focused team. Firms of all maturity levels, sizes, and verticals use the BSIMM as a reference guide when
building new SSIs from the ground up and when evolving their initiatives through various maturity phases over time.

SSI PHASES
No matter an organization’s culture, all firms strive to reach similar waypoints on their software security journey. Over
time, we find that SSIs typically progress through three states:
• EMERGING. An organization is tasked with booting a new SSI from scratch or formalizing nascent or ad hoc security activities into a holistic strategy. An emerging SSI has defined its initial strategy, implemented foundational activities (e.g., those observed most frequently in each practice), acquired some resources, and might even have a roadmap for the next 12 to 24 months of its evolution. SSI leaders working on a program's foundations are often resource-constrained on both people and budget, so they might create a small SSG that uses compliance requirements or other executive mandates as the initial drivers to continue adding activities. Managing friction with key stakeholders who are resistant to adopting even the most basic process discipline requires strong executive support.
• MATURING. An organization with an existing or emerging software security approach is connected to executive
expectations for managing software security risk and progressing along a roadmap to scale security capabilities.
A maturing SSI can be adding and improving activities and capabilities across a wide range of qualitative
dimensions. Most commonly, they are making changes to:
o Reduce friction across business and development stakeholders
o Protect people’s productivity gains through automation investments
o Expand the breadth and depth of coverage of key activities such as defect discovery efforts
o Refactor existing efforts to happen earlier or operate more expediently within the development lifecycle
o Reallocate efforts in line with opportunity cost to maximize positive results across the array of capabilities
o Realize greater impact of defect discovery through escape analysis with an emphasis on finding systematic
solutions to systemic problems
o Examine the net impact of real-world attacks on resiliency
In our experience, strong programs make consistent, incremental improvements in the development lifecycle and
key security integrations based on the real-world data they see. Even an exhaustive collection of activities with
strong governance will fail if the program does not embrace a core tenet of ongoing improvement.
• ENABLING. Organizations that have consistently matured and made investments to overcome growing pains often reach the point where the goals of digital transformation efforts, such as adoption of new lifecycles, are harmonized with the evolutionary needs of the SSI. For instance, reducing the time to remediate bugs found
in operations is often supported by the shorter release cycles enabled by DevOps transformations. Similarly,
standardizing technology stacks and developing a robust library of reusable security features expedites both
development (by providing reusable building blocks) and security activities (by reducing the footprint of the
codebase and avoiding duplicated efforts). Organizations at the enabling level are usually fanatical about
automation and about protecting their most critical resources—their people—so they have time to tackle
security innovation.
It’s compelling to imagine that organizations could self-assess and determine that by doing X number of activities,
they qualify as emerging, maturing, or enabling. However, experience shows that SSIs can expedite reaching a
“maturing” stage by focusing on the activities that are right for them (e.g., to meet external contractual or compliance
requirements) without regard for the total activity count. This is especially true when considering software portfolio
size and the relative complexity of scaling and maturing some activities across 1, 10, 100, and 1,000 applications.
In addition, organizations don’t always progress from emerging to enabling in one direction or in a straight path.
We have seen SSIs form, break up, and re-form over time, so one SSI might go through the emerging cycle a few
times over the years. Moreover, an SSI’s capabilities might not all progress through the same states at the same time.
We’ve noted cases where one capability—vendor management, for example—might be emerging, while the defect
management capability is maturing, and the defect discovery capability is at an enabling stage. There is also constant
change in tools, skill levels, external expectations, attackers, attacks, resources, culture, and everything else. Pay
attention to the relative frequency with which the BSIMM activities are observed across all the participants but use
your own metrics to determine if you’re making the progress that’s right for you.



TRADITIONAL SSI APPROACHES
Whether implicitly or explicitly, organizations choose the path for their software security journey by tailoring goals,
methods, tools, resources, and approaches to their individual cultures. There have always been two distinct cultures
in the BSIMM community:
• Organizations where the SSI was purposefully started in a central corporate group (e.g., under a CISO) and
focused on compliance, testing, and risk management. This path is seen most often in regulated industries such
as banking, insurance, FinTech, and healthcare but is also seen in some ISV and technology firms.
• Organizations where the SSI was started by engineering leadership (e.g., senior application architects) who then
created a centralized group (e.g., under a CTO or other technology executive) to set some development process,
create and manage security standards, and ensure the silos of engineering, testing, and operations are aware of
and adhere to security expectations. This path is most often seen in technology firms, cloud, and ISVs but is also
seen in other verticals.
Regardless of origin and whether the SSG currently lives in a corporate group or in engineering, both cultures
historically ended up with an SSI that is driven by a centralized, dedicated SSG whose function is to ensure
that appropriate software security activities are happening across the portfolio. That is, nearly all SSIs today are
governance-led, regardless of whether the SSI genesis was in the executive team or the software architecture team.
They practice proactive risk management through assurance-based activities, resulting in the creation of “rules that
people must follow” and “expectations software must meet” (e.g., policy, standards, checkpoints, sensors). This has
almost always resulted in a defined process that includes prescriptive testing at various times in the lifecycle and
checkpoints where engineering processes might be derailed.
When software security activities are developed within the context of an engineering team, they are often well-
tuned to that team’s risk tolerance, which might generate friction with groups like audit, compliance, or even other
engineering teams. Where central groups dictate policy and activities that make a lot of risk management sense to
the SSG, those requirements often just appear as friction to the development, testing, and operations groups. We
perceive neither culture as better for a given organization but point out that in either case, the SSG must quickly and
proactively reach out to other departments, normalize activities across disparate risk and usability tolerances, and
socialize the resulting culture organization-wide.

THE NEW WAVE IN ENGINEERING CULTURE


We’re once again seeing a wave of software security efforts emerging from engineering teams. These teams
are usually responsible for either delivering a product or value stream—such as is common within ISVs—or for
maintaining a technology domain—such as the “cloud security group” or a part of some digital transformation group.
At least two large factors are driving these new engineering-led efforts, which have been rapidly expanding over the
last few years:
• The confluence of process friction, unpredictable impacts on delivery schedules, adversarial internal relationships,
and a growing number of human-intensive processes from existing SSIs
• The demands and pressures from modern software delivery practices, be they cultural such as Agile and DevOps,
or technology-based such as cloud- and orchestration-based
One imperative common to both cultural and technology shifts is engineer self-service, typically seen as self-service
IT (cloud), self-service configuration and deployment (DevOps), and self-service development (open source use and
continuous integration). A natural result of this self-service culture is engineering groups placing more emphasis on
automation as opposed to human-driven tasks. Automation in the form of software-defined sensors and checkpoints
is removing unexpected variability in human processes but is often also replacing human discretion, conversations,
and risk decisions that should almost always involve multiple stakeholders. The effect is that many application
lifecycle processes are moving faster whether they’re ready to do risk management at that speed or not. And, perhaps
most importantly, all this software security effort is frequently happening independently of the experience and
lessons learned that a centralized SSG might provide.
The governance-driven approach we’ve seen for years along with the emerging engineering-driven efforts are
increasingly coexisting within the same organization. In addition, they often have competing objectives, even while
pulling traditional governance-driven programs into modern and evolving hybrids. Figure 9 shows this ongoing
SSG evolution.



[Figure 9: a timeline showing early 2000s executive-driven (compliance-oriented) and engineering-driven (procedure-oriented) efforts converging circa 2006 into centralized governance (SSG); today, second-generation engineering-led efforts (DevOps); and soon, corporate (GRC), modern hybrid (DevSecOps), and engineering (self-service) approaches spanning corporate and engineering.]

FIGURE 9. SSG EVOLUTION. These groups might have started in corporate or in engineering but, in general, settled on enforcing compliance with tools. The new wave of engineering-led efforts is shifting where SSGs live, what they focus on, and how they operate.

The DevOps movement has put these tensions center stage for SSG leaders to address. Given different objectives,
we find that the outcomes desired by these two approaches are usually very different. Rather than the top-down,
proactive risk management and “rules that people must follow” style of governance-minded teams, these newer
engineering-minded teams are more likely to “prototype good ideas” for securing software, which results in the
creation of even more code and infrastructure on the critical path (e.g., security features, home-spun vulnerability
discovery, security guardrails). Here, security is just another aspect of quality, and availability is just another aspect
of resilience.
To keep pace with both software development process changes (e.g., CI/CD adoption) and technology architecture
changes (e.g., cloud, container, and orchestration adoption), engineering-led efforts are independently evolving
both how they apply software security activities and, in some cases, what activities they apply. The changes these
engineering-led teams are making include downloading and integrating their own security tools, spinning up
cloud infrastructure and virtual assets as they need them, following policy on the use of open source software in
applications while routinely downloading dozens or hundreds of other open source packages to build and manage
software and processes, and so on. Engineering-led efforts and their associated fast-paced evolutionary changes
are putting governance-driven SSIs in a race to retroactively document, communicate, and even automate the
knowledge they hold.
In addition to centralized SSI efforts and engineering-led efforts, cloud service providers, software pipeline and
orchestration platforms, and even QA tools have all begun adding their view of security as a first-class citizen of their
feature sets, user guides, and knowledge bases. For example, organizations are seeing platforms like GitHub and
GitLab beginning to compete vigorously using security as a differentiator, leading both providers to create publicly
available security documentation with a 12- to 36-month vision. Evolving vendor-provided features might be signaling
to the marketplace and adopting organizations that vendors believe security must be included in developer tools
and that engineering-led security initiatives should feel comfortable relying on these platforms as the basis of their
security telemetry and even their governance workflows.



Given the frequent focus by centralized governance groups on policy adherence and test results as measures of
success, such groups have historically not recognized the specific progress made by engineering-led teams. For their
part, engineering teams do not frequently broadcast improvements to vulnerability discovery or security engineering
until those changes are shareable as reusable code. We observe that the evolution of pipeline platform tools and their
security features, including governance workflows and reporting dashboards, is beginning to repair the disconnect.
The most forward-looking engineering-led efforts are making technology-specific changes to their security activities-as-code at the same speed as their development and cloud technology changes. Similarly, centralized SSI owners
are making changes to their policies, standards, and processes at the same speed as executives are building
consensus around them. Security executives are increasingly acknowledging this mismatch in agility and explicitly
identifying the need for all stakeholders to come up to speed on modern developer tooling and culture. Combining
these two approaches and cadences, while still maintaining a single, coherent SSI direction, will require a concerted
effort by all stakeholders.

CONVERGENCE AS A GOAL
We frequently observe governance-driven SSIs planning centrally, seeking to proactively define an ideal risk posture
during their emerging phase. After that, the initial uptake of provided controls (e.g., security testing) is usually by
those teams that have experienced real security issues and are looking for help, while other teams might take a
wait-and-see approach. These firms often struggle during the maturation phase where growth will incur significant
expense and effort as the SSG scales the controls and their benefits enterprise-wide.
We also observe that emerging engineering-driven efforts prototype controls incrementally, building on existing tools
and techniques that already drive software delivery. Gains happen quickly in these emerging efforts, perhaps given
the steady influx of new tools and techniques introduced by engineering but also helped along by the fact that each
team is usually working in a homogenous culture on a single application and technology stack. Even so, these groups
sometimes struggle to institutionalize durable gains during their maturation phase, usually because the engineers have
not yet been able to turn capability into either secure-by-default functionality or automation-friendly assurance—at
least not beyond the most frequently encountered security issues and beyond their own spheres of influence.
All of this said, scaling an SSI across a software portfolio is hard for everyone. Today’s evolving cultural and
technological environments seem to require a concerted effort at converging centralized and engineering efforts to
create a cohesive SSI that ensures the software portfolio is appropriately protected.
Emerging engineering-driven groups tend to view security as an enabler of software features and code quality. These
groups recognize the need for having security standards but tend to prefer incremental steps toward governance-
as-code as opposed to a large-manual-steps-with-human-review approach to enforcement. This tends to result in
engineers building security features and frameworks into architectures, automating defect discovery techniques
within a software delivery pipeline, and treating security defects like any other defect. Traditional human-driven
security decisions are modeled into a software-defined workflow as opposed to being written into a document
and implemented in a separate risk workflow handled outside of engineering. In this type of culture, it’s not that
the traditional SDLC gates and risk decisions go away, it’s that they get implemented differently and usually have
different goals compared to those of the governance-driven groups. SSGs, and likely champions groups as well, that
begin to support this approach will likely speed up both convergence of various efforts and alignment with corporate
risk management goals.
Importantly, the delivery pipeline platforms upon which many engineering teams rely have begun to support a broader
set of security activities as on-by-default security features. As examples, OpenShift, GitHub, and GitLab are doing
external security research, responsibly disclosing vulnerabilities and notifying users, and providing some on-by-default
defect discovery, vulnerability management, and remediation management workflows. This allows engineering-driven
firms to share success between teams that use these platforms simply by sharing configuration information, thus
propagating their security policies quickly and consistently. It also allows SSGs and champions to more easily tie in at key
points in the SSDL to perform governance activities with minimal impact on software pipelines.
Though the BSIMM data and our analyses don’t dictate specific paths to SSI maturity, we have observed patterns
in the ways firms use the activities to improve their capabilities. For example, governance-led and emerging
engineering-led approaches to software security improvement embody different perspectives on risk management
that might not correlate. Governance-led groups often focus on rules, gates, and compliance, while emerging
engineering-led efforts usually focus on feature velocity, error avoidance through automation, and software resilience.
Success doesn’t require identical viewpoints, but collectively the viewpoints need to converge in order to keep the
firm safe. That means the groups must collaborate on risk management concerns to build on their strengths and
minimize their weaknesses.



Given these cultural differences, how does one go about building and maturing a firm-wide SSI that protects the
entire software portfolio? Aligning governance views and engineering views is a requirement for moving forward
effectively and is an important transition that is still in progress in nearly all SSIs today.
Every organization is on its own unique software security journey. Evolving business and technology drivers, executive
expectations, security goals, and operational aspirations, as well as current organizational strengths and weaknesses,
will motivate different paths from your current state to your next waypoint.

A TALE OF TWO JOURNEYS: GOVERNANCE VS. ENGINEERING


We see the two paths—governance-led vs. engineering-led—as patterns in how SSIs are implemented and trends
in how they evolve. Many of these patterns are common to programs, whether they’re working on foundations,
scale, or efficacy. We see them across cultural boundaries, even if the security culture is one of central and proactive
governance or is engineering-driven, or if both are learning to coexist.
In the two journeys that follow, we include specific references to BSIMM activities. These references are meant to help
the reader understand associations between the topic being discussed and one or more BSIMM activities. The references
don’t mean that the topic being discussed is fully equivalent to the activity. For example, when we say, “Inventory software
[SM3.1],” we don’t mean that having an inventory encompasses the totality of [SM3.1], just that having an inventory will likely
be something you’ll do on your way to implementing [SM3.1]. To continue using [SM3.1] as an example, most organizations
will not set about implementing this activity and get it all done all at once. Instead, an organization will likely create an
initial inventory, implement a process to keep the inventory up to date, find a way to track results from testing efforts, do
some repeatable analysis, and decide how to create a risk posture view that’s meaningful for them. Every activity has its
own nuances and components, and every organizational evolution will be unique.
Although they might not use the same vocabulary or originate in the same organizational structure, nearly all SSIs
build a foundation that includes the following:
• STRUCTURE. Name an owner [SM1.1], generate awareness [SM2.7], and identify engineering participation
[SFD1.2, CMVM1.1].
• PRIORITIZATION. Define a list of Top 10 bugs [CR2.7] to prevent or attacks [AM2.5] to mitigate, prioritize portfolio
scope [AA1.4], select initial checkpoints [SM1.4], define compliance needs [CP1.2], and ensure the basics [SE1.2].
• VISIBILITY. Inventory assets [SM3.1, CP2.1, CMVM2.3], then conduct defect discovery [AA1.1, CR1.4, ST1.4, PT1.1, SR2.4]
to determine which issues are being created or are already in production.
Note that an SSI leader with a young initiative (e.g., less than one year) working on the foundations should not expect
or set out to quickly implement a large number of BSIMM activities. Firms can absorb only a limited amount of
cultural and process change at any given time. The BSIMM12 data show that SSIs having an age of one year or less at
the time of assessment have an average score of 26.7 (19 of 128 firms).

THE GOVERNANCE-LED JOURNEY


Until recently, the history of software security advancement has been largely driven by three groups. One is highly regulated industries—mostly financial services—striving to both educate regulators and stay ahead of their demands.
Another is forward-looking ISV and high-technology firms that have worked diligently to strike a balance between
governance and engineering agility. And of course, software security experts have also spent a couple of decades
helping mature the broad discipline.
Leadership
Governance-driven SSIs almost always begin by appointing an SSI owner tasked with shepherding the organization
through understanding scope, approach, and priorities. Once an SSI owner is in place, their first order of business
is likely to establish a centralized structure. This structure might not involve hiring staff immediately, but it will
likely entail assembling a full-time team to implement key foundational activities central to supporting assurance
objectives that are further defined and institutionalized in policy [CP1.3], standards [SR1.1], and processes [SM1.1].
Inventory Software
We observe governance-led SSIs seeking an enterprise-wide perspective when building an initial view into their
software portfolio. Engaging directly with application business owners, these cultures prefer to cast a wide net using
questionnaire-style data gathering to build their initial application inventory [CMVM2.3]. These SSIs tend to focus on
applications (with owners who are responsible for risk management) as the unit of measure in their inventory rather
than software, which might also include many vital components that aren’t applications. In addition to understanding
application profile characteristics (e.g., programming language, architecture type such as web or mobile, revenue
generated) as a view into risk, these cultures tend to focus on understanding where sensitive data reside and flow
(e.g., PII inventory) [CP2.1] along with the status of active development projects.



The nature of inventorying software is evolving dynamically. Simple spreadsheet lists of application names have given way to the need for open source enumeration, a problem still being addressed by many firms. Even with these efforts, many firms are still only inventorying their source code. Expanding inventories to include all binaries, dependencies, and calls to services is more complicated, and accounting for all software running in production is aspirational for most firms.

GOVERNANCE-LED CHECKLIST FOR GETTING STARTED

1. LEADERSHIP. Put someone in charge of software security and provide the resources they will need to succeed.
2. INVENTORY SOFTWARE. Know what you have, where it is, and when it changes.
3. SELECT IN-SCOPE SOFTWARE. Decide what you're going to focus on first, then contribute to its work streams.
4. ENSURE HOST AND NETWORK SECURITY BASICS. Don't put good software on bad systems or in poorly constructed networks (cloud or otherwise).
5. DO DEFECT DISCOVERY. Determine the issues in today's in-progress and production software and plan for tomorrow.
6. ENGAGE DEVELOPMENT. Identify those responsible for software delivery pipelines, key design, and code, and involve them in the planning, implementation, and roll-out at scale of security activities.
7. SELECT SECURITY CONTROLS. Start with controls that establish some risk management to prevent recurrence of issues you're seeing today.
8. REPEAT. Expand the team, improve the inventory, automate the basics, do more prevention, and then repeat again.

Select In-Scope Software
With an application inventory in hand, governance-led SSIs impose security requirements top-down using formalized risk-based approaches to blanket as much of their software portfolio as possible. Using simple criteria (e.g., application size, regulatory constraints, internal vs. external facing, data classification), these cultures assign a risk classification (e.g., high, medium, low) to each application in their inventory [AA1.4]. SSI leaders then define the initial set of software and project teams with which to prototype security activities. Although application risk classifications are often the primary driver, we have observed firms using other information, such as whether a major change in application architecture is being undertaken (e.g., shift to a cloud environment with a native architecture), when selecting foundational SSI efforts. We also observe that firms find it beneficial to include in the selection process some engineering teams that are already doing some security activity organically.

Ensure Host and Network Security Basics
One of the most commonly observed activities today regardless of SSG age is [SE1.2 Ensure host and network security basics are in place]. A common strength for governance-minded firms that have tight controls over the infrastructure assets they manage, these basics are accomplished through a combination of IT provisioning controls, written policy, pre-built and tested golden images, sensors and monitoring capabilities, server hardening and configuration standards, and entire groups dedicated to patching. As firms migrate infrastructure off-premises to cloud environments, governance-led firms remain keen on re-implementing their assurance-based controls to verify adherence to security policy, calling out cloud provider dependencies. They sometimes must deploy custom solutions to overcome limitations in a cloud provider's ability to meet desired policy in an attempt to keep tabs on the growing number of virtual assets created by engineering groups and their automation.

Do Defect Discovery
Initial defect discovery efforts in governance-led cultures tend to be one-off (by using centralized commercial tools [CR1.2]) and tend to target the most critical software, with a plan to scale efforts over time. Often, a previous breach or near miss focuses everyone's
attention on one particular type of technology stack or security defect. While not always automated or repeatable,
conducting some vulnerability discovery in order to get a feel for the current risk posture allows firms to prioritize
remediation and motivate the necessary conversations with stakeholders to gain buy-in for an SSG. The type of
vulnerability discovery doesn’t necessarily matter at this stage and might be selected because it applies to the current
phase of the software lifecycle that the intended target is naturally progressing through (e.g., “shift everywhere” to
do threat modeling at design time, SAST during development, and penetration testing on deployed software for a
critical application).



Engage Development
Engineering teams are likely already thinking about various aspects of security related to software, configuration,
infrastructure, and related topics. Engaging development begins by creating mutual awareness of how the SSI and
development teams see the next steps in maturing security efforts. Successfully engaging development early on relies
on bridge-building and credentialing the SSG as competent in development culture, toolchains, and technologies,
building awareness around what security practices constitute an SSDL, and roughly determining how those practices
are expected to be conducted. Building consensus on what role each department will play in improving capabilities
over the next evolutionary cycle greatly facilitates success.
Select Security Controls
Based on the kinds of software in inventory, the reasons for selecting certain applications to be within your program’s
scope, and the issues uncovered in initial defect discovery efforts, SSI leaders select those BSIMM activities directly
applicable to incrementally improving the security of their application portfolio (e.g., policy [CP1.3], testing [AA1.2,
CR1.4, ST1.4, PT1.3, SR2.4], training [T2.9]) and implement them in a quick-win approach. Governance-minded
cultures tend to prefer showing adherence to well-known guidance and often choose initial security controls in
response to general industry guidance (e.g., regulators, OWASP, CWE, NIST, analysts) that applies to as much of their
software as possible. Initial selection is likely focused on detective controls (e.g., testing) to maximize visibility into the
organization’s risk posture.

MATURING GOVERNANCE-LED SSIs


With the foundations for centralized governance established, SSI leaders shift their attention to scaling risk-based
controls across the entire software portfolio and enabling development to find and fix issues early in the software
lifecycle. Driven centrally and communicated top-down, these cultures prescribe when security activities must occur
at each phase of the software lifecycle [SM1.4] and begin onboarding application teams into the SSDL to improve
overall risk posture.
Document and Socialize the SSDL
Because software security requirements come from the top, these firms typically prefer to create process, policy, and
security standards in document and presentation form. Specifically, these firms document a process (e.g., a prototype
SSDL) to generalize the SSI efforts above and communicate it to everyone for the organization’s use [SM1.1]. Creating
even a single-page view of the defect discovery activities to be conducted allows the organization to institutionalize
its initial awareness-building efforts as the first revision of its governance and testing regimen.
We observe governance-minded firms publishing security policies and standards through already established
governance, risk, and compliance (GRC) channels, complementing existing IT security standards. The SSI leader can
also create a security portal (e.g., website, wiki) that houses SSDL information centrally [SR1.2]. Similar to the approach
for prioritizing defect discovery efforts, we observe these firms driving initial standards creation from industry top N
risks, leveraging sources such as MITRE, ISO, and NIST to form baseline requirements [AM2.5, CR2.7].
Finally, in governance-led SSIs, getting the word out about the organization’s top N risks and what can be done about
them becomes a key part of the SSI leader’s job. We observe these leaders using every channel possible (e.g., town
halls, brown bags, communities of practice forums, messaging channels) to socialize the software security message
and raise awareness of the SSDL [SM2.7].
Balance Detective and Preventive Controls
As evidence is gathered through initial defect discovery efforts that highlight the organization’s serious security
defects in its most important software assets, SSI leaders in governance-led firms seek to balance detective controls
with preventive controls for avoiding security issues and changing developer behavior.
Because initial defect discovery often targets deployable software (e.g., in a pre-production environment over to
the right in the SDLC), SSI leaders begin educating the organization on the need to shift left through the adoption
of tools that can be integrated into developer workflows. These firms typically rely on tool vendor rulesets and
vulnerability coverage to expand their generic top N-focused defect discovery, often starting with static [CR1.4] and
dynamic analysis [ST1.4, ST2.6] to complement existing penetration testing [PT1.1, PT1.3] efforts. To get started with
tool adoption, we observe SSI leaders dedicating some portion of their staff to serve as tool mentors and coaches to
help development teams not only integrate the tools but also triage and interpret results [CR1.7]. Seeking portfolio
coverage when evaluating and selecting tools, governance-led SSIs often consider language support, environment
interoperability, ease of deployment, and results accuracy as key success criteria. Note that this effort to shift left
almost never extends outside the SDLC itself to requirements and design (and the effort almost never shifts right to
include production security testing).



To scale the security mindset as tools are adopted, we observe governance-led firms focusing on establishing security
champions within application teams [SM2.3]. Although the primary objective is to embed security leadership inside
development, these individuals also serve as both key points of contact and interface points for the SSG to interact
with application teams and monitor progress. Because they are local to teams, champions also facilitate defect
management goals, such as tracking recurring issues to drive remediation [PT1.2]. To the extent that organizations
are beginning to rely more heavily on open source defect discovery tools, champions are serving as a means of
implementing these tools for development teams, as well as being the in-team tool mentor.
Similar to policy, the need for tool adoption and building a satellite of champions is often communicated top-down
by the SSI leader. Starting at the executive level, SSI leaders often inform the CISO, CTO, or similar executive with data
from initial defect discovery efforts to tell stories about the consequences of having insecure software and how the
SSDL will help [SM2.1]. At the developer level, SSI leaders begin rolling out foundational software security training
material tailored to the most common security defects identified through defect discovery efforts, often cataloged by
technology stack [T1.7].
Support Incident Response and Feedback to Development
In some governance-led cultures, establishing a link between the SSG and those doing the monitoring for security
incidents often happens naturally when CISOs or similar executives also own security operations. We observe SSI
leaders participating in software-related security incidents in a trusted advisor role to provide guidance to operations
teams on applying a temporary compensating control (e.g., a short-term web application firewall [WAF] rule) and
support to development teams on fixing a root cause (e.g., refactor code, upgrade a library) [CMVM1.1]. For firms that
have an emerging satellite, those champions are often pulled into the conversation to address the issue at its source,
creating a new interdepartmental bridge between the SSG, security operations, and development teams.

ENABLING GOVERNANCE-LED SSIs


Achieving software security scale—of expertise, portfolio coverage, tool integration, vulnerability discovery accuracy,
process consistency, and so on—remains a top priority. However, firms often scale one or two capabilities (e.g., defect
discovery, training) but fail to scale others (e.g., architecture analysis, vendor management). Once scaled, there’s a
treasure trove of data to be harvested and included in KPI and KRI reporting dashboards. Then executives start asking
very difficult questions: Are we getting better? Is our implementation working well? Where are we lagging? How
can we go faster with less overhead? What’s our message to the Board? The efficacy of an SSI will be supported by
ongoing data collection and metrics reporting that seeks to answer such questions [SM3.3].
Progress Isn’t a Straight Line
As mentioned earlier, organizations don’t always progress from emerging to enabling in one try or on a straight path,
and some SSI capabilities might be enabling while others are still emerging. Based on our experience, firms with
some portion of their SSI operating in an enabling state have likely been in existence for longer than three years.
Although we don’t have enough data to generalize this class of initiative, we do see common themes for those who
strive to reach this state:
• TOP N RISK REDUCTION. Everyone relentlessly identifies and closes top N weaknesses, placing emphasis on
obtaining visibility into all sources of vulnerability, whether in-house developed code, open source code [SR3.1],
vendor code [SR3.2], toolchains, or any associated environments and processes [SE1.2, SE2.6, SE3.3]. These top N
risks are most useful when specific to the organization, evaluated at least annually, and tied to metrics as a way to
prioritize SSI efforts to improve risk posture.
• TOOL CUSTOMIZATION. SSI leaders place a concerted effort into tuning tools (e.g., customization for static
analysis, fuzzing, penetration testing) to improve integration, accuracy, consistency, and depth of analysis [CR2.6].
Customization focuses not only on improving result fidelity and applicability across the portfolio but also on
pipeline integration and timely execution, improving ease of use for everyone.
• FEEDBACK LOOPS. Loops are specifically created between SSDL activities to improve effectiveness as
deliverables from SSI capabilities ebb and flow with each other. For example, an expert within QA might leverage
architecture analysis results when creating security test cases [ST2.4]. Likewise, feedback from the field might
be used to drive SSDL improvement through enhancements to a hardening standard [CMVM3.2]. The concept of
routinely conducting blameless postmortems seems to be gaining ground in some firms.
• DATA-DRIVEN GOVERNANCE. More mature groups instrument everything to collect data that in turn become
metrics for measuring SSI efficiency and effectiveness against KRIs and KPIs [SM3.1]. As an example, a metric such
as defect density might be leveraged to track performance of individual business units and application teams.
Metrics choices are very specific to each organization and also evolve over time.
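As one hedged illustration of the defect density metric mentioned above (the records and field names below are hypothetical, not from any particular defect tracker), defect density per team can be computed directly from defect-tracking and code-size exports:

# Minimal sketch: compute security-defect density (defects per KLOC) per team and rank teams.
defects = [   # hypothetical export: one record per confirmed security defect
    {"team": "payments", "severity": "high"},
    {"team": "payments", "severity": "low"},
    {"team": "portal",   "severity": "high"},
]
kloc_by_team = {"payments": 120, "portal": 45}   # thousands of lines of code per team

def defect_density(defects, kloc_by_team):
    # Return {team: defects per KLOC}, a simple KPI suitable for trend reporting.
    counts = {}
    for record in defects:
        counts[record["team"]] = counts.get(record["team"], 0) + 1
    return {team: counts.get(team, 0) / kloc for team, kloc in kloc_by_team.items()}

ranked = sorted(defect_density(defects, kloc_by_team).items(), key=lambda kv: kv[1], reverse=True)
for team, density in ranked:
    print(f"{team:10} {density:.2f} security defects per KLOC")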



Push for Agile-Friendly SSIs
In recent years, we’ve observed governance-led firms—often out of necessity to remain in sync with development
changes—evolving to become more agile-friendly:
• Putting “Sec” in DevOps is becoming a mission-critical objective. SSI leaders routinely partner with IT, cloud,
development, QA, and operations leadership to ensure the SSG mission aligns with DevOps values and principles.
• SSI leaders realize they need in-house talent with coding expertise to improve not only their credibility with
engineering but also their understanding of modern software delivery practices. Job descriptions for SSG roles
now mention experience and qualification requirements such as cloud, mobile, containers, and orchestration.
We expect this list to grow as other topics become more mainstream, such as architecture and testing
requirements around serverless computing and single-page application languages.
• To align better with DevOps values (e.g., agility, collaboration, responsiveness), SSI leaders are beginning to replace traditional people-driven activities with people-optional, pipeline-driven automated tasks. This often comes in the form of automated security tool execution, bugs filed automatically to defect notification channels, builds flagged for critical issues, and automated triggers to respond to real-time operational events (a minimal sketch of this pattern follows this list).
• Scaling outreach and expertise through the implementation of an ever-growing satellite is viewed as a short-term
rather than long-term goal. Organizations report improved responsiveness and engagement as part of DevOps
initiatives when they’ve localized security expertise in the engineering teams. Champions are also becoming
increasingly sophisticated in building reusable artifacts (e.g., security sensors) in development and deployment
streams to directly support SSI activities.
• SSI leaders are partnering with operations to implement application-layer production monitoring and automated
mechanisms for responding to security events. There is a high degree of interest in consuming real-time security
events for data collection and analysis to produce useful metrics.
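A minimal sketch of one such people-optional, pipeline-driven task appears below. The findings file format, webhook URL, and threshold are hypothetical placeholders rather than a specific vendor integration; the point is simply that the build gets flagged and the notification gets filed without a person in the loop.

# Minimal sketch: fail (flag) the build and notify a channel when critical findings appear.
import json
import sys
import urllib.request

MAX_CRITICALS = 0   # policy: any critical finding breaks the build
WEBHOOK_URL = "https://chat.example.com/hooks/appsec"   # hypothetical notification channel

def main(findings_path):
    with open(findings_path) as handle:
        findings = json.load(handle)   # expected: a list of {"id": ..., "severity": ...}
    criticals = [item for item in findings if item.get("severity") == "critical"]
    if len(criticals) > MAX_CRITICALS:
        payload = json.dumps({"text": f"{len(criticals)} critical findings; build flagged"})
        request = urllib.request.Request(WEBHOOK_URL, data=payload.encode(),
                                         headers={"Content-Type": "application/json"})
        urllib.request.urlopen(request)   # file the notification automatically
        return 1   # nonzero exit causes the CI job to flag the build
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))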

THE ENGINEERING-LED JOURNEY


From an activity perspective, we observe that emerging engineering-led software security efforts build on a
foundation very similar to governance-led organizations. How they go about accomplishing these activities differs
and usually parallels their software-delivery focus.
Inventory Software
One of the first activities an organization can do is to take an initial inventory of its local software portfolio [CMVM2.3].
Rather than taking an organizational structure and owner-based view of this problem, we observe emerging
engineering-led efforts attempting to understand software inventory by extracting it from the same tools they use
to manage their IT assets. By scraping these software and infrastructure configuration management databases
(CMDBs), they craft an inventory brick-by-brick rather than top-down. They then use the metadata and tagging that
these content databases provide to reflect their software’s architecture as well as their organization’s structure.

ENGINEERING-LED CHECKLIST FOR GETTING STARTED


1. INVENTORY SOFTWARE. Know what you have, where it is, and when it changes.
2. SELECT IN-SCOPE SOFTWARE. Decide what you’re going to focus on first, then contribute to
its value streams.
3. ENSURE HOST AND NETWORK SECURITY BASICS. Don’t put good software on bad systems or
in poorly constructed networks (cloud or otherwise).
4. CHOOSE APPLICATION CONTROLS. Apply controls that deliver the right security features and
also help prevent some classes of vulnerabilities.
5. REPEAT. Expand the team, improve the inventory, automate the basics, do more prevention,
reinforce the culture, and then repeat again.



To this end, engineering-led efforts can combine two or more of the following approaches to inventory creation:
• Discovery, import, and visualization of assets managed by the organization’s cloud and data center virtualization
management consoles
• Scraping and extracting assets and tags from infrastructure-as-code held in code repositories, as well as
processing metadata from container and other artifact registries
• Outside-in web and network scanning for publicly discoverable assets, connectivity to known organizational
assets, and related ownership and administrative information
That last bullet is particularly interesting because we’ve observed organizations learning that this kind of external
discovery is essential, despite substantial efforts to develop or purchase other means of internal discovery, such as
those described in the first two bullets. In other words, despite increasing efforts, many organizations have substantially
more software in production environments today than is captured by their existing inventory processes. As one
simple example, does a monolithic web application replaced by a smaller application and 25 microservices become
26 entries in your CMDB? When the answer is no, all organizations struggle to find all their software after the fact.
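
As a simple illustration of the brick-by-brick approach (the data sources and file names here are hypothetical, not drawn from any particular firm), the following Python sketch merges CMDB records with assets scraped from infrastructure-as-code and then flags externally discovered hosts that are missing from the resulting inventory:

```python
# Minimal sketch of brick-by-brick inventory reconciliation. All data sources
# (cmdb_export.json, iac_assets.json, external_scan.txt) are hypothetical
# stand-ins for a CMDB export, infrastructure-as-code scraping, and an
# outside-in discovery scan.
import json

def load_json(path):
    with open(path) as f:
        return json.load(f)

def build_inventory(cmdb_path, iac_path):
    """Merge CMDB records and infrastructure-as-code assets into one inventory."""
    inventory = {}
    for record in load_json(cmdb_path):        # e.g., {"hostname": ..., "owner": ..., "tags": [...]}
        inventory[record["hostname"]] = {"source": "cmdb", **record}
    for asset in load_json(iac_path):          # e.g., scraped from IaC manifests and artifact registries
        entry = inventory.setdefault(asset["hostname"], {"source": "iac"})
        entry.setdefault("tags", []).extend(asset.get("tags", []))
    return inventory

def find_unknown_assets(inventory, scan_path):
    """Return externally discovered hosts that are missing from the internal inventory."""
    with open(scan_path) as f:
        discovered = {line.strip() for line in f if line.strip()}
    return sorted(discovered - inventory.keys())

if __name__ == "__main__":
    inv = build_inventory("cmdb_export.json", "iac_assets.json")
    for host in find_unknown_assets(inv, "external_scan.txt"):
        print(f"Not in inventory: {host}")
```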
Select In-Scope Software
We observe security leadership within engineering-led security efforts informally prioritizing in-scope software rather
than using a published and socialized risk formula. As software solutions pivot to meet changing customer demand,
the list of software that is in scope for security governance is likely more fluid than in governance-driven groups.
In engineering-led efforts, informal prioritization that is revisited much more frequently helps teams respond to and
prioritize the appropriate software changes.
For much of an engineering-led effort, an activity is often first prototyped by a security engineer participating within
a software delivery team. These engineers individually contribute to a team’s critical path activities. When they, for
example, finish building test automation [ST2.5], vulnerability discovery tooling, or security features [SFD1.1],
one of the security engineers might move on to another delivery team, bringing along their accomplishments,
or they might personally evangelize those accomplishments and gain leverage from them through
the organization’s knowledge management systems. This level of intimacy between a developer and the security
improvements they spread to other projects and teams—often regardless of whether that improvement closely aligns
with governance-driven rules—makes scoping and prioritizing stakeholder involvement in the software inventory
process vitally important.

PRIORITIZING IN-SCOPE SOFTWARE


Drivers differ by organization, but engineering-led groups have been observed using the following
as input when prioritizing in-scope software:
1. VELOCITY. Teams conducting active new development or major refactoring.
2. REGULATION. Those services or data repositories to which specific development or
configuration requirements for security or privacy apply [CP1.1, CP1.2].
3. OPPORTUNITY. Those teams solving critical technical challenges or adopting key technologies
that potentially serve as proving grounds for emerging security controls.

Beyond immutable constraints like the applicability of regulation, we see evidence that assignment can be rather
opportunistic and perhaps be driven bottom-up by security engineers and development managers themselves. In
these cases, the security initiative’s leader often seeks opportunities to cull their efforts and scale key successes rather
than direct the use of controls top-down.

Ensure Host and Network Security Basics
Compared to governance-led organizations, [SE1.2 Ensure host and network security basics are in place] is observed
no less frequently for engineering-led groups. Security engineers might begin by conducting this work manually,
then bake these settings and changes into their software-defined infrastructure scripts to ensure both consistent
application within a development team and scalable sharing across the organization.
Forward-looking organizations that have adopted software and network orchestration technologies (e.g., Kubernetes,
Envoy, Istio) get maximum impact from this activity with the efforts of even an individual contributor, such as a
security-minded DevOps engineer. While organizations often have hardened container or host images on which
software deployments are based, software-defined networks and features from cloud service providers allow
additional control at the scale of infrastructure.
Though many of the technologies in which security engineers specify those hardening and security settings are
human-readable, engineering-led groups don’t typically take the time to extract and distill a document-based
security policy from these codebases. Without a policy that’s easy to consume by nontechnical humans, it’s difficult
for centralized elements of growing and maturing security initiatives—governance-based groups, for example—to
inspect and update this implicit policy on a portfolio-wide basis.
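
As a minimal sketch of what this looks like in practice, assuming an AWS environment and the boto3 library, the check below flags security groups that expose ports to the entire internet; a security engineer might run something like this manually at first and later fold the same logic into infrastructure-as-code pipelines. It illustrates the pattern only and is not a complete hardening baseline:

```python
# Minimal sketch (assuming AWS and boto3) of a basic host/network hygiene check:
# flag security groups that allow inbound traffic from anywhere on the internet.
import boto3

def world_open_rules():
    ec2 = boto3.client("ec2")
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append({
                        "group": sg["GroupId"],
                        "port": perm.get("FromPort", "all"),
                    })
    return findings

if __name__ == "__main__":
    for finding in world_open_rules():
        print(f"Security group {finding['group']} allows 0.0.0.0/0 on port {finding['port']}")
```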
Choose Application Controls
Engineering-driven security cultures naturally favor security controls they can apply to software directly in the
form of features [SFD1.1]. This is unsurprising, as delivering security features has all the hallmarks of such a culture’s
objectives: delivering subject-matter expertise as software, impacting the critical path of delivery, and accelerating
that delivery. Depending on the way an organization delivers to its customers, application controls can take the form
of microservices (e.g., authentication or other identity and access management), common product libraries (e.g.,
encryption) [SFD2.1], or even infrastructure security controls (e.g., controlling scope of access to production secrets
through vault technologies).
Defensively, some engineering-led security groups have taken steps to tackle the prevention of certain classes of
vulnerability in a wholesale manner [CMVM3.1], using development frameworks that obviate them, an effort we’ve
seen decrease in governance-led organizations. Security engineers in these groups are often asked their opinion about
framework choices and are often empowered to incorporate their understanding of security features and security
posture tradeoffs as part of the selection and implementation process. As part of the critical path to software delivery,
these engineers can then tune the framework’s implementation to the team’s and organization’s specific situation.
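
A common-library example helps make this concrete. The hypothetical wrapper below is a minimal sketch, assuming the Python cryptography package and a vault-supplied key; it is the kind of security feature an engineering-led group might publish so that teams consume one vetted encryption implementation rather than writing their own:

```python
# Minimal sketch of a "security feature as a library": a thin wrapper around a
# vetted primitive (Fernet from the cryptography package) that product teams
# consume instead of rolling their own encryption. Key handling is stubbed; a
# real implementation would fetch keys from a secrets vault.
from cryptography.fernet import Fernet

class FieldEncryptor:
    """Encrypts/decrypts individual sensitive fields (e.g., PII) with one vetted algorithm."""

    def __init__(self, key: bytes):
        self._fernet = Fernet(key)

    def encrypt(self, plaintext: str) -> bytes:
        return self._fernet.encrypt(plaintext.encode("utf-8"))

    def decrypt(self, token: bytes) -> str:
        return self._fernet.decrypt(token).decode("utf-8")

if __name__ == "__main__":
    key = Fernet.generate_key()          # stand-in for a key retrieved from a vault service
    enc = FieldEncryptor(key)
    token = enc.encrypt("jane.doe@example.com")
    assert enc.decrypt(token) == "jane.doe@example.com"
```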

MATURING ENGINEERING-LED EFFORTS


As the foundations of an engineering-led security effort become more concrete, its leaders seek to deepen
the technical controls applied, apply all controls to a broader base of the organization’s software portfolio and
infrastructure, and generally scale their efforts.
Because engineering-led security culture relies heavily on the individual contributions of security engineers
distributed within development teams, these firms seek to follow through on what these dispersed engineers have
started. Whereas an emerging practice might have focused on automation to ensure host and network security
basics, they will now also undertake and incrementally improve vulnerability discovery and management. They will
continue to broaden the catalog of security features delivered by security engineers to meet their view of security,
usually as aligned with quality and resiliency rather than centralized corporate governance. Instead of creating new
policy, for example, engineering-led groups might formalize the incident response experience accumulated to date
into enabling internal processes and use code snippets to communicate incident feedback to development teams.
In addition to incremental progress on the activities that security engineers have begun to define, engineering-led
security efforts will also seek to apply to the organization as a whole what security engineers have delivered to one
development team. This means documenting and sharing software processes, extracting explicit organizational
policies and standards from existing automation, and formalizing identification of data-driven obligations such as
those due to PCI or to other PII use.
Upgrade Incident Response
Governance-based and engineering-led groups alike conduct incident response, but engineering-led teams tend
to directly leverage DevOps engineers to help make the connections between those events and alerts raised in
production and the artifacts, pipelines, repositories, and teams responsible [CMVM1.1]. This crucial traceability
mechanism allows these groups to effectively prioritize security issues on which the security initiative will focus.
Feedback from the field on what is actually happening essentially replaces the top N lists many governance-led
organizations use to establish priorities.

Security engineers who are in development teams and are more familiar with application logic might be able to
facilitate more instructive monitoring and logging. They can coordinate with DevOps engineers to generate in-
application defenses that are tailored for business logic and expected behavior, and are therefore likely more effective
than, for example, WAF rules. Introducing such functionality will in turn provide richer feedback and allow a more
tailored response to application behavior [SE3.3].
Organizations deploying cloud-native applications using orchestration might respond to incidents, or to data
indicating imminent incidents, with an increase in logging or by adjusting how traffic is distributed across the image
types in production. Much of this is possible only with embedded security engineers who are steeped in the business
context of a development team and have good relationships with that team’s DevOps engineers; satellite members
(security champions) can be a good source for these individuals. Under these circumstances, incident response moves
at the speed of a well-practiced single team rather than that of an interdepartmental playbook.
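
The sketch below illustrates one form this can take: an in-application defense tied to a business rule that emits a structured security event with enough metadata (service, team, repository) to trace the alert back to the owning pipeline. All names and thresholds are hypothetical:

```python
# Minimal sketch of an in-application defense tailored to business logic: the
# application emits a structured security event (here, JSON written to a logger)
# when expected behavior is violated. Service, team, and repo fields are
# hypothetical metadata that give responders traceability back to the owning team.
import json
import logging
import time

security_log = logging.getLogger("security-events")

def emit_security_event(event_type: str, detail: dict) -> None:
    security_log.warning(json.dumps({
        "timestamp": time.time(),
        "service": "payments-api",                 # hypothetical service name
        "team": "payments",                        # owning team, for traceability
        "repo": "git.example.com/payments-api",    # hypothetical repository
        "event": event_type,
        "detail": detail,
    }))

def apply_discount(order_total: float, discount_code: str, uses_today: int) -> float:
    # Business rule: a single discount code redeemed more than 5 times in one
    # day suggests abuse rather than normal customer behavior.
    if uses_today > 5:
        emit_security_event("discount_abuse_suspected",
                            {"code": discount_code, "uses_today": uses_today})
        return order_total                         # fail safe: ignore the discount
    return order_total * 0.9
```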
Do Defect Discovery
As firms mature, we see evidence of them building out scalable defect discovery and management practices.
Specifically, these cultures seek to provide highly actionable information about potential security issues to developers
proactively. However, visibility into potential vulnerabilities must come without disrupting CI/CD pipelines (e.g., via
tools that have long execution times), without generating large volumes of perceived false positives, and without
impeding delivery velocity (e.g., through broken builds or blocked promotions) except under exceptionally clear and
convincing circumstances.
Our observation is that engineering-led groups build discovery capability incrementally, sometimes with security
engineers prototyping new detective capability shoulder-to-shoulder with development teams. Prototyping likely
includes in-band triage and developer buy-in to the test findings, their accuracy, and their importance, as well as the
suggested remediation.
Because they’re often free, easy to find, and easy to use, engineering-led cultures tend to start with open source or
“freemium” security tools to accomplish various in-band automated code review and testing activities. These tools are
often small and run quickly due to their limited focus. After these tools are integrated within a pipeline and producing
baseline measurements, other stakeholders such as champions might augment the tools and rulesets to, for example,
scan for adherence to internal secure coding standards on an incremental basis and across more of the portfolio. The
organization’s larger, commercial security testing tools that execute over extended periods typically continue to get
used at checkpoints relevant to governance groups, with those tests usually being out-of-band of CI/CD pipelines due
to the time taken to execute and the friction introduced.

ENGINEERING-LED HEURISTICS
Our observations are that engineering-led groups start with open source and home-grown security
tools, with much less reliance on “big box” vulnerability discovery products. Generally, these groups
hold to two heuristics:
• Lengthening time to (or outright preventing) delivery is unacceptable. Instead, they organize to
provide telemetry and then respond asynchronously through virtual patching, rollback, or other
compensating controls.
• Depending solely on purchased boxed security standards delivered as part of a vendor’s core
ruleset in a given tool is likely unacceptable. Instead, or in addition, they build vulnerability
detective capability incrementally, in line with a growing understanding of software misuse and
abuse and associated business risk.

These groups might build on top of in-place test scaffolding, might purposefully extend open source scanners that
integrate cleanly with their development toolchain, or both. Extension often focuses on a different set of issues
than characterized in general lists such as the OWASP Top 10 or even the broader set of vulnerabilities found by
commercial tools. Instead, these groups might focus on issues such as denial of service, misuse/abuse of business
functionality, or enforcement of the organization’s technology-specific coding standards (even when these are
implicit rather than written down) as defects to be discovered and remediated.
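
The following sketch illustrates the incremental, in-band pattern described above. The scanner name, rules file, and baseline file are hypothetical stand-ins; the point is that only new, clearly critical findings interrupt delivery, while everything else is reported asynchronously:

```python
# Minimal sketch of incremental, in-band defect discovery. "fast-scanner" is a
# hypothetical stand-in for a small open source tool plus custom rules; the
# baseline file lets teams adopt scanning without being blocked by pre-existing
# findings. Only new, clearly critical findings fail the build.
import json
import subprocess
import sys

def run_scanner(rules_path: str) -> list:
    # Hypothetical CLI that emits JSON findings on stdout.
    out = subprocess.run(["fast-scanner", "--rules", rules_path, "--format", "json", "."],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def load_baseline(path: str) -> set:
    with open(path) as f:
        return {(i["rule"], i["file"], i["line"]) for i in json.load(f)}

def new_critical_findings(findings: list, baseline: set) -> list:
    return [f for f in findings
            if (f["rule"], f["file"], f["line"]) not in baseline
            and f.get("severity") == "critical"]

if __name__ == "__main__":
    findings = run_scanner("security/internal-coding-standards.yml")
    blocking = new_critical_findings(findings, load_baseline("security/baseline.json"))
    for f in blocking:
        print(f"NEW CRITICAL: {f['rule']} in {f['file']}:{f['line']}")
    sys.exit(1 if blocking else 0)   # break the build only for clear, new, critical issues
```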

Document the SSDL
Engineering-led cultures typically eschew document- or presentation-based deliverables in favor of code-based
deliverables. As the SSG seeks to apply its efforts to a larger percentage of the firm’s software, it might work to
institute some form of knowledge-sharing process to get security activities applied across teams. To this end, security
leaders might create a “one-pager” describing the security tools and techniques to be applied throughout the
software portfolio’s lifecycle [SM1.1], make that resource available through organizational knowledge management,
and evangelize it through internal knowledge-sharing forums [SR1.2].
Unlike many governance-driven groups, we have found that engineering-led groups explicitly and purposefully
incentivize security engineers to talk externally about those security tools they’ve created (or customized), such as at
regional meet-ups and conferences. Security leaders might use these incentives and even open source projects as a
way to invite external critique and ensure frequent maintenance and improvement on the tools and frameworks their
engineers create, without continuing to tie up 100% of a specific engineer’s bandwidth indefinitely.
SSDL documentation might be made available through an internal or even external source code repository, along
with other related material that aids uptake and implementation by development teams. A seemingly simple step,
this makes it very easy for development teams to conform to the SSDL within their existing toolchains and cultural
norms. It seems especially useful for onboarding new team members by decreasing the time to productivity.

ENABLING ENGINEERING-LED EFFORTS


To some extent, engineering-led groups take an enabling approach from the beginning. Security efforts are built on
contributions of engineers who deliver software early and often, constantly improving it rather than relying on explicit
strategy backed by policies, built top-down, or pushed everywhere through organizational mandate over time.
It’s clear that some of these groups have achieved an “enabling” state of maturity for some of their software security
capabilities. However, the BSIMM does not yet contain enough data to generalize about this type of effort in terms
of which activities such groups are likely to conduct or how implementation of those activities might differ from
their form in governance-driven groups. We will continue to track engineering-led groups and look for patterns
and generalizations in the data that might give us insight into how they achieve some maturity that aligns with
expectations across the organization and the software portfolio. It might be the case that over the next few years,
we will once again see a merging of engineering-led efforts and governance-led efforts into a new type of
firm-wide SSI.

PART THREE
BUILDING BLOCKS OF THE BSIMM
Activities are the building blocks of an SSI and the smallest unit of granularity implemented across organizations.
Rather than dictating a prescriptive set of activities, the BSIMM descriptively observes and quantifies the actual
activities carried out by various kinds of SSIs across many organizations. This section first highlights the most
commonly observed activities and then details all 122 activities in the SSF.
Table 4 shows the most common activities per practice. Although we can’t directly conclude that these 12 activities
are necessary for all SSIs, we can say with confidence that they’re commonly found in highly successful initiatives.
This suggests that if an organization is working on an initiative of its own, it should consider these 12 activities
particularly carefully.

BSIMM12 MOST COMMON ACTIVITY PER PRACTICE

ACTIVITY    DESCRIPTION
[SM1.4]     Implement lifecycle instrumentation and use to define governance.
[CP1.2]     Identify PII obligations.
[T1.1]      Conduct software security awareness training.
[AM1.2]     Create a data classification scheme and inventory.
[SFD1.1]    Integrate and deliver security features.
[SR1.3]     Translate compliance constraints to requirements.
[AA1.1]     Perform security feature review.
[CR1.4]     Use automated tools.
[ST1.1]     Ensure QA performs edge/boundary value condition testing.
[PT1.1]     Use external penetration testers to find problems.
[SE1.2]     Ensure host and network security basics are in place.
[CMVM1.1]   Create or interface with incident response.

TABLE 4. MOST COMMON ACTIVITIES PER PRACTICE. This table shows the most commonly observed activity in each
of the 12 BSIMM practices for the entire data pool of 128 firms. This frequent observation means that each activity has broad
applicability across a wide variety of SSIs. See Table 5 for the most common activities across all 122 BSIMM12 activities.

Of course, the most common activity in each practice isn’t the same as the list of the most common activities across
the entire data pool (Table 5). If you’re working on improving your company’s SSI, you should consider these 20
activities as likely important to your plans.

BSIMM12 TOP 20 ACTIVITIES BY OBSERVATION COUNT

RANK  OBSERVATIONS  ACTIVITY
1     118           [SM1.4] Implement lifecycle instrumentation and use to define governance.
2     117           [SE1.2] Ensure host and network security basics are in place.
3     114           [CP1.2] Identify PII obligations.
4     113           [AA1.1] Perform security feature review.
5     111           [PT1.1] Use external penetration testers to find problems.
6     108           [CMVM1.1] Create or interface with incident response.
7     102           [SFD1.1] Integrate and deliver security features.
8     102           [CR1.4] Use automated tools.
9     100           [ST1.1] Ensure QA performs edge/boundary value condition testing.
10    99            [SR1.3] Translate compliance constraints to requirements.
11    98            [CP1.1] Unify regulatory pressures.
12    98            [PT1.2] Feed results to the defect management and mitigation system.
13    96            [CMVM1.2] Identify software defects found in operations monitoring and feed them back to development.
14    93            [CMVM2.2] Track software bugs found in operations through the fix process.
15    92            [CMVM2.1] Have emergency response.
16    91            [SM1.1] Publish process and evolve as necessary.
17    90            [SR1.1] Create security standards.
18    88            [SR1.2] Create a security portal.
19    88            [CP1.3] Create policy.
20    88            [PT1.3] Use penetration testing tools internally.

TABLE 5. TOP 20 ACTIVITIES BY OBSERVATION COUNT. Shown here are the most commonly observed activities in the
BSIMM12 data.

GOVERNANCE
GOVERNANCE: STRATEGY & METRICS (SM)
The Strategy & Metrics practice encompasses planning, assigning roles and responsibilities, identifying software
security goals, determining budgets, and identifying metrics and software release conditions.

SM LEVEL 1
[SM1.1: 91] Publish process and evolve as necessary.
The process for addressing software security is published and broadcast to all stakeholders so that everyone knows
the plan. Goals, roles, responsibilities, and activities are explicitly defined. Most organizations pick an existing
methodology, such as the Microsoft SDL or the Synopsys Touchpoints, then tailor it to meet their needs. Security
activities, such as those grouped into an SSDL process, are adapted to software lifecycle processes (e.g., waterfall,
agile, CI/CD, DevOps) so activities will evolve with both the organization and the security landscape. In many cases,
the process is defined by the SSG and only published internally; it doesn’t need to be publicly promoted outside
the firm to have the desired impact. In addition to publishing the written process, some firms also encode it into an
application lifecycle management (ALM) tool as software-defined workflow (see [SM3.4 Integrate software-defined
lifecycle governance]).
[SM1.3: 81] Educate executives on software security.
Executives are regularly shown the ways malicious actors attack software and the negative business impacts
those attacks can have on the organization. This education goes past the reporting of open and closed defects
to show what other organizations are doing to mature software security, including how they deal with the risks
of adopting emerging engineering methodologies with no oversight. By understanding both the problems and
their proper resolutions, executives can support the SSI as a risk management necessity. In its most dangerous
form, security education arrives courtesy of malicious hackers or public data exposure incidents. Preferably, the
SSG will demonstrate a worst-case scenario in a controlled environment with the permission of all involved (e.g.,
by showing working exploits and their business impact). In some cases, presentation to the Board can help garner
resources for new or ongoing SSI efforts. For example, demonstrating the need for new skill-building training in
evolving areas, such as DevOps groups using cloud-native technologies, can help convince leadership to accept SSG
recommendations when they might otherwise be ignored in favor of faster release dates or other priorities. Bringing
in an outside guru is often helpful when seeking to bolster executive attention.
[SM1.4: 118] Implement lifecycle instrumentation and use to define governance.
The software security process includes conditions for release (such as gates, checkpoints, guardrails, milestones,
etc.) at one or more points in a software lifecycle. The first two steps toward establishing security-specific release
conditions are to identify locations that are compatible with existing development practices and to then begin
gathering the input necessary to make a go/no-go decision, such as risk-ranking thresholds or defect data.
Importantly, the conditions might not be verified at this stage—for example, the SSG can collect security testing
results for each project prior to release, then provide their informed opinion on what constitutes sufficient testing or
acceptable test results without trying to stop a project from moving forward. In CI/CD environments, shorter release
cycles often require creative approaches to collecting the right evidence and rely heavily on automation. The idea of
defining governance checks in the process first and enforcing them later is extremely helpful in moving development
toward software security without major pain (see [SM2.2 Verify release conditions with measurements and track
exceptions]). Socializing the conditions and then verifying them once most projects already know how to succeed
is a gradual approach that can motivate good behavior without requiring it.
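
A minimal sketch of such instrumentation, assuming hypothetical risk rankings, thresholds, and scan output, might simply record an advisory verdict at a release checkpoint rather than blocking anything:

```python
# Minimal sketch of lifecycle instrumentation without enforcement: at a release
# checkpoint, collect defect data, compare it to a risk-ranked threshold, and
# record an advisory opinion rather than stopping the release. File names and
# thresholds are hypothetical.
import json

# Allowed open defect counts by risk ranking of the application.
THRESHOLDS = {"high": {"critical": 0, "high": 2},
              "medium": {"critical": 1, "high": 5},
              "low": {"critical": 2, "high": 10}}

def advisory_verdict(app_risk: str, scan_results_path: str) -> dict:
    with open(scan_results_path) as f:
        counts = json.load(f)              # e.g., {"critical": 1, "high": 3, "medium": 12}
    limits = THRESHOLDS[app_risk]
    exceeded = {sev: counts.get(sev, 0) for sev in limits if counts.get(sev, 0) > limits[sev]}
    return {"app_risk": app_risk,
            "meets_conditions": not exceeded,
            "exceeded": exceeded}          # recorded for the SSG, not used as a gate (yet)

if __name__ == "__main__":
    print(advisory_verdict("high", "scan_summary.json"))
```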

SM LEVEL 2
[SM2.1: 63] Publish data about software security internally and drive change.
To facilitate improvement, data are published internally about the state of software security within the organization.
This information might come in the form of a dashboard with metrics for executives and software development
management. Sometimes, these published data won’t be shared with everyone in the firm but only with relevant
stakeholders who then drive change in the organization. In other cases, open book management and data published
to all stakeholders help everyone know what’s going on, the philosophy being that sunlight is the best disinfectant.
If the organization’s culture promotes internal competition between groups, this information can add a security
dimension. Increasingly, security telemetry is used to gather measurements quickly and accurately, and might
initially focus less on historical trends (e.g., bugs per release) and more on speed (e.g., time to fix) and quality (e.g.,
defect density). Some SSIs might publish these data primarily for software development management in engineering
groups within pipeline platform dashboards, democratizing measurement for developer self-improvement.
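
As one hedged example of the telemetry-driven measurement described here, the sketch below computes a simple speed metric (median days to fix) from a hypothetical export of security defect records:

```python
# Minimal sketch of turning security telemetry into a speed metric: median time
# to fix, computed from a hypothetical export of security defect records.
from datetime import datetime
from statistics import median

defects = [  # hypothetical export: when each security defect was opened and resolved
    {"opened": "2021-03-01", "resolved": "2021-03-04"},
    {"opened": "2021-03-02", "resolved": "2021-03-10"},
    {"opened": "2021-03-05", "resolved": "2021-03-06"},
]

def days_to_fix(defect) -> int:
    opened = datetime.fromisoformat(defect["opened"])
    resolved = datetime.fromisoformat(defect["resolved"])
    return (resolved - opened).days

print("median days to fix:", median(days_to_fix(d) for d in defects))
```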

[SM2.2: 60] Verify release conditions with measurements and track exceptions.
Security release conditions (or gates, checkpoints, guardrails, milestones, etc.) are verified for every project (e.g.,
code change, infrastructure access control changes, deployment blueprints), so each project must either meet an
established measure or obtain a waiver in order to move forward normally, with the SSG tracking exceptions. In
some cases, measures are directly associated with regulations, contractual agreements, and other obligations, with
exceptions tracked as required by statutory or regulatory drivers. In other cases, measures yield some manner of
KPIs that are used to govern the process. Automatically giving software a passing grade or granting waivers without
due consideration defeats the purpose of verifying conditions. Even seemingly innocuous software projects must
successfully satisfy the prescribed security conditions in order to progress to or remain in production. Similarly, APIs,
frameworks, libraries, bespoke code, microservices, container configurations, and so on are all software that must
satisfy security release conditions. It’s possible, and often very useful, to have verified the conditions both before and
after the development process itself. In modern development environments, the measurement process for conditions
will increasingly become automated (see [SM3.4 Integrate software-defined lifecycle governance]).
[SM2.3: 60] Create or grow a satellite.
There is a collection of people scattered across the organization who show an above-average level of security interest
or skill—a satellite—and who contribute software security expertise to development, QA, and operations teams.
Forming this social network of advocates, sometimes referred to as champions, is a good step toward scaling security
into software engineering. One way to build the initial group is to track the people who stand out during introductory
training courses; see [T3.6 Identify new satellite members through observation]. Another way is to ask for volunteers.
In a more top-down approach, initial satellite membership is assigned to ensure good coverage of development
groups, but ongoing membership is based on actual performance. A strong satellite is a good sign of a mature SSI.
The satellite can act as a sounding board for new SSG projects and, in new or fast-moving technology areas, help
combine software security skills with domain knowledge that might be under-represented in the SSG or engineering
teams. Agile coaches, scrum masters, and DevOps engineers can make particularly useful satellite members,
especially for detecting and removing process friction. In some agile environments, satellite-led efforts are being
replaced by automation.
[SM2.6: 62] Require security sign-off prior to software release.
The organization has an initiative-wide process for accepting security risk and documenting accountability, with a risk
owner signing off on the state of all software prior to release. The sign-off policy might require the head of a business
unit to sign off on critical vulnerabilities that have not been mitigated or on SSDL steps that have been skipped, for
example. In addition to internally-developed code being deployed in data centers, the sign-off policy must also apply
to outsourced projects and to projects that will be deployed in external environments, such as the cloud, and must
also account for all the new kinds of code that aren’t applications (e.g., container configurations, individual APIs,
hardening scripts). Informal or uninformed risk acceptance alone isn’t a security sign-off because the act of accepting
risk is more effective when it’s formalized (e.g., with a signature, a form submission, or something similar) and
captured for future reference. Similarly, simply stating that certain projects don’t need sign-off at all won’t achieve the
desired risk management results. In some cases, however, the risk owner can provide the sign-off on a particular set
of software project acceptance criteria, which are then implemented in automation to provide governance-as-code
(see [SM3.4 Integrate software-defined lifecycle governance]), but there must be an ongoing verification that
the criteria remain accurate and the automation is actually working.
[SM2.7: 64] Create evangelism role and perform internal marketing.
The SSG builds support for software security throughout the organization through ongoing evangelism. This internal
marketing function, often performed by a variety of stakeholder roles, keeps executives and others up to date on the
magnitude of the software security problem and the elements of its solution. A scrum master familiar with security,
for example, could help teams adopt better software security practices as they transform to an agile methodology.
Similarly, a cloud expert could demonstrate the changes needed in security architecture and testing for software to
be deployed in serverless environments. Evangelists can increase understanding and build credibility by giving talks
to internal groups (including executives), publishing roadmaps, authoring technical papers for internal consumption,
or creating a collection of papers, books, and other resources on an internal website and promoting its use.

SM LEVEL 3
[SM3.1: 22] Use a software asset tracking application with portfolio view.
The SSG uses centralized tracking automation to chart the progress of every piece of software and deployable artifact
in its purview, regardless of development methodology. The automation records the security activities scheduled, in
progress, and completed, incorporating results from SSDL activities even when they happen in a tight loop or during
deployment. The combined inventory and security posture view enables timely decision-making. The SSG uses the
automation to generate portfolio reports for multiple metrics and, in many cases, publishes these data at least among
executives. Depending on the culture, this can cause interesting effects via internal competition. As an initiative
matures and activities become more distributed, the SSG uses the centralized reporting system to keep track of all
the moving parts.
[SM3.2: 10] SSI efforts are part of external marketing.
To build external awareness, the SSG helps market the SSI beyond internal teams. In this way, software security can
grow its risk reduction exercises into a competitive advantage or market differentiator. The SSG might publish papers
or books about its software security capabilities or have a public blog. It might provide details at external conferences
or trade shows. In some cases, a complete SSDL methodology can be published and promoted outside the firm, and
governance-as-code concepts can make interesting case studies. Regardless of venue, the process of sharing details
externally and inviting critique is used to bring new perspectives into the firm.
[SM3.3: 21] Identify metrics and use them to drive resourcing.
The SSG and its management choose the metrics that define and measure SSI progress in quantitative terms. These
metrics are reviewed on a regular basis and drive the initiative’s budgeting and resource allocations, so simple counts
and out-of-context measurements won’t suffice here. On the technical side, one such metric could be defect density,
a reduction of which could be used to show a decreasing cost of remediation over time, assuming, of course, that
testing depth has kept pace with software changes. Recall that, in agile methodologies, metrics are best collected
early and often using event-driven processes with telemetry rather than calendar-driven data collection. The key is
to tie security results to business objectives in a clear and obvious fashion in order to justify resourcing. Because the
concept of security is already tenuous to many businesspeople, making an explicit tie-in can be helpful.
[SM3.4: 6] Integrate software-defined lifecycle governance.
Organizations begin replacing traditional document-, presentation-, and spreadsheet-based lifecycle management
with software-based delivery platforms. Humans, sometimes aided by tools, are no longer the primary drivers of
progression from each software lifecycle phase to the next. Instead, organizations rely on automation to drive the
management and delivery process with ALM/ADLM software such as Spinnaker or pipeline platform software like
GitHub, and humans participate asynchronously (and often optionally), like services. Automation often extends
beyond the scope of CI/CD to include functional and nonfunctional aspects of delivery, including health checks, cut-
over on failure, rollback to known-good software, defect discovery and management, compliance verification, and a
way to ensure adherence to policies and standards. Some organizations are also evolving their lifecycle management
approach by integrating their compliance and defect discovery data to begin moving from a series of point-in-time
go/no-go decisions (e.g., release conditions) to a future state of continuous accumulation of assurance data (see
[CMVM3.6 Publish risk data for deployable artifacts]). Lifecycle governance extends beyond defect discovery and often
includes incorporation of intelligence feeds and third-party security research, vulnerability disclosure and patching
processes, as well as other activities.
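
A minimal governance-as-code sketch, with entirely hypothetical evidence records and artifact names, shows the basic shape of letting automation decide whether an artifact may progress to the next lifecycle phase:

```python
# Minimal sketch of software-defined lifecycle governance: promotion to the next
# lifecycle phase is decided by automation that reads recorded evidence (defect
# verdict, compliance checks, risk-owner sign-off) instead of a meeting. All
# evidence sources and the artifact name are hypothetical.
import json

REQUIRED_EVIDENCE = ["defect_verdict", "compliance_checks", "risk_sign_off"]

def load_evidence(artifact_id: str) -> dict:
    with open(f"evidence/{artifact_id}.json") as f:
        return json.load(f)

def may_promote(artifact_id: str) -> bool:
    evidence = load_evidence(artifact_id)
    # Every required evidence type must be present and marked as passing.
    return all(evidence.get(item, {}).get("status") == "pass" for item in REQUIRED_EVIDENCE)

if __name__ == "__main__":
    artifact = "payments-api:1.14.2"      # hypothetical deployable artifact
    print("promote" if may_promote(artifact) else "hold for exception handling")
```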

GOVERNANCE: COMPLIANCE & POLICY (CP)


The Compliance & Policy practice is focused on identifying controls for compliance regimens such as PCI DSS and
HIPAA, developing contractual controls such as SLAs to help manage COTS software risk, setting organizational
software security policy, and auditing against that policy.

CP LEVEL 1
[CP1.1: 98] Unify regulatory pressures.
If the business or its customers are subject to regulatory or compliance drivers such as PCI security standards;
GLBA, SOX, and HIPAA in the US; or GDPR in the EU, the SSG acts as a focal point for understanding the constraints
such drivers impose on software security. In some cases, the SSG creates or collaborates on a unified approach
that removes redundancy and conflicts from overlapping compliance requirements. A formal approach will map
applicable portions of regulations to controls applied to software to explain how the organization complies. As an
alternative, existing business processes run by legal, product management, or other risk and compliance groups
outside the SSG could also serve as the regulatory focal point, with the SSG providing software security knowledge.
A unified set of software security guidance for meeting regulatory pressures ensures that compliance work is
completed as efficiently as possible. Some firms move on to influencing the regulatory environment directly by
becoming involved in standards groups exploring new technologies and mandates.

[CP1.2: 114] Identify PII obligations.
The SSG plays a key role in identifying and describing PII obligations stemming from regulation and customer
expectations, then uses this information to promote privacy best practices and to help translate these
obligations into software requirements. The way software handles PII might be explicitly regulated, but even if it isn’t,
privacy is an important topic. For example, if the organization processes credit card transactions, the SSG will help in
identifying the constraints that the PCI DSS places on the handling of cardholder data and then inform all stakeholders.
Note that outsourcing to hosted environments (e.g., the cloud) doesn’t relax PII obligations and can even increase
the difficulty of recognizing and meeting all associated obligations. Also note that firms creating software products
that process PII (where those firms don’t necessarily handle PII directly) might meet this need by providing privacy
controls and guidance for their customers. Evolving consumer privacy expectations, the proliferation of “software is in
everything,” and data scraping and correlation (e.g., social media) add additional expectations for PII protection.
[CP1.3: 88] Create policy.
The SSG guides the rest of the organization by creating or contributing to software security policy that satisfies
internal, regulatory, and customer-driven security requirements. This policy is what is permitted and denied at the
initiative level; if it’s not mandatory, it’s not policy. It includes a unified approach for satisfying the (potentially lengthy)
list of security drivers at the governance level, so project teams can avoid keeping up with the details involved in
complying with all applicable regulations or other mandates. Likewise, project teams won’t need to relearn customer
security requirements on their own. Architecture standards and coding guidelines aren’t examples of policy, but
policy that prescribes and mandates their use for certain software categories falls under that umbrella. In many
cases, policy statements are translated into automation to provide governance-as-code, not just a process enforced
by humans, but even policy that’s been automated must be mandatory. In some cases, policy will be documented
exclusively as governance-as-code, often as tool configuration, but must still be readily readable, auditable, and
editable by humans. In some cases, satellite practitioners or similar roles are innovating and driving SSI policy locally
in engineering, where effort around new topics—codifying decisions around, for example, cloud-native technologies—
can rekindle interest in setting policy. Similarly, it might be necessary to explain what can and can’t be automated into
CI/CD pipelines (see [SM3.4 Integrate software-defined lifecycle governance]).

CP LEVEL 2
[CP2.1: 55] Build PII inventory.
The organization identifies and tracks the kinds of PII processed or stored by each of its systems, along with their
associated data repositories. In general, simply noting which applications process PII isn’t enough; the type of PII and
where it is stored are necessary so the inventory can be easily referenced in critical situations. This usually includes
making a list of databases that would require customer notification if breached or a list to use in crisis simulations
(see [CMVM3.3 Simulate software crises]). A PII inventory might be approached in two ways: starting with each
individual application by noting its PII use or starting with particular types of PII and noting the applications that
touch each type. System architectures have evolved such that PII will flow into cloud-based service and endpoint
device ecosystems, and come to rest there (e.g., content delivery networks, social networks, mobile devices, IoT
devices), making it tricky to keep an accurate PII inventory.
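
The sketch below shows one way a PII inventory might be represented and queried, using hypothetical applications, PII types, and data stores; the query answers the breach-notification question mentioned above:

```python
# Minimal sketch of a PII inventory keyed by application, recording which kinds
# of PII each application touches and where that PII comes to rest. The query
# produces the list of data stores that would require customer notification if
# breached. All entries and classifications are hypothetical.
PII_INVENTORY = [
    {"application": "checkout", "pii_types": ["name", "email", "card_number"],
     "data_stores": ["orders-db", "payments-cdn-logs"]},
    {"application": "support-portal", "pii_types": ["name", "email"],
     "data_stores": ["tickets-db"]},
]

NOTIFIABLE_TYPES = {"card_number", "ssn", "health_record"}

def notifiable_data_stores(inventory) -> set:
    stores = set()
    for entry in inventory:
        if NOTIFIABLE_TYPES & set(entry["pii_types"]):
            stores.update(entry["data_stores"])
    return stores

print(sorted(notifiable_data_stores(PII_INVENTORY)))
```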
[CP2.2: 49] Require security sign-off for compliance-related risk.
The organization has a formal compliance risk acceptance and accountability process that addresses all software
development projects. In that process, the SSG acts as an advisor while the risk owner signs off on the software’s state
prior to release based on adherence to documented criteria. For example, the sign-off policy might require the head
of the business unit to sign off on compliance issues that haven’t been mitigated or on compliance-related SSDL
steps that have been skipped. Sign-off is explicit and captured for future reference, with any exceptions tracked, even
under the fastest of application lifecycle methodologies. Note that an application without security defects might
still be noncompliant, so clean security testing results are not a substitute for a compliance sign-off. Even in DevOps
organizations where engineers have the technical ability to release software, there is still a need for a deliberate risk
acceptance step even if the criteria are embedded in automation (see [SM3.4 Integrate software-defined lifecycle
governance]). In cases where the risk owner signs off on a particular set of compliance acceptance criteria that are
then implemented in automation to provide governance-as-code, there must be an ongoing verification that the
criteria remain accurate and the automation is actually working.

[CP2.3: 67] Implement and track controls for compliance.
The organization can demonstrate compliance with applicable requirements because its SSDL is aligned with the
control statements developed by the SSG in collaboration with compliance stakeholders (see [CP1.1 Unify regulatory
pressures]). The SSG collaborates with stakeholders to track controls, navigate problem areas, and ensure auditors
and regulators are satisfied. The SSG might be able to remain in the background because following the SSDL
automatically generates the desired compliance evidence predictably and reliably. Increasingly, the DevOps approach
of embedding compliance controls in automation shows up within software-defined infrastructure and networks
rather than in human process and manual intervention. A firm doing this properly can explicitly associate satisfying
its compliance concerns with following its SSDL.
[CP2.4: 54] Include software security SLAs in all vendor contracts.
Software vendor contracts include an SLA to ensure that the vendor won’t jeopardize the organization’s compliance
story or SSI. Each new or renewed contract contains provisions requiring the vendor to address software security and
deliver a product or service compatible with the organization’s security policy (see [SR2.5 Create SLA boilerplate]). In
some cases, open source licensing concerns initiate the vendor management process, which can open the door for
additional software security language in the SLA. Typical provisions set requirements for policy conformance, incident
management, training, defect management, and response times for addressing software security issues. Traditional
IT security requirements and a simple agreement to allow penetration testing or another defect discovery activity
aren’t sufficient here.
[CP2.5: 74] Ensure executive awareness of compliance and privacy obligations.
To gain executive buy-in around compliance and privacy activities, the SSG provides executives with plain-language
explanations of the organization’s compliance and privacy obligations, along with the potential consequences
of failing to meet those obligations. For some organizations, explaining the direct cost and likely fallout from a
compliance failure or data breach can be an effective way to broach the subject. For others, having an outside expert
address the Board works because some executives value an outside perspective more than an internal one. A sure
sign of proper executive buy-in is an acknowledgment of the need and adequate allocation of resources to meet
those obligations. While useful for bootstrapping efforts, be aware that the sense of urgency typically following a
breach will not last.

CP LEVEL 3
[CP3.1: 24] Create a regulator compliance story.
The SSG has information regulators want, so a combination of written policy, controls documentation, and artifacts
gathered through the SSDL gives the SSG the ability to demonstrate the organization’s software security compliance
story without a fire drill for every audit. Often, regulators, auditors, and senior management will be satisfied with
the same kinds of reports that can be generated directly from various tools. In some cases, particularly where
organizations leverage shared responsibility cloud services, the organization will require additional information from
vendors about how that vendor’s controls support organizational compliance needs. It will often be necessary to
normalize information that comes from disparate sources. While they are often the biggest, governments aren’t the
only regulators of behavior.
[CP3.2: 18] Impose policy on vendors.
Vendors are required to adhere to the same policies used internally and must submit evidence that their software
security practices pass muster. For a given organization, vendors might comprise cloud providers, middleware
providers, virtualization providers, container and orchestration providers, bespoke software creators, contractors, and
many more, and each might be held to different policy requirements. Evidence of their compliance could include
results from SSDL activities or from tests built directly into automation or infrastructure. Vendors might attest to the
fact that they perform certain SSDL processes. Policy enforcement might be through a point-in-time review (like one
that verifies acceptance criteria), through automated checks (such as those applied to pull requests, committed
artifacts like containers, or similar), or a matter of convention and protocol (e.g., services cannot connect unless
particular security settings are correct and identifying certificates are present).

[CP3.3: 6] Drive feedback from software lifecycle data back to policy.
Information from the software lifecycle is routinely fed back into the policy creation and maintenance process to
prevent defects from occurring in the first place and to help strengthen governance-as-code practices (see [SM3.4
Integrate software-defined lifecycle governance]). With this process, blind spots can be eliminated by mapping
them to trends in SSDL failures. The regular appearance of inadequate architecture analysis, recurring vulnerabilities,
ignored security release conditions, or the wrong firm choice for carrying out a penetration test can expose policy
weakness (see [CP1.3 Create policy]). As an example, lifecycle data might indicate that policies impose too much
bureaucracy by introducing friction that prevents engineering from meeting the expected delivery cadence. Rapid
technology evolution might also create policy gaps that must be addressed. Over time, policies become more
practical and easier to carry out (see [SM1.1 Publish process and evolve as necessary]). Ultimately, policies are refined
with SSDL data to enhance and improve a firm’s effectiveness.

GOVERNANCE: TRAINING (T)


Training has always played a critical role in software security because organizational stakeholders across GRC, legal,
engineering, operations, and other groups often start with little security knowledge.

T LEVEL 1
[T1.1: 76] Conduct software security awareness training.
To promote a culture of software security throughout the organization, the SSG conducts periodic software security
awareness training. As examples, the training might be delivered via SSG members, an outside firm, the internal
training organization, or e-learning. Here, the course content doesn’t necessarily have to be tailored for a specific
audience—for example, all developers, QA engineers, and project managers could attend the same “Introduction
to Software Security” course—but this effort should be augmented with a tailored approach that addresses the
firm’s culture explicitly, which might include the process for building security in, avoiding common mistakes, and
technology topics such as CI/CD and DevSecOps. Generic introductory courses that cover basic IT or high-level
security concepts don’t generate satisfactory results. Likewise, awareness training aimed only at developers and not
at other roles in the organization is insufficient.
[T1.7: 53] Deliver on-demand individual training.
The organization lowers the burden on students and reduces the cost of delivering training by offering on-
demand training for individuals across roles. The most obvious choice, e-learning, can be kept up to date through
a subscription model, but an online curriculum must be engaging and relevant to the students in various roles to
achieve its intended purpose. Ineffective (e.g., aged, off-topic) training or training that isn’t used won’t create any
change. Hot topics like containerization and security orchestration and new delivery styles such as gamification will
attract more interest than boring policy discussions. For developers, it’s possible to provide training directly through
the IDE right when it’s needed, but in some cases, building a new skill (such as cloud security or threat modeling)
might be better suited for instructor-led training, which can also be provided on demand.
[T1.8: 46] Include security resources in onboarding.
The process for bringing new hires into an engineering organization requires timely completion of a training module
about software security. The generic new hire process usually covers topics like picking a good password and avoiding
phishing, but this orientation period should be enhanced to cover topics such as how to create, deploy, and operate
secure code, the SSDL, and internal security resources (see [SR1.2 Create a security portal]). The objective is to ensure
that new hires contribute to the security culture as soon as possible. Although a generic onboarding module is useful,
it doesn’t take the place of a timely and more complete introductory software security course.

T LEVEL 2
[T2.5: 39] Enhance satellite through training and events.
The SSG strengthens the satellite network (see [SM2.3 Create or grow a satellite]) by inviting guest speakers or holding
special events about advanced topics (e.g., the latest software security techniques for DevOps or serverless code
technologies). This effort is about providing customized training to the satellite so that it can fulfill its assigned
responsibilities, not about inviting satellite members to routine brown bags or signing them up for standard
computer-based training. Similarly, a standing conference call with voluntary attendance won’t get the desired
results, which are as much about building camaraderie as they are about sharing knowledge and organizational
efficiency. Face-to-face meetings are by far the most effective, even if they happen only once or twice a year and some
participants must attend over videoconferencing. In teams with many geographically dispersed and work-from-home
members, simply turning on cameras and ensuring everyone gets a chance to speak makes a substantial difference.

[T2.8: 27] Create and use material specific to company history.
To make a strong and lasting change in behavior, training includes material specific to the company’s history of
software security challenges. When participants can see themselves in a problem, they’re more likely to understand
how the material is relevant to their work as well as when and how to apply what they’ve learned. One way to do
this is to use noteworthy attacks on the company’s software as examples in the training curriculum. Both successful
and unsuccessful attacks, as well as notable results from penetration tests and red team exercises, can make good
teachable moments. Stories from company history can help steer training in the right direction, but only if those
stories are still relevant and not overly censored. This training shouldn’t cover platforms not used by developers
(developers orchestrating containers probably won’t care about old virtualization problems) or examples of problems
relevant only to languages no longer in common use (e.g., Go developers probably don’t need to understand how
buffer overflows happen in C).
[T2.9: 35] Deliver role-specific advanced curriculum.
Software security training goes beyond building awareness (see [T1.1 Conduct software security awareness training])
by enabling students to incorporate security practices into their work. This training is tailored to cover the tools,
technology stacks, development methodologies, and bugs that are most relevant to students. An organization could
offer tracks for its engineers, for example: one each for architects, developers, operations, site reliability engineers,
and testers. Tool-specific training is also commonly needed in such a curriculum. While perhaps more concise than
engineering training, role-specific training is necessary for many stakeholders within an organization, including
product management, executives, and others. In any case, the training must be taken by a broad enough audience
to build the desired skillsets.

T LEVEL 3
[T3.1: 6] Reward progression through curriculum.
Knowledge is its own reward, but progression through the security curriculum brings other benefits, too, such as
career advancement. The reward system can be formal and lead to a certification or an official mark in the human
resources system, or it can be less formal and include motivators such as documented praise at annual review time.
Involving a corporate training department and/or human resources team can make security’s impact on career
progression more obvious, but the SSG should continue to monitor security knowledge in the firm and not cede
complete control or oversight. Coffee mugs and t-shirts can build morale, but it usually takes the possibility of real
career progression to change behavior.
[T3.2: 23] Provide training for vendors and outsourced workers.
Vendors and outsourced workers receive the same level of software security training given to employees. Spending
time and effort helping suppliers get security right at the outset is much easier than trying to determine what went
wrong later on, especially if the development team has moved on to other projects. Training individual contractors
is much more natural than training entire outsourced firms and is a reasonable place to start. It’s important that
everyone who works on the firm’s software has an appropriate level of training that increases their capability of
meeting the software security expectations for their role, regardless of their employment status. Of course, some
vendors and outsourced workers might have received adequate training from their own firms, but that should
always be verified.
[T3.3: 23] Host software security events.
The organization hosts security events featuring external speakers and content in order to strengthen its security
culture. Good examples of such events are Intel iSecCon and AWS re:Inforce, which invite all employees, feature
external presenters, and focus on helping engineering create, deploy, and operate better code. Employees benefit
from hearing outside perspectives, especially those related to fast-moving technology areas, and the organization
benefits from putting its security credentials on display (see [SM3.2 SSI efforts are part of external marketing]).
Events open only to small, select groups won’t result in the desired culture change across the organization.
[T3.4: 24] Require an annual refresher.
Everyone involved in the SSDL is required to take an annual software security refresher course. This course keeps
the staff up to date on the organization’s security approach and ensures the organization doesn’t lose focus due to
turnover, evolving methodologies, or changing deployment models. The SSG might give an update on the security
landscape and explain changes to policies and standards. A refresher could also be rolled out as part of a firm-wide
security day or in concert with an internal security conference, but it’s useful only if it’s fresh. Sufficient coverage of
topics and changes from the previous year will likely comprise a significant amount of content.

[T3.5: 9] Establish SSG office hours.
The SSG or equivalent stakeholder offers help to anyone during an advertised meet-up period or regularly scheduled
office hours. By acting as an informal resource for people who want to solve security problems, the SSG leverages
teachable moments and emphasizes the carrot over the stick approach to security best practices. Office hours might
be hosted one afternoon per week by a senior SSG member, perhaps inviting briefings from product or application
groups working on hard security problems. Slack and other messaging applications can capture questions 24x7 as
a good way to seed office hours conversations, functioning as an office hours platform when appropriate subject-
matter experts are consistently part of the conversation and ensuring that the answers generated align with SSI
expectations.
[T3.6: 4] Identify new satellite members through observation.
Future satellite members (e.g., champions) are recruited by noting people who stand out during training courses, office
hours, capture-the-flag exercises, hack-a-thons, and other opportunities where they demonstrate skill and enthusiasm, and then
encouraging them to join the satellite. Pay particular attention to practitioners contributing code, security configuration
for orchestration, or defect discovery rules. The satellite often begins as an assigned collection of people scattered across
the organization who show an above-average level of security interest or advanced knowledge of new technology stacks
and development methodologies (see [SM2.3 Create or grow a satellite]). Identifying future members proactively is a
step toward creating a social network that speeds the adoption of security into software development and operations.
A group of enthusiastic and skilled volunteers will be easier to lead than a group that is drafted.

INTELLIGENCE
INTELLIGENCE: ATTACK MODELS (AM)
Attack Models capture information used to think like an attacker, including threat modeling inputs, abuse cases, data
classification, and technology-specific attack patterns.
AM LEVEL 1
[AM1.2: 77] Create a data classification scheme and inventory.
Security stakeholders in an organization agree on a data classification scheme and use it to inventory software,
delivery artifacts (e.g., containers), and associated persistent stores according to the kinds of data processed or
services called, regardless of deployment model (e.g., on- or off-premise). This allows applications to be prioritized
by their data classification. Many classification schemes are possible—one approach is to focus on PII, for example.
Depending on the scheme and the software involved, it could be easiest to first classify data repositories (see [CP2.1
Build PII inventory]) and then derive classifications for applications according to the repositories they use. Other
approaches to the problem include data classification according to protection of intellectual property, impact of
disclosure, exposure to attack, relevance to GDPR, and geographic boundaries.
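As a concrete illustration only (the asset names, data classes, and sensitivity weights below are hypothetical), a classification inventory can be as simple as a structured list that lets the SSG sort assets by the most sensitive data they touch:

# Hypothetical data classification inventory; asset names, classes, and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str          # application, container, or persistent store
    data_classes: set  # e.g., {"PCI", "PII", "PUBLIC"}
    deployment: str    # "cloud", "on-prem", etc.

INVENTORY = [
    Asset("payments-api", {"PII", "PCI"}, "cloud"),
    Asset("marketing-site", {"PUBLIC"}, "cloud"),
    Asset("hr-datastore", {"PII"}, "on-prem"),
]

SENSITIVITY = {"PCI": 3, "PII": 2, "PUBLIC": 1}

def priority(asset: Asset) -> int:
    # An asset inherits the sensitivity of the most sensitive data it processes or stores.
    return max(SENSITIVITY.get(c, 0) for c in asset.data_classes)

for asset in sorted(INVENTORY, key=priority, reverse=True):
    print(asset.name, sorted(asset.data_classes), asset.deployment)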
[AM1.3: 41] Identify potential attackers.
The SSG periodically identifies potential attackers in order to understand their motivations and abilities. The outcome
of this exercise could be a set of attacker profiles that includes outlines for categories of attackers and more detailed
descriptions for noteworthy individuals to be used in end-to-end design review (see [AA1.2 Perform design review
for high-risk applications]). In some cases, a third-party vendor might be contracted to provide this information.
Specific and contextual attacker information is almost always more useful than generic information copied from
someone else’s list. Moreover, a list that simply divides the world into insiders and outsiders won’t drive useful results.
Identification of attackers should also consider the organization’s evolving software supply chain, attack surface,
theoretical internal attackers, and contract staff.
[AM1.5: 61] Gather and use attack intelligence.
The SSG ensures the organization stays ahead of the curve by learning about new types of attacks and vulnerabilities.
Attending technical conferences and monitoring attacker forums, then correlating that information with what’s
happening in the organization (perhaps by leveraging automation to mine operational logs and telemetry) helps
the SSG learn more about emerging vulnerability exploitation. In many cases, a subscription to a commercial service
can provide a reasonable way of gathering basic attack intelligence related to applications, APIs, containerization,
orchestration, cloud environments, and so on. Regardless of its origin, attack information must be adapted to the
organization’s needs and made actionable and useful for a variety of consumers, which might include developers,
testers, DevOps, security operations, and reliability engineers, among others.



AM LEVEL 2
[AM2.1: 14] Build attack patterns and abuse cases tied to potential attackers.
The SSG prepares the organization for SSDL activities such as design review and penetration testing by working
with stakeholders to build attack patterns and abuse cases tied to potential attackers (see [AM1.3 Identify potential
attackers]). However, these resources don’t have to be built from scratch for every application in order to be useful;
rather, standard sets, such as the MITRE ATT&CK framework, might exist for applications with similar profiles, and
the SSG can add to the pile based on its own attack stories. For example, a story about an attack against a poorly
designed cloud-native application could lead to a containerization attack pattern that drives a new type of testing.
If a firm tracks the fraud and monetary costs associated with particular attacks, this information can in turn be used
to prioritize the process of building attack patterns and abuse cases. Evolving software architectures (e.g., zero trust,
serverless) might require organizations to evolve their attack pattern and abuse case creation approach and content.
[AM2.2: 11] Create technology-specific attack patterns.
The SSG facilitates technology-specific attack pattern creation by collecting and providing knowledge about attacks
relevant to the organization’s technologies. For example, if the organization’s cloud software relies on a cloud
vendor’s security apparatus (e.g., key and secrets management), the SSG can help catalog the quirks of the crypto
package and how it might be exploited. Attack patterns directly related to the security frontier (e.g., serverless)
can be useful here as well. It’s often easiest to start with existing generalized attack patterns to create the needed
technology-specific attack patterns, but simply adding “for microservices” at the end of a generalized pattern name,
for example, won’t suffice.
[AM2.5: 13] Maintain and use a top N possible attacks list.
The SSG periodically digests the ever-growing list of attack types, focuses the organization on prevention efforts for
a prioritized short list—the top N—and then uses it to drive change. This initial list almost always combines input
from multiple sources, both inside and outside the organization. Some organizations prioritize their list according
to perception of potential business loss while others might prioritize according to successful attacks against their
software. The top N list doesn’t need to be updated with great frequency, and attacks can be coarsely sorted. For
example, the SSG might brainstorm twice a year to create lists of attacks the organization should be prepared to
counter “now,” “soon,” and “someday.”
[AM2.6: 10] Collect and publish attack stories.
To maximize the benefit from lessons that don’t always come cheap, the SSG collects and publishes stories about
attacks against the organization’s software. Both successful and unsuccessful attacks can be noteworthy, and
discussing historical information about software attacks has the added effect of grounding software security in a
firm’s reality. This is particularly useful in training classes to help counter a generic approach that might be overly
focused on other organizations’ top N lists or outdated platform attacks (see [T2.8 Create and use material specific
to company history]). Hiding or overly sanitizing information about attacks from people building new systems fails
to garner any positive benefits from a negative happenstance.
[AM2.7: 16] Build an internal forum to discuss attacks.
The organization has an internal, interactive forum where the SSG, the satellite, incident response, and others discuss
attacks and attack methods. The discussion serves to communicate the attacker perspective to everyone. It’s useful
to include all successful attacks in the discussion, regardless of attack source, such as supply chain threats, internal
attackers, consultants, or bug bounty contributors. The SSG can also maintain an internal communications channel
that encourages subscribers to discuss the latest information on publicly known incidents. Dissections of attacks and
exploits that are relevant to a firm are particularly helpful when they spur discussion of development, infrastructure,
and other mitigations. Simply republishing items from public mailing lists doesn’t achieve the same benefits as active
and ongoing discussions, nor does a closed discussion hidden from those actually creating code. Everyone should feel
free to ask questions and learn about vulnerabilities and exploits (see [T3.5 Establish SSG office hours]).

AM LEVEL 3
[AM3.1: 5] Have a research group that develops new attack methods.
A research group works to identify and defang new classes of attacks before attackers even know that they exist.
Because the security implications of new technologies might not have been fully explored in the wild, doing it in-
house is sometimes the best way forward. This isn’t a penetration testing team finding new instances of known
types of weaknesses—it’s a research group that innovates new types of attacks. Some firms provide researchers time
to follow through on their discoveries using bug bounty programs or other means of coordinated disclosure (see
[CMVM3.7 Streamline incoming responsible vulnerability disclosure]). Others allow researchers to publish their
findings at conferences like DEF CON to benefit everyone.



[AM3.2: 4] Create and use automation to mimic attackers.
The SSG arms engineers, testers, and incident response with automation to mimic what attackers are going to do.
For example, a new attack method identified by an internal research group or a disclosing third party could require
a new tool, so the SSG could package the tool and distribute it to testers. The idea here is to push attack capability
past what typical commercial tools and offerings encompass, and then make that knowledge and technology easy
for others to use. Tailoring these new tools to a firm’s particular technology stacks and potential attackers increases
the overall benefit. When technology stacks and coding languages evolve faster than vendors can innovate, creating
tools and automation in-house might be the best way forward. In the DevOps world, these tools might be created
by engineering and embedded directly into toolchains and automation (see [ST3.6 Implement event-driven security
testing in automation]).
[AM3.3: 6] Monitor automated asset creation.
The SSG guides the implementation of technology controls that provide a continuously updated view of the various
network, machine, software, and related infrastructure assets being instantiated by engineering teams as part of their
ALM processes. To help ensure proper coverage, the SSG works with engineering teams to understand orchestration,
cloud configuration, and other self-service means of software delivery used to quickly stand up servers, databases,
networks, and entire clouds for software deployments. Monitoring the changes in application design (e.g., moving
a monolithic application to microservices) is also part of this effort. This monitoring requires a specialized effort—
normal system, network, and application logging and analysis won’t suffice. Success might require a multi-pronged
approach, including consuming orchestration and virtualization metadata, querying cloud service provider APIs, and
outside-in web crawling and scraping. As processes improve, the data will be helpful for threat modeling efforts (see
[AA1.1 Perform security feature review]).
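One of several possible collectors is the cloud provider's own API. The minimal sketch below, which assumes AWS credentials are already configured and uses the boto3 SDK, snapshots running EC2 instances so that successive snapshots can be diffed to spot assets stood up outside normal review; real implementations add other providers, asset types, and orchestration metadata:

# Minimal sketch: snapshot EC2 instances via the AWS API so newly created assets can be detected.
# Assumes AWS credentials and region are configured; other clouds and asset types need their own collectors.
import boto3

def snapshot_instances():
    ec2 = boto3.client("ec2")
    seen = {}
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                seen[inst["InstanceId"]] = {
                    "launched": str(inst["LaunchTime"]),
                    "owner": tags.get("team", "unknown"),
                }
    return seen

# Comparing today's snapshot with yesterday's highlights assets engineering stood up outside normal review.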

INTELLIGENCE: SECURITY FEATURES & DESIGN (SFD)


The Security Features & Design practice is charged with creating usable security patterns for major security controls
(meeting the standards defined in the Standards & Requirements practice), building middleware frameworks for
those controls, and creating and publishing proactive security guidance.
SFD LEVEL 1
[SFD1.1: 102] Integrate and deliver security features.
Rather than having each project team implement its own security features (e.g., authentication, role management,
key management, logging, cryptography, protocols), the SSG provides proactive guidance by acting as or facilitating
a clearinghouse of security features for engineering groups to use. These features might be discovered during SSDL
activities, created by the SSG or specialized development teams, or defined in configuration templates (e.g., cloud
blueprints) and delivered via mechanisms such as containers, microservices, and APIs. Generic security features
often have to be tailored for specific platforms. For example, each mobile and cloud platform will likely need its
own means by which users are authenticated and authorized, secrets are managed, and user actions are centrally
logged and monitored. Project teams benefit from implementations that come preapproved by the SSG, and the SSG
benefits by not having to repeatedly track down the kinds of subtle errors that often creep into security features. It’s
the implementation of these defined security features that generates real progress, not simply making the list.
[SFD1.2: 83] Engage the SSG with architecture teams.
Security is a regular topic in the organization’s software architecture discussions, with the architecture team taking
responsibility for security in the same way that it takes responsibility for performance, availability, scalability, and
resiliency. One way to keep security from falling out of these architecture discussions is to have an SSG member
participate. Increasingly, architecture discussions include developers and site reliability engineers who are governing
all types of software components, such as open source, APIs, containers, and cloud services. In other cases, enterprise
architecture teams have the knowledge to help the SSG create secure designs that integrate properly into corporate
design standards. Proactive engagement by the SSG is key to success here. In addition, it’s never safe for one team to
assume another team has addressed security requirements. For example, even moving a well-known system to the
cloud means reengaging the SSG.



SFD LEVEL 2
[SFD2.1: 33] Leverage secure-by-design components and services.
The SSG takes a proactive role in software design by building or providing pointers to approved secure-by-design
software components and services, whether created internally or available from service providers. In addition to
teaching by example, these resilient and reusable building blocks aid important efforts such as architecture analysis
and code review by making it easier to spot errors and avoid mistakes. In addition, these components and services
often have features (e.g., application identity, RBAC) that enable uniform usage across disparate environments. Prior
to approving and publishing secure-by-design software components and services, including open source and cloud
services, the SSG must carefully assess them for security. This assessment process to declare a component secure-
by-design is usually more rigorous and in-depth than that for typical projects. Similarly, the SSG might further take
advantage of this defined list by tailoring code review rules specifically for the components it offers (see [CR2.6 Use
automated tools with tailored rules]).
[SFD2.2: 55] Create capability to solve difficult design problems.
The SSG contributes to building resilient architectures by solving design problems unaddressed by organizational
security components or services, or by cloud service providers, thus minimizing the negative impact that security
has on other constraints (e.g., feature velocity). Involving the SSG in the design of a new protocol, microservice, or
architecture decision (e.g., containerization) enables timely analysis of the security implications of existing defenses
and identifies elements that should be refactored, duplicated, or avoided. Likewise, having a security architect
understand the security implications of moving a seemingly well-understood application to the cloud saves a lot
of headaches later. Designing for security up front is more efficient than analyzing an existing design for security
and refactoring when flaws are uncovered, so the SSG should be involved early in the new project process. The SSG
could also get involved in what could have historically been purely engineering discussions, as even rudimentary
(e.g., “Hello, world!”) use of cloud-native technologies requires configurations and other capabilities that have direct
implications on security posture. Note that some design problems will require specific expertise outside of the
SSG—even the best expert can’t scale to cover the needs of an entire software portfolio.

SFD LEVEL 3
[SFD3.1: 16] Form a review board or central committee to approve and maintain secure design patterns.
A review board or central committee formalizes the process of reaching and maintaining consensus on design needs
and security tradeoffs. Unlike a typical architecture committee focused on functions, this group focuses on providing
security guidance, often in the form of patterns, standards, features, or frameworks, and also periodically reviews
already published design guidance (especially around authentication, authorization, and cryptography) to ensure
that design decisions don’t become stale or out of date. A review board can help control the chaos associated with
adoption of new technologies when development groups might otherwise make decisions on their own without
engaging the SSG. Review board security guidance can also serve to inform outsourced software providers about
security expectations (see [CP3.2 Impose policy on vendors]).
[SFD3.2: 15] Require use of approved security features and frameworks.
Implementers take their security features and frameworks from an approved list or repository (see [SFD2.1 Leverage
secure-by-design components and services]). There are two benefits to this activity: developers don’t spend time
reinventing existing capabilities, and review teams don’t have to contend with finding the same old defects in new
projects or when new platforms are adopted. Essentially, the more a project uses proven components, the easier
testing, code review, and architecture analysis become (see [AA1.1 Perform security feature review]). Reuse is a
major advantage of consistent software architecture and is particularly helpful for agile development and velocity
maintenance in CI/CD pipelines. Packaging and applying required components facilitate the delivery of services
as software features (e.g., identity-aware proxies). Containerization makes it especially easy to package and reuse
approved features and frameworks (see [SE2.5 Use application containers to support security goals]).
[SFD3.3: 5] Find and publish secure design patterns from the organization.
The SSG fosters centralized design reuse by collecting secure design patterns (sometimes referred to as security
blueprints) from across the organization and publishing them for everyone to use. A section of the SSG website
could promote positive elements identified during threat modeling or architecture analysis so that good ideas are
spread. This process is formalized—an ad hoc, accidental noticing isn’t sufficient. In some cases, a central architecture
or technology team can facilitate and enhance this activity. Common design patterns accelerate development, so
it’s important to use secure design patterns not just for applications but for all software assets (microservices, APIs,
containers, infrastructure, and automation).



INTELLIGENCE: STANDARDS & REQUIREMENTS (SR)
The Standards & Requirements practice involves eliciting explicit security requirements from the organization,
determining which COTS tools to recommend, building standards for major security controls (such as authentication,
input validation, and so on), creating security standards for technologies in use, and creating a standards review board.

SR LEVEL 1
[SR1.1: 90] Create security standards.
The SSG meets the organization’s demand for security guidance by creating standards that explain the required way
to adhere to policy and carry out specific security-centric operations. A standard might, for example, describe how
to perform identity-based application authentication or how to implement transport-level security, perhaps with the
SSG ensuring the availability of a reference implementation. Standards often apply to software beyond the scope
of an application’s code, including container construction, orchestration (e.g., infrastructure-as-code), and service
mesh configuration. Standards can be deployed in a variety of ways to keep them actionable and relevant. They can
be automated into development environments (e.g., worked into an IDE or toolchain) or explicitly linked to code
examples and deployment artifacts (e.g., containers). In any case, to be considered standards, they must be adopted
and enforced.
[SR1.2: 88] Create a security portal.
The organization has a well-known central location for information about software security. Typically, this is an
internal website maintained by the SSG that people refer to for the latest and greatest on security standards and
requirements, as well as for other resources provided by the SSG (e.g., training). An interactive portal is better than
a static portal with guideline documents that rarely change. Organizations can supplement these materials with
mailing lists, chat channels, and face-to-face meetings. Development teams are increasingly putting software security
knowledge directly into toolchains and automation that is outside the organization (e.g., GitHub), but that does not
remove the need for SSG-led knowledge management.
[SR1.3: 99] Translate compliance constraints to requirements.
Compliance constraints are translated into software requirements for individual projects and communicated to
the engineering teams. This is a linchpin in the organization’s compliance strategy: by representing compliance
constraints explicitly with requirements and informing stakeholders, the organization demonstrates that compliance
is a manageable task. For example, if the organization routinely builds software that processes credit card
transactions, PCI DSS compliance plays a role in the SSDL during the requirements phase. In other cases, technology
standards built for international interoperability can include security guidance on compliance needs. Representing
these standards as requirements also helps with traceability and visibility in the event of an audit. It’s particularly
useful to codify the requirements into reusable code or artifact deployment specifications.

SR LEVEL 2
[SR2.2: 64] Create a standards review board.
The organization creates a review board to formalize the process used to develop standards and to ensure that all
stakeholders have a chance to weigh in. This review board could operate by appointing a champion for any proposed
standard, putting the onus on the champion to demonstrate that the standard meets its goals and to get buy-in and
approval from the review board. Enterprise architecture or enterprise risk groups sometimes take on the responsibility
of creating and managing standards review boards. When the standards are implemented directly as software, the
responsible champion might be a DevOps manager, release engineer, or whoever owns the associated deployment
artifact (e.g., orchestration code).
[SR2.4: 74] Identify open source.
Open source components included in the software portfolio and integrated at runtime are identified and reviewed to
understand their dependencies. Organizations use a variety of tools and metadata provided by delivery pipelines to
discover old component versions with known vulnerabilities, as well as cases where their software relies on multiple versions of the
same component. Automated tools for finding open source, whether whole components or large chunks of borrowed
code, are one way to approach this activity. Some software development pipeline platforms, container registries, and
middleware platforms have begun to provide this visibility as metadata resulting from behind-the-scenes artifact
scanning. A manual review or a process that relies solely on developers asking for permission does not generate
satisfactory results. Some organizations combine composition analysis results from multiple phases of the
software lifecycle in order to get a more complete and accurate view of the open source being included in
production software.
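For illustration only, the sketch below inventories pinned Python dependencies by walking a repository for requirements files and flags components used at multiple versions; actual programs typically rely on software composition analysis tools and pipeline metadata rather than scripts like this:

# Illustrative only: inventory pinned dependencies from requirements files across a repository.
from collections import defaultdict
from pathlib import Path

def inventory_requirements(root="."):
    versions = defaultdict(set)
    for path in Path(root).rglob("requirements*.txt"):
        for line in path.read_text().splitlines():
            line = line.split("#")[0].strip()   # drop comments
            if "==" in line:
                name, version = line.split("==", 1)
                versions[name.strip().lower()].add(version.strip())
    return versions

if __name__ == "__main__":
    for name, vers in sorted(inventory_requirements().items()):
        flag = " <-- multiple versions in use" if len(vers) > 1 else ""
        print(f"{name}: {sorted(vers)}{flag}")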



[SR2.5: 55] Create SLA boilerplate.
The SSG works with the legal department to create standard SLA boilerplate for use in contracts with vendors and
outsource providers (including cloud providers) to require software security efforts on their part. The legal department
understands that the boilerplate also helps prevent compliance and privacy problems. Under the agreement, vendors
and outsource providers must meet company-mandated software security standards (see [CP2.4 Include software
security SLAs in all vendor contracts]). Boilerplate language might call for objective third-party insight into software
security efforts, such as BSIMMsc measurements or BSIMM scores.

SR LEVEL 3
[SR3.1: 35] Control open source risk.
The organization has control over its exposure to the risks that come along with using open source components
and all the involved dependencies, including dependencies integrated at runtime. The use of open source could
be restricted to predefined projects or to a short list of open source versions that have been through an approved
security screening process, have had unacceptable vulnerabilities remediated, and are made available only through
approved internal repositories and containers. For some use cases, policy might preclude any use of open source.
The legal department often spearheads additional open source controls due to the “viral” license problem associated
with GPL code. SSGs that partner with and educate the legal department on open source security risks can help
move an organization to improve its open source risk management practices, which must be applied across the
software portfolio to be effective.
[SR3.2: 13] Communicate standards to vendors.
The SSG works with vendors to educate them and promote the organization’s security standards. A healthy
relationship with a vendor isn’t guaranteed through contract language alone (see [CP2.4 Include software security
SLAs in all vendor contracts]), so the SSG should engage with vendors, discuss vendor security practices, and explain
in simple terms (rather than legalese) what the organization expects. Any time a vendor adopts the organization’s
security standards, it’s a clear sign of progress. Note that standards implemented as security features or infrastructure
configuration could be a requirement for service integration with a vendor (see [SFD1.1 Integrate and deliver security
features] and [SFD2.1 Leverage secure-by-design components and services]). When the firm’s SSDL is publicly
available, communication regarding software security expectations is easier. Likewise, sharing internal practices and
measures can make expectations clear.
[SR3.3: 9] Use secure coding standards.
Secure coding standards help the organization’s developers avoid the most obvious bugs and provide ground
rules for code review. These standards are necessarily specific to a programming language or platform, and they
can address the use of popular frameworks, APIs, libraries, and infrastructure automation. Platforms might include
mobile or IoT runtimes, cloud service provider APIs, orchestration recipes, and SaaS platforms (e.g., Salesforce, SAP).
If the organization already has coding standards for other purposes, its secure coding standards should build upon
them. A clear set of secure coding standards is a good way to guide both manual and automated code review, as well
as to provide relevant examples for security training. Some groups might choose to integrate their secure coding
standards directly into automation. While enforcement isn’t the point at this stage (see [CR3.5 Enforce secure coding
standards]), violation of standards is a teachable moment for all stakeholders. Socializing the benefits of following
standards is also a good first step to gaining widespread acceptance (see [SM2.7 Create evangelism role and perform
internal marketing]).
[SR3.4: 20] Create standards for technology stacks.
The organization standardizes on specific technology stacks. This translates into a reduced workload because teams
don’t have to explore new technology risks for every new project. The organization might create a secure base
configuration (commonly in the form of golden images, Terraform definitions, etc.) for each technology stack, further
reducing the amount of work required to use the stack safely. In cloud environments, hardened configurations likely
include up-to-date security patches, security configuration, and security services, such as logging and monitoring.
In traditional on-premise IT deployments, a stack might include an operating system, a database, an application
server, and a runtime environment (e.g., a LAMP stack). Where the technology will be reused, such as containers,
microservices, or orchestration code, the security frontier is a good place to find traction; standards for secure use
of these reusable technologies mean that getting security right in one place positively impacts the security posture
of all downstream dependencies (see [SE2.5 Use application containers to support security goals]).



SSDL TOUCHPOINTS
SSDL TOUCHPOINTS: ARCHITECTURE ANALYSIS (AA)
Architecture Analysis encompasses capturing software architecture in concise diagrams, applying lists of risks and
threats, adopting a process for review (such as STRIDE or Architecture Risk Analysis), and building an assessment and
remediation plan for the organization.

AA LEVEL 1
[AA1.1: 113] Perform security feature review.
Security-aware reviewers identify the security features in an application and its deployment configuration
(authentication, access control, use of cryptography, etc.), then inspect the design and runtime parameters for
problems that would cause these features to fail at their purpose or otherwise prove insufficient. For example, this
kind of review would identify both a system that was subject to escalation of privilege attacks because of broken
access control as well as a mobile application that incorrectly puts PII in local storage. In some cases, use of the firm’s
secure-by-design components can streamline this process (see [SFD2.1 Leverage secure-by-design components and
services]). Organizations often carry out security feature review with checklist-driven analysis and procedural threat
modeling efforts. Many modern applications are no longer simply “3-tier” but instead involve components architected
to interact across a variety of tiers: browser/endpoint, embedded, web, microservices, orchestration engines,
deployment pipelines, third-party SaaS, and so on. Some of these environments might provide robust security feature
sets, whereas others might have key capability gaps that require careful consideration, so organizations should not
consider the applicability and correct use of security features in just one tier of the application but across all tiers that
constitute the architecture and operational environment.
[AA1.2: 49] Perform design review for high-risk applications.
The organization can learn the benefits of design review by seeing real results for a few high-risk, high-profile
applications. Reviewers must have some experience performing detailed design reviews and breaking the design
under consideration, especially for new platforms or environments. Even if the SSG isn’t yet equipped to perform an
in-depth architecture analysis (see [AA2.1 Define and use AA process]), smart people with an adversarial mindset can
find important design problems. In all cases, a design review should produce a set of flaws and a plan to mitigate
them. An organization can use consultants to do this work, but it should participate actively. Ad hoc review paradigms
that rely heavily on expertise can be used here, but they don’t tend to scale in the long run. A review focused only on
whether a software project has performed the right process steps won’t generate useful results about flaws. Note that
a sufficiently robust design review process can’t be executed at CI/CD speed.
[AA1.3: 37] Have SSG lead design review efforts.
The SSG takes a lead role in performing a design review (see [AA1.2 Perform design review for high-risk applications])
to uncover flaws. Breaking down an architecture is enough of an art that the SSG must be proficient at it before it
can turn the entire job over to architects or other engineering teams, and proficiency requires practice. That practice
might then enable, for example, champions to take the day-to-day lead while the SSG maintains leadership around
knowledge and process. The SSG can’t be successful on its own, either—it will likely need help from architects or
implementers to understand the design. With a clear design in hand, the SSG might be able to carry out the detailed
review with a minimum of interaction with the project team. Over time, the responsibility for leading review efforts
should shift toward software security architects. Approaches to design review evolve over time, so it’s wise to not
expect to set a process and use it forever.
[AA1.4: 62] Use a risk methodology to rank applications.
To facilitate security feature and design review processes, the SSG or other assigned groups use a defined risk
methodology, which might be implemented via questionnaire or similar method—whether manual or automated—
to collect information about each application in order to assign a risk classification and associated prioritization.
Information needed for an assignment might include, “Which programming languages is the application written
in?” or “Who uses the application?” or “Is the application’s deployment software-orchestrated?” Typically, a qualified
member of the application team provides the information, but the process should be short enough to take only
a few minutes. Some teams might use automation to gather the necessary data. The SSG can use the answers to
categorize the application as, for example, high, medium, or low risk. Because a risk questionnaire can be easy to
game, it’s important to put into place some spot-checking for validity and accuracy. An overreliance on self-reporting
or automation can render this activity useless.
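The sketch below shows one hypothetical way to turn questionnaire answers into a coarse risk classification; the questions, weights, and thresholds are made up and would need to reflect the organization's own risk methodology:

# Illustrative questionnaire scoring; questions, weights, and thresholds are hypothetical.
ANSWERS = {
    "handles_pii": True,
    "internet_facing": True,
    "orchestrated_deployment": False,
    "user_population": "external",   # "internal" or "external"
}

def risk_score(answers):
    score = 0
    score += 3 if answers["handles_pii"] else 0
    score += 3 if answers["internet_facing"] else 0
    score += 1 if answers["orchestrated_deployment"] else 0
    score += 2 if answers["user_population"] == "external" else 0
    return score

def classify(score):
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Spot-check self-reported answers for validity before trusting the resulting classification.
print(classify(risk_score(ANSWERS)))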



AA LEVEL 2
[AA2.1: 29] Define and use AA process.
The SSG defines and documents a process for AA and applies it in the reviews it conducts to find flaws and identify
risk. This process includes a standardized approach for thinking about attacks, vulnerabilities, and various security
properties. In addition to the technical impact discussions, the process includes a focus on the associated risk, such
as through frequency or probability analysis, that gives stakeholders the information necessary to manage risk. The
process is defined well enough that people outside the SSG can carry it out. It’s important to document both the
architecture under review and any security flaws uncovered, as well as risk information people can understand and
use. Microsoft Threat Modeling, Versprite PASTA, and Synopsys ARA are examples of such a process, although even
these methodologies for AA have evolved greatly over time. Uncalibrated or ad hoc AA approaches don’t count as a
defined process.
[AA2.2: 28] Standardize architectural descriptions.
Threat modeling, design review, or AA processes use an agreed-upon format (e.g., diagramming language and icons,
not a Word document template) to describe architecture, including a means for representing data flow. Standardizing
architecture descriptions between those who generate the models and those who analyze and annotate them will
make analysis more tractable and scalable. High-level network diagrams, data flow, and authorization flows are always
useful, but the model should also go into detail about how the software itself is structured. A standard architecture
description can be enhanced to provide an explicit picture of information assets that require protection, including
useful metadata. Standardized icons that are consistently used in diagrams, templates, and dry-erase board squiggles
are especially useful, too.

AA LEVEL 3
[AA3.1: 16] Have engineering teams lead AA process.
Engineering teams lead the AA process most of the time. This effort requires a well-understood and well-documented
process (see [AA2.1 Define and use AA process]), although the SSG still might contribute to AA in an advisory capacity
or under special circumstances. Even with a good process, consistency is difficult to attain because breaking
architecture requires experience, so be sure to provide architects with SSG or outside expertise on novel issues. In
any given organization, the identified engineering team might normally have responsibilities such as development,
DevOps, cloud security, operations security, security architecture, or a variety of similar roles.
[AA3.2: 2] Drive analysis results into standard design patterns.
Failures identified during threat modeling, design review, or AA are fed back to security and engineering teams
so that similar mistakes can be prevented in the future through use of improved design patterns, whether local or
formally approved (see [SFD3.1 Form a review board or central committee to approve and maintain secure design
patterns]). Cloud service providers have learned a lot about how their platforms and services fail to resist attack and
have codified this experience into patterns for secure use. Organizations that heavily rely on these services might
base their own application-layer patterns on the building blocks provided by the cloud service provider (for example,
AWS CloudFormation and Azure Blueprints). Note that security design patterns can interact in
surprising ways that break security, so the analysis process should be applied even when vetted design patterns are
in standard use.
[AA3.3: 11] Make the SSG available as an AA resource or mentor.
To build AA capability outside of the SSG, the SSG advertises itself as a resource or mentor for teams using the AA
process (see [AA2.1 Define and use AA process]). This effort might enable, for example, security champions, site
reliability engineers, DevSecOps engineers, and others to take the lead while the SSG offers advice. A principal point
of guidance should be tailoring reusable process inputs to make them more actionable within their own technology
stacks. These reusable inputs help protect the team’s time so they can focus on the problems that require creative
solutions rather than enumerating known bad habits. While the SSG might answer AA questions during office hours,
they will often assign a mentor to work with a team for the duration of the analysis. In the case of high-risk software,
the SSG should play a more active mentorship role in applying the AA process.



SSDL TOUCHPOINTS: CODE REVIEW (CR)
The Code Review practice includes use of code review tools, development of tailored rules, customized profiles for
tool use by different roles (for example, developers vs. auditors), manual analysis, and tracking/measuring results.

CR LEVEL 1
[CR1.2: 80] Perform opportunistic code review.
The SSG ensures that code review for high-risk applications is performed in an opportunistic fashion, such as by
following up a design review with a code review looking for security issues in source code and dependencies, and
perhaps also in deployment artifact configuration (e.g., containers) and automation metadata (e.g., infrastructure-as-
code). This informal targeting often evolves into a systematic approach. Code review could involve the use of specific
tools and services, or it might be manual, but it has to be part of a proactive process. When new technologies pop up,
new approaches to code review might become necessary.
[CR1.4: 102] Use automated tools.
Static analysis incorporated into the code review process makes the review more efficient and consistent. Automation
won’t replace human judgement, but it does bring definition to the review process and security expertise to reviewers
who typically aren’t security experts. Note that a specific tool might not cover an entire portfolio, especially when new
languages are involved, so additional local effort might be useful. Some organizations might progress to automating
tool use by instrumenting static analysis into source code management workflows (e.g., pull requests) and delivery
pipeline workflows (build, package, and deploy) to make the review more efficient, consistent, and in line with release
cadence. Whether use of automated tools is to review a portion of the source code incrementally, such as a developer
committing new code or small changes, or to conduct full-program analysis by scanning the entire codebase,
this service should be explicitly connected to a larger SSDL defect management process applied during software
development, not just used to “check the security box” on the path to deployment.
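A minimal sketch of what wiring a scanner into a delivery pipeline might look like, where sast-scan and its JSON output are placeholders rather than a real tool's interface; the point is that findings gate the build and also flow into the SSDL defect management process:

# Illustrative pipeline step: run a (hypothetical) static analysis CLI and gate the build on its findings.
# "sast-scan" and its JSON output format are placeholders, not a real tool's interface.
import json
import subprocess
import sys

def run_scan(target="src/"):
    result = subprocess.run(
        ["sast-scan", "--format", "json", target],
        capture_output=True, text=True, check=False,
    )
    findings = json.loads(result.stdout or "[]")
    high = [f for f in findings if f.get("severity") == "HIGH"]
    # Findings should also be pushed into the SSDL defect management process, not just printed.
    for f in high:
        print(f"{f.get('file')}:{f.get('line')} {f.get('rule')}")
    return 1 if high else 0   # nonzero exit fails the pipeline stage

if __name__ == "__main__":
    sys.exit(run_scan())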
[CR1.5: 49] Make code review mandatory for all projects.
A security-focused code review is mandatory for all projects under the SSG’s purview, with a lack of code review or
unacceptable results stopping a release, slowing it down, or causing it to be recalled. While all projects must undergo
code review, the process might be different for different kinds of projects. The review for low-risk projects might rely
more heavily on automation, for example, whereas high-risk projects might have no upper bound on the amount
of time spent by reviewers. Having a minimum acceptable standard forces projects that don’t pass to be fixed and
reevaluated. A code review tool with nearly all the rules turned off (so it can run at CI/CD automation speeds, for
example) won’t provide sufficient defect coverage. Similarly, peer code review or tools focused on quality and style
won’t provide useful security results.
[CR1.6: 32] Use centralized reporting to close the knowledge loop.
The bugs found during code review are tracked in a centralized repository that makes it possible to do both summary
and trend reporting for the organization. The code review information can be incorporated into a CISO-level
dashboard that might include feeds from other parts of the security organization (e.g., penetration tests, security
testing, DAST). Given the historical code review data, the SSG can also use the reports to demonstrate progress (see
[SM3.3 Identify metrics and use them to drive resourcing]) and then, for example, drive the training curriculum.
Individual bugs make excellent training examples (see [T2.8 Create and use material specific to company history]).
Some organizations have moved toward analyzing this data and using the results to drive automation.
[CR1.7: 51] Assign tool mentors.
Mentors are available to show developers how to get the most out of code review tools. If the SSG has the most
skill with the tools, it could use office hours or other outreach to help developers establish the right configuration
and get started on interpreting and remediating results. Alternatively, someone from the SSG might work with a
development team for the duration of the first review they perform. Centralized use of a tool can be distributed into
the development organization or toolchains over time through the use of tool mentors, but providing installation
instructions and URLs to centralized tools isn’t the same as mentoring. Increasingly, mentorship extends to tools
associated with deployment artifacts (e.g., container security) and infrastructure (e.g., cloud configuration). In many
organizations, satellite members or champions take on the tool mentorship role.
CR LEVEL 2
[CR2.6: 25] Use automated tools with tailored rules.
Custom rules are created and used to help uncover security defects specific to the organization’s coding standards
or the framework-based or cloud-provided middleware it uses. The same group that provides tool mentoring (see
[CR1.7 Assign tool mentors]) will likely spearhead this customization. Tailored rules can be explicitly tied to proper
usage of technology stacks in a positive sense and avoidance of errors commonly encountered in a firm’s codebase in
a negative sense. To reduce the workload for everyone, many organizations also create rules to remove repeated false
positives and turn off checks that aren’t relevant.



[CR2.7: 17] Use a top N bugs list (real data preferred).
The SSG maintains a living list of the most important kinds of bugs that it wants to eliminate from the organization’s
code and uses it to drive change. Many organizations start with a generic list pulled from public sources, but lists
such as the OWASP Top 10 rarely reflect an organization’s bug priorities. The list’s value comes from being specific to
the organization, being built from real data gathered from code review (see [CR1.6 Use centralized reporting to close
the knowledge loop]), testing (see [PT1.2 Feed results to the defect management and mitigation system]), software
composition analysis, and actual incidents, and then being prioritized for prevention efforts. Simply sorting the day’s
bug data by number of occurrences won’t produce a satisfactory list because these data change so often. To
increase interest, the SSG can periodically publish a “most wanted” report after updating the list. One potential pitfall
with a top N list is that it tends to only include known problems. Of course, just building the list won’t accomplish
anything—everyone has to use it to fix bugs.
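As an illustrative sketch (the finding records and source weights are hypothetical), a top N list can be built by aggregating defect data from several sources and weighting real incidents more heavily than tool output:

# Illustrative top N aggregation; the finding records and weights are hypothetical.
from collections import Counter

FINDINGS = [
    {"cwe": "CWE-89",  "source": "code_review"},
    {"cwe": "CWE-79",  "source": "pentest"},
    {"cwe": "CWE-89",  "source": "incident"},
    {"cwe": "CWE-798", "source": "sca"},
]

# Weight real-world incidents more heavily than tool output when prioritizing prevention work.
WEIGHTS = {"incident": 5, "pentest": 3, "code_review": 2, "sca": 1}

def top_n(findings, n=10):
    scores = Counter()
    for f in findings:
        scores[f["cwe"]] += WEIGHTS.get(f["source"], 1)
    return scores.most_common(n)

print(top_n(FINDINGS, n=3))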

CR LEVEL 3
[CR3.2: 9] Build a capability to combine assessment results.
Assessment results are combined so that multiple analysis techniques feed into one reporting and remediation
process. In addition to code review, analysis techniques might include dynamic analysis, software composition
analysis, container scanning, cloud services monitoring, and so on. The SSG might write scripts or acquire software
to gather data automatically and combine the results into a format that can be consumed by a single downstream
review and reporting solution. The tricky part of this activity is normalizing vulnerability information from disparate
sources that use conflicting terminology. In some cases, using a standardized taxonomy (e.g., a CWE-like approach)
can help with normalization. Combining multiple sources helps drive better-informed risk mitigation decisions.
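A minimal sketch of that normalization step, assuming two hypothetical tool output formats, keyed on a CWE-like taxonomy so that a single downstream report can consume both:

# Illustrative normalization of findings from two hypothetical tool output formats into one schema.
def normalize_sast(record):
    return {"cwe": record["cwe_id"], "location": record["file"], "severity": record["sev"].lower()}

def normalize_dast(record):
    return {"cwe": record.get("cwe", "CWE-unknown"), "location": record["url"], "severity": record["risk"].lower()}

sast_results = [{"cwe_id": "CWE-89", "file": "app/db.py", "sev": "HIGH"}]
dast_results = [{"cwe": "CWE-79", "url": "/search", "risk": "Medium"}]

SEVERITY_ORDER = {"high": 0, "medium": 1, "low": 2}

combined = [normalize_sast(r) for r in sast_results] + [normalize_dast(r) for r in dast_results]
for finding in sorted(combined, key=lambda f: SEVERITY_ORDER.get(f["severity"], 3)):
    print(finding)   # one schema, ready for a single reporting and remediation workflow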
[CR3.3: 4] Create capability to eradicate bugs.
When a new kind of bug is discovered in the firm’s software, the SSG ensures rules are created to find it (see
[CR2.6 Use automated tools with tailored rules]) and helps use these rules to identify all occurrences of the new
bug throughout the codebases and runtime environments. It becomes possible to eradicate the bug type entirely
without waiting for every project to reach the code review portion of its lifecycle. A firm with only a handful of software
applications built on a single technology stack will have an easier time with this activity than firms with many large
applications built on a diverse set of technology stacks. A new development framework or library, rules in RASP or
a next-generation firewall, or cloud configuration tools that provide guardrails can often help in (but not replace)
eradication efforts.
[CR3.4: 1] Automate malicious code detection.
Automated code review is used to identify dangerous code written by malicious in-house developers or outsource
providers. Examples of malicious code that could be targeted include backdoors, logic bombs, time bombs,
nefarious communication channels, obfuscated program logic, and dynamic code injection. Although out-of-the-box
automation might identify some generic malicious-looking constructs, custom rules for the static analysis tools used
to codify acceptable and unacceptable code patterns in the organization’s codebase will likely become a necessity.
Manual code review for malicious code is a good start but insufficient to complete this activity at scale. While not all
backdoors or similar code were meant to be malicious when they were written (e.g., a developer’s feature to bypass
authentication during testing), such things have a tendency to stay in deployed code and should be treated as
malicious code until proven otherwise.
[CR3.5: 0] Enforce secure coding standards.
The enforced portions of an organization’s secure coding standards (see [SR3.3 Use secure coding standards]) often
start out as a simple list of banned functions, with a violation of these standards being sufficient grounds for rejecting
a piece of code. Other useful coding standard topics might include proper use of cloud APIs, use of approved
cryptography, memory sanitization, and many others. Code review against standards must be objective—it shouldn’t
become a debate about whether the noncompliant code is exploitable. In some cases, coding standards are specific
to language constructs and enforced with tools (e.g., codified into SAST rules). In other cases, published coding
standards are specific to technology stacks and enforced during the code review process or using automation.
Standards can be positive (“do it this way”) or negative (“do not use this API”), but they must be enforced to be useful.
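For illustration, the sketch below enforces a banned-function rule for Python using the standard library ast module; the banned list is an example, and real standards are specific to each language and technology stack:

# Illustrative enforcement of a "banned functions" rule using Python's ast module.
# The banned list is an example; real standards are specific to each language and stack.
import ast

BANNED = {"eval", "exec", "pickle.loads"}

def check_source(source, filename="<code>"):
    violations = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):
                name = func.id
            elif isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
                name = f"{func.value.id}.{func.attr}"
            else:
                continue
            if name in BANNED:
                violations.append((filename, node.lineno, name))
    return violations

print(check_source("import pickle\nobj = pickle.loads(data)", "example.py"))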

SSDL TOUCHPOINTS: SECURITY TESTING (ST)


The Security Testing practice is concerned with prerelease defect discovery, including integrating security into
standard QA processes. The practice includes the use of opaque-box security tools (including fuzz testing) as a smoke
test in QA, risk-driven crystal-box testing, application of the attack model, and code coverage analysis. Security testing
focuses on vulnerabilities in construction.



ST LEVEL 1
[ST1.1: 100] Ensure QA performs edge/boundary value condition testing.
QA efforts go beyond functional testing to perform basic adversarial tests and probe simple edge cases and boundary
conditions, with no particular attacker skills required. When QA understands the value of pushing past standard
functional testing that uses expected input, it begins to move slowly toward thinking like an adversary. A discussion
of boundary value testing can lead naturally to the notion of an attacker probing the edges on purpose (for example,
determining what happens when someone enters the wrong password over and over).
[ST1.3: 87] Drive tests with security requirements and security features.
QA targets declarative security mechanisms with tests derived from requirements and security features. A test could
try to access administrative functionality as an unprivileged user, for example, or verify that a user account becomes
locked after some number of failed authentication attempts. For the most part, security features can be tested in a
fashion similar to other software features—security mechanisms such as account lockout, transaction limitations,
entitlements, and so on are tested with both expected and unexpected input as derived from requirements. Software
security isn’t security software, but testing security features is an easy way to get started. New software architectures
and deployment automation, such as with container and cloud infrastructure orchestration, might require novel test
approaches.
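A minimal sketch of such a test, written pytest-style with a hypothetical client fixture and an assumed lockout threshold of five failed attempts:

# Illustrative security feature test; "client" is a hypothetical test fixture for the system under test.
LOCKOUT_THRESHOLD = 5  # assumed requirement: lock the account after 5 failed attempts

def test_account_locks_after_repeated_failures(client):
    for _ in range(LOCKOUT_THRESHOLD):
        assert client.authenticate("alice", "wrong-password") is False
    # Even the correct password must now be rejected until the account is unlocked.
    assert client.authenticate("alice", "correct-password") is False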
[ST1.4: 50] Integrate opaque-box security tools into the QA process.
The organization uses one or more opaque-box security testing tools as part of the QA process. Such tools are
valuable because they encapsulate an attacker’s perspective, albeit generically. Traditional dynamic analysis scanners
are relevant for web applications, while similar tools exist for cloud environments, containers, mobile applications,
embedded systems, and so on. In some situations, other groups might collaborate with the SSG to apply the tools.
For example, a testing team could run the tool but come to the SSG for help with interpreting the results. When
testing is integrated into agile development approaches, opaque-box tools might be hooked into internal toolchains,
provided by cloud-based toolchains, or used directly by engineering. Regardless of who runs the opaque-box tool,
the testing should be properly integrated into a QA cycle of the SSDL and will often include both authenticated and
unauthenticated reviews.

ST LEVEL 2
[ST2.4: 19] Share security results with QA.
The SSG or others with security testing data routinely share results from security reviews with those responsible for
QA. Using security testing results as the basis for a conversation about common attack patterns or the underlying
causes of security defects allows QA to generalize that information into new test approaches. Organizations that
have chosen to leverage software pipeline platforms such as GitHub, or CI/CD platforms such as the Atlassian stack,
can benefit from QA receiving various testing results automatically, which should then facilitate timely stakeholder
conversations. In some cases, these platforms can be used to integrate QA into an automated remediation workflow
and reporting by generating change request tickets for developers, lightening the QA workload. Over time, QA learns
the security mindset, and the organization benefits from an improved ability to create security tests tailored to the
organization’s code.
[ST2.5: 21] Include security tests in QA automation.
Security tests are included in an automation framework and run alongside functional, performance, and other QA
tests. Many groups trigger these tests manually, whereas in a modern toolchain, these tests are likely part of the
pipeline and triggered through automation. When test creators who understand the software create security
tests, they can uncover more specialized or more relevant localized defects than commercial tools might (see
[ST1.4 Integrate opaque-box security tools into the QA process]). Security tests might be derived from typical failures
of security features (see [SFD1.1 Integrate and deliver security features]), from creative tweaks of functional tests and
developer tests, or even from guidance provided by penetration testers on how to reproduce an issue. Tests that
are performed manually or out-of-band likely will not provide timely feedback.
[ST2.6: 15] Perform fuzz testing customized to application APIs.
QA efforts include running a customized fuzzing framework against APIs critical to the organization. Testers could
begin from scratch or use an existing fuzzing toolkit, but the necessary customization often goes beyond creating
custom protocol descriptions or file format templates to giving the fuzzing framework a built-in understanding of
the application interfaces it will be calling into. Test harnesses developed explicitly for particular applications make
good places to integrate fuzz testing.
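One way to give a fuzzer that built-in understanding is property-based testing. The sketch below uses the Hypothesis library to generate inputs shaped like the API's expected payload; parse_order and its module are hypothetical stand-ins for an internal API under test:

# Illustrative API-aware fuzzing using the Hypothesis library; the API under test is hypothetical.
from hypothesis import given, strategies as st

# A strategy shaped like the API's expected input gets deeper than purely random bytes.
order_payloads = st.fixed_dictionaries({
    "item_id": st.integers(min_value=-10, max_value=10**9),
    "quantity": st.integers(),
    "note": st.text(),
})

@given(order_payloads)
def test_parse_order_handles_arbitrary_payloads(payload):
    from myapp.orders import parse_order  # hypothetical module and function under test
    # The property under test: malformed input is rejected gracefully, never an unhandled exception.
    result = parse_order(payload)
    assert result is None or result.is_valid in (True, False)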



ST LEVEL 3
[ST3.3: 8] Drive tests with risk analysis results.
Testers use threat modeling, design review, or AA results to direct their work. If the results determine that “the security
of the system hinges on the transactions being atomic and not being interrupted partway through,” for example, then
torn transactions will become a primary target in adversarial testing. Adversarial tests like these can be developed
according to risk profile, with high-risk flaws at the top of the list. Security testing results shared with QA (see [ST2.4
Share security results with QA]) can help focus test creation on areas of potential vulnerability that can, in turn, help
prove the existence of identified high-risk flaws.
[ST3.4: 2] Leverage coverage analysis.
Testers measure the code coverage of their security tests (see [ST2.5 Include security tests in QA automation]) to
identify code that isn’t being exercised and then adjust automation (see [ST3.6 Implement event-driven security
testing in automation]) to incrementally improve coverage. In turn, code coverage analysis drives increased security
testing depth. Standard-issue opaque-box testing tools achieve exceptionally low coverage, leaving a majority of the
software under test unexplored, which isn’t a testing best practice. Coverage analysis is easier when using standard
measurements such as function coverage, line coverage, or multiple condition coverage.
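As one possible illustration, the sketch below uses the coverage.py and pytest packages (assumed to be installed) to measure which parts of a hypothetical payments package the security test suite exercises; the package name and test path are placeholders for whatever the organization's pipeline actually uses.
    import coverage
    import pytest

    # Measure which parts of the (hypothetical) "payments" package the security
    # test suite actually exercises.
    cov = coverage.Coverage(source=["payments"])
    cov.start()
    pytest.main(["tests/security"])  # run only the security test suite
    cov.stop()
    cov.save()

    # Gaps in the report show code the security tests never reach, which becomes
    # input for new tests or adjusted automation.
    total = cov.report(show_missing=True)
    print(f"Security-test line coverage: {total:.1f}%")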
[ST3.5: 2] Begin to build and apply adversarial security tests (abuse cases).
QA begins to incorporate test cases based on abuse cases (see [AM2.1 Build attack patterns and abuse cases tied to
potential attackers]) as testers move beyond verifying functionality and take on the attacker’s perspective. One way
to do this is to systematically attempt to replicate incidents from the organization’s history. Abuse and misuse cases
based on the attacker’s perspective can also be derived from security policies, attack intelligence, standards, and the
organization’s top N attacks list (see [AM2.5 Maintain and use a top N possible attacks list]). This effort turns the corner
from testing features to attempting to break the software under test.
[ST3.6: 2] Implement event-driven security testing in automation.
The SSG guides implementation of automation for continuous, event-driven application security testing. Event-driven
testing implemented in ALM automation typically moves the testing closer to the conditions driving the testing
requirement (whether shift left toward design or shift right toward operations), repeats the testing as often as the
event is triggered as software moves through ALM, and helps ensure the right testing is executed for a given set of
conditions. This might be instead of or in addition to software arriving at a gate or checkpoint and triggering a point-
in-time security test. Success with this approach depends on the broad use of sensors (e.g., agents, bots) that monitor
engineering processes, execute contextual rules, and provide telemetry to automation that initiates the specified
testing whenever event conditions are met. More mature configurations proceed to including risk-driven conditions.

DEPLOYMENT
DEPLOYMENT: PENETRATION TESTING (PT)
The Penetration Testing practice involves standard outside-in testing of the sort carried out by security specialists.
Penetration testing focuses on vulnerabilities in the final configuration and provides direct feeds to defect
management and mitigation.

PT LEVEL 1
[PT1.1: 111] Use external penetration testers to find problems.
External penetration testers are used to demonstrate that the organization’s code needs help. Breaking a high-profile
application to provide unmistakable evidence that the organization isn’t somehow immune to the problem often gets
the right attention. Over time, the focus of penetration testing moves from trying to determine if the code is broken
in some areas to a sanity check done before shipping. External penetration testers that bring a new set of experiences
and skills to the problem are the most useful.
[PT1.2: 98] Feed results to the defect management and mitigation system.
Penetration testing results are fed back to engineering through established defect management or mitigation
channels, with development and operations responding via a defect management and release process. Testing
often targets container and infrastructure configuration in addition to applications, and results are commonly
provided in machine-readable formats to enable automated tracking. Properly done, this exercise demonstrates the
organization’s ability to improve the state of security, and many firms are emphasizing the critical importance of
not just identifying but actually fixing security problems. One way to ensure attention is to add a security flag to the
bug-tracking and defect management system. The organization might leverage developer workflow or social tooling
(e.g., Slack, JIRA) to communicate change requests, but those requests are still tracked explicitly as part of
a vulnerability management process.
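As an illustrative sketch, the Python below converts machine-readable penetration test findings into defect records carrying a security flag; the findings format and the defect fields are hypothetical examples of what a bug-tracking integration might use.
    import json

    # Example machine-readable penetration test output (the format is illustrative).
    FINDINGS_JSON = """
    [
      {"id": "PT-2021-007", "title": "Reflected XSS in /search", "severity": "high",
       "component": "web"},
      {"id": "PT-2021-009", "title": "Overly permissive container user", "severity": "medium",
       "component": "container-image"}
    ]
    """

    def to_defect(finding):
        """Map a finding to the fields a defect tracker might expect, flagged as security."""
        return {
            "summary": f"[SECURITY] {finding['title']}",
            "labels": ["security", f"severity:{finding['severity']}"],
            "component": finding["component"],
            "external_ref": finding["id"],  # keeps the pentest report traceable
        }

    if __name__ == "__main__":
        for finding in json.loads(FINDINGS_JSON):
            # In practice, each record would be posted to the bug-tracking system's API.
            print(json.dumps(to_defect(finding), indent=2))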

[PT1.3: 88] Use penetration testing tools internally.
The organization creates an internal penetration testing capability that uses tools. This capability (e.g., group, team)
can be part of the SSG or part of a specialized team elsewhere in the organization, with the tools complementing
manual efforts to improve the efficiency and repeatability of the testing process. Tools used usually include off-
the-shelf products built specifically for application penetration testing, network penetration tools that specifically
understand the application layer, container and cloud configuration testing tools, and custom scripts. Free-time or
crisis-driven efforts aren’t the same as an internal capability.

PT LEVEL 2
[PT2.2: 33] Penetration testers use all available information.
Penetration testers, whether internal or external, routinely use available source code, design documents, architecture
analysis results, misuse and abuse cases, code review results, and cloud environment and other deployment
configuration to do deeper analysis and find more interesting problems. To effectively do their job, penetration testers
often need everything created throughout the SSDL, so an SSDL that creates no useful artifacts about the code will
make this effort harder. Having access to the artifacts is not the same as using them.
[PT2.3: 34] Schedule periodic penetration tests for application coverage.
The SSG collaborates in periodic security testing of all applications in its purview, which could be tied to a calendar
or a release cycle. High-profile applications should get a penetration test at least once a year, for example, even if
new releases haven’t yet moved into production. Other applications might receive different kinds of security testing
on a similar schedule. Of course, any security testing performed must focus on discovering vulnerabilities, not just
checking a process or compliance box. This testing serves as a sanity check and helps ensure that yesterday’s software
isn’t vulnerable to today’s attacks. The testing can also help maintain the security of software configurations and
environments, especially for containers and components in the cloud. One important aspect of periodic security
testing across the portfolio is to make sure that the problems identified are actually fixed and don’t creep back into
the build. New automation created for CI/CD deserves penetration testing as well.

PT LEVEL 3
[PT3.1: 23] Use external penetration testers to perform deep-dive analysis.
The organization uses external penetration testers to do deep-dive analysis for critical projects or technologies and
to introduce fresh thinking into the SSG. These testers should be domain experts and specialists who keep the
organization up to speed with the latest version of the attacker’s perspective and have a track record for breaking the
type of software being tested. Skilled penetration testers will always break a system, but the question is whether they
demonstrate new kinds of thinking about attacks that can be useful when designing, implementing, and hardening
new systems. Creating new types of attacks from threat intelligence and abuse cases typically requires extended timelines, but it prevents checklist-driven approaches that look only for known types of problems, which is essential when it comes to new technologies.
[PT3.2: 12] Customize penetration testing tools.
The SSG collaborates in either creating penetration testing tools or adapting publicly available ones to more efficiently
and comprehensively attack the organization’s software. Tools will improve the efficiency of the penetration testing
process without sacrificing the depth of problems that the SSG can identify. Automation can be particularly valuable
in organizations using agile methodologies because it helps teams go faster. Tools that can be tailored are always
preferable to generic tools. Success here is often dependent upon both the depth of tests and their scope.

DEPLOYMENT: SOFTWARE ENVIRONMENT (SE)


The Software Environment practice deals with OS and platform patching (including in the cloud), WAFs, installation
and configuration documentation, containerization, orchestration, application monitoring, change management,
and code signing.

SE LEVEL 1
[SE1.1: 80] Use application input monitoring.
The organization monitors the input to the software that it runs in order to spot attacks. For web code, a WAF can
do this job, while other kinds of software likely require other approaches, including runtime instrumentation. The
SSG might be responsible for the care and feeding of the monitoring system, but incident response isn’t part of this
activity. For web applications, WAFs that write log files can be useful if someone periodically reviews the logs and
takes action. Other software and technology stacks, such as mobile and IoT, likely require their own input monitoring
solutions. Serverless and containerized software can require interaction with vendor software to get the appropriate
logs and monitoring data. Cloud deployments and platform-as-a-service usage can add another level of difficulty to
the monitoring, collection, and aggregation approach.
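For web code built on WSGI, a monitoring hook might look something like the following sketch, which only observes and logs suspicious input patterns; the patterns and middleware wiring are illustrative, and blocking or incident response would be handled elsewhere.
    import logging
    import re

    log = logging.getLogger("input-monitor")

    SUSPICIOUS = [re.compile(p, re.IGNORECASE) for p in
                  (r"<script", r"union\s+select", r"\.\./\.\.", r"%00")]

    class InputMonitorMiddleware:
        """WSGI middleware sketch: observes request input and logs likely attack
        patterns. Monitoring only; blocking and incident response happen elsewhere."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            path = environ.get("PATH_INFO", "")
            query = environ.get("QUERY_STRING", "")
            for pattern in SUSPICIOUS:
                if pattern.search(query) or pattern.search(path):
                    log.warning("suspicious input on %s: %s", path, pattern.pattern)
            return self.app(environ, start_response)

    # Usage, assuming an existing WSGI application object named app:
    #   app = InputMonitorMiddleware(app)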

[SE1.2: 117] Ensure host and network security basics are in place.
The organization provides a solid foundation for its software by ensuring that host (whether bare metal or virtual
machine) and network security basics are in place across its data centers and networks and that these basics remain
in place during new releases. Evolving network perimeters, increased connectivity and data sharing, and increasing
dependence on vendors (e.g., content delivery, load balancing, and content inspection services) add a degree of
difficulty to getting and keeping the basics right. Doing software security before getting host and network security
in place is like putting on shoes before putting on socks.

SE LEVEL 2
[SE2.2: 48] Define secure deployment parameters and configurations.
The SSDL requires creating an installation guide or a clearly described configuration for deployable software artifacts
and the infrastructure-as-code necessary to deploy them, helping teams install and configure software securely.
When special steps are required to ensure a deployment is secure, these steps can be outlined in a configuration
guide or explicitly described in deployment automation, including information on COTS, vendor, and cloud services
components. In some cases, installation guides are not used internally but are distributed to customers who buy
the software. All deployment automation should be understandable by humans, not just by machines. Increasingly,
secure deployment parameters and configuration are codified into infrastructure scripting (e.g., Terraform, Helm,
Ansible, and Chef).
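The sketch below shows one way documented secure deployment parameters could be codified as a machine-checkable baseline; the configuration keys and required values are hypothetical examples of what an installation or configuration guide might specify.
    # Illustrative check that a deployment configuration matches the documented
    # secure parameters before it ships; keys and required values are hypothetical.
    REQUIRED_SETTINGS = {
        "tls.enabled": True,
        "tls.min_version": "1.2",
        "debug_endpoints": False,
        "admin.default_password_allowed": False,
    }

    def flatten(config, prefix=""):
        items = {}
        for key, value in config.items():
            name = f"{prefix}{key}"
            if isinstance(value, dict):
                items.update(flatten(value, name + "."))
            else:
                items[name] = value
        return items

    def check(config):
        actual = flatten(config)
        return [f"{key} should be {want!r}, found {actual.get(key)!r}"
                for key, want in REQUIRED_SETTINGS.items() if actual.get(key) != want]

    if __name__ == "__main__":
        deployment = {"tls": {"enabled": True, "min_version": "1.0"},
                      "debug_endpoints": True,
                      "admin": {"default_password_allowed": False}}
        for problem in check(deployment):
            print("INSECURE:", problem)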
[SE2.4: 32] Protect code integrity.
The organization can attest to the provenance, integrity, and authorization of important code before allowing it to
execute. While legacy and mobile platforms accomplished this with point-in-time code signing and permissions
activity, protecting modern containerized software demands actions in various lifecycle phases. Organizations can
use build systems to verify sources and manifests of dependencies, creating their own cryptographic attestation
of both. Packaging and deployment systems can sign and verify binary packages, including code, configuration,
metadata, code identity, and authorization to release material. In some cases, organizations allow only code from
their own registries to execute in certain environments. With many DevOps practices greatly increasing the number
of people who can touch the code, organizations should also use permissions and peer review to govern code
commits within source code management to help protect integrity.
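A simplified integrity gate might look like the following sketch, where an artifact's digest is verified against an authenticated manifest before execution is allowed; the HMAC stands in for what would normally be an asymmetric signature from a managed signing service, and the key handling is deliberately simplified.
    import hashlib
    import hmac
    import json

    SIGNING_KEY = b"replace-with-managed-key"  # hypothetical; normally held in a KMS

    def manifest_is_authentic(manifest_bytes, signature_hex):
        expected = hmac.new(SIGNING_KEY, manifest_bytes, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature_hex)

    def artifact_matches_manifest(artifact_bytes, manifest):
        digest = hashlib.sha256(artifact_bytes).hexdigest()
        return hmac.compare_digest(digest, manifest["sha256"])

    def allow_execution(artifact_bytes, manifest_bytes, signature_hex):
        # Gate: refuse to run anything whose provenance or integrity cannot be verified.
        if not manifest_is_authentic(manifest_bytes, signature_hex):
            return False
        manifest = json.loads(manifest_bytes)
        return artifact_matches_manifest(artifact_bytes, manifest)

    if __name__ == "__main__":
        artifact = b"example build output"
        manifest = json.dumps({"sha256": hashlib.sha256(artifact).hexdigest()}).encode()
        signature = hmac.new(SIGNING_KEY, manifest, hashlib.sha256).hexdigest()
        print("allowed:", allow_execution(artifact, manifest, signature))
In practice, the signing and verification would be performed by the organization's packaging and deployment systems using managed keys rather than an in-process constant.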
[SE2.5: 44] Use application containers to support security goals.
The organization uses application containers to support its software security goals, which likely include ease of
deployment, a tighter coupling of applications with their dependencies, immutability, integrity (see [SE2.4 Protect code
integrity]), and some isolation benefits without the overhead of deploying a full operating system on a virtual machine.
Containers provide a convenient place for security controls to be applied and updated consistently. While containers can
be useful in development and test environments, production use provides the needed security benefits.
[SE2.6: 59] Ensure cloud security basics.
Organizations should already be ensuring that their host and network security basics are in place, but they must
also ensure that basic requirements are met in cloud environments. Of course, cloud-based virtual assets often have
public-facing services that create an attack surface (e.g., cloud-based storage) that is different from the one in a
private data center, so these assets require customized security configuration and administration. In the increasingly
software-defined world, the SSG has to help everyone explicitly implement cloud-based security features and controls
(some of which can be built in, for example, cloud provider administration consoles) that are comparable to those
built with cables and physical hardware in private data centers. Detailed knowledge about cloud provider shared
responsibility security models is always necessary.
[SE2.7: 33] Use orchestration for containers and virtualized environments.
The organization uses automation to scale service, container, and virtualized environments in a disciplined way.
Orchestration processes take advantage of built-in and add-on security features (see [SFD2.1 Leverage secure-by-
design components and services]), such as hardening against drift, secrets management, and rollbacks, to ensure
that each deployed workload meets predetermined security requirements. Setting security behaviors in aggregate
allows for rapid change when the need arises. Orchestration platforms are themselves software that become part of
your production environment, which in turn requires hardening and security patching and configuration; in other
words, if you use Kubernetes, make sure you patch Kubernetes.
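An admission-style check of the kind an orchestration pipeline might run is sketched below; the workload fields follow Kubernetes-style naming, but the policy itself and its requirements are hypothetical.
    # Illustrative check that each orchestrated workload meets predetermined
    # security requirements before deployment.
    def violations(pod_spec):
        problems = []
        for container in pod_spec.get("containers", []):
            sec = container.get("securityContext", {})
            if sec.get("privileged"):
                problems.append(f"{container['name']}: privileged containers not allowed")
            if not sec.get("runAsNonRoot"):
                problems.append(f"{container['name']}: must set runAsNonRoot")
            if not sec.get("readOnlyRootFilesystem"):
                problems.append(f"{container['name']}: root filesystem should be read-only")
        return problems

    if __name__ == "__main__":
        workload = {"containers": [{"name": "api",
                                    "securityContext": {"privileged": False,
                                                        "runAsNonRoot": True}}]}
        for problem in violations(workload):
            print("BLOCK:", problem)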

SE LEVEL 3
[SE3.2: 13] Use code protection.
To protect intellectual property and make exploit development harder, the organization erects barriers to reverse
engineering its software (e.g., anti-tamper, debug protection, anti-piracy features, runtime integrity). This is
particularly important for widely distributed code, such as mobile applications and JavaScript distributed to browsers.
For some software, obfuscation techniques could be applied as part of the production build and release process. In
other cases, these protections could be applied at the software-defined network or software orchestration layer when
applications are being dynamically regenerated post-deployment. On some platforms, employing Data Execution
Prevention (DEP), Safe Structured Exception Handling (SafeSEH), and Address Space Layout Randomization (ASLR) can be a
good start at making exploit development more difficult, but be aware that yesterday’s protection mechanisms may
not hold up to today’s attacks.
[SE3.3: 9] Use application behavior monitoring and diagnostics.
The organization monitors production software to look for misbehavior or signs of attack. This activity goes beyond
host and network monitoring to look for software-specific problems, such as indications of malicious behavior, fraud,
and related issues. Intrusion detection and anomaly detection systems at the application level might focus on an
application’s interaction with the operating system (through system calls) or with the kinds of data that an application
consumes, originates, and manipulates. In any case, the signs that an application isn’t behaving as expected will be
specific to the software and its environment, so one-size-fits-all solutions probably won’t generate satisfactory results.
In some types of environments (e.g., PaaS), some of this data and the associated predictive analytics might come
from a vendor.
[SE3.6: 14] Enhance application inventory with operations bill of materials.
A list of applications and their locations in production environments is essential information for the enterprise (see
[CMVM2.3 Develop an operations inventory of software delivery value streams]). In addition, a manifest detailing
the components, dependencies, configurations, external services, and so on for all production software helps the
organization to tighten its security posture—that is, to react with agility as attackers and attacks evolve, compliance
requirements change, and the number of items to patch grows quite large. Knowing where all the components live
in running software—whether they’re in private data centers, in clouds, or sold as box products—allows for timely
response when unfortunate events occur. Done properly, institutional use of container security solutions can make
inventory efforts much simpler.
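A toy version of such an inventory query is sketched below; the inventory structure, component names, and versions are invented purely to show how "where does this vulnerable component run?" could be answered from an operations bill of materials.
    # Illustrative operations bill of materials: for each deployed application,
    # record the components it runs and where, so impact questions can be answered quickly.
    INVENTORY = [
        {"app": "storefront", "environment": "aws-prod",
         "components": {"web-framework": "4.2.1", "log-parser": "2.3.0"}},
        {"app": "billing", "environment": "on-prem-dc1",
         "components": {"log-parser": "2.5.1", "queue-client": "1.9.0"}},
    ]

    def affected_deployments(component, vulnerable_versions):
        for entry in INVENTORY:
            version = entry["components"].get(component)
            if version in vulnerable_versions:
                yield entry["app"], entry["environment"], version

    if __name__ == "__main__":
        for app, env, version in affected_deployments("log-parser", {"2.3.0", "2.4.0"}):
            print(f"patch needed: {app} in {env} runs log-parser {version}")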

DEPLOYMENT: CONFIGURATION MANAGEMENT & VULNERABILITY MANAGEMENT (CMVM)
The Configuration Management & Vulnerability Management practice concerns itself with operations processes,
patching and updating applications, version control, defect tracking and remediation, and incident handling.
CMVM LEVEL 1
[CMVM1.1: 108] Create or interface with incident response.
The SSG is prepared to respond to an event or alert and is regularly included in the incident response process, either
by creating its own incident response capability or by regularly interfacing with the organization’s existing team.
A standing meeting between the SSG and the incident response team can keep information flowing in both
directions. Having pre-built communication channels with critical vendors (e.g., infrastructure, SaaS, PaaS) is also
very important.
[CMVM1.2: 96] Identify software defects found in operations monitoring and feed them back to development.
Defects identified in production through operations monitoring are fed back to development and used to change
developer behavior. The contents of production logs can be revealing (or can reveal the need for improved logging).
Entering incident triage data into an existing bug-tracking system (perhaps by making use of a special security flag)
can close the information loop and make sure that security issues get fixed. Increasingly, organizations are relying
on telemetry provided by agents packaged with software as part of cloud security posture monitoring, container
configuration monitoring, RASP, or similar products to detect software defects or adversaries’ exploration prior to exploit.
In most cases, organizations must also rely on additional analysis tools (e.g., ELK, Datadog) to aggregate and correlate
that avalanche of data and provide useful input to development. In the best of cases, processes in the SSDL can be
improved based on operational data (see [CMVM3.2 Enhance the SSDL to prevent software bugs found in operations]).

CMVM LEVEL 2
[CMVM2.1: 92] Have emergency response.
The organization can make quick code and configuration changes when software (e.g., application, API, microservice,
infrastructure) is under attack, with a rapid-response team working in conjunction with application owners,
developers, operators, and the SSG to study the code and the attack, find a resolution, and fix the production code
(e.g., push a patch into production, rollback to a known-good version, deploy a new container). Often, the emergency
response team is the engineering team itself. A well-defined process is a must here, but a process that has never been
used might not actually work.
[CMVM2.2: 93] Track software bugs found in operations through the fix process.
Defects found in operations (see [CMVM1.2 Identify software defects found in operations monitoring and feed them
back to development]) are entered into established defect management systems and tracked through the fix process.
This capability could come in the form of a two-way bridge between bug finders and bug fixers, possibly working through intermediaries, but make sure the loop is closed completely. Bugs can crop up in all types of deployable
artifacts, deployment automation, and infrastructure configuration. Setting a security flag in the bug-tracking system
can help facilitate tracking.
[CMVM2.3: 61] Develop an operations inventory of software delivery value streams.
The organization has a map of its software deployments and related containerization, orchestration, and deployment
automation code, along with the respective owners that contribute to business value streams. If a software asset
needs to be changed, operations or DevOps teams can reliably identify both the stakeholders and all the places
where the change needs to be deployed. Common components can be noted so that, when an error occurs in one
application, other applications that share the same components can be fixed as well. Accurately representing an
inventory will likely involve enumerating at least the source code, the open source incorporated both during the build
and during dynamic production updates, the orchestration software incorporated into production images, and any
service discovery or invocation that occurs in production.

CMVM LEVEL 3
[CMVM3.1: 4] Fix all occurrences of software bugs found in operations.
The organization fixes all instances of a bug found during operations (see [CMVM1.2 Identify software defects found
in operations monitoring and feed them back to development])—not just the small number of instances that trigger
bug reports—to meet risk management, timeliness, recovery, continuity, and resiliency goals. Doing this proactively
requires the ability to reexamine the entire inventory of software delivery value streams when new kinds of bugs
come to light (see [CR3.3 Create capability to eradicate bugs]). One way to approach reexamination is to create a
ruleset that generalizes a deployed bug into something that can be scanned for via automated code review. In
some environments, fixing a bug might comprise removing it from production immediately and making the actual
fix in some priority order before redeployment. Use of orchestration can greatly simplify deploying the fix for all
occurrences of a software bug (see [SE2.7 Use orchestration for containers and virtualized environments]).
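One illustrative way to generalize a production bug into a repository-wide scan is sketched below; the two rules shown (a disabled TLS verification flag and a hard-coded temporary credential) are examples, and a real implementation would more likely extend an existing static analysis tool's ruleset.
    import pathlib
    import re

    # Generalize bugs found in operations into rules, then search the whole codebase
    # for every other occurrence, not just the instance that triggered the report.
    RULES = {
        "disabled-tls-verification": re.compile(r"verify\s*=\s*False"),
        "hardcoded-temp-credential": re.compile(r"password\s*=\s*['\"]changeme['\"]"),
    }

    def scan(root="."):
        findings = []
        for path in pathlib.Path(root).rglob("*.py"):
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for name, pattern in RULES.items():
                for match in pattern.finditer(text):
                    line = text.count("\n", 0, match.start()) + 1
                    findings.append((str(path), line, name))
        return findings

    if __name__ == "__main__":
        for path, line, rule in scan():
            print(f"{path}:{line}: {rule}")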
[CMVM3.2: 11] Enhance the SSDL to prevent software bugs found in operations.
Experience from operations leads to changes in the SSDL (see [SM1.1 Publish process and evolve as necessary]), which
can in turn be strengthened to prevent the reintroduction of bugs found during operations. To make this process
systematic, incident response postmortem includes a feedback-to-SSDL step. The outcomes of the postmortem
might result in changes such as tool-based policy rulesets in a CI/CD pipeline and adjustments to automated
deployment configuration (see [SM3.4 Integrate software-defined lifecycle governance]). This works best when root-
cause analysis pinpoints where in the software lifecycle an error could have been introduced or slipped by uncaught
(e.g., a defect escape). DevOps engineers might have an easier time with this because all the players are likely involved
in the discussion and the solution. An ad hoc approach to SSDL improvement isn’t sufficient.
[CMVM3.3: 14] Simulate software crises.
The SSG simulates high-impact software security crises to ensure software incident detection and response
capabilities minimize damage. Simulations could test for the ability to identify and mitigate specific threats or, in
other cases, begin with the assumption that a critical system or service is already compromised and evaluate the
organization’s ability to respond. Planned chaos engineering can be effective at triggering unexpected conditions
during simulations. The exercises must include attacks or other software security crises at the appropriate software
layer to generate useful results (e.g., at the application layer for web applications and at lower layers for IoT devices).
When simulations model successful attacks, an important question to consider is the time required to clean up.
Regardless, simulations must focus on security-relevant software failure and not on natural disasters or other types
of emergency response drills. Organizations that are highly dependent on vendor infrastructure (e.g., cloud service
providers, SaaS, PaaS) and security features will naturally include those things in crisis simulations.

[CMVM3.4: 20] Operate a bug bounty program.
The organization solicits vulnerability reports from external researchers and pays a bounty for each verified and
accepted vulnerability received. Payouts typically follow a sliding scale linked to multiple factors, such as vulnerability
type (e.g., remote code execution is worth $10,000 vs. CSRF is worth $750), exploitability (demonstrable exploits
command much higher payouts), or specific service and software versions (widely deployed or critical services
warrant higher payouts). Ad hoc or short-duration activities, such as capture-the-flag contests or informal crowd-
sourced efforts, don’t constitute a bug bounty program.
[CMVM3.5: 10] Automate verification of operational infrastructure security.
The SSG works with engineering teams to facilitate a controlled self-service process that replaces some traditional
IT efforts, such as application and infrastructure deployment, and includes verification of security properties
(e.g., adherence to agreed-upon security hardening). Engineers now create networks, containers, and machine
instances, orchestrate deployments, and perform other tasks that were once IT’s sole responsibility (see [AM3.3
Monitor automated asset creation]). In facilitating this change, the organization uses machine-readable policies and
configuration standards to automatically detect issues and report on infrastructure that does not meet expectations
(see [SE2.2 Define secure deployment parameters and configurations]). In some cases, the automation makes
changes to running environments to bring them into compliance. In many cases, organizations use a single policy to
manage automation in different environments, such as in multi- and hybrid-cloud environments.
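The sketch below shows the flavor of a machine-readable policy evaluated against an export of infrastructure state; the resource model and the two policies are hypothetical, and real implementations typically evaluate policy against provider APIs or infrastructure-as-code plans.
    # Illustrative machine-readable policy applied to a (made-up) export of resource state.
    RESOURCES = [
        {"type": "object_storage", "name": "customer-exports", "public_read": True},
        {"type": "firewall_rule", "name": "ssh-anywhere", "port": 22, "cidr": "0.0.0.0/0"},
        {"type": "object_storage", "name": "build-cache", "public_read": False},
    ]

    POLICIES = [
        ("storage must not be public",
         lambda r: r["type"] == "object_storage" and r.get("public_read")),
        ("ssh must not be open to the internet",
         lambda r: r["type"] == "firewall_rule" and r.get("port") == 22
                   and r.get("cidr") == "0.0.0.0/0"),
    ]

    def evaluate(resources):
        # Report every resource that violates any policy rule.
        return [(r["name"], rule) for r in resources
                for rule, broken in POLICIES if broken(r)]

    if __name__ == "__main__":
        for name, rule in evaluate(RESOURCES):
            print(f"NONCOMPLIANT: {name}: {rule}")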
[CMVM3.6: 0] Publish risk data for deployable artifacts.
The organization collects and publishes risk information—whether captured through manual processes or telemetry
automation—about the applications, services, APIs, containers, and other software it deploys. Published information
extends beyond basic software security (see [SM2.1 Publish data about software security internally and drive change])
and inventory data (see [CMVM2.3 Develop an operations inventory of software delivery value streams]) to include risk
information, such as constituency of the software (e.g., bill of materials), what group created it and how, and the risks
associated with known vulnerabilities, deployment models, security controls, or other security characteristics intrinsic
to each artifact. This approach stimulates cross-functional coordination and helps stakeholders take informed risk
management action. In some cases, much of this information is created by automated processes and associated with
a registry that provides stakeholder visibility, but it might also include a significant amount of manual effort in data
gathering, analysis, and scoring. A list of risks that isn't used for decision support won't achieve useful results.
[CMVM3.7: 0] Streamline incoming responsible vulnerability disclosure.
The SSG provides external bug reporters with a line of communication to internal security experts through a low-
friction, public entry point. These experts work with bug reporters to invoke any necessary organizational responses
and coordinate with the external entities throughout the defect management lifecycle. Successful disclosure
processes require insight from internal stakeholders such as legal, marketing, and public relations roles to simplify
and expedite the decision-making during software security crises. Although bug bounties may be important to
motivate some researchers (see [CMVM3.4 Operate a bug bounty program]), proper public attribution and a low-friction reporting process are often sufficient motivation for researchers to participate in a coordinated disclosure.
Most organizations will use a combination of easy-to-find landing pages, common email addresses (security@),
and embedded product documentation when appropriate (security.txt) as an entry point for external reporters to
invoke this process.
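As one small, concrete piece of such an entry point, the sketch below generates a minimal security.txt file (RFC 9116); the contact address and URLs are placeholders.
    from datetime import datetime, timedelta, timezone

    # Illustrative generation of a security.txt file (RFC 9116), one common way to give
    # external reporters a low-friction, discoverable entry point.
    def security_txt(contact_email, policy_url, acknowledgments_url, days_valid=365):
        expires = datetime.now(timezone.utc) + timedelta(days=days_valid)
        lines = [
            f"Contact: mailto:{contact_email}",
            f"Expires: {expires.strftime('%Y-%m-%dT%H:%M:%SZ')}",
            f"Policy: {policy_url}",
            f"Acknowledgments: {acknowledgments_url}",
            "Preferred-Languages: en",
        ]
        return "\n".join(lines) + "\n"

    if __name__ == "__main__":
        # Typically served at /.well-known/security.txt on the organization's sites.
        print(security_txt("[email protected]",
                           "https://example.com/vulnerability-disclosure-policy",
                           "https://example.com/security/hall-of-fame"))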

PART FOUR: APPENDIX
This appendix provides some history on the BSIMM project and how it serves as a longitudinal study of software
security efforts, along with the most recent changes to the model. To help understand how various vertical markets
approach their SSIs, we also provide a scorecard showing activity observation counts per vertical market. Finally, we
provide a list of the 122 BSIMM12 activities.

BUILDING A MODEL FOR SOFTWARE SECURITY


In the late 1990s, software security began to flourish as a discipline separate from computer and network security.
Researchers began to put more emphasis on studying the ways in which a programmer can contribute to or
unintentionally undermine the security of a computer system and started asking some specific questions: What
kinds of bugs and flaws lead to security problems? How can we identify problems systematically?
By the middle of the following decade, there was an emerging consensus that building secure software required
more than just smart individuals toiling away. Getting security right, especially across a software portfolio, meant
being directly involved in the software development process, even as the process evolves.
Since then, practitioners have come to learn that process and developer tools alone are insufficient. Software security
encompasses business, social, and organizational aspects as well.
Tables A and B show how the BSIMM has grown over the years. (Recall that our data freshness constraints,
introduced with BSIMM-V and later tightened, cause data from firms with aging measurements to be removed from
the dataset.) BSIMM12 describes the work of 9,285 SSG and satellite people working directly in software security,
impacting the security efforts of 398,544 developers.

BSIMM NUMBERS OVER TIME
BSIMM12 BSIMM11 BSIMM10 BSIMM9 BSIMM8 BSIMM7 BSIMM6 BSIMM-V BSIMM4 BSIMM3 BSIMM2 BSIMM1

FIRMS 128 130 122 120 109 95 78 67 51 42 30 9

MEASUREMENTS 341 357 339 320 256 237 202 161 95 81 49 9

2ND MEASURES 31 32 50 42 36 30 26 21 13 11 0 0

3RD MEASURES 14 12 32 20 16 15 10 4 1 0 0 0

4TH MEASURES 4 7 8 7 5 2 2

SSG MEMBERS 2,837 1,801 1,596 1,600 1,268 1,111 1,084 976 978 786 635 370

SATELLITE MEMBERS 6,448 6,656 6,298 6,291 3,501 3,595 2,111 1,954 2,039 1,750 1,150 710

DEVELOPERS 398,544 490,167 468,500 415,598 290,582 272,782 287,006 272,358 218,286 185,316 141,175 67,950

APPLICATIONS 153,519 176,269 173,233 135,881 94,802 87,244 69,750 69,039 58,739 41,157 28,243 3,970

AVG. SSG AGE (YEARS) 4.41 4.32 4.53 4.13 3.88 3.94 3.98 4.28 4.13 4.32 4.49 5.32

AVG. SSG RATIO 2.59/100 2.01/100 1.37/100 1.33/100 1.60/100 1.61/100 1.51/100 1.4/100 1.95/100 1.99/100 1.02/100 1.13/100

TABLE A. BSIMM NUMBERS OVER TIME. The chart shows how the BSIMM study has grown over the years.

BSIMM VERTICALS OVER TIME


BSIMM12 BSIMM11 BSIMM10 BSIMM9 BSIMM8 BSIMM7 BSIMM6 BSIMM-V BSIMM4 BSIMM3 BSIMM2 BSIMM1

FINANCIAL 38 42 57 50 47 42 33 26 19 17 12 4

FINTECH 21 21

ISVs 42 46 43 42 38 30 27 25 19 15 7 4

TECH 28 27 20 22 16 14 17 14 13 10 7 2

HEALTHCARE 14 14 16 19 17 15 10

INTERNET OF THINGS 18 17 13 16 12 12 13

CLOUD 26 30 20 17 16 15

INSURANCE 13 14 11 10 11 10

RETAIL 7 8 9 10

TABLE B. BSIMM VERTICALS OVER TIME. The vertical representation has grown over the years. Remember that a firm can
appear in more than one vertical.

THE BSIMM AS A LONGITUDINAL STUDY


Fifty-two of the 128 firms in BSIMM12 have been measured at least twice. On average, the time between first and second
measurements for those 52 firms was 30.8 months. Although individual activities among the 12 practices come and go
(as shown in Table C), in general, remeasurement over time shows a clear trend of increased maturity. The raw score
went up in 48 of the 52 firms and remained the same in two firms. Across all 52 firms, the score increased by an average
of 13.4 (43.4%) from their first to their second measurement. Simply put, SSIs mature over time.

GOVERNANCE INTELLIGENCE SSDL TOUCHPOINTS DEPLOYMENT
Each column group lists: ACTIVITY, BSIMM ROUND 1 (OF 52), BSIMM ROUND 2 (OF 52)

[SM1.1] 24 41 [AM1.2] 31 41 [AA1.1] 48 49 [PT1.1] 45 49


[SM1.3] 28 37 [AM1.3] 12 19 [AA1.2] 13 21 [PT1.2] 34 39
[SM1.4] 44 48 [AM1.5] 21 28 [AA1.3] 11 16 [PT1.3] 31 37
[SM2.1] 16 31 [AM2.1] 4 8 [AA1.4] 23 31 [PT2.2] 10 11
[SM2.2] 17 23 [AM2.2] 2 5 [AA2.1] 6 16 [PT2.3] 11 14
[SM2.3] 20 39 [AM2.5] 6 5 [AA2.2] 5 10 [PT3.1] 4 5
[SM2.6] 18 26 [AM2.6] 5 5 [AA3.1] 3 7 [PT3.2] 1 3
[SM2.7] 19 34 [AM2.7] 4 8 [AA3.2] 1 1
[SM3.1] 8 17 [AM3.1] 2 1 [AA3.3] 4 5
[SM3.2] 2 3 [AM3.2] 0 0
[SM3.3] 6 17 [AM3.3] 0 3
[SM3.4] 0 2

[CP1.1] 31 40 [SFD1.1] 36 45 [CR1.2] 27 34 [SE1.1] 25 35


[CP1.2] 42 47 [SFD1.2] 32 39 [CR1.4] 30 44 [SE1.2] 45 48
[CP1.3] 22 39 [SFD2.1] 10 19 [CR1.5] 12 22 [SE2.2] 18 21
[CP2.1] 18 29 [SFD2.2] 18 24 [CR1.6] 15 24 [SE2.4] 11 13
[CP2.2] 16 18 [SFD3.1] 4 11 [CR1.7] 8 23 [SE2.5] 4 10
[CP2.3] 13 25 [SFD3.2] 4 9 [CR2.6] 7 13 [SE2.6] 1 14
[CP2.4] 13 26 [SFD3.3] 0 2 [CR2.7] 9 12 [SE2.7] 1 5
[CP2.5] 23 28 [CR3.2] 0 5 [SE3.2] 5 3
[CP3.1] 7 14 [CR3.3] 0 1 [SE3.3] 3 4
[CP3.2] 8 14 [CR3.4] 0 0 [SE3.6] 0 4
[CP3.3] 1 5 [CR3.5] 0 1

[T1.1] 30 36 [SR1.1] 30 41 [ST1.1] 40 42 [CMVM1.1] 45 46


[T1.7] 17 30 [SR1.2] 30 39 [ST1.3] 41 36 [CMVM1.2] 44 39
[T1.8] 7 16 [SR1.3] 31 43 [ST1.4] 14 23 [CMVM2.1] 37 42
[T2.5] 10 21 [SR2.2] 13 28 [ST2.4] 3 8 [CMVM2.2] 35 40
[T2.8] 10 9 [SR2.4] 13 28 [ST2.5] 3 9 [CMVM2.3] 21 32
[T2.9] 7 21 [SR2.5] 9 26 [ST2.6] 6 5 [CMVM3.1] 1 0
[T3.1] 1 4 [SR3.1] 5 12 [ST3.3] 2 3 [CMVM3.2] 2 6
[T3.2] 2 8 [SR3.2] 7 10 [ST3.4] 0 0 [CMVM3.3] 2 4
[T3.3] 2 10 [SR3.3] 7 8 [ST3.5] 2 3 [CMVM3.4] 2 9
[T3.4] 0 13 [SR3.4] 12 12 [ST3.6] 0 1 [CMVM3.5] 0 0
[T3.5] 0 6 [CMVM3.6] 0 0
[T3.6] 2 4 [CMVM3.7] 0 0

TABLE C. BSIMM12 REASSESSMENTS SCORECARD ROUND 1 VS. ROUND 2. This chart shows how 52 SSIs changed
between their first and second assessment.

Figure A shows the average high-water marks per practice for the 52 firms in their first and second assessment.
Over the average of about 10 quarters between the two assessments, we see clear growth in every practice, with the
smallest growth occurring in the Penetration Testing practice.

[Spider chart: average high-water mark per practice on a 0.0–3.0 scale across the 12 BSIMM practices; series: ALLFIRMS ROUND 1 (52) and ALLFIRMS ROUND 2 (52).]

FIGURE A. ALLFIRMS ROUND 1 VS. ALLFIRMS ROUND 2 SPIDER CHART. This diagram illustrates the high-water mark
change in 52 firms between their first and second BSIMM assessment.

CHANGES TO LONGITUDINAL SCORECARD
There are two obvious factors causing the numerical change seen in the longitudinal scorecard (Table C, showing
52 BSIMM12 firms moving from their first to second assessment). The first factor is newly observed activities in the
second assessment. Figure B shows the activities where we see the biggest increase in new observations.

[Bar chart of observation increases (0–25 scale) for SM2.3, SM1.1, CP1.3, SR2.5, SM2.1, SM2.7, SR2.2, SR2.4, and CR1.7.]

FIGURE B. ACTIVITY INCREASES BETWEEN FIRST AND SECOND MEASUREMENTS. Between initial measurements,
firms most commonly formalize or adopt activities in the Governance domain.

The second factor is activities that are no longer observed or whose data have aged out of the pool. For example, [CR1.2 Perform opportunistic
code review] was newly observed in 17 firms, but it was either no longer observed in 10 firms or decreased due to
data aging out, giving a total change of seven (as shown in the scorecard). In a different example, the activity [SM2.7
Create evangelism role and perform internal marketing] was newly observed in 20 firms and dropped out of the data
pool or was no longer observed in five firms. Therefore, the total observation count changed by 15 on the scorecard.
Interestingly, while we continue to see an increase in observation counts for [SM2.7 Create evangelism role and
perform internal marketing] for firms performing their second assessment, the overall observation count for that
activity across all 128 firms decreased in BSIMM12.

GOVERNANCE INTELLIGENCE SSDL TOUCHPOINTS DEPLOYMENT
Each column group lists: ACTIVITY, BSIMM ROUND 1 (OF 21), BSIMM ROUND 3 (OF 21)

[SM1.1] 11 19 [AM1.2] 13 19 [AA1.1] 19 20 [PT1.1] 18 19


[SM1.3] 10 16 [AM1.3] 4 15 [AA1.2] 6 9 [PT1.2] 12 21
[SM1.4] 19 21 [AM1.5] 10 11 [AA1.3] 5 8 [PT1.3] 12 16
[SM2.1] 5 16 [AM2.1] 3 8 [AA1.4] 11 14 [PT2.2] 4 6
[SM2.2] 6 12 [AM2.2] 1 5 [AA2.1] 3 7 [PT2.3] 7 5
[SM2.3] 9 15 [AM2.5] 3 4 [AA2.2] 0 6 [PT3.1] 2 3
[SM2.6] 9 11 [AM2.6] 1 2 [AA3.1] 2 3 [PT3.2] 1 3
[SM2.7] 6 18 [AM2.7] 2 5 [AA3.2] 0 0
[SM3.1] 5 7 [AM3.1] 0 1 [AA3.3] 3 3
[SM3.2] 0 6 [AM3.2] 0 2
[SM3.3] 3 6 [AM3.3] 0 0
[SM3.4] 0 1

[CP1.1] 13 21 [SFD1.1] 18 18 [CR1.2] 12 18 [SE1.1] 11 17


[CP1.2] 18 20 [SFD1.2] 15 18 [CR1.4] 12 20 [SE1.2] 18 21
[CP1.3] 10 17 [SFD2.1] 5 9 [CR1.5] 4 10 [SE2.2] 5 5
[CP2.1] 10 13 [SFD2.2] 7 14 [CR1.6] 7 10 [SE2.4] 4 5
[CP2.2] 5 7 [SFD3.1] 1 5 [CR1.7] 3 13 [SE2.5] 0 5
[CP2.3] 8 15 [SFD3.2] 3 8 [CR2.6] 2 5 [SE2.6] 0 7
[CP2.4] 6 10 [SFD3.3] 0 0 [CR2.7] 4 6 [SE2.7] 0 4
[CP2.5] 10 13 [CR3.2] 0 2 [SE3.2] 0 2
[CP3.1] 5 6 [CR3.3] 0 1 [SE3.3] 2 2
[CP3.2] 6 3 [CR3.4] 0 0 [SE3.6] 0 1
[CP3.3] 1 2 [CR3.5] 0 0

[T1.1] 10 18 [SR1.1] 14 18 [ST1.1] 14 18 [CMVM1.1] 18 20


[T1.7] 7 15 [SR1.2] 14 20 [ST1.3] 17 19 [CMVM1.2] 21 20
[T1.8] 3 12 [SR1.3] 14 19 [ST1.4] 7 12 [CMVM2.1] 19 20
[T2.5] 5 9 [SR2.2] 7 15 [ST2.4] 2 2 [CMVM2.2] 14 18
[T2.8] 3 8 [SR2.4] 4 15 [ST2.5] 0 2 [CMVM2.3] 12 15
[T2.9] 2 10 [SR2.5] 4 8 [ST2.6] 3 4 [CMVM3.1] 0 0
[T3.1] 0 2 [SR3.1] 3 9 [ST3.3] 1 2 [CMVM3.2] 1 2
[T3.2] 0 5 [SR3.2] 5 5 [ST3.4] 0 1 [CMVM3.3] 1 4
[T3.3] 0 3 [SR3.3] 4 4 [ST3.5] 2 1 [CMVM3.4] 0 6
[T3.4] 0 5 [SR3.4] 8 6 [ST3.6] 0 0 [CMVM3.5] 0 1
[T3.5] 0 3 [CMVM3.6] 0 0
[T3.6] 1 1 [CMVM3.7] 0 0

TABLE D. BSIMM12 REASSESSMENTS SCORECARD ROUND 1 VS. ROUND 3. This chart shows how 21 SSIs changed from
their first to their third assessment.

Twenty-one of the 128 BSIMM12 firms have had a third measurement. Table D captures the ongoing growth that
occurs in these SSIs.
Figure C shows the average high-water marks per practice for the 21 firms in their first and third assessment.

[Spider chart: average high-water mark per practice on a 0.0–3.0 scale across the 12 BSIMM practices; series: ALLFIRMS ROUND 1 (21) and ALLFIRMS ROUND 3 (21).]

FIGURE C. ALLFIRMS ROUND 1 VS. ALLFIRMS ROUND 3 SPIDER CHART. This diagram illustrates the high-water mark
change in 21 firms between their first and third BSIMM assessment.

Interestingly, while this chart shows growth in every practice, it shows only a slight increase in the Compliance &
Policy and Penetration Testing practices.
Figure D shows how the average observation count increases by practice from the first to the second assessment and
then from the first to the third for the 21 firms that have performed at least three measurements. When comparing
how firms grew in the second and third measurements, we can see the largest increase is in Strategy & Metrics with
smaller increases in Attack Models, Standards & Requirements, and Software Environment. Although we noticed an
increase in observation count in Security Features & Design and Architecture Analysis from the first to the second
assessment, we did not see a corresponding increase from the second to the third. Drivers for this might include that budgeting for some human-intensive activities is hard and can be done only periodically, that some activities are more difficult than others and are attempted only periodically, and that some activities are easier to apply across the portfolio and thus don't need frequent adjustment.

[Bar chart: average additional activities performed per practice (SM, CP, T, AM, SFD, SR, AA, CR, ST, PT, SE, CMVM) on a 0.0–3.5 scale; series: 21 FIRMS ROUND 1 vs. ROUND 2 and 21 FIRMS ROUND 1 vs. ROUND 3.]

FIGURE D. LONGITUDINAL INCREASES BETWEEN 21 FIRMS ROUND 2 AND ROUND 3 BY PRACTICE. This chart shows
which practices organizations exert more effort in over time by showing average increase as firms move from their first to
second and then first to third measurement.

Digging into the practices further allows us to recognize individual activities that have highly dissimilar observation
rates across measurement rounds. For example, there are some activities that are rare across firms’ first
measurements but are much more common for their second measurements. Table E illustrates this for level 3
activities. We can conclude that while some organizations do some level 3 activities early in their maturity journey,
many more turn to level 3 activities after reaching a good benchmark security posture.

Columns: ACTIVITY, % OBSERVED (76 R1 FIRMS), % OBSERVED (52 R2 FIRMS)

[SM3.1] Use a software asset tracking application with portfolio view 7% 33%

[SM3.3] Identify metrics and use them to drive resourcing 5% 33%

[CP3.2] Impose policy on vendors 5% 27%

[CP3.3] Drive feedback from software lifecycle data back to policy 1% 10%

[SFD3.1] Form a review board or central committee to approve and maintain secure design patterns 7% 21%

[SR3.2] Communicate standards to vendors 4% 19%

[SR3.3] Use secure coding standards 1% 15%

TABLE E. OBSERVATION RATE OF SELECTED LEVEL 3 ACTIVITIES FOR 76 R1 AND 52 R2 FIRMS. We see a significantly
higher observation rate for some level 3 activities, where the increase in observation counts from the first to the second
assessment round is much higher than the growth rate for other activities. Therefore, these might represent high-impact
activities that a firm should consider as it goes through the maturing and enabling phases of SSI maturity. Don’t overlook
activities just because they are level 3; remember, some activities are level 3 because they are newly added.

CHARTS, GRAPHS, AND SCORECARDS
In this section, we present the BSIMM scorecard and the BSIMM skeleton showing the activities and their observation
rates, along with other useful charts.
The BSIMM data yield very interesting analytical results as shown throughout this document. Table F shows the
highest-resolution BSIMM data that are published. Organizations can use these data to note how often we observe
each activity across all 128 participants and then use that information to help plan the next areas of focus. Activities
that are broadly popular across all vertical markets will likely benefit your organization as well.

GOVERNANCE INTELLIGENCE SSDL TOUCHPOINTS DEPLOYMENT
Each column group lists: ACTIVITY, BSIMM12 FIRMS (OUT OF 128), BSIMM12 FIRMS (%)

STRATEGY & METRICS ATTACK MODELS ARCHITECTURE ANALYSIS PENETRATION TESTING

[SM1.1] 91 71.1% [AM1.2] 77 60.2% [AA1.1] 113 88.3% [PT1.1] 111 86.7%

[SM1.3] 81 63.3% [AM1.3] 41 32.0% [AA1.2] 49 38.3% [PT1.2] 98 76.6%

[SM1.4] 118 92.2% [AM1.5] 61 47.7% [AA1.3] 37 28.9% [PT1.3] 88 68.8%

[SM2.1] 63 49.2% [AM2.1] 14 10.9% [AA1.4] 62 48.4% [PT2.2] 33 25.8%

[SM2.2] 60 46.9% [AM2.2] 11 8.6% [AA2.1] 29 22.7% [PT2.3] 34 26.6%

[SM2.3] 60 46.9% [AM2.5] 13 10.2% [AA2.2] 28 21.9% [PT3.1] 23 18.0%

[SM2.6] 62 48.4% [AM2.6] 10 7.8% [AA3.1] 16 12.5% [PT3.2] 12 9.4%

[SM2.7] 64 50.0% [AM2.7] 16 12.5% [AA3.2] 2 1.6%

[SM3.1] 22 17.2% [AM3.1] 5 3.9% [AA3.3] 11 8.6%

[SM3.2] 10 7.8% [AM3.2] 4 3.1%

[SM3.3] 21 16.4% [AM3.3] 6 4.7%

[SM3.4] 6 4.7%

COMPLIANCE & POLICY SECURITY FEATURES & DESIGN CODE REVIEW SOFTWARE ENVIRONMENT

[CP1.1] 98 76.6% [SFD1.1] 102 79.7% [CR1.2] 80 62.5% [SE1.1] 80 62.5%

[CP1.2] 114 89.1% [SFD1.2] 83 64.8% [CR1.4] 102 79.7% [SE1.2] 117 91.4%

[CP1.3] 88 68.8% [SFD2.1] 33 25.8% [CR1.5] 49 38.3% [SE2.2] 48 37.5%

[CP2.1] 55 43.0% [SFD2.2] 55 43.0% [CR1.6] 32 25.0% [SE2.4] 32 25.0%

[CP2.2] 49 38.3% [SFD3.1] 16 12.5% [CR1.7] 51 39.8% [SE2.5] 44 34.4%

[CP2.3] 67 52.3% [SFD3.2] 15 11.7% [CR2.6] 25 19.5% [SE2.6] 59 46.1%

[CP2.4] 54 42.2% [SFD3.3] 5 3.9% [CR2.7] 17 13.3% [SE2.7] 33 25.8%

[CP2.5] 74 57.8% [CR3.2] 9 7.0% [SE3.2] 13 10.2%

[CP3.1] 24 18.8% [CR3.3] 4 3.1% [SE3.3] 9 7.0%

[CP3.2] 18 14.1% [CR3.4] 1 0.8% [SE3.6] 14 10.9%

[CP3.3] 6 4.7% [CR3.5] 0 0%

TRAINING STANDARDS & REQUIREMENTS SECURITY TESTING CONFIG. MGMT. & VULN. MGMT.

[T1.1] 76 59.4% [SR1.1] 90 70.3% [ST1.1] 100 78.1% [CMVM1.1] 108 84.4%

[T1.7] 53 41.4% [SR1.2] 88 68.8% [ST1.3] 87 68.0% [CMVM1.2] 96 75.0%

[T1.8] 46 35.9% [SR1.3] 99 77.3% [ST1.4] 50 39.1% [CMVM2.1] 92 71.9%

[T2.5] 39 30.5% [SR2.2] 64 50.0% [ST2.4] 19 14.8% [CMVM2.2] 93 72.7%

[T2.8] 27 21.1% [SR2.4] 74 57.8% [ST2.5] 21 16.4% [CMVM2.3] 61 47.7%

[T2.9] 35 27.3% [SR2.5] 55 43.0% [ST2.6] 15 11.7% [CMVM3.1] 4 3.1%


[T3.1] 6 4.7% [SR3.1] 35 27.3% [ST3.3] 8 6.3% [CMVM3.2] 11 8.6%
[T3.2] 23 18.0% [SR3.2] 13 10.2% [ST3.4] 2 1.6% [CMVM3.3] 14 10.9%
[T3.3] 23 18.0% [SR3.3] 9 7.0% [ST3.5] 2 1.6% [CMVM3.4] 20 15.6%
[T3.4] 24 18.8% [SR3.4] 20 15.6% [ST3.6] 2 1.6% [CMVM3.5] 10 7.8%
[T3.5] 9 7.0% [CMVM3.6] 0 0%
[T3.6] 4 3.1% [CMVM3.7] 0 0%

TABLE F. BSIMM12 SCORECARD. This scorecard shows how often we observed each of the BSIMM activities in the BSIMM12
data pool of 128 firms. Note that the observation count data fall naturally into levels per practice.

In the BSIMM12 scorecard, we also identified the most common activity in each practice (highlighted in blue). These
12 activities were observed in at least 76 (nearly 60%) of the 128 firms we studied (see Part Three for “Table 4. Most
Common Activities Per Practice”).
To provide another view into this table data, we created spider charts by noting the highest-level activity observed
for each practice per BSIMM participant (a high-water mark), then averaging these values over the group of 128 firms
to produce 12 numbers (one for each practice). The resulting spider chart (Figure E) plots these values on 12 spokes
corresponding to the 12 BSIMM practices. Note that performing level 3 (the outside edge) activities is often a sign
of SSI maturity but only because organizations tend to start with common activities (level 1) and build from there
toward uncommon activities. Other interesting analyses are possible, of course, such as the study at https://ieeexplore.ieee.org/document/8409917 (gated).
By computing these high-water mark values and an observed score for each firm in the study, we can also compare
relative and average maturity for one firm against the others. Observed scores in the current data pool range from a low of 10 to a high of 83, indicating a wide range of SSI maturity levels in the BSIMM12 data.

[Spider chart: average high-water mark per practice on a 0.0–3.0 scale across the 12 BSIMM practices for ALLFIRMS (128).]

FIGURE E. ALLFIRMS SPIDER CHART. This diagram shows the average of the high-water mark collectively reached in
each practice by the 128 BSIMM12 firms. Collectively across these firms, we observed more level 2 and 3 activities in practices
such as Strategy & Metrics, Compliance & Policy, and Standards & Requirements compared to practices such as Attack Models,
Architecture Analysis, Code Review, and Security Testing.

THE BSIMM12 EXPANDED SKELETON
The BSIMM skeleton provides a way to view the model at a glance and is useful when assessing an SSI. We showed
a streamlined version of the skeleton in Part Two. Table G shows a more detailed version, with the number and
percentages of firms (out of 128) performing that activity in their own SSI.

STRATEGY & METRICS (SM)

ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Publish process and evolve as necessary. [SM1.1] 91 71.1%

Educate executives on software security. [SM1.3] 81 63.3%

Implement lifecycle instrumentation and use to define governance. [SM1.4] 118 92.2%

LEVEL 2

Publish data about software security internally and drive change. [SM2.1] 63 49.2%

Verify release conditions with measurements and track exceptions. [SM2.2] 60 46.9%

Create or grow a satellite. [SM2.3] 60 46.9%

Require security sign-off prior to software release. [SM2.6] 62 48.4%

Create evangelism role and perform internal marketing. [SM2.7] 64 50.0%

LEVEL 3

Use a software asset tracking application with portfolio view. [SM3.1] 22 17.2%

SSI efforts are part of external marketing. [SM3.2] 10 7.8%

Identify metrics and use them to drive resourcing. [SM3.3] 21 16.4%

Integrate software-defined lifecycle governance. [SM3.4] 6 4.7%

COMPLIANCE & POLICY (CP)

ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Unify regulatory pressures. [CP1.1] 98 76.6%

Identify PII obligations. [CP1.2] 114 89.1%

Create policy. [CP1.3] 88 68.8%

LEVEL 2

Build PII inventory. [CP2.1] 55 43.0%

Require security sign-off for compliance-related risk. [CP2.2] 49 38.3%

Implement and track controls for compliance. [CP2.3] 67 52.3%

Include software security SLAs in all vendor contracts. [CP2.4] 54 42.2%

Ensure executive awareness of compliance and privacy obligations. [CP2.5] 74 57.8%

LEVEL 3

Create a regulator compliance story. [CP3.1] 24 18.8%

Impose policy on vendors. [CP3.2] 18 14.1%

Drive feedback from software lifecycle data back to policy. [CP3.3] 6 4.7%

TABLE G. BSIMM12 SKELETON. This expanded version of the BSIMM skeleton shows the 12 BSIMM practices and the 122
activities they contain, along with the observation rates as both counts and percentages. Highlighted activities are the most
common per practice.

TRAINING (T)
ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Conduct software security awareness training. [T1.1] 76 59.4%

Deliver on-demand individual training. [T1.7] 53 41.4%

Include security resources in onboarding. [T1.8] 46 35.9%

LEVEL 2

Enhance satellite through training and events. [T2.5] 39 30.5%

Create and use material specific to company history. [T2.8] 27 21.1%

Deliver role-specific advanced curriculum. [T2.9] 35 27.3%

LEVEL 3

Reward progression through curriculum. [T3.1] 6 4.7%

Provide training for vendors and outsourced workers. [T3.2] 23 18.0%

Host software security events. [T3.3] 23 18.0%

Require an annual refresher. [T3.4] 24 18.8%

Establish SSG office hours. [T3.5] 9 7.0%

Identify new satellite members through observation. [T3.6] 4 3.1%

ATTACK MODELS (AM)


ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Create a data classification scheme and inventory. [AM1.2] 77 60.2%

Identify potential attackers. [AM1.3] 41 32.0%

Gather and use attack intelligence. [AM1.5] 61 47.7%

LEVEL 2

Build attack patterns and abuse cases tied to potential attackers. [AM2.1] 14 10.9%

Create technology-specific attack patterns. [AM2.2] 11 8.6%

Maintain and use a top N possible attacks list. [AM2.5] 13 10.2%

Collect and publish attack stories. [AM2.6] 10 7.8%

Build an internal forum to discuss attacks. [AM2.7] 16 12.5%

LEVEL 3

Have a research group that develops new attack methods. [AM3.1] 5 3.9%

Create and use automation to mimic attackers. [AM3.2] 4 3.1%

Monitor automated asset creation. [AM3.3] 6 4.7%

SECURITY FEATURES & DESIGN (SFD)


ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Integrate and deliver security features. [SFD1.1] 102 79.7%

Engage the SSG with architecture teams. [SFD1.2] 83 64.8%

LEVEL 2

Leverage secure-by-design components and services. [SFD2.1] 33 25.8%

Create capability to solve difficult design problems. [SFD2.2] 55 43.0%

LEVEL 3

Form a review board or central committee to approve and maintain secure design patterns. [SFD3.1] 16 12.5%
Require use of approved security features and frameworks. [SFD3.2] 15 11.7%

Find and publish secure design patterns from the organization. [SFD3.3] 5 3.9%

STANDARDS & REQUIREMENTS (SR)


ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Create security standards. [SR1.1] 90 70.3%

Create a security portal. [SR1.2] 88 68.8%

Translate compliance constraints to requirements. [SR1.3] 99 77.3%

LEVEL 2

Create a standards review board. [SR2.2] 64 50.0%

Identify open source. [SR2.4] 74 57.8%

Create SLA boilerplate. [SR2.5] 55 43.0%

LEVEL 3

Control open source risk. [SR3.1] 35 27.3%

Communicate standards to vendors. [SR3.2] 13 10.2%

Use secure coding standards. [SR3.3] 9 7.0%

Create standards for technology stacks. [SR3.4] 20 15.6%

ARCHITECTURE ANALYSIS (AA)


ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Perform security feature review. [AA1.1] 113 88.3%

Perform design review for high-risk applications. [AA1.2] 49 38.3%

Have SSG lead design review efforts. [AA1.3] 37 28.9%

Use a risk methodology to rank applications. [AA1.4] 62 48.4%

LEVEL 2

Define and use AA process. [AA2.1] 29 22.7%

Standardize architectural descriptions. [AA2.2] 28 21.9%

LEVEL 3

Have engineering teams lead AA process. [AA3.1] 16 12.5%

Drive analysis results into standard design patterns. [AA3.2] 2 1.6%

Make the SSG available as an AA resource or mentor. [AA3.3] 11 8.6%

CODE REVIEW (CR)


ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Perform opportunistic code review. [CR1.2] 80 62.5%

Use automated tools. [CR1.4] 102 79.7%

Make code review mandatory for all projects. [CR1.5] 49 38.3%

Use centralized reporting to close the knowledge loop. [CR1.6] 32 25.0%

Assign tool mentors. [CR1.7] 51 39.8%

LEVEL 2

Use automated tools with tailored rules. [CR2.6] 25 19.5%

Use a top N bugs list (real data preferred). [CR2.7] 17 13.3%

LEVEL 3

Build a capability to combine assessment results. [CR3.2] 9 7.0%

Create capability to eradicate bugs. [CR3.3] 4 3.1%

Automate malicious code detection. [CR3.4] 1 0.8%

Enforce coding standards. [CR3.5] 0 0.0%

SECURITY TESTING (ST)


ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Ensure QA performs edge/boundary value condition testing. [ST1.1] 100 78.1%

Drive tests with security requirements and security features. [ST1.3] 87 68.0%

Integrate opaque-box security tools into the QA process. [ST1.4] 50 39.1%

LEVEL 2

Share security results with QA. [ST2.4] 19 14.8%

Include security tests in QA automation. [ST2.5] 21 16.4%

Perform fuzz testing customized to application APIs. [ST2.6] 15 11.7%

LEVEL 3

Drive tests with risk analysis results. [ST3.3] 8 6.3%

Leverage coverage analysis. [ST3.4] 2 1.6%

Begin to build and apply adversarial security tests (abuse cases). [ST3.5] 2 1.6%

Implement event-driven security testing in automation. [ST3.6] 2 1.6%

PENETRATION TESTING (PT)


ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Use external penetration testers to find problems. [PT1.1] 111 86.7%

Feed results to the defect management and mitigation system. [PT1.2] 98 76.6%

Use penetration testing tools internally. [PT1.3] 88 68.8%

LEVEL 2

Penetration testers use all available information. [PT2.2] 33 25.8%

Schedule periodic penetration tests for application coverage. [PT2.3] 34 26.6%

LEVEL 3

Use external penetration testers to perform deep-dive analysis. [PT3.1] 23 18.0%

Customize penetration testing tools. [PT3.2] 12 9.4%



SOFTWARE ENVIRONMENT (SE)
ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Use application input monitoring. [SE1.1] 80 62.5%

Ensure host and network security basics are in place. [SE1.2] 117 91.4%

LEVEL 2

Define secure deployment parameters and configurations. [SE2.2] 48 37.5%

Protect code integrity. [SE2.4] 32 25.0%

Use application containers to support security goals. [SE2.5] 44 34.4%

Ensure cloud security basics. [SE2.6] 59 46.1%


Use orchestration for containers and virtualized environments. [SE2.7] 33 25.8%
LEVEL 3
Use code protection. [SE3.2] 13 10.2%
Use application behavior monitoring and diagnostics. [SE3.3] 9 7.0%
Enhance application inventory with operations bill of materials. [SE3.6] 14 10.9%

CONFIGURATION MANAGEMENT & VULNERABILITY MANAGEMENT (CMVM)


ACTIVITY DESCRIPTION ACTIVITY OBSERVATIONS PARTICIPANTS

LEVEL 1

Create or interface with incident response. [CMVM1.1] 108 84.4%

Identify software defects found in operations monitoring and feed them back to development. [CMVM1.2] 96 75.0%

LEVEL 2

Have emergency response. [CMVM2.1] 92 71.9%

Track software bugs found in operations through the fix process. [CMVM2.2] 93 72.7%

Develop an operations inventory of software delivery value streams. [CMVM2.3] 61 47.7%

LEVEL 3

Fix all occurrences of software bugs found in operations. [CMVM3.1] 4 3.1%

Enhance the SSDL to prevent software bugs found in operations. [CMVM3.2] 11 8.6%

Simulate software crises. [CMVM3.3] 14 10.9%

Operate a bug bounty program. [CMVM3.4] 20 15.6%

Automate verification of operational infrastructure security. [CMVM3.5] 10 7.8%

Publish risk data for deployable artifacts. [CMVM3.6] 0 0.0%

Streamline incoming responsible vulnerability disclosure. [CMVM3.7] 0 0.0%

TABLE G. BSIMM12 SKELETON. This expanded version of the BSIMM skeleton shows the 12 BSIMM practices and the 122
activities they contain, along with the observation rates as both counts and percentages. Highlighted activities are the most
common per practice.



Figure F shows the distribution of scores among the population of 128 participating firms. To create this graph, we
divided the scores into six bins that are then further divided by the round of BSIMM measurement. As you can see,
the scores represent a slightly skewed bell curve. We also plotted the average age of the firms’ SSIs in each bin as the
horizontal line on the graph. In general, firms where more BSIMM activities have been observed have older SSIs and
have performed multiple BSIMM measurements.

[Figure F: bar chart of the number of firms per score bin (0-20, 21-30, 31-40, 41-50, 51-60, 61-122), broken out by assessment round (round 1, round 2, round 3+), with a line showing the average SSG age per bin.]

FIGURE F. BSIMM SCORE DISTRIBUTION. Firm scores most frequently fall into the 31 to 40 range, with an average SSG age
of 4.1 years.
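To make the binning concrete, the short Python sketch below reproduces the Figure F grouping for a handful of made-up scores; the score values are purely illustrative and only the bin edges come from the figure.

from collections import Counter

BINS = [(0, 20), (21, 30), (31, 40), (41, 50), (51, 60), (61, 122)]
scores = [18, 27, 33, 35, 38, 44, 52, 61, 70]  # invented scores, not BSIMM data

def bin_label(score):
    # Return the Figure F-style label for the bin a score falls into.
    for low, high in BINS:
        if low <= score <= high:
            return f"{low}-{high}"
    raise ValueError(f"score out of range: {score}")

distribution = Counter(bin_label(s) for s in scores)
for low, high in BINS:
    label = f"{low}-{high}"
    print(f"{label:>7}: {distribution.get(label, 0)} firm(s)")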

COMPARING VERTICALS
Table H shows the BSIMM scorecards for the nine verticals compared side by side. In the Activity columns, we have
highlighted the most common activity in each practice as observed in the entire BSIMM data pool (128 firms). See
Part Two for discussion.



GOVERNANCE

ACTIVITY   ISV (OF 42)   FINANCIAL (OF 38)   TECH (OF 28)   CLOUD (OF 26)   FINTECH (OF 21)   IOT (OF 18)   HEALTHCARE (OF 14)   INSURANCE (OF 13)   RETAIL (OF 7)

[SM1.1] 29 26 23 20 16 16 11 10 5

[SM1.3] 27 25 18 17 11 11 9 9 4

[SM1.4] 36 38 27 22 20 17 13 13 6

[SM2.1] 18 26 9 17 15 7 7 8 4

[SM2.2] 19 22 15 11 11 8 5 3 2

[SM2.3] 22 15 14 12 10 11 6 7 3

[SM2.6] 18 24 16 11 10 10 6 4 3

[SM2.7] 23 16 18 13 11 11 8 5 4

[SM3.1] 5 9 5 4 6 4 1 1 1

[SM3.2] 5 0 2 2 4 2 1 1 1

[SM3.3] 5 9 3 3 5 2 3 3 0

[SM3.4] 1 2 1 0 2 0 1 1 0

[CP1.1] 32 28 22 17 18 15 14 10 3

[CP1.2] 33 36 22 23 21 16 14 13 7

[CP1.3] 24 30 17 16 18 12 7 9 5

[CP2.1] 15 19 9 12 11 9 7 3 4

[CP2.2] 13 17 15 6 6 11 8 4 2

[CP2.3] 21 21 17 12 12 10 10 5 2

[CP2.4] 20 18 10 11 7 7 6 7 4

[CP2.5] 25 21 16 17 14 10 8 5 2

[CP3.1] 3 11 3 3 6 3 3 3 1

[CP3.2] 4 8 3 1 1 2 3 3 1

[CP3.3] 3 2 2 3 2 1 0 0 0

[T1.1] 23 23 18 17 17 15 6 6 5

[T1.7] 15 20 10 13 10 9 3 6 4

[T1.8] 16 16 10 12 9 6 4 8 2

[T2.5] 15 8 10 8 8 6 2 5 2

[T2.8] 11 2 12 8 3 9 3 2 2

[T2.9] 11 13 9 7 6 7 3 5 4

[T3.1] 2 3 2 2 0 1 0 2 0

[T3.2] 8 8 6 8 6 5 2 3 1

[T3.3] 8 10 6 2 3 5 1 2 0

[T3.4] 5 10 4 3 5 2 3 5 1

[T3.5] 1 4 3 0 2 0 0 1 1

[T3.6] 2 1 1 2 1 0 0 0 0



INTELLIGENCE

ACTIVITY   ISV (OF 42)   FINANCIAL (OF 38)   TECH (OF 28)   CLOUD (OF 26)   FINTECH (OF 21)   IOT (OF 18)   HEALTHCARE (OF 14)   INSURANCE (OF 13)   RETAIL (OF 7)

[AM1.2] 17 34 9 13 14 8 11 10 6

[AM1.3] 7 17 7 4 7 5 5 6 2

[AM1.5] 13 22 12 11 13 9 9 6 3

[AM2.1] 2 6 3 1 3 4 2 3 1

[AM2.2] 3 5 4 2 2 3 1 1 0

[AM2.5] 5 3 6 3 2 5 2 1 1

[AM2.6] 4 1 5 2 3 4 1 0 0

[AM2.7] 7 5 5 4 1 4 2 1 0

[AM3.1] 2 0 1 1 2 2 0 0 1

[AM3.2] 2 0 2 2 0 1 1 1 0

[AM3.3] 2 3 0 2 1 1 0 0 0

[SFD1.1] 30 32 20 21 18 11 12 10 7

[SFD1.2] 32 19 21 19 16 16 11 8 5

[SFD2.1] 14 8 11 8 9 7 2 1 1

[SFD2.2] 22 13 14 14 8 12 4 3 4

[SFD3.1] 1 11 2 0 2 1 2 2 1

[SFD3.2] 6 4 2 6 3 1 0 1 1

[SFD3.3] 2 1 2 0 0 2 1 0 1

[SR1.1] 21 30 19 14 18 14 10 10 6

[SR1.2] 31 25 21 21 14 15 9 7 5

[SR1.3] 30 27 24 17 18 16 13 9 6

[SR2.2] 10 29 12 10 11 6 4 10 4

[SR2.4] 24 26 16 13 14 13 7 9 2

[SR2.5] 19 17 13 10 8 10 7 6 4

[SR3.1] 11 13 6 7 10 4 3 4 1

[SR3.2] 4 3 3 2 1 2 3 3 0

[SR3.3] 2 1 4 2 3 2 0 0 0

[SR3.4] 8 7 4 6 4 4 1 0 1



SSDL TOUCHPOINTS

ACTIVITY   ISV (OF 42)   FINANCIAL (OF 38)   TECH (OF 28)   CLOUD (OF 26)   FINTECH (OF 21)   IOT (OF 18)   HEALTHCARE (OF 14)   INSURANCE (OF 13)   RETAIL (OF 7)

[AA1.1] 35 35 24 23 21 16 12 12 6

[AA1.2] 15 12 18 7 5 12 6 4 1

[AA1.3] 10 10 12 4 4 8 6 4 1

[AA1.4] 11 30 5 8 13 4 10 11 7

[AA2.1] 11 8 14 5 0 8 4 3 0

[AA2.2] 10 8 13 4 0 8 4 4 0

[AA3.1] 6 4 9 3 0 6 1 2 0

[AA3.2] 0 1 0 0 0 0 0 0 1

[AA3.3] 4 2 5 3 0 4 1 1 0

[CR1.2] 27 22 19 14 11 14 9 8 4

[CR1.4] 31 33 19 22 19 14 9 10 6

[CR1.5] 16 11 11 9 12 6 6 5 2

[CR1.6] 8 11 5 7 7 3 4 3 4

[CR1.7] 18 16 12 12 11 6 4 6 4

[CR2.6] 4 9 3 6 9 1 2 2 1

[CR2.7] 3 6 4 5 3 2 1 4 1

[CR3.2] 2 2 2 1 3 1 0 1 1

[CR3.3] 1 1 0 2 1 0 1 1 0

[CR3.4] 0 0 0 0 1 0 0 0 0

[CR3.5] 0 0 0 0 0 0 0 0 0

[ST1.1] 34 31 25 21 18 15 8 10 4

[ST1.3] 31 25 22 17 15 16 8 11 3

[ST1.4] 19 15 15 8 10 11 4 5 3

[ST2.4] 8 2 9 5 5 7 1 1 1

[ST2.5] 8 5 6 5 5 6 2 2 1

[ST2.6] 9 1 10 3 1 7 1 0 0

[ST3.3] 4 0 5 3 0 5 1 1 0

[ST3.4] 0 1 1 0 0 1 1 1 0

[ST3.5] 2 0 1 2 0 1 0 0 0

[ST3.6] 1 1 0 1 1 0 0 0 0



DEPLOYMENT

ACTIVITY   ISV (OF 42)   FINANCIAL (OF 38)   TECH (OF 28)   CLOUD (OF 26)   FINTECH (OF 21)   IOT (OF 18)   HEALTHCARE (OF 14)   INSURANCE (OF 13)   RETAIL (OF 7)

[PT1.1] 37 32 25 19 20 15 13 12 6

[PT1.2] 32 27 21 19 20 13 9 8 7

[PT1.3] 26 29 17 18 16 10 9 9 7

[PT2.2] 15 7 10 12 5 7 2 2 1

[PT2.3] 13 13 5 7 7 2 2 4 2

[PT3.1] 8 5 11 5 5 6 2 1 2

[PT3.2] 4 4 4 3 4 2 0 0 0

[SE1.1] 17 33 12 14 16 10 11 10 4

[SE1.2] 35 37 27 24 20 17 14 12 6

[SE2.2] 14 14 16 8 8 13 2 3 1

[SE2.4] 16 2 18 8 5 13 1 1 1

[SE2.5] 16 11 10 13 9 4 3 3 3

[SE2.6] 19 20 9 16 8 6 6 7 1

[SE2.7] 14 9 6 13 4 1 2 3 1

[SE3.2] 6 1 9 2 2 5 1 1 1

[SE3.3] 4 3 1 2 3 1 2 2 1

[SE3.6] 6 5 4 4 2 5 0 0 0

[CMVM1.1] 33 35 23 20 19 14 11 12 7

[CMVM1.2] 30 28 21 21 18 16 9 9 7

[CMVM2.1] 30 29 19 19 17 14 10 9 7

[CMVM2.2] 31 26 23 18 17 15 10 8 6

[CMVM2.3] 18 26 10 13 9 6 4 5 3

[CMVM3.1] 0 2 1 1 2 0 0 0 0

[CMVM3.2] 5 2 5 4 3 3 1 0 0

[CMVM3.3] 4 3 3 3 5 2 2 1 1

[CMVM3.4] 7 5 3 8 6 2 0 2 1

[CMVM3.5] 2 4 2 4 3 1 0 1 0

[CMVM3.6] 0 0 0 0 0 0 0 0 0

[CMVM3.7] 0 0 0 0 0 0 0 0 0

TABLE H. VERTICAL COMPARISON SCORECARD. This table allows for easy comparisons of observation rates for the nine
verticals tracked in BSIMM12.



MODEL CHANGES OVER TIME
Being a unique, real-world reflection of actual software security practices, the BSIMM naturally changes over time.
While each release of the BSIMM captures the current dataset and provides the most useful guidance, reflection upon
past changes can help clarify the ebb and flow of particular activities. Table I shows the activity moves and adds that
have occurred since the BSIMM’s creation.

ACTIVITY CHANGES OVER TIME

BSIMM12 (122 ACTIVITIES)
• SM1.2 Create evangelism role and perform internal marketing became SM2.7
• T1.5 Deliver role-specific advanced curriculum became T2.9
• ST2.1 Integrate opaque-box security tools into the QA process became ST1.4
• SE3.5 Use orchestration for containers and virtualized environments became SE2.7
• CMVM3.7 Streamline incoming responsible vulnerability disclosure added to the model

BSIMM11 (121 ACTIVITIES)
• T2.6 Include security resources in onboarding became T1.8
• CR2.5 Assign tool mentors became CR1.7
• SE3.4 Use application containers to support security goals became SE2.5
• SE3.7 Ensure cloud security basics became SE2.6
• ST3.6 Implement event-driven security testing in automation added to the model
• CMVM3.6 Publish risk data for deployable artifacts added to the model

BSIMM10 (119 ACTIVITIES)
• T1.6 Create and use material specific to company history became T2.8
• SR2.3 Create standards for technology stacks became SR3.4
• SM3.4 Integrate software-defined lifecycle governance added to the model
• AM3.3 Monitor automated asset creation added to the model
• CMVM3.5 Automate verification of operational infrastructure security added to the model

BSIMM9 (116 ACTIVITIES)
• SM2.5 Identify metrics and use them to drive resourcing became SM3.3
• SR2.6 Use secure coding standards became SR3.3
• SE3.5 Use orchestration for containers and virtualized environments added to the model
• SE3.6 Enhance application inventory with operations bill of materials added to the model
• SE3.7 Ensure cloud security basics added to the model

BSIMM8 (113 ACTIVITIES)
• T2.7 Identify new satellite through training became T3.6
• AA2.3 Make SSG available as AA resource or mentor became AA3.3

BSIMM7 (113 ACTIVITIES)
• AM1.1 Maintain and use a top N possible attacks list became AM2.5
• AM1.4 Collect and publish attack stories became AM2.6
• AM1.6 Build an internal forum to discuss attacks became AM2.7
• CR1.1 Use a top N bugs list became CR2.7
• CR2.2 Enforce coding standards became CR3.5
• SE3.4 Use application containers to support security goals added to model



BSIMM6 (112 ACTIVITIES)
• SM1.6 Require security sign-off prior to software release became SM2.6
• SR1.4 Use secure coding standards became SR2.6
• ST3.1 Include security tests in QA automation became ST2.5
• ST3.2 Perform fuzz testing customized to application APIs became ST2.6

BSIMM-V (112 ACTIVITIES)
• SFD2.3 Find and publish mature design patterns from the organization became SFD3.3
• SR2.1 Communicate standards to vendors became SR3.2
• CR3.1 Use automated tools with tailored rules became CR2.6
• ST2.3 Begin to build and apply adversarial security tests (abuse cases) became ST3.5
• CMVM3.4 Operate a bug bounty program added to model

BSIMM4 (111 ACTIVITIES)
• T2.1 Deliver role-specific advanced curriculum became T1.5
• T2.2 Company history in training became T1.6
• T2.4 Deliver on-demand individual training became T1.7
• T1.2 Include security resources in onboarding became T2.6
• T1.4 Identify new satellite members through training became T2.7
• T1.3 Establish SSG office hours became T3.5
• AM2.4 Build an internal forum to discuss attacks became AM1.6
• CR2.3 Make code review mandatory for all projects became CR1.5
• CR2.4 Use centralized reporting to close the knowledge loop became CR1.6
• ST1.2 Share security results with QA became ST2.4
• SE2.3 Use application behavior monitoring and diagnostics became SE3.3
• CR3.4 Automate malicious code detection added to model
• CMVM3.3 Simulate software crises added to model

BSIMM3 (109 ACTIVITIES)
• SM1.5 Identify metrics and use them to drive resourcing became SM2.5
• SM2.4 Require security sign-off became SM1.6
• AM2.3 Gather and use attack intelligence became AM1.5
• ST2.2 Drive tests with security requirements and security features became ST1.3
• PT2.1 Use pen testing tools internally became PT1.3

BSIMM2 (109 ACTIVITIES)
• T2.3 Require an annual refresher became T3.4
• CR2.1 Use automated tools became CR1.4
• SE2.1 Use code protection became SE3.2
• SE3.1 Use code signing became SE2.4
• CR1.3 removed from the model

BSIMM1 (110 ACTIVITIES)
• Added 110 activities

TABLE I. ACTIVITY CHANGES OVER TIME. This table allows for historical review of how BSIMM activities have been added
or moved over time.



IMPLEMENTING AN SSI FOR THE FIRST TIME
In the “Using the BSIMM to Start or Improve an SSI” section in the main text, we go into detail on how organizations—
whether governance-led, engineering-led, or a hybrid—move from emerging to enabling states of SSI maturity.
Here, we distill this knowledge to a short list of actionable steps for starting a new SSI. Because new BSIMM activities
are added at level 3 (due to their zero observation rate), the “getting started” roadmap in Figure G includes level 2
and level 3 activities that have a high impact for emerging SSIs. These are foundational activities, even if mature
organizations are just getting around to adding them to their maturing and enabling journeys. Because you’re
getting started, you have the opportunity to integrate these activities from day one.
[Figure G: roadmap mapping BSIMM activities to six steps (create a software security group; inventory all software in the SSG's purview; ensure infrastructure security is applied to the software environment; deploy defect discovery against highest priority applications; publish and promote the process; mature), grouping activities by whether they suit organizations that are governance-led (e.g., CP1.1, CP1.2, CP1.3, CP2.1, AA1.4, SR1.1), engineering-led (e.g., SFD1.1, SFD1.2, SM3.4, ST3.6), or both (e.g., SM1.1, SM1.3, SM1.4, SM2.3, T1.1, AA1.1, AM3.3, CR1.4, CMVM1.1, CMVM2.3, CMVM3.4, PT1.1, SE1.2, SE2.2, SE2.6, SR1.2, SR2.4, ST1.4, ST2.5).]

FIGURE G. BSIMM ACTIVITY ROADMAP BY ORGANIZATIONAL APPROACH. This table uses an activity-based view to
show a common path for creating and maturing an SSI.

CREATE A SOFTWARE SECURITY GROUP


The single most important software security activity for all SSIs is to have a dedicated SSG that can get resources and
drive organizational change. An SSG, even if it’s a group of one person coordinating organizational efforts, should be
an SSI’s first step. The SSG must start by understanding what is important to the business and driving toward that
goal, and using awareness training [T 1 .1] to ensure everyone understands their security responsibility.



Engineering-led
STARTING GOAL: Establish a common understanding of the approach to software security. The primary role might be
to set up automated defect discovery, address security questions from developers [SFD1.1], and act as an advisor for
design decisions [SFD1.2].
Governance-led
STARTING GOAL: The starting place for this group might be to identify the risk management, compliance, and
contractual requirements that the organization must adhere to [CP1.1].

INVENTORY ALL SOFTWARE IN THE SSG’S PURVIEW


One of the first activities for any SSG is to create an initial inventory of the software portfolio under its purview
[CMVM2.3]. As a starting point, the inventory should include each application’s important characteristics (e.g.,
programming language, architecture type, open source used [SR2.4]). Particularly useful for monitoring and incident
response activities [CMVM1.1], many organizations will include relevant operational data in the inventory (e.g., where
is the application deployed, owners, emergency contacts).
Engineering-led
These efforts commonly attempt to understand software inventory by extracting it from the tools used to manage
IT assets. By scraping these software and infrastructure configuration management databases (CMDBs) or code
repositories, the team crafts an inventory of local code and open source brick-by-brick rather than top-down. They use the
metadata and tagging that these content databases provide to reflect their software’s architecture as well as their
organization’s structure.
Governance-led
These efforts tend to favor a top-down approach to build the initial inventory, usually starting with a questionnaire to
solicit data from business managers who serve as application owners and then using out-of-band tools to find open
source. Governance-led organizations also tend to focus on understanding where sensitive data reside and flow
(e.g., PII inventory) [CP2.1] and the resulting business risk level associated with the application (e.g., critical, high,
medium, low).
Remember, the software asset inventory is a capability to be built over time; it's not a one-time effort. To stay
accurate and current, the inventory must be regularly monitored and updated. As with all data currency efforts, it’s
important to make sure the data are not overly burdensome to collect and are periodically spot-checked for validity.
Organizations should favor automation for asset discovery and management whenever possible.

ENSURE INFRASTRUCTURE SECURITY IS APPLIED TO THE SOFTWARE ENVIRONMENT


One of the most commonly observed activities today is ensuring that host and network security basics are in place
[SE1.2]. Security engineers might begin by conducting this work manually, then baking these settings and changes
into their software-defined infrastructure scripts [SE2.2, SE2.6] to ensure both consistent use within a development
team and scalable sharing across the organization. Forward-looking organizations that have adopted software and
network orchestration technologies (e.g., Kubernetes, Envoy, Istio) get maximum impact from this activity with
the efforts of even an individual contributor, such as a security-minded DevOps engineer. Though many of the technologies in which security engineers specify hardening and security settings are human-readable, engineering-led groups typically don't take the time to extract and distill a document-based security policy from these codebases, something we see more often in governance-led organizations.
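A minimal sketch of baking such checks into automation follows, assuming Kubernetes Deployment-style manifests on disk and the PyYAML package; the two securityContext settings checked are examples of host and container hardening basics, not a complete policy.

import sys
import yaml  # PyYAML; assumed to be installed

# Example hardening expectations; a real policy would cover much more (SE1.2, SE2.2).
REQUIRED_CONTEXT = {"runAsNonRoot": True, "readOnlyRootFilesystem": True}

def findings_for(doc, source):
    # Walk a Deployment-style manifest and report containers missing the settings above.
    problems = []
    pod_spec = doc.get("spec", {}).get("template", {}).get("spec", {})
    for container in pod_spec.get("containers", []):
        context = container.get("securityContext") or {}
        for key, expected in REQUIRED_CONTEXT.items():
            if context.get(key) != expected:
                problems.append(f"{source}: container '{container.get('name')}' missing {key}={expected}")
    return problems

if __name__ == "__main__":
    all_problems = []
    for path in sys.argv[1:]:
        with open(path) as handle:
            for doc in yaml.safe_load_all(handle):
                if isinstance(doc, dict):
                    all_problems.extend(findings_for(doc, path))
    print("\n".join(all_problems) or "no findings")
    sys.exit(1 if all_problems else 0)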
DEPLOY DEFECT DISCOVERY AGAINST HIGHEST PRIORITY APPLICATIONS
Regardless of business drivers, one of the quickest ways of transitioning unknown risk to managed risk is through
defect discovery. Automated tools, both static and dynamic, can provide fast, regular insight into the portfolio security
posture, but they are not a replacement for humans in high-criticality projects [AA1.1, CMVM3.4]. As such, organizations
with strong SSIs use a combination of manual testing techniques against their most critical assets and automated
testing techniques for coverage.
Both static and dynamic techniques provide unique views into software’s security posture. Static application security
testing can uncover security issues in source code well before the release of an application. Static analysis can look for
issues inside the code the organization develops [CR1.4] and issues inside of the software’s open source components
[SR2.4]. This timely feedback can lower the burden and cost of fixing defects. On the other hand, dynamic application
security tests [ST1.4] can uncover immediately exploitable issues with steps to reproduce. Similarly, timely feedback
from QA groups can ensure that development streams are adhering to security expectations [ST2.5]. These types of
results assist with prioritization and displaying impact to leadership.
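For example, a pipeline stage might gate a build on the combined tool output. The sketch below assumes findings were exported earlier in the pipeline to a findings.json file with a simple id/severity shape; the format is illustrative rather than any particular tool's.

import json
import sys

SEVERITY_ORDER = {"critical": 4, "high": 3, "medium": 2, "low": 1, "info": 0}
FAIL_AT = "high"  # threshold agreed with engineering; tighten over time

def blocking_findings(findings, threshold):
    # Keep only the findings at or above the agreed severity threshold.
    floor = SEVERITY_ORDER[threshold]
    return [f for f in findings if SEVERITY_ORDER.get(f.get("severity", "info"), 0) >= floor]

if __name__ == "__main__":
    with open("findings.json") as handle:  # exported earlier in the pipeline
        findings = json.load(handle)
    blockers = blocking_findings(findings, FAIL_AT)
    for finding in blockers:
        print(f"BLOCKING: {finding.get('id')} ({finding.get('severity')})")
    sys.exit(1 if blockers else 0)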



Manual testing efforts generally start by bringing in third-party assessors [PT1.1] on a regular cadence, either at major milestones or, more commonly, as a periodic out-of-band exercise to assess the most critical projects. Even where an internal penetration testing function exists, a third party brings a unique perspective that will be beneficial.
Every organization must balance budget, time, and friction when developing their testing strategy, but a crawl, walk,
run mindset will help speed adoption and reduce friction.
Engineering-led
These organizations tend to favor empowering pipelines and testers with automation and allowing engineering
leadership or individual engineering teams to define some aspects of mandatory testing and remediation timelines.
Governance-led
These organizations tend to organize applications via risk ranking [AA1.4], then define a testing program that expands across the portfolio, tightening requirements over time (pictured in Figure H).

INITIAL PROGRAM

CRITICAL PRIORITY
• Mandatory SAST & DAST before release
• Mandatory annual penetration test
• Loose remediation timelines

HIGH PRIORITY
• Mandatory SAST & DAST before release
• Loose remediation timelines

MEDIUM PRIORITY
• [Optional] SAST & DAST before release
• Loose remediation timelines

LOW PRIORITY
• Nothing for low priority

TIGHTENED PROGRAM (OVER TIME)

CRITICAL PRIORITY
• Mandatory SAST & DAST before release
• Mandatory penetration test before "major" release
• Targeted remediation timelines

HIGH PRIORITY
• Mandatory SAST & DAST before release
• Mandatory annual penetration test
• Targeted remediation timelines

MEDIUM PRIORITY
• [Optional] SAST before release
• Mandatory DAST before release
• Targeted remediation timelines

LOW PRIORITY
• [Optional] SAST & DAST before release
• Targeted remediation timelines

FIGURE H. A SAMPLE GOVERNANCE-LED ORGANIZATION’S TESTING PROGRAM OVER TIME. Successful program
evolutions start with getting engineering groups accustomed to testing, triaging, and working through the potentially lengthy
backlog of defects, then tightening expectations over time as the governance and engineering groups mature.
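One way to make a Figure H-style program operational is to encode the per-tier expectations as data that pipelines and reporting can share. The Python sketch below follows the initial-program tiers in the figure; the remediation-timeline numbers (fix_days) are invented for illustration.

# Tier names and testing requirements follow Figure H's initial program; fix_days values are invented.
TESTING_POLICY = {
    "critical": {"sast": "mandatory", "dast": "mandatory", "pentest": "annual", "fix_days": 30},
    "high":     {"sast": "mandatory", "dast": "mandatory", "pentest": "none",   "fix_days": 60},
    "medium":   {"sast": "optional",  "dast": "optional",  "pentest": "none",   "fix_days": 90},
    "low":      {"sast": "none",      "dast": "none",      "pentest": "none",   "fix_days": None},
}

def requirements_for(risk_tier):
    # Look up the current expectations for an application's risk ranking (AA1.4).
    return TESTING_POLICY[risk_tier]

if __name__ == "__main__":
    print(requirements_for("critical"))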

Before performing any organization-wide rollout, it is wise to first trial any new process. Use the trial to validate
that the process integrates into developer workflows and pipelines (pulling in data, pushing results, and creating
metadata to be collected), has sufficient technology coverage, and creates results that are easily understandable by
all stakeholders.



The first testing milestone for many organizations is to obtain coverage across their highest priority projects. Subsequent milestones can address covering all projects [CR1.5], continuing to do so periodically [PT2.3], and increasing the depth of tools with customization [CR2.6].
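As a sketch of what such customization can mean in practice, the snippet below implements a tiny tailored rule using Python's ast module to flag calls to a hypothetical banned internal helper. Real tools expose their own rule languages, so this is only an illustration of the idea behind CR2.6.

import ast
import sys

BANNED_CALLS = {"legacy_crypto.encrypt"}  # hypothetical internal API slated for removal

def banned_call_lines(source):
    # Flag every call whose dotted name matches an entry in BANNED_CALLS.
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = ast.unparse(node.func)  # requires Python 3.9+
            if name in BANNED_CALLS:
                hits.append((node.lineno, name))
    return hits

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as handle:
            for lineno, name in banned_call_lines(handle.read()):
                print(f"{path}:{lineno}: banned call to {name}")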

PUBLISH AND PROMOTE THE PROCESS


With a strategy in hand and expectations set with engineering teams, the SSG documents the SSDL [SM1.1] and
begins collecting telemetry [SM1.4]. The SSDL should clearly document the SSI’s goals, roles, responsibilities, and
activities. The most usable SSDLs diagram the process before going into detail and might follow the same style as
used for the organization’s SSDLs. Many organizations will find a wide variety of SSDLs in current use throughout
engineering; in these cases, the SSDL may either need to be abstract enough to account for all processes or be
tailored for various groups. Publishing this process also allows for the SSG to start a hub for software security where
the SSG can disseminate knowledge about the process and about software security as a whole [SR1.2].
Engineering-led
These organizations tend to favor implementing their view of an SSDL inside of pipelines [SM3.4] and scripts [ST3.6], or
by prescribing reusable security blocks that meet expectations.
Governance-led
In a top-down approach, these organizations favor creating policy [CP1.3] and standards [SR1.1] that can be followed
and audited like any other business process.
While executives have likely been engaged already to get the program to this point, this is a good time to ensure that they're kept regularly up to date on software security. Remember, executive teams need to understand
not only how the SSI is performing but also how other firms are solving software security problems and what the
ramifications are for not performing security correctly [SM1.3].
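Telemetry only helps executives if it is rolled up into something they can act on. The sketch below aggregates a few hypothetical defect-closure records into the kind of per-team figures an SSI dashboard might show; both the record format and the values are invented for the example.

from collections import defaultdict
from statistics import mean

# Hypothetical telemetry: one record per defect closed, with invented values.
records = [
    {"team": "payments", "severity": "high", "days_to_fix": 12},
    {"team": "payments", "severity": "low", "days_to_fix": 45},
    {"team": "portal", "severity": "high", "days_to_fix": 30},
]

def summarize(rows):
    # Roll individual defect records up into per-team figures for reporting.
    by_team = defaultdict(list)
    for row in rows:
        by_team[row["team"]].append(row)
    return {
        team: {
            "defects_closed": len(items),
            "avg_days_to_fix": round(mean(item["days_to_fix"] for item in items), 1),
            "high_or_critical": sum(1 for item in items if item["severity"] in ("high", "critical")),
        }
        for team, items in by_team.items()
    }

if __name__ == "__main__":
    print(summarize(records))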

PROGRESS TO THE NEXT STEP IN YOUR JOURNEY


At the mature stage, the SSG scales the SSI through the creation of a champions program [SM2.3], improves the inventory based on lessons learned, automates the basics, does more prevention, and then repeats. As the initiative matures and the business grows, there will be new challenges the SSI will need to address. It will be crucial for the SSG to ensure that feedback loops are in place for the program to measure and mature. Scaling always involves governance (e.g., [SM1.4]), even in engineering-led initiatives.

SUMMARY
Figure I organizes the steps to implement an SSI for the first time with the associated BSIMM activities, the notional
level of effort (people and budget), and suggested timing. The people and budget costs are expressed as a rating from 1 to 3 to show relative level of effort while accounting for differences among organizations. The effort and cost to reach
each of these goals will vary across companies, of course, but the two primary factors that affect the level of effort are
the organization’s portfolio size and culture variance. For example, deploying static analysis against 20 applications
using a common pipeline will likely have a lower level of effort than deploying static analysis against 10 applications
built on a variety of toolchains.



[Figure I: the Figure G roadmap (create a software security group; inventory all software in the SSG's purview; ensure infrastructure security is applied to the software environment; deploy defect discovery against highest priority applications; publish and promote the process; mature) annotated with notional deployment costs, showing relative people, budget, and time levels for governance-led and engineering-led organizations at each step.]

FIGURE I. BSIMM ACTIVITY ROADMAP BY ORGANIZATIONAL APPROACH WITH COST. This roadmap is supplemented
with notional costs so that organizations can plan.



DATA TRENDS OVER TIME
As the BSIMM community evolved, we added a greater number of firms with newer SSIs and began to track new
verticals that have less software security experience. Thus, we expected to see a direct impact on the data.

[Figure J: line chart of the average and median BSIMM participant score by release, BSIMM6 through BSIMM12.]

FIGURE J. AVERAGE BSIMM PARTICIPANT SCORE. Adding firms with less experience decreased the average score from BSIMM6 through BSIMM8, even as remeasurements have shown that individual firm maturity increases over time. However, starting with BSIMM9, the average and median scores began to increase.

One reason for this change in average data pool score highlighted in Figure J appears to be the mix of firms using the
BSIMM as part of their SSI journey.



[Figure K: line chart of the average and median SSG age for firms entering the BSIMM, by release, BSIMM6 through BSIMM12.]

FIGURE K. AVERAGE AND MEDIAN SSG AGE FOR NEW FIRMS ENTERING THE BSIMM. The median age of firms entering BSIMM6 through BSIMM8 declined, as did the average BSIMM score, while outliers in BSIMM7 and BSIMM8 resulted in a high average SSG age. Starting with BSIMM9, the median age of firms entering the BSIMM was higher again, which tracks with the increase in average BSIMM scores.

A second reason appears to be an increase in firms continuing to use the BSIMM to guide their initiatives (see Figure
L). Firms using the BSIMM as an ongoing measurement tool are likely also making sufficient improvements to justify
the ongoing creation of SSI scorecards to facilitate review.



[Figure L: bar chart of the number of reassessments (second or later measurements) per release, BSIMM6 through BSIMM12.]
FIGURE L. NUMBER OF FIRMS THAT RECEIVED THEIR SECOND OR HIGHER ASSESSMENT. The number of reassessments over time highlights the number of firms using the BSIMM as an ongoing measurement tool and tracks with the overall increase in average BSIMM scores.

A third reason appears to be the effect of firms aging out of the data pool (see Figure M).

[Figure M: bar chart of the number of firms aged out of the BSIMM data pool per release, BSIMM6 through BSIMM12.]

FIGURE M. NUMBER OF FIRMS AGED OUT OF THE BSIMM DATA POOL. A total of 113 firms have aged out since BSIMM-V.
Ten of the 113 firms that had once aged out of the BSIMM data pool have subsequently rejoined with a
new assessment.



We also see this assessment score trend in mature verticals such as financial services (see Figure N).
Note that when creating BSIMM11, we recognized the need to realign the financial services vertical. Over the past
several years, financial services and FinTech (financial technology) firms differentiated significantly, and we became
concerned that having both in one vertical bucket could affect our analysis and conclusions. Accordingly, we created
the FinTech bucket and removed the FinTech firms from the financial services bucket. This action created a new
FinTech vertical for analysis and reduced the size (but increased the homogeneity) of the financial services vertical.
To be clear, we did not carry this change backward to previous BSIMM versions, meaning that some BSIMM10 and
older financial services data are not directly comparable to BSIMM11 and newer data.

[Figure N: line chart of the average financial services firm score by release, BSIMM6 through BSIMM12.]

FIGURE N. AVERAGE FINANCIAL SERVICES FIRM SCORES. The average score across the financial services vertical
followed the same pattern as the average score for AllFirms (shown in Figure J). Even in mature verticals, we observe a rise
in the average scores over time.



Given their importance to overall SSI efforts, we also closely monitor satellite trends. A large number of firms with no satellite continue to exist in the community, which causes the median satellite size to be one (63 of 128 firms had no satellite at the time of their current assessment), and 45% of the 20 firms added for BSIMM12 had no satellite at assessment time (see Figure O).

[Figure O: bar chart comparing average SSG size, median SSG size, average SSG age, and average score for firms with no satellite (63 of 128) versus firms with a satellite (65 of 128).]

FIGURE O. STATISTICS FOR FIRMS WITH AND WITHOUT A SATELLITE OUT OF 128 BSIMM12 PARTICIPANTS. The
average SSG size for firms without a satellite was impacted by a few significant outliers. These data appear to validate the
notion that more people, both centralized and distributed into engineering teams, can help SSIs achieve higher scores. For the
65 BSIMM12 firms with a satellite at last assessment time, the average satellite size was 99 with a median of 30.



THE BSIMM ONLINE COMMUNITY
The BSIMM Online Community is a unique, members-only forum that helps you address software security
challenges in today’s complex business environments.
Firms that have completed a BSIMM assessment have access to the members-only BSIMM community website.

As a member you:
• Receive regular blog and discussion posts that show best practices, tips, and case studies.
• Bounce ideas and questions off of the 700-member community.
• Attend exclusive conferences.

From content authored by industry leaders to hands-on interactions with fellow BSIMM members,
it is a powerful resource for collaborative problem solving, thought leadership, and access to valuable
resources not available anywhere else.

Find out how to unlock access to an engaged BSIMM member community, including conferences, newsletters, and original content.

www.bsimm.com

