BSIMM12 Foundations Report Overview
2021 FOUNDATIONS REPORT
BSIMM12 FOUNDATIONS REPORT AUTHORS
SAMMY MIGUES Principal Scientist at Synopsys
ELI ERLIKHMAN Managing Principal at Synopsys
JACOB EWERS Principal Consultant at Synopsys
KEVIN NASSERY Senior Principal Consultant at Synopsys
ACKNOWLEDGEMENTS
Our thanks to the 128 executives from the SSIs around the world that we studied to create BSIMM12,
including those who chose to remain anonymous.
BSIMM12 TABLE OF CONTENTS
• BSIMM Terminology
• More on Builders and Testers
INTERPRETING BSIMM MEASUREMENTS

PART TWO: USING THE BSIMM
USING THE BSIMM
• SSI Phases
• Traditional SSI Approaches
• The New Wave in Engineering Culture
• Convergence as a Goal
• A Tale of Two Journeys: Governance vs. Engineering
   The Governance-Led Journey
   Governance-Led Checklist for Getting Started
   Maturing Governance-Led SSIs
   Enabling Governance-Led SSIs
   The Engineering-Led Journey
   Engineering-Led Checklist for Getting Started
   Prioritizing In-Scope Software
   Maturing Engineering-Led Efforts
   Engineering-Led Heuristics
   Enabling Engineering-Led Efforts

APPENDIX
BUILDING A MODEL FOR SOFTWARE SECURITY
THE BSIMM AS A LONGITUDINAL STUDY
• Changes to Longitudinal Scorecard
CHARTS, GRAPHS, AND SCORECARDS
• The BSIMM12 Expanded Skeleton
• Comparing Verticals
• Vertical Comparison Scorecard
MODEL CHANGES OVER TIME
IMPLEMENTING AN SSI FOR THE FIRST TIME
• Create a Software Security Group
• Inventory All Software in the SSG’s Purview
• Ensure Infrastructure Security Is Applied to the Software Environment
• Deploy Defect Discovery Against Highest Priority Applications
• Publish and Promote the Process
• Progress to the Next Step in Your Journey
• Summary
DATA TRENDS OVER TIME
We built the first version of the BSIMM over a decade ago (late 2008) as follows:
• We relied on our own knowledge of software security practices to create the software security
framework (SSF).
• We conducted a series of in-person interviews with nine executives in charge of SSIs. From these interviews,
we identified a set of common activities that we organized according to the SSF.
• We then created scorecards for each of the nine initiatives that showed which activities the initiatives carry
out. To validate our work, we asked each participating firm to review the framework, the practices, and the
scorecard we created for their initiative.
Today, we continue to evolve the model by looking for new activities as participants are added and as current
participants are remeasured. We also adjust the model according to observation rates for each of the activities.
THE MODEL
The BSIMM is a data-driven model that evolves over time. Over the years, we have added, deleted, and adjusted the
levels of various activities based on the data observed throughout the project’s evolution. To preserve backward
compatibility, we make all changes by adding new activity labels to the model, even when an activity has simply
changed levels (e.g., by adding a new CR#.# label for both new and moved activities in the Code Review practice).
When considering whether to add a new activity, we analyze whether the effort we’re observing is truly new to the
model or simply a variation on an existing activity. When considering whether to move an activity between levels, we
use the results of an intralevel standard deviation analysis and the trend in observation counts.
Whenever possible, we use an in-person interview technique to conduct BSIMM assessments; we have done this with a total of 231
firms so far. In addition, we’ve conducted assessments for 10 organizations that have rejoined the community after
aging out. In 40 cases, we assessed both the software security group (SSG) and one or more business units as part of
creating the corporate SSI view.
For most organizations, we create a single aggregated scorecard, whereas in others, we create individual scorecards
for the SSG and each business unit assessed. However, each firm is represented by only one set of data in the model
published here. (“Table A. BSIMM Numbers Over Time” in the appendix shows changes in the data pool over time.)
As a descriptive model, the only goal of the BSIMM is to observe and report. We like to say we visited a neighborhood
to see what was happening and observed that “there are robot vacuum cleaners in X of the Y houses we visited.” Note
that the BSIMM does not say, “all houses must have robot vacuum cleaners,” “robots are the only acceptable kind
of vacuum cleaners,” “vacuum cleaners must be used every day,” or any other value judgements. We offer simple
observations, simply reported.
Of course, during our assessment efforts across hundreds of organizations, we also make qualitative observations
about how SSIs are evolving and report many of those as key takeaways, themes, and other topical discussions.
Our “just the facts” approach is hardly novel in science and engineering, but in the realm of software security, it has
not previously been applied at this scale. Other work around SSI modeling has either described the experience of a
single organization or offered prescriptive guidance based on a combination of personal experience and opinion.
DOMAINS
GOVERNANCE: Practices that help organize, manage, and measure a software security initiative. Staff development is also a central governance practice.
INTELLIGENCE: Practices that result in collections of corporate knowledge used in carrying out software security activities throughout the organization. Collections include both proactive security guidance and organizational threat modeling.
SSDL TOUCHPOINTS: Practices associated with analysis and assurance of particular software development artifacts and processes. All software security methodologies include these practices.
DEPLOYMENT: Practices that interface with traditional network security and software maintenance organizations. Software configuration, maintenance, and other environment issues have direct impact on software security.
PRACTICES
GOVERNANCE: 1. Strategy & Metrics (SM), 2. Compliance & Policy (CP), 3. Training (T)
INTELLIGENCE: 4. Attack Models (AM), 5. Security Features & Design (SFD), 6. Standards & Requirements (SR)
SSDL TOUCHPOINTS: 7. Architecture Analysis (AA), 8. Code Review (CR), 9. Security Testing (ST)
DEPLOYMENT: 10. Penetration Testing (PT), 11. Software Environment (SE), 12. Configuration Management & Vulnerability Management (CMVM)
FIGURE 1. THE SOFTWARE SECURITY FRAMEWORK. This figure shows how the 12 practices align with the four high-level domains.
GOVERNANCE
STRATEGY & METRICS (SM)
Level 1:
• [SM1.1] Publish process and evolve as necessary.
• [SM1.3] Educate executives on software security.
• [SM1.4] Implement lifecycle instrumentation and use to define governance.
Level 2:
• [SM2.1] Publish data about software security internally and drive change.
• [SM2.2] Verify release conditions with measurements and track exceptions.
• [SM2.3] Create or grow a satellite.
• [SM2.6] Require security sign-off prior to software release.
• [SM2.7] Create evangelism role and perform internal marketing.
Level 3:
• [SM3.1] Use an internal tracking application with portfolio view.
• [SM3.2] SSI efforts are part of external marketing.
• [SM3.3] Identify metrics and use them to drive resourcing.
• [SM3.4] Integrate software-defined lifecycle governance.

COMPLIANCE & POLICY (CP)
Level 1:
• [CP1.1] Unify regulatory pressures.
• [CP1.2] Identify PII obligations.
• [CP1.3] Create policy.
Level 2:
• [CP2.1] Build PII inventory.
• [CP2.2] Require security sign-off for compliance-related risk.
• [CP2.3] Implement and track controls for compliance.
• [CP2.4] Include software security SLAs in all vendor contracts.
• [CP2.5] Ensure executive awareness of compliance and privacy obligations.
Level 3:
• [CP3.1] Create a regulator compliance story.
• [CP3.2] Impose policy on vendors.
• [CP3.3] Drive feedback from software lifecycle data back to policy.

TRAINING (T)
Level 1:
• [T1.1] Conduct software security awareness training.
• [T1.7] Deliver on-demand individual training.
• [T1.8] Include security resources in onboarding.
Level 2:
• [T2.5] Enhance satellite through training and events.
• [T2.8] Create and use material specific to company history.
• [T2.9] Deliver role-specific advanced curriculum.
Level 3:
• [T3.1] Reward progression through curriculum.
• [T3.2] Provide training for vendors and outsourced workers.
• [T3.3] Host software security events.
• [T3.4] Require an annual refresher.
• [T3.5] Establish SSG office hours.
• [T3.6] Identify new satellite members through observation.

FIGURE 2. THE BSIMM SKELETON. Within the SSF, the 122 activities are organized across different levels.
INTELLIGENCE
ATTACK MODELS (AM)
Level 1:
• [AM1.2] Create a data classification scheme and inventory.
• [AM1.3] Identify potential attackers.
• [AM1.5] Gather and use attack intelligence.
Level 2:
• [AM2.1] Build attack patterns and abuse cases tied to potential attackers.
• [AM2.2] Create technology-specific attack patterns.
• [AM2.5] Maintain and use a top N possible attacks list.
• [AM2.6] Collect and publish attack stories.
• [AM2.7] Build an internal forum to discuss attacks.
Level 3:
• [AM3.1] Have a research group that develops new attack methods.
• [AM3.2] Create and use automation to mimic attackers.
• [AM3.3] Monitor automated asset creation.

SECURITY FEATURES & DESIGN (SFD)
Level 1:
• [SFD1.1] Integrate and deliver security features.
• [SFD1.2] Engage the SSG with architecture teams.
Level 2:
• [SFD2.1] Leverage secure-by-design components and services.
• [SFD2.2] Create capability to solve difficult design problems.
Level 3:
• [SFD3.1] Form a review board or central committee to approve and maintain secure design patterns.
• [SFD3.2] Require use of approved security features and frameworks.
• [SFD3.3] Find and publish secure design patterns from the organization.

STANDARDS & REQUIREMENTS (SR)
Level 1:
• [SR1.1] Create security standards.
• [SR1.2] Create a security portal.
• [SR1.3] Translate compliance constraints to requirements.
Level 2:
• [SR2.2] Create a standards review board.
• [SR2.4] Identify open source.
• [SR2.5] Create SLA boilerplate.
Level 3:
• [SR3.1] Control open source risk.
• [SR3.2] Communicate standards to vendors.
• [SR3.3] Use secure coding standards.
• [SR3.4] Create standards for technology stacks.

FIGURE 2. THE BSIMM SKELETON. Within the SSF, the 122 activities are organized across different levels.
SSDL TOUCHPOINTS
ARCHITECTURE ANALYSIS (AA)
Level 1:
• [AA1.1] Perform security feature review.
• [AA1.2] Perform design review for high-risk applications.
• [AA1.3] Have SSG lead design review efforts.
• [AA1.4] Use a risk methodology to rank applications.
Level 2:
• [AA2.1] Define and use AA process.
• [AA2.2] Standardize architectural descriptions.
Level 3:
• [AA3.1] Have engineering teams lead AA process.
• [AA3.2] Drive analysis results into standard design patterns.
• [AA3.3] Make the SSG available as an AA resource or mentor.

CODE REVIEW (CR)
Level 1:
• [CR1.2] Perform opportunistic code review.
• [CR1.4] Use automated tools.
• [CR1.5] Make code review mandatory for all projects.
• [CR1.6] Use centralized reporting to close the knowledge loop.
• [CR1.7] Assign tool mentors.
Level 2:
• [CR2.6] Use automated tools with tailored rules.
• [CR2.7] Use a top N bugs list (real data preferred).
Level 3:
• [CR3.2] Build a capability to combine assessment results.
• [CR3.3] Create capability to eradicate bugs.
• [CR3.4] Automate malicious code detection.
• [CR3.5] Enforce coding standards.

SECURITY TESTING (ST)
Level 1:
• [ST1.1] Ensure QA performs edge/boundary value condition testing.
• [ST1.3] Drive tests with security requirements and security features.
• [ST1.4] Integrate opaque-box security tools into the QA process.
Level 2:
• [ST2.4] Share security results with QA.
• [ST2.5] Include security tests in QA automation.
• [ST2.6] Perform fuzz testing customized to application APIs.
Level 3:
• [ST3.3] Drive tests with risk analysis results.
• [ST3.4] Leverage coverage analysis.
• [ST3.5] Begin to build and apply adversarial security tests (abuse cases).
• [ST3.6] Implement event-driven security testing in automation.

FIGURE 2. THE BSIMM SKELETON. Within the SSF, the 122 activities are organized across different levels.
DEPLOYMENT
PENETRATION TESTING (PT)
Level 1:
• [PT1.1] Use external penetration testers to find problems.
• [PT1.2] Feed results to the defect management and mitigation system.
• [PT1.3] Use penetration testing tools internally.
Level 2:
• [PT2.2] Penetration testers use all available information.
• [PT2.3] Schedule periodic penetration tests for application coverage.
Level 3:
• [PT3.1] Use external penetration testers to perform deep-dive analysis.
• [PT3.2] Customize penetration testing tools.

SOFTWARE ENVIRONMENT (SE)
Level 1:
• [SE1.1] Use application input monitoring.
• [SE1.2] Ensure host and network security basics are in place.
Level 2:
• [SE2.2] Define secure deployment parameters and configurations.
• [SE2.4] Protect code integrity.
• [SE2.5] Use application containers to support security goals.
• [SE2.6] Ensure cloud security basics.
• [SE2.7] Use orchestration for containers and virtualized environments.
Level 3:
• [SE3.2] Use code protection.
• [SE3.3] Use application behavior monitoring and diagnostics.
• [SE3.6] Enhance application inventory with operations bill of materials.

CONFIGURATION MANAGEMENT & VULNERABILITY MANAGEMENT (CMVM)
Level 1:
• [CMVM1.1] Create or interface with incident response.
• [CMVM1.2] Identify software defects found in operations monitoring and feed them back to development.
Level 2:
• [CMVM2.1] Have emergency response.
• [CMVM2.2] Track software bugs found in operations through the fix process.
• [CMVM2.3] Develop an operations inventory of software delivery value streams.
Level 3:
• [CMVM3.1] Fix all occurrences of software bugs found in operations.
• [CMVM3.2] Enhance the SSDL to prevent software bugs found in operations.
• [CMVM3.3] Simulate software crises.
• [CMVM3.4] Operate a bug bounty program.
• [CMVM3.5] Automate verification of operational infrastructure security.
• [CMVM3.6] Publish risk data for deployable artifacts.
• [CMVM3.7] Streamline incoming responsible vulnerability disclosure.

FIGURE 2. THE BSIMM SKELETON. Within the SSF, the 122 activities are organized across different levels.
We also carefully considered but did not adjust [CMVM2.2 Track software bugs found in operations through the fix
process] and [SR2.4 Identify open source] at this time; we will do so if the observation rates continue to increase.
As concrete examples of how the BSIMM functions as an observational model, consider the activities that are now
SM3.3 and SR3.3, which both started as level 1 activities. The BSIMM1 activity [SM1.5 Identify metrics and use them to
drive resourcing] became SM2.5 in BSIMM3 and is now SM3.3 due to its observation rate remaining fairly static while
other activities in the practice became observed much more frequently. Similarly, the BSIMM1 activity [SR1.4 Use
coding standards] became SR2.6 in BSIMM6 and is now SR3.3 as its observation rate has decreased.
To date, no activity has migrated from level 3 to level 1, but we see significant observation increases in recently
added cloud- and DevOps-related activities, with [SE2.6 Ensure cloud security basics] being a probable candidate
after having quickly migrated from level 3 to level 2. See Table 1 for the observation growth in activities added
since BSIMM7.
SE3.4 (now SE2.5): 0, 4, 11, 14, 31, 44
SE3.5 (now SE2.7): 0, 5, 22, 33
SE3.6: 0, 3, 12, 14
SE3.7 (now SE2.6): 0, 9, 36, 59
SM3.4: 0, 1, 6
AM3.3: 0, 4, 6
CMVM3.5: 0, 8, 10
ST3.6: 0, 2
CMVM3.6: 0, 0
CMVM3.7: 0
TABLE 1. NEW ACTIVITIES. Observation counts are shown per BSIMM version, from the version in which each activity was added through BSIMM12. Some of the most recently added activities have seen exceptional growth in observation counts, perhaps demonstrating their widespread utility.
FIGURE 3. NUMBER OF OBSERVATIONS FOR [AA3.2] AND [CR3.5] OVER TIME. [AA3.2 Drive analysis results into
standard design patterns] had zero observations during BSIMM7 and BSIMM8, while [CR3.5 Enforce coding standards]
decreased to zero observations over the last five BSIMM iterations. There are another two activities with zero observations in
BSIMM12: [CMVM3.6 Publish risk data for deployable artifacts], which was added in BSIMM11, and [CMVM3.7 Streamline incoming
responsible vulnerability disclosure], which was just added in BSIMM12.
(Chart: remeasurement percentages shown by region: North America, United Kingdom/European Union, and Asia/Pacific.)
FIGURE 4. ONGOING USE OF THE BSIMM IN DRIVING ORGANIZATIONAL MATURITY. Organizations are continuing to
do remeasurements to show that their efforts are achieving the desired results.
NEAREST EXECUTIVE TO SSG (chart data): CISO 67, TO 19, CTO 18.
FIGURE 5. NEAREST EXECUTIVE TO SSG. Although many SSGs seem to be gravitating toward having a CISO as their nearest
executive, we see a variety of executives overseeing software security efforts.
Of course, not all people with the same title perform, prioritize, enforce, or otherwise provide resources for the same
efforts the same way across various organizations.
The significant number of SSGs reporting through a technology organization, in addition to those reporting through
the CTO, remains relatively flat. A future increase here might reflect a growth of engineering-led initiatives chartered
with “building security in” to the software delivery process, rather than existing within a compliance-centric mandate.
Organizational alignment for software security is evolving rapidly, and there are constant natural reorganizations over
time, so we look forward to next year’s data.
In BSIMM-V, we saw CISOs as the nearest executive in 21 of 67 firms, which grew in BSIMM6 to 31 of 78, and again for
BSIMM7 with 52 of 95. Since then, the percentage has remained relatively flat, as shown in Figure 6.
FIGURE 6. PERCENTAGE OF SSGS WITH THE CISO AS THEIR NEAREST EXECUTIVE. Assuming new CISOs generally
receive responsibilities for SSIs, these data suggest that CISO role creation is also flattening out.
Some SSGs are highly distributed across a firm whereas others are centralized. Even within the most distributed
organizations, we find that software security activities are coordinated by an SSG. This is true even if that SSG is staffed
by a single leader with a title such as Security Program Manager or Product Security Manager, or execution of specific
tasks is delegated to security capability owners (e.g., penetration testing teams, security testing teams, software
security architects).
As more firms emphasize software delivery speed and agility, whether under the cultural aegis of DevOps or not,
we’re increasingly seeing SSG structures manifest organically within software teams themselves. Within these teams,
the individuals focused on security conduct activities along the critical path to delivering value to customers. Whether
staff are borrowed from corporate security or employed by the engineering team directly, we see individuals taking
on roles such as product security engineer or security architect, or possessing functional titles such as Site Reliability
Engineer, DevOps Engineer, or similar.
SSG Size for 128 BSIMM12 Firms: average 22.2, largest 892, smallest 1, median 7.0.
TABLE 2. THE SOFTWARE SECURITY GROUP. We calculated the ratio of full-time SSG members to developers by averaging
the ratios of each participating firm. Looking at average SSG size, it seems that while SSGs can scale with development size in
smaller organizations, the ability to scale quickly drops off in larger organizations.
SATELLITE
In addition to the SSG, many SSIs have identified individuals (often developers, testers, architects, and DevOps
engineers) who are a driving force in improving software security but are not directly employed in the SSG. When
these individuals carry out software security activities, we collectively refer to them as the satellite. Many organizations
refer to this group as their software security champions.
Satellite members are sometimes chosen for software portfolio coverage (with one or two members in each
engineering group) but are sometimes chosen for other reasons, such as technology stack coverage or geographical
reach. They’re also sometimes more focused on specific issues, such as cloud migration and IoT architecture. We’re
even beginning to see some organizations use satellite members to bootstrap the “Sec” functions they require for
transforming a given engineering team from DevOps to DevSecOps.
One of the most critical roles that a satellite can play is to act as a sounding board for the feasibility and practicality
of proposed lifecycle governance changes (e.g., new gates, tools, guardrails) or the expansion of software security
activities (e.g., broader coverage, additional rules, new automation). Understanding how SSI governance might affect
project timelines and budgets helps the SSG proactively identify potential frictions and minimize them.
In many organizations, satellite members are likely to self-select into the group to bring their particular expertise to a
broader audience. These individuals often have (usually informal) titles such as CloudSec, OpsSec, ContainerSec, and
so on, with a role that actively contributes security solutions into engineering processes. These solutions are often
infrastructure- and governance-as-code in the form of scripts, sensors, telemetry, and other friction-reducing efforts.
In any case, successful satellite groups get together regularly to compare notes, learn new technologies, and expand
stakeholder understanding of the organization’s software security challenges. Similar to a satellite—and mirroring the
community and culture of open source software—we’re seeing an increase in motivated individuals in engineering-
led organizations sharing digital work products, such as sensors, code, scripts, tools, and security features, rather than,
for example, getting together to discuss enacting a new policy. Specifically, these proactive engineers are working
bottom-up and delivering software security features and awareness through implementation regardless of whether
guidance is coming top-down from a traditional SSG.
To achieve scale and coverage, identifying and fostering a strong satellite is important to the success of many SSIs
(but not all of them). Some BSIMM activities target the satellite explicitly, as shown in Figure 7.
FIGURE 7. PERCENTAGE OF FIRMS THAT HAVE A SATELLITE ORGANIZED BY BSIMM SCORE. Presence of the satellite
and average score appear to be correlated, but we don’t have enough data to say which is the cause and which is the effect. Of
the 10 BSIMM12 firms with the lowest score, only one has a satellite.
Sixty-nine percent of the 52 BSIMM12 firms that have been assessed more than once have a satellite, while 62% of the
firms on their first assessment do not. Many firms that are new to software security take some time to identify and
develop a satellite.
These data suggest that as an SSI matures, its activities become distributed and institutionalized into the
organizational structure, and perhaps even into engineering automation as well, requiring an expanded satellite to
provide expertise and be the local voice of the SSG. Among our population of 128 firms, initiatives tend to evolve from
centralized and specialized to decentralized and distributed (with an SSG at the core orchestrating things).
EVERYBODY ELSE
SSIs are truly cross-departmental efforts that involve a variety of stakeholders:
• Builders, including developers, architects, and their managers, must practice security engineering, taking at least some responsibility both for defining “secure enough” and for ensuring that what’s delivered achieves the desired posture. Increasingly, an SSI reflects collaboration between the SSG and these engineering-led teams coordinating to carry out the activities described in the BSIMM.
• Testers typically conduct functional and feature testing, but some move on to include security testing in their test
plans. More recently, some testers are beginning to anticipate how software architectures and infrastructures
can be attacked and are working to find an appropriate balance between implementing automation and manual
testing to ensure adequate security testing coverage. While the term “testers” usually refers to quality assurance
(QA) teams, note that some development teams create many of the functional tests (and perhaps some of the
security tests) that are applied to software.
• Operations teams must continue to design, defend, and maintain resilient environments. As you will see in the
Deployment domain of the SSF, software security doesn’t end when software is “shipped.” In accelerating trends,
development and operations are collapsing into one or more DevOps teams, and the business functionality
delivered is becoming very dynamic in the operational environment. This means an increasing amount of their
security effort, including infrastructure controls and security configuration, is becoming software-defined.
• Administrators must understand the distributed nature of modern systems, create and maintain secure builds,
and begin to practice the principle of least privilege, especially when it comes to the applications they host or
attach to as services in the cloud.
BSIMM TERMINOLOGY
Nomenclature has always been a problem in computer security, and software security is no
exception. Several terms used in the BSIMM have particular meaning for us. The following list
highlights some of the most important terms used throughout this document:
• ACTIVITY. Actions or efforts carried out or facilitated by the SSG as part of a practice. Activities
are divided into three levels in the BSIMM based on observation rates.
• CAPABILITY. A set of BSIMM activities spanning one or more practices working together to
serve a cohesive security function.
• CHAMPION. Interested and engaged developers, cloud security engineers, deployment engineers,
architects, software managers, testers, and people in similar roles who have a natural affinity for
software security and contribute to the security posture of the organization and its software.
• DOMAIN. One of the four categories our framework is divided into, i.e., Governance, Intelligence,
SSDL Touchpoints, and Deployment.
• PRACTICE. BSIMM activities are organized into 12 categories or practices. Each of the four
domains in the SSF has three practices.
• SATELLITE. A group of individuals, often called champions, that is organized and leveraged by an SSG.
• SECURE SDL (SSDL). Any software lifecycle with integrated software security release conditions
and activities.
• SOFTWARE SECURITY FRAMEWORK (SSF). The basic structure underlying the BSIMM,
comprising 12 practices divided into four domains.
• SOFTWARE SECURITY GROUP (SSG). The internal group charged with carrying out and
facilitating software security. As SSIs evolve, the SSG might be entirely a corporate team, entirely
an engineering team, or an appropriate hybrid. The team’s name might also have an appropriate
organizational focus, such as application security group or product security group. According to
our observations, the first step of an SSI is to form an SSG. (Note that in BSIMM11, we expanded
the definition of the SSG, a fundamental term in the BSIMM world, from implying that the
group is always centralized in corporate to specifically acknowledging that the group may be a
federated collection of people in corporate, engineering, and perhaps elsewhere. When reading
this document and especially when adapting the activities for use in a given organization, use
this expanded definition.)
Testers
• Whether through DAST tools in QA, access control of new cloud native platforms, test cases driven by a framework such as Cucumber or Selenium, or more strategic testing of failure through frameworks like ChaosMonkey, some testing regimes are beginning to incorporate nontrivial security test cases. Facilities provided by cloud service providers actively encourage consideration of failure and abuse across the full stack of deployment, such as a microservice component, a data center, or even an entire region going dark.
• Similarly, QA practices will have to consider how systems are configured and deployed by, for
example, testing configurations for virtualization and cloud components. In many organizations
today, software is built in anticipation of failure, and the associated test cases go directly into
regression suites run by QA groups or run directly through automation. Increasingly, testers
participate in feedback loops with operations teams to understand runtime failures and
continuously improve test coverage.
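As a minimal sketch of what such a security test case might look like when it lives in a QA regression suite (the service URL, endpoints, and expected responses below are hypothetical illustrations, not drawn from the BSIMM data), a team could assert that an API rejects unauthenticated access and abusive input alongside its functional tests:

```python
# Hypothetical security regression tests run with pytest alongside functional QA tests.
# The base URL and endpoints are illustrative placeholders.
import requests

BASE_URL = "https://staging.example.internal/api"

def test_rejects_unauthenticated_access():
    # An unauthenticated request to a protected resource should not succeed.
    response = requests.get(f"{BASE_URL}/accounts/12345", timeout=10)
    assert response.status_code in (401, 403)

def test_rejects_oversized_input():
    # Abuse-style input (edge/boundary testing) should be rejected, not processed.
    oversized_name = "A" * 1_000_000
    response = requests.post(
        f"{BASE_URL}/accounts",
        json={"name": oversized_name},
        timeout=10,
    )
    assert response.status_code == 400
```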
Scorecard legend:
• ACTIVITY: the 122 BSIMM12 activities, shown in 4 domains and 12 practices.
• BSIMM12 FIRMS: count of firms (out of 128) observed performing each activity.
• The most common activity within a practice.
• Most common activity in practice was observed in this assessment.
• Most common activity in practice was not observed.
• A practice where the firm’s high-water mark score is below the BSIMM12 average.
TABLE 3. BSIMM12 EXAMPLEFIRM SCORECARD. A scorecard is helpful for understanding efforts currently underway. It also
helps visualize the activities observed by practice and by level to serve as a guide on where to focus next.
FIGURE 8. ALLFIRMS VS. EXAMPLEFIRM SPIDER CHART. Charting high-water mark values provides a low-resolution view
of maturity that can be useful for comparisons between firms, between business units, and within the same firm over time.
By identifying activities from each practice that are useful for you, and by ensuring proper balance with respect to
domains, you can create a strategic plan for your SSI moving forward. Note that most SSIs are multiyear efforts with
real budget, mandate, and ownership behind them. In addition, while all initiatives look different and are tailored to
fit a particular organization, all initiatives share common core activities (see “Table 4. Most Common Activities Per
Practice” in Part Three).
SSI PHASES
No matter an organization’s culture, all firms strive to reach similar waypoints on their software security journey. Over
time, we find that SSIs typically progress through three states:
• EMERGING. An organization is tasked with booting a new SSI from scratch or formalizing nascent or ad
hoc security activities into a holistic strategy. An emerging SSI has defined its initial strategy, implemented
foundational activities (e.g., those observed most frequently in each practice), acquired some resources,
and might even have a roadmap for the next 12 to 24 months of its evolution. SSI leaders working on a program’s
foundations are often resource-constrained on both people and budget, so they might create a small SSG that
uses compliance requirements or other executive mandates as the initial drivers to continue adding activities.
Managing frictions with key stakeholders that are resistant to adopting even the most basic process discipline
requires strong executive support.
• MATURING. An organization with an existing or emerging software security approach is connected to executive
expectations for managing software security risk and progressing along a roadmap to scale security capabilities.
A maturing SSI can be adding and improving activities and capabilities across a wide range of qualitative
dimensions. Most commonly, they are making changes to:
o Reduce friction across business and development stakeholders
o Protect people’s productivity gains through automation investments
o Expand the breadth and depth of coverage of key activities such as defect discovery efforts
o Refactor existing efforts to happen earlier or operate more expediently within the development lifecycle
o Reallocate efforts in line with opportunity cost to maximize positive results across the array of capabilities
o Realize greater impact of defect discovery through escape analysis with an emphasis on finding systematic
solutions to systemic problems
o Examine the net impact on resiliency of real-world attacks
In our experience, strong programs make consistent, incremental improvements in the development lifecycle and
key security integrations based on the real-world data they see. Even an exhaustive collection of activities with
strong governance will fail if the program does not embrace a core tenet of ongoing improvement.
• ENABLING. Organizations that have consistently matured and made investments to overcome growing pains
often reach the point in which the goals of digital transformation efforts, such as adoption of new lifecycles, are
harmonized with the evolutionary needs of the SSI. For instance, reducing the time to remediate bugs found
in operations is often supported by the shorter release cycles enabled by DevOps transformations. Similarly,
standardizing technology stacks and developing a robust library of reusable security features expedites both
development (by providing reusable building blocks) and security activities (by reducing the footprint of the
codebase and avoiding duplicated efforts). Organizations at the enabling level are usually fanatical about
automation and about protecting their most critical resources—their people—so they have time to tackle
security innovation.
It’s compelling to imagine that organizations could self-assess and determine that by doing X number of activities,
they qualify as emerging, maturing, or enabling. However, experience shows that SSIs can expedite reaching a
“maturing” stage by focusing on the activities that are right for them (e.g., to meet external contractual or compliance
requirements) without regard for the total activity count. This is especially true when considering software portfolio
size and the relative complexity of scaling and maturing some activities across 1, 10, 100, and 1,000 applications.
In addition, organizations don’t always progress from emerging to enabling in one direction or in a straight path.
We have seen SSIs form, break up, and re-form over time, so one SSI might go through the emerging cycle a few
times over the years. Moreover, an SSI’s capabilities might not all progress through the same states at the same time.
We’ve noted cases where one capability—vendor management, for example—might be emerging, while the defect
management capability is maturing, and the defect discovery capability is at an enabling stage. There is also constant
change in tools, skill levels, external expectations, attackers, attacks, resources, culture, and everything else. Pay
attention to the relative frequency with which the BSIMM activities are observed across all the participants but use
your own metrics to determine if you’re making the progress that’s right for you.
FIGURE 9. SSG EVOLUTION. These groups might have started in corporate or in engineering but, in general, settled on
enforcing compliance with tools. The new wave of engineering-led efforts is shifting where SSGs live, what they focus on, and
how they operate.
The DevOps movement has put these tensions center stage for SSG leaders to address. Given different objectives,
we find that the outcomes desired by these two approaches are usually very different. Rather than the top-down,
proactive risk management and “rules that people must follow” style of governance-minded teams, these newer
engineering-minded teams are more likely to “prototype good ideas” for securing software, which results in the
creation of even more code and infrastructure on the critical path (e.g., security features, home-spun vulnerability
discovery, security guardrails). Here, security is just another aspect of quality, and availability is just another aspect
of resilience.
To keep pace with both software development process changes (e.g., CI/CD adoption) and technology architecture
changes (e.g., cloud, container, and orchestration adoption), engineering-led efforts are independently evolving
both how they apply software security activities and, in some cases, what activities they apply. The changes these
engineering-led teams are making include downloading and integrating their own security tools, spinning up
cloud infrastructure and virtual assets as they need them, following policy on the use of open source software in
applications while routinely downloading dozens or hundreds of other open source packages to build and manage
software and processes, and so on. Engineering-led efforts and their associated fast-paced evolutionary changes
are putting governance-driven SSIs in a race to retroactively document, communicate, and even automate the
knowledge they hold.
In addition to centralized SSI efforts and engineering-led efforts, cloud service providers, software pipeline and
orchestration platforms, and even QA tools have all begun adding their view of security as a first-class citizen of their
feature sets, user guides, and knowledge bases. For example, organizations are seeing platforms like GitHub and
GitLab beginning to compete vigorously using security as a differentiator, leading both providers to create publicly
available security documentation with a 12- to 36-month vision. Evolving vendor-provided features might be signaling
to the marketplace and adopting organizations that vendors believe security must be included in developer tools
and that engineering-led security initiatives should feel comfortable relying on these platforms as the basis of their
security telemetry and even their governance workflows.
CONVERGENCE AS A GOAL
We frequently observe governance-driven SSIs planning centrally, seeking to proactively define an ideal risk posture
during their emerging phase. After that, the initial uptake of provided controls (e.g., security testing) is usually by
those teams that have experienced real security issues and are looking for help, while other teams might take a
wait-and-see approach. These firms often struggle during the maturation phase where growth will incur significant
expense and effort as the SSG scales the controls and their benefits enterprise-wide.
We also observe that emerging engineering-driven efforts prototype controls incrementally, building on existing tools
and techniques that already drive software delivery. Gains happen quickly in these emerging efforts, perhaps given
the steady influx of new tools and techniques introduced by engineering but also helped along by the fact that each
team is usually working in a homogeneous culture on a single application and technology stack. Even so, these groups
sometimes struggle to institutionalize durable gains during their maturation phase, usually because the engineers have
not yet been able to turn capability into either secure-by-default functionality or automation-friendly assurance—at
least not beyond the most frequently encountered security issues and beyond their own spheres of influence.
All of this said, scaling an SSI across a software portfolio is hard for everyone. Today’s evolving cultural and
technological environments seem to require a concerted effort at converging centralized and engineering efforts to
create a cohesive SSI that ensures the software portfolio is appropriately protected.
Emerging engineering-driven groups tend to view security as an enabler of software features and code quality. These
groups recognize the need for having security standards but tend to prefer incremental steps toward governance-
as-code as opposed to a large-manual-steps-with-human-review approach to enforcement. This tends to result in
engineers building security features and frameworks into architectures, automating defect discovery techniques
within a software delivery pipeline, and treating security defects like any other defect. Traditional human-driven
security decisions are modeled into a software-defined workflow as opposed to being written into a document
and implemented in a separate risk workflow handled outside of engineering. In this type of culture, it’s not that
the traditional SDLC gates and risk decisions go away, it’s that they get implemented differently and usually have
different goals compared to those of the governance-driven groups. SSGs, and likely champions groups as well, that
begin to support this approach will likely speed up both convergence of various efforts and alignment with corporate
risk management goals.
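As a minimal sketch of that idea, assuming a hypothetical defect-tracker record format and policy thresholds (nothing here is prescribed by the BSIMM), a governance-as-code check might encode in the delivery pipeline the release decision that a risk reviewer would otherwise make manually:

```python
# Hypothetical governance-as-code check: a security gate expressed as code and run
# in the delivery pipeline, treating security defects like any other defect.
# The policy values and defect fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Defect:
    severity: str              # e.g., "critical", "high", "medium", "low"
    is_security: bool
    has_approved_exception: bool = False

POLICY = {"critical": 0, "high": 3}  # maximum open security defects allowed per severity

def release_allowed(open_defects: list[Defect]) -> bool:
    """Return True when open security defects are within the encoded risk policy."""
    for severity, limit in POLICY.items():
        count = sum(
            1 for d in open_defects
            if d.is_security and d.severity == severity and not d.has_approved_exception
        )
        if count > limit:
            return False
    return True

if __name__ == "__main__":
    import sys
    defects = [Defect("high", True), Defect("critical", True, has_approved_exception=True)]
    # A non-zero exit blocks the release the way a manual sign-off would,
    # but the decision criteria live in version control with the rest of the code.
    sys.exit(0 if release_allowed(defects) else 1)
```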
Importantly, the delivery pipeline platforms upon which many engineering teams rely have begun to support a broader
set of security activities as on-by-default security features. As examples, OpenShift, GitHub, and GitLab are doing
external security research, responsibly disclosing vulnerabilities and notifying users, and providing some on-by-default
defect discovery, vulnerability management, and remediation management workflows. This allows engineering-driven
firms to share success between teams that use these platforms simply by sharing configuration information, thus
propagating their security policies quickly and consistently. It also allows SSGs and champions to more easily tie in at key
points in the SSDL to perform governance activities with minimal impact on software pipelines.
Though the BSIMM data and our analyses don’t dictate specific paths to SSI maturity, we have observed patterns
in the ways firms use the activities to improve their capabilities. For example, governance-led and emerging
engineering-led approaches to software security improvement embody different perspectives on risk management
that might not correlate. Governance-led groups often focus on rules, gates, and compliance, while emerging
engineering-led efforts usually focus on feature velocity, error avoidance through automation, and software resilience.
Success doesn’t require identical viewpoints, but collectively the viewpoints need to converge in order to keep the
firm safe. That means the groups must collaborate on risk management concerns to build on their strengths and
minimize their weaknesses.
Beyond immutable constraints like the applicability of regulation, we see evidence that assignment can be rather
opportunistic and perhaps be driven bottom-up by security engineers and development managers themselves. In
these cases, the security initiative’s leader often seeks opportunities to cull their efforts and scale key successes rather
than direct the use of controls top-down.
ENGINEERING-LED HEURISTICS
Our observations are that engineering-led groups start with open source and home-grown security
tools, with much less reliance on “big box” vulnerability discovery products. Generally, these groups
hold to two heuristics:
• Lengthening time to (or outright preventing) delivery is unacceptable. Instead, they organize to
provide telemetry and then respond asynchronously through virtual patching, rollback, or other
compensating controls.
• Depending solely on purchased boxed security standards delivered as part of a vendor’s core
ruleset in a given tool is likely unacceptable. Instead, or in addition, they build vulnerability
detective capability incrementally, in line with a growing understanding of software misuse and
abuse and associated business risk.
These groups might build on top of in-place test scaffolding, might purposefully extend open source scanners that
integrate cleanly with their development toolchain, or both. Extension often focuses on a different set of issues
than characterized in general lists such as the OWASP Top 10 or even the broader set of vulnerabilities found by
commercial tools. Instead, these groups might focus on issues such as denial of service, misuse/abuse of business
functionality, or enforcement of the organization’s technology-specific coding standards (even when these are
implicit rather than written down) as defects to be discovered and remediated.
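As one illustration of this incremental, organization-specific approach, the following sketch (the banned internal API and source layout are hypothetical) adds a check for a technology-specific coding standard to whatever test scaffolding already runs in the build:

```python
# Hypothetical custom detective check: flag use of an internally banned API
# as part of existing build scaffolding. The banned call and paths are examples only.
import pathlib
import re
import sys

BANNED_PATTERN = re.compile(r"\blegacy_crypto\.encrypt\(")  # assumed internal coding standard

def find_violations(root: str = "src") -> list[str]:
    violations = []
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if BANNED_PATTERN.search(line):
                violations.append(f"{path}:{lineno}: use the approved crypto wrapper instead")
    return violations

if __name__ == "__main__":
    found = find_violations()
    print("\n".join(found))
    sys.exit(1 if found else 0)  # fail the build only on the organization's own rule
```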
TABLE 4. MOST COMMON ACTIVITIES PER PRACTICE. This table shows the most commonly observed activity in each
of the 12 BSIMM practices for the entire data pool of 128 firms. This frequent observation means that each activity has broad
applicability across a wide variety of SSIs. See Table 5 for the most common activities across all 122 BSIMM12 activities.
Rank | Firms | Activity
2 | 117 | [SE1.2] Ensure host and network security basics are in place.
12 | 98 | [PT1.2] Feed results to the defect management and mitigation system.
13 | 96 | [CMVM1.2] Identify software defects found in operations monitoring and feed them back to development.
14 | 93 | [CMVM2.2] Track software bugs found in operations through the fix process.
TABLE 5. TOP 20 ACTIVITIES BY OBSERVATION COUNT. Shown here are the most commonly observed activities in the
BSIMM12 data.
SM LEVEL 1
[SM1.1: 91] Publish process and evolve as necessary.
The process for addressing software security is published and broadcast to all stakeholders so that everyone knows
the plan. Goals, roles, responsibilities, and activities are explicitly defined. Most organizations pick an existing
methodology, such as the Microsoft SDL or the Synopsys Touchpoints, then tailor it to meet their needs. Security
activities, such as those grouped into an SSDL process, are adapted to software lifecycle processes (e.g., waterfall,
agile, CI/CD, DevOps) so activities will evolve with both the organization and the security landscape. In many cases,
the process is defined by the SSG and only published internally; it doesn’t need to be publicly promoted outside
the firm to have the desired impact. In addition to publishing the written process, some firms also encode it into an
application lifecycle management (ALM) tool as software-defined workflow (see [SM3.4 Integrate software-defined
lifecycle governance]).
[SM1.3: 81] Educate executives on software security.
Executives are regularly shown the ways malicious actors attack software and the negative business impacts
those attacks can have on the organization. This education goes past the reporting of open and closed defects
to show what other organizations are doing to mature software security, including how they deal with the risks
of adopting emerging engineering methodologies with no oversight. By understanding both the problems and
their proper resolutions, executives can support the SSI as a risk management necessity. In its most dangerous
form, security education arrives courtesy of malicious hackers or public data exposure incidents. Preferably, the
SSG will demonstrate a worst-case scenario in a controlled environment with the permission of all involved (e.g.,
by showing working exploits and their business impact). In some cases, presentation to the Board can help garner
resources for new or ongoing SSI efforts. For example, demonstrating the need for new skill-building training in
evolving areas, such as DevOps groups using cloud-native technologies, can help convince leadership to accept SSG
recommendations when they might otherwise be ignored in favor of faster release dates or other priorities. Bringing
in an outside guru is often helpful when seeking to bolster executive attention.
[SM1.4: 118] Implement lifecycle instrumentation and use to define governance.
The software security process includes conditions for release (such as gates, checkpoints, guardrails, milestones,
etc.) at one or more points in a software lifecycle. The first two steps toward establishing security-specific release
conditions are to identify locations that are compatible with existing development practices and to then begin
gathering the input necessary to make a go/no-go decision, such as risk-ranking thresholds or defect data.
Importantly, the conditions might not be verified at this stage—for example, the SSG can collect security testing
results for each project prior to release, then provide their informed opinion on what constitutes sufficient testing or
acceptable test results without trying to stop a project from moving forward. In CI/CD environments, shorter release
cycles often require creative approaches to collecting the right evidence and rely heavily on automation. The idea of
defining governance checks in the process first and enforcing them later is extremely helpful in moving development
toward software security without major pain (see [SM2.2 Verify release conditions with measurements and track
exceptions]). Socializing the conditions and then verifying them once most projects already know how to succeed
is a gradual approach that can motivate good behavior without requiring it.
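A minimal sketch of this “define now, enforce later” approach in a CI/CD context might look like the following, where the evidence fields and conditions are hypothetical; the conditions are evaluated and recorded on every build, but a failure only produces an advisory result rather than stopping the release:

```python
# Hypothetical release-condition instrumentation: gather evidence at a defined
# checkpoint and record a go/no-go opinion without (yet) blocking the pipeline.
import json
from datetime import datetime, timezone

def evaluate_release_conditions(evidence: dict) -> dict:
    """Evaluate security release conditions; advisory only at this stage."""
    conditions = {
        "risk_ranking_present": evidence.get("risk_rank") in {"low", "medium", "high"},
        "security_tests_ran": evidence.get("security_test_count", 0) > 0,
        "no_open_critical_defects": evidence.get("open_critical_defects", 0) == 0,
    }
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "conditions": conditions,
        "advisory_go": all(conditions.values()),  # an informed opinion, not a hard gate
    }

if __name__ == "__main__":
    evidence = {"risk_rank": "medium", "security_test_count": 42, "open_critical_defects": 1}
    print(json.dumps(evaluate_release_conditions(evidence), indent=2))
```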
SM LEVEL 2
[SM2.1: 63] Publish data about software security internally and drive change.
To facilitate improvement, data are published internally about the state of software security within the organization.
This information might come in the form of a dashboard with metrics for executives and software development
management. Sometimes, these published data won’t be shared with everyone in the firm but only with relevant
stakeholders who then drive change in the organization. In other cases, open book management and data published
to all stakeholders help everyone know what’s going on, the philosophy being that sunlight is the best disinfectant.
If the organization’s culture promotes internal competition between groups, this information can add a security
dimension. Increasingly, security telemetry is used to gather measurements quickly and accurately, and might
initially focus less on historical trends (e.g., bugs per release) and more on speed (e.g., time to fix) and quality (e.g.,
defect density). Some SSIs might publish these data primarily for software development management in engineering
groups within pipeline platform dashboards, democratizing measurement for developer self-improvement.
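As a small sketch of the speed- and quality-oriented measurements described above (the record fields and component sizes are hypothetical), time to fix and defect density can be derived from whatever defect telemetry the organization already collects:

```python
# Hypothetical metrics derived from security defect telemetry:
# mean time to fix (speed) and defect density (quality).
from datetime import date

# Illustrative records pulled from a defect tracker and a code inventory.
defects = [
    {"component": "payments", "opened": date(2021, 6, 1), "closed": date(2021, 6, 11)},
    {"component": "payments", "opened": date(2021, 6, 5), "closed": date(2021, 6, 8)},
    {"component": "portal", "opened": date(2021, 7, 2), "closed": date(2021, 7, 30)},
]
component_kloc = {"payments": 40, "portal": 15}  # size in thousands of lines of code

fix_times = [(d["closed"] - d["opened"]).days for d in defects]
mean_time_to_fix = sum(fix_times) / len(fix_times)             # speed: time to fix
defect_density = len(defects) / sum(component_kloc.values())   # quality: defects per KLOC

print(f"Mean time to fix: {mean_time_to_fix:.1f} days")
print(f"Defect density:   {defect_density:.2f} defects/KLOC")
```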
CP LEVEL 1
[CP1.1: 98] Unify regulatory pressures.
If the business or its customers are subject to regulatory or compliance drivers such as PCI security standards;
GLBA, SOX, and HIPAA in the US; or GDPR in the EU, the SSG acts as a focal point for understanding the constraints
such drivers impose on software security. In some cases, the SSG creates or collaborates on a unified approach
that removes redundancy and conflicts from overlapping compliance requirements. A formal approach will map
applicable portions of regulations to controls applied to software to explain how the organization complies. As an
alternative, existing business processes run by legal, product management, or other risk and compliance groups
outside the SSG could also serve as the regulatory focal point, with the SSG providing software security knowledge.
A unified set of software security guidance for meeting regulatory pressures ensures that compliance work is
completed as efficiently as possible. Some firms move on to influencing the regulatory environment directly by
becoming involved in standards groups exploring new technologies and mandates.
CP LEVEL 2
[CP2.1: 55] Build PII inventory.
The organization identifies and tracks the kinds of PII processed or stored by each of its systems, along with their
associated data repositories. In general, simply noting which applications process PII isn’t enough; the type of PII and
where it is stored are necessary so the inventory can be easily referenced in critical situations. This usually includes
making a list of databases that would require customer notification if breached or a list to use in crisis simulations
(see [CMVM3.3 Simulate software crises]). A PII inventory might be approached in two ways: starting with each
individual application by noting its PII use or starting with particular types of PII and noting the applications that
touch each type. System architectures have evolved such that PII will flow into cloud-based service and endpoint
device ecosystems, and come to rest there (e.g., content delivery networks, social networks, mobile devices, IoT
devices), making it tricky to keep an accurate PII inventory.
[CP2.2: 49] Require security sign-off for compliance-related risk.
The organization has a formal compliance risk acceptance and accountability process that addresses all software
development projects. In that process, the SSG acts as an advisor while the risk owner signs off on the software’s state
prior to release based on adherence to documented criteria. For example, the sign-off policy might require the head
of the business unit to sign off on compliance issues that haven’t been mitigated or on compliance-related SSDL
steps that have been skipped. Sign-off is explicit and captured for future reference, with any exceptions tracked, even
under the fastest of application lifecycle methodologies. Note that an application without security defects might
still be noncompliant, so clean security testing results are not a substitute for a compliance sign-off. Even in DevOps
organizations where engineers have the technical ability to release software, there is still a need for a deliberate risk
acceptance step even if the criteria are embedded in automation (see [SM3.4 Integrate software-defined lifecycle
governance]). In cases where the risk owner signs off on a particular set of compliance acceptance criteria that are
then implemented in automation to provide governance-as-code, there must be an ongoing verification that the
criteria remain accurate and the automation is actually working.
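Where the acceptance criteria are embedded in automation, a minimal sketch of that governance-as-code step (the criteria names, evidence fields, and log format are hypothetical) might both evaluate the documented criteria and capture an explicit, attributable sign-off record with any exceptions tracked:

```python
# Hypothetical automated compliance sign-off: evaluate documented criteria and
# record an explicit, attributable decision, including any tracked exceptions.
import json
from datetime import datetime, timezone

ACCEPTANCE_CRITERIA = {          # assumed criteria approved by the risk owner
    "pii_inventory_updated": True,
    "encryption_at_rest_enabled": True,
    "open_compliance_findings": 0,
}

def sign_off(evidence: dict, risk_owner: str) -> dict:
    failures = [
        name for name, expected in ACCEPTANCE_CRITERIA.items()
        if evidence.get(name) != expected
    ]
    record = {
        "risk_owner": risk_owner,                 # accountability is explicit
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "criteria_met": not failures,
        "exceptions": failures,                   # tracked, not silently ignored
    }
    # Persist the record so the decision can be referenced in future audits.
    with open("compliance_signoff.log", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

evidence = {"pii_inventory_updated": True, "encryption_at_rest_enabled": True,
            "open_compliance_findings": 2}
print(sign_off(evidence, risk_owner="head_of_business_unit"))
```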
CP LEVEL 3
[CP3.1: 24] Create a regulator compliance story.
The SSG has information regulators want, so a combination of written policy, controls documentation, and artifacts
gathered through the SSDL gives the SSG the ability to demonstrate the organization’s software security compliance
story without a fire drill for every audit. Often, regulators, auditors, and senior management will be satisfied with
the same kinds of reports that can be generated directly from various tools. In some cases, particularly where
organizations leverage shared responsibility cloud services, the organization will require additional information from
vendors about how that vendor’s controls support organizational compliance needs. It will often be necessary to
normalize information that comes from disparate sources. While they are often the biggest, governments aren’t the
only regulators of behavior.
[CP3.2: 18] Impose policy on vendors.
Vendors are required to adhere to the same policies used internally and must submit evidence that their software
security practices pass muster. For a given organization, vendors might comprise cloud providers, middleware
providers, virtualization providers, container and orchestration providers, bespoke software creators, contractors, and
many more, and each might be held to different policy requirements. Evidence of their compliance could include
results from SSDL activities or from tests built directly into automation or infrastructure. Vendors might attest to the
fact that they perform certain SSDL processes. Policy enforcement might be through a point-in-time review (like that
which assures acceptance criteria), enforced by automated checks (such as those applied to pull requests, committed
artifacts like containers, or similar), or a matter of convention and protocol (e.g., services cannot connect unless
particular security settings are correct and identifying certificates are present).
T LEVEL 1
[T1.1: 76] Conduct software security awareness training.
To promote a culture of software security throughout the organization, the SSG conducts periodic software security
awareness training. As examples, the training might be delivered via SSG members, an outside firm, the internal
training organization, or e-learning. Here, the course content doesn’t necessarily have to be tailored for a specific
audience—for example, all developers, QA engineers, and project managers could attend the same “Introduction
to Software Security” course—but this effort should be augmented with a tailored approach that addresses the
firm’s culture explicitly, which might include the process for building security in, avoiding common mistakes, and
technology topics such as CI/CD and DevSecOps. Generic introductory courses that cover basic IT or high-level
security concepts don’t generate satisfactory results. Likewise, awareness training aimed only at developers and not
at other roles in the organization is insufficient.
[T1.7: 53] Deliver on-demand individual training.
The organization lowers the burden on students and reduces the cost of delivering training by offering on-
demand training for individuals across roles. The most obvious choice, e-learning, can be kept up to date through
a subscription model, but an online curriculum must be engaging and relevant to the students in various roles to
achieve its intended purpose. Ineffective (e.g., aged, off-topic) training or training that isn’t used won’t create any
change. Hot topics like containerization and security orchestration and new delivery styles such as gamification will
attract more interest than boring policy discussions. For developers, it’s possible to provide training directly through
the IDE right when it’s needed, but in some cases, building a new skill (such as cloud security or threat modeling)
might be better suited for instructor-led training, which can also be provided on demand.
[T1.8: 46] Include security resources in onboarding.
The process for bringing new hires into an engineering organization requires timely completion of a training module
about software security. The generic new hire process usually covers topics like picking a good password and avoiding
phishing, but this orientation period should be enhanced to cover topics such as how to create, deploy, and operate
secure code, the SSDL, and internal security resources (see [SR1.2 Create a security portal]). The objective is to ensure
that new hires contribute to the security culture as soon as possible. Although a generic onboarding module is useful,
it doesn’t take the place of a timely and more complete introductory software security course.
T LEVEL 2
[T2.5: 39] Enhance satellite through training and events.
The SSG strengthens the satellite network (see [SM2.3 Create or grow satellite]) by inviting guest speakers or holding
special events about advanced topics (e.g., the latest software security techniques for DevOps or serverless code
technologies). This effort is about providing customized training to the satellite so that it can fulfill its assigned
responsibilities, not about inviting satellite members to routine brown bags or signing them up for standard
computer-based training. Similarly, a standing conference call with voluntary attendance won’t get the desired
results, which are as much about building camaraderie as they are about sharing knowledge and organizational
efficiency. Face-to-face meetings are by far the most effective, even if they happen only once or twice a year and some
participants must attend over videoconferencing. In teams with many geographically dispersed and work-from-home
members, simply turning on cameras and ensuring everyone gets a chance to speak makes a substantial difference.
T LEVEL 3
[T3.1: 6] Reward progression through curriculum.
Knowledge is its own reward, but progression through the security curriculum brings other benefits, too, such as
career advancement. The reward system can be formal and lead to a certification or an official mark in the human
resources system, or it can be less formal and include motivators such as documented praise at annual review time.
Involving a corporate training department and/or human resources team can make security’s impact on career
progression more obvious, but the SSG should continue to monitor security knowledge in the firm and not cede
complete control or oversight. Coffee mugs and t-shirts can build morale, but it usually takes the possibility of real
career progression to change behavior.
[T3.2: 23] Provide training for vendors and outsourced workers.
Vendors and outsourced workers receive the same level of software security training given to employees. Spending
time and effort helping suppliers get security right at the outset is much easier than trying to determine what went
wrong later on, especially if the development team has moved on to other projects. Training individual contractors
is much more natural than training entire outsourced firms and is a reasonable place to start. It’s important that
everyone who works on the firm’s software has an appropriate level of training that increases their capability of
meeting the software security expectations for their role, regardless of their employment status. Of course, some
vendors and outsourced workers might have received adequate training from their own firms, but that should
always be verified.
[T3.3: 23] Host software security events.
The organization hosts security events featuring external speakers and content in order to strengthen its security
culture. Good examples of such events are Intel iSecCon and AWS re:Inforce, which invite all employees, feature
external presenters, and focus on helping engineering create, deploy, and operate better code. Employees benefit
from hearing outside perspectives, especially those related to fast-moving technology areas, and the organization
benefits from putting its security credentials on display (see [SM3.2 SSI efforts are part of external marketing]).
Events open only to small, select groups won’t result in the desired culture change across the organization.
[T3.4: 24] Require an annual refresher.
Everyone involved in the SSDL is required to take an annual software security refresher course. This course keeps
the staff up to date on the organization’s security approach and ensures the organization doesn’t lose focus due to
turnover, evolving methodologies, or changing deployment models. The SSG might give an update on the security
landscape and explain changes to policies and standards. A refresher could also be rolled out as part of a firm-wide
security day or in concert with an internal security conference, but it's useful only if it's fresh. Sufficiently covering
relevant topics and what has changed since the previous year will likely make up a significant amount of the content.
INTELLIGENCE
INTELLIGENCE: ATTACK MODELS (AM)
Attack Models capture information used to think like an attacker, including threat modeling inputs, abuse cases, data
classification, and technology-specific attack patterns.
AM LEVEL 1
[AM1.2: 77] Create a data classification scheme and inventory.
Security stakeholders in an organization agree on a data classification scheme and use it to inventory software,
delivery artifacts (e.g., containers), and associated persistent stores according to the kinds of data processed or
services called, regardless of deployment model (e.g., on- or off-premise). This allows applications to be prioritized
by their data classification. Many classification schemes are possible—one approach is to focus on PII, for example.
Depending on the scheme and the software involved, it could be easiest to first classify data repositories (see [CP2.1
Build PII inventory]) and then derive classifications for applications according to the repositories they use. Other
approaches to the problem include data classification according to protection of intellectual property, impact of
disclosure, exposure to attack, relevance to GDPR, and geographic boundaries.
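As a minimal sketch (the scheme and labels are hypothetical), an application's classification might simply be derived from the most sensitive repository it touches:

    # Derive an application's classification from its data repositories,
    # taking the most sensitive classification found (illustrative scheme only).
    SENSITIVITY = {"public": 0, "internal": 1, "pii": 2, "regulated": 3}

    def classify_application(repo_classifications: list) -> str:
        return max(repo_classifications, key=lambda c: SENSITIVITY[c])

    print(classify_application(["internal", "pii"]))      # pii
    print(classify_application(["public", "regulated"]))  # regulated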
[AM1.3: 41] Identify potential attackers.
The SSG periodically identifies potential attackers in order to understand their motivations and abilities. The outcome
of this exercise could be a set of attacker profiles that includes outlines for categories of attackers and more detailed
descriptions for noteworthy individuals to be used in end-to-end design review (see [AA1.2 Perform design review
for high-risk applications]). In some cases, a third-party vendor might be contracted to provide this information.
Specific and contextual attacker information is almost always more useful than generic information copied from
someone else’s list. Moreover, a list that simply divides the world into insiders and outsiders won’t drive useful results.
Identification of attackers should also consider the organization’s evolving software supply chain, attack surface,
theoretical internal attackers, and contract staff.
[AM1.5: 61] Gather and use attack intelligence.
The SSG ensures the organization stays ahead of the curve by learning about new types of attacks and vulnerabilities.
Attending technical conferences and monitoring attacker forums, then correlating that information with what’s
happening in the organization (perhaps by leveraging automation to mine operational logs and telemetry) helps
the SSG learn more about emerging vulnerability exploitation. In many cases, a subscription to a commercial service
can provide a reasonable way of gathering basic attack intelligence related to applications, APIs, containerization,
orchestration, cloud environments, and so on. Regardless of its origin, attack information must be adapted to the
organization’s needs and made actionable and useful for a variety of consumers, which might include developers,
testers, DevOps, security operations, and reliability engineers, among others.
AM LEVEL 3
[AM3.1: 5] Have a research group that develops new attack methods.
A research group works to identify and defang new classes of attacks before attackers even know that they exist.
Because the security implications of new technologies might not have been fully explored in the wild, doing it in-
house is sometimes the best way forward. This isn’t a penetration testing team finding new instances of known
types of weaknesses—it’s a research group that innovates new types of attacks. Some firms provide researchers time
to follow through on their discoveries using bug bounty programs or other means of coordinated disclosure (see
[CMVM3.7 Streamline incoming responsible vulnerability disclosure]). Others allow researchers to publish their
findings at conferences like DEF CON to benefit everyone.
SFD LEVEL 3
[SFD3.1: 16] Form a review board or central committee to approve and maintain secure design patterns.
A review board or central committee formalizes the process of reaching and maintaining consensus on design needs
and security tradeoffs. Unlike a typical architecture committee focused on functions, this group focuses on providing
security guidance, often in the form of patterns, standards, features, or frameworks, and also periodically reviews
already published design guidance (especially around authentication, authorization, and cryptography) to ensure
that design decisions don’t become stale or out of date. A review board can help control the chaos associated with
adoption of new technologies when development groups might otherwise make decisions on their own without
engaging the SSG. Review board security guidance can also serve to inform outsourced software providers about
security expectations (see [CP3.2 Impose policy on vendors]).
[SFD3.2: 15] Require use of approved security features and frameworks.
Implementers take their security features and frameworks from an approved list or repository (see [SFD2.1 Leverage
secure-by-design components and services]). There are two benefits to this activity: developers don’t spend time
reinventing existing capabilities, and review teams don’t have to contend with finding the same old defects in new
projects or when new platforms are adopted. Essentially, the more a project uses proven components, the easier
testing, code review, and architecture analysis become (see [AA1.1 Perform security feature review]). Reuse is a
major advantage of consistent software architecture and is particularly helpful for agile development and velocity
maintenance in CI/CD pipelines. Packaging and applying required components facilitate the delivery of services
as software features (e.g., identity-aware proxies). Containerization makes it especially easy to package and reuse
approved features and frameworks (see [SE2.5 Use application containers to support security goals]).
[SFD3.3: 5] Find and publish secure design patterns from the organization.
The SSG fosters centralized design reuse by collecting secure design patterns (sometimes referred to as security
blueprints) from across the organization and publishing them for everyone to use. A section of the SSG website
could promote positive elements identified during threat modeling or architecture analysis so that good ideas are
spread. This process is formalized—an ad hoc, accidental noticing isn’t sufficient. In some cases, a central architecture
or technology team can facilitate and enhance this activity. Common design patterns accelerate development, so
it’s important to use secure design patterns not just for applications but for all software assets (microservices, APIs,
containers, infrastructure, and automation).
SR LEVEL 1
[SR1.1: 90] Create security standards.
The SSG meets the organization’s demand for security guidance by creating standards that explain the required way
to adhere to policy and carry out specific security-centric operations. A standard might, for example, describe how
to perform identity-based application authentication or how to implement transport-level security, perhaps with the
SSG ensuring the availability of a reference implementation. Standards often apply to software beyond the scope
of an application’s code, including container construction, orchestration (e.g., infrastructure-as-code), and service
mesh configuration. Standards can be deployed in a variety of ways to keep them actionable and relevant. They can
be automated into development environments (e.g., worked into an IDE or toolchain) or explicitly linked to code
examples and deployment artifacts (e.g., containers). In any case, to be considered standards, they must be adopted
and enforced.
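For example, a transport-level security standard might link directly to a small reference snippet such as the sketch below, which uses Python's standard ssl module; the TLS 1.2 floor shown is an assumed organizational choice, not a BSIMM requirement.

    # Sketch of a reference implementation a standard could point to:
    # a TLS client context with certificate validation and a minimum protocol version.
    import ssl

    def make_tls_context() -> ssl.SSLContext:
        context = ssl.create_default_context()            # hostname and certificate checks on
        context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
        return context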
[SR1.2: 88] Create a security portal.
The organization has a well-known central location for information about software security. Typically, this is an
internal website maintained by the SSG that people refer to for the latest and greatest on security standards and
requirements, as well as for other resources provided by the SSG (e.g., training). An interactive portal is better than
a static portal with guideline documents that rarely change. Organizations can supplement these materials with
mailing lists, chat channels, and face-to-face meetings. Development teams are increasingly putting software security
knowledge directly into toolchains and automation that is outside the organization (e.g., GitHub), but that does not
remove the need for SSG-led knowledge management.
[SR1.3: 99] Translate compliance constraints to requirements.
Compliance constraints are translated into software requirements for individual projects and communicated to
the engineering teams. This is a linchpin in the organization’s compliance strategy: by representing compliance
constraints explicitly with requirements and informing stakeholders, the organization demonstrates that compliance
is a manageable task. For example, if the organization routinely builds software that processes credit card
transactions, PCI DSS compliance plays a role in the SSDL during the requirements phase. In other cases, technology
standards built for international interoperability can include security guidance on compliance needs. Representing
these standards as requirements also helps with traceability and visibility in the event of an audit. It’s particularly
useful to codify the requirements into reusable code or artifact deployment specifications.
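As one hedged illustration of codifying such a requirement, a PCI DSS-derived constraint such as "never log full card numbers" could become a small reusable helper plus an automated check; the pattern and helper below are hypothetical examples, not a complete or validated control.

    # Hypothetical translation of a compliance constraint into reusable code:
    # mask anything that looks like a primary account number before it is logged.
    import re

    PAN_PATTERN = re.compile(r"\b(\d{6})\d{3,9}(\d{4})\b")

    def mask_pan(text: str) -> str:
        return PAN_PATTERN.sub(lambda m: m.group(1) + "*" * 6 + m.group(2), text)

    masked = mask_pan("card=4111111111111111")
    assert "4111111111111111" not in masked  # check engineering teams can run automatically
    print(masked)                            # card=411111******1111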
SR LEVEL 2
[SR2.2: 64] Create a standards review board.
The organization creates a review board to formalize the process used to develop standards and to ensure that all
stakeholders have a chance to weigh in. This review board could operate by appointing a champion for any proposed
standard, putting the onus on the champion to demonstrate that the standard meets its goals and to get buy-in and
approval from the review board. Enterprise architecture or enterprise risk groups sometimes take on the responsibility
of creating and managing standards review boards. When the standards are implemented directly as software, the
responsible champion might be a DevOps manager, release engineer, or whoever owns the associated deployment
artifact (e.g., orchestration code).
[SR2.4: 74] Identify open source.
Open source components included in the software portfolio and integrated at runtime are identified and reviewed to
understand their dependencies. Organizations use a variety of tools and metadata provided by delivery pipelines to
discover old versions of components with known vulnerabilities, as well as cases where their software relies on multiple versions of the
same component. Automated tools for finding open source, whether whole components or large chunks of borrowed
code, are one way to approach this activity. Some software development pipeline platforms, container registries, and
middleware platforms have begun to provide this visibility as metadata resulting from behind-the-scenes artifact
scanning. A manual review or a process that relies solely on developers asking for permission does not generate
satisfactory results. Some organizations combine composition analysis results from multiple phases of the
software lifecycle in order to get a more complete and accurate view of the open source being included in
production software.
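A very rough sketch of the idea, with a hypothetical vulnerable-version list and "name==version" pins standing in for real composition analysis output:

    # Flag pinned open source versions against a known-vulnerable list and detect
    # multiple versions of the same component in use (illustrative data only).
    from collections import defaultdict

    KNOWN_VULNERABLE = {("examplelib", "1.0.2"), ("parserkit", "2.4.0")}

    def review_dependencies(pinned: list) -> None:
        versions = defaultdict(set)
        for entry in pinned:                      # entries like "name==version"
            name, version = entry.split("==")
            versions[name].add(version)
            if (name, version) in KNOWN_VULNERABLE:
                print(f"known-vulnerable: {name} {version}")
        for name, seen in versions.items():
            if len(seen) > 1:
                print(f"multiple versions in use: {name} {sorted(seen)}")

    review_dependencies(["examplelib==1.0.2", "parserkit==2.5.1", "parserkit==2.4.0"])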
SR LEVEL 3
[SR3.1: 35] Control open source risk.
The organization has control over its exposure to the risks that come along with using open source components
and all the involved dependencies, including dependencies integrated at runtime. The use of open source could
be restricted to predefined projects or to a short list of open source versions that have been through an approved
security screening process, have had unacceptable vulnerabilities remediated, and are made available only through
approved internal repositories and containers. For some use cases, policy might preclude any use of open source.
The legal department often spearheads additional open source controls due to the “viral” license problem associated
with GPL code. SSGs that partner with and educate the legal department on open source security risks can help
move an organization to improve its open source risk management practices, which must be applied across the
software portfolio to be effective.
[SR3.2: 13] Communicate standards to vendors.
The SSG works with vendors to educate them and promote the organization’s security standards. A healthy
relationship with a vendor isn’t guaranteed through contract language alone (see [CP2.4 Include software security
SLAs in all vendor contracts]), so the SSG should engage with vendors, discuss vendor security practices, and explain
in simple terms (rather than legalese) what the organization expects. Any time a vendor adopts the organization’s
security standards, it’s a clear sign of progress. Note that standards implemented as security features or infrastructure
configuration could be a requirement for integrating services with a vendor (see [SFD1.1 Integrate and deliver security
features] and [SFD2.1 Leverage secure-by-design components and services]). When the firm’s SSDL is publicly
available, communication regarding software security expectations is easier. Likewise, sharing internal practices and
measures can make expectations clear.
[SR3.3: 9] Use secure coding standards.
Secure coding standards help the organization’s developers avoid the most obvious bugs and provide ground
rules for code review. These standards are necessarily specific to a programming language or platform, and they
can address the use of popular frameworks, APIs, libraries, and infrastructure automation. Platforms might include
mobile or IoT runtimes, cloud service provider APIs, orchestration recipes, and SaaS platforms (e.g., Salesforce, SAP).
If the organization already has coding standards for other purposes, its secure coding standards should build upon
them. A clear set of secure coding standards is a good way to guide both manual and automated code review, as well
as to provide relevant examples for security training. Some groups might choose to integrate their secure coding
standards directly into automation. While enforcement isn’t the point at this stage (see [CR3.5 Enforce secure coding
standards]), violation of standards is a teachable moment for all stakeholders. Socializing the benefits of following
standards is also a good first step to gaining widespread acceptance (see [SM2.7 Create evangelism role and perform
internal marketing]).
[SR3.4: 20] Create standards for technology stacks.
The organization standardizes on specific technology stacks. This translates into a reduced workload because teams
don’t have to explore new technology risks for every new project. The organization might create a secure base
configuration (commonly in the form of golden images, Terraform definitions, etc.) for each technology stack, further
reducing the amount of work required to use the stack safely. In cloud environments, hardened configurations likely
include up-to-date security patches, security configuration, and security services, such as logging and monitoring.
In traditional on-premise IT deployments, a stack might include an operating system, a database, an application
server, and a runtime environment (e.g., a LAMP stack). Where the technology will be reused, such as containers,
microservices, or orchestration code, the security frontier is a good place to find traction; standards for secure use
of these reusable technologies means that getting security right in one place positively impacts the security posture
of all downstream dependencies (see [SE2.5 Use application containers to support security goals]).
AA LEVEL 1
[AA1.1: 113] Perform security feature review.
Security-aware reviewers identify the security features in an application and its deployment configuration
(authentication, access control, use of cryptography, etc.), then inspect the design and runtime parameters for
problems that would cause these features to fail at their purpose or otherwise prove insufficient. For example, this
kind of review would identify both a system that is subject to escalation-of-privilege attacks because of broken
access control and a mobile application that incorrectly puts PII in local storage. In some cases, use of the firm's
secure-by-design components can streamline this process (see [SFD2.1 Leverage secure-by-design components and
services]). Organizations often carry out security feature review with checklist-driven analysis and procedural threat
modeling efforts. Many modern applications are no longer simply “3-tier” but instead involve components architected
to interact across a variety of tiers: browser/endpoint, embedded, web, microservices, orchestration engines,
deployment pipelines, third-party SaaS, and so on. Some of these environments might provide robust security feature
sets, whereas others might have key capability gaps that require careful consideration, so organizations should not
consider the applicability and correct use of security features in just one tier of the application but across all tiers that
constitute the architecture and operational environment.
[AA1.2: 49] Perform design review for high-risk applications.
The organization can learn the benefits of design review by seeing real results for a few high-risk, high-profile
applications. Reviewers must have some experience performing detailed design reviews and breaking the design
under consideration, especially for new platforms or environments. Even if the SSG isn’t yet equipped to perform an
in-depth architecture analysis (see [AA2.1 Define and use AA process]), smart people with an adversarial mindset can
find important design problems. In all cases, a design review should produce a set of flaws and a plan to mitigate
them. An organization can use consultants to do this work, but it should participate actively. Ad hoc review paradigms
that rely heavily on expertise can be used here, but they don’t tend to scale in the long run. A review focused only on
whether a software project has performed the right process steps won’t generate useful results about flaws. Note that
a sufficiently robust design review process can’t be executed at CI/CD speed.
[AA1.3: 37] Have SSG lead design review efforts.
The SSG takes a lead role in performing a design review (see [AA1.2 Perform design review for high-risk applications])
to uncover flaws. Breaking down an architecture is enough of an art that the SSG must be proficient at it before it
can turn the entire job over to architects or other engineering teams, and proficiency requires practice. That practice
might then enable, for example, champions to take the day-to-day lead while the SSG maintains leadership around
knowledge and process. The SSG can’t be successful on its own, either—it will likely need help from architects or
implementers to understand the design. With a clear design in hand, the SSG might be able to carry out the detailed
review with a minimum of interaction with the project team. Over time, the responsibility for leading review efforts
should shift toward software security architects. Approaches to design review evolve over time, so it’s wise to not
expect to set a process and use it forever.
[AA1.4: 62] Use a risk methodology to rank applications.
To facilitate security feature and design review processes, the SSG or other assigned groups use a defined risk
methodology, which might be implemented via questionnaire or similar method—whether manual or automated—
to collect information about each application in order to assign a risk classification and associated prioritization.
Information needed for an assignment might include, “Which programming languages is the application written
in?” or “Who uses the application?” or “Is the application’s deployment software-orchestrated?” Typically, a qualified
member of the application team provides the information, but the process should be short enough to take only
a few minutes. Some teams might use automation to gather the necessary data. The SSG can use the answers to
categorize the application as, for example, high, medium, or low risk. Because a risk questionnaire can be easy to
game, it’s important to put into place some spot-checking for validity and accuracy. An overreliance on self-reporting
or automation can render this activity useless.
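A minimal sketch of turning questionnaire answers into a risk classification; the questions, weights, and thresholds below are hypothetical and would need the spot-checking for validity noted above.

    # Illustrative scoring of a short risk questionnaire into high/medium/low.
    def classify_risk(answers: dict) -> str:
        score = 0
        score += 3 if answers.get("internet_facing") else 0
        score += 3 if answers.get("handles_pii") else 0
        score += 2 if answers.get("software_orchestrated_deployment") else 0
        score += 1 if answers.get("includes_third_party_code") else 0
        if score >= 6:
            return "high"
        return "medium" if score >= 3 else "low"

    print(classify_risk({"internet_facing": True, "handles_pii": True}))  # high
    print(classify_risk({"includes_third_party_code": True}))             # low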
AA LEVEL 3
[AA3.1: 16] Have engineering teams lead AA process.
Engineering teams lead the AA process most of the time. This effort requires a well-understood and well-documented
process (see [AA2.1 Define and use AA process]), although the SSG still might contribute to AA in an advisory capacity
or under special circumstances. Even with a good process, consistency is difficult to attain because breaking
architecture requires experience, so be sure to provide architects with SSG or outside expertise on novel issues. In
any given organization, the identified engineering team might normally have responsibilities such as development,
DevOps, cloud security, operations security, security architecture, or a variety of similar roles.
[AA3.2: 2] Drive analysis results into standard design patterns.
Failures identified during threat modeling, design review, or AA are fed back to security and engineering teams
so that similar mistakes can be prevented in the future through use of improved design patterns, whether local or
formally approved (see [SFD3.1 Form a review board or central committee to approve and maintain secure design
patterns]). Cloud service providers have learned a lot about how their platforms and services fail to resist attack and
have codified this experience into patterns for secure use. Organizations that rely heavily on these services might
base their own application-layer patterns on the building blocks provided by the cloud service provider (for example,
AWS CloudFormation and Azure Blueprints). Note that security design patterns can interact in
surprising ways that break security, so the analysis process should be applied even when vetted design patterns are
in standard use.
[AA3.3: 11] Make the SSG available as an AA resource or mentor.
To build AA capability outside of the SSG, the SSG advertises itself as a resource or mentor for teams using the AA
process (see [AA2.1 Define and use AA process]). This effort might enable, for example, security champions, site
reliability engineers, DevSecOps engineers, and others to take the lead while the SSG offers advice. A principal point
of guidance should be tailoring reusable process inputs to make them more actionable within their own technology
stacks. These reusable inputs help protect the team’s time so they can focus on the problems that require creative
solutions rather than enumerating known bad habits. While the SSG might answer AA questions during office hours,
they will often assign a mentor to work with a team for the duration of the analysis. In the case of high-risk software,
the SSG should play a more active mentorship role in applying the AA process.
CR LEVEL 1
[CR1.2: 80] Perform opportunistic code review.
The SSG ensures that code review for high-risk applications is performed in an opportunistic fashion, such as by
following up a design review with a code review looking for security issues in source code and dependencies, and
perhaps also in deployment artifact configuration (e.g., containers) and automation metadata (e.g., infrastructure-as-
code). This informal targeting often evolves into a systematic approach. Code review could involve the use of specific
tools and services, or it might be manual, but it has to be part of a proactive process. When new technologies pop up,
new approaches to code review might become necessary.
[CR1.4: 102] Use automated tools.
Static analysis incorporated into the code review process makes the review more efficient and consistent. Automation
won’t replace human judgement, but it does bring definition to the review process and security expertise to reviewers
who typically aren’t security experts. Note that a specific tool might not cover an entire portfolio, especially when new
languages are involved, so additional local effort might be useful. Some organizations might progress to automating
tool use by instrumenting static analysis into source code management workflows (e.g., pull requests) and delivery
pipeline workflows (build, package, and deploy) to make the review more efficient, consistent, and in line with release
cadence. Whether use of automated tools is to review a portion of the source code incrementally, such as a developer
committing new code or small changes, or to conduct full-program analysis by scanning the entire codebase,
this service should be explicitly connected to a larger SSDL defect management process applied during software
development, not just used to “check the security box” on the path to deployment.
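One possible shape for such instrumentation, sketched below: scan only the files changed in a pull request and fail the check on high-severity findings. The run_scanner() function is a placeholder for whatever SAST tool the organization actually uses, and the severity threshold is an assumption.

    # Sketch of incremental static analysis wired into a pull-request workflow.
    import subprocess
    import sys

    def changed_files(base: str = "origin/main") -> list:
        out = subprocess.run(["git", "diff", "--name-only", base],
                             capture_output=True, text=True, check=True).stdout
        return [f for f in out.splitlines() if f.endswith(".py")]

    def run_scanner(path: str) -> list:
        """Placeholder for the real SAST tool; returns findings as dicts."""
        return []  # e.g., [{"severity": "high", "rule": "...", "line": 42}]

    def gate() -> None:
        findings = [f for path in changed_files() for f in run_scanner(path)]
        high = [f for f in findings if f["severity"] == "high"]
        if high:
            print(f"{len(high)} high-severity finding(s); routing to defect management")
            sys.exit(1)  # block the merge, but the defects are still tracked in the SSDL process

    if __name__ == "__main__":
        gate()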
[CR1.5: 49] Make code review mandatory for all projects.
A security-focused code review is mandatory for all projects under the SSG’s purview, with a lack of code review or
unacceptable results stopping a release, slowing it down, or causing it to be recalled. While all projects must undergo
code review, the process might be different for different kinds of projects. The review for low-risk projects might rely
more heavily on automation, for example, whereas high-risk projects might have no upper bound on the amount
of time spent by reviewers. Having a minimum acceptable standard forces projects that don’t pass to be fixed and
reevaluated. A code review tool with nearly all the rules turned off (so it can run at CI/CD automation speeds, for
example) won’t provide sufficient defect coverage. Similarly, peer code review or tools focused on quality and style
won’t provide useful security results.
[CR1.6: 32] Use centralized reporting to close the knowledge loop.
The bugs found during code review are tracked in a centralized repository that makes it possible to do both summary
and trend reporting for the organization. The code review information can be incorporated into a CISO-level
dashboard that might include feeds from other parts of the security organization (e.g., penetration tests, security
testing, DAST). Given the historical code review data, the SSG can also use the reports to demonstrate progress (see
[SM3.3 Identify metrics and use them to drive resourcing]) and then, for example, drive the training curriculum.
Individual bugs make excellent training examples (see [T2.8 Create and use material specific to company history]).
Some organizations have moved toward analyzing this data and using the results to drive automation.
[CR1.7: 51] Assign tool mentors.
Mentors are available to show developers how to get the most out of code review tools. If the SSG has the most
skill with the tools, it could use office hours or other outreach to help developers establish the right configuration
and get started on interpreting and remediating results. Alternatively, someone from the SSG might work with a
development team for the duration of the first review they perform. Centralized use of a tool can be distributed into
the development organization or toolchains over time through the use of tool mentors, but providing installation
instructions and URLs to centralized tools isn’t the same as mentoring. Increasingly, mentorship extends to tools
associated with deployment artifacts (e.g., container security) and infrastructure (e.g., cloud configuration). In many
organizations, satellite members or champions take on the tool mentorship role.
CR LEVEL 2
[CR2.6: 25] Use automated tools with tailored rules.
Custom rules are created and used to help uncover security defects specific to the organization’s coding standards
or the framework-based or cloud-provided middleware it uses. The same group that provides tool mentoring (see
[CR1.7 Assign tool mentors]) will likely spearhead this customization. Tailored rules can be explicitly tied to proper
usage of technology stacks in a positive sense and avoidance of errors commonly encountered in a firm’s codebase in
a negative sense. To reduce the workload for everyone, many organizations also create rules to remove repeated false
positives and turn off checks that aren’t relevant.
CR LEVEL 3
[CR3.2: 9] Build a capability to combine assessment results.
Assessment results are combined so that multiple analysis techniques feed into one reporting and remediation
process. In addition to code review, analysis techniques might include dynamic analysis, software composition
analysis, container scanning, cloud services monitoring, and so on. The SSG might write scripts or acquire software
to gather data automatically and combine the results into a format that can be consumed by a single downstream
review and reporting solution. The tricky part of this activity is normalizing vulnerability information from disparate
sources that use conflicting terminology. In some cases, using a standardized taxonomy (e.g., a CWE-like approach)
can help with normalization. Combining multiple sources helps drive better-informed risk mitigation decisions.
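A minimal sketch of the normalization step, assuming two hypothetical sources with conflicting field names mapped into one CWE-keyed schema:

    # Normalize findings from disparate tools into a single schema for one review process.
    def normalize(source: str, finding: dict) -> dict:
        if source == "sast":
            return {"cwe": finding["cwe_id"], "location": finding["file"],
                    "severity": finding["severity"].lower(), "source": source}
        if source == "container_scan":
            return {"cwe": finding.get("cwe", "CWE-unknown"), "location": finding["image"],
                    "severity": finding["risk"].lower(), "source": source}
        raise ValueError(f"unknown source: {source}")

    combined = [
        normalize("sast", {"cwe_id": "CWE-89", "file": "orders.py", "severity": "High"}),
        normalize("container_scan", {"cwe": "CWE-798", "image": "app:1.2", "risk": "Medium"}),
    ]
    print(combined)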
[CR3.3: 4] Create capability to eradicate bugs.
When a new kind of bug is discovered in the firm’s software, the SSG ensures rules are created to find it (see
[CR2.6 Use automated tools with tailored rules]) and helps use these rules to identify all occurrences of the new
bug throughout the codebases and runtime environments. It becomes possible to eradicate the bug type entirely
without waiting for every project to reach the code review portion of its lifecycle. A firm with only a handful of software
applications built on a single technology stack will have an easier time with this activity than firms with many large
applications built on a diverse set of technology stacks. A new development framework or library, rules in RASP or
a next-generation firewall, or cloud configuration tools that provide guardrails can often help in (but not replace)
eradication efforts.
[CR3.4: 1] Automate malicious code detection.
Automated code review is used to identify dangerous code written by malicious in-house developers or outsource
providers. Examples of malicious code that could be targeted include backdoors, logic bombs, time bombs,
nefarious communication channels, obfuscated program logic, and dynamic code injection. Although out-of-the-box
automation might identify some generic malicious-looking constructs, custom rules for the static analysis tools used
to codify acceptable and unacceptable code patterns in the organization’s codebase will likely become a necessity.
Manual code review for malicious code is a good start but insufficient to complete this activity at scale. While not all
backdoors or similar code were meant to be malicious when they were written (e.g., a developer’s feature to bypass
authentication during testing), such things have a tendency to stay in deployed code and should be treated as
malicious code until proven otherwise.
[CR3.5: 0] Enforce secure coding standards.
The enforced portions of an organization’s secure coding standards (see [SR3.3 Use secure coding standards]) often
start out as a simple list of banned functions, with a violation of these standards being sufficient grounds for rejecting
a piece of code. Other useful coding standard topics might include proper use of cloud APIs, use of approved
cryptography, memory sanitization, and many others. Code review against standards must be objective—it shouldn’t
become a debate about whether the noncompliant code is exploitable. In some cases, coding standards are specific
to language constructs and enforced with tools (e.g., codified into SAST rules). In other cases, published coding
standards are specific to technology stacks and enforced during the code review process or using automation.
Standards can be positive (“do it this way”) or negative (“do not use this API”), but they must be enforced to be useful.
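A simple, hedged sketch of what enforcing a banned-function list might look like; the list here is an arbitrary example, not a recommended standard.

    # Reject code that calls functions banned by the secure coding standard.
    import re
    import sys

    BANNED = ("eval", "exec", "pickle.loads")  # illustrative entries only

    def violations(source: str) -> list:
        return [name for name in BANNED
                if re.search(rf"\b{re.escape(name)}\s*\(", source)]

    code_under_review = "result = eval(user_input)"
    hits = violations(code_under_review)
    if hits:
        print(f"standard violation(s): {hits}")  # objective check; no exploitability debate
        sys.exit(1)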
ST LEVEL 2
[ST2.4: 19] Share security results with QA.
The SSG or others with security testing data routinely share results from security reviews with those responsible for
QA. Using security testing results as the basis for a conversation about common attack patterns or the underlying
causes of security defects allows QA to generalize that information into new test approaches. Organizations that
have chosen to leverage software pipeline platforms such as GitHub, or CI/CD platforms such as the Atlassian stack,
can benefit from QA receiving various testing results automatically, which should then facilitate timely stakeholder
conversations. In some cases, these platforms can be used to integrate QA into an automated remediation workflow
and reporting by generating change request tickets for developers, lightening the QA workload. Over time, QA learns
the security mindset, and the organization benefits from an improved ability to create security tests tailored to the
organization’s code.
[ST2.5: 21] Include security tests in QA automation.
Security tests are included in an automation framework and run alongside functional, performance, and other QA
tests. Many groups trigger these tests manually, whereas in a modern toolchain, these tests are likely part of the
pipeline and triggered through automation. When test creators who understand the software create security
tests, they can uncover more specialized or more relevant localized defects than commercial tools might (see
[ST1.4 Integrate opaque-box security tools into the QA process]). Security tests might be derived from typical failures
of security features (see [SFD1.1 Integrate and deliver security features]), from creative tweaks of functional tests and
developer tests, or even from guidance provided by penetration testers on how to reproduce an issue. Tests that
are performed manually or out-of-band likely will not provide timely feedback.
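A tiny pytest-style sketch of such a test, using a stand-in create_order() function to represent the application code under test (everything here is hypothetical):

    # Security test derived from a typical security feature failure mode:
    # a missing session must not silently succeed.
    import pytest

    class AuthError(Exception):
        pass

    def create_order(session_token, item):
        """Stand-in for the application code under test."""
        if not session_token:
            raise AuthError("authentication required")
        return f"order placed for {item}"

    def test_unauthenticated_order_is_rejected():
        with pytest.raises(AuthError):
            create_order(None, "widget")

    def test_authenticated_order_succeeds():
        assert "order placed" in create_order("valid-token", "widget")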
[ST2.6: 15] Perform fuzz testing customized to application APIs.
QA efforts include running a customized fuzzing framework against APIs critical to the organization. Testers could
begin from scratch or use an existing fuzzing toolkit, but the necessary customization often goes beyond creating
custom protocol descriptions or file format templates to giving the fuzzing framework a built-in understanding of
the application interfaces it will be calling into. Test harnesses developed explicitly for particular applications make
good places to integrate fuzz testing.
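A toy sketch of that kind of customization: start from a valid, structurally correct request and mutate individual fields rather than throwing random bytes at the whole interface. The handle_request() target and the message fields are hypothetical.

    # Field-aware fuzzing harness built around knowledge of the API's message structure.
    import copy
    import random

    def handle_request(msg: dict) -> None:
        """Stand-in target; a real harness would call into the application interface."""
        assert isinstance(msg["qty"], int) and msg["qty"] >= 0
        assert len(msg["sku"]) <= 32

    VALID = {"sku": "ABC-123", "qty": 1, "note": "hello"}
    MUTATIONS = [lambda: random.randint(-2**31, 2**31),
                 lambda: "A" * random.randint(0, 4096),
                 lambda: None,
                 lambda: {"nested": True}]

    random.seed(0)
    seen = set()
    for _ in range(500):
        msg = copy.deepcopy(VALID)
        field = random.choice(list(msg))
        msg[field] = random.choice(MUTATIONS)()   # mutate one field at a time
        try:
            handle_request(msg)
        except AssertionError:
            pass                                  # expected rejection path
        except Exception as exc:                  # unexpected failure is a potential finding
            key = (field, type(exc).__name__)
            if key not in seen:
                seen.add(key)
                print(f"possible defect: field={field!r} error={exc!r}")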
DEPLOYMENT
DEPLOYMENT: PENETRATION TESTING (PT)
The Penetration Testing practice involves standard outside-in testing of the sort carried out by security specialists.
Penetration testing focuses on vulnerabilities in the final configuration and provides direct feeds to defect
management and mitigation.
PT LEVEL 1
[PT1.1: 111] Use external penetration testers to find problems.
External penetration testers are used to demonstrate that the organization’s code needs help. Breaking a high-profile
application to provide unmistakable evidence that the organization isn’t somehow immune to the problem often gets
the right attention. Over time, the focus of penetration testing moves from trying to determine if the code is broken
in some areas to a sanity check done before shipping. External penetration testers that bring a new set of experiences
and skills to the problem are the most useful.
[PT1.2: 98] Feed results to the defect management and mitigation system.
Penetration testing results are fed back to engineering through established defect management or mitigation
channels, with development and operations responding via a defect management and release process. Testing
often targets container and infrastructure configuration in addition to applications, and results are commonly
provided in machine-readable formats to enable automated tracking. Properly done, this exercise demonstrates the
organization’s ability to improve the state of security, and many firms are emphasizing the critical importance of
not just identifying but actually fixing security problems. One way to ensure attention is to add a security flag to the
bug-tracking and defect management system. The organization might leverage developer workflow or social tooling
(e.g., Slack, JIRA) to communicate change requests, but those requests are still tracked explicitly as part of
a vulnerability management process.
PT LEVEL 2
[PT2.2: 33] Penetration testers use all available information.
Penetration testers, whether internal or external, routinely use available source code, design documents, architecture
analysis results, misuse and abuse cases, code review results, and cloud environment and other deployment
configuration to do deeper analysis and find more interesting problems. To effectively do their job, penetration testers
often need everything created throughout the SSDL, so an SSDL that creates no useful artifacts about the code will
make this effort harder. Having access to the artifacts is not the same as using them.
[PT2.3: 34] Schedule periodic penetration tests for application coverage.
The SSG collaborates in periodic security testing of all applications in its purview, which could be tied to a calendar
or a release cycle. High-profile applications should get a penetration test at least once a year, for example, even if
new releases haven’t yet moved into production. Other applications might receive different kinds of security testing
on a similar schedule. Of course, any security testing performed must focus on discovering vulnerabilities, not just
checking a process or compliance box. This testing serves as a sanity check and helps ensure that yesterday’s software
isn’t vulnerable to today’s attacks. The testing can also help maintain the security of software configurations and
environments, especially for containers and components in the cloud. One important aspect of periodic security
testing across the portfolio is to make sure that the problems identified are actually fixed and don’t creep back into
the build. New automation created for CI/CD deserves penetration testing as well.
PT LEVEL 3
[PT3.1: 23] Use external penetration testers to perform deep-dive analysis.
The organization uses external penetration testers to do deep-dive analysis for critical projects or technologies and
to introduce fresh thinking into the SSG. These testers should be domain experts and specialists who keep the
organization up to speed with the latest version of the attacker’s perspective and have a track record for breaking the
type of software being tested. Skilled penetration testers will always break a system, but the question is whether they
demonstrate new kinds of thinking about attacks that can be useful when designing, implementing, and hardening
new systems. Creating new types of attacks from threat intelligence and abuse cases typically requires extended
timelines; that investment is essential when it comes to new technologies, and it helps prevent checklist-driven
approaches that look only for known types of problems.
[PT3.2: 12] Customize penetration testing tools.
The SSG collaborates in either creating penetration testing tools or adapting publicly available ones to more efficiently
and comprehensively attack the organization’s software. Tools will improve the efficiency of the penetration testing
process without sacrificing the depth of problems that the SSG can identify. Automation can be particularly valuable
in organizations using agile methodologies because it helps teams go faster. Tools that can be tailored are always
preferable to generic tools. Success here is often dependent upon both the depth of tests and their scope.
SE LEVEL 1
[SE1.1: 80] Use application input monitoring.
The organization monitors the input to the software that it runs in order to spot attacks. For web code, a WAF can
do this job, while other kinds of software likely require other approaches, including runtime instrumentation. The
SSG might be responsible for the care and feeding of the monitoring system, but incident response isn’t part of this
activity. For web applications, WAFs that write log files can be useful if someone periodically reviews the logs and
takes action. Other software and technology stacks, such as mobile and IoT, likely require their own input monitoring
solutions. Serverless and containerized software can require interaction with vendor software to get the appropriate
logs and monitoring data. Cloud deployments and platform-as-a-service usage can add another level of difficulty to
the monitoring, collection, and aggregation approach.
SE LEVEL 2
[SE2.2: 48] Define secure deployment parameters and configurations.
The SSDL requires creating an installation guide or a clearly described configuration for deployable software artifacts
and the infrastructure-as-code necessary to deploy them, helping teams install and configure software securely.
When special steps are required to ensure a deployment is secure, these steps can be outlined in a configuration
guide or explicitly described in deployment automation, including information on COTS, vendor, and cloud services
components. In some cases, installation guides are not used internally but are distributed to customers who buy
the software. All deployment automation should be understandable by humans, not just by machines. Increasingly,
secure deployment parameters and configuration are codified into infrastructure scripting (e.g., Terraform, Helm,
Ansible, and Chef).
[SE2.4: 32] Protect code integrity.
The organization can attest to the provenance, integrity, and authorization of important code before allowing it to
execute. While legacy and mobile platforms accomplished this with point-in-time code signing and permissions
activity, protecting modern containerized software demands actions in various lifecycle phases. Organizations can
use build systems to verify sources and manifests of dependencies, creating their own cryptographic attestation
of both. Packaging and deployment systems can sign and verify binary packages, including code, configuration,
metadata, code identity, and authorization to release material. In some cases, organizations allow only code from
their own registries to execute in certain environments. With many DevOps practices greatly increasing the number
of people who can touch the code, organizations should also use permissions and peer review to govern code
commits within source code management to help protect integrity.
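As a small, hedged example of one such control, the sketch below verifies an artifact's SHA-256 digest against a build-system manifest before it is allowed to ship; a production pipeline would also verify a signature over the manifest itself. File names and the manifest layout are hypothetical.

    # Verify a built artifact against the digest recorded in the build manifest.
    import hashlib
    import json
    import pathlib

    def sha256_of(path: str) -> str:
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    def verify_artifact(artifact_path: str, manifest_path: str) -> bool:
        manifest = json.loads(pathlib.Path(manifest_path).read_text())
        expected = manifest["artifacts"][pathlib.Path(artifact_path).name]["sha256"]
        return sha256_of(artifact_path) == expected  # mismatch: provenance cannot be attested

    # Example manifest: {"artifacts": {"app.tar.gz": {"sha256": "<digest>"}}}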
[SE2.5: 44] Use application containers to support security goals.
The organization uses application containers to support its software security goals, which likely include ease of
deployment, a tighter coupling of applications with their dependencies, immutability, integrity (see [SE2.4 Protect code
integrity]), and some isolation benefits without the overhead of deploying a full operating system on a virtual machine.
Containers provide a convenient place for security controls to be applied and updated consistently. While containers can
be useful in development and test environments, production use provides the needed security benefits.
[SE2.6: 59] Ensure cloud security basics.
Organizations should already be ensuring that their host and network security basics are in place, but they must
also ensure that basic requirements are met in cloud environments. Of course, cloud-based virtual assets often have
public-facing services that create an attack surface (e.g., cloud-based storage) that is different from the one in a
private data center, so these assets require customized security configuration and administration. In the increasingly
software-defined world, the SSG has to help everyone explicitly implement cloud-based security features and controls
(some of which can be built in, for example, cloud provider administration consoles) that are comparable to those
built with cables and physical hardware in private data centers. Detailed knowledge about cloud provider shared
responsibility security models is always necessary.
[SE2.7: 33] Use orchestration for containers and virtualized environments.
The organization uses automation to scale service, container, and virtualized environments in a disciplined way.
Orchestration processes take advantage of built-in and add-on security features (see [SFD2.1 Leverage secure-by-
design components and services]), such as hardening against drift, secrets management, and rollbacks, to ensure
that each deployed workload meets predetermined security requirements. Setting security behaviors in aggregate
allows for rapid change when the need arises. Orchestration platforms are themselves software that become part of
your production environment, which in turn requires hardening and security patching and configuration; in other
words, if you use Kubernetes, make sure you patch Kubernetes.
CMVM LEVEL 3
[CMVM3.1: 4] Fix all occurrences of software bugs found in operations.
The organization fixes all instances of a bug found during operations (see [CMVM1.2 Identify software defects found
in operations monitoring and feed them back to development])—not just the small number of instances that trigger
bug reports—to meet risk management, timeliness, recovery, continuity, and resiliency goals. Doing this proactively
requires the ability to reexamine the entire inventory of software delivery value streams when new kinds of bugs
come to light (see [CR3.3 Create capability to eradicate bugs]). One way to approach reexamination is to create a
ruleset that generalizes a deployed bug into something that can be scanned for via automated code review. In
some environments, fixing a bug might comprise removing it from production immediately and making the actual
fix in some priority order before redeployment. Use of orchestration can greatly simplify deploying the fix for all
occurrences of a software bug (see [SE2.7 Use orchestration for containers and virtualized environments]).
[CMVM3.2: 11] Enhance the SSDL to prevent software bugs found in operations.
Experience from operations leads to changes in the SSDL (see [SM1.1 Publish process and evolve as necessary]), which
can in turn be strengthened to prevent the reintroduction of bugs found during operations. To make this process
systematic, incident response postmortem includes a feedback-to-SSDL step. The outcomes of the postmortem
might result in changes such as tool-based policy rulesets in a CI/CD pipeline and adjustments to automated
deployment configuration (see [SM3.4 Integrate software-defined lifecycle governance]). This works best when root-
cause analysis pinpoints where in the software lifecycle an error could have been introduced or slipped by uncaught
(e.g., a defect escape). DevOps engineers might have an easier time with this because all the players are likely involved
in the discussion and the solution. An ad hoc approach to SSDL improvement isn’t sufficient.
[CMVM3.3: 14] Simulate software crises.
The SSG simulates high-impact software security crises to ensure software incident detection and response
capabilities minimize damage. Simulations could test for the ability to identify and mitigate specific threats or, in
other cases, begin with the assumption that a critical system or service is already compromised and evaluate the
organization’s ability to respond. Planned chaos engineering can be effective at triggering unexpected conditions
during simulations. The exercises must include attacks or other software security crises at the appropriate software
layer to generate useful results (e.g., at the application layer for web applications and at lower layers for IoT devices).
When simulations model successful attacks, an important question to consider is the time required to clean up.
Regardless, simulations must focus on security-relevant software failure and not on natural disasters or other types
of emergency response drills. Organizations that are highly dependent on vendor infrastructure (e.g., cloud service
providers, SaaS, PaaS) and security features will naturally include those things in crisis simulations.
2ND MEASURES: 31 | 32 | 50 | 42 | 36 | 30 | 26 | 21 | 13 | 11 | 0 | 0
3RD MEASURES: 14 | 12 | 32 | 20 | 16 | 15 | 10 | 4 | 1 | 0 | 0 | 0
4TH MEASURES: 4 | 7 | 8 | 7 | 5 | 2 | 2
SSG MEMBERS: 2,837 | 1,801 | 1,596 | 1,600 | 1,268 | 1,111 | 1,084 | 976 | 978 | 786 | 635 | 370
SATELLITE MEMBERS: 6,448 | 6,656 | 6,298 | 6,291 | 3,501 | 3,595 | 2,111 | 1,954 | 2,039 | 1,750 | 1,150 | 710
DEVELOPERS: 398,544 | 490,167 | 468,500 | 415,598 | 290,582 | 272,782 | 287,006 | 272,358 | 218,286 | 185,316 | 141,175 | 67,950
APPLICATIONS: 153,519 | 176,269 | 173,233 | 135,881 | 94,802 | 87,244 | 69,750 | 69,039 | 58,739 | 41,157 | 28,243 | 3,970
AVG. SSG AGE (YEARS): 4.41 | 4.32 | 4.53 | 4.13 | 3.88 | 3.94 | 3.98 | 4.28 | 4.13 | 4.32 | 4.49 | 5.32
AVG. SSG RATIO: 2.59/100 | 2.01/100 | 1.37/100 | 1.33/100 | 1.60/100 | 1.61/100 | 1.51/100 | 1.4/100 | 1.95/100 | 1.99/100 | 1.02/100 | 1.13/100
TABLE A. BSIMM NUMBERS OVER TIME (one column per BSIMM study, most recent at left). The chart shows how the BSIMM study has grown over the years.
FINANCIAL: 38 | 42 | 57 | 50 | 47 | 42 | 33 | 26 | 19 | 17 | 12 | 4
FINTECH: 21 | 21
ISVs: 42 | 46 | 43 | 42 | 38 | 30 | 27 | 25 | 19 | 15 | 7 | 4
TECH: 28 | 27 | 20 | 22 | 16 | 14 | 17 | 14 | 13 | 10 | 7 | 2
HEALTHCARE: 14 | 14 | 16 | 19 | 17 | 15 | 10
INTERNET OF THINGS: 18 | 17 | 13 | 16 | 12 | 12 | 13
CLOUD: 26 | 30 | 20 | 17 | 16 | 15
INSURANCE: 13 | 14 | 11 | 10 | 11 | 10
RETAIL: 7 | 8 | 9 | 10
TABLE B. BSIMM VERTICALS OVER TIME (firm counts per BSIMM study, most recent at left). The vertical representation has grown over the years. Remember that a firm can appear in more than one vertical.
TABLE C. BSIMM12 REASSESSMENTS SCORECARD ROUND 1 VS. ROUND 2. This chart shows how 52 SSIs changed
between their first and second assessment.
FIGURE A. ALLFIRMS ROUND 1 VS. ALLFIRMS ROUND 2 SPIDER CHART. This diagram illustrates the high-water mark
change in 52 firms between their first and second BSIMM assessment. (The spider chart plots the 12 BSIMM practices
on axes scaled from 0.0 to 3.0.)
FIGURE B. ACTIVITY INCREASES BETWEEN FIRST AND SECOND MEASUREMENTS. Between initial measurements,
firms most commonly formalize or adopt activities in the Governance domain. (The chart plots the increase in
observations, on a 0 to 25 scale, for SM2.3, SM1.1, CP1.3, SR2.5, SM2.1, SM2.7, SR2.2, SR2.4, and CR1.7.)
The second factor is that newly observed activities overwrite aged-out data. For example, [CR1.2 Perform opportunistic
code review] was newly observed in 17 firms, but it was either no longer observed in 10 firms or decreased due to
data aging out, giving a total change of seven (as shown in the scorecard). In a different example, the activity [SM2.7
Create evangelism role and perform internal marketing] was newly observed in 20 firms and dropped out of the data
pool or was no longer observed in five firms. Therefore, the total observation count changed by 15 on the scorecard.
Interestingly, while we continue to see an increase in observation counts for [SM2.7 Create evangelism role and
perform internal marketing] for firms performing their second assessment, the overall observation count for that
activity across all 128 firms decreased in BSIMM12.
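The bookkeeping behind these deltas is simple; a minimal sketch (our illustration, with the numbers taken from the examples above):

def net_scorecard_change(newly_observed: int, dropped: int) -> int:
    """Net change in an activity's observation count between rounds: firms where
    it was newly observed, minus firms where it was no longer observed or aged out."""
    return newly_observed - dropped

# Examples from the text above:
assert net_scorecard_change(newly_observed=17, dropped=10) == 7   # [CR1.2]
assert net_scorecard_change(newly_observed=20, dropped=5) == 15   # [SM2.7]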
TABLE D. BSIMM12 REASSESSMENTS SCORECARD ROUND 1 VS. ROUND 3. This chart shows how 21 SSIs changed from
their first to their third assessment.
[Spider chart: high-water marks for the 12 BSIMM practices on a 0.0 to 3.0 scale, round 1 vs. round 3.]
FIGURE C. ALLFIRMS ROUND 1 VS. ALLFIRMS ROUND 3 SPIDER CHART. This diagram illustrates the high-water mark
change in 21 firms between their first and third BSIMM assessment.
Interestingly, while this chart shows growth in every practice, it shows only a slight increase in the Compliance &
Policy and Penetration Testing practices.
Figure D shows how the average observation count increases by practice from the first to the second assessment and
then from the first to the third for the 21 firms that have performed at least three measurements. When comparing
how firms grew in the second and third measurements, we can see the largest increase is in Strategy & Metrics with
smaller increases in Attack Models, Standards & Requirements, and Software Environment. Although we noticed an
increase in observation count in Security Features & Design and Architecture Analysis from the first to the second
assessment, we did not see a corresponding increase from the second to the third. Possible drivers: budgeting for human-intensive activities is hard and happens only periodically, some activities are more difficult than others and are attempted only periodically, and some activities are easy to apply across most of the portfolio and therefore rarely need adjustment.
[Bar chart: average observation increase per practice (0.0 to 2.5) for SM, CP, T, AM, SFD, SR, AA, CR, ST, PT, SE, and CMVM.]
FIGURE D. LONGITUDINAL INCREASES BETWEEN 21 FIRMS ROUND 2 AND ROUND 3 BY PRACTICE. This chart shows the practices in which organizations exert more effort over time, plotting the average increase as firms move from their first to their second and then from their first to their third measurement.
Digging into the practices further allows us to recognize individual activities that have highly dissimilar observation
rates across measurement rounds. For example, there are some activities that are rare across firms’ first
measurements but are much more common for their second measurements. Table E illustrates this for level 3
activities. We can conclude that while some organizations do some level 3 activities early in their maturity journey,
many more turn to level 3 activities after reaching a good benchmark security posture.
ACTIVITY | % OBSERVED (76 R1 FIRMS) | % OBSERVED (52 R2 FIRMS)
[SM3.1] Use software asset tracking application with portfolio view | 7% | 33%
[CP3.3] Drive feedback from software lifecycle data back to policy | 1% | 10%
[SFD3.1] Form a review board or central committee to approve and maintain secure design patterns | 7% | 21%
TABLE E. OBSERVATION RATE OF SELECTED LEVEL 3 ACTIVITIES FOR 76 R1 AND 52 R2 FIRMS. We see a significantly
higher observation rate for some level 3 activities, where the increase in observation counts from the first to the second
assessment round is much higher than the growth rate for other activities. Therefore, these might represent high-impact
activities that a firm should consider as it goes through the maturing and enabling phases of SSI maturity. Don’t overlook
activities just because they are level 3; remember, some activities are level 3 because they are newly added.
STRATEGY & METRICS ATTACK MODELS ARCHITECTURE ANALYSIS PENETRATION TESTING
[SM1.1] 91 71.1% [AM1.2] 77 60.2% [AA1.1] 113 88.3% [PT1.1] 111 86.7%
[SM1.4] 118 92.2% [AM1.5] 61 47.7% [AA1.3] 37 28.9% [PT1.3] 88 68.8%
[SM3.4] 6 4.7%
COMPLIANCE & POLICY SECURITY FEATURES & DESIGN CODE REVIEW SOFTWARE ENVIRONMENT
[CP1.2] 114 89.1% [SFD1.2] 83 64.8% [CR1.4] 102 79.7% [SE1.2] 117 91.4%
TRAINING STANDARDS & REQUIREMENTS SECURITY TESTING CONFIG. MGMT. & VULN. MGMT.
[T1.1] 76 59.4% [SR1.1] 90 70.3% [ST1.1] 100 78.1% [CMVM1.1] 108 84.4%
TABLE F. BSIMM12 SCORECARD. This scorecard shows how often we observed each of the BSIMM activities in the BSIMM12
data pool of 128 firms. Note that the observation count data fall naturally into levels per practice.
[Spider chart: AllFirms (128) average high-water mark per practice, on a 0.0 to 3.0 scale.]
FIGURE E. ALLFIRMS SPIDER CHART. This diagram shows the average of the high-water mark collectively reached in
each practice by the 128 BSIMM12 firms. Collectively across these firms, we observed more level 2 and 3 activities in practices
such as Strategy & Metrics, Compliance & Policy, and Standards & Requirements compared to practices such as Attack Models,
Architecture Analysis, Code Review, and Security Testing.
STRATEGY & METRICS
LEVEL 1
Implement lifecycle instrumentation and use to define governance. [SM1.4] 118 92.2%
LEVEL 2
Publish data about software security internally and drive change. [SM2.1] 63 49.2%
Verify release conditions with measurements and track exceptions. [SM2.2] 60 46.9%
LEVEL 3
Use a software asset tracking application with portfolio view. [SM3.1] 22 17.2%
COMPLIANCE & POLICY
LEVEL 3
Drive feedback from software lifecycle data back to policy. [CP3.3] 6 4.7%
ATTACK MODELS
LEVEL 2
Build attack patterns and abuse cases tied to potential attackers. [AM2.1] 14 10.9%
LEVEL 3
Have a research group that develops new attack methods. [AM3.1] 5 3.9%
SECURITY FEATURES & DESIGN
LEVEL 3
Find and publish secure design patterns from the organization. [SFD3.3] 5 3.9%
SECURITY TESTING
LEVEL 1
Ensure QA performs edge/boundary value condition testing. [ST1.1] 100 78.1%
Drive tests with security requirements and security features. [ST1.3] 87 68.0%
Integrate opaque-box security tools into the QA process. [ST1.4] 50 39.1%
LEVEL 3
Begin to build and apply adversarial security tests (abuse cases). [ST3.5] 2 1.6%
PENETRATION TESTING
LEVEL 1
Use external penetration testers to find problems. [PT1.1] 111 86.7%
Feed results to the defect management and mitigation system. [PT1.2] 98 76.6%
SOFTWARE ENVIRONMENT
LEVEL 1
Ensure host and network security basics are in place. [SE1.2] 117 91.4%
CONFIG. MGMT. & VULN. MGMT.
LEVEL 2
Track software bugs found in operations through the fix process. [CMVM2.2] 93 72.7%
LEVEL 3
Enhance the SSDL to prevent software bugs found in operations. [CMVM3.2] 11 8.6%
TABLE G. BSIMM12 SKELETON. This expanded version of the BSIMM skeleton shows the 12 BSIMM practices and the 122 activities they contain, along with the observation rates as both counts and percentages. Highlighted activities are the most common per practice.
[Histogram: number of firms (0 to 35) by BSIMM score range, with average SSG age noted per range (9.6, 5.7, 4.1, 4.5, 2.6, and 1.9 years).]
FIGURE F. BSIMM SCORE DISTRIBUTION. Firm scores most frequently fall into the 31 to 40 range, with an average SSG age
of 4.1 years.
COMPARING VERTICALS
Table H shows the BSIMM scorecards for the nine verticals compared side by side. In the Activity columns, we have
highlighted the most common activity in each practice as observed in the entire BSIMM data pool (128 firms). See
Part Two for discussion.
[SM1.1] 29 26 23 20 16 16 11 10 5
[SM1.3] 27 25 18 17 11 11 9 9 4
[SM1.4] 36 38 27 22 20 17 13 13 6
[SM2.1] 18 26 9 17 15 7 7 8 4
[SM2.2] 19 22 15 11 11 8 5 3 2
[SM2.3] 22 15 14 12 10 11 6 7 3
[SM2.6] 18 24 16 11 10 10 6 4 3
[SM2.7] 23 16 18 13 11 11 8 5 4
[SM3.1] 5 9 5 4 6 4 1 1 1
[SM3.2] 5 0 2 2 4 2 1 1 1
[SM3.3] 5 9 3 3 5 2 3 3 0
[SM3.4] 1 2 1 0 2 0 1 1 0
[CP1.1] 32 28 22 17 18 15 14 10 3
[CP1.2] 33 36 22 23 21 16 14 13 7
[CP1.3] 24 30 17 16 18 12 7 9 5
[CP2.1] 15 19 9 12 11 9 7 3 4
[CP2.2] 13 17 15 6 6 11 8 4 2
[CP2.3] 21 21 17 12 12 10 10 5 2
[CP2.4] 20 18 10 11 7 7 6 7 4
[CP2.5] 25 21 16 17 14 10 8 5 2
[CP3.1] 3 11 3 3 6 3 3 3 1
[CP3.2] 4 8 3 1 1 2 3 3 1
[CP3.3] 3 2 2 3 2 1 0 0 0
[T1.1] 23 23 18 17 17 15 6 6 5
[T1.7] 15 20 10 13 10 9 3 6 4
[T1.8] 16 16 10 12 9 6 4 8 2
[T2.5] 15 8 10 8 8 6 2 5 2
[T2.8] 11 2 12 8 3 9 3 2 2
[T2.9] 11 13 9 7 6 7 3 5 4
[T3.1] 2 3 2 2 0 1 0 2 0
[T3.2] 8 8 6 8 6 5 2 3 1
[T3.3] 8 10 6 2 3 5 1 2 0
[T3.4] 5 10 4 3 5 2 3 5 1
[T3.5] 1 4 3 0 2 0 0 1 1
[T3.6] 2 1 1 2 1 0 0 0 0
TABLE H. VERTICAL COMPARISON SCORECARD. This table allows for easy comparisons of observation rates for the nine
verticals tracked in BSIMM12.
[AM1.2] 17 34 9 13 14 8 11 10 6
[AM1.3] 7 17 7 4 7 5 5 6 2
[AM1.5] 13 22 12 11 13 9 9 6 3
[AM2.1] 2 6 3 1 3 4 2 3 1
[AM2.2] 3 5 4 2 2 3 1 1 0
[AM2.5] 5 3 6 3 2 5 2 1 1
[AM2.6] 4 1 5 2 3 4 1 0 0
[AM2.7] 7 5 5 4 1 4 2 1 0
[AM3.1] 2 0 1 1 2 2 0 0 1
[AM3.2] 2 0 2 2 0 1 1 1 0
[AM3.3] 2 3 0 2 1 1 0 0 0
[SFD1.1] 30 32 20 21 18 11 12 10 7
[SFD1.2] 32 19 21 19 16 16 11 8 5
[SFD2.1] 14 8 11 8 9 7 2 1 1
[SFD2.2] 22 13 14 14 8 12 4 3 4
[SFD3.1] 1 11 2 0 2 1 2 2 1
[SFD3.2] 6 4 2 6 3 1 0 1 1
[SFD3.3] 2 1 2 0 0 2 1 0 1
[SR1.1] 21 30 19 14 18 14 10 10 6
[SR1.2] 31 25 21 21 14 15 9 7 5
[SR1.3] 30 27 24 17 18 16 13 9 6
[SR2.2] 10 29 12 10 11 6 4 10 4
[SR2.4] 24 26 16 13 14 13 7 9 2
[SR2.5] 19 17 13 10 8 10 7 6 4
[SR3.1] 11 13 6 7 10 4 3 4 1
[SR3.2] 4 3 3 2 1 2 3 3 0
[SR3.3] 2 1 4 2 3 2 0 0 0
[SR3.4] 8 7 4 6 4 4 1 0 1
TABLE H. VERTICAL COMPARISON SCORECARD. This table allows for easy comparisons of observation rates for the nine verticals tracked in BSIMM12.
[AA1.1] 35 35 24 23 21 16 12 12 6
[AA1.2] 15 12 18 7 5 12 6 4 1
[AA1.3] 10 10 12 4 4 8 6 4 1
[AA1.4] 11 30 5 8 13 4 10 11 7
[AA2.1] 11 8 14 5 0 8 4 3 0
[AA2.2] 10 8 13 4 0 8 4 4 0
[AA3.1] 6 4 9 3 0 6 1 2 0
[AA3.2] 0 1 0 0 0 0 0 0 1
[AA3.3] 4 2 5 3 0 4 1 1 0
[CR1.2] 27 22 19 14 11 14 9 8 4
[CR1.4] 31 33 19 22 19 14 9 10 6
[CR1.5] 16 11 11 9 12 6 6 5 2
[CR1.6] 8 11 5 7 7 3 4 3 4
[CR1.7] 18 16 12 12 11 6 4 6 4
[CR2.6] 4 9 3 6 9 1 2 2 1
[CR2.7] 3 6 4 5 3 2 1 4 1
[CR3.2] 2 2 2 1 3 1 0 1 1
[CR3.3] 1 1 0 2 1 0 1 1 0
[CR3.4] 0 0 0 0 1 0 0 0 0
[CR3.5] 0 0 0 0 0 0 0 0 0
[ST1.1] 34 31 25 21 18 15 8 10 4
[ST1.3] 31 25 22 17 15 16 8 11 3
[ST1.4] 19 15 15 8 10 11 4 5 3
[ST2.4] 8 2 9 5 5 7 1 1 1
[ST2.5] 8 5 6 5 5 6 2 2 1
[ST2.6] 9 1 10 3 1 7 1 0 0
[ST3.3] 4 0 5 3 0 5 1 1 0
[ST3.4] 0 1 1 0 0 1 1 1 0
[ST3.5] 2 0 1 2 0 1 0 0 0
[ST3.6] 1 1 0 1 1 0 0 0 0
TABLE H. VERTICAL COMPARISON SCORECARD. This table allows for easy comparisons of observation rates for the nine
verticals tracked in BSIMM12.
[PT1.1] 37 32 25 19 20 15 13 12 6
[PT1.2] 32 27 21 19 20 13 9 8 7
[PT1.3] 26 29 17 18 16 10 9 9 7
[PT2.2] 15 7 10 12 5 7 2 2 1
[PT2.3] 13 13 5 7 7 2 2 4 2
[PT3.1] 8 5 11 5 5 6 2 1 2
[PT3.2] 4 4 4 3 4 2 0 0 0
[SE1.1] 17 33 12 14 16 10 11 10 4
[SE1.2] 35 37 27 24 20 17 14 12 6
[SE2.2] 14 14 16 8 8 13 2 3 1
[SE2.4] 16 2 18 8 5 13 1 1 1
[SE2.5] 16 11 10 13 9 4 3 3 3
[SE2.6] 19 20 9 16 8 6 6 7 1
[SE2.7] 14 9 6 13 4 1 2 3 1
[SE3.2] 6 1 9 2 2 5 1 1 1
[SE3.3] 4 3 1 2 3 1 2 2 1
[SE3.6] 6 5 4 4 2 5 0 0 0
[CMVM1.1] 33 35 23 20 19 14 11 12 7
[CMVM1.2] 30 28 21 21 18 16 9 9 7
[CMVM2.1] 30 29 19 19 17 14 10 9 7
[CMVM2.2] 31 26 23 18 17 15 10 8 6
[CMVM2.3] 18 26 10 13 9 6 4 5 3
[CMVM3.1] 0 2 1 1 2 0 0 0 0
[CMVM3.2] 5 2 5 4 3 3 1 0 0
[CMVM3.3] 4 3 3 3 5 2 2 1 1
[CMVM3.4] 7 5 3 8 6 2 0 2 1
[CMVM3.5] 2 4 2 4 3 1 0 1 0
[CMVM3.6] 0 0 0 0 0 0 0 0 0
[CMVM3.7] 0 0 0 0 0 0 0 0 0
TABLE H. VERTICAL COMPARISON SCORECARD. This table allows for easy comparisons of observation rates for the nine
verticals tracked in BSIMM12.
BSIMM12 (122 ACTIVITIES)
• SM1.2 Create evangelism role and perform internal marketing became SM2.7
• T1.5 Deliver role-specific advanced curriculum became T2.9
• ST2.1 Integrate opaque-box security tools into the QA process became ST1.4
• SE3.5 Use orchestration for containers and virtualized environments became SE2.7
• CMVM3.7 Streamline incoming responsible vulnerability disclosure added to the model
BSIMM9 (116 ACTIVITIES)
• SM2.5 Identify metrics and use them to drive resourcing became SM3.3
• SR2.6 Use secure coding standards became SR3.3
• SE3.5 Use orchestration for containers and virtualized environments added to the model
• SE3.6 Enhance application inventory with operations bill of materials added to the model
• SE3.7 Ensure cloud security basics added to the model
BSIMM7 (113 ACTIVITIES)
• AM1.1 Maintain and use a top N possible attacks list became AM2.5
• AM1.4 Collect and publish attack stories became AM2.6
• AM1.6 Build an internal forum to discuss attacks became AM2.7
• CR1.1 Use a top N bugs list became CR2.7
• CR2.2 Enforce coding standards became CR3.5
• SE3.4 Use application containers to support security goals added to model
BSIMM-V (112 ACTIVITIES)
• SFD2.3 Find and publish mature design patterns from the organization became SFD3.3
• SR2.1 Communicate standards to vendors became SR3.2
• CR3.1 Use automated tools with tailored rules became CR2.6
• ST2.3 Begin to build and apply adversarial security tests (abuse cases) became ST3.5
• CMVM3.4 Operate a bug bounty program added to model
BSIMM3 (109 ACTIVITIES)
• SM1.5 Identify metrics and use them to drive resourcing became SM2.5
• SM2.4 Require security sign-off became SM1.6
• AM2.3 Gather and use attack intelligence became AM1.5
• ST2.2 Drive tests with security requirements and security features became ST1.3
• PT2.1 Use pen testing tools internally became PT1.3
BSIMM1 (110 ACTIVITIES)
• Added 110 activities
TABLE I. ACTIVITY CHANGES OVER TIME. This table allows for historical review of how BSIMM activities have been added or moved over time.
[Roadmap graphic: six steps (create a software security group; inventory all software in the SSG's purview; ensure infrastructure security is applied to the software environment; deploy defect discovery against highest priority applications; publish and promote the process; mature) mapped to BSIMM activities such as CP1.1, CP1.2, CP1.3, CP2.1, SR1.1, AA1.1, AA1.4, SFD1.1, SFD1.2, SM3.4, and ST3.6, shown separately for governance-led and engineering-led approaches.]
FIGURE G. BSIMM ACTIVITY ROADMAP BY ORGANIZATIONAL APPROACH. This table uses an activity-based view to
show a common path for creating and maturing an SSI.
Tier 1 (highest priority)
• Initially: mandatory SAST & DAST before release; mandatory annual penetration test; loose remediation timelines
• Over time: mandatory SAST & DAST before release; mandatory penetration test before "major" releases; targeted remediation timelines
Tier 2
• Initially: mandatory SAST & DAST before release; loose remediation timelines
• Over time: mandatory SAST & DAST before release; mandatory annual penetration test; targeted remediation timelines
Tier 3
• Initially: [optional] SAST & DAST before release; loose remediation timelines
• Over time: [optional] SAST before release; mandatory DAST before release; targeted remediation timelines
Tier 4 (low priority)
• Initially: nothing
• Over time: [optional] SAST & DAST before release; targeted remediation timelines
FIGURE H. A SAMPLE GOVERNANCE-LED ORGANIZATION’S TESTING PROGRAM OVER TIME. Successful program
evolutions start with getting engineering groups accustomed to testing, triaging, and working through the potentially lengthy
backlog of defects, then tightening expectations over time as the governance and engineering groups mature.
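One way to make a tiered program like this enforceable is to express the per-tier requirements as data that a release gate evaluates. The Python sketch below is a minimal illustration under assumed tier names, required checks, and remediation windows; none of it comes from the BSIMM or any specific tool.

from dataclasses import dataclass
from typing import Set, Tuple

# Hypothetical policy-as-data sketch for a governance-led testing gate.
# Tier names, required scans, and remediation windows are illustrative only.
POLICY = {
    "critical": {"required": {"sast", "dast", "pentest"}, "remediation_days": 30},
    "high":     {"required": {"sast", "dast"},            "remediation_days": 60},
    "medium":   {"required": {"dast"},                    "remediation_days": 90},
    "low":      {"required": set(),                       "remediation_days": None},
}

@dataclass
class ReleaseCandidate:
    app: str
    tier: str
    completed_checks: Set[str]

def release_allowed(rc: ReleaseCandidate) -> Tuple[bool, Set[str]]:
    """Return whether the gate passes and which required checks are still missing."""
    missing = POLICY[rc.tier]["required"] - rc.completed_checks
    return (not missing, missing)

# Example: a "high" tier application that has only run SAST is blocked until DAST completes.
ok, missing = release_allowed(ReleaseCandidate("payments-api", "high", {"sast"}))
print(ok, missing)  # False {'dast'}

Tightening expectations over time then becomes an edit to the policy table rather than a change to every pipeline.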
Before any organization-wide rollout, it is wise to first trial the new process. Use the trial to validate that the process integrates into developer workflows and pipelines (pulling in data, pushing results, and creating metadata to be collected), has sufficient technology coverage, and produces results that all stakeholders can easily understand.
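As one example of what such a trial step might look like, the Python sketch below runs a scanner, pushes findings to a defect tracker, and records the metadata the pilot needs (coverage and result volume). The command, endpoint, and JSON fields are hypothetical placeholders rather than references to any particular product.

import json
import subprocess
import urllib.request
from typing import List

# Illustrative pipeline step for piloting a defect-discovery tool: run it, push
# findings to the existing defect tracker, and emit trial metadata for review.
# "example-scanner", the tracker URL, and the JSON fields are placeholders.

def run_scanner(target_dir: str) -> List[dict]:
    """Run the pilot scanner and parse its JSON findings."""
    proc = subprocess.run(
        ["example-scanner", "--format", "json", target_dir],
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout).get("findings", [])

def push_findings(findings: List[dict], tracker_url: str) -> None:
    """Send findings to the (hypothetical) defect tracker endpoint."""
    body = json.dumps({"findings": findings}).encode("utf-8")
    req = urllib.request.Request(
        tracker_url, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

def trial_metadata(findings: List[dict], files_scanned: int, files_total: int) -> dict:
    """Record what the pilot needs in order to judge coverage and signal quality."""
    return {
        "coverage": files_scanned / files_total if files_total else 0.0,
        "total_findings": len(findings),
        "high_severity": sum(1 for f in findings if f.get("severity") == "high"),
    }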
SUMMARY
Figure I organizes the steps to implement an SSI for the first time with the associated BSIMM activities, the notional
level of effort (people and budget), and suggested timing. The people and budget costs are expressed through a 1-to-3 rating to show relative level of effort while accounting for differences among organizations. The effort and cost to reach
each of these goals will vary across companies, of course, but the two primary factors that affect the level of effort are
the organization’s portfolio size and culture variance. For example, deploying static analysis against 20 applications
using a common pipeline will likely have a lower level of effort than deploying static analysis against 10 applications
built on a variety of toolchains.
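To make that comparison concrete, the rough sketch below (our illustration, not a BSIMM formula) counts the distinct pipeline integrations a static analysis rollout would need; integration work tends to track the number of distinct toolchains rather than the raw application count.

from collections import Counter
from typing import Dict

def distinct_integrations(app_toolchains: Dict[str, str]) -> int:
    """Count the distinct toolchains/pipelines that each need their own scanner integration."""
    return len(Counter(app_toolchains.values()))

# 20 applications on one shared pipeline: a single integration to build and maintain.
shared = {f"app-{i}": "shared-ci" for i in range(20)}
# 10 applications across five different toolchains: five separate integrations.
varied = {
    "a": "jenkins", "b": "gitlab", "c": "gitlab", "d": "azure", "e": "azure",
    "f": "make", "g": "make", "h": "bazel", "i": "bazel", "j": "bazel",
}
print(distinct_integrations(shared), distinct_integrations(varied))  # 1 5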
[Roadmap graphic: the same six steps and activities as Figure G, annotated with notional people, budget, and time costs for governance-led and engineering-led approaches.]
FIGURE I. BSIMM ACTIVITY ROADMAP BY ORGANIZATIONAL APPROACH WITH COST. This roadmap is supplemented
with notional costs so that organizations can plan.
[Line chart: average BSIMM participant score (0 to 40), BSIMM6 through BSIMM12.]
FIGURE J. AVERAGE BSIMM PARTICIPANT SCORE. Adding firms with less experience decreased the average score from
BSIMM6 through BSIMM8, even as remeasurements have shown that individual firm maturity increases over time. However,
starting from BSIMM9, the average and the median score started to increase.
One reason for the change in the average data pool score shown in Figure J appears to be the shifting mix of firms using the BSIMM as part of their SSI journey.
[Line chart: average and median age of SSIs entering the BSIMM (0 to 4.0 years), BSIMM6 through BSIMM12.]
FIGURE K. AVERAGE AND MEDIAN SSG AGE FOR NEW FIRMS ENTERING THE BSIMM. The median age of firms entering BSIMM6 through BSIMM8 declined, as did the average BSIMM score, while outliers in BSIMM7 and BSIMM8 resulted in a high average SSG age. Starting with BSIMM9, the median age of firms entering the BSIMM was higher again, which tracks with the increase in average BSIMM scores.
A second reason appears to be an increase in firms continuing to use the BSIMM to guide their initiatives (see Figure
L). Firms using the BSIMM as an ongoing measurement tool are likely also making sufficient improvements to justify
the ongoing creation of SSI scorecards to facilitate review.
[Bar chart: number of reassessments (0 to 30) per release, BSIMM6 through BSIMM12.]
FIGURE L. NUMBER OF FIRMS THAT RECEIVED THEIR SECOND OR HIGHER ASSESSMENT. The number of reassessments over time highlights the number of firms using the BSIMM as an ongoing measurement tool and tracks with the overall increase in average BSIMM scores.
A third reason appears to be the effect of firms aging out of the data pool (see Figure M).
[Bar chart: number of firms aged out of the data pool (0 to 25) per release, BSIMM6 through BSIMM12.]
FIGURE M. NUMBER OF FIRMS AGED OUT OF THE BSIMM DATA POOL. A total of 113 firms have aged out since BSIMM-V.
Ten of the 113 firms that had once aged out of the BSIMM data pool have subsequently rejoined with a
new assessment.
[Line chart: average financial services firm scores (32 to 44), BSIMM6 through BSIMM12.]
FIGURE N. AVERAGE FINANCIAL SERVICES FIRM SCORES. The average score across the financial services vertical
followed the same pattern as the average score for AllFirms (shown in Figure J). Even in mature verticals, we observe a rise
in the average scores over time.
[Bar chart: average SSG size, median SSG size, average SSG age, and average BSIMM score (0 to 45) for firms with and without a satellite.]
FIGURE O. STATISTICS FOR FIRMS WITH AND WITHOUT A SATELLITE OUT OF 128 BSIMM12 PARTICIPANTS. The
average SSG size for firms without a satellite was impacted by a few significant outliers. These data appear to validate the
notion that more people, both centralized and distributed into engineering teams, can help SSIs achieve higher scores. For the
65 BSIMM12 firms with a satellite at last assessment time, the average satellite size was 99 with a median of 30.
As a member, you can:
• Receive regular blog and discussion posts that share best practices, tips, and case studies.
• Bounce ideas and questions off the 700-member community.
• Attend exclusive conferences.
From content authored by industry leaders to hands-on interactions with fellow BSIMM members, the BSIMM community is a powerful resource for collaborative problem solving, thought leadership, and access to valuable content not available anywhere else.
www.bsimm.com