Gary McGraw, Ph.D., Sammy Migues, and Jacob West
The Building Security in Maturity Model (BSIMM) is the result of a multiyear study of real-world software
security initiatives. We present the BSIMM9 model as built directly out of data observed in 120 firms.
Seventy of the firms are listed in the Acknowledgments section on page 3.
The BSIMM is a measuring stick for software security. The best way to use the BSIMM is to compare and
contrast your own initiative with the data about what other organizations are doing contained in the model.
You can then identify your own goals and objectives and refer to the BSIMM to determine which additional
activities make sense for you.
The BSIMM data show that high maturity initiatives are well-rounded, carrying out numerous activities in all
12 of the practices described by the model. The model also describes how mature software security initiatives
evolve, change, and improve over time.
BSIMM9 License
This work is licensed under the Creative Commons Attribution-Share Alike 3.0 License. To view a copy of this
license, visit http://creativecommons.org/licenses/by-sa/3.0/legalcode or send a letter to Creative Commons,
171 Second Street, Suite 300, San Francisco, California, 94105, USA.
Our thanks to the 120 executives from the world-class software security initiatives around the world that we studied
to create BSIMM9, including those who chose to remain anonymous.
Our thanks also to the more than 90 individuals who helped gather the data for BSIMM. In particular, we
thank Mike Doyle, Nabil Hannan, Jason Hills, Brenton Kohler, Iman Louis, Nick Murison, Alistair Nash, Kevin
Nassery, Denis Sheridan, and Mike Ware. In addition, a special thank you to Kathy Clark-Fisher, whose
behind-the-scenes work keeps the BSIMM science project, conferences, and community on track.
Data for the BSIMM were captured by Synopsys. Resources for data analysis were provided by Oracle.
BSIMM1–BSIMM3 were authored by Gary McGraw, Ph.D., Brian Chess, Ph.D., and Sammy Migues.
BSIMM4–BSIMM8 were authored by Gary McGraw, Ph.D., Sammy Migues, and Jacob West.
Cloud Transformation: Three new activities have been added to the BSIMM model that clearly
show that software security in the cloud is becoming mainstream. Furthermore, activities
observed among independent software vendors, Internet of Things companies, and cloud firms
(three of our most distinct verticals) have begun to converge, suggesting that common cloud
architectures require similar software security approaches.
Retail: A new vertical emerged in the BSIMM data pool. Software security initiatives are
maturing relatively quickly as new models focused on e-commerce become critical to sustaining
a healthy business.
Population Growth: The BSIMM now includes data from 120 firms; the number of developers it
covers grew by 43 percent, and the number of software security practitioners it measures grew
by 65 percent.
BSIMM9 incorporates the largest set of data collected about software security anywhere. By
measuring your firm with the BSIMM measuring stick, you can directly compare and contrast
your security approach to some of the best firms in the world.
We begin with a brief description of the function and importance of a software security initiative. We then
explain our model and the method we use for quantifying the state of an initiative. Since the BSIMM study
began in 2008, we have studied 167 firms, which comprise 389 distinct measurements (some firms use the
BSIMM to measure each of their business units and some have been measured more than once). To ensure
the continued relevance of the data we report, we excluded from BSIMM9 measurements older than 42
months. The current data set comprises 320 distinct measurements collected from 120 firms. Thanks to repeat
measurements, not only do we report on current practices but also on the ways in which some initiatives have
evolved over a period of years.
By the middle of the following decade, there was an emerging consensus that building secure software required
more than just smart individuals toiling away. Getting security right means being involved in the software
development process, even as the process evolves.
We classify our work as a maturity model because improving software security almost always means changing
the way an organization works, which doesn’t happen overnight. We understand that not all organizations
need to achieve the same security goals, but we believe all organizations can benefit from using the same
measuring stick.
BSIMM9 is the ninth major version of the model. It includes updated activity descriptions, data from 120 firms
in multiple vertical markets, and a longitudinal study.
Audience
The BSIMM is meant for use by anyone responsible for creating and executing an SSI. We have observed that
successful SSIs are typically run by a senior executive who reports to the highest levels in an organization.
These executives lead an internal group that we call the software security group (SSG), which is charged with
directly executing or facilitating the activities described in the BSIMM. The BSIMM is written with the SSG and
SSG leadership in mind.
Method
We built the first version of the BSIMM a decade ago (in Fall of 2008) as follows:
• We relied on our own knowledge of software security practices to create the SSF. (We present the
framework on page 13.)
• We conducted a series of in-person interviews with nine executives in charge of SSIs. From these
interviews, we identified a set of common activities, which we organized according to the SSF.
• We then created scorecards for each of the nine initiatives that show which activities the initiatives carry
out. To validate our work, we asked each participating firm to review the framework, the practices, and the
scorecard we created for their initiative.
The BSIMM is a data-driven model that evolves over time. We have added, deleted, and adjusted the levels of
various activities based on the data observed as the project has evolved. To preserve backward compatibility,
we make all changes by adding new activity labels to the model, even when an activity has simply changed
levels. We make changes by considering outliers both in the model itself and in the levels we assigned to
various activities in the 12 practices we describe later. We use the results of an intralevel standard deviation
analysis to determine which outlier activities to move between levels, focusing on changes that minimize
standard deviation in the average number of observed activities at each level.
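To make the outlier idea concrete, the sketch below (Python) shows one way such an intralevel check could look: for each level within a practice, compare each activity's observation count to the level's mean in units of the level's standard deviation. This is only an illustration of the idea; the counts and threshold are made up, and the actual BSIMM procedure may differ in its details.

    # Illustrative only: flag activities whose observation counts sit far from
    # the mean of their level (candidate outliers for moving between levels).
    from statistics import mean, stdev

    def level_outliers(counts_by_level, threshold=2.0):
        """counts_by_level maps level -> {activity label: observation count}."""
        outliers = []
        for level, counts in counts_by_level.items():
            if len(counts) < 2:
                continue
            mu, sigma = mean(counts.values()), stdev(counts.values())
            for label, n in counts.items():
                if sigma and abs(n - mu) / sigma > threshold:
                    outliers.append((label, level, n))
        return outliers

    # Made-up counts, loosely shaped like a Strategy & Metrics practice.
    print(level_outliers({1: {"SM1.1": 71, "SM1.2": 66, "SM1.3": 67, "SM1.4": 101},
                          2: {"SM2.1": 47, "SM2.2": 42, "SM2.3": 44, "SM2.6": 39}},
                         threshold=1.4))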
We conduct BSIMM assessments through in-person interviews; we have done so with a total of 167 firms so far.
In 35 cases, we assessed the SSG and one or more business units as part of creating the corporate SSI view.
In some of those cases, we used one aggregated scorecard, whereas in others, we used multiple scorecards
for the SSG and each business unit. However, each firm is represented by only one set of data in the model
published here. The following table shows changes in the data pool over time.
BSIMM release | Firms removed | Firms in data pool
BSIMM-V | 5 | 67
BSIMM6 | 21 | 78
BSIMM7 | 13 | 95
BSIMM8 | 5 | 109
BSIMM9 | 11 | 120
For BSIMM9, we added 22 firms and removed 11, resulting in a data pool of 120 firms. We used the resulting
observation counts to refine the set of activities and their placement in the framework.
We have also conducted a second complete set of interviews with 42 of the current participating firms in
order to study how their initiatives have changed over time. Twenty firms have undertaken three BSIMM
assessments, seven have done four BSIMM assessments, and one has had five BSIMM assessments.
As a descriptive model, the only goal of the BSIMM is to observe and report. We like to say that we wandered
off into the jungle to see what we could see and discovered that “monkeys eat bananas in X of the Y jungles we
visited.” Note that the BSIMM does not report “you should only eat yellow bananas,” “do not run while eating
a banana,” “thou shalt not steal thy neighbors’ bananas,” or any other value judgments. Simple observations,
simply reported.
Our “just the facts” approach is hardly novel in science and engineering, but in the realm of software security,
it has not previously been applied on this scale. Other work has either described the experience of a single
organization or offered prescriptive guidance based purely on a combination of personal experience and opinion.
Participating Firms
The 120 participating organizations are drawn from eight well-represented verticals (with some overlap):
financial services (50), independent software vendors (42), technology (22), healthcare (19), cloud (17), Internet
of Things (16), insurance (10), and retail (10). Verticals with lower representation in the BSIMM population
include telecommunications, security, and energy. See the Acknowledgments section on page 3 for a list of
companies that graciously agreed to be identified.
On average, the 120 participating firms have practiced software security for 4.13 years at the time of the
current assessment (ranging from less than a year to 19 years as of June 2018). All 120 firms agree that the
success of their initiative hinges on their SSG, an internal group devoted to software security. SSG size on
average is 13.3 people (smallest 1, largest 160, median 5.5), with an average satellite group of developers,
architects, and people in the organization directly engaged in and promoting software security consisting of
52.4 people (smallest 0, largest 2,250, median 0). The average number of developers among our participants
was 3,463 people (smallest 20, largest 45,000, median 900), yielding an average percentage of SSG to
development of 1.33% (median 0.67%).
All told, the BSIMM describes the work of 1,600 SSG members working with a satellite of 6,291 people to
secure the software developed by 415,598 developers as part of a combined portfolio of 135,881 applications.
Activity: Actions carried out or facilitated by the software security group (SSG) as part of a
practice. Activities are divided into three levels in the BSIMM.
Domain: One of the four categories our framework is divided into: governance, intelligence,
secure software development lifecycle (SSDL) touchpoints, and deployment. See the SSF
section on page 13.
Practice: BSIMM activities are broken down into 12 categories or practices. Each domain in
the software security framework (SSF) has three practices, and the activities in each practice
are divided into an additional three levels. See the SSF section on page 13.
Secure Software Development Lifecycle (SSDL): Any software lifecycle with integrated
software security checkpoints and activities.
Software Security Framework (SSF): The basic structure underlying the BSIMM, comprising
12 practices divided into four domains. See the SSF section on page 13.
Software Security Group (SSG): The internal group charged with carrying out and facilitating
software security. According to our observations, the first step of a software security initiative
(SSI) is to form an SSG.
Governance. Practices that help organize, manage, and measure a software security
initiative. Staff development is also a central governance practice.
Intelligence. Practices that result in collections of corporate knowledge used in carrying out
software security activities throughout the organization. Collections include both proactive security
guidance and organizational threat modeling.
SSDL Touchpoints. Practices associated with analysis and assurance of particular software
development artifacts and processes.
Deployment. Practices that interface with traditional network security and software
maintenance organizations. Software configuration, maintenance, and other environment
issues have direct impact on software security.
The 12 practices:
Governance
STRATEGY & METRICS (SM)
LEVEL 1
Publish process (roles, responsibilities, plan), evolve as necessary. SM1.1 59.2
Create evangelism role and perform internal marketing. SM1.2 55.0
Educate executives. SM1.3 55.8
Identify gate locations, gather necessary artifacts. SM1.4 84.2
LEVEL 2
Publish data about software security internally. SM2.1 39.2
Enforce gates with measurements and track exceptions. SM2.2 35.0
Create or grow a satellite. SM2.3 36.7
Require security sign-off. SM2.6 32.5
LEVEL 3
Use an internal tracking application with portfolio view. SM3.1 12.5
Run an external marketing program. SM3.2 5.8
Identify metrics and use them to drive budgets. SM3.3 15.0
COMPLIANCE & POLICY (CP)
LEVEL 1
Unify regulatory pressures. CP1.1 65.8
Identify PII obligations. CP1.2 84.2
Create policy. CP1.3 55.0
LEVEL 2
Identify PII data inventory. CP2.1 32.5
Require security sign-off for compliance-related risk. CP2.2 31.7
Implement and track controls for compliance. CP2.3 35.8
Include software security SLAs in all vendor contracts. CP2.4 35.0
Ensure executive awareness of compliance and privacy obligations. CP2.5 39.2
LEVEL 3
Create a regulator compliance story. CP3.1 17.5
Impose policy on vendors. CP3.2 10.0
Drive feedback from SSDL data back to policy. CP3.3 4.2
TRAINING (T)
ACTIVITY DESCRIPTION | ACTIVITY | PARTICIPANT %
LEVEL 1
Provide awareness training. T1.1 66.7
Deliver role-specific advanced curriculum (tools, technology stacks, and bug parade). T1.5 28.3
Create and use material specific to company history. T1.6 21.7
Deliver on-demand individual training. T1.7 39.2
LEVEL 2
Enhance satellite through training and events. T2.5 17.5
Include security resources in onboarding. T2.6 19.2
LEVEL 3
Reward progression through curriculum (certification or HR). T3.1 3.3
Provide training for vendors or outsourced workers. T3.2 6.7
Host external software security events. T3.3 7.5
Require an annual refresher. T3.4 7.5
Establish SSG office hours. T3.5 4.2
Identify a satellite through training. T3.6 2.5
Intelligence
ATTACK MODELS (AM)
LEVEL 1
Create a data classification scheme and inventory. AM1.2 62.5
Identify potential attackers. AM1.3 31.7
Gather and use attack intelligence. AM1.5 44.2
LEVEL 2
Build attack patterns and abuse cases tied to potential attackers. AM2.1 8.3
Create technology-specific attack patterns. AM2.2 8.3
Build and maintain a top N possible attacks list. AM2.5 13.3
Collect and publish attack stories. AM2.6 11.7
Build an internal forum to discuss attacks. AM2.7 9.2
LEVEL 3
Have a science team that develops new attack methods. AM3.1 3.3
Create and use automation to mimic attackers. AM3.2 1.7
SECURITY FEATURES & DESIGN (SFD)
LEVEL 1
Build and publish security features. SFD1.1 79.2
Engage SSG with architecture. SFD1.2 58.3
LEVEL 2
Build secure-by-design middleware frameworks and common libraries. SFD2.1 28.3
Create SSG capability to solve difficult design problems. SFD2.2 38.3
LEVEL 3
Form a review board or central committee to approve and maintain secure design patterns. SFD3.1 7.5
Require use of approved security features and frameworks. SFD3.2 7.5
Find and publish mature design patterns from the organization. SFD3.3 1.7
STANDARDS & REQUIREMENTS (SR)
LEVEL 1
Create security standards. SR1.1 62.5
Create a security portal. SR1.2 65.0
Translate compliance constraints to requirements. SR1.3 63.3
LEVEL 2
Create a standards review board. SR2.2 31.7
Create standards for technology stacks. SR2.3 19.2
Identify open source. SR2.4 32.5
Create SLA boilerplate. SR2.5 24.2
LEVEL 3
Control open source risk. SR3.1 14.2
Communicate standards to vendors. SR3.2 8.3
Use secure coding standards. SR3.3 8.3
SSDL Touchpoints
ARCHITECTURE ANALYSIS (AA)
LEVEL 1
Perform security feature review. AA1.1 84.2
Perform design review for high-risk applications. AA1.2 27.5
Have SSG lead design review efforts. AA1.3 22.5
Use a risk questionnaire to rank applications. AA1.4 47.5
LEVEL 2
Define and use AA process. AA2.1 12.5
Standardize architectural descriptions (including data flow). AA2.2 11.7
LEVEL 3
Have software architects lead design review efforts. AA3.1 3.3
Drive analysis results into standard architecture patterns. AA3.2 1.7
Make the SSG available as an AA resource or mentor. AA3.3 2.5
CODE REVIEW (CR)
LEVEL 1
Have SSG perform ad hoc review. CR1.2 68.3
Use automated tools along with manual review. CR1.4 63.3
Make code review mandatory for all projects. CR1.5 33.3
Use centralized reporting to close the knowledge loop and drive training. CR1.6 36.7
LEVEL 2
Assign tool mentors. CR2.5 23.3
Use automated tools with tailored rules. CR2.6 16.7
Use a top N bugs list (real data preferred). CR2.7 20.8
LEVEL 3
Build a factory. CR3.2 3.3
Build a capability for eradicating specific bugs from the entire codebase. CR3.3 0.8
Automate malicious code detection. CR3.4 3.3
Enforce coding standards. CR3.5 2.5
SECURITY TESTING (ST)
LEVEL 1
Ensure QA supports edge/boundary value condition testing. ST1.1 83.3
Drive tests with security requirements and security features. ST1.3 73.3
LEVEL 2
Integrate black-box security tools into the QA process. ST2.1 25.0
Share security results with QA. ST2.4 11.7
Include security tests in QA automation. ST2.5 10.0
Perform fuzz testing customized to application APIs. ST2.6 10.8
LEVEL 3
Drive tests with risk analysis results. ST3.3 3.3
Leverage coverage analysis. ST3.4 2.5
Begin to build and apply adversarial security tests (abuse cases). ST3.5 2.5
Deployment
PENETRATION TESTING (PT)
LEVEL 1
Use external penetration testers to find problems. PT1.1 87.5
Feed results to the defect management and mitigation system. PT1.2 74.2
Use penetration testing tools internally. PT1.3 61.7
LEVEL 2
Provide penetration testers with all available information. PT2.2 21.7
Schedule periodic penetration tests for application coverage. PT2.3 17.5
LEVEL 3
Use external penetration testers to perform deep-dive analysis. PT3.1 8.3
Have the SSG customize penetration testing tools and scripts. PT3.2 5.8
SOFTWARE ENVIRONMENT (SE)
LEVEL 1
Use application input monitoring. SE1.1 48.3
Ensure host and network security basics are in place. SE1.2 86.7
LEVEL 2
Publish installation guides. SE2.2 32.5
Use code signing. SE2.4 25.8
LEVEL 3
Use code protection. SE3.2 14.2
Use application behavior monitoring and diagnostics. SE3.3 3.3
Use application containers. SE3.4 9.2
Use orchestration for containers and virtualized environments. SE3.5 0.0
Enhance application inventory with operations bill of materials. SE3.6 0.0
Ensure cloud security basics. SE3.7 0.0
CONFIGURATION MANAGEMENT & VULNERABILITY MANAGEMENT (CMVM)
LEVEL 1
Create or interface with incident response. CMVM1.1 84.2
Identify software defects found in operations monitoring and feed them back to development. CMVM1.2 85.0
LEVEL 2
Have emergency codebase response. CMVM2.1 68.3
Track software bugs found in operations through the fix process. CMVM2.2 72.5
Develop an operations inventory of applications. CMVM2.3 47.5
LEVEL 3
Fix all occurrences of software bugs found in operations. CMVM3.1 4.2
Enhance the SSDL to prevent software bugs found in operations. CMVM3.2 5.8
Simulate software crises. CMVM3.3 7.5
Operate a bug bounty program. CMVM3.4 10.8
Compliance & Policy | Security Features & Design | Code Review | Software Environment
[CP1.1] 79 [SFD1.1] 95 [CR1.2] 82 [SE1.1] 58
[CP1.2] 101 [SFD1.2] 70 [CR1.4] 76 [SE1.2] 104
[CP1.3] 66 [SFD2.1] 34 [CR1.5] 40 [SE2.2] 39
[CP2.1] 39 [SFD2.2] 46 [CR1.6] 44 [SE2.4] 31
[CP2.2] 38 [SFD3.1] 9 [CR2.5] 28 [SE3.2] 17
[CP2.3] 43 [SFD3.2] 9 [CR2.6] 20 [SE3.3] 4
[CP2.4] 42 [SFD3.3] 2 [CR2.7] 25 [SE3.4] 11
[CP2.5] 47 [CR3.2] 4 [SE3.5] 0
[CP3.1] 21 [CR3.3] 1 [SE3.6] 0
[CP3.2] 12 [CR3.4] 4 [SE3.7] 0
[CP3.3] 5 [CR3.5] 3
Training | Standards & Requirements | Security Testing | Config. Mgmt. & Vuln. Mgmt.
[T1.1] 80 [SR1.1] 75 [ST1.1] 100 [CMVM1.1] 101
[T1.5] 34 [SR1.2] 78 [ST1.3] 88 [CMVM1.2] 102
[T1.6] 26 [SR1.3] 76 [ST2.1] 30 [CMVM2.1] 82
[T1.7] 47 [SR2.2] 38 [ST2.4] 14 [CMVM2.2] 87
[T2.5] 21 [SR2.3] 23 [ST2.5] 12 [CMVM2.3] 57
[T2.6] 23 [SR2.4] 39 [ST2.6] 13 [CMVM3.1] 5
[T3.1] 4 [SR2.5] 29 [ST3.3] 4 [CMVM3.2] 7
[T3.2] 8 [SR3.1] 17 [ST3.4] 3 [CMVM3.3] 9
[T3.3] 9 [SR3.2] 10 [ST3.5] 3 [CMVM3.4] 13
[T3.4] 9 [SR3.3] 10
[T3.5] 5
[T3.6] 3
We created spider charts by noting the highest-level activity observed for each practice per BSIMM firm (a
“high-water mark”) and then averaging these values over a group of firms to produce 12 numbers (one for
each practice). The resulting spider chart plots these values on 12 spokes corresponding to the 12 practices.
Note that level 3 (the outside edge) is considered more mature than level 0 (the center point). Other, more
sophisticated analyses are possible, of course.
[Spider chart: Earth (120) average high-water marks per practice]
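For readers who want to reproduce the high-water mark calculation, here is a minimal sketch (Python). It assumes a firm's scorecard is a simple mapping from activity labels such as "SM2.3" to a 0/1 observation flag; the function names and example scorecard are illustrative, not part of the BSIMM.

    # Minimal sketch: per-practice high-water marks (0-3) and their average
    # across a group of firms, as plotted on the spider charts.
    from collections import defaultdict

    PRACTICES = ["SM", "CP", "T", "AM", "SFD", "SR",
                 "AA", "CR", "ST", "PT", "SE", "CMVM"]

    def high_water_marks(scorecard):
        """Highest activity level observed in each practice for one firm."""
        marks = {p: 0 for p in PRACTICES}
        for label, observed in scorecard.items():
            if observed:
                practice = "".join(ch for ch in label if ch.isalpha())  # "SM2.3" -> "SM"
                level = int(label[len(practice)])                       # "SM2.3" -> 2
                marks[practice] = max(marks[practice], level)
        return marks

    def average_marks(scorecards):
        """Average the per-practice high-water marks over a group of firms."""
        totals = defaultdict(float)
        for card in scorecards:
            for practice, mark in high_water_marks(card).items():
                totals[practice] += mark
        return {p: totals[p] / len(scorecards) for p in PRACTICES}

    print(high_water_marks({"SM1.4": 1, "SM2.3": 1, "AA1.1": 1, "CR1.4": 0}))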
By computing these high-water mark values and an observed score for each firm in the study, we can also
compare relative and average maturity for one firm against the others. The range of observed scores in the
current data pool is [5, 79].
The graph on the next page shows the distribution of scores among the population of 120 participating firms
(which we call Earth). To create this graph, we divided the scores into six bins. As you can see, the scores
represent a slightly skewed bell curve. We also plotted the average age of the firms’ SSIs in each bin as the
orange line on the graph. In general, firms where more BSIMM activities have been observed have older SSIs.
[Bar chart: Earth (120), number of firms per score bucket, with average SSG age (in years) per bucket; the bucket averages range from 1.0 to 8.5 years]
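A firm's observed score is simply the number of BSIMM activities observed. The sketch below (Python) shows how scores might be bucketed and paired with average SSG age, in the spirit of the graph above; the bucket boundaries and the records are placeholders, not the actual bins used in the study.

    # Illustrative sketch: count observed activities per firm, group firms into
    # score buckets, and report the average SSG age per bucket.
    def observed_score(scorecard):
        return sum(1 for observed in scorecard.values() if observed)

    def bucket_summary(firms, buckets=((0, 20), (21, 35), (36, 50),
                                       (51, 65), (66, 85), (86, 120))):
        summary = []
        for low, high in buckets:
            group = [f for f in firms if low <= f["score"] <= high]
            avg_age = sum(f["ssg_age_years"] for f in group) / len(group) if group else 0.0
            summary.append({"bucket": (low, high), "firms": len(group),
                            "avg_ssg_age": round(avg_age, 1)})
        return summary

    firms = [{"score": 12, "ssg_age_years": 1.5},
             {"score": 34, "ssg_age_years": 4.0},
             {"score": 55, "ssg_age_years": 9.0}]
    for row in bucket_summary(firms):
        print(row)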
We are pleased that the BSIMM study continues to grow year after year. The data set we report on here is over
35 times the size it was for the original publication. Note that once we exceeded a sample size of 30 firms, we
began to apply statistical analysis, yielding statistically significant results.
The scorecard you see on the next page depicts a fake firm that performs 37 BSIMM activities (noted as 1’s
in the FAKEFIRM columns), including seven activities that are the most common in their respective practices
(purple boxes). Note the firm does not perform the most commonly observed activities in the other five
practices (red boxes) and should take some time to determine whether these are necessary or useful to its
overall software security initiative. The BSIMM9 FIRMS columns show the number of observations (currently
out of 120) for each activity, allowing the firm to understand the general popularity of an activity among the
120 BSIMM9 firms.
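The comparison against the "most common" activities can also be automated. Below is a minimal sketch (Python) that, given observation counts for the whole data pool, finds the most commonly observed activity in each practice and lists the ones a firm does not perform; the inputs shown are illustrative fragments, not real BSIMM data.

    # Illustrative sketch: per-practice "most common" activities and a firm's gaps.
    def most_common_per_practice(pool_counts):
        best = {}
        for label, count in pool_counts.items():
            practice = "".join(ch for ch in label if ch.isalpha())
            if practice not in best or count > pool_counts[best[practice]]:
                best[practice] = label
        return best

    def coverage_gaps(firm_scorecard, pool_counts):
        """Most common activities (one per practice) the firm does not perform."""
        common = most_common_per_practice(pool_counts)
        return [label for label in common.values() if not firm_scorecard.get(label, 0)]

    pool = {"SM1.4": 101, "SM1.1": 71, "CP1.2": 101, "CP1.1": 79}
    firm = {"SM1.4": 1, "CP1.1": 1}
    print(coverage_gaps(firm, pool))  # -> ['CP1.2']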
In our own work using the BSIMM to assess initiatives, we have found that a spider chart built from the high-water
mark approach (based on the three levels per practice) is sufficient to obtain a low-resolution feel for
maturity, especially when working with data from a particular vertical.
[Spider chart: FAKEFIRM high-water marks plotted against the Earth (120) averages]
One meaningful comparison is to chart your own high-water mark against the averages we have published
to see how your initiative stacks up. Above, we have plotted data from the fake firm against the BSIMM Earth
graph. The breakdown of activities into levels for each practice is meant only as a guide. The levels provide a
natural progression through the activities associated with each practice, but it isn’t necessary to carry out all
activities in a given level before moving on to activities at a higher level in the same practice. That said, the
levels we have identified hold water under statistical scrutiny: Level 1 activities (straightforward and simple) are
the most commonly observed, Level 2 activities (more difficult and requiring more coordination) are somewhat
less common, and Level 3 activities (rocket science) are rarely observed.
By identifying activities from each practice that could work for you, and by ensuring proper balance across the
four domains, you can chart a path for improving your own initiative.
BSIMM9 Analysis
The BSIMM has produced a wealth of real-world data about software security.
[Spider chart: Cloud (17 of 120), Internet of Things (16 of 120), and ISV (42 of 120) average high-water marks per practice]
Cloud, Internet of Things, and independent software vendors (ISVs) are three of the most mature verticals
in the BSIMM. On average, cloud firms are noticeably more mature in the Governance practices—Strategy &
Metrics, Compliance & Policy, and Training—compared to the ISVs and Internet of Things firms. By the same
measure, Internet of Things firms show greater maturity in the Security Testing and Software Environment
practices. Despite these obvious differences, there is a great deal of overlap. We hypothesize that technology
stacks and architectures between these three verticals are converging.
[Spider chart: insurance, healthcare, and financial services average high-water marks per practice]
Three verticals in the BSIMM operate in highly regulated industries: insurance, healthcare, and financial
services. In our experience, large financial services firms reacted to regulatory changes and started their SSIs
much earlier than insurance and healthcare firms. Even as the number of financial services firms doubled over
the past five years with a large influx of newly started initiatives into the BSIMM, the financial services SSG
average age at assessment time remains 5.4 years, versus 3.1 years for insurance and 2.5 years for healthcare.
Time spent maturing their collective SSIs shows up clearly in the side-by-side comparison. Although the
insurance vertical includes some mature outliers, the data for these three regulated verticals show insurance
generally lags behind in software security. We see a starker contrast in healthcare, with virtually no outliers.
The overall maturity of the healthcare vertical remains low.
[Spider chart: cloud vs. healthcare average high-water marks per practice]
In the BSIMM population, we can find large gaps between the maturity of verticals. Consider the spider
diagram that directly compares the cloud and healthcare verticals. In this case, the delta between technology
firms that deliver cloud services and healthcare firms that are generally just getting started with software
security is rather obvious. Fortunately for verticals that find themselves behind this curve, verticals such
as cloud provide a good roadmap to faster maturity.
[Spider chart: retail average high-water marks per practice compared with the overall data pool]
For the first time, the BSIMM presents data on the retail vertical. This group, with an average SSG age of 3.2
years and average SSG size of nearly eight full-time people, seems to track closely to the overall data pool.
The most obvious differences are in the Architecture Analysis, Software Environment, and Configuration
Management & Vulnerability Management practices, where retail participants are somewhat ahead of the
average for Earth.
In the tables on the following pages, you can see the BSIMM scorecards for the eight verticals compared side
by side. In the Activity columns, we have highlighted in yellow the most common activity in each practice as
observed in the entire BSIMM data pool (120 firms).
GOVERNANCE
[SM1.1] 36 25 12 10 9 5 11 5
[SM1.2] 25 28 17 8 13 4 12 5
[SM1.3] 30 26 14 8 10 5 12 3
[SM1.4] 47 34 18 13 11 9 13 10
[SM2.1] 24 19 9 4 6 4 10 5
[SM2.2] 25 12 8 4 5 3 7 3
[SM2.3] 16 17 11 8 8 5 5 5
[SM2.6] 23 9 6 4 4 2 6 3
[SM3.1] 7 6 3 2 2 1 4 0
[SM3.2] 1 5 3 1 2 1 3 1
[SM3.3] 12 4 2 2 2 1 2 0
[CP1.1] 37 24 13 16 12 6 11 6
[CP1.2] 45 29 18 19 14 8 16 10
[CP1.3] 37 18 7 10 5 5 10 5
[CP2.1] 18 13 6 8 6 2 9 3
[CP2.2] 22 9 7 7 4 2 5 1
[CP2.3] 19 14 10 6 6 3 7 4
[CP2.4] 21 14 7 7 5 3 8 2
[CP2.5] 18 19 9 11 8 3 10 2
[CP3.1] 15 7 1 2 1 2 4 0
[CP3.2] 6 3 2 2 1 2 4 0
[CP3.3] 3 1 1 0 0 0 1 0
[T1.1] 36 31 15 10 12 6 15 8
[T1.5] 16 14 7 3 5 3 8 3
[T1.6] 10 14 8 3 7 0 6 2
[T1.7] 26 15 8 5 6 6 8 5
[T2.5] 8 8 4 2 3 2 3 3
[T2.6] 15 5 3 2 3 2 5 2
[T3.1] 0 3 2 0 1 0 3 0
[T3.2] 4 3 3 2 3 2 3 1
[T3.3] 1 5 3 1 1 1 2 0
[T3.4] 5 3 1 2 1 1 3 0
[T3.5] 1 2 0 0 0 0 2 2
[T3.6] 0 2 2 0 2 0 2 0
INTELLIGENCE
[AM1.2] 41 20 11 10 11 6 12 6
[AM1.3] 18 11 7 6 6 3 3 4
[AM1.5] 26 16 11 8 8 4 7 4
[AM2.1] 2 4 3 2 2 1 2 1
[AM2.2] 2 5 5 0 3 0 4 0
[AM2.5] 6 5 6 2 3 1 2 1
[AM2.6] 4 5 3 3 3 1 5 0
[AM2.7] 2 6 6 1 4 0 4 0
[AM3.1] 1 2 2 0 1 0 1 1
[AM3.2] 0 1 2 0 1 0 0 0
[SFD1.1] 42 32 16 14 12 9 16 8
[SFD1.2] 29 28 17 12 13 6 12 6
[SFD2.1] 14 16 8 4 6 2 8 2
[SFD2.2] 18 18 10 6 8 4 8 4
[SFD3.1] 6 2 1 0 1 0 2 1
[SFD3.2] 4 4 0 1 0 2 5 1
[SFD3.3] 0 0 1 1 1 0 0 1
[SR1.1] 41 23 12 11 10 5 12 5
[SR1.2] 36 27 13 14 10 5 14 6
[SR1.3] 40 24 15 9 10 7 9 8
[SR2.2] 23 11 6 4 3 4 6 4
[SR2.3] 13 6 4 4 3 2 5 3
[SR2.4] 15 19 10 6 8 3 10 1
[SR2.5] 14 9 6 6 5 3 4 2
[SR3.1] 7 8 6 1 4 2 5 1
[SR3.2] 3 4 4 2 3 3 2 0
[SR3.3] 4 3 4 2 4 2 1 0
SSDL TOUCHPOINTS
[AA1.1] 43 36 19 15 14 7 15 10
[AA1.2] 10 15 11 3 8 2 6 2
[AA1.3] 6 13 10 4 7 2 6 1
[AA1.4] 34 11 8 10 7 5 5 6
[AA2.1] 4 8 6 2 3 2 1 0
[AA2.2] 4 5 6 2 5 1 1 1
[AA3.1] 2 1 2 0 1 1 1 0
[AA3.2] 0 1 1 0 1 0 1 1
[AA3.3] 1 1 2 0 1 0 1 0
[CR1.2] 34 29 16 12 9 7 12 6
[CR1.4] 35 27 14 9 11 5 13 7
[CR1.5] 16 20 11 2 6 3 7 2
[CR1.6] 21 18 7 4 6 2 9 3
[CR2.5] 15 11 4 3 4 2 6 4
[CR2.6] 15 3 3 0 2 1 2 1
[CR2.7] 13 8 2 2 1 3 4 1
[CR3.2] 1 0 0 2 0 2 0 1
[CR3.3] 0 0 1 0 1 0 0 0
[CR3.4] 4 0 0 0 0 0 0 0
[CR3.5] 2 0 1 0 1 0 0 0
[ST1.1] 44 34 21 12 14 10 12 9
[ST1.3] 41 34 20 9 13 7 11 7
[ST2.1] 11 15 10 4 8 4 3 3
[ST2.4] 7 4 5 1 2 1 0 1
[ST2.5] 4 5 6 1 4 0 4 1
[ST2.6] 0 11 11 1 8 0 3 0
[ST3.3] 1 3 1 0 1 0 2 0
[ST3.4] 0 1 3 1 3 0 0 0
[ST3.5] 1 2 1 0 1 0 1 0
DEPLOYMENT
[PT1.1] 45 36 18 16 13 10 14 10
[PT1.2] 43 34 13 8 10 6 15 7
[PT1.3] 31 25 13 12 9 6 9 8
[PT2.2] 7 10 8 3 5 2 4 2
[PT2.3] 13 9 2 1 1 1 3 2
[PT3.1] 1 5 7 2 4 1 2 1
[PT3.2] 4 2 2 0 1 0 1 0
[SE1.1] 30 13 4 13 5 5 11 7
[SE1.2] 47 35 17 17 13 9 16 9
[SE2.2] 14 19 15 2 11 2 7 2
[SE2.4] 8 17 16 1 11 1 5 1
[SE3.2] 5 8 8 2 6 2 2 3
[SE3.3] 0 3 2 0 2 0 1 0
[SE3.4] 3 7 2 0 1 0 4 3
[SE3.5] 0 0 0 0 0 0 0 0
[SE3.6] 0 0 0 0 0 0 0 0
[SE3.7] 0 0 0 0 0 0 0 0
[CMVM1.1] 44 38 19 13 15 6 16 8
[CMVM1.2] 42 38 19 16 15 7 15 9
[CMVM2.1] 40 31 13 11 10 5 13 7
[CMVM2.2] 37 33 18 11 14 5 15 10
[CMVM2.3] 28 22 9 9 8 5 9 2
[CMVM3.1] 0 3 3 0 3 0 2 0
[CMVM3.2] 1 2 4 2 3 1 2 0
[CMVM3.3] 5 1 3 2 1 1 0 2
[CMVM3.4] 4 6 2 1 1 2 6 2
There are two ways of thinking about the change represented by the longitudinal scorecard (showing 42
BSIMM9 firms moving from their first to second assessment). We see the biggest changes in the following
activities: [SM1.1 Publish process (roles, responsibilities, plan), evolve as necessary], with 18 new observations;
[PT1.2 Feed results to the defect management and mitigation system], with 16 new observations; [T1.7 Deliver
on-demand individual training] and [CMVM2.3 Develop an operations inventory of applications], each with 15 new
observations.
In a different example, the activity [T1.6 Create and use material specific to company history] was both newly
observed in four firms and no longer observed in four firms. Therefore, the total observation count remains
unchanged on the scorecard. The same type of zero-sum churn also occurred in [AM2.2 Create technology-
specific attack patterns].
[Two spider charts: changes in average high-water marks per practice between successive BSIMM assessments]
Firms tend to mature between measurements, as seen in the two spider charts on pages 36 and 37. Forty-two
firms have been measured twice, and 20 firms have been measured three times.
For BSIMM9, however, the average score increased to 34.0. One reason for this change—a potential reversal
of the decline in overall maturity—appears to be the mix of firms embarking on their first BSIMM assessment.
The average SSG age for new firms entering BSIMM6 was 2.9 years; it was 3.37 years for BSIMM7 and 2.83
years for BSIMM8, but increased to 4.57 years for BSIMM9. Another reason appears to be an increase in firms
continuing to use BSIMM assessments to guide their initiatives. BSIMM7 included 11 firms that received their
second or higher assessment. That figure increased to 12 firms for BSIMM8 and 16 firms for BSIMM9.
We also see this potential reversal in mature verticals such as financial services where average overall maturity
decreased to 35.6 in BSIMM8 from 36.2 in BSIMM7 and 38.3 in BSIMM6. For BSIMM9, the average financial
services score increased to 36.8. Note that five of the 11 firms dropped from BSIMM9 due to data age were
financial services firms.
For BSIMM8, we zoomed in on two particular activities as part of our analysis. Observations of [AA3.3 Make
the SSG available as an AA resource or mentor] dropped to 2% in the BSIMM8 community, from 5% in BSIMM7,
17% in BSIMM6, and 30% in BSIMM-V. However, observations rose to 3% for BSIMM9. Observations of [SR3.3
Use secure coding standards] dropped to 14% in BSIMM8, from 18% in BSIMM7, 29% in BSIMM6, and 40% in
BSIMM-V. In this case, the slide continued to 8% for BSIMM9. This kind of change can be seen in activities
spanning all 12 practices. Instead of focusing on a robust, multiactivity approach to a given practice, many
firms have a tendency to pick one figurehead activity on which to focus their next round of investment.
Firms in the BSIMM community for multiple years have, with one or two exceptions, always increased in
maturity over time. We expect the majority of newer firms entering the BSIMM population to do the same.
BSIMM Community
The 120 firms participating in the BSIMM make up the BSIMM community. A private online community
platform with nearly 600 members provides software security personnel a forum to discuss solutions with
others who face the same issues, refine strategy with someone who has already addressed an issue, seek
out mentors from those further along a career path, and band together to solve hard problems. Community
members also receive exclusive access to topical webinars and other curated content.
The BSIMM community also hosts annual private conferences where representatives from each firm gather
in an off-the-record forum to discuss software security initiatives. To date, 15 BSIMM community conferences
have been held, eight in the United States and seven in Europe. During the conferences, representatives from
BSIMM firms give the presentations.
The BSIMM website includes a credentialed BSIMM community section where information from conferences,
working groups, and mailing list-initiated studies is posted.
Executive Leadership
Of primary interest is identifying and empowering a senior executive to manage operations, garner resources,
and provide political cover for an SSI. Grassroots approaches to software security sparked and led solely
by developers and their direct managers have a poor track record in the real world. Likewise, initiatives
spearheaded by resources from an existing network security group often run into serious trouble when
it comes time to interface with development groups. By identifying a senior executive and putting him or
her in charge of software security directly, you address two management 101 concerns: accountability and
empowerment. You also create a place in the organization where software security can take root and
begin to thrive.
The individuals in charge of the SSIs we studied have a variety of titles. Examples include Chief Data & Security
Privacy Officer, CISO, CSO, Director Enterprise Security Architecture, Director Global Security & Compliance,
Director Product Security, Executive Director Product Operations, Head Application Security Architecture &
Engineering, Head Application Security Programs, Manager InfoSec Engineering, Manager Product Security,
Managing VP of Security Engineering, Threat & Vulnerability Management Lead, VP Application Security &
Technology Analysis, VP Cybersecurity, VP InfoSec, and Web Security Manager. We observed a fairly wide
spread in exactly where the SSG is situated in the firms we studied. In particular, 63 of the 120 participating
firms have SSGs that are run by a CISO or report to a CISO as their nearest senior executive. Thirteen of the
firms report through a CTO as their closest senior executive, while 10 report to a CIO, seven to a CSO, four to
a COO, two to a CRO, and one to a CAO. Twenty of the SSGs report through some type of technology or
product organization.
SSGs come in a variety of shapes and sizes, but all good SSGs appear to include both people with deep coding
experience and people with architectural chops. As you will see below, software security can’t only be about
finding specific bugs, such as the OWASP Top Ten. Code review is an important best practice, and to perform
code review, you must actually understand code (not to mention the huge piles of security bugs). However, the
best code reviewers sometimes make poor software architects, and asking them to perform an architecture
risk analysis will only result in blank stares. Make sure that you cover architectural capabilities in your SSG
as well as you cover code. Finally, SSGs are often asked to mentor, train, and work directly with hundreds of
developers. Communication skills, teaching capability, and practical knowledge are must-haves for at least a
portion of the SSG staff. For more about this issue, see our Search Security article based on SSG structure data
gathered at the 2014 BSIMM Community Conference: How to Build a Team for Software Security Management.
Although no two of the 120 firms we examined had exactly the same SSG structure (suggesting that there is no
one set way to structure an SSG), we did observe some commonalities that are worth mentioning. At the highest
level of organization, SSGs come in five major flavors: 1) organized to provide software security services, 2)
organized around setting policy, 3) mirroring business unit organizations, 4) organized with a hybrid policy and
services approach, and 5) structured around managing a distributed network of others doing software security
work. Some SSGs are highly distributed across a firm and others are centralized. If we look across all of the SSGs
in our study, though, there are several common subgroups: people dedicated to policy, strategy, and metrics;
internal “services” groups that (often separately) cover tools, penetration testing, and middleware development
plus shepherding; incident response groups; groups responsible for training development and delivery;
externally-facing marketing and communications groups; and vendor control groups.
In the statistics reported above, we noted an average ratio of SSG to development of 1.33% across the entire
group of 120 organizations that we studied, meaning we found one SSG member for every 75 developers
when we averaged the ratios for each participating firm. The SSG with the largest ratio was 10%, and the
smallest was 0.01%. As a reminder, SSG size on average among the 120 firms was 13.3 people (smallest 1,
largest 160, median 5.5).
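Note that the 1.33% figure is the average of the per-firm ratios rather than the ratio of the totals. A tiny sketch (Python) of that calculation, with made-up numbers:

    # Average of per-firm SSG-to-developer ratios (the statistic quoted above),
    # not total SSG members divided by total developers. Numbers are made up.
    firms = [{"ssg": 5, "developers": 400},
             {"ssg": 20, "developers": 3000},
             {"ssg": 1, "developers": 50}]
    avg_of_ratios = sum(f["ssg"] / f["developers"] for f in firms) / len(firms)
    print(f"average SSG:developer ratio = {avg_of_ratios:.2%}")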
Satellite
In addition to the SSG, many SSIs have identified a number of individuals (often developers, testers, and
architects) who share a basic interest in software security but are not directly employed in the SSG. When
people like this carry out software security activities, we call this group a satellite.
Of particular interest, 27 of the 30 firms with the highest BSIMM scores have a satellite, with an average satellite
size of nearly 182 people. Outside the top 30, 27 of the remaining 90 firms have a satellite (30%). Of the 30 firms
with the lowest BSIMM scores, only three have a satellite, and the bottom 10 have no satellite at all.
Sixty-four percent of firms that have been assessed more than once have a satellite, while 65% of firms on
their first assessment do not. Firms that are new to software security take some time to identify and develop a
satellite. These data suggest that as an SSI matures, its activities become distributed and institutionalized into
the organizational structure. Among our population of 120 firms, initiatives tend to evolve from centralized and
specialized in the beginning to decentralized and distributed (with an SSG at the core orchestrating things).
Everybody Else
Our survey participants have engaged everyone involved in the software development lifecycle (SDLC) as a
means of addressing software security.
• Builders, including developers, architects, and their managers, must practice security engineering, ensuring
that the systems that they build are defensible and not riddled with holes. The SSG will interact directly with
builders when they carry out the activities described in the BSIMM. Generally speaking, as an organization
matures, the SSG attempts to empower builders so that they can carry out most BSIMM activities themselves,
with the SSG helping in special cases and providing oversight. We often don't explicitly point out whether a
given activity is to be carried out by the SSG, developers, or testers. Each organization should come up with
an approach that makes sense and accounts for its own workload and software lifecycle.
• Testers concerned with routine testing and verification should do what they can to keep an eye out for
security problems. Some BSIMM activities in the Security Testing practice can be carried out directly by QA.
• Operations people must continue to design, defend, and maintain reasonable environments. As you will
see in the Deployment domain of the software security framework (SSF), software security doesn’t end
when software is “shipped.” This includes cloud software and DevOps shops.
• Administrators must understand the distributed nature of modern systems and begin to practice the
principle of least privilege, especially when it comes to the applications they host or attach to as services in
the cloud.
• Executives and middle management, including line of business owners and product managers, must
understand how early investment in security design and security analysis affects the degree to which users
will trust their products. Business requirements should explicitly address security needs. Any sizeable
business today depends on software to work. Software security is a business necessity.
• Vendors, including those who supply COTS, custom software, and software-as-a-service, are increasingly
subjected to SLAs and reviews (such as vBSIMM) that help ensure products are the result of a secure SDLC.
SM LEVEL 1
[SM1.1: 71] Publish process (roles, responsibilities, plan), evolve as necessary.
The process for addressing software security is broadcast to all stakeholders so that everyone knows the
plan. Goals, roles, responsibilities, and activities are explicitly defined. Most organizations pick and choose
from a published methodology, such as the Microsoft SDL or the Synopsys Touchpoints, and then tailor the
methodology to their needs. An SSDL process must be adapted to the specifics of the development process
it governs (e.g., waterfall, agile, CI/CD, DevOps, etc.) because it will evolve with both the organization and the
security landscape. A process must be published to count. In many cases, the methodology is controlled by the
SSG and published only internally. The SSDL does not need to be publicly promoted outside of the firm to have
the desired impact.
SM LEVEL 2
[SM2.1: 47] Publish data about software security internally.
The SSG publishes data internally about the state of software security within the organization to facilitate
improvement. This information might come in the form of a dashboard with metrics for executives and
software development management. Sometimes, these published data are not shared with everyone in the
firm but with the relevant executives only. In such cases, publishing the information to executives who then
drive change in the organization is necessary. In other cases, open book management and data published to all
stakeholders helps everyone know what’s going on, with the philosophy that sunlight is the best disinfectant.
If the organization’s culture promotes internal competition between groups, this information adds a security
dimension to the game. The time compression associated with CI/CD calls for measurements that can be taken
quickly and accurately, focusing less on historical trends (e.g., bugs per release) and more on speed (e.g.,
time to fix).
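As an example of the speed-focused measurements mentioned above, the sketch below (Python) computes a median time to fix from defect records; the record format and dates are illustrative.

    # Illustrative sketch: median time to fix for security defects,
    # a speed-oriented metric suited to CI/CD time compression.
    from datetime import date
    from statistics import median

    defects = [{"opened": date(2018, 5, 1), "fixed": date(2018, 5, 4)},
               {"opened": date(2018, 5, 2), "fixed": date(2018, 5, 30)},
               {"opened": date(2018, 6, 1), "fixed": date(2018, 6, 3)}]

    days_to_fix = [(d["fixed"] - d["opened"]).days for d in defects]
    print(f"median time to fix: {median(days_to_fix)} days")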
SM LEVEL 3
[SM3.1: 15] Use an internal tracking application with portfolio view.
The SSG uses a centralized tracking application to chart the progress of every piece of software in its purview,
regardless of development methodology. The application records the security activities scheduled, in progress,
and completed, incorporating results from activities such as architecture analysis, code review, and security
testing even when they happen in a tight loop. The SSG uses the tracking application to generate portfolio
reports for many of the metrics it uses. A combined inventory and risk posture view is fundamental. In many
cases, these data are published at least among executives. Depending on the culture, this can cause interesting
effects via internal competition. As an initiative matures and activities become more distributed, the SSG uses
the centralized reporting system to keep track of all the moving parts.
CP LEVEL 1
[CP1.1: 79] Unify regulatory pressures.
If the business or its customers are subject to regulatory or compliance drivers such as GDPR, FFIEC, GLBA,
OCC, PCI DSS, SOX, HIPAA, or others, the SSG acts as a focal point for understanding the constraints such
drivers impose on software. In some cases, the SSG creates a unified approach that removes redundancy
and conflicts from overlapping compliance requirements. A formal approach will map applicable portions
of regulations to control statements explaining how the organization complies. As an alternative, existing
business processes run by legal or other risk and compliance groups outside the SSG could also serve as the
regulatory focal point. A single set of software security guidance ensures that compliance work is completed
as efficiently as possible. Some firms move on to guide exposure by becoming directly involved in standards
groups exploring new technologies in order to influence the regulatory environment.
CP LEVEL 2
[CP2.1: 39] Identify PII data inventory.
The organization identifies the kinds of PII processed or stored by each of its systems and their data
repositories, including mobile and cloud environments. A PII inventory can be approached in two ways: starting
with each individual application by noting its PII use or starting with particular types of PII and the applications
that touch them. In either case, an inventory of data repositories is required. Note that when applications are
distributed across multiple deployment environments, PII inventory control can get tricky. Do not ignore it.
Likewise, do not ignore the constantly evolving definitions of PII. When combined with the organization’s PII
obligations, this inventory guides privacy planning. For example, the SSG can now create a list of databases
that would require customer notification if breached or referenced in crisis simulations (see [CMVM3.3 Simulate
software crises]).
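A PII inventory can be kept in a structure that supports both starting points described above (by application or by PII type) and can feed the breach-notification list. The sketch below (Python) is illustrative; the applications, repositories, and sensitivity rules are placeholders.

    # Illustrative sketch: a small PII inventory queryable by application or by
    # PII type, plus the repositories that would require customer notification.
    from collections import defaultdict

    inventory = [{"application": "billing", "repository": "billing-db",
                  "pii": ["name", "card_number"]},
                 {"application": "portal", "repository": "user-store",
                  "pii": ["name", "email"]}]

    def by_pii_type(records):
        index = defaultdict(list)
        for rec in records:
            for kind in rec["pii"]:
                index[kind].append(rec["application"])
        return dict(index)

    def notification_targets(records,
                             sensitive=frozenset({"card_number", "ssn", "health_record"})):
        """Repositories whose breach would trigger customer notification."""
        return sorted({r["repository"] for r in records if sensitive & set(r["pii"])})

    print(by_pii_type(inventory))
    print(notification_targets(inventory))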
CP LEVEL 3
[CP3.1: 21] Create a regulator compliance story.
The SSG has the information regulators want. A combination of written policy, controls documentation, and
artifacts gathered through the SSDL gives the SSG the ability to demonstrate the organization’s compliance story
without a fire drill for every audit or piece of paper for every sprint. In some cases, regulators, auditors, and
senior management are satisfied with the same
kinds of reports, which might be generated
directly from various tools. Governments are
not the only regulators of behavior.
T LEVEL 1
[T1.1: 80] Provide awareness training.
The SSG provides awareness training in order to promote a culture of software security throughout the
organization. Training might be delivered via SSG members, an outside firm, the internal training organization,
or e-learning. Course content isn’t necessarily tailored for a specific audience. For example, all programmers, QA
engineers, and project managers could attend the same “Introduction to Software Security” course, but this activity
should be enhanced with a tailored approach that addresses a firm’s culture explicitly. Generic introductory courses
that cover basic IT or high-level software security concepts do not generate satisfactory results. Likewise, awareness
training aimed only at developers and not at other roles in the organization is insufficient.
[T1.5: 34] Deliver role-specific advanced curriculum (tools, technology stacks, and bug parade).
Software security training goes beyond building awareness by enabling trainees to incorporate security
practices into their work. The training is tailored to cover the tools, technology stacks, development
methodologies, and bugs that are most relevant to the trainee. An organization might offer four tracks for
its engineers: one for architects, one for Java developers, one for mobile developers, and a fourth for testers.
Tool-specific training is also commonly observed in a curriculum. Don’t forget that training will be useful for
many different roles in an organization, including QA, product management, executives, and others.
T LEVEL 2
T LEVEL 3
AM LEVEL 1
[AM1.2: 75] Create a data classification scheme and inventory.
The organization agrees on a data classification scheme and uses it to inventory its software according to
the kinds of data the software handles, regardless of whether the software is on or off premise. This allows
applications to be prioritized by their data classification. Many classification schemes are possible—one
approach is to focus on PII, for example. Depending on the scheme and the software involved, it could be
AM LEVEL 2
[AM2.1: 10] Build attack patterns and abuse cases tied to potential attackers.
The SSG prepares for security testing and architecture analysis by building attack patterns and abuse cases
tied to potential attackers (see [AM1.3 Identify potential attackers]). These resources don’t have to be built from
scratch for every application to be useful. Instead, there could be standard sets for applications with similar
profiles. The SSG will add to the pile based on attack stories. For example, a story about an attack against
a poorly designed cloud application could lead to a cloud security attack pattern that drives a new type of
testing. If a firm tracks fraud and monetary costs associated with particular attacks, this information can be
used to prioritize the process of building attack patterns and abuse cases.
AM LEVEL 3
[AM3.1: 4] Have a science team that develops new attack methods.
The SSG has a science team that works to identify and defang new classes of attacks before real attackers even
know that they exist. Because the security implications of new technologies have not been fully explored in
the wild, doing it yourself is sometimes the best way forward. This isn’t a penetration testing team finding new
instances of known types of weaknesses—it’s a research group that innovates new types of attacks. A science
team may include well-known security researchers who publish their findings at conferences like DEF CON.
SFD LEVEL 2
[SFD2.1: 34] Build secure-by-design middleware frameworks and common libraries.
The SSG takes a proactive role in software design by building or providing pointers to secure-by-design
middleware frameworks or common libraries. In addition to teaching by example, this middleware aids
architecture analysis and code review because the building blocks make it easier to spot errors. For example,
the SSG could modify a popular web framework, such as Spring, to make it easy to meet input validation
requirements. Eventually, the SSG can tailor code review rules specifically for the components it offers (see
[CR2.6 Use automated tools with tailored rules]). When adopting a middleware framework (or any other widely
used software), careful vetting for security before publication is important. Encouraging adoption and use of
insecure middleware does not help the software security situation. Generic open source software security
architectures, including OWASP ESAPI, should not be considered secure by design. Bolting security on at the
end by calling a library is not the way to approach secure design.
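To illustrate the "common library" idea without tying it to any particular framework, here is a minimal allow-list validation helper (Python). The field names and rules are placeholders; a real secure-by-design component would be vetted by the SSG and wired into the organization's actual frameworks.

    # Illustrative sketch: a reusable allow-list input validation helper of the
    # kind an SSG might publish as part of a common library.
    import re

    RULES = {"username": re.compile(r"^[A-Za-z0-9_]{3,32}$"),
             "account_id": re.compile(r"^[0-9]{1,12}$")}

    class ValidationError(ValueError):
        pass

    def validate(field, value):
        """Reject anything that does not match the allow-list rule for the field."""
        rule = RULES.get(field)
        if rule is None:
            raise ValidationError(f"no validation rule defined for {field!r}")
        if not rule.fullmatch(value):
            raise ValidationError(f"invalid value for {field!r}")
        return value

    print(validate("username", "alice_01"))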
SR LEVEL 1
[SR1.1: 75] Create security standards.
Software security requires much more than security features, but security features are part of the job as
well. The SSG meets the organization’s demand for security guidance by creating standards that explain the
SR LEVEL 2
[SR2.2: 38] Create a standards review board.
The organization creates a standards review board to formalize the process used to develop standards and
ensure that all stakeholders have a chance to weigh in. The review board could operate by appointing a
champion for any proposed standard, putting the onus on the champion to demonstrate that the standard
meets its goals and to get approval and buy-in from the review board. Enterprise architecture or enterprise
risk groups sometimes take on the responsibility of creating and managing standards review boards.
SR LEVEL 3
[SR3.1: 17] Control open source risk.
The organization has control over its exposure to the vulnerabilities that come along with using open source
components and their army of dependencies. Use of open source could be restricted to predefined projects
or to open source versions that have been through an SSG security screening process, had unacceptable
vulnerabilities remediated, and are made available only through internal repositories. The legal department
often spearheads additional open source controls due to the “viral” license problem associated with GPL code.
In general, getting the legal department to understand security risks can help move an organization to improve
its open source practices. Of course, this control must be applied across the software portfolio.
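One simple form of this control is screening a build's dependency list against an internal allow-list and a known-bad list before use. The sketch below (Python) is illustrative only; the package names are invented, and a real implementation would draw on an internal repository or a software composition analysis tool rather than hardcoded sets.

    # Illustrative sketch: screen declared dependencies against SSG-approved and
    # known-vulnerable lists. Package names and versions are invented.
    APPROVED = {("examplehttp", "2.1.0"), ("exampleweb", "1.4.2")}
    KNOWN_BAD = {("examplelog", "1.0.3")}

    def screen(dependencies):
        findings = []
        for name, version in dependencies:
            if (name, version) in KNOWN_BAD:
                findings.append(f"BLOCK {name}=={version}: known vulnerable version")
            elif (name, version) not in APPROVED:
                findings.append(f"REVIEW {name}=={version}: not yet through SSG screening")
        return findings

    print(screen([("examplehttp", "2.1.0"), ("examplelog", "1.0.3"), ("newlib", "0.1.0")]))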
AA LEVEL 1
[AA1.1: 101] Perform security feature review.
To get started in architecture analysis, center the process on a review of security features. Security-aware
reviewers identify the security features in an application (authentication, access control, use of cryptography,
etc.) and then study the design looking for problems that would cause these features to fail at their purpose
or otherwise prove insufficient. For example, a system that was subject to escalation of privilege attacks
because of broken access control or a mobile application that stashed away PII on local storage would both be
identified in this kind of review. At higher levels of maturity, the activity of reviewing features is eclipsed by a
more thorough approach to AA. In some cases, use of the firm’s secure-by-design components can streamline
this process.
AA LEVEL 2
[AA2.1: 15] Define and use AA process.
The SSG defines and documents a process for AA and applies it in the design reviews it conducts to find
flaws. This process includes a standardized approach for thinking about attacks, security properties, and the
associated risk, and it is defined rigorously enough that people outside the SSG can be taught to carry it out.
Particular attention should be paid to documentation of both the architecture under review and any security
flaws uncovered. Tribal knowledge doesn’t count as a defined process. Microsoft’s STRIDE and Synopsys’ ARA
are examples of this process, although even these two methodologies for AA have evolved greatly over time.
AA LEVEL 3
[AA3.1: 4] Have software architects lead design review efforts.
Software architects throughout the organization lead the AA process most of the time. Although the SSG still
might contribute to AA in an advisory capacity or under special circumstances, this activity requires a well-
understood and well-documented process (see [AA2.1 Define and use AA process]). Even then, consistency is
difficult to attain because breaking architecture requires experience.
CR LEVEL 1
[CR1.2: 82] Have the SSG perform ad hoc review.
The SSG performs an ad hoc code review for high-risk applications in an opportunistic fashion, such as by
following up the design review for high-risk applications with a code review. At higher maturity levels, this
informal targeting is replaced with a systematic approach. SSG review could involve the use of specific tools
and services, or it might be manual, but it has to be proactive. When new technologies pop up, new approaches
to code review might become necessary.
[CR1.4: 76] Use automated tools along with manual review.
Incorporate static analysis into the code review process to make code review more efficient and more
consistent. The automation doesn't replace human judgment, but it does bring definition to the review process
and security expertise to reviewers who are not security experts. Note that a specific tool might not cover an
entire portfolio, especially when new languages are involved, but that's no excuse not to review the code. A firm
may use an external service vendor as part of a formal code review process for software security, and this service
should be explicitly connected to a larger SSDL applied during software development, not just used to "check the
security box" on the path to deployment.
[CR1.6: 44] Use centralized reporting to close the knowledge loop and drive training.
The bugs found during code review are tracked in a centralized repository that makes it possible to do both
summary and trend reporting for the organization. Code review information can be incorporated into a CISO-
level dashboard that includes feeds from other parts of the security organization (e.g., penetration tests,
security testing, black-box testing, and white-box testing). The SSG can also use the reports to demonstrate
progress and drive the training curriculum (see [SM3.3 Identify metrics and use them to drive budgets]). Individual
bugs make excellent training examples.
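A toy sketch of the summary and trend reporting a centralized repository enables follows; the finding records are invented for illustration.

# Minimal sketch: summarize centrally tracked code review bugs by
# category and by month. The input records are hypothetical.
from collections import Counter

findings = [
    {"cwe": "CWE-89", "team": "payments", "month": "2018-06"},
    {"cwe": "CWE-79", "team": "portal",   "month": "2018-06"},
    {"cwe": "CWE-89", "team": "payments", "month": "2018-07"},
]

by_cwe = Counter(f["cwe"] for f in findings)
by_month = Counter(f["month"] for f in findings)

print("Top bug categories:", by_cwe.most_common(3))   # candidates for training examples
print("Findings per month:", sorted(by_month.items()))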
CR LEVEL 3
[CR3.2: 4] Build a factory.
Combine assessment results so that multiple analysis techniques feed into one reporting and remediation
process. The SSG might write scripts to invoke multiple detection techniques automatically and combine the
results into a format that can be consumed by a single downstream review and reporting solution. Analysis
engines may combine static and dynamic analysis, and different review streams, such as mobile versus
standard approaches, can be unified with a factory. The tricky part of this activity is normalizing vulnerability
information from disparate sources that use conflicting terminology. In some cases, using a standardized
taxonomy (perhaps a CWE-like approach) can help with normalization. Combining multiple sources helps
drive better-informed risk mitigation decisions.
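The normalization step might look something like the sketch below, which maps output from two hypothetical analysis engines onto one CWE-keyed schema. The tool formats and field names are assumptions, not any specific product's output.

# Minimal sketch: normalize findings from two hypothetical analysis
# engines into one schema so a single downstream reporting and
# remediation process can consume them.
from dataclasses import dataclass

@dataclass
class Finding:
    cwe: str
    location: str
    severity: str
    source: str

def from_static_tool(raw: dict) -> Finding:
    # hypothetical static analyzer output
    return Finding(cwe=raw["cwe_id"], location=raw["file"],
                   severity=raw["sev"].lower(), source="static")

def from_dynamic_tool(raw: dict) -> Finding:
    # hypothetical dynamic scanner output that uses different terminology
    return Finding(cwe=raw["classification"], location=raw["url"],
                   severity={"HIGH": "high", "MED": "medium"}.get(raw["risk"], "low"),
                   source="dynamic")

unified = [
    from_static_tool({"cwe_id": "CWE-89", "file": "db.py:42", "sev": "High"}),
    from_dynamic_tool({"classification": "CWE-89", "url": "/search", "risk": "HIGH"}),
]
for finding in unified:
    print(finding)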
ST LEVEL 1
[ST1.1: 100] Ensure QA supports edge/boundary value condition testing.
The QA team goes beyond functional testing to perform basic adversarial tests and probe simple edge
cases and boundary conditions, no attacker skills required. When QA understands the value of pushing past
standard functional testing using acceptable input, it begins to move slowly toward thinking like an adversary.
A discussion of boundary value testing leads naturally to the notion of an attacker probing the edges on
purpose. What happens when you enter the wrong password over and over?
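For illustration only, boundary and hostile inputs can be expressed as ordinary parameterized tests; the validate_username function and its module are hypothetical.

# Minimal sketch: boundary and edge-case inputs for a hypothetical
# username field; myapp.validation is an assumed module, not a real one.
import pytest
from myapp.validation import validate_username   # hypothetical

@pytest.mark.parametrize("candidate", [
    "",                     # empty input
    " " * 64,               # whitespace only
    "a" * 10_000,           # far past the documented maximum length
    "admin'--",             # SQL metacharacters
    "\u202euser",           # unicode control character
])
def test_hostile_usernames_are_rejected(candidate):
    assert validate_username(candidate) is False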
[ST1.3: 88] Drive tests with security requirements and security features.
Testers target declarative security mechanisms with tests derived from requirements and security features.
A tester could try to access administrative functionality as an unprivileged user, for example, or verify that
a user account becomes locked after some number of failed authentication attempts. For the most part,
security features can be tested in a fashion similar to other software features; security mechanisms based
on requirements such as account lockout, transaction limitations, entitlements, and so on are also tested. Of
course, software security is not security software, but getting started with features is easy. New deployment
models, such as cloud, might require novel test approaches.
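A minimal sketch of requirement-driven security tests follows; the client fixture, endpoints, status codes, and lockout threshold are assumptions, not part of the model.

# Minimal sketch: tests derived directly from security requirements.
# The client fixture, endpoints, and lockout threshold are hypothetical.
def test_unprivileged_user_cannot_reach_admin(client, regular_user):
    response = client.get("/admin/users", auth=regular_user)
    assert response.status_code in (401, 403)

def test_account_locks_after_five_failed_logins(client, regular_user):
    for _ in range(5):
        client.post("/login", data={"user": regular_user.name, "password": "wrong"})
    response = client.post("/login", data={"user": regular_user.name,
                                           "password": regular_user.password})
    assert response.status_code == 423          # locked, per the requirement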
ST LEVEL 3
[ST3.3: 4] Drive tests with risk analysis results.
Testers use architecture analysis results (see [AA2.1 Define and use AA process]) to direct their work. If the
architecture analysis concludes that “the security of the system hinges on the transactions being atomic and
not being interrupted partway through,” for example, then torn transactions will become a primary target in
adversarial testing. Adversarial tests like these can be developed according to risk profile, with high-risk flaws
at the top of the list.
PT LEVEL 1
[PT1.1: 105] Use external penetration testers to find problems.
Many organizations aren’t willing to address software
security until there’s unmistakable evidence that the
organization isn’t somehow magically immune to the
problem. If security has not been a priority, external penetration testers can demonstrate that the
organization's code needs help. Penetration testers could be brought in to break a high-profile application
to make the point. Over time, the focus of penetration testing moves from "I told you our stuff was broken"
to a smoke test and sanity check done before shipping. External penetration testers bring a new set of eyes
to the problem. If your penetration tester doesn't ask for the code, you need a new penetration tester.
[PT1.2: 89] Feed results to the defect management
and mitigation system.
Penetration testing results are fed back to development through established defect management or mitigation
channels, and development responds via a defect management and release process. Emailing them around
doesn’t count. Properly done, the exercise demonstrates the organization’s ability to improve the state
of security, and many firms are beginning to emphasize the critical importance of not just identifying but
actually fixing security problems. One way to ensure attention is to add a security flag to the bug-tracking and
defect management system. Evolving DevOps and integrated team structures do not eliminate the need for
formalized defect management systems.
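As a hedged sketch of feeding results through established channels rather than email, a finding might be filed against a generic REST-style tracker with an explicit security label. The URL and payload fields below are assumptions, not any specific product's API.

# Minimal sketch: file a penetration test finding in a generic,
# hypothetical bug tracker with an explicit security flag.
import requests

TRACKER_URL = "https://tracker.example.com/api/issues"   # hypothetical endpoint

def file_security_defect(title: str, detail: str, severity: str, token: str) -> str:
    payload = {
        "title": title,
        "description": detail,
        "labels": ["security", f"severity:{severity}"],   # the security flag
        "component": "appsec",
    }
    response = requests.post(TRACKER_URL, json=payload,
                             headers={"Authorization": f"Bearer {token}"},
                             timeout=10)
    response.raise_for_status()
    return response.json()["id"]     # tracked through the normal fix process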
PT LEVEL 3
[PT3.2: 7] Have the SSG customize penetration testing tools and scripts.
The SSG either creates penetration testing tools or adapts publicly available ones to more efficiently and
comprehensively attack the organization’s systems. Tools improve the efficiency of the penetration testing
process without sacrificing the depth of problems that the SSG can identify. Automation can be particularly
valuable under agile methodologies because it helps teams go faster. Tools that can be tailored are always
preferable to generic tools. This activity considers both the depth of tests and their scope.
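One small, hypothetical example of such tailoring: a probe that encodes organization-specific knowledge, such as a legacy internal debug header, and checks whether services still honor it. The header name and hosts are invented.

# Minimal sketch: a tailored probe that checks whether internal services
# still honor a legacy debug header. Header name and hosts are hypothetical.
import requests

LEGACY_DEBUG_HEADER = {"X-Acme-Debug": "1"}        # organization-specific knowledge
TARGETS = ["https://internal-api.example.com/health",
           "https://payments.example.com/health"]

for url in TARGETS:
    try:
        response = requests.get(url, headers=LEGACY_DEBUG_HEADER, timeout=5)
    except requests.RequestException as error:
        print(f"{url}: unreachable ({error})")
        continue
    leaked = "stack trace" in response.text.lower()
    print(f"{url}: debug header {'accepted' if leaked else 'ignored'}")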
SE LEVEL 1
[SE1.2: 104] Ensure host and network security basics are in place.
The organization provides a solid foundation for software by ensuring that host and network security basics
are in place. Operations security teams are usually responsible for patching operating systems, maintaining
firewalls, and properly configuring cloud services, but doing software security before network security is like
putting on pants before putting on underwear.
SE LEVEL 2
[SE2.2: 39] Publish installation guides.
The SSDL requires the creation of an installation guide or a clearly described configuration, such as for a
container, to help deployment teams and operators install and configure the software securely. If special
steps are required to ensure a deployment is secure, the steps are either outlined in the installation guide
or explicitly noted in deployment automation. The guide should include a discussion of COTS components,
too. In some cases, installation guides are distributed to customers who buy the software. Make sure that all
deployment automation can be understood by smart humans and not just by a machine. Evolving DevOps
and integrated team structures do not eliminate the need for human-readable guidance. Of course, secure by
default is always the best way to go.
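To make the guidance machine-checkable as well as human-readable, one hypothetical approach pairs the installation guide with a check that the documented secure settings hold in the deployment descriptor. The file name and keys below are assumptions.

# Minimal sketch: verify a deployment descriptor matches the secure
# configuration documented in the installation guide. File name and
# keys are hypothetical.
import json

REQUIRED_SETTINGS = {            # mirrors the installation guide
    "debug": False,
    "tls_enabled": True,
    "default_admin_password_changed": True,
}

with open("deploy-config.json") as handle:
    config = json.load(handle)

problems = [key for key, expected in REQUIRED_SETTINGS.items()
            if config.get(key) != expected]
if problems:
    raise SystemExit(f"insecure deployment settings: {problems}")
print("deployment matches the documented secure configuration")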
CMVM LEVEL 1
[CMVM1.2: 102] Identify software defects found in operations monitoring and feed them
back to development.
Defects identified through operations monitoring are fed back to development and used to change developer
behavior. The contents of production logs can be revealing (or can reveal the need for improved logging). In
some cases, providing a way to enter incident triage data into an existing bug-tracking system (perhaps making
use of a special security flag) seems to work. The idea is to close the information loop and make sure that
security problems get fixed. In the best of cases, processes in the SSDL can be improved based on operational
data.
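A toy sketch of closing the loop: collapsing repeated operational security events into one triage record per signature before they become security-flagged bug-tracker entries. The event format and signature scheme are invented for illustration.

# Minimal sketch: collapse repeated operational security events into
# one triage record per signature. Event fields are hypothetical.
from collections import defaultdict

events = [
    {"rule": "sqli-attempt", "endpoint": "/search", "count": 1},
    {"rule": "sqli-attempt", "endpoint": "/search", "count": 1},
    {"rule": "auth-failure-burst", "endpoint": "/login", "count": 40},
]

triage = defaultdict(int)
for event in events:
    signature = (event["rule"], event["endpoint"])
    triage[signature] += event["count"]

for (rule, endpoint), total in triage.items():
    # each record becomes one security-flagged entry in the bug tracker
    print(f"file ticket: {rule} at {endpoint} (observed {total} times)")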
CMVM LEVEL 2
[CMVM2.1: 82] Have emergency codebase response.
The organization can make quick code changes when an application is under attack. A rapid-response
team works in conjunction with the application owners and the SSG to study the code and the attack, find a
resolution, and push a patch into production. Often, the emergency response team is the development team
itself, especially when agile methodologies are in use. Fire drills don’t count; a well-defined process is required,
and a process that has never been used might not actually work.
[CMVM2.2: 87] Track software bugs found in operations through the fix process.
Defects found in operations are fed back to development, entered into established defect management
systems, and tracked through the fix process. This capability could come in the form of a two-way bridge
between the bug finders and the bug fixers. Make sure the loop is closed completely. Setting a security flag in
the bug-tracking system can help facilitate tracking.
CMVM LEVEL 3
[CMVM3.1: 5] Fix all occurrences of software bugs found in operations.
The organization fixes all instances of each bug found during operations, not just the small number of
instances that trigger bug reports. This requires the ability to reexamine the entire codebase when new kinds
of bugs come to light (see [CR3.3 Build a capability for eradicating specific bugs from the entire codebase]). One way to
approach this is to create a rule set that generalizes a deployed bug into something that can be scanned
for via automated code review.
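As an illustration of turning one deployed bug into a scannable rule, the sketch below walks a Python codebase and flags every call to a hypothetical insecure helper that caused a production incident; the helper name is an assumption.

# Minimal sketch: find every occurrence of the pattern behind one
# production bug (a call to a hypothetical insecure helper) so all
# instances can be fixed, not just the one that was reported.
import ast
import pathlib

INSECURE_CALL = "render_unescaped"     # hypothetical helper behind the incident

def find_occurrences(root: str):
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(), filename=str(path))
        except SyntaxError:
            continue
        for node in ast.walk(tree):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id == INSECURE_CALL):
                yield f"{path}:{node.lineno}"

if __name__ == "__main__":
    for hit in find_occurrences("."):
        print(hit)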
We have added, deleted, and adjusted the levels of various activities based on the data observed as the study
continues. To preserve backward compatibility, all changes are made by adding new activity labels to the
model, even when an activity has simply changed levels. We make changes by considering outliers both in
the model itself and in the levels we assigned in the 12 practices. We use the results of an intralevel standard
deviation analysis to determine which outlier activities to move between levels, focusing on changes that
minimize standard deviation in the average number of observed activities at each level.
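For readers curious what such an analysis looks like mechanically, the toy sketch below computes the per-level spread of observation counts and flags activities that sit far from their level's mean. The counts are invented for illustration and are not BSIMM data.

# Toy sketch of an intralevel outlier check. The observation counts
# below are invented; they are not BSIMM data.
from statistics import mean, stdev

observations = {                        # activity -> times observed
    1: {"X1.1": 101, "X1.2": 95, "X1.3": 12},     # level 1
    2: {"X2.1": 40, "X2.2": 38, "X2.3": 35},      # level 2
    3: {"X3.1": 5,  "X3.2": 4,  "X3.3": 44},      # level 3
}

for level, counts in observations.items():
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    outliers = [name for name, count in counts.items()
                if sigma and abs(count - mu) > sigma]
    print(f"level {level}: mean={mu:.1f}, stdev={sigma:.1f}, outliers={outliers}")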
The resulting adjustments in BSIMM9 are as follows:
1. [SM2.5 Identify metrics and use them to drive budgets] became SM3.3
2. [SR2.6 Use coding standards] became SR3.3
3. [SE3.5 Use orchestration for containers and virtualized environments] added to the model
4. [SE3.6 Enhance application inventory with operations bill of materials] added to the model
5. [SE3.7 Ensure cloud security basics] added to the model
We also carefully considered, but did not adjust, [T1.6 Create and use material specific to company history].
The activities that are now SM3.3 and SR3.3 both started as level 1 activities. The BSIMM1 activity [SM1.5
Identify metrics and use them to drive budgets] became SM2.5 in BSIMM3 and is now moved to SM3.3.
The BSIMM1 activity [SR1.4 Use coding standards] became SR2.6 in BSIMM6 and is now moved to SR3.3.
We noted in BSIMM7 that, for the first time, one activity, [AA3.2 Drive analysis results into standard architecture
patterns], was not observed in the current data set, and there were no new observations of AA3.2 for BSIMM8.
AA3.2 does have two observations in BSIMM9, and there are no activities with zero observations except for
the three just added.
One question that recently came up is, “Where do activities go to die?” We’ve noticed that a handful of activities
have moved from level 1 through level 2 to level 3 for all the wrong reasons. These activities may disappear
in future BSIMM iterations. The two most prominent contenders are [T3.5 Establish SSG office hours] and [T3.6
Identify a satellite through training], both of which appear to be going extinct. Less pronounced, but still worth
noting, are [SM3.3 Identify metrics and use them to drive budgets] and [SR3.3 Use secure coding standards].
Level 1 Activities
Governance
Strategy & Metrics (SM)
• Publish process (roles, responsibilities, plan), evolve as necessary. [SM1.1]
• Create evangelism role and perform internal marketing. [SM1.2]
• Educate executives. [SM1.3]
• Identify gate locations, gather necessary artifacts. [SM1.4]
Training (T)
• Provide awareness training. [T1.1]
• Deliver role-specific advanced curriculum (tools, technology stacks, and bug parade). [T1.5]
• Create and use material specific to company history. [T1.6]
• Deliver on-demand individual training. [T1.7]
Intelligence
Attack Models (AM)
• Create a data classification scheme and inventory. [AM1.2]
• Identify potential attackers. [AM1.3]
• Gather and use attack intelligence. [AM1.5]
Deployment
Penetration Testing (PT)
• Use external penetration testers to find problems. [PT1.1]
• Feed results to the defect management and mitigation system. [PT1.2]
• Use penetration testing tools internally. [PT1.3]
Level 2 Activities
Governance
Strategy & Metrics (SM)
• Publish data about software security internally. [SM2.1]
• Enforce gates with measurements and track exceptions. [SM2.2]
• Create or grow a satellite. [SM2.3]
• Require security sign-off. [SM2.6]
Training (T)
• Enhance satellite through training and events. [T2.5]
• Include security resources in onboarding. [T2.6]
Intelligence
Attack Models (AM)
• Build attack patterns and abuse cases tied to potential attackers. [AM2.1]
• Create technology-specific attack patterns. [AM2.2]
• Build and maintain a top N possible attacks list. [AM2.5]
• Collect and publish attack stories. [AM2.6]
• Build an internal forum to discuss attacks. [AM2.7]
SSDL Touchpoints
Architecture Analysis (AA)
• Define and use AA process. [AA2.1]
• Standardize architectural descriptions (including data flow). [AA2.2]
Level 3 Activities
Governance
Strategy & Metrics (SM)
• Use an internal tracking application with portfolio view. [SM3.1]
• Run an external marketing program. [SM3.2]
• Identify metrics and use them to drive budgets. [SM3.3]
Training (T)
• Reward progression through curriculum (certification or HR). [T3.1]
• Provide training for vendors or outsourced workers. [T3.2]
• Host external software security events. [T3.3]
• Require an annual refresher. [T3.4]
• Establish SSG office hours. [T3.5]
• Identify a satellite through training. [T3.6]
SSDL Touchpoints
Architecture Analysis (AA)
• Have software architects lead design review efforts. [AA3.1]
• Drive analysis results into standard architecture patterns. [AA3.2]
• Make the SSG available as an AA resource or mentor. [AA3.3]
Code Review (CR)
• Build a factory. [CR3.2]
• Build a capability for eradicating specific bugs from the entire codebase. [CR3.3]
• Automate malicious code detection. [CR3.4]
• Enforce coding standards. [CR3.5]
Security Testing (ST)
• Drive tests with risk analysis results. [ST3.3]
• Leverage coverage analysis. [ST3.4]
• Begin to build and apply adversarial security tests (abuse cases). [ST3.5]
Deployment
Penetration Testing (PT)
• Use external penetration testers to perform deep-dive analysis. [PT3.1]
• Have the SSG customize penetration testing tools and scripts. [PT3.2]
Software Environment (SE)
• Use code protection. [SE3.2]
• Use application behavior monitoring and diagnostics. [SE3.3]
• Use application containers. [SE3.4]
• Use orchestration for containers and virtualized environments. [SE3.5]
• Enhance application inventory with operations bill of materials. [SE3.6]
• Ensure cloud security basics. [SE3.7]
Configuration Management & Vulnerability Management (CMVM)
• Fix all occurrences of software bugs found in operations. [CMVM3.1]
• Enhance the SSDL to prevent software bugs found in operations. [CMVM3.2]
• Simulate software crises. [CMVM3.3]
• Operate a bug bounty program. [CMVM3.4]
Go to www.BSIMM.com