BSIMM10
The BSIMM is the result of a multiyear study of real-world software security initiatives (SSIs). We present the BSIMM10
model as built directly out of data observed in 122 firms. These firms are listed in the Acknowledgments section.
The BSIMM is a measuring stick for software security. The best way to use it is to compare and contrast your own
initiative with the data about what other organizations are doing. You can identify your own goals and objectives, then
refer to the BSIMM to determine which additional activities make sense for you.
The purpose of the BSIMM is to quantify the activities carried out by various kinds of SSIs. Because these initiatives
use different methodologies and different terminology, the BSIMM requires a framework that allows us to describe
any initiative in a uniform way. Our software security framework (SSF) and activity descriptions provide a common
vocabulary for explaining the salient elements of an SSI, thereby allowing us to compare initiatives that use different
terms, operate at different scales, exist in different parts of the organizational chart, operate in different vertical markets,
or create different work products.
The BSIMM data show that high maturity initiatives are well-rounded, carrying out numerous activities in all 12 of the
practices described by the model. The data also show how maturing SSIs evolve, change, and improve over time.
We classify the BSIMM as a maturity model because improving software security almost always means changing the
way an organization works, which doesn’t happen overnight. We understand that not all organizations need to achieve
the same security goals, but we believe all organizations can benefit from using a common measuring stick. The BSIMM
is not a traditional maturity model where a set of activities are repeated at multiple levels of depth and breadth—do
something at level 1, do it more at level 2, do it better at level 3, and so on. Instead, the BSIMM comprises a single set of
unique activities, and the activity levels are used only to distinguish the relative frequency with which the activities are
observed in organizations. Frequently observed activities are designated as “level 1,” less frequently observed activities are
designated “level 2,” and infrequently observed activities are designated “level 3.”
We hold the scorecards for individual firms in confidence, but we publish aggregate data describing the number of times
we have observed each activity (see the BSIMM10 Scorecard in Part Two). We also publish observations about subsets
(such as industry verticals) when our sample size for the subset is large enough to guarantee anonymity.
Our thanks also to the more than 100 individuals who helped gather the data for the BSIMM.
In particular, we thank Matthew Chartrand, Sagar Dongre, Michael Doyle, Eli Erlikhman, Jacob Ewers, Stephen
Gardner, Nabil Hannan, Iman Louis, Daniel Lyon, Nick Murison, Alistair Nash, Kevin Nassery, Donald Pollicino, and
Denis Sheridan. In addition, we give a special thank you to Kathy Clark-Fisher, whose behind-the-scenes work keeps the
BSIMM science project, conferences, and community on track.
Data for the BSIMM were captured by Synopsys. Resources for BSIMM10 data analysis were provided by ZeroNorth.
BSIMM1–BSIMM3 were authored by Gary McGraw, Ph.D., Brian Chess, Ph.D., and Sammy Migues. BSIMM4–
BSIMM9 were authored by Gary McGraw, Ph.D., Sammy Migues, and Jacob West. BSIMM10 was authored
by Sammy Migues, Mike Ware, and John Steven.
BSIMM HISTORY
We built the first version of the BSIMM a little over a decade ago (Fall 2008) as follows:
• We relied on our own knowledge of software security practices to create the software security framework
(SSF, found in Part Two).
• We conducted a series of in-person interviews with nine executives in charge of software security initiatives
(SSIs). From these interviews, we identified a set of common activities, which we organized according to
the SSF.
• We then created scorecards for each of the nine initiatives that showed which activities the initiatives carry
out. To validate our work, we asked each participating firm to review the framework, the practices, and the
scorecard we created for their initiative.
Today, we continue to evolve the model by looking for new activities as participants are added and as current participants
are remeasured. We also adjust the model according to observation rates for each of the activities.
THE MODEL
The BSIMM is a data-driven model that evolves over time. We have added, deleted, and adjusted the levels of various
activities based on the data observed as the project has evolved. To preserve backward compatibility, we make all changes
by adding new activity labels to the model, even when an activity has simply changed levels (e.g., we add a new CRx.x
label for both new and moved activities in the Code Review practice). When considering whether to add a new activity,
we analyze whether the effort we’re observing is truly new to the model or simply a variation on an existing activity. When
considering whether to move an activity between levels, we use the results of an intralevel standard deviation analysis and
the trend in observation counts.
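To illustrate (not prescribe) what that kind of intralevel analysis can look like, here is a minimal Python sketch that flags activities whose observation counts sit more than one standard deviation from the mean of their current level. The Code Review labels are real, but the counts are made-up example data, not BSIMM figures.

```python
from statistics import mean, stdev

# Hypothetical observation counts per activity, grouped by current level (example data only).
observations = {
    1: {"CR1.2": 109, "CR1.4": 112, "CR1.5": 67, "CR1.6": 68},
    2: {"CR2.5": 65, "CR2.6": 29, "CR2.7": 35},
    3: {"CR3.2": 10, "CR3.3": 2, "CR3.4": 4, "CR3.5": 2},
}

def flag_level_outliers(level_counts):
    """Flag activities whose counts diverge from the average of their level."""
    candidates = []
    for level, counts in level_counts.items():
        mu, sigma = mean(counts.values()), stdev(counts.values())
        for activity, n in counts.items():
            if abs(n - mu) > sigma:  # candidate for moving up or down a level
                candidates.append((activity, level, n, round(mu, 1)))
    return candidates

# Prints the activities whose observation counts stand out within their level.
print(flag_level_outliers(observations))
```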
We use an in-person interview technique to conduct BSIMM assessments and have done so with a total of 185 firms so far. In addition, we conducted assessments for nine organizations that have rejoined the data pool after once aging out. In 46
cases, we assessed the software security group (SSG) and one or more business units as part of creating the corporate
SSI view.
For most organizations, we create a single aggregated scorecard, whereas in others, we create individual scorecards for
the SSG and each business unit. However, each firm is represented by only one set of data in the model published here.
(Table 3, “BSIMM Numbers Over Time” in the appendix, shows changes in the data pool over time.)
As a descriptive model, the only goal of the BSIMM is to observe and report. We like to say we visited a neighborhood to
see what was happening and observed that “there are robot vacuum cleaners in X of the Y houses we visited.” Note that
the BSIMM does not say, “all houses must have robot vacuum cleaners,” “robots are the only acceptable kind of vacuum
cleaners,” “vacuum cleaners must be used every day,” or any other value judgements. Simple observations simply reported.
Our “just the facts” approach is hardly novel in science and engineering, but in the realm of software security, it has not
previously been applied at this scale. Other work around modeling SSIs has either described the experience of a single
organization or offered prescriptive guidance based purely on a combination of personal experience and opinion.
We also carefully considered but did not adjust [AM2.2 Create technology-specific attack patterns] at this time; we
will do so if the observation rate continues to decrease. Similarly, we considered and did not adjust [CR2.5 Assign tool
mentors] but will do so if the observation rate continues to increase.
As concrete examples of how the BSIMM functions as an observational model, consider the activities that are now
SM3.3 and SR3.3, which both started as level 1 activities. The BSIMM1 activity [SM1.5 Identify metrics and use them
to drive budgets] became SM2.5 in BSIMM3 and is now SM3.3 due to its decreased observation rate. Similarly, the
BSIMM1 activity [SR1.4 Use coding standards] became SR2.6 in BSIMM6 and is now SR3.3. To date, no activity has
migrated from level 3 to level 1.
We noted in BSIMM7 that, for the first time, an activity ([AA3.2 Drive analysis results into standard architecture
patterns]) was not observed in the current dataset, and there were no new observations of AA3.2 for BSIMM8. AA3.2
did have two observations in BSIMM9 and one observation in BSIMM10; there are currently no activities with zero
observations (except for the three just added).
We continue to ponder the question, “Where do activities go when no one does them anymore?” In addition to
SM3.3 and SR3.3 mentioned above, we’ve noticed that the observation rate for other seemingly useful activities
has decreased significantly in recent years:
• [T3.5 Establish SSG office hours] – observed in 11 of 42 firms in BSIMM3 and 1 of 122 firms in BSIMM10
• [AA3.2 Drive analysis results into standard architecture patterns] – observed in 20 of 67 firms in BSIMM-V
and 4 of 122 firms in BSIMM10
• [CR3.5 Enforce coding standards] – observed in 13 of 51 firms in BSIMM4 and 2 of 122 firms in BSIMM10
We, of course, keep a close watch on the BSIMM data pool and will make adjustments if and when the time comes, which
might include dropping an activity from the model.
Fifty of the current participating firms have been through at least two assessments, allowing us to study how their
initiatives have changed over time. Twenty-one firms have undertaken three BSIMM assessments, eight have done four,
and two have had five assessments.
BSIMM10 is our first study to formally reflect software security changes driven by engineering-led efforts, meaning
efforts originating bottom-up in the development and operations teams rather than originating top-down from a
centralized SSG. These results show up here in the form of new activities, in new examples of how existing activities are conducted, and in discussions of the paths organizations might follow toward maturity over time.
Table 1. BSIMM ExampleFirm Scorecard. A scorecard is helpful for understanding efforts currently underway and where to focus next.
Figure 1. AllFirms vs. ExampleFirm Spider Chart. Charting high-water mark values provides a low-resolution view of maturity that can
be useful for comparisons between firms, between business units, and within the same firm over time.
By identifying activities from each practice that could work for you, and by ensuring proper balance with respect to domains,
you can create a strategic plan for your SSI moving forward. Note that most SSIs are multiyear efforts with real budget,
mandate, and ownership behind them. Although all initiatives look different and are tailored to fit a particular organization,
all initiatives share common core activities (see “Table 7. Most Common Activities Per Practice,” in the appendix).
SSI PHASES
No matter an organization’s culture, all firms strive to reach similar peaks on their journey. Over time, we find that SSIs
typically progress through three states:
• Emerging. An organization tasked with booting a new SSI from scratch or formalizing nascent or ad hoc
security activities into a holistic strategy. An emerging SSI has defined its initial strategy, implemented
foundational activities, acquired some resources, and might have a roadmap for the next 12 to 24 months of its
evolution. SSI leadership working on a program’s foundations are often resource-constrained on both people
and budget, and might use compliance requirements or other executive mandates as the initial drivers to
continue adding activities.
• Maturing. An organization with an existing or emerging software security approach connected to executive
expectations for managing software security risk and progressing along a roadmap for scaling security
capabilities. A maturing SSI works to cover a greater percentage of the firm’s technology stacks, software
portfolio, and engineering teams (in-house and supply chain). SSI leadership maturing a program might be
adding fewer activities while working on depth, breadth, and cost effectiveness of ongoing activities.
• Optimizing. An organization that’s fine-tuning and evolving its existing security capabilities (often with a
risk-driven approach), having a clear view into operational expectations and associated metrics, adapting to
technology change drivers, and demonstrating business value as a differentiator. The SSI leader optimizing their
program might also be undergoing an evolution from technology executive to business enabler.
It’s compelling to imagine that organizations could self-assess and determine that by doing X number of activities,
they qualify as emerging, maturing, or otherwise. However, experience shows that SSIs can reach a “maturing” stage
by conducting the activities that are right for them without regard for the total count. This is especially true when
considering software portfolio size and the relative complexity of maturing or optimizing some activities across 1, 10, 100,
and 1,000 applications.
In addition, organizations don’t always progress from emerging to optimizing in one direction or in a straight path. We
have seen SSIs form, break up, and re-form over time, so one SSI might go through the emerging cycle a few times
over the years. An SSI’s capabilities might not all progress through the same states at the same time. We’ve noted cases
where one capability—vendor management, for example—might be emerging while the defect management capability
is maturing, and the defect discovery capability is optimizing. There is constant change in tools, skill levels, external
expectations, attackers and attacks, resources, and everything else. Pay attention to the relative frequency with which
the BSIMM activities are observed across all the participants, but use your own metrics to determine if you’re making the
progress that’s right for you.
MOVING FORWARD
We frequently observe governance-driven SSIs planning centrally, seeking to proactively define an ideal risk posture
during their emerging phase. After that, the initial uptake of provided controls (e.g., security testing) is usually led by
the teams that have experienced real security issues and are looking for help. These firms often struggle during the
maturation phase where growth will incur significant expense and effort as the SSG scales the controls and their benefits
enterprise-wide. We observe that emerging engineering-driven efforts prototype controls incrementally, building on the
existing tools and techniques that already drive software delivery. Gains happen quickly in these emerging efforts, perhaps
given the steady influx of new tools and techniques introduced by engineering, but also helped along by the fact that each
team is usually working in a homogeneous culture on a single application and technology stack. Even so, these groups also
struggle to institutionalize durable gains during their maturation phase, usually because the engineers have not been able
to turn capability into either secure-by-default functionality or automation-friendly assurance—at least not beyond the
most frequently encountered security issues and beyond their own spheres of influence. Scaling an SSI across a software
portfolio is hard for everyone.
Emerging engineering-driven groups tend to view security as an enabler of software features and code quality. These
groups recognize the need for having security standards but tend to prefer “governance as code” as opposed to a “manual
steps with human review” approach to enforcement. This tends to result in engineers building security features and
frameworks into architectures, automating defect discovery techniques within a software delivery pipeline, and treating
security defects like any other defect. Traditional human-driven security decisions are modeled into a software-defined
workflow as opposed to written into a document and then implemented in a separate risk workflow handled outside of
engineering. In this type of culture, it’s not that the traditional SDLC gates and risk decisions go away, it’s that they get
implemented differently and they usually have different goals compared to those of the governance-driven groups.
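To make the "governance as code" idea concrete, here is a minimal sketch of a pipeline gate that files scanner findings as ordinary defects and fails the build only when a codified policy is violated. The findings format, severity names, and the file_defect helper are illustrative assumptions, not an observed implementation.

```python
import json
import sys

# Codified policy: which findings break the build vs. simply become backlog defects.
POLICY = {"fail_on": {"critical", "high"}, "max_blocking": 0}

def file_defect(finding: dict) -> None:
    # Placeholder for the team's normal defect tracker integration.
    print(f"Filed defect: [{finding['severity']}] {finding['title']}")

def gate(findings_path: str) -> int:
    """Read scanner output, file every finding as a normal defect, then enforce the policy."""
    with open(findings_path) as f:
        findings = json.load(f)  # assumed shape: list of {"id", "severity", "title"} dicts

    for finding in findings:
        file_defect(finding)  # security defects are treated like any other defect

    blocking = [f for f in findings if f["severity"] in POLICY["fail_on"]]
    if len(blocking) > POLICY["max_blocking"]:
        print(f"Gate failed: {len(blocking)} blocking security defects")
        return 1  # nonzero exit fails this pipeline stage
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```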
Note that an SSI leader with a young initiative (less than one year) working on the foundations should not expect or
set out to quickly implement a large number of BSIMM activities. Firms can only absorb a limited amount of cultural
and process change at any given time. The BSIMM10 data show that SSIs having an age of less than one year at time of
assessment have an average score of 20.8 (26 of 122 firms).
GOVERNANCE-LED CULTURE
Governance-driven SSIs almost always begin their journey by appointing an SSI owner tasked with shepherding the organization through understanding scope, approach, and priorities. Once an SSI owner is in place, his or her first order of business is likely to establish centralized structure. This structure might not involve hiring staff immediately, but it will likely entail implementing key foundational activities central to supporting assurance objectives that are further defined and institutionalized in policy [CP1.3], standards [SR1.1], and processes [SM1.1].

GETTING STARTED CHECKLIST
1. Leadership. Put someone in charge of software security, and provide the resources he or she will need to succeed.
2. Inventory software. Know what you have, where it is, and when it changes.
3. Select in-scope software. Decide what you're going to focus on first.
4. Ensure host and network security basics. Don't put good software on bad systems or in poorly constructed networks (cloud or otherwise).
5. Do defect discovery. Determine the issues in today's production software and plan for tomorrow.
6. Select security controls. Start with controls that establish some risk management to prevent recurrence of issues you're seeing today.
7. Repeat. Expand the team, improve the inventory, automate the basics, do more prevention, and then repeat again.

Inventory Software
We observe governance-led SSIs seeking an enterprise-wide perspective when building an initial view into their software portfolio. Engaging directly with application business owners, these cultures prefer to cast a wide net through questionnaire-style data gathering to build their initial application inventory [CMVM2.3]. These SSIs tend to focus on applications (with owners who are responsible for risk management) as the unit of measure in their inventory rather than software, which might include many vital components that aren't applications. In addition to understanding application profile characteristics (e.g., programming language, architecture type such as web or mobile, revenue generated) as a view into risk, these cultures tend to focus on understanding where sensitive data resides and flows (e.g., PII inventory) [CP2.1] along with the status of active development projects.
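As a minimal sketch of what one questionnaire-derived inventory entry might capture, the record below folds in the profile characteristics mentioned above; the field names and sample values are illustrative assumptions rather than a BSIMM-defined schema.

```python
from dataclasses import dataclass, field

@dataclass
class AppRecord:
    """One questionnaire-derived entry in a governance-led application inventory."""
    name: str
    business_owner: str          # who is responsible for risk management
    language: str                # e.g., Java, Python
    architecture: str            # e.g., "web", "mobile"
    revenue_generating: bool
    pii_types: list[str] = field(default_factory=list)   # supports a PII inventory [CP2.1]
    in_active_development: bool = False

inventory = [
    AppRecord("payments-portal", "Retail Banking", "Java", "web",
              True, ["name", "account number"], True),
    AppRecord("marketing-site", "Marketing", "Python", "web", False),
]

# Example portfolio view: applications holding PII, for breach-notification planning.
holds_pii = [app.name for app in inventory if app.pii_types]
print(holds_pii)
```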
As mentioned earlier, organizations don’t always progress from emerging to optimizing in one direction or in a straight
path, and some SSI capabilities might be optimizing while others are still emerging. Based on our experience, firms
with some portion of their SSI operating in an optimized state have likely been in existence for longer than three years.
Although we don’t have enough data to generalize this class of initiative, we do see common themes for those who strive
to reach this state:
• Top N risk reduction. Relentlessly identify and close top N weaknesses, placing emphasis on obtaining visibility into
all sources of vulnerability, whether in-house developed code, open source code [SR3.1], vendor code [SR3.2],
tool chains, or any associated environments and processes. These top N risks are specific to the organization,
evaluated at least annually, and tied to metrics as a way to prioritize SSI efforts to improve risk posture.
• Tool customization. SSI leaders place a concerted effort into tuning tools (e.g., static analysis customization)
to improve accuracy, consistency, and depth of analysis [CR2.6]. Customization focuses on improving results fidelity, applicability across the portfolio, and ease of use for everyone.
• Feedback loops. Loops are specifically created between SSDL activities to improve effectiveness, as
deliverables from SSI capabilities ebb and flow with each other. As an example, an expert within QA might
leverage architecture analysis results when creating security test cases [ST2.4]. Likewise, feedback from the
field might be used to drive SSDL improvement through enhancements to a hardening standard [CMVM3.2].
• Data-driven governance. Leaders instrument everything to collect data that in turn become metrics for
measuring SSI efficiency and effectiveness against KRIs and KPIs [SM3.1]. As an example, metrics such as
defect density might be leveraged to track performance of individual business units and application teams.
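The defect-density example in the last bullet above reduces to simple arithmetic; the sketch below shows one way such a KPI input might be rolled up per business unit. The record shape and data are made up for illustration.

```python
from collections import defaultdict

# Assumed record shape: one entry per confirmed security defect, tagged with the owning
# business unit and the size (in KLOC) of the application it was found in.
defects = [
    {"business_unit": "payments", "app": "portal", "kloc": 120},
    {"business_unit": "payments", "app": "portal", "kloc": 120},
    {"business_unit": "claims", "app": "intake", "kloc": 40},
]

def defect_density_by_unit(records):
    """Security defects per KLOC, rolled up per business unit (toy aggregation)."""
    counts = defaultdict(int)
    kloc = defaultdict(int)
    seen_apps = set()
    for r in records:
        counts[r["business_unit"]] += 1
        if r["app"] not in seen_apps:      # count each application's size only once
            kloc[r["business_unit"]] += r["kloc"]
            seen_apps.add(r["app"])
    return {unit: round(counts[unit] / kloc[unit], 3) for unit in counts}

print(defect_density_by_unit(defects))  # {'payments': 0.017, 'claims': 0.025}
```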
Drivers differ by organization, but engineering-led groups have been observed to use the following as input when
prioritizing in-scope software:
• Velocity. Teams conducting active new development or major refactoring.
• Regulation. Those services or data repositories to which specific development or configuration requirements
for security or privacy apply [CP1.1, CP1.2].
• Opportunity. Those teams solving critical technical challenges or adopting key technologies that potentially
serve as proving grounds for emerging security controls.
Beyond immutable constraints like the applicability of regulation, we see evidence that assignment can be rather
opportunistic and perhaps driven “bottom-up” by security engineers and development managers themselves. In these
cases, the security initiative’s leader often seeks opportunities to cull their efforts and scale key successes rather than
direct the use of controls top-down.
Ensure Host and Network Security Basics
We observe [SE1.2 Ensure host and network security basics] no less frequently in engineering-led groups than in governance-led organizations. Security engineers might begin by conducting this work manually, then bake
these settings and changes into their software-defined infrastructure scripts to ensure both consistent application within
a development team and scalable sharing across the organization.
Forward-looking organizations that have adopted software and network orchestration technologies (e.g., Kubernetes,
Envoy, Istio) get maximum impact from this activity with the efforts of even an individual contributor, such as a security-
minded DevOps engineer. While organizations often have hardened container or host images on which software
deployments are based, software-defined networks and features from cloud service providers allow additional control
at the scale of infrastructure.
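One hedged sketch of "baking these settings in": a pre-deployment check that compares a software-defined host description against a hardening baseline before an image is built. The baseline values and the host definition format are assumptions for illustration only.

```python
# Illustrative hardening baseline an engineer might encode after doing the work manually once.
BASELINE = {
    "ssh_password_auth": False,
    "root_login": False,
    "open_ports": {22, 443},
}

def check_host_definition(host: dict) -> list[str]:
    """Return human-readable baseline violations for one software-defined host."""
    violations = []
    if host.get("ssh_password_auth", True) != BASELINE["ssh_password_auth"]:
        violations.append("SSH password authentication must be disabled")
    if host.get("root_login", True) != BASELINE["root_login"]:
        violations.append("Direct root login must be disabled")
    extra_ports = set(host.get("open_ports", [])) - BASELINE["open_ports"]
    if extra_ports:
        violations.append(f"Unexpected open ports: {sorted(extra_ports)}")
    return violations

# Example: a host definition pulled from infrastructure-as-code templates.
print(check_host_definition({"ssh_password_auth": False, "root_login": True,
                             "open_ports": [22, 443, 8080]}))
```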
Our observations are that engineering-led groups are starting with open source and home-grown security
tools, with much less reliance on “big box” vulnerability discovery products. Generally, these groups hold to
two heuristics:
• Lengthening time to (or outright preventing) delivery is unacceptable. Instead, organize to provide telemetry
and then respond asynchronously through virtual patching, rollback, or other compensating controls.
• Build vulnerability detective capability incrementally, in line with a growing understanding of software misuse
and abuse and associated business risk, rather than purchasing boxed security standards as part of a vendor’s
core rule set.
These groups might build on top of in-place test scaffolding, might purposefully extend open source scanners that
integrate cleanly with their development tool chain, or both. Extension often focuses on a different set of issues than
characterized in general lists such as the OWASP Top 10, or even the broader set of vulnerabilities found by commercial
tools. Instead, these groups sometimes focus on denial of service, misuse/abuse of business functionality, or enforcement
of the organization’s technology-specific coding standards (even when these are implicit rather than written down).
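As a hedged example of extending tooling toward an organization-specific standard rather than a generic Top 10 list, the sketch below uses Python's ast module to flag direct calls to a hypothetical in-house function (raw_sql) that a coding standard says must only be used through an approved wrapper; the rule and sample code are invented for illustration.

```python
import ast

# Hypothetical organization-specific rule: raw_sql() must only be used via the approved
# data-access wrapper, never called directly from application code.
BANNED_CALL = "raw_sql"

def find_banned_calls(source: str, filename: str = "<memory>"):
    """Return (file, line) locations where the banned call appears."""
    hits = []
    tree = ast.parse(source, filename)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
            if name == BANNED_CALL:
                hits.append((filename, node.lineno))
    return hits

sample = "def load(user):\n    return raw_sql('select * from users where id=%s' % user)\n"
print(find_banned_calls(sample, "orders/load.py"))  # [('orders/load.py', 2)]
```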
Document the SSDL
Engineering-led cultures typically eschew document- or presentation-based deliverables in favor of code-based
deliverables. As the group seeks to apply its efforts to a larger percentage of the firm’s software, it might work to
institute some form of knowledge sharing process to get the security activities applied across teams. To this end, security
leaders might create a “one-pager” describing the security tools and techniques to be applied throughout software’s
lifecycle, make that resource available through organizational knowledge management, and evangelize it through internal
knowledge-sharing forums.
We found that, unlike governance-driven groups, engineering-led groups explicitly and purposefully incentivize security
engineers to talk externally about those security tools they’ve created (or customized), such as at regional meet-ups
and conferences. Security leads might use these incentives and invite external critique to ensure frequent maintenance
and improvement on tools and frameworks their engineers create, without continuing to tie up 100% of that engineer’s
bandwidth indefinitely.
SSDL documentation might be made available through an internal or even external source code repository, along with
other related material that aids uptake and implementation by development teams. A seemingly simple step, this makes it
very easy for development teams to conform to the SSDL within their existing tool chains and cultural norms.
DOMAINS
• Governance. Practices that help organize, manage, and measure a software security initiative. Staff development is also a central governance practice.
• Intelligence. Practices that result in collections of corporate knowledge used in carrying out software security activities throughout the organization. Collections include both proactive security guidance and organizational threat modeling.
• SSDL Touchpoints. Practices associated with analysis and assurance of particular software development artifacts and processes. All software security methodologies include these practices.
• Deployment. Practices that interface with traditional network security and software maintenance organizations. Software configuration, maintenance, and other environment issues have direct impact on software security.

PRACTICES
• Governance: Strategy & Metrics (SM), Compliance & Policy (CP), Training (T)
• Intelligence: Attack Models (AM), Security Features & Design (SFD), Standards & Requirements (SR)
• SSDL Touchpoints: Architecture Analysis (AA), Code Review (CR), Security Testing (ST)
• Deployment: Penetration Testing (PT), Software Environment (SE), Configuration Management & Vulnerability Management (CMVM)
Participating firms by tracked vertical: Financial (57), ISV (43), Cloud (20), Tech (20), Healthcare (16), IoT (13), Insurance (11), Retail (9). By geography: North America (102), United Kingdom/Europe (15), Asia Pacific (5).
Figure 2. BSIMM10 Participating Firms. These are the participant counts per tracked vertical in the BSIMM10 data pool. Note that some
firms are in multiple vertical markets and some firms are in verticals not listed here, such as energy and telecoms.
The BSIMM data yield very interesting analytical results as shown throughout this document. Shown on the next page are
the highest-resolution BSIMM data that are published. Organizations can use these data to note how often we observe
each activity across all 122 participants and use that information to help plan their next areas of focus. Activities that are
broadly popular across all vertical markets will likely benefit your organization as well.
Table 2. BSIMM10 Scorecard. The scorecard shows how often each of the activities in the BSIMM were observed in the BSIMM10 data pool
from 122 firms.
Figure 3. AllFirms Spider Chart. This diagram shows the average of the high-water mark collectively reached in each practice by the
122 BSIMM10 firms.
By computing these high-water mark values and an observed score for each firm in the study, we can also compare relative
and average maturity for one firm against the others. The range of observed scores in the current data pool is [5, 83].
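The arithmetic behind those values is straightforward. Assuming a firm's scorecard is simply the list of observed activity labels, the per-practice high-water mark is the highest level at which any activity in that practice was observed, and the observed score is the count of observed activities; the minimal sketch below illustrates this with a toy list, not real firm data.

```python
import re
from collections import defaultdict

def high_water_marks(observed):
    """Per-practice high-water mark from activity labels like 'SM2.3' ('SM', level 2)."""
    marks = defaultdict(int)
    for label in observed:
        practice, level = re.match(r"([A-Z]+)(\d)", label).groups()
        marks[practice] = max(marks[practice], int(level))
    return dict(marks)

# Toy scorecard, not a real firm's data.
observed = ["SM1.1", "SM2.3", "CP1.2", "T1.1", "T3.5", "AA1.1"]
print(high_water_marks(observed))   # {'SM': 2, 'CP': 1, 'T': 3, 'AA': 1}
print("score:", len(observed))      # the observed score is simply the activity count
```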
We’re pleased that the BSIMM study continues to grow year after year. The dataset we report on here is nearly 38 times
the size it was for the original publication. Note that once we exceeded a sample size of 30 firms, we began to apply
statistical analysis, yielding statistically significant results.
Figure 4. Cloud vs. Internet of Things vs. Tech Spider Chart. Mature verticals still show distinct differences.
Cloud, Internet of Things, and high-technology firms are three of the most mature verticals in the BSIMM10 data pool.
On average, cloud firms (which are not necessarily equivalent to cloud service providers) are noticeably more mature
in the Governance and Intelligence domains compared to the technology and Internet of Things firms but noticeably
less mature in the Attack Models practice. By the same measure, technology and Internet of Things firms show greater
maturity in the Security Testing, Penetration Testing, and Software Environment practices. Despite these obvious
differences, there is a great deal of overlap. We believe that the technology stacks and architectures of these three verticals, and therefore many of the associated software security activities, are continuing to converge.
Figure 5. Financial vs. Healthcare vs. Insurance Spider Chart. Although they share similar compliance drivers, these groups of organizations
have different average maturity levels.
Three verticals in the BSIMM operate in highly regulated industries: financial services, healthcare, and insurance. In our
experience with the BSIMM, large financial services firms reacted to regulatory changes and started their SSIs much
earlier than insurance and healthcare firms. Even as the number of financial services firms has more than doubled over
the past five years with a large influx into the BSIMM data pool of newly started initiatives, the financial services SSG
average age at last assessment time remains 5.4 years, versus 3.2 years for insurance and 3.1 years for healthcare. Time
spent by financial services firms maturing their collective SSIs shows up clearly in the side-by-side comparison. Although
organizations in the insurance vertical include some mature outliers, the data for these three regulated verticals show
insurance lags behind in the Strategy & Metrics, Compliance & Policy, and Attack Models practices, while moving above
average in the Security Testing practice. Compared to financial services firms, we see a similar contrast in healthcare,
which achieves par in Compliance & Policy, Architecture Analysis, and Penetration Testing, but lags in other practices.
The overall maturity of the healthcare vertical remains low.
Figure 6. Tech vs. Healthcare Spider Chart. Although healthcare firms are increasingly building devices and associated services, their overall
maturity lags behind technology firms that do similar things.
In the BSIMM population, we can find large gaps between the maturity of verticals, even when the technology stacks
might be similar. Consider the spider diagram that directly compares the current technology and healthcare verticals. In
this case, there is an obvious delta between technology firms that build devices tied to back-end services and healthcare
firms that increasingly build devices tied to back-end services. The disparity in maturity extends to most practices.
Fortunately for organizations that find themselves behind the curve, the experiences of many BSIMM participants
provide a good roadmap to faster maturity.
Figure 7. AllFirms vs. Retail Spider Chart. While it may have taken some years for retail firms to begin using the BSIMM in earnest,
they have been working on their SSIs.
For the second year, the BSIMM presents data on the retail vertical. This group, with an average SSG age at time of last
assessment of 4.0 years and average SSG size of 8.4 full-time people, seems to track closely to the overall data pool.
The most obvious differences are in the Security Features & Design, Penetration Testing, Software Environment, and
Configuration Management & Vulnerability Management practices, where retail participants are somewhat ahead of the
average for all firms.
For BSIMM9, however, the average score increased to 34.0 and increased again to 35.6 for BSIMM10. One
reason for this change—a potential reversal of the decline in overall maturity—appears to be the mix of firms
using the BSIMM:
• The average SSG age for new firms entering BSIMM6 was 2.9 years; it was 3.37 years for BSIMM7, 2.83
years for BSIMM8, and increased to 4.57 years for BSIMM9. On the other hand, the average SSG age for
new firms in BSIMM10 is 3.42 years.
• A second reason appears to be an increase in firms continuing to use the BSIMM to guide their initiatives.
BSIMM7 included 11 firms that received their second or higher assessment. That figure increased to 12 firms
for BSIMM8, 16 firms for BSIMM9, and remained at 16 firms for BSIMM10.
• A third reason appears to be the effect of firms aging out of the data pool. We removed 55 firms for
BSIMM-V through BSIMM9 and an additional 17 firms for BSIMM10; interestingly, nine of the 72 firms that
had once aged out of the BSIMM data pool have subsequently had a new assessment.
We also see this potential reversal (i.e., a return to an upward trend) in mature verticals such as financial services where
average overall maturity decreased to 35.6 in BSIMM8 from 36.2 in BSIMM7 and 38.3 in BSIMM6. For BSIMM9,
however, the average financial services score increased to 36.8 and increased again for BSIMM10 to 37.6. Of potential
impact here, five of the 11 firms dropped from BSIMM9 due to data age were in the financial services group, while that
figure was only two of 17 firms dropped for BSIMM10. On the other hand, a different trend might be starting in personnel: with the exception of some outliers, we observed the average SSG size on first measurement decrease to 9.6 for BSIMM10, after that first-measurement average had increased from 6.1 for BSIMM7 to 8.8 for BSIMM8 and 11.6 for BSIMM9.
Note that a large number of firms with no satellite continue to exist in the community, which causes the median satellite
size to be zero (65 of 122 firms had no satellite at the time of their current assessment, and nearly 50% of the firms
added for BSIMM10 had no satellite at assessment time). BSIMM participants, however, continue to report that the
existence of a satellite is directly tied to SSI maturity. For the 57 BSIMM10 firms with a satellite at assessment time, the
average size was 110 with a median of 25. Notably, the average score for the 57 firms with a satellite is 43.9, while the
average score for the 65 firms without a satellite is 28.4.
For BSIMM8, we zoomed in on two particular activities as part of our analysis. Observations of [AA3.3 Make the
SSG available as an AA resource or mentor] dropped to 2% in the BSIMM8 community, from 5% in BSIMM7, 17% in
BSIMM6, and 30% in BSIMM-V. However, observations rose to 3% for BSIMM9 and remained at 3% for BSIMM10.
Observations of [SR3.3 Use secure coding standards] dropped to 14% in BSIMM8, from 18% in BSIMM7, 29% in
BSIMM6, and 40% in BSIMM-V. In this case, the slide continued to 8% for BSIMM9 and 7% in BSIMM10. This kind of
change can be seen in activities spanning all 12 practices. In some cases, it appears that instead of focusing on a robust,
multi-activity approach to a given practice, many firms have a tendency to pick one figurehead activity (e.g., static
analysis with a tool or penetration testing) on which to focus their investment in money, people, and effort. In other
cases, it appears that some SSGs have moved away from being the source of expertise on software security architecture
and secure coding standards, without the organization having those skills and knowledge appropriately spread across the
product teams.
Firms that have been in the BSIMM community for multiple years have, with one or two exceptions, always increased the
number of activities they are able to deploy and maintain over time. We expect the majority of newer firms entering the
BSIMM population to do the same.
CP LEVEL 2
[CP2.1: 48] Identify PII inventory.
The organization identifies the kinds of PII processed or stored by each of its systems, along with their associated data
repositories. A PII inventory can be approached in two ways: starting with each individual application by noting its PII use
or starting with particular types of PII and noting the applications that touch them. System architectures have evolved
such that PII will flow into cloud-based service and end-point device ecosystems, and come to rest there (e.g., content
delivery networks, social networks, mobile devices, IoT devices), making it tricky to keep an accurate PII inventory. The
inventory must be easily referenced in critical situations, such as making a list of databases that would require customer
notification if breached or a list to use in crisis simulations (see [CMVM3.3 Simulate software crises]).
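As a minimal sketch of the second approach described above (start from PII types and note which systems touch them), the structure below supports the breach-notification question directly; the system and repository names are made up for illustration, and a real inventory would also track cloud services and device ecosystems.

```python
# PII type -> systems and data repositories where it rests or flows (illustrative data).
pii_inventory = {
    "email_address": {"crm-web": ["crm_db"], "newsletter-svc": ["cdn-cache"]},
    "payment_card":  {"checkout": ["orders_db"]},
}

def systems_requiring_notification(pii_types):
    """Which systems and repositories would trigger customer notification if breached."""
    affected = {}
    for pii in pii_types:
        for system, repos in pii_inventory.get(pii, {}).items():
            affected.setdefault(system, set()).update(repos)
    return affected

# Example use in a crisis simulation (see [CMVM3.3]).
print(systems_requiring_notification(["email_address", "payment_card"]))
```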
CP LEVEL 3
[CP3.1: 25] Create a regulator compliance story.
The SSG has the information regulators want, so a combination of written policy, controls documentation, and artifacts
gathered through the SSDL gives the SSG the ability to demonstrate the organization’s compliance story without a
fire drill for every audit or a piece of paper for every sprint. Often, regulators, auditors, and senior management will be
satisfied with the same kinds of reports that can be generated directly from various tools. In some cases, the organization
will require additional information from vendors about how the vendor’s controls support organizational compliance needs
(e.g., cloud providers, especially in a multi-cloud deployment). It will often be necessary to normalize information that
comes from disparate sources. While they are often the biggest, governments aren’t the only regulators of behavior.
SFD LEVEL 1
[SFD1.1: 98] Build and publish security features.
Rather than having each project team implement its own security features (e.g., authentication, role management, key
management, audit/log, cryptography, protocols), the SSG provides proactive guidance by acting as a clearinghouse of
security features for development groups to use. These features might be discovered during code review, created by the
SSG or a specialized development team, or be part of a library provided by a vendor, such as a cloud service provider.
Generic security features often have to be tailored for specific platforms. A mobile crypto feature will likely need at least
two versions to cover Android and iOS, while managing identity in the cloud might require versions specific to AWS,
Google, and Azure. Project teams benefit from implementations that come preapproved by the SSG, and the SSG
benefits by not having to repeatedly track down the kinds of subtle errors that often creep into security features.
[SFD1.2: 69] Engage the SSG with architecture teams.
Security is a regular topic in the organization’s software architecture discussions, with the architecture team taking
responsibility for security in the same way that it takes responsibility for performance, availability, and scalability.
One way to keep security from falling out of these discussions is to have an SSG member participate in architecture
discussions. In other cases, enterprise architecture teams can help the SSG create secure designs that integrate properly
into corporate design standards. Proactive engagement by the SSG is key to success here. Moving a well-known system
to the cloud means reengaging the SSG, for example. It’s never safe for one team to assume another team has addressed
security requirements.
SFD LEVEL 3
[SFD3.1: 11] Form a review board or central committee to approve and maintain secure design patterns.
A review board or central committee formalizes the process of reaching consensus on design needs and security tradeoffs.
Unlike the architecture committee, this group focuses on providing security guidance and also periodically reviews already
published design standards (especially around authentication, authorization, and cryptography) to ensure that design
decisions don’t become stale or out of date. Moreover, a review board can help control the chaos often associated with
the adoption of new technologies when development groups might otherwise make decisions on their own without ever
engaging the SSG.
[SFD3.2: 12] Require use of approved security features and frameworks.
Implementers take their security features and frameworks from an approved list or repository. There are two benefits
to this activity: developers don’t spend time reinventing existing capabilities, and review teams don’t have to contend
with finding the same old defects in new projects or when new platforms are adopted. Essentially, the more a project
uses proven components, the easier testing, code review, and architecture analysis become (see [AA1.1 Perform security
feature review]). Reuse is a major advantage of consistent software architecture and is particularly helpful for agile
development and velocity maintenance in CI/CD pipelines. Container-based approaches make it especially easy to
package and reuse approved features and frameworks (see [SE3.4 Use application containers]).
[SFD3.3: 4] Find and publish mature design patterns from the organization.
The SSG fosters centralized design reuse by collecting design patterns (sometimes referred to as security blueprints)
from across the organization and publishing them for everyone to use. A section of the SSG website could promote
positive elements identified during architecture analysis so that good ideas are spread. This process is formalized: an
ad hoc, accidental noticing isn’t sufficient. In some cases, a central architecture or technology team can facilitate and
enhance this activity. Common design patterns accelerate development, so it’s important to use secure design patterns
not just for applications but for all software (microservices, APIs, frameworks, infrastructure, and automation).
Table 3. BSIMM Numbers Over Time. The chart shows how the BSIMM has grown over the years.
Table 4. BSIMM10 Reassessments Scorecard Round 1 vs. Round 2. The chart shows how 50 SSIs changed between assessments.
Figure 8. Round 1 Firms vs. Round 2 Firms Spider Chart. This diagram illustrates the high-water mark change in 50 firms between their first
and second BSIMM assessments.
There are two obvious factors causing the numerical change seen on the longitudinal scorecard (showing 50
BSIMM10 firms moving from their first to second assessment). The first factor is newly observed activities. The
activities where we see the biggest increase in new observations include the following:
• [SM1.1 Publish process and evolve as necessary], with 19 new observations
• [CMVM2.3 Develop an operations inventory of applications], with 18 new observations
• [PT1.2 Feed results to the defect management and mitigation system], with 17 new observations
• [SM2.1 Publish data about software security internally], with 16 new observations
• [SM2.3 Create or grow a satellite], with 16 new observations
• [CP2.1 Identify PII inventory], with 16 new observations
• [SR2.2 Create a standards review board], with 16 new observations
• [CR2.5 Assign tool mentors], with 16 new observations
Table 5. BSIMM10 Reassessments Scorecard Round 1 vs. Round 3. The chart shows how 21 SSIs changed from their first to their
third assessment.
Figure 9. Round 1 Firms vs. Round 3 Firms Spider Chart. This diagram illustrates the high-water mark change in 21 firms between their first
and third BSIMM assessments.
Table 6. BSIMM10 Skeleton. This expanded version of the BSIMM skeleton shows the 12 BSIMM practices and the 119 activities they
contain, along with the observation rates as both counts and percentages. Highlighted activities are the most common per practice.
Table 7. Most Common Activities Per Practice. This figure shows the most common activity in each of the 12 BSIMM practices.
Of course, the list above of the most common activity in each practice isn’t the same as the list of the most
common activities. If you’re working on improving your company’s SSI, you should consider these 12 activities
particularly carefully.
Table 8. Top 20 Activities by Observation Count. Shown here are the most commonly observed activities in the BSIMM10 data.
Figure 10. BSIMM Score Distribution. The majority of BSIMM10 participants have a score in the 16 to 45 range, with an average SSG age
of 2.6 to 4.8 years.
Table 9. Vertical Comparison Scorecard. This table allows for easy comparisons of observation rates for the eight verticals tracked in BSIMM10.

GOVERNANCE
ACTIVITY  FINANCIAL (OF 57)  ISV (OF 43)  TECH (OF 20)  HEALTHCARE (OF 16)  IOT (OF 13)  INSURANCE (OF 11)  CLOUD (OF 20)  RETAIL (OF 9)
[SM1.1] 42 30 14 10 9 6 15 5
[SM1.2] 30 25 16 7 11 4 10 5
[SM1.3] 35 26 14 9 8 4 15 4
[SM1.4] 53 35 18 14 10 10 15 9
[SM2.1] 29 17 7 5 5 3 11 5
[SM2.2] 31 15 12 5 7 3 8 3
[SM2.3] 23 21 10 7 8 6 9 4
[SM2.6] 26 14 13 7 8 2 8 4
[SM3.1] 10 6 6 2 4 1 4 1
[SM3.2] 1 5 2 1 1 1 3 1
[SM3.3] 10 3 2 2 2 1 1 0
[SM3.4] 0 0 0 0 0 0 0 0
[CP1.1] 42 27 13 14 11 7 14 4
[CP1.2] 53 31 16 16 11 9 20 9
[CP1.3] 43 21 11 10 7 6 11 5
[CP2.1] 25 14 6 9 6 2 11 4
[CP2.2] 25 12 11 9 6 3 7 2
[CP2.3] 26 15 11 8 6 3 8 3
[CP2.4] 20 15 9 8 5 3 8 4
[CP2.5] 26 20 10 10 7 4 13 2
[CP3.1] 18 8 2 2 2 2 6 0
[CP3.2] 8 3 3 3 1 2 3 1
[CP3.3] 4 2 2 0 1 0 2 0
[T1.1] 39 29 13 8 9 6 15 7
[T1.5] 20 13 7 3 4 4 8 3
[T1.7] 28 14 7 4 5 6 8 4
[T2.5] 13 10 4 2 3 2 4 3
[T2.6] 16 10 5 2 4 3 6 2
[T2.8] 9 18 7 3 6 0 10 2
[T3.1] 0 2 2 0 1 0 2 0
[T3.2] 8 7 5 2 4 3 5 1
[T3.3] 5 8 5 1 3 1 3 0
[T3.4] 10 4 2 2 2 2 3 0
[T3.5] 2 2 0 0 0 0 1 2
[T3.6] 0 1 0 0 0 0 1 0
INTELLIGENCE
[AM1.2] 48 19 10 10 9 6 12 7
[AM1.3] 20 8 7 5 5 3 2 2
[AM1.5] 28 12 10 8 6 4 7 5
[AM2.1] 2 1 4 2 3 1 0 1
[AM2.2] 3 3 4 0 2 0 1 0
[AM2.5] 6 6 7 2 4 1 3 1
[AM2.6] 3 5 3 3 3 1 3 0
[AM2.7] 2 5 6 1 4 0 2 0
[AM3.1] 1 1 2 0 1 0 1 1
[AM3.2] 0 1 2 0 1 0 0 0
[AM3.3] 0 0 0 0 0 0 0 0
[SFD1.1] 48 31 13 12 9 9 18 9
[SFD1.2] 30 27 16 12 10 7 13 6
[SFD2.1] 14 14 6 4 4 2 7 2
[SFD2.2] 18 16 8 4 6 4 7 4
[SFD3.1] 9 1 0 0 0 0 0 1
[SFD3.2] 4 6 2 1 1 2 6 1
[SFD3.3] 0 2 2 1 2 0 0 1
[SR1.1] 47 21 12 11 9 6 13 6
[SR1.2] 40 33 15 10 11 6 16 5
[SR1.3] 45 26 16 10 9 8 13 8
[SR2.2] 33 11 7 4 3 5 8 5
[SR2.4] 22 21 10 5 7 4 11 1
[SR2.5] 15 11 8 7 5 3 5 4
[SR3.1] 10 10 6 1 2 2 6 1
[SR3.2] 4 3 3 2 1 3 3 0
[SR3.3] 4 2 3 2 2 2 1 0
[SR3.4] 15 6 3 3 3 2 7 2
SSDL TOUCHPOINTS
[AA1.1] 50 35 17 14 11 9 17 9
[AA1.2] 10 12 9 3 5 2 5 2
[AA1.3] 7 9 7 4 3 2 4 1
[AA1.4] 38 13 7 10 6 6 7 7
[AA2.1] 5 8 9 2 4 2 2 0
[AA2.2] 5 5 6 2 4 1 1 0
[AA3.1] 3 2 4 0 3 1 1 0
[AA3.2] 0 0 0 0 0 0 0 1
[AA3.3] 1 1 3 0 2 0 1 0
[CR1.2] 37 29 14 11 7 7 13 5
[CR1.4] 44 29 14 7 10 6 15 7
[CR1.5] 19 18 10 4 4 2 7 3
[CR1.6] 24 16 6 3 4 2 9 4
[CR2.5] 21 16 6 3 4 3 7 5
[CR2.6] 16 4 2 0 1 1 4 1
[CR2.7] 12 9 2 2 1 3 5 1
[CR3.2] 2 1 1 2 0 2 1 1
[CR3.3] 0 1 0 0 0 0 1 0
[CR3.4] 4 0 0 0 0 0 0 0
[CR3.5] 2 0 0 0 0 0 0 0
[ST1.1] 51 33 20 9 12 10 15 7
[ST1.3] 44 30 17 9 10 7 13 6
[ST2.1] 14 14 8 4 6 5 3 4
[ST2.4] 7 6 5 1 2 1 3 1
[ST2.5] 3 4 4 1 2 0 3 1
[ST2.6] 0 7 7 1 4 0 2 0
[ST3.3] 0 2 1 0 1 0 2 0
[ST3.4] 0 0 1 1 1 0 0 0
[ST3.5] 0 2 1 0 1 0 2 0
DEPLOYMENT
[PT1.1] 49 40 20 13 13 11 17 9
[PT1.2] 47 33 15 9 10 5 17 8
[PT1.3] 40 26 13 11 6 7 13 8
[PT2.2] 7 11 8 4 4 3 6 2
[PT2.3] 13 9 3 0 1 0 4 2
[PT3.1] 1 6 7 2 4 1 3 2
[PT3.2] 3 1 2 0 1 0 1 0
[SE1.1] 38 14 6 12 6 6 12 5
[SE1.2] 54 40 18 14 11 9 19 9
[SE2.2] 15 15 12 2 8 2 7 2
[SE2.4] 8 16 12 1 9 2 5 1
[SE3.2] 4 4 5 2 3 1 1 3
[SE3.3] 1 3 1 0 1 0 1 0
[SE3.4] 4 9 2 0 1 0 6 3
[SE3.5] 1 4 0 0 0 0 3 0
[SE3.6] 1 2 2 0 2 0 1 0
[SE3.7] 4 5 0 1 0 0 4 0
[CMVM1.1] 50 39 18 11 12 7 19 8
[CMVM1.2] 48 34 16 12 12 7 16 9
[CMVM2.1] 47 32 14 10 10 6 16 8
[CMVM2.2] 43 31 15 9 11 5 15 9
[CMVM2.3] 35 24 9 7 7 5 11 3
[CMVM3.1] 0 1 1 0 1 0 0 0
[CMVM3.2] 2 4 4 2 3 1 3 0
[CMVM3.3] 5 3 4 2 2 1 1 2
[CMVM3.4] 4 7 2 1 1 2 6 2
[CMVM3.5] 0 0 0 0 0 0 0 0
LEVEL 1 ACTIVITIES
(Red indicates most observed BSIMM activity in that practice.)
Governance
Strategy & Metrics (SM)
• Publish process and evolve as necessary. [SM1.1]
• Create evangelism role and perform internal marketing. [SM1.2]
• Educate executives. [SM1.3]
• Identify gate locations, gather necessary artifacts. [SM1.4]
Training (T)
• Conduct awareness training. [T1.1]
• Deliver role-specific advanced curriculum. [T1.5]
• Deliver on-demand individual training. [T1.7]
SSDL Touchpoints
Architecture Analysis (AA)
• Perform security feature review. [AA1.1]
• Perform design review for high-risk applications. [AA1.2]
• Have SSG lead design review efforts. [AA1.3]
• Use a risk questionnaire to rank applications. [AA1.4]
LEVEL 2 ACTIVITIES
Governance
Strategy & Metrics (SM)
• Publish data about software security internally. [SM2.1]
• Enforce gates with measurements and track exceptions. [SM2.2]
• Create or grow a satellite. [SM2.3]
• Require security sign-off. [SM2.6]
Training (T)
• Enhance satellite through training and events. [T2.5]
• Include security resources in onboarding. [T2.6]
• Create and use material specific to company history. [T2.8]
Intelligence
Attack Models (AM)
• Build attack patterns and abuse cases tied to potential attackers. [AM2.1]
• Create technology-specific attack patterns. [AM2.2]
• Build and maintain a top N possible attacks list. [AM2.5]
• Collect and publish attack stories. [AM2.6]
• Build an internal forum to discuss attacks. [AM2.7]
Deployment
Penetration Testing (PT)
• Penetration testers use all available information. [PT2.2]
• Schedule periodic penetration tests for application coverage. [PT2.3]
LEVEL 3 ACTIVITIES
Governance
Strategy & Metrics (SM)
• Use an internal tracking application with portfolio view. [SM3.1]
• Run an external marketing program. [SM3.2]
• Identify metrics and use them to drive budgets. [SM3.3]
• Integrate software-defined lifecycle governance. [SM3.4]
Training (T)
• Reward progression through curriculum. [T3.1]
• Provide training for vendors or outsourced workers. [T3.2]
• Host software security events. [T3.3]
• Require an annual refresher. [T3.4]
• Establish SSG office hours. [T3.5]
• Identify new satellite members through training. [T3.6]
Intelligence
Attack Models (AM)
• Have a science team that develops new attack methods. [AM3.1]
• Create and use automation to mimic attackers. [AM3.2]
• Monitor automated asset creation. [AM3.3]
Deployment
Penetration Testing (PT)
• Use external penetration testers to perform deep-dive analysis. [PT3.1]
• Have the SSG customize penetration testing tools and scripts. [PT3.2]