CIS Benchmark Development Guide V07
v07 - 6/27/2023
1 TABLE OF CONTENTS
2 Introduction
3 The Benchmark Development Team Roles
4 Benchmark Overview
4.1 Benchmark Technology Scope
4.2 BDP Overview
4.2.1 CIS WorkBench
4.2.2 Initial Benchmark Creation
4.2.3 Recommendation Creation
4.2.4 Consensus Review
4.2.5 Publishing the Final Benchmark
You have volunteered to be part of a community of individuals from all types of organizations on the journey to
create and maintain the current best practice in security in your area of expertise. The term “current best practice”
is one that explicitly recognizes the only constant in our industry: change.
All CIS Benchmarks are developed by the consensus of a given Technology Community. A Technology Community
is a diverse group of people actively interested in the security of a given technology area (MS Windows, Linux,
Postgres, etc.) and the development of related Benchmarks. They contribute to the testing and feedback of the
recommendations being considered. An active Technology Community is of extreme importance to the successful
development of a Benchmark, since it is the consensus of the community that ultimately determines the final
recommendations that are included in a given Benchmark release.
The goal of all Benchmark areas is to provide practical, security-specific guidance for a given technology that
concisely:
This guide will provide an overview of the Benchmark Development Process (BDP), the roles and responsibilities
of various participants in the BDP, and an introduction to the environments and tools available to you. We are
excited to have you with us!
• CIS Technology Community Leader (TCL): A CIS employee who is responsible for shepherding the given
Technology Community and resulting Benchmark through the development process and ultimately
publishing the result.
• Editor: An individual or individuals who have been given editing rights to the underlying Benchmark source.
These individuals have generally been contributors to other Benchmarks and have been sufficiently vetted
to allow this level of trust. Editors are typically leaders in the given Technology Community and are a great
resource for new members of the community.
• Subject Matter Experts (SMEs): In general, there are two types of SMEs involved in contributing their
security expertise and/or technical expertise to the development of the detailed recommendations:
o Technology Subject Matter Experts (T-SMEs): An individual or individuals who are actively
contributing their expertise to the development and testing of the detailed technical
recommendations of a given Benchmark.
These roles do not always have to be held by unique individuals. Sometimes two or more roles can be embodied
by a single individual, depending on that individual’s expertise and availability and the overall community makeup.
The Technology Community is always actively involved in the process, monitoring and providing feedback, and
taking on the previously described roles as needed. In the end, the community provides consensus-based approval
of the recommendations in the given Benchmark.
4 BENCHMARK OVERVIEW
Before discussing the details of the BDP, let’s take a closer look at Benchmarks and their components. This
understanding is important for all roles so that there is a high level of consistency across Benchmarks and within
each Benchmark.
A Benchmark may address a single version or multiple versions of a particular technology. In general, technology
versions which are secured the same way should be covered by a single Benchmark, and technology versions
with significantly different audit and remediation instructions should be covered in separate Benchmarks.
When contemplating which technology versions a Benchmark should cover, it is taken into consideration which
versions of the technology have adequate documentation available and/or instances available for developing
and verifying audit and remediation instructions.
It’s not necessary to test a Benchmark against every version of the technology it covers, as long as there are no
significant differences among those technology versions.
Once the technology versions have been identified and any supporting platforms (e.g., operating systems running
below a client or server application), profiles will need to be defined. Each Benchmark will have one or more
profiles. A profile is a collection of recommendations for securing a technology or a supporting platform. For
example, the IBM DB2 Benchmark has profiles for the IBM DB2 software itself, the Windows host OS platform,
and the Linux host OS platform.
Currently CIS Benchmarks have at least one profile defined. The basic profile for each technology or supporting
platform is called a Level 1 profile. Level 1 profiles contain recommendations that:
Level 2 profiles may optionally be defined to extend Level 1 profiles. Generally, level 2 profiles contain all level 1
recommendations plus additional recommendations that:
• are intended for environments or use cases where security is more critical than manageability and
usability;
• may inhibit the utility or performance of the technology; and
• limit remote management/access.
Each recommendation is assigned to one or (in rare cases) two profiles. For example, a single IBM DB2 Benchmark
recommendation may be assigned to both a Level 1 Windows host OS profile and a Level 1 Linux host OS profile
because it applies to both platforms, even though the audit and remediation steps on each platform may differ.
Someone who wants to use a Benchmark would select the profile that best suits their platforms, security
requirements, and operational considerations. Any recommendations not included in a selected profile would be
omitted for remediation and auditing purposes.
• Web browser-based access – Anyone with an Internet connection and a web browser can potentially join
and contribute to a community and a Benchmark. No software needs to be installed locally.
• Reach the Technology Communities – Users can join communities for the Benchmarks they are interested
in and use CIS WorkBench to interact with these communities via threaded discussions.
• Create, edit, and maintain Benchmark documents – Certain community members can use CIS WorkBench
to create and revise recommendations and accompanying prose (the actual text that makes up a
Benchmark) for a given Benchmark. This capability is generally limited to a few people in the roles of CIS
TCL and Editor.
• Suggest changes to Benchmarks – Anyone in the community can suggest changes to a Benchmark via the
WorkBench ticket or proposed change process. These tickets and proposed changes are viewed and
discussed by the community and can result in changes to a Benchmark if consensus is reached.
Details on using CIS WorkBench are covered later in this document. For now, it is important to understand that
CIS WorkBench is how technology communities and their Benchmarks are managed and developed.
• Profile (Level 1, Level 2, etc.) – the profile(s) to which this recommendation is assigned
• Mapping to CIS Critical Security Controls – the CIS Controls Safeguards to which this recommendation
maps (CIS Controls mappings are supported for v7.1 and newer)
• Description – detailed information pertaining to the security setting
• Rationale – the reason the recommendation is being made
• Impact Statement – any non-obvious adverse consequences likely to be experienced by an enterprise
putting this recommendation into effect
• Audit Procedure – a discrete set of steps used to determine whether a target system is in compliance with
the recommendation
• Remediation Procedure - a discrete set of steps used to configure the target system to the
recommendation
• References – additional external information relevant to this recommendation (URLs to vendor
documentation, articles, etc.)
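To see how these fields fit together, here is a minimal, hypothetical recommendation skeleton (the setting name, tool, and values are invented purely for illustration; real recommendations are far more detailed):

Title: Ensure 'ExampleSetting' is set to 'Enabled'
Profile: Level 1
Assessment Status: Automated
Description: ExampleSetting controls whether the service requires authenticated connections.
Rationale: Requiring authenticated connections prevents anonymous access to the service.
Impact Statement: Clients that do not support authentication will no longer be able to connect.
Audit Procedure: Run example-tool get ExampleSetting and verify the output is Enabled.
Remediation Procedure: Run example-tool set ExampleSetting Enabled.
References: 1. URL to the vendor's documentation for ExampleSetting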
This level of detail has helped make CIS Benchmarks the industry standard for quality and ease of use. This detail
involves additional work during the creation process, but is well worth the effort to create a deliverable that can
be applied by the broadest user base.
NOTE: The announcement will be sent to everyone who has joined this community and has the proper
notification settings (more details on joining a community and notifications are covered in
section 5.1.2).
During this process, community members are encouraged to review the DRAFT Benchmark and comment on it, or to create new tickets and discussions with feedback for the development team on the recommendations in the Benchmark. Anyone from that community can contribute, and all feedback is welcome.
Generally, consensus review lasts for at least two weeks and can be extended depending on the amount of
feedback via comments, discussions, and tickets submitted on the draft. Tickets can be created for a number of
reasons, but each will ideally be some form of change proposal. Tickets will be discussed via comments in CIS
WorkBench and during the community's recurring open “Community Call” meetings until a conclusion is reached
and action is taken (a ticket can be resolved, rejected, or deferred). Sometimes it is necessary to defer a ticket to the
next version of the Benchmark.
The TCL primarily guides the Benchmark through the development process, but it is the community members who actively develop, review, test, and approve each recommendation that is ultimately included in the final released Benchmark.
5 THE BASICS
This section will cover the CIS WorkBench tool in more detail and the capabilities it provides to individuals in the
various roles.
Once approved, you will receive an email at your registered email address telling you that your account is ready.
At this point you can log in to CIS WorkBench with your credentials.
From the list shown, you can get an indication of how active a given community is by looking at the numbers in
the displayed columns:
In general, the more discussions in a community, the more active it is. You can learn more about a given
community prior to joining by clicking on the community’s name in the above list. For example, clicking on “CIS
Apple OS Benchmark” takes you to this community’s home page, as shown in Figure 5 through Figure 7.
From this page, you can see the CIS TCL’s name, any important announcements for this community, and the
activities in which this community is currently involved. Community Activity is a timeline showing the most
recent activity in this community. Scrolling down…
Next is the Welcome section. This section describes the community and has a link to this Guide. It also lists key contributors in this community and the community's Benchmarks. Finally, it gives details about any Community Call for this community. These are generally WebEx meetings held on a regular basis (weekly, bi-weekly, monthly) and are open to anyone who wants to join and help out. Scrolling down…
• Tickets: These are the most recent change requests for the Benchmarks maintained by this community.
• Discussions: These are the most recent posts to the community’s discussions on various topics related
to the Benchmarks in this community.
• Benchmarks: These are the most recent Benchmarks being maintained by this community.
• Milestones: These are the most recent project milestones for the community’s Benchmarks in
development.
NOTE: You can view any of the above by clicking on the item name in the given list, but to actually
contribute (create or add a comment to a discussion or ticket, etc.) you must join the community.
Joining a community basically means you are interested in the activities of this Technology Community and the
Benchmarks they are creating, as in Figure 8. Practically, this means you will receive notifications about activities
in this community (Discussion items, Ticket items, etc.) via email, and you are able to create and comment on
tickets and discussions.
The notification settings for each currently joined community are listed here and can be adjusted.
Also, at the top of the “Notification Settings” page is a way to set the default notification type for any
community joined in the future.
This brings you to the overview page of the Benchmark, which can be considered the title page of the Benchmark,
as shown in Figure 14. Before we go any further, let’s discuss the three major areas displayed in the browser.
The leftmost pane is used for navigation within the Benchmark’s recommendations and other parts of the CIS
Apple OS Benchmark Technology Community site (related files, related tickets, etc.) The center pane is generally
the primary working area for whatever is selected in the navigation pane. The rightmost pane also changes based
on what is selected in the navigation pane, but is restricted to displaying information based on the tabbed
categories at the top of this pane (tickets, discussions, and proposed changes).
In the navigation pane, the lower section is dedicated to navigation of the specific Benchmark being displayed.
This is a recommendation tree for this Benchmark, and each item listed is one of these:
• A section/subsection: This is a set of subsections and/or recommendations. Sections and subsections are
used for logically grouping related recommendations.
• A recommendation: This contains the detailed prose for a security setting or closely related settings of
interest.
As an example, in Figure 15 we have selected recommendation 2.11.1 (Ensure Users' Accounts Do Not Have a
Password Hint), the prose of which is now displayed in the center pane. The right pane shows there are no tickets
for this recommendation.
Also, in the left pane you can see that the entire Benchmark has 8 major sections. Section 2 (System Settings) is made up of 15 subsections. Subsection 2.11 (Touch ID & Password (Login Password)) has 2 recommendations in it.
Viewing different recommendations in the Benchmark can be done by selecting the recommendation of interest
in the left pane, which will display the corresponding prose in the center pane.
NOTE: “<” and “>” buttons are also available to move between recommendations within a given
section/subsection (top left of center pane).
This process certainly works for viewing a given Benchmark, but many people would rather view the Benchmark
in a more standardized form (PDF, MS Word, or MS Excel). This can be done for published Benchmarks by going
to the “Files Area,” as shown in Figure 16. This brings you to the Files selection page, as shown in Figure 17.
The PDF version of the Benchmark contains all the prose details in the original CIS WorkBench form, but generally
is easier to read and does not need a special application (CIS WorkBench) to view. The PDF, MS Word, and Excel
versions will always be available for published Benchmarks but may not be available for Benchmarks being actively
developed. When in development, it is best to view the Benchmark in CIS WorkBench itself.
NOTE: Which files are displayed depends on whether you or your company are SecureSuite members. The PDF files are available to anyone, but the other formats are only for SecureSuite members.
NOTE: During the consensus period for a new Benchmark release, the Files area will have a DRAFT version of the Benchmark in PDF format for public review.
Select and download a PDF version of this Benchmark if available, as shown in Figure 18.
5.1.4 Discussions
Discussions are used by the community to talk about various subjects and are a good way to start getting involved in a community. Discussions can cover a variety of areas, such as confusion about a Recommendation's usage/applicability, issues with implementation (Audit and/or Remediation procedure), or problems with AAC (OVAL and/or Script Check Engine (SCE) scripts, etc.).
Let’s use the CIS Apple macOS 10.14 Benchmark v1.0.0 as an example.
As can be seen in Figure 19, there are currently nine discussions for this Benchmark. Clicking on the title of the
discussion (/usr/sbin/chgrp no such file or directory) displays the discussion details in the right pane, as Figure 20
shows (Scroll down…)
The original topic description and any current comments on this discussion topic are listed. You can add a
comment to this discussion by typing in the lower text box and pressing Add Comment. Your comments will
become part of this discussion topic and will be viewable by all the community members. Members joined to this
community will be notified of any new topic, or comments added to an existing topic, based on their notification
settings.
In general, discussions should be linked to a specific object in the community (something that the discussion is
referencing). For example, objects can be:
• Technology Community: Linking a discussion here is generally done for community announcement and
other discussions of broader interest.
• A Given Benchmark: Linking a discussion here is done for general discussions about a given Benchmark.
• A Specific Section or Subsection in a Benchmark: Linking a discussion here is done for discussions about
a given section of a Benchmark.
Figure 21: Left Pane Selection for the New Discussion in Figure 24.
The right pane button to do this looks a little different depending on whether there are existing discussions on the object selected, as in Figure 22 and Figure 23.
Figure 22: New Discussion on a Recommendation with an Existing Discussion
Figure 23: First Discussion on a Recommendation
Either way will bring you to the same discussion entry screen, as shown in Figure 24. Here is the screen for entering a new discussion for recommendation 3.1 (Ensure Security Auditing Is Enabled) in the CIS Apple macOS 13.0 Ventura v1.1.0 Benchmark. At this point, you simply fill in the appropriate fields describing in detail the topic you want to discuss and click Submit.
The number of discussions can also be seen in the left pane (Figure 25).
In this example, Section 2 (System Settings) has 3 discussions linked to it or to the subsections/recommendations within it.
5.1.5 Tickets
The concept of a ticket will be familiar to many people who have used other types of issue tracking software. In
general, a ticket signifies that there is a specific issue that needs to be addressed and tracked to completion.
Tickets are generally more specific than Discussions and usually cover errors and/or needed changes to a given
Recommendation, such as errors with the Audit and/or Remediation procedure, errors with AAC (OVAL and/or
Script Check Engine (SCE) scripts, etc.).
NOTE: Tickets are the most common form of community communication on Benchmarks in active
development, and progress on tickets is tracked using the Milestone tool in WorkBench. Most
TCLs and Editors use tickets as a to-do list for the changes and additions to a Benchmark.
Figure 26: Tickets for Apple macOS 13.0 Ventura v1.1.0 Benchmark
As shown in Figure 26, there are currently 22 tickets for this Benchmark. Clicking on the title of the ticket
(Freeform.app - ensure "Disable iCloud Sign In Prompt") displays the ticket detail in the right pane, as in Figure 27
(Scroll down…).
Tickets should be linked to a specific object in the community, just like discussions (Community, Benchmark, Section, and Recommendation). The link between a Ticket and an object is even more important, since Tickets are generally calling for a change to be made; it is therefore critical to track the history of when and why a particular modification was made. Follow the same basic process as creating a new discussion (5.1.4.1 - Creating New Discussions), but use the "Tickets" tab on the right pane.
The number of tickets can also be seen in the left pane (Figure 28).
In this example, Section 2 (System Settings) has 8 tickets linked to it or to the subsections/recommendations within it.
With the information covered thus far, you can fully contribute to any Benchmark using discussions and tickets. If
you are ready to get started, please jump to Section 7 (How Should I Proceed?). Otherwise, please read on and
learn more about making Benchmarks and using the WorkBench tool.
6 THE DETAILS
This section goes over some of the advanced capabilities of the CIS WorkBench tool and some of the activities
typically done by individuals in more advanced roles.
NOTE: Although this feature can be used by anyone to propose a change to a Benchmark, it gets into
the editing capabilities and conventions of the WorkBench tool, which can add complications.
For this reason, we suggest most users utilize the discussion and ticket capabilities discussed
previously for most issues.
Let’s walk through a proposed change on a test version of the CIS Apple macOS 13.0 Ventura v1.1.0 Benchmark,
recommendation 1.2 (Ensure Auto Update Is Enabled). You can see the initial recommendation prose in Figure 29.
Figure 29: Initial Recommendation Prose and Proposed Change Selection Highlighted
Selecting Propose Change in the left pane starts the process (see Figure 29). This will display the Edit Proposed
Recommendation screen, as shown in Figure 30.
Figure 30: Proposed Recommendation Edit Screen – For Overall Navigational Reference
This screen is essentially the same screen Benchmark editors use to create and modify the Benchmarks directly,
and it is made up of a number of areas:
In the following example, we are going to make some proposed changes to the Description and Rationale fields above. Figure 31 shows the original recommendation text. Figure 32 shows we have made three modifications:
Now, if we reselect this recommendation in the left pane navigation, we see the result in Figure 33.
The recommendation text in the center pane looks the same, but there is now some additional information and
a Show Diff button on the right pane. When the Show Diff button is pressed, the areas that changed are
highlighted in red (old version) and green (new version), as in Figure 34.
Similar to Discussions and Tickets, linked Proposed Changes can be seen on the left navigation tree (See Figure
35).
In this example, Section 2.6 (Privacy & Security Settings) has 1 proposed change linked to it or to the subsections/recommendations within it.
NOTE: This section covers items that are useful for Benchmark editors and are not necessary for the general contributor. Of course, for those interested in learning more about what Benchmark editors typically do, or the details of what makes a good Benchmark, feel free to read this section.
• Front Matter
o Cover Page
o Terms of Use
o Table of Contents
• Overview
o Untitled introductory paragraph
o Intended Audience
o Consensus Guidance
Most of these components are either automatically generated (cover page, terms of use, table of contents, etc.)
or are mostly the same for every Benchmark, with minor customizations (for example, the introductory paragraph
for the overview should state which technology versions the Benchmark covers and which versions it was tested
against).
Nearly all effort put into developing and maintaining a Benchmark involves the Recommendation section, and the
rest of this chapter covers it exclusively.
In most cases, the goal state involves one or more configuration items, also known as attributes. The
recommendation would explain how to remediate the target asset’s configuration to reach the goal state—for
example, by using the asset’s administrative GUI to change the configuration, or by editing a configuration file
with a text editor. The recommendation would also explain how to audit the asset to confirm its configuration
complies with the recommendation—for example, by visually checking the value displayed in the asset’s
administrative GUI, or by viewing the contents of a configuration file. To the extent feasible, the remediation and
auditing information should be step-by-step instructions.
CIS Benchmarks should focus mainly on technical recommendations specific to the Benchmark’s target asset.
Generic recommendations, such as having a backup policy and physically protecting backup media, are usually not
as helpful as asset-specific recommendations.
NOTE: In general, the CIS Critical Security Controls cover more "Higher Level" policy-type controls, whereas Benchmark Recommendations are much more specific (Set X to Y).
• Automated: Represents recommendations for which assessment of a technical control can be fully
automated and validated to a pass/fail state without human intervention. Recommendations will include
the necessary information to implement automation.
• Manual: Represents recommendations for which assessment of a technical control cannot be fully
automated and requires all or some manual steps (human intervention) to validate that the configured
state is set as expected. The expected state can vary depending on the environment (for example, a
recommendation to ensure backups are centrally available).
In a few cases, our consensus process can’t provide guidance on what a compliant state is. For example, a setting
might have a distinct set of possible values from which the consensus team is unable to make an explicit
recommendation. We could still create a recommendation that an enterprise take the setting under consideration,
but we can’t state a precise recommendation for the setting. Under such circumstances, the recommendation
would be set to Manual.
In the end, BOTH Automated and Manual recommendations are equally important, and users should strive to pass them all, not just the more easily assessed Automated ones provided by a scanning tool (a typical auditor will review ALL recommendations for compliance).
Use references to bolster the reasoning for this recommendation. In general, references for recommendations fall
into two categories:
• Cybersecurity: These are references that support the use of this setting for security, items like
documented breaches that could have been mitigated by this setting, vendor announcements about using
this setting for some threat mitigation, etc.
• Configuration: These are generally references to vendor documentation that give more details about
configuring the setting and its impacts.
Second, you must document the recommendation using several standard fields. You may prefer to do all the
research first and then document everything, or to document the recommendation while you conduct your
research. Either way is fine. However, be aware that each recommendation should cover a single attribute or an
integrated set of attributes (for example, a set of access control lists for a file). Your research may identify multiple
attributes that should be remediated and audited separately, in which case you should write one recommendation
for each attribute. It will save you time if you identify the need for multiple recommendations before documenting
them.
Each recommendation contains several mandatory fields and may also contain additional fields as needed. The
following sections describe each field in the order displayed in CIS WorkBench and provide advice on how to
populate it.
You may notice that some older Benchmarks use a different construction for titles. For example, instead of saying
“Ensure ‘Minimum Length’ is greater than or equal to 12,” a Benchmark might say “Set the ‘Minimum Length’ to
12 or greater.” This construction should not be used for new recommendations.
In terms of format, the title should mimic the examples above. The first word of the title should be capitalized,
and the names of specific settings and other proper nouns should be capitalized. All other words should be in
lower case. The title should be written as a phrase, not a complete sentence (e.g., no period at the end of the
text).
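For example, applying these format rules to a hypothetical setting:

Correct: Ensure 'Example Timeout' is set to '15' or less
Incorrect: Set the example timeout to 15 or less. (older construction, written as a complete sentence, setting name not capitalized)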
6.2.4.2 Assessment Status, Profiles, CIS Control fields, and MITRE ATT&CK Mappings
These four fields are all selection-based; you choose one or more values from already-populated lists.
• Assessment Status: Each recommendation must have an assessment status of either Automated or
Manual. See the discussion in Section 6.2.3 for more information on assessment status.
• Profiles: Each recommendation must reference one or more configuration profiles. This field has a
checkbox for each defined profile, and you may select as many of the checkboxes as needed. See the
discussion in Section 4.1 for more information on profiles.
• CIS Controls: Each recommendation should be linked to all applicable CIS control Safeguards (which are
listed and defined at https://www.cisecurity.org/controls/). For example, a recommendation for
enabling the use of authoritative time sources for clock synchronization should be linked to CIS v8 Critical
Security Control (CSC) 8.4 - Standardize Time Synchronization (“Standardize time synchronization.
Configure at least two synchronized time sources across enterprise assets, where supported.”)
o The goal here is to pick only the few Safeguards to which the given Recommendation strongly maps (truly supports). In general, that is a maximum of 3 Safeguards.
• MITRE ATT&CK Mappings: This field is still somewhat experimental. Some Benchmarks are using it, but
most are not. In general, do not use this unless specifically directed by the TCL to do so.
In this example, the first three sentences explain the undesirable state, and the last sentence states the
recommendation to change from the undesirable state to a more secure state.
It can also be used to clarify what this setting is about (especially in cases where many acronyms are involved).
This setting determines if DNS over HTTPS (DoH) is used by the system. DNS over HTTPS (DoH) is a protocol
for performing remote Domain Name System (DNS) resolution over the Hypertext Transfer Protocol Secure
(HTTPS). For additional information on DNS over HTTPS (DoH), visit: Secure DNS Client over HTTPS (DoH)
on Windows Server 2022 | Microsoft Docs.
The Description field should not be overly detailed. For example, there is no need for it to provide step-by-step
instructions for auditing or remediation or to specify the recommended value for the setting, since those will be
included in the Audit Procedure and Remediation Procedure fields.
It can be hard to differentiate the Description and the Rationale Statement fields. Keep in mind that the
Description field explains what implementing the recommendation is going to do to the target in terms of changing
its functionality, and the Rationale Statement field explains why implementing the recommendation is beneficial
to security. The Description is what will be done, and the Rationale Statement is why it needs to be done. Here is
an example of a Rationale Statement corresponding to the Tomcat Description example above:
Setting the shutdown attribute to a nondeterministic value will prevent malicious local users from shutting
down Tomcat.
Impact statements should focus on non-obvious impacts. Many recommendations have obvious impacts; for
example, disabling a service means the service will no longer be available. The intent of the impact statement is
to identify the impacts that are less likely to be recognized.
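As a hypothetical illustration of a non-obvious impact:

Once this setting is enabled, clients running versions older than 2.0 will be unable to connect until they are upgraded.

The obvious impact (the setting is now enforced) needs no statement; the side effect on older clients is the kind of impact worth calling out.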
If there is no known impact for this recommendation, put "No known impact" in this section, but this should rarely be needed.
The beginning of the Audit Procedure should state what is to be done through “verify” language. The term “verify”
is preferred because it indicates the auditor must take action to confirm compliance.
If the Audit Procedure is very simple, such as verifying a particular policy exists, one sentence may be sufficient.
However, in most cases, more instructions will be needed. In the example above, the second and third lines specify
commands the auditor can use to verify the configuration, and the fourth line explains how to interpret the output
of the commands. Whenever feasible, provide commands, regular expressions, short scripts or code examples,
and other practical information auditors can reuse or adapt for reuse.
1. Check to see if the ScoreBoardFile is specified in any of the Apache configuration files. If it is not
present, the configuration is compliant.
2. Find the directory in which the ScoreBoardFile would be created. The default value is the
ServerRoot/logs directory.
3. Verify that the scoreboard file directory is not a directory within the Apache DocumentRoot.
4. Verify that the ownership and group of the directory is root:root (or the user under which Apache
initially starts up if not root).
5. Verify that the scoreboard file directory is on a locally mounted hard drive rather than an NFS mounted
file system.
Although this example is detailed, it does not provide step-by-step instructions. For example, item 1 does not
explain how to find the Apache configuration files or how to check each of them for ScoreBoardFile.
Providing that level of detail would make the instructions extremely long, and most readers probably wouldn’t
need them, so omitting them is acceptable.
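For readers who do want that level of detail, a minimal sketch of item 1 might look like the following (the configuration paths are assumptions; actual locations vary by platform and distribution):

# Search the main Apache configuration file and any included files for ScoreBoardFile
grep -i 'scoreboardfile' /etc/httpd/conf/httpd.conf /etc/httpd/conf.d/*.conf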
For some Audit Procedures, a single set of instructions isn’t sufficient. There is more than one way to perform the
audit, or there is more than one set of conditions that can be met to demonstrate compliance with the
recommendation. Here’s an example of the latter:
An Audit Procedure often combines these approaches, such as having step-by-step instructions that include
commands. Here is an example of prose instructions and commands together:
1. Use the httpd -M option as root to check which auth* modules are loaded.
httpd -M | egrep 'auth._'
2. Also use the httpd -M option as root to check for any LDAP modules which do not follow the same
naming convention.
httpd -M | egrep 'ldap'
Another common example is a setting that can be changed both by a management User Interface (UI) and
programmatically via a command line interface (CLI) or Application Programming Interface (API). All of these
methods can be listed in the Benchmark Recommendation in separate highlighted sections.
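For example, an Audit Procedure covering both methods might be laid out as follows (the setting and command are hypothetical):

UI Method:
1. Open the management console and navigate to Settings > Security.
2. Verify 'Example Setting' is set to 'Enabled'.

CLI Method:
Run the following command and verify the output is Enabled:
example-cli get security.example-setting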
An Audit Procedure should also list any prerequisites needed to verify compliance. For example, the auditor may
need administrator-level privileges on the target, or a particular tool may need to be installed in order to view the
setting value. Prerequisites should be specified before instructions and commands, otherwise an auditor may
attempt to follow the instructions and issue the commands before seeing the prerequisites.
NOTE: All these examples use full sentences with terminating punctuation. Sentence fragments should only be used in cases where options are being listed, such as the example above introducing instructions for each method by naming the method. Notes should be given if there are multiple reasons for audits to vary and should be bolded to draw the users' attention.
The Remediation Procedure should make it clear that the target’s state is to be changed through one or more
actions. Terms such as “set,” “update,” “create,” “change,” and “perform” indicate actions.
Another example of a Remediation Procedure indicates which steps can be skipped if the specified conditions are
met:
1. If the apache user and group do not already exist, create the account and group as a unique system
account.
groupadd -r apache
useradd apache -r -g apache -d /var/www -s /sbin/nologin
2. Configure the apache user and group in the Apache configuration file httpd.conf.
User apache
Group apache
If the default value is not straightforward, it’s acceptable to have a verbose explanation here. One example is if
the default value varies based on the underlying platform, in which case you may need to list each possible
underlying platform and the associated default value.
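A hypothetical example of such a platform-dependent Default Value entry:

Default Value:
Windows: Not configured (the feature is disabled)
Linux: /var/run/example.pid (the feature is enabled using this path)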
• URLs to documentation or articles pertaining to the recommendation. URL references should first prefer
vendor sources of information, and then, exceptionally, reputable third-party sources.
• Links to security examples for this recommendation (vulnerabilities addressed, breach analysis, etc.).
The references must be numbered for usability. Here is a sample of a reference list:
1. https://tomcat.apache.org/tomcat-5.5-doc/config/server.html
2. https://httpd.apache.org/docs/2.4/programs/configure.html
There is no particular order for the references, but if there are numerous references, they should be grouped by
type at a minimum (for example, all CCE IDs, then vendor URLs, and finally third-party URLs).
NOTE: Some Benchmarks (generally the STIG Benchmarks) have been using this field to include STIG-specific information on the Recommendation.
Figure 37: Formatting Toolbar for Editing Recommendations and Proposed Changes
Starting on the far left, the first set of three buttons is for Bold, Italic, and Heading:
• Bold is to be used sparingly to indicate caveats or other particularly important information, such as a note
about prerequisites for issuing a command.
• Italic can be used in two ways. First, it can denote the title of a book, article, or other publication. Second,
italicized text set in angle brackets <> denotes a variable for which a real value needs to be substituted.
o Many times, variables need to be denoted in a code block. In these cases, just use <> (code blocks
do not support other formatting like Bold, Italics, etc.)
• Heading is rarely used, but can be used to highlight separate subsections within a given section of a
recommendation. One possible example is separating a “UI Method” from a “CLI Method” in an Audit or
Remediation section.
• URL Link is useful since the format of a hyperlink in Markdown is easy to forget, and this will walk you
through creating it.
• URL/Image: Do not use (this is no longer supported).
The next set of three buttons is for Unordered List, Ordered List, and Quote:
• An Unordered List is better known as a bulleted list. It should be used when there are two or more items
and they are options (look for any of the following values, etc.) It may be used when there are multiple
required items that can be performed in any sequence, but an Ordered List is generally preferred for those
cases.
• An Ordered List is a numbered list. It should be used whenever you are providing step-by-step instructions
where sequence is important. For usability reasons, an Ordered List is generally recommended for any
instructions with more than one step or item.
• The Quote button is used to indicate quoted text. Most Benchmarks do not use Quote formatting.
NOTE: There is no such thing as a list (either Ordered or Unordered) with just one item. No such lists should be used in Benchmarks.
The next button is the Preview. This can be used to view a non-editable render of what the resulting text will look
like. Push the Preview button again to return to editing.
The last two buttons are for a Code Block and Inline Code:
• The Code Block button is used to denote a block of contiguous text as code, commands, or scripts by
displaying it in a monospace font and a grey background. See the examples throughout Section 6.2.4 for
text formatted as Code Blocks.
• The Inline Code button is used to mark part of a sentence as “Code” (monospace font). This is generally
used to indicate configuration setting names, file and directory names, parameter values, and other
similar pieces of text within a sentence.
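Putting these conventions together, a short hypothetical Remediation snippet might be written in WorkBench as:

**Note:** Administrator privileges are required for the following steps.
1. Open the configuration file `httpd.conf` in a text editor.
2. Set `ExampleSetting` to `<desired value>` and save the file.

This uses Bold for the caveat, an Ordered List for the sequenced steps, Inline Code for the file and setting names, and angle brackets for the variable.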
• Planning and Installation: This encompasses any recommendations that apply to preparation before
installation or options for the installation itself, such as not installing unnecessary modules, or installing
additional modules to provide more security features. Configuration options available both during and
after installation should not be placed in this section.
• Hardening: This section is for actions that reduce the target’s functionality, remove weak default values,
prevent the use of inadequately secure options, delete sample content, etc.
• Access Control: This includes user and group identities, and ownership and permissions for resources (e.g.,
files, folders/directories, shares, processes, media).
• Communications Protection: This encompasses the cryptographic settings, protocol options, and other
security features involved in protecting network communications. Examples include SSL/TLS options,
certificate requirements, cryptographic key protection, and restrictions on which versions of network and
application protocols may be used.
• Operations Support: This covers security recommendations for typical security operations, such as
configuring logging, patching, security monitoring, and vulnerability scanning.
• Attack Prevention and Detection: This addresses the recommendations intended to stop or detect attacks,
ranging from the use of features that prevent sensitive information leakage or mitigate denial of service
attacks to the use of technologies for detecting malware.
In each of these examples, the name of the section indicates the purpose of the settings. Older Benchmarks may
have inconsistent section names involving types of threats or attacks, types of vulnerabilities, etc. Use of such
names should be avoided.
Grouping recommendations into sections makes the Benchmark much easier for people to understand, but it has
additional benefits. For example, if most or all access control-related recommendations require the Benchmark
user to have administrator-level privileges, that can be stated at the access control section level as a prerequisite
instead of having to list it within each individual access control recommendation.
Most Benchmarks have a large enough number of recommendations that they have subsections within most
sections. For example, an Access Control section might have subsections for identities, ownership, file permissions,
process permissions, etc. The general rule of thumb is to use subsections when the number of recommendations
within the section is unwieldy (e.g., dozens) or when the recommendations naturally fall into two or more
categories.
Each Benchmark section should have an introductory paragraph or two. It should indicate the overall intent of the
section’s recommendations and point out any significant cautions about the recommendations. For example, a
section on hardening that includes disabling unneeded modules might include text like this in its introduction:
“This section covers specific modules that should be reviewed and disabled if not required for business
purposes. However, it's very important that the review and analysis of which modules are required for
business purposes not be limited to the modules explicitly listed.”
All TCLs have editing rights like Benchmark Editors and at times fill that role on a given Benchmark. Also, like
Benchmark editors they are similar to Maintainers in open-source projects in that they can change the underlying
source of the Benchmark based on community submissions. Every TCL leads multiple technology communities at
the same time and generally has multiple Benchmarks in development simultaneously. Due to the diversity of
technologies involved, no TCL can be an expert in all of them, so they are very dependent on the technology
community’s editors, other contributors, and the overall consensus process to develop successful Benchmarks.
In addition, TCLs have additional roles outside of those available to the community in general. These include:
The bottom line is the TCL is the overall glue that holds the Benchmark development process together and gets a
result in a reasonable period of time. There are many facets to this job, and TCLs are quite busy, but their primary
job is always to help the communities they lead by providing the resources and guidance they need to succeed.
Please feel free to reach out to the TCL in any technology community with questions or feedback on a Benchmark
or the BDP in general. They love getting feedback!
1) Join a community of interest: Find one that you have some expertise and interest in, join it, and set your
notifications accordingly.
2) Get involved:
a. Option 1: Dive in immediately and create a new discussion on the community announcing yourself,
your expertise, and your availability. The TCL or other community leaders will soon reach out to
you to discuss how you can help.
Feel free to contribute as much or as little as you can, since we value contributions of all sizes. For example, we
have contributors that do one of the following:
• Provide spelling and grammar changes to Benchmarks. This is indispensable for the creation of a
professional result.
• Test proposed recommendations and provide feedback via tickets. This is indispensable for the creation
of a reliable and widely applicable result.
• Provide a starting point for a new Benchmark that was initially developed outside of the CIS Benchmark
process by a given company or set of individuals. This then forms the basis of an initial Benchmark and
community around it.
• Provide a detailed analysis of the variations in security configuration items from one operating system
release to another. This is an essential contribution and helps focus the community to efficiently work on
the changes that matter out of potentially hundreds of possibilities.
Diversity of expertise and viewpoints in the community is key to creating a widely applicable and used Benchmark,
and any contribution you can provide is valuable and appreciated.
8.1.1 Emphasis
• Emphasis works in WorkBench and Word exports.
• Strikethrough is not available.
• Emphasis does NOT work inside code blocks or inline code.
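For example, the source *this text will be italic* renders in italics, and **this text will be bold** renders in bold.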
8.1.4 Lists
Lists will render in MS Word and WorkBench. Unordered list items can begin with any of the following markers:
• Asterisks (*)
• Minuses (-)
• Pluses (+)
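For example, each of the following lines renders as a bulleted list item:

* An item introduced with an asterisk
- An item introduced with a minus
+ An item introduced with a plus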
Horizontal rules can be created with three or more hyphens, asterisks, or underscores on a line by themselves:
---
***
___
Paragraphs are separated by a blank line:
This line is separated from the one above by two newlines, so it will be a *separate paragraph*.
In this example, we have an existing ticket on Recommendation 1.3 of the CIS Google Chrome – Test v0 Benchmark.
It turns out that this ticket also applies to another recommendation in another Benchmark, and we would like to
keep track of this.
We will link this existing ticket to the same Recommendation (1.3) in a different Benchmark (CIS Google Chrome – Test-Child v0). This mimics the very common case of a ticket assigned to a published Benchmark that we would like to also assign to the same recommendation in the updated Benchmark currently in development (vNext).
Figure 39: We would like this same ticket linked to this Recommendation in this Benchmark
Open the ticket by clicking the "Box – Arrow" button on the ticket in the right pane (see Figure 40).
This will display the Ticket window; here, click Edit (Figure 41).
Now in the “Add Link” text box you will type the following (Figure 42): r_2036425
NOTE: Detail on how/why this works will be given in the next section (See 8.2.1).
NOTE: WorkBench will give a preview of the target object, in the above case Recommendation 1.3 - (L1)
Ensure 'Allow Google Cast to connect to Cast devices on all IP addresses' is set to 'Disabled'.
Notice that Recommendation 1.3 - (L1) Ensure 'Allow Google Cast to connect to Cast devices on all IP addresses' is set to 'Disabled' is now listed twice under Linked Objects, once for each of the two different Benchmarks. The ticket is still on the original Benchmark (see Figure 45).
8.2.1 OK, so this all seems a bit magical. How does this work?
The linking trick is to use the first letter of what you're linking to, followed by '_' and then the number of the item from the end of the URL for the item. For example:
• Benchmark: b_<#>
• Section: s_<#>
• Recommendation: r_<#>
• Ticket: t_<#>
• Discussion: d_<#>
So where exactly does the <#> come from? Here it is from the previous example.
Figure 47: Getting the target object number from the viewed object’s URL
It is the number at the end of the URL for the selected object (in this case a Recommendation). So, putting these
two things together we get: r_2036425, which is what we entered in the Add Link field for the ticket. This process
is internally known as “The Mike Method” in recognition of the BMDT member who popularized it.
Using the various prefixes above and this general method, links can be attached to a wide variety of Community and Benchmark objects (a Discussion, a Benchmark itself, a Section, etc.).
NOTE: This linking is not confined to a single community; linking to an object in another community is possible.