
CIS Benchmark Development Guide
v07 - 6/27/2023
1 TABLE OF CONTENTS
2 Introduction ............................................................................................................................... 4
3 The Benchmark Development Team Roles .................................................................................. 4
4 Benchmark Overview ................................................................................................................. 5
4.1 Benchmark Technology Scope .........................................................................................................5
4.2 BDP Overview .................................................................................................................................6
4.2.1 CIS WorkBench ....................................................................................................................................................... 6
4.2.2 Initial Benchmark Creation .................................................................................................................................... 6
4.2.3 Recommendation Creation .................................................................................................................................... 7
4.2.4 Consensus Review .................................................................................................................................................. 7
4.2.5 Publishing the Final Benchmark ............................................................................................................................. 8

5 The Basics .................................................................................................................................. 8


5.1 All Community Members .................................................................................................................8
5.1.1 Getting a Workbench Account ............................................................................................................................... 8
5.1.2 Joining a Community .............................................................................................................................................. 9
5.1.3 Viewing Community Benchmarks ........................................................................................................................ 14
5.1.4 Discussions ........................................................................................................................................................... 17
5.1.4.1 Creating New Discussions ........................................................................................................................... 19
5.1.5 Tickets .................................................................................................................................................................. 21

6 The Details ............................................................................................................................... 23


6.1 More Advanced Contributors......................................................................................................... 23
6.2 Benchmark Editors ........................................................................................................................ 27
6.2.1 Benchmark Structure ........................................................................................................................................... 27
6.2.2 Recommendation Purposes ................................................................................................................................. 28
6.2.3 Assessment Status ............................................................................................................................................... 28
6.2.4 Process for Creating a Recommendation ............................................................................................................. 29
6.2.4.1 Title field ..................................................................................................................................................... 30
6.2.4.2 Assessment Status, Profiles, CIS Control fields, and MITRE ATT&CK Mappings ......................................... 30
6.2.4.3 Description field .......................................................................................................................................... 30
6.2.4.4 Rationale Statement field ........................................................................................................................... 31
6.2.4.5 Impact Statement field................................................................................................................................ 31
6.2.4.6 Audit Procedure field .................................................................................................................................. 32
6.2.4.7 Remediation Procedure field ...................................................................................................................... 34
6.2.4.8 Default Value field ....................................................................................................................................... 35
6.2.4.9 References field ........................................................................................................................................... 35
6.2.4.10 Additional Information field ........................................................................................................................ 35
6.2.5 Recommendation Formatting .............................................................................................................................. 36
6.2.6 Recommendation Organization ........................................................................................................................... 37

6.3 CIS Technology Community Leads (TCLs) ........................................................................................ 38


7 How Should I Proceed? ............................................................................................................. 38
8 Appendix – Advanced WorkBench Stuff .................................................................................... 40

CIS Benchmark Development Guide V07.docx Page 2 of 46


8.1 CIS WorkBench Markdown Reference ............................................................................................ 40
8.1.1 Emphasis .............................................................................................................................................................. 40
8.1.1.1 Example code: ............................................................................................................................................. 40
8.1.1.2 Expected results: ......................................................................................................................................... 40
8.1.2 Code Blocks .......................................................................................................................................................... 40
8.1.2.1 Example code: ............................................................................................................................................. 40
8.1.2.2 Expected results: ......................................................................................................................................... 40
8.1.3 Links ..................................................................................................................................................................... 41
8.1.3.1 Example Code: ............................................................................................................................................. 41
8.1.3.2 Expected results: ......................................................................................................................................... 41
8.1.4 Lists ...................................................................................................................................................................... 41
8.1.4.1 Example Code: ............................................................................................................................................. 41
8.1.4.2 Expected Results: ........................................................................................................................................ 41
8.1.5 Headers ................................................................................................................................................................ 42
8.1.5.1 Example code: ............................................................................................................................................. 42
8.1.5.2 Expected results: ......................................................................................................................................... 42
8.1.6 Horizontal Rule: ................................................................................................................................................... 42
8.1.6.1 Example Code: ............................................................................................................................................. 42
8.1.6.2 Expected Results: ........................................................................................................................................ 42
8.1.7 Line Breaks: .......................................................................................................................................................... 42
8.1.7.1 Examples Code: ........................................................................................................................................... 42
8.1.7.2 Expected results: ......................................................................................................................................... 43

8.2 Linking Discussion and Tickets to Multiple Benchmarks .................................................................. 43


8.2.1 OK, so this all seems a bit magical. How does this work? .................................................................................... 46


2 INTRODUCTION
Welcome to the Center for Internet Security (CIS) Benchmark Development Process (BDP). Organizations of all
sizes rely on CIS Benchmarks every day for secure system configuration guidance, and as a contributor you are
playing a critically important role in helping organizations worldwide better protect their ongoing computer
operations.

You have volunteered to be part of a community of individuals from all types of organizations on the journey to
create and maintain the current best practice in security in your area of expertise. The term “current best practice”
is one that explicitly recognizes the only constant in our industry: change.

All CIS Benchmarks are developed by the consensus of a given Technology Community. A Technology Community
is a diverse group of people actively interested in the security of a given technology area (MS Windows, Linux,
Postgres, etc.) and the development of related Benchmarks. They contribute to the testing and feedback of the
recommendations being considered. An active Technology Community is of extreme importance to the successful
development of a Benchmark, since it is the consensus of the community that ultimately determines the final
recommendations that are included in a given Benchmark release.

The goal of every Benchmark is to provide practical, security-specific guidance for a given technology that
concisely:

1. Describes a generally applicable baseline for all organizations


2. Recognizes the need to securely maintain operational effectiveness

This guide will provide an overview of the Benchmark Development Process (BDP), the roles and responsibilities
of various participants in the BDP, and an introduction to the environments and tools available to you. We are
excited to have you with us!

3 THE BENCHMARK DEVELOPMENT TEAM ROLES


To successfully develop a Benchmark, a focused team is drawn from the overall Technology Community to
spearhead the effort on behalf of the overall community. In general, this focused team has the following roles
represented:

• CIS Technology Community Leader (TCL): A CIS employee who is responsible for shepherding the given
Technology Community and resulting Benchmark through the development process and ultimately
publishing the result.
• Editor: An individual or individuals who have been given editing rights to the underlying Benchmark source.
These individuals have generally been contributors to other Benchmarks and have been sufficiently vetted
to allow this level of trust. Editors are typically leaders in the given Technology Community and are a great
resource for new members of the community.
• Subject Matter Experts (SMEs): In general, there are two types of SMEs involved in contributing their
security expertise and/or technical expertise to the development of the detailed recommendations:
o Technology Subject Matter Experts (T-SMEs): An individual or individuals who are actively
contributing their expertise to the development and testing of the detailed technical
recommendations of a given Benchmark.


o Security Subject Matter Experts (S-SMEs): An individual or individuals who are actively
contributing their expertise to the security goals and ramifications of recommendations of a given
Benchmark.

These roles do not always have to be held by unique individuals. Sometimes two or more roles can be embodied
by a single individual, depending on that individual’s expertise and availability and the overall community makeup.
The Technology Community is always actively involved in the process, monitoring and providing feedback, and
taking on the previously described roles as needed. In the end, the community provides consensus-based approval
of the recommendations in the given Benchmark.

4 BENCHMARK OVERVIEW
Before discussing the details of the BDP, let’s take a closer look at Benchmarks and their components. This
understanding is important for all roles so that there is a high level of consistency across Benchmarks and within
each Benchmark.

4.1 BENCHMARK TECHNOLOGY SCOPE


The first step in developing a Benchmark is to define the scope of the technology it will address. Benchmarks cover
a range of technologies, including operating systems (Microsoft Windows, Linux, Unix, macOS), cloud platforms and
services, databases and applications, and network and mobile devices. A given Benchmark defines a set of recommendations
for securing a specific technology, such as Microsoft Windows Server, Red Hat Linux, or Apple iOS. A Benchmark
may also include recommendations for platforms supporting that technology, if applicable. For example, the IBM
DB2 Benchmark has recommendations for securing the IBM DB2 software itself and the Windows or Linux host
OS on top of which the IBM DB2 software runs.

A Benchmark may address a single version or multiple versions of a particular technology. In general, technology
versions which are secured the same way should be covered by a single Benchmark, and technology versions
with significantly different audit and remediation instructions should be covered in separate Benchmarks.

When deciding which technology versions a Benchmark should cover, consideration is given to which versions
have adequate documentation and/or test instances available for developing and verifying audit and
remediation instructions.

It’s not necessary to test a Benchmark against every version of the technology it covers, as long as there are no
significant differences among those technology versions.

Once the technology versions and any supporting platforms (e.g., operating systems running below a client or
server application) have been identified, profiles will need to be defined. Each Benchmark will have one or more
profiles. A profile is a collection of recommendations for securing a technology or a supporting platform. For
example, the IBM DB2 Benchmark has profiles for the IBM DB2 software itself, the Windows host OS platform,
and the Linux host OS platform.

Currently CIS Benchmarks have at least one profile defined. The basic profile for each technology or supporting
platform is called a Level 1 profile. Level 1 profiles contain recommendations that:

• are the starting baseline for most organizations;


• are practical and prudent;


• provide a clear security benefit; and
• do not inhibit the utility of the technology beyond acceptable means.

Level 2 profiles may optionally be defined to extend Level 1 profiles. Generally, Level 2 profiles contain all Level 1
recommendations plus additional recommendations that:

• are intended for environments or use cases where security is more critical than manageability and
usability;
• may inhibit the utility or performance of the technology; and
• may limit remote management/access capabilities.

Each recommendation is assigned to one or (in rare cases) two profiles. For example, a single IBM DB2 Benchmark
recommendation may be assigned to both a Level 1 Windows host OS profile and a Level 1 Linux host OS profile
because it applies to both platforms, even though the audit and remediation steps on each platform may differ.

Someone who wants to use a Benchmark would select the profile that best suits their platforms, security
requirements, and operational considerations. Any recommendations not included in a selected profile would be
omitted for remediation and auditing purposes.
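To make the dual-profile case above concrete, here is a rough sketch of how such a recommendation's profile applicability might appear; the section number, setting name, and profile names are hypothetical placeholders, not taken from an actual Benchmark:

```markdown
### 3.1.4 Ensure 'example_setting' Is Enabled (hypothetical)

**Profile Applicability:**

* Level 1 - IBM DB2 on Windows
* Level 1 - IBM DB2 on Linux
```

A user hardening a Linux-hosted DB2 instance would select the Linux profile and follow only that platform's audit and remediation steps; material assigned only to the Windows profile would be omitted.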

4.2 BDP OVERVIEW


This section will give an overview of the BDP, the tools involved, and how the various roles fit into it. More details
on each part of the BDP are given later in this document.

4.2.1 CIS WorkBench


CIS WorkBench is a web-based tool used by the Technology Community to develop and maintain Benchmarks in
their technology area. The primary aspects of the tool are:

• Web browser-based access – Anyone with an Internet connection and a web browser can potentially join
and contribute to a community and a Benchmark. No software needs to be installed locally.
• Reach the Technology Communities – Users can join communities for the Benchmarks they are interested
in and use CIS WorkBench to interact with these communities via threaded discussions.
• Create, edit, and maintain Benchmark documents – Certain community members can use CIS WorkBench
to create and revise recommendations and accompanying prose (the actual text that makes up a
Benchmark) for a given Benchmark. This capability is generally limited to a few people in the roles of CIS
TCL and Editor.
• Suggest changes to Benchmarks – Anyone in the community can suggest changes to a Benchmark via the
WorkBench ticket or proposed change process. These tickets and proposed changes are viewed and
discussed by the community and can result in changes to a Benchmark if consensus is reached.

Details on using CIS WorkBench are covered later in this document. For now, it is important to understand that
CIS WorkBench is how technology communities and their Benchmarks are managed and developed.

4.2.2 Initial Benchmark Creation


For any new Benchmark being developed, the TCL for the given community will put the initial framework in place.
This new Benchmark could be for a new technology, in which case the initial framework in CIS WorkBench could
be quite sparse, or it could be a new Benchmark derived from an existing one (new Ubuntu or Windows release,
etc.) where the recommendations and prose from previous releases can be used as a starting point. In any case,
the TCL will get the ball rolling and create the initial structure of the new Benchmark.


4.2.3 Recommendation Creation
Recommendations are the key component of any Benchmark. Each recommendation describes a security
configuration setting that the Benchmark user needs to locate, audit, and potentially remediate to bring the
technology into compliance. One important aspect of any CIS-approved Benchmark is that every
recommendation meets strict criteria, including:

• Profile (Level 1, Level 2, etc.) – the profile(s) to which this recommendation is assigned
• Mapping to CIS Critical Security Controls – the CIS Controls Safeguards to which this recommendation
maps (CIS Controls mappings are supported for v7.1 and newer)
• Description – detailed information pertaining to the security setting
• Rationale – the reason the recommendation is being made
• Impact Statement – any non-obvious adverse consequences likely to be experienced by an enterprise
putting this recommendation into effect
• Audit Procedure – a discrete set of steps used to determine whether a target system is in compliance with
the recommendation
• Remediation Procedure - a discrete set of steps used to configure the target system to the
recommendation
• References – additional external information relevant to this recommendation (URLs to vendor
documentation, articles, etc.)

This level of detail has helped make CIS Benchmarks the industry standard for quality and ease of use. This detail
involves additional work during the creation process, but is well worth the effort to create a deliverable that can
be applied by the broadest user base.

4.2.4 Consensus Review


After the in-development Benchmark has reached the point that the community and the TCL feel it is ready to be
published, the Benchmark will go through consensus review. Consensus review starts when the TCL or an Editor
creates a new discussion thread in CIS WorkBench announcing that a DRAFT version of the Benchmark is available
and that feedback and comments are welcome. This DRAFT version is a PDF export of the Benchmark, watermarked
‘DRAFT’ and with DRAFT appended to its name. This DRAFT PDF will be uploaded to the Files area of the Benchmark.

NOTE: The announcement will be sent to everyone who has joined this community and has the proper
notification settings (more details on joining a community and notifications are covered in
section 5.1.2).

During this process, community members are encouraged to review the DRAFT Benchmark and comment on, or
create new tickets and discussions with feedback for the development team on the recommendations in the
Benchmark. Anyone from that community can contribute and all feedback is welcome.

Generally, consensus review lasts for at least two weeks and can be extended depending on the amount of
feedback received via comments, discussions, and tickets submitted on the draft. Tickets can be created for a
number of reasons, but each will ideally be some form of change proposal. Tickets are discussed via comments in
CIS WorkBench and during the community's recurring open “Community Call” meetings until a conclusion is
reached and action is taken (a ticket can be resolved, rejected, or deferred). Sometimes it is necessary to defer a
ticket to the next version of the Benchmark.


This review period is also an ideal time for the TCL to perform quality control and to complete the development
of derivative products and the mapping of the CIS Controls.

4.2.5 Publishing the Final Benchmark


Once consensus is complete, the TCL will go through the process of using CIS WorkBench to create and publish
the appropriate exported files, one of which is the PDF for use on the CIS Benchmark website for anyone to access
(https://www.cisecurity.org/cis-Benchmarks/).

The TCL primarily guides the Benchmark through the development process, but it is the community members who
actively develop, review, test, and approve each recommendation that is ultimately included in the final released
Benchmark.

5 THE BASICS
This section will cover the CIS WorkBench tool in more detail and the capabilities it provides to individuals in the
various roles.

5.1 ALL COMMUNITY MEMBERS


This section gives an overview of the CIS WorkBench environment from the perspective of a new user. It is not
intended to be comprehensive, but covers the basics and encourages the new user to explore the tool, the
technology communities available, and the Benchmarks in progress. The overall goal is to give the new user
enough information and encouragement to pick at least one technology community to get involved in and
hopefully contribute to a Benchmark in active development.

5.1.1 Getting a Workbench Account


The first step in getting involved with the BDP is to get a free account for CIS WorkBench
(https://workbench.cisecurity.org/). This process is shown in Figure 1 and Figure 2 below. Once your request is
approved, you will be given basic access to CIS WorkBench and the various technology communities and
Benchmarks.


Figure 1: CIS Workbench Login Page

Figure 2: Registering for a CIS Workbench Account

Once approved, you will receive an email at your registered email address telling you that your account is ready.
At this point you can log in to CIS WorkBench with your credentials.

5.1.2 Joining a Community


Once your account is created and you log in to the CIS WorkBench site, you can find a Technology Community that
you are interested in joining. You can do that by pressing the “Join A Community Today!” button shown in Figure
3. This will present you with a list of possible communities to join. Browse the list and select one or more
Technology Communities you are interested in, as shown in Figure 4.

Figure 3: User Home Page


Figure 4: Finding a Technology Community

From the list shown, you can get an indication of how active a given community is by looking at the numbers in
the displayed columns:

• # of Benchmarks: The number of Benchmarks maintained by this community


• # of Milestones: The number of Benchmark project milestones in this community
• # of Discussions: The number of discussions that have occurred in this community

In general, the more discussions in a community, the more active it is. You can learn more about a given
community prior to joining by clicking on the community’s name in the above list. For example, clicking on “CIS
Apple OS Benchmark” takes you to this community’s home page, as shown in Figure 5 through Figure 7.


Figure 5: CIS Apple OS Benchmark Technology Community Homepage (Top)

From this page, you can see the CIS TCL’s name, any important announcements for this community, and the
activities in which this community is currently involved. Community Activity is a timeline showing the most
recent activity in this community. Scrolling down…

Figure 6: CIS Apple OS Benchmark Technology Community Homepage (Middle)

Next is the Welcome section. This section describes the community and has a link to this Guide. It also lists key
contributors in the community and its Benchmarks. Finally, it gives details about any Community Call for the
community. These are generally WebEx meetings held on a regular basis (weekly, bi-weekly, or monthly) and are
open to anyone who wants to join and help out. Scrolling down…


Figure 7: CIS Apple OS Benchmark Technology Community Homepage (Bottom)

There are four important areas here:

• Tickets: These are the most recent change requests for the Benchmarks maintained by this community.
• Discussions: These are the most recent posts to the community’s discussions on various topics related
to the Benchmarks in this community.
• Benchmarks: These are the most recent Benchmarks being maintained by this community.
• Milestones: These are the most recent project milestones for the community’s Benchmarks in
development.

NOTE: You can view any of the above by clicking on the item name in the given list, but to actually
contribute (create or add a comment to a discussion or ticket, etc.) you must join the community.

Joining a community basically means you are interested in the activities of this Technology Community and the
Benchmarks they are creating, as in Figure 8. Practically, this means you will receive notifications about activities
in this community (Discussion items, Ticket items, etc.) via email, and you are able to create and comment on
tickets and discussions.

Figure 8: Technology Community Joining Acknowledgment Dialog


How you are notified can be modified via the “Notification Settings” page, available by clicking on your
username in the upper right corner of the screen, as shown in Figure 9. On an actively developed Benchmark you
can receive a number of notifications per day if this is set to “Immediate Emails”. It is strongly suggested that you
set this to “Daily Digest” for communities of high interest or “Weekly Digest” for those of less interest.

Figure 9: Notification Settings

The notification settings for each currently joined community are listed here and can be adjusted.

Figure 10: Notification – Individual Joined Community Settings

Also, at the top of the “Notification Settings” page is a way to set the default notification type for any
community joined in the future.

Figure 11: Notifications – Manage Defaults Settings


It is strongly suggested that you set this to “Daily Digest Email” to avoid getting a flood of emails.

Figure 12: Manage Defaults Settings – Subscription Type

5.1.3 Viewing Community Benchmarks


Since Benchmark publishing is the primary reason the Technology Communities and CIS WorkBench exist, let’s
look at a Benchmark in more detail. We’ll pick the CIS Apple macOS 13 Ventura Benchmark v1.1.0 from the
Benchmarks list on the CIS Apple OS Benchmark Technology Community Homepage from Figure 5, as shown more
closely in Figure 13. This Benchmark is currently in Draft mode and is actively being developed.

Figure 13: Selecting a Benchmark

This brings you to the overview page of the Benchmark, which can be considered the title page of the Benchmark,
as shown in Figure 14. Before we go any further, let’s discuss the three major areas displayed in the browser.


Figure 14: Benchmark Overview Page

The leftmost pane is used for navigation within the Benchmark’s recommendations and other parts of the CIS
Apple OS Benchmark Technology Community site (related files, related tickets, etc.) The center pane is generally
the primary working area for whatever is selected in the navigation pane. The rightmost pane also changes based
on what is selected in the navigation pane, but is restricted to displaying information based on the tabbed
categories at the top of this pane (tickets, discussions, and proposed changes).

In the navigation pane, the lower section is dedicated to navigation of the specific Benchmark being displayed.
This is a recommendation tree for this Benchmark, and each item listed is one of these:

• A section/subsection: This is a set of subsections and/or recommendations. Sections and subsections are
used for logically grouping related recommendations.
• A recommendation: This contains the detailed prose for a security setting or closely related settings of
interest.

As an example, in Figure 15 we have selected recommendation 2.11.1 (Ensure Users' Accounts Do Not Have a
Password Hint), the prose of which is now displayed in the center pane. The right pane shows there are no tickets
for this recommendation.

Figure 15: Sections/Subsections and Recommendations

Also, in the left pane you can see that the entire Benchmark has 8 major sections. Section 2 (System Settings) is
made up of 15 subsections. Subsection 2.11 (Touch ID & Password (Login Password)) has 2 recommendations in it.
Viewing different recommendations in the Benchmark can be done by selecting the recommendation of interest
in the left pane, which will display the corresponding prose in the center pane.

NOTE: “<” and “>” buttons are also available to move between recommendations within a given
section/subsection (top left of center pane).

This process certainly works for viewing a given Benchmark, but many people would rather view the Benchmark
in a more standardized form (PDF, MS Word, or MS Excel). This can be done for published Benchmarks by going
to the “Files Area,” as shown in Figure 16. This brings you to the Files selection page, as shown in Figure 17.

The PDF version of the Benchmark contains all the prose details in the original CIS WorkBench form, but generally
is easier to read and does not need a special application (CIS WorkBench) to view. The PDF, MS Word, and Excel
versions will always be available for published Benchmarks but may not be available for Benchmarks being actively
developed. When in development, it is best to view the Benchmark in CIS WorkBench itself.

NOTE: Which files are displayed depends on whether you or your company are SecureSuite members. The
PDF files are available to anyone, but the other formats are only for SecureSuite members.

NOTE: During the consensus period for a new Benchmark release, the Files Area will contain a DRAFT
version of the Benchmark in PDF format for public review.

Figure 16: Benchmark Files Area

Figure 17: Files Selection Page

Select and download a PDF version of this Benchmark if available, as shown in Figure 18.

Figure 18: PDF Version of Benchmark

5.1.4 Discussions
Discussions are used by the community to talk about various subjects and are a good way to start getting involved
in a community. Discussions can cover a variety of areas, such as confusion about a Recommendation’s
usage or applicability, issues with implementation (the Audit and/or Remediation procedure), or problems with
AAC (OVAL and/or Script Check Engine (SCE) scripts), etc.

Let’s use the CIS Apple macOS 13.0 Ventura Benchmark v1.1.0 as an example.

Figure 19: Discussions for Apple macOS 13.0 Ventura v1.1.0 Benchmark

As can be seen in Figure 19, there are currently nine discussions for this Benchmark. Clicking on the title of the
discussion (/usr/sbin/chgrp no such file or directory) displays the discussion details in the right pane, as shown
in Figure 20 (scroll down…).

Figure 20: Discussion Detail

The original topic description and any current comments on this discussion topic are listed. You can add a
comment to this discussion by typing in the lower text box and pressing Add Comment. Your comments will
become part of this discussion topic and will be viewable by all the community members. Members joined to this
community will be notified of any new topic, or comments added to an existing topic, based on their notification
settings.

In general, discussions should be linked to a specific object in the community (something that the discussion is
referencing). For example, objects can be:

• Technology Community: Linking a discussion here is generally done for community announcement and
other discussions of broader interest.
• A Given Benchmark: Linking a discussion here is done for general discussions about a given Benchmark.
• A Specific Section or Subsection in a Benchmark: Linking a discussion here is done for discussions about
a given section of a Benchmark.

• A Specific Recommendation in a Benchmark: Linking a discussion here is done for discussions about a
given recommendation of a Benchmark.

5.1.4.1 Creating New Discussions


It is generally easiest to first select the object of interest in the left navigation pane, as in Figure 21, and then use
the “Create New Discussion” button in the right pane. This will link the selected object (Recommendation, Section,
etc.) to the created discussion.


Figure 21: Left Pane Selection for the New Discussion in Figure 24

The right pane button to do this looks a little different depending on whether there are existing discussions on the
selected object, as in Figure 22 and Figure 23.

Figure 22: New Discussion on a Recommendation with an Existing Discussion
Figure 23: First Discussion on a Recommendation

Either way brings you to the same discussion entry screen, as shown in Figure 24. This is the screen for entering
a new discussion for recommendation 3.1 (Ensure Security Auditing Is Enabled) in the CIS Apple macOS 13.0
Ventura v1.1.0 Benchmark. At this point, simply fill in the appropriate fields, describing the topic you want to
discuss in detail, and click Submit.

Figure 24: New Discussion Entry Right Pane

The number of discussions can also be seen in the left pane (Figure 25).

Figure 25: Discussions icon in Navigation Tree

In this example, Section 2 (System Settings) has 3 discussions linked either to it or to the
subsections/recommendations within it.

• One in subsection 2.6 – Privacy & Security
o It is actually on the Recommendation 2.6.2 – Ensure Sending Diagnostic and Usage Data to
Apple Is Disabled
• One in subsection 2.10 – Lock Screen
o Could be on this section or a Subsection/Recommendation in this section
• One in subsection 2.11 – Touch ID & Password (Login Password)
o Could be on this section or a Subsection/Recommendation in this section

5.1.5 Tickets
The concept of a ticket will be familiar to many people who have used other types of issue tracking software. In
general, a ticket signifies that there is a specific issue that needs to be addressed and tracked to completion.
Tickets are generally more specific than Discussions and usually cover errors and/or needed changes to a given
Recommendation, such as errors with the Audit and/or Remediation procedure, or errors with AAC (OVAL and/or
Script Check Engine (SCE) scripts), etc.

NOTE: Tickets are the most common form of community communication on Benchmarks in active
development, and progress on tickets is tracked using the Milestone tool in WorkBench. Most
TCLs and Editors use tickets as a to-do list for changes and additions to a Benchmark.

Figure 26: Tickets for Apple macOS 13.0 Ventura v1.1.0 Benchmark

As shown in Figure 26, there are currently 22 tickets for this Benchmark. Clicking on the title of the ticket
(Freeform.app - ensure "Disable iCloud Sign In Prompt") displays the ticket detail in the right pane, as in Figure 27
(Scroll down…).

Figure 27: Ticket Detail

The original ticket description is listed, along with who it is currently assigned to, its status, and its priority. Any current
comments on this ticket topic are listed as well. You can add a comment to this ticket by typing in the lower text
box and pressing Add Comment. Your comments will become part of this ticket and will be viewable by all
community members. Members of the community will be notified of new tickets or comments added to existing
tickets based on their notification settings.

Tickets should be linked to a specific object in the community, just like discussions (Community, Benchmark,
Section, and Recommendation). The link between a Ticket and an object is even more important since Tickets
generally call for a change to be made; it is therefore critical to track the history of when and why a particular
modification was made. Follow the same basic process as for creating a new discussion (5.1.4.1 - Creating New
Discussions), but use the “Tickets” tab in the right pane.

The number of tickets can also be seen in the left pane (Figure 28).

Figure 28: Tickets icon in Navigation Tree

In this example, Section 2 (System Settings) has 8 tickets linked either to it or to the subsections/recommendations
within it.

• One in subsection 2.3 – General


o Could be on this section or a subsection/Recommendation in this section
• Two in subsection 2.6 – Privacy & Security
o One on subsection 2.6.1 – Location Services
▪ Could be on this section or a subsection/Recommendation in this section
o One actually on the Recommendation 2.6.6 – Audit Lockdown Mode
• One in subsection 2.10 – Lock Screen

o Could be on this section or a Recommendation in this section
• Three in subsection 2.11 – Touch ID & Password (Login Password)
o Could be on this section or a Recommendation in this section
• One in subsection 2.15 – Notifications
o Could be on this section or a Recommendation in this section

With the information covered thus far, you can fully contribute to any Benchmark using discussions and tickets. If
you are ready to get started, please jump to Section 7 (How Should I Proceed?). Otherwise, please read on and
learn more about making Benchmarks and using the Workbench tool.

6 THE DETAILS
This section goes over some of the advanced capabilities of the CIS WorkBench tool and some of the activities
typically done by individuals in more advanced roles.

6.1 MORE ADVANCED CONTRIBUTORS


This section gives an overview of the Proposed Change capability of Workbench. This is a more advanced feature
that allows any contributor to make proposed changes directly to the Benchmark prose. These changes can be
viewed, partially accepted, fully accepted, or rejected by the leaders of the Benchmark development (individuals
with editor rights).

NOTE: Although this feature can be used by anyone to propose a change to a Benchmark, it gets into
the editing capabilities and conventions of the WorkBench tool, which can add complications.
For this reason, we suggest most users utilize the discussion and ticket capabilities discussed
previously for most issues.

Let’s walk through a proposed change on a test version of the CIS Apple macOS 13.0 Ventura v1.1.0 Benchmark,
recommendation 1.2 (Ensure Auto Update Is Enabled). You can see the initial recommendation prose in Figure 29.

Figure 29: Initial Recommendation Prose and Proposed Change Selection Highlighted

Selecting Propose Change in the left pane starts the process (see Figure 29). This will display the Edit Proposed
Recommendation screen, as shown in Figure 30.

NOTE: Figure 30 is not very legible. It is here for navigational reference and to get an idea of what it
looks like. A more detailed description of the various fields will follow.

Figure 30: Proposed Recommendation Edit Screen – For Overall Navigational Reference

This screen is essentially the same screen Benchmark editors use to create and modify the Benchmarks directly,
and it is made up of a number of areas:

• Artifacts: This area is primarily used by the CIS TCL to develop Automated Assessment Content (AAC) that
corresponds to the specific test(s) described in the text for this recommendation. A full discussion of
artifacts and AAC is beyond the scope of this document, but in general, AAC are XML files following the
Security Content Automation Protocol (SCAP) set of standards, and specifically the Extensible
Configuration Checklist Description Format (XCCDF) and Open Vulnerability and Assessment Language
(OVAL) standards. AAC can be read by CIS-CAT and a wide variety of third-party assessment tools to
analyze systems for compliance to the given CIS Benchmark.
• Recommendation Properties: In general, this is the area where most changes are focused. The following
sub-properties will be described briefly here, but for a more detailed explanation, please see Section 6.2.1.
o Title: Short descriptive title for this recommendation.
o Assessment Status: In general, Automated means the CIS Community believes that this value
can be automatically collected and definitively resolved to a Pass/Fail state by a scanning tool,
without human involvement. All other cases are considered Manual. Checks that merely
assist the admin (e.g., collecting and presenting a set of IP addresses for the admin to
review) are still considered Manual.
o Profiles: Predefined groups of recommendations for a given purpose.
o CIS Controls: The CIS Control Safeguards this recommendation addresses.
o MITRE ATT&CK Mappings: This field is still somewhat experimental. Some Benchmarks are
using it, but most are not. In general, do not use this unless specifically directed by the TCL to do
so.
o Description: A detailed description of how this recommendation affects the target or target
environment’s functionality.
o Rationale Statement: The specific reasoning this recommendation is being made.
o Impact Statement: Any non-obvious adverse security, functionality, or operational
consequences from following this recommendation.
o Audit Procedure: Step-by-step instructions for determining if the target system is in compliance.
o Remediation Procedure: Step-by-step instructions for applying this recommendation to the
target (bringing it into compliance).
o Default Value: The default value for the given setting in this recommendation, if known.
▪ If empty this section will not show up in MS Word export and subsequently the PDF.
o References: URLs to additional documentation on this issue, if applicable.
▪ If empty this section will not show up in MS Word export and subsequently the PDF.
o Additional Information: Supplementary information that does not correspond to any other
field.
▪ If empty this section will not show up in MS Word export and subsequently the PDF.

In the following example, we are going to make some proposed changes to the Description and Rationale fields
above. Figure 31 shows the original recommendation text. Figure 32 shows we have made three modifications:

1. Description – In the first sentence, replaced “your” with “a”
2. Rationale – Deleted “so as”
3. Description – Added a Note

Figure 31: Original Recommendation
Figure 32: Modified Recommendation

Now, if we reselect this recommendation in the left pane navigation, we see the result in Figure 33.

Figure 33: The Recommendation After Submitting the Proposed Change

The recommendation text in the center pane looks the same, but there is now some additional information and
a Show Diff button on the right pane. When the Show Diff button is pressed, the areas that changed are
highlighted in red (old version) and green (new version), as in Figure 34.

Figure 34: Proposed Change “Diff” Highlighting

When a Benchmark editor sees a Proposed Change, they will have some additional capabilities. They can Accept
or Reject a given change, and can modify the suggested change further since they have full editing rights to the
Benchmark.

Similar to Discussions and Tickets, linked Proposed Changes can be seen in the left navigation tree (see Figure
35).

Figure 35: Proposed Changes icon in Navigation Tree

In this example, Section 2.6 (Privacy & Security Settings) has 1 proposed change linked either to it or to the
subsections/recommendations within it.

• One actually on the Recommendation 2.6.6 – Audit Lockdown Mode

6.2 BENCHMARK EDITORS


“Benchmark editors” is a shorthand term for community members of all backgrounds who have editing rights on the
Benchmark source. These individuals have been involved as active contributors on previous Benchmarks, proven
their commitment to the Benchmark Development Process, and been vetted by CIS for this higher level of access.
In general, Benchmark editors take a leadership role in developing a given Benchmark. They propose and draft
new recommendations for review by the community. In many ways, Benchmark editors are similar to Maintainers
in open-source projects in that they can change the underlying source of the Benchmark based on community
submissions. This section covers details of recommendation development that editors typically perform or
oversee.

NOTE: This section covers items that are useful for Benchmark editors and is not necessary for the
general contributor. Of course, for those interested in learning more about what Benchmark
editors typically do, or the details of what makes a good Benchmark, feel free to read this section.

6.2.1 Benchmark Structure


The structure of Benchmarks varies slightly from one Benchmark to another, but the typical high-level components
in order are:

• Front Matter
o Cover Page
o Terms of Use
o Table of Contents
• Overview
o Untitled introductory paragraph
o Intended Audience
o Consensus Guidance

o Typographical Conventions
• Recommendation Definitions
o Title
o Assessment status
o Profile Definitions
o Acknowledgements
o Etc.
• Recommendations
• Appendix: Summary Table
• Appendix: CIS Controls vX Implementation Group (IG) Mappings
• Appendix: Change History

Most of these components are either automatically generated (cover page, terms of use, table of contents, etc.)
or are mostly the same for every Benchmark, with minor customizations (for example, the introductory paragraph
for the overview should state which technology versions the Benchmark covers and which versions it was tested
against).

Nearly all effort put into developing and maintaining a Benchmark involves the Recommendation section, and the
rest of this chapter covers it exclusively.

6.2.2 Recommendation Purposes


The key to creating a Benchmark is fully understanding the type of content each recommendation should contain.
Writing recommendations is easy; writing recommendations that people find clear and useful is more difficult.
Always remember that each recommendation is intended to be used in some way, usually to remediate a target
asset so it conforms to the recommendation, or to audit a target asset to confirm compliance with the
recommendation. Each recommendation should provide a goal state for the target asset, such as having the
operating system’s full disk encryption capability enabled or having a disaster recovery plan in place. Once you’ve
identified the goal state, you then write the recommendation to explain one or more methods for reaching the
goal state on the target asset (remediation) and confirming the target asset’s state (auditing).

In most cases, the goal state involves one or more configuration items, also known as attributes. The
recommendation would explain how to remediate the target asset’s configuration to reach the goal state—for
example, by using the asset’s administrative GUI to change the configuration, or by editing a configuration file
with a text editor. The recommendation would also explain how to audit the asset to confirm its configuration
complies with the recommendation—for example, by visually checking the value displayed in the asset’s
administrative GUI, or by viewing the contents of a configuration file. To the extent feasible, the remediation and
auditing information should be step-by-step instructions.

CIS Benchmarks should focus mainly on technical recommendations specific to the Benchmark’s target asset.
Generic recommendations, such as having a backup policy and physically protecting backup media, are usually not
as helpful as asset-specific recommendations.

NOTE: In general, the CIS Critical Security Controls cover higher-level, policy-type controls,
whereas Benchmark Recommendations are much more specific (set X to Y).

6.2.3 Assessment Status


Each recommendation has an assessment status of either Automated or Manual. Benchmark conformance in most
scanning tools is measured by enumerating all automated recommendations and assessing a target against them.

From time to time, when developing a recommendation, it’s difficult to ascertain what its assessment status
should be. In general, the status should be:

• Automated: Represents recommendations for which assessment of a technical control can be fully
automated and validated to a pass/fail state without human intervention. Recommendations will include
the necessary information to implement automation.
• Manual: Represents recommendations for which assessment of a technical control cannot be fully
automated and requires all or some manual steps (human intervention) to validate that the configured
state is set as expected. The expected state can vary depending on the environment (for example, a
recommendation to ensure backups are centrally available).

In a few cases, our consensus process can’t provide guidance on what a compliant state is. For example, a setting
might have a distinct set of possible values from which the consensus team is unable to make an explicit
recommendation. We could still create a recommendation that an enterprise take the setting under consideration,
but we can’t state a precise recommendation for the setting. Under such circumstances, the recommendation
would be set to Manual.

In the end, BOTH Automated and Manual recommendations are equally important, and users should strive to pass
them all, not just the more easily assessed Automated ones provided by a scanning tool (a typical auditor will review
ALL recommendations for compliance).

6.2.4 Process for Creating a Recommendation


Creating a recommendation is a twofold process. First, you must identify the goal state for the recommendation,
understand why that goal state is recommended, and determine how to remediate and audit the goal state.
Usually this involves one or more methods, such as reviewing the product’s documentation and existing
third-party security guidelines or experimenting with the product in a test environment.

Use references to bolster the reasoning for this recommendation. In general, references for recommendations fall
into two categories:

• Cybersecurity: These are references that support the use of this setting for security, items like
documented breaches that could have been mitigated by this setting, vendor announcements about using
this setting for some threat mitigation, etc.
• Configuration: These are generally references to vendor documentation that give more details about
configuring the setting and its impacts.

Second, you must document the recommendation using several standard fields. You may prefer to do all the
research first and then document everything, or to document the recommendation while you conduct your
research. Either way is fine. However, be aware that each recommendation should cover a single attribute or an
integrated set of attributes (for example, a set of access control lists for a file). Your research may identify multiple
attributes that should be remediated and audited separately, in which case you should write one recommendation
for each attribute. It will save you time if you identify the need for multiple recommendations before documenting
them.

Each recommendation contains several mandatory fields and may also contain additional fields as needed. The
following sections describe each field in the order displayed in CIS WorkBench and provide advice on how to
populate it.

6.2.4.1 Title field
The Title field must contain a concise summary of the recommended outcome or result, while being specific
enough that the recommendation won’t easily be confused with any other recommendation in the Benchmark.
Here are examples of possible titles:

• Ensure ‘Login Banner’ is set


• Ensure ‘Minimum Length’ is greater than or equal to 12
• Ensure WMI probing is disabled
• Ensure there is a backup policy in place
• Ensure the ‘MYSQL_PWD’ environment variable is not in use
• Ensure a Zone Protection Profile with an enabled SYN Flood Action of SYN Cookies is attached to all
untrusted zones

You may notice that some older Benchmarks use a different construction for titles. For example, instead of saying
“Ensure ‘Minimum Length’ is greater than or equal to 12,” a Benchmark might say “Set the ‘Minimum Length’ to
12 or greater.” This construction should not be used for new recommendations.

In terms of format, the title should mimic the examples above. The first word of the title should be capitalized,
and the names of specific settings and other proper nouns should be capitalized. All other words should be in
lower case. The title should be written as a phrase, not a complete sentence (e.g., no period at the end of the
text).

6.2.4.2 Assessment Status, Profiles, CIS Control fields, and MITRE ATT&CK Mappings
These four fields are all selection-based; you choose one or more values from already-populated lists.

• Assessment Status: Each recommendation must have an assessment status of either Automated or
Manual. See the discussion in Section 6.2.3 for more information on assessment status.
• Profiles: Each recommendation must reference one or more configuration profiles. This field has a
checkbox for each defined profile, and you may select as many of the checkboxes as needed. See the
discussion in Section 4.1 for more information on profiles.
• CIS Controls: Each recommendation should be linked to all applicable CIS control Safeguards (which are
listed and defined at https://www.cisecurity.org/controls/). For example, a recommendation for
enabling the use of authoritative time sources for clock synchronization should be linked to CIS v8 Critical
Security Control (CSC) 8.4 - Standardize Time Synchronization (“Standardize time synchronization.
Configure at least two synchronized time sources across enterprise assets, where supported.”)
o The goal here is to pick only the few Safeguards to which the given Recommendation strongly
maps (truly supports). In general, that is a maximum of 3 Safeguards.
• MITRE ATT&CK Mappings: This field is still somewhat experimental. Some Benchmarks are using it, but
most are not. In general, do not use this unless specifically directed by the TCL to do so.

6.2.4.3 Description field


The Description field must explain in some detail how the recommendation affects the target or target
environment’s functionality. This usually includes providing basic information about the target’s state or potential
state before the recommendation is implemented. Here is an example of a Description:

Tomcat listens on TCP port 8005 to accept shutdown requests. By connecting to this port and sending the
SHUTDOWN command, all applications within Tomcat are halted. The shutdown port is not exposed to the
network as it is bound to the loopback interface. It is recommended that a nondeterministic value be set
for the shutdown attribute in $CATALINA_HOME/conf/server.xml.

In this example, the first three sentences explain the undesirable state, and the last sentence states the
recommendation to change from the undesirable state to a more secure state.

The Description can also be used to clarify what the setting is about (especially in cases where many acronyms are involved):

This setting determines if DNS over HTTPS (DoH) is used by the system. DNS over HTTPS (DoH) is a protocol
for performing remote Domain Name System (DNS) resolution over the Hypertext Transfer Protocol Secure
(HTTPS). For additional information on DNS over HTTPS (DoH), visit: Secure DNS Client over HTTPS (DoH)
on Windows Server 2022 | Microsoft Docs.

The Description field should not be overly detailed. For example, there is no need for it to provide step-by-step
instructions for auditing or remediation or to specify the recommended value for the setting, since those will be
included in the Audit Procedure and Remediation Procedure fields.

6.2.4.4 Rationale Statement field


Each recommendation must have a Rationale Statement field which clearly articulates the specific reasons the
recommendation is being made. Statements that rely on phrases like "doing this is best practice" are
unacceptable. The Rationale Statement should provide clear supporting evidence for the security benefits to be
achieved by implementing the recommendation.

It can be hard to differentiate the Description and the Rationale Statement fields. Keep in mind that the
Description field explains what implementing the recommendation is going to do to the target in terms of changing
its functionality, and the Rationale Statement field explains why implementing the recommendation is beneficial
to security. The Description is what will be done, and the Rationale Statement is why it needs to be done. Here is
an example of a Rationale Statement corresponding to the Tomcat Description example above:

Setting the shutdown attribute to a nondeterministic value will prevent malicious local users from shutting
down Tomcat.
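As an aside, the “nondeterministic value” this recommendation calls for can be produced in many ways. One illustrative sketch, not the Benchmark’s prescribed remediation, assuming a POSIX shell on a system with /dev/urandom:

```shell
# Generate a 16-character random token suitable for the shutdown attribute.
random_token=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)
echo "Suggested shutdown value: $random_token"
# The token would then replace SHUTDOWN in $CATALINA_HOME/conf/server.xml,
# either by hand or with a careful scripted edit.
```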

6.2.4.5 Impact Statement field


An impact statement should document any likely adverse security, functionality, or operational consequences
from following the recommendation (even if it is small). For example, if making a new setting take effect will
require rebooting a host, the impact statement should state this. Another example is a recommendation that
makes one aspect of security stronger but as a side effect weakens another aspect of security.

Impact statements should focus on non-obvious impacts. Many recommendations have obvious impacts; for
example, disabling a service means the service will no longer be available. The intent of the impact statement is
to identify the impacts that are less likely to be recognized.

If there is no known impact for a recommendation, put “No known impact” in this section, but this should
rarely be needed.

6.2.4.6 Audit Procedure field
The Audit Procedure must provide specific instructions—step-by-step whenever feasible—for determining if a
target is in compliance with the recommendation. Whenever applicable, this should include explicitly stating the
recommended and acceptable values for the setting. Here’s an example of a relatively simple Audit Procedure
field:

Verify the shutdown attribute in $CATALINA_HOME/conf/server.xml is not set to SHUTDOWN.

$ cd $CATALINA_HOME/conf
$ grep shutdown[[:space:]]*=[[:space:]]*"SHUTDOWN" server.xml

The above command should not yield any output.

The beginning of the Audit Procedure should state what is to be done through “verify” language. The term “verify”
is preferred because it indicates the auditor must take action to confirm compliance.

If the Audit Procedure is very simple, such as verifying a particular policy exists, one sentence may be sufficient.
However, in most cases, more instructions will be needed. In the example above, the second and third lines specify
commands the auditor can use to verify the configuration, and the fourth line explains how to interpret the output
of the commands. Whenever feasible, provide commands, regular expressions, short scripts or code examples,
and other practical information auditors can reuse or adapt for reuse.
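For instance, the grep-based audit above could be packaged as a small reusable check. This is a sketch only: the helper name and the file layout are illustrative, and a real SCE script would report its result through scanner-supplied exit codes (such as $XCCDF_RESULT_PASS and $XCCDF_RESULT_FAIL) rather than echoing text.

```shell
# Sketch of a reusable check for the Tomcat shutdown-attribute audit.
# check_shutdown_attr is a hypothetical helper name; pass it the path to
# the server.xml being audited.
check_shutdown_attr() {
    conf="$1"
    # Non-compliant if the shutdown attribute still holds the default value.
    if grep -q 'shutdown[[:space:]]*=[[:space:]]*"SHUTDOWN"' "$conf" 2>/dev/null; then
        echo "FAIL"
        return 1
    fi
    echo "PASS"
    return 0
}
```

In an SCE wrapper, the echo/return pairs would become `exit "$XCCDF_RESULT_FAIL"` and `exit "$XCCDF_RESULT_PASS"` respectively.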

Here’s an example of an Audit Procedure field with several items:

Perform the following to verify that the recommended state is implemented:

1. Check to see if the ScoreBoardFile is specified in any of the Apache configuration files. If it is not
present, the configuration is compliant.
2. Find the directory in which the ScoreBoardFile would be created. The default value is the
ServerRoot/logs directory.
3. Verify that the scoreboard file directory is not a directory within the Apache DocumentRoot.
4. Verify that the ownership and group of the directory is root:root (or the user under which Apache
initially starts up if not root).
5. Verify that the scoreboard file directory is on a locally mounted hard drive rather than an NFS mounted
file system.

Although this example is detailed, it does not provide step-by-step instructions. For example, item 1 does not
explain how to find the Apache configuration files or how to check each of them for ScoreBoardFile.
Providing that level of detail would make the instructions extremely long, and most readers probably wouldn’t
need them, so omitting them is acceptable.
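Even so, an auditor could sketch item 1 as a short shell check. The following is only an illustrative sketch, not part of the Benchmark itself; the configuration directory path passed in is an assumption that varies by platform:

```shell
# Illustrative sketch of item 1 (not part of the Benchmark itself).
# The configuration directory passed as $1 is an assumption; adjust it
# for your platform (e.g., /etc/httpd or /etc/apache2).
check_scoreboardfile() {
  conf_dir="$1"
  # The directive may appear in any included configuration file, so search
  # the whole tree, skipping commented-out lines.
  if grep -R -E '^[[:space:]]*ScoreBoardFile' "$conf_dir" 2>/dev/null | grep -q .; then
    echo "ScoreBoardFile is set - continue with steps 2 through 5"
  else
    echo "compliant - ScoreBoardFile is not specified"
  fi
}
```

If the function prints the compliant message, the remaining steps can be skipped, mirroring the logic of item 1 above.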

For some Audit Procedures, a single set of instructions isn’t sufficient. There is more than one way to perform the
audit, or there is more than one set of conditions that can be met to demonstrate compliance with the
recommendation. Here’s an example of the latter:



Perform the following to verify that the recommended state is implemented:

1. Search the Apache configuration files to find all <Directory> elements.


2. Ensure that one of the following two methods is configured.
a. For the Deprecated Order/Deny/Allow method:
i. Verify there is a single Order directive with the value of Deny,Allow for each directory.
ii. Verify that the Allow and Deny directives have values that are appropriate for the purposes
of the directory.
b. For the Require method:
i. Verify that the Order/Deny/Allow directives are NOT used for the directory.
ii. Verify that the Require directives have values that are appropriate for the purposes of the
directory.

An Audit Procedure often combines these approaches, such as having step-by-step instructions that include
commands. Here is an example of prose instructions and commands together:

Perform the following to verify that the recommended state is implemented:

1. Use the httpd -M option as root to check which auth* modules are loaded.
httpd -M | egrep 'auth._'

2. Also use the httpd -M option as root to check for any LDAP modules which do not follow the same
naming convention.
httpd -M | egrep 'ldap'

Another common example is a setting that can be changed both by a management User Interface (UI) and
programmatically via a command line interface (CLI) or Application Programming Interface (API). All of these
methods can be listed in the Benchmark Recommendation in separate highlighted sections.



Figure 36: Example of UI and CLI Audit Method from CIS Amazon Web Services Foundations Benchmark

An Audit Procedure should also list any prerequisites needed to verify compliance. For example, the auditor may
need administrator-level privileges on the target, or a particular tool may need to be installed in order to view the
setting value. Prerequisites should be specified before instructions and commands; otherwise, an auditor may
attempt to follow the instructions and issue the commands before seeing the prerequisites.

NOTE: All these examples use full sentences with terminating punctuation. Sentence fragments should
only be used in cases where options are being listed, such as the example above introducing
instructions for each method by naming the method. Notes should be given if there are multiple
reasons for audits to vary and should be bolded to draw the users’ attention.

6.2.4.7 Remediation Procedure field


The Remediation Procedure is similar to the Audit Procedure, except that the Remediation Procedure provides
instructions for implementing a recommendation for a non-compliant target. The Remediation Procedure should
cover how to implement the recommended value and may also cover how to implement other acceptable values.

Here is a simple example of a Remediation Procedure:

To set a nondeterministic value for the shutdown attribute, update it in


$CATALINA_HOME/conf/server.xml as follows:
<Server port="8005" shutdown="NONDETERMINISTICVALUE">

Note: NONDETERMINISTICVALUE should be replaced with a sequence of random characters.

The Remediation Procedure should make it clear that the target’s state is to be changed through one or more
actions. Terms such as “set,” “update,” “create,” “change,” and “perform” indicate actions.

Another example of a Remediation Procedure indicates which steps can be skipped if the specified conditions are
met:



Perform the following to implement the recommendation:

1. If the apache user and group do not already exist, create the account and group as a unique system
account.
groupadd -r apache
useradd apache -r -g apache -d /var/www -s /sbin/nologin

2. Configure the apache user and group in the Apache configuration file httpd.conf.
User apache
Group apache
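As an aside, step 1's precondition ("if the apache user and group do not already exist") can be checked mechanically. The helper below is a hypothetical illustration, not part of the Benchmark: it inspects a passwd-format line for a non-login shell, which is the property the `useradd` command above establishes.

```shell
# Hypothetical helper (not part of the Benchmark): return success if a
# passwd-format entry (name:x:uid:gid:gecos:home:shell) ends in a
# non-login shell such as /sbin/nologin or /bin/false.
has_nologin_shell() {
  case "${1##*:}" in
    */nologin|*/false) return 0 ;;
    *)                 return 1 ;;
  esac
}

# Example use: check the account created in step 1.
# getent passwd apache | while read -r line; do
#   has_nologin_shell "$line" && echo "apache uses a non-login shell"
# done
```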

6.2.4.8 Default Value field


This field is used to record the default value for a setting, if it is known. If the default state is that the value isn’t
set, enter “Not set” for this field. If a default value is not applicable—for example, a recommendation that does
not involve a particular setting—just leave this field blank.

If the default value is not straightforward, it’s acceptable to have a verbose explanation here. One example is if
the default value varies based on the underlying platform, in which case you may need to list each possible
underlying platform and the associated default value.

6.2.4.9 References field


References can include, but are not limited to, the following categorical items:

• URLs to documentation or articles pertaining to the recommendation. URL references should prefer
vendor sources of information; reputable third-party sources should be used only exceptionally.
• Links to security examples for this recommendation (vulnerabilities addressed, breach analysis, etc.).

The references must be numbered for usability. Here is a sample of a reference list:

1. https://tomcat.apache.org/tomcat-5.5-doc/config/server.html
2. https://httpd.apache.org/docs/2.4/programs/configure.html

There is no particular order for the references, but if there are numerous references, they should be grouped by
type at a minimum (for example, all CCE IDs, then vendor URLs, and finally third-party URLs).

6.2.4.10 Additional Information field


A recommendation may include an Additional Information field with supplementary information that doesn’t
correspond to any of the other fields. The Additional Information field is rarely used. One possible use is to
mention other recommended actions that fall outside the scope of the recommendation—for example, deploying
Kerberos for organizational use. Another possible use is to define possible values for a setting, especially if many
such values have been defined, each needing its own explanation. Having such lengthy, detailed material within
another field would disrupt the flow of the recommendation, so placing it in the Additional Information field keeps
it out of the way.

NOTE: Some Benchmarks (generally the STIG Benchmarks) have been using this field to include STIG-specific
information on the Recommendation.



6.2.5 Recommendation Formatting
The following recommendation fields offer the set of formatting options depicted in the bar below: Description,
Rationale Statement, Audit Procedure, Remediation Procedure, Impact Statement, Default Value, and Additional
Information.

Figure 37: Formatting Toolbar for Editing Recommendations and Proposed Changes

Starting on the far left, the first set of three buttons is for Bold, Italic, and Heading:

• Bold is to be used sparingly to indicate caveats or other particularly important information, such as a note
about prerequisites for issuing a command.
• Italic can be used in two ways. First, it can denote the title of a book, article, or other publication. Second,
italicized text set in angle brackets <> denotes a variable for which a real value needs to be substituted.
o Many times, variables need to be denoted in a code block. In these cases, just use <> (code blocks
do not support other formatting like Bold, Italics, etc.)
• Heading is rarely used, but can be used to highlight separate subsections within a given section of a
recommendation. One possible example is separating a “UI Method” from a “CLI Method” in an Audit or
Remediation section.
• URL Link is useful since the format of a hyperlink in Markdown is easy to forget, and this will walk you
through creating it.
• URL/Image – Do not Use (this is no longer supported)

The next set of three buttons is for Unordered List, Ordered List, and Quote:

• An Unordered List is better known as a bulleted list. It should be used when there are two or more items
and they are options (e.g., look for any of the following values). It may be used when there are multiple
required items that can be performed in any sequence, but an Ordered List is generally preferred for those
cases.
• An Ordered List is a numbered list. It should be used whenever you are providing step-by-step instructions
where sequence is important. For usability reasons, an Ordered List is generally recommended for any
instructions with more than one step or item.
• The Quote button is used to indicate quoted text. Most Benchmarks do not use Quote formatting.

NOTE: There is no such thing as a list (either Ordered or Unordered) with just one item. No such lists
should be used in Benchmarks.

The next button is the Preview. This can be used to view a non-editable render of what the resulting text will look
like. Push the Preview button again to return to editing.

The last two buttons are for a Code Block and Inline Code:

• The Code Block button is used to denote a block of contiguous text as code, commands, or scripts by
displaying it in a monospace font and a grey background. See the examples throughout Section 6.2.4 for
text formatted as Code Blocks.
• The Inline Code button is used to mark part of a sentence as “Code” (monospace font). This is generally
used to indicate configuration setting names, file and directory names, parameter values, and other
similar pieces of text within a sentence.
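For instance, reusing the earlier Tomcat example, the Markdown source of a sentence with Inline Code applied might read:

```
Verify the `shutdown` attribute in `$CATALINA_HOME/conf/server.xml` is not set to `SHUTDOWN`.
```

The backticks mark `shutdown`, the file path, and `SHUTDOWN` as Inline Code when rendered.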



6.2.6 Recommendation Organization
Each Benchmark has its recommendations organized into multiple sections at a minimum. The sections are unique
for each Benchmark, but they often include the following:

• Planning and Installation: This encompasses any recommendations that apply to preparation before
installation or options for the installation itself, such as not installing unnecessary modules, or installing
additional modules to provide more security features. Configuration options available both during and
after installation should not be placed in this section.
• Hardening: This section is for actions that reduce the target’s functionality, remove weak default values,
prevent the use of inadequately secure options, delete sample content, etc.
• Access Control: This includes user and group identities, and ownership and permissions for resources (e.g.,
files, folders/directories, shares, processes, media).
• Communications Protection: This encompasses the cryptographic settings, protocol options, and other
security features involved in protecting network communications. Examples include SSL/TLS options,
certificate requirements, cryptographic key protection, and restrictions on which versions of network and
application protocols may be used.
• Operations Support: This covers security recommendations for typical security operations, such as
configuring logging, patching, security monitoring, and vulnerability scanning.
• Attack Prevention and Detection: This addresses the recommendations intended to stop or detect attacks,
ranging from the use of features that prevent sensitive information leakage or mitigate denial of service
attacks to the use of technologies for detecting malware.

In each of these examples, the name of the section indicates the purpose of the settings. Older Benchmarks may
have inconsistent section names involving types of threats or attacks, types of vulnerabilities, etc. Use of such
names should be avoided.

Grouping recommendations into sections makes the Benchmark much easier for people to understand, but it has
additional benefits. For example, if most or all access control-related recommendations require the Benchmark
user to have administrator-level privileges, that can be stated at the access control section level as a prerequisite
instead of having to list it within each individual access control recommendation.

Most Benchmarks have a large enough number of recommendations that they have subsections within most
sections. For example, an Access Control section might have subsections for identities, ownership, file permissions,
process permissions, etc. The general rule of thumb is to use subsections when the number of recommendations
within the section is unwieldy (e.g., dozens) or when the recommendations naturally fall into two or more
categories.

Each Benchmark section should have an introductory paragraph or two. It should indicate the overall intent of the
section’s recommendations and point out any significant cautions about the recommendations. For example, a
section on hardening that includes disabling unneeded modules might include text like this in its introduction:

“This section covers specific modules that should be reviewed and disabled if not required for business
purposes. However, it's very important that the review and analysis of which modules are required for
business purposes not be limited to the modules explicitly listed.”



6.3 CIS TECHNOLOGY COMMUNITY LEADS (TCLS)
As previously discussed, the TCL’s primary role is to shepherd the various technology communities they lead by
growing, supporting, and guiding them in the development of Benchmarks. The best way to think about the TCL’s
role is not as an expert in all the technologies they lead, but instead as a skilled project manager bringing together
the needed resources to develop a given Benchmark in a reasonable period of time. The resources the TCL draws
upon always consist of technology community members but can also include key contractors and other CIS
employees with appropriate skills and expertise.

All TCLs have editing rights like Benchmark Editors and at times fill that role on a given Benchmark. Also, like
Benchmark Editors, they are similar to Maintainers in open-source projects in that they can change the underlying
source of the Benchmark based on community submissions. Every TCL leads multiple technology communities at
the same time and generally has multiple Benchmarks in development simultaneously. Due to the diversity of
technologies involved, no TCL can be an expert in all of them, so they are very dependent on the technology
community’s editors, other contributors, and the overall consensus process to develop successful Benchmarks.

TCLs also have additional roles outside of those available to the community in general. These include:

• Set up new communities and Benchmarks in the CIS WorkBench tools


• Finalize and publish completed Benchmarks from WorkBench to the CIS public website
• Answer community questions and help contributors new to the Benchmark development process
• Promote the CIS Benchmark development process publicly and encourage involvement
• Recruit additional qualified contributors to their technology communities
• Work with technology providers for early access to releases and/or assist directly with Benchmark
development
• Develop and test appropriate artifacts and create AAC for use by CIS and various third parties
• Work with third parties to certify their tools to ensure compliance to the appropriate CIS Benchmarks
• Schedule and hold public calls on the status of their communities and Benchmarks under development

The bottom line is the TCL is the overall glue that holds the Benchmark development process together and gets a
result in a reasonable period of time. There are many facets to this job, and TCLs are quite busy, but their primary
job is always to help the communities they lead by providing the resources and guidance they need to succeed.
Please feel free to reach out to the TCL in any technology community with questions or feedback on a Benchmark
or the BDP in general. They love getting feedback!

7 HOW SHOULD I PROCEED?


Now that we’ve covered some of the basics of the WorkBench tool, what is the next step? Get involved! Here is a
simple process to get started:

1) Join a community of interest: Find one that you have some expertise and interest in, join it, and set your
notifications accordingly.
2) Get involved:
a. Option 1: Dive in immediately and create a new discussion on the community announcing yourself,
your expertise, and your availability. The TCL or other community leaders will soon reach out to
you to discuss how you can help.



b. Option 2: If you want to start out more slowly, comment on an existing discussion or ticket, and
help resolve an issue or clarify a topic.

Feel free to contribute as much or as little as you can, since we value contributions of all sizes. For example, we
have contributors who do one of the following:

• Provide spelling and grammar changes to Benchmarks. This is indispensable for the creation of a
professional result.
• Test proposed recommendations and provide feedback via tickets. This is indispensable for the creation
of a reliable and widely applicable result.
• Provide a starting point for a new Benchmark that was initially developed outside of the CIS Benchmark
process by a given company or set of individuals. This then forms the basis of an initial Benchmark and
community around it.
• Provide a detailed analysis of the variations in security configuration items from one operating system
release to another. This is an essential contribution and helps focus the community to efficiently work on
the changes that matter out of potentially hundreds of possibilities.

Diversity of expertise and viewpoints in the community is key to creating a widely applicable and used Benchmark,
and any contribution you can provide is valuable and appreciated.

We look forward to your contributions!



8 APPENDIX – ADVANCED WORKBENCH STUFF
This section is really only for those who want to learn some CIS WorkBench Tips and Tricks. Feel free to read this
section or skip it. It is up to you!

8.1 CIS WORKBENCH MARKDOWN REFERENCE


CIS WorkBench supports a subset of Traditional Markdown. This section covers what is supported.

8.1.1 Emphasis
• Emphasis works in WorkBench and MS Word exports.
• Strikethrough is not available.
• Emphasis does NOT work inside code blocks or inline code.

8.1.1.1 Example code:


• *Italics* or _Italics_
• **Bold** or __Bold__
• **Combo of asterisks and _Italics_**

8.1.1.2 Expected results:


• Italics or Italics
• Bold or Bold
• Combo of asterisks and Italics

8.1.2 Code Blocks


Code blocks work in WorkBench and the MS Word format export.

8.1.2.1 Example code:


```
This is code in a code block.

It can have multiple lines.

But not formatting.


```

This is `Inline code`

8.1.2.2 Expected results:



8.1.3 Links
Links will render in MS Word and WorkBench.

8.1.3.1 Example Code:


• Here is a link: [I'm an inline-style link](https://www.google.com)
• Here is a link: https://www.google.com

8.1.3.2 Expected results:


• Here is a link: I'm an inline-style link // The link is to https://www.google.com
• Here is a link: https://www.google.com // The link is to https://www.google.com

8.1.4 Lists
Lists will render in MS Word and WorkBench.

NOTE: Unordered lists can use:

• Asterisks (*)
• Minuses (-)
• Pluses (+)

8.1.4.1 Example Code:


1. First ordered list item (the actual number used here does not matter)
1. Second ordered item
* Unordered sub-list – Item 1
* Unordered sub-list – Item 2
1. Third Ordered Item
1. Ordered sub-list – Item 1
1. Ordered sub-list – Item 2
7. Ordered sub-list – Item 3
- Unordered sub-list – Item 1

Need something here to start new ordered list.

1. New Ordered List – Item 1


1. New Ordered List – Item 2

+ New Unordered list – Item 1


+ New Unordered list – Item 2

8.1.4.2 Expected Results:



8.1.5 Headers
All of these work in WorkBench and generally work in the MS Word exports as well (check your usage in the MS
Word export just in case your particular usage is problematic).

8.1.5.1 Example code:


# H1
## H2
### H3
#### H4

8.1.5.2 Expected results:

8.1.6 Horizontal Rule:


Renders correctly in WorkBench and MS Word export.
8.1.6.1 Example Code:
Three or more hyphens, asterisks, or underscores:

---
***
___

8.1.6.2 Expected Results:


All produce the same result (a light line across the page):

8.1.7 Line Breaks:


Line breaks function identically in WorkBench and MS Word exports. A single line break does not separate
paragraphs.
8.1.7.1 Example Code:
Here's a line for us to start with.

This line is separated from the one above by two newlines, so it will be a *separate
paragraph*.

This line is also a separate paragraph, but...


This line is only separated by a single newline, so it's a separate sentence in the
*same paragraph*.



8.1.7.2 Expected results:

8.2 LINKING DISCUSSION AND TICKETS TO MULTIPLE BENCHMARKS


The main portion of this document covered how to best link a given discussion or ticket to a single recommendation
in a Benchmark. There are times when it is desired to link a discussion or ticket to multiple recommendations in
multiple Benchmarks. This section will describe how to do this.

In this example, we have an existing ticket on Recommendation 1.3 of the CIS Google Chrome – Test v0 Benchmark.
It turns out that this ticket also applies to another recommendation in another Benchmark, and we would like to
keep track of this.

Figure 38: Ticket on original Benchmark

We will link this existing ticket to the same Recommendation (1.3) in a different Benchmark (CIS Google Chrome
– Test-Child v0). This mimics the very common case of a ticket assigned to a Published Benchmark that we would
also like to assign to the same recommendation in the updated Benchmark currently in development
(vNext).

Figure 39: We would like this same ticket linked to this Recommendation in this Benchmark

The process is as follows:

Open the ticket by clicking the “Box – Arrow” button on the ticket in the right pane (see Figure 40).



Figure 40: Opening the Ticket

This will display the Ticket window; here, click Edit (Figure 41).

Figure 41: Going to Ticket Edit Screen

Now in the “Add Link” text box you will type the following (Figure 42): r_2036425

NOTE: Detail on how/why this works will be given in the next section (See 8.2.1).

Figure 42: Adding a new link to the Ticket

NOTE: WorkBench will give a preview of the target object, in the above case Recommendation 1.3 - (L1)
Ensure 'Allow Google Cast to connect to Cast devices on all IP addresses' is set to 'Disabled'.

Then press Enter (Figure 43).



Figure 43: How the new link is displayed prior to Submit

Finally click Submit (Figure 44).

Figure 44: After Submit, how the new link is displayed

Notice that now Recommendation 1.3 - (L1) Ensure 'Allow Google Cast to connect to Cast devices on all IP
addresses' is set to 'Disabled' is listed twice in the Linked Object list, once for each of the two different Benchmarks.
The ticket on the original Benchmark is still there (Figure 45).

Figure 45: Verifying the Original link is still in place



The ticket on the new Benchmark (Figure 46).

Figure 46: Verifying the new link is now in place

8.2.1 OK, so this all seems a bit magical. How does this work?
The linking trick is to use the first letter of what you’re linking to followed by ‘_’ and then the number of the item
the end of the URL address for the item. For example:

• Benchmark: b_<#>
• Section: s_<#>
• Recommendation: r_<#>
• Ticket: t_<#>
• Discussion: d_<#>

So where exactly does the <#> come from? Here it is from the previous example.

Figure 47: Getting the target object number from the viewed object’s URL

It is the number at the end of the URL for the selected object (in this case a Recommendation). So, putting these
two things together we get: r_2036425, which is what we entered in the Add Link field for the ticket. This process
is internally known as “The Mike Method” in recognition of the BMDT member who popularized it.

Using the various prefixes above, and this general method, links can be attached to a wide variety of Community
and Benchmark objects (A Discussion, A Benchmark itself, A Section, etc.).
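As a sketch, the token construction can even be automated with a small shell helper. This is illustrative only; the URL below follows the pattern described above with the example number from Figure 47:

```shell
# Illustrative helper for "The Mike Method": build a WorkBench link token
# from an object-type prefix (b, s, r, t, or d) and the object's URL.
make_link_token() {
  prefix="$1"
  url="$2"
  # The object number is everything after the final '/' in the URL.
  echo "${prefix}_${url##*/}"
}
```

For example, `make_link_token r "https://workbench.cisecurity.org/recommendations/2036425"` prints `r_2036425`, the token entered in the Add Link field earlier.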

NOTE: This linking is not confined to a single community; linking to an object in another community is
possible.

