CompTIA CASP+ CAS-004 Certification Guide
Mark Birch
BIRMINGHAM—MUMBAI
Copyright © 2022 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system,
or transmitted in any form or by any means, without the prior written permission of the
publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the
information presented. However, the information contained in this book is sold without
warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and
distributors, will be held liable for any damages caused or alleged to have been caused directly
or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies
and products mentioned in this book by the appropriate use of capitals. However, Packt
Publishing cannot guarantee the accuracy of this information.
ISBN 978-1-80181-677-9
www.packt.com
To all my students, both former and present, who motivate me to help them
achieve their learning goals.
– Mark Birch
Contributors
About the author
Mark Birch is an experienced courseware developer and lecturer in information systems
and cyber security. Mark has been helping students attain their learning goals for over 25
years. He has been developing content and teaching CompTIA CASP since its inception
in 2011 and understands the subject area in depth. He began his career working as an
engineer within the aerospace industry for BAE Systems (a major defense contractor),
gaining a thorough understanding of industrial controls, CAD/CAM systems, and
design principles. Graduating from the University of Central Lancashire with a BSc in
Information Technology, Mark has also gained accreditation in the following: Microsoft,
CompTIA, Citrix, Novell Networking, and ITIL.
I want to thank all my family for supporting me and understanding that I could
not always be "available" during the past year.
About the reviewers
Filip Korngut has over 15 years of experience in information security and systems
engineering in the oil and gas, mining, and digital health sectors. He has extensive
experience leading the development of major software and digital transformation solutions
with a primary focus on cybersecurity leadership and technology. Filip has led cybersecurity
engagements for the big four consulting firms and has a passion for establishing
organizational cybersecurity programs. Filip has two sons with his beautiful wife Erin and a
dog named Lily.
Shubham Mishra is India's youngest cyber security expert and a leading name in the
field of ethical hacking. He is the founder and CEO of TOAE Security Solutions and has
dedicated his life to the robust development of cyber security methods that are being used
worldwide. Shubham has worked with some of the largest companies in the world for more
than a decade and continues to provide up-to-date and relevant content for the industry.
Table of Contents
Preface
1
Designing a Secure Network Architecture
Summary 48
Questions 48
Case study 52
Answers 54
Case study answer 55
2
Integrating Software Applications into the Enterprise
Integrating security into the development life cycle 58
  Systems development life cycle 58
  Development approaches 62
  Versioning 67
Software assurance 67
  Sandboxing/development environment 68
  Validating third-party libraries 68
  SecDevOps 68
  Defining the DevOps pipeline 69
Baseline and templates 72
  Secure coding standards 74
  Application vetting processes 74
  Hypertext Transfer Protocol (HTTP) headers 75
  Application Programming Interface (API) management 77
Considerations when integrating enterprise applications 79
  Customer relationship management (CRM) 80
  Enterprise resource planning (ERP) 81
  Configuration Management Database (CMDB) 82
  Content management systems 82
Integration enablers 83
  Directory services 84
  Domain name system 84
  Service-oriented architecture 88
  Enterprise service bus 88
Summary 89
Questions 90
Answers 95
3
Enterprise Data Security, Including Secure Cloud and Virtualization Solutions
Implementing data loss prevention 98
  Blocking the use of external media 98
  Print blocking 100
  Remote Desktop Protocol blocking 100
Implementing data loss detection 102
  Watermarking 102
  Digital rights management 102
  Network traffic decryption/deep packet inspection 103
  Network traffic analysis 103
4
Deploying Enterprise Authentication and Authorization Controls
Credential management 148
  Hardware key manager 150
  Password policies 151
Identity federation 154
Access control 157
Authentication and authorization protocols 161
Multi-Factor Authentication (MFA) 166
Summary 171
Questions 172
Answers 177
6
Vulnerability Assessment and Penetration Testing Methods and Tools
Vulnerability scans 218
  Credentialed versus non-credentialed scans 218
  Agent-based/server-based 219
  Criticality ranking 219
  Active versus passive scans 220
Security Content Automation Protocol (SCAP) 220
  Extensible Configuration Checklist Description Format (XCCDF) 220
  Open Vulnerability and Assessment Language (OVAL) 220
  Common Platform Enumeration (CPE) 221
  Common Vulnerabilities and Exposures (CVE) 222
  Common Vulnerability Scoring System (CVSS) 222
  Common Configuration Enumeration (CCE) 223
  Asset Reporting Format (ARF) 224
Self-assessment versus third-party vendor assessment 224
Patch management 224
Information sources 224
  Advisories 225
  Bulletins 225
  Vendor websites 225
  Information Sharing and Analysis Centers (ISACs) 226
  News reports 226
Testing methods 226
  Static analysis 226
  Dynamic analysis 227
  Side-channel analysis 227
  Wireless vulnerability scan 227
  Software Composition Analysis (SCA) 228
  Fuzz testing 228
Penetration testing 229
  Requirements 229
  Box testing 230
  Post-exploitation 232
  Persistence 232
  Pivoting 232
  Rescanning for corrections/changes 233
Security tools 233
  SCAP scanner 233
  Network traffic analyzer 235
  Vulnerability scanner 236
  Protocol analyzer 237
  Port scanner 238
  HTTP interceptor 239
  Exploit framework 240
  Dependency management tools 242
Summary 242
Questions 243
Answers 249
7
Risk Mitigation Controls
Understanding application vulnerabilities 252
  Race conditions 252
  Buffer overflows 252
  Broken authentication 253
  Insecure references 253
  Poor exception handling 253
  Security misconfiguration 253
  Information disclosure 253
  Certificate errors 254
  Use of unsafe functions 258
  Third-party libraries 258
  Dependencies 258
  End-of-support and end-of-life 259
  Regression issues 259
Assessing inherently vulnerable systems and applications 259
  Client-side processing and server-side processing 259
  JSON and representational state transfer 260
  Browser extensions 260
  Hypertext Markup Language 5 (HTML5) 261
  Asynchronous JavaScript and XML (AJAX) 261
  Simple Object Access Protocol (SOAP) 261
Recognizing common attacks 262
  Directory traversal 263
  Cross-site scripting 264
  Cross-site request forgery 265
  Injection attacks 265
  Sandbox escape 268
  VM hopping 268
  VM escape 269
  Border Gateway Protocol and route hijacking 269
  Interception attacks 269
  Denial of service and distributed denial of service 270
  Social engineering 270
  VLAN hopping 271
Proactive and detective risk reduction 272
  Hunts 272
  Developing countermeasures 272
  Deceptive technologies 272
  Security data analytics 273
Applying preventative risk reduction 276
  Application control 278
  Security automation 279
  Physical security 283
Summary 284
Questions 285
Answers 291
8
Implementing Incident Response and Forensics Procedures
Understanding incident response planning 294
  Event classifications 295
  Triage event 295
9
Enterprise Mobility and Endpoint Security Controls
Questions 371
Answers 377
10
Security Considerations Impacting Specific Sectors and Operational Technologies
Identifying regulated business sectors 380
  Energy sector 380
  Manufacturing 381
  Healthcare 382
  Public utilities 382
  Public services 383
  Facility services 383
Understanding embedded systems 383
  Internet of things 384
  System on a chip 384
  Application-specific integrated circuits 384
  Field-programmable gate array 385
Understanding ICS/SCADA 386
  PLCs 387
  Historian 387
  Ladder logic 388
  Safety instrumented system 389
  Heating, ventilation, and air conditioning 389
Understanding OT protocols 389
  Controller area network bus 389
  Modbus 391
  Distributed Network Protocol 3 391
  Zigbee 392
  Common Industrial Protocol 393
  Data Distribution Service 394
Summary 396
Questions 396
Answers 400
11
Implementing Cryptographic Protocols and Algorithms
Understanding hashing algorithms 402
  Secure Hashing Algorithm (SHA) 402
  Hash-Based Message Authentication Code (HMAC) 404
  Message Digest (MD) 404
  RACE integrity primitives evaluation message digest (RIPEMD) 405
Understanding symmetric encryption algorithms 405
  Block ciphers 405
  Stream ciphers 411
Understanding asymmetric encryption algorithms 411
  Rivest, Shamir, and Adleman (RSA) 412
  Digital Signature Algorithm (DSA) 412
  Elliptic-curve Digital Signature Algorithm (ECDSA) 413
  Diffie-Hellman (DH) 413
  Elliptic-curve Cryptography (ECC) 414
12
Implementing Appropriate PKI Solutions, Cryptographic Protocols, and Algorithms for Business Needs
Understanding the PKI hierarchy 434
  Certificate authority 436
  Registration authority 436
  Certificate revocation list 436
  Online Certificate Status Protocol 438
Understanding certificate types 438
  Wildcard certificate 438
  Extended validation 439
  Multi-domain 440
  General-purpose 441
  Certificate usages/templates 441
Understanding PKI security and interoperability 442
  Trusted certificate providers 443
  Trust models 443
  Cross-certification certificate 444
  Life cycle management 445
  Certificate pinning 445
  Certificate stapling 446
  CSRs 447
  Common PKI use cases 449
  Key escrow 450
Troubleshooting issues with cryptographic implementations 450
  Key rotation 450
  Mismatched keys 451
  Improper key handling 451
  Embedded keys 451
  Exposed private keys 451
  Crypto shredding 452
  Cryptographic obfuscation 452
  Compromised keys 452
Summary 452
Questions 453
Answers 457
14
Compliance Frameworks, Legal Considerations, and Their Organizational Impact
Security concerns associated with integrating diverse industries 492
  Data considerations 493
Understanding geographic considerations 496
15
Business Continuity and Disaster Recovery Concepts
Conducting a business impact analysis 518
  Maximum Tolerable Downtime (MTD) 519
  Recovery Time Objective (RTO) 519
  Recovery Point Objective (RPO) 519
  Recovery service level 520
  Mission-essential functions 520
  Privacy Impact Assessment (PIA) 521
Preparing a Disaster Recovery Plan/Business Continuity Plan 522
  Backup and recovery methods 525
Planning for high availability and automation 526
  Content Delivery Network (CDN) 530
  Testing plans 530
Explaining how cloud technology aids enterprise resilience 531
  Using cloud solutions for business continuity and disaster recovery (BCDR) 532
  Infrastructure versus serverless computing 532
  Collaboration tools 533
  Storage configurations 533
  Cloud Access Security Broker (CASB) 534
16
Mock Exam 1
Questions 543
Assessment test answers 559
17
Mock Exam 2
Questions 566
Answers 584
Index
Other Books You May Enjoy
Preface
In this book, you will learn how to architect, engineer, integrate, and implement secure
solutions across complex environments to support a resilient enterprise. You will find
out how to monitor, detect, and implement incident response, and use automation to
proactively support ongoing security operations. You will learn how to apply security
practices to cloud, on-premises, endpoint, and mobile infrastructure. You will also discover
the impact of governance, risk, and compliance requirements throughout the enterprise.
Chapter 6, Vulnerability Assessment and Penetration Testing Methods and Tools, looks
at methods used to help assess an enterprise's security posture, including SCAP scans,
penetration testing, and an introduction to a wide range of security tools.
Chapter 7, Risk Mitigation Controls, looks at typical vulnerabilities that may be present
within an organization and controls to reduce risk.
Chapter 8, Implementing Incident Response and Forensics Procedures, covers incident
response preparation, including the creation of documentation and a Computer Security
Incident Response Team (CSIRT). It also covers forensic
concepts and the use of forensic analysis tools.
Chapter 9, Enterprise Mobility and Endpoint Security Controls, examines enterprise
mobility management, including mobile device management tools. It also covers endpoint
security and host hardening techniques.
Chapter 10, Security Considerations Impacting Specific Sectors and Operational
Technologies, looks at regulated business sectors and the challenges facing enterprises that must
support embedded systems, SCADA systems, and operational technology.
Chapter 11, Implementing Cryptographic Protocols and Algorithms, looks at protecting
enterprise data using hashing algorithms and encrypting data using both symmetric
and asymmetric algorithms. It also looks at implementing cryptography within
security protocols.
Chapter 12, Implementing Appropriate PKI Solutions, Cryptographic Protocols, and
Algorithms for Business Needs, covers Public Key Infrastructure (PKI), different certificate
types, and troubleshooting issues with cryptographic implementations.
Chapter 13, Applying Appropriate Risk Strategies, examines risk assessment types, risk
response strategies (including implementing policies), and security best practices.
Chapter 14, Compliance Frameworks, Legal Considerations, and Their Organizational
Impact, covers the challenges of operating within diverse industries, regulatory
compliance, and legal regulations.
Chapter 15, Business Continuity and Disaster Recovery Concepts, teaches you how to
conduct a business impact analysis and develop business and disaster recovery plans. It
also covers high availability and deploying cloud solutions for enterprise resilience.
Chapter 16, Mock Exam 1 and Chapter 17, Mock Exam 2, test your knowledge with final
assessment tests, comprising accurate CASP+ questions.
Additional practical exercises and learning content are available on the companion site:
https://casp.training.
Conventions used
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names,
filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles.
Here is an example: "A file would be created that would be of interest to the attacker,
Passwords.doc."
Any command-line input or output is written as follows:
Bold: Indicates a new term, an important word, or words that you see onscreen. For
instance, words in menus or dialog boxes appear in bold. Here is an example: "The
certificate's Subject Name value must be valid."
Get in touch
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, email us
at customercare@packtpub.com and mention the book title in the subject of
your message.
Errata: Although we have taken every care to ensure the accuracy of our content,
mistakes do happen. If you have found a mistake in this book, we would be grateful if you
would report this to us. Please visit www.packtpub.com/support/errata and fill in
the form.
Piracy: If you come across any illegal copies of our works in any form on the internet,
we would be grateful if you would provide us with the location address or website name.
Please contact us at copyright@packt.com with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise
in and you are interested in either writing or contributing to a book, please visit
authors.packtpub.com.
In this section, you will learn about the challenges that are faced by an enterprise when
supporting a large, complex, hybrid network architecture. This section will take you
through the design of traditional network architectures up to complex hybrid cloud
models. You will also understand the importance of authentication and authorization
strategies within complex environments.
This part of the book comprises the following chapters:
OSI model
No introduction to networking would be complete without a brief introduction to the
Open Systems Interconnection (OSI) 7-layer model. As we move through the chapters,
you will occasionally see references to layers. The OSI model has become a standard
reference that allows different vendors to implement services, protocols, and hardware
consistently. Throughout the book, we will discuss applications, services, protocols, and
appliances that sit at different layers within the model. Although the CompTIA Advanced
Security Practitioner (CASP+ CAS-004) exam will not test your knowledge of the OSI
model specifically (it is not a listed objective), it can be a useful reference aid when we
discuss networking subjects. The model does not define a complete working network
implementation; it is a conceptual model. For example, to fully understand
the details of the Simple Mail Transport Protocol (SMTP), you would need to gain
access to Internet Engineering Task Force (IETF) Request for Comments (RFC)
documents. Imagine you are looking to manufacture network cables to meet Category 6
(CAT 6) standards—you could access International Organization for Standardization/
International Electrotechnical Commission (ISO/IEC 11801) standards
documentation. See the following screenshot for an overview of the OSI 7-layer model:
• Network firewall
• Intrusion detection system (IDS)
• Intrusion prevention system (IPS)
Advantages
UTM has the following advantages:
Disadvantages
UTM has the following disadvantages:
• Risk from a single point of failure (SPOF) (limited hardware resources are
providing many services)
• Negative performance impact on a network due to the workload handled by
the device
IDS/IPS
Intrusion detection is an essential security function, typically implemented on the
network perimeter to protect your organization from incoming threats; it alerts the
security team to inbound threats.
Intrusion prevention is the process of performing intrusion detection and then stopping
detected incidents. These security measures are available as IDS and IPS. Active
protection is the more commonly adopted approach, meaning a network intrusion
prevention system (NIPS) will be seen protecting most enterprise networks.
IDS and IPS constantly watch your network, identifying possible incidents and
logging information about them, stopping incidents, and reporting them to security
administrators. In addition, some networks use IDS/IPS for identifying problems with
security policies and deterring individuals from violating security policies. IDS/IPS have
become a necessary addition to the security infrastructure of most organizations, precisely
because they can stop attackers while they are gathering information about your network.
Examples of intrusions
Indicators of compromise (IOCs) can be unusual traffic, attacks against protocols
(such as high volumes of Internet Control Message Protocol (ICMP) traffic), and
malicious payloads. The result could be excess traffic causing denial of service (DoS) or
compromised systems through unwanted deployments of Trojans and backdoors.
There are two main IDS detection techniques that are routinely used to detect incidents, as
outlined here:
• Signature-based detection compares known attack signatures (definitions or rulesets)
against observed events to identify possible incidents.
Examples:
A Secure Shell (SSH) connection using the root account would match a rule in the ruleset.
An email with the subject password reset and an attachment named passregen.exe
would be identified as malicious.
• Anomaly-based detection compares definitions of what is considered a normal/
benign activity with observed events to identify significant deviations. This
detection method can be very effective at spotting previously unknown threats. This
type of detection is also known as heuristics-based detection.
Example:
The SMTP messaging server usually contributes 23% of the traffic on the network.
If the SMTP server is suddenly generating 70% of the network traffic, this would
generate alerts.
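To make the idea concrete, the following is a minimal Python sketch of the anomaly-based approach (the baseline figure and threshold are illustrative and not taken from any product):

    # Compare the observed share of SMTP traffic against a learned baseline and
    # raise an alert when it deviates significantly (values are illustrative).
    BASELINE_SMTP_SHARE = 0.23   # SMTP is normally ~23% of network traffic
    ALERT_MULTIPLIER = 2.0       # alert if the observed share exceeds 2x baseline

    def check_smtp_share(smtp_bytes: int, total_bytes: int) -> None:
        observed = smtp_bytes / total_bytes
        if observed > BASELINE_SMTP_SHARE * ALERT_MULTIPLIER:
            print(f"ALERT: SMTP traffic is {observed:.0%} of total "
                  f"(baseline {BASELINE_SMTP_SHARE:.0%}) - possible anomaly")

    check_smtp_share(smtp_bytes=700, total_bytes=1000)   # 70% share triggers an alert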
Network IDS (NIDS) does not need to be inline; it can monitor traffic but will need to
use port mirroring or spanning on the network switch to be effective, as illustrated in the
following diagram:
Wireless IPS
In addition to fixed or wired networks, many organizations may need the flexibility of a
Wi-Fi network.
A wireless IPS (WIPS) is designed to detect the use of rogue or misconfigured wireless
devices. A rogue device can spoof media access control (MAC) addresses of trusted
network devices. A WIPS can build up a database of known trusted hosts on the network
and can also be used to prevent DoS attacks.
An effective WIPS should mitigate the following types of threats:
Inline encryptors
The High Assurance Internet Protocol Encryptor Interoperability Specification
(HAIPE-IS) requires inline network encryption (INE) devices to be interoperable.
For example, Tactical Local Area Network Encryptor (TACLANE) is a product used
by the United States (US) government and the Department of Defense (DOD); it is
military-grade and meets National Security Agency (NSA) security requirements.
It is manufactured by General Dynamics. This is a device that enables encrypted
communication over untrusted networks. Commercial organizations will use site-to-site
virtual private network (VPN) links and not need this technology. The following figure
shows a TACLANE INE device:
SIEM
Security Information and Event Management (SIEM) allows an organization to centralize
security events by forwarding logs from security appliances and hosts to a central system.
It provides correlation and normalization of events for context, and generates reports and
alerts based upon real-time log data. The following diagram shows the architecture of
centralized SIEM:
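As a simple, hedged example of the log forwarding side of this architecture, a Linux host running rsyslog could ship all of its events to a central SIEM collector with a single rule (the hostname and port are placeholders):

    # /etc/rsyslog.conf - forward every local event to the SIEM collector
    # a single @ forwards over UDP, @@ forwards over TCP
    *.* @@siem.example.com:514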
Switches
A switch is a network device that connects devices on a computer network by receiving
and forwarding data to the destination device. A switch uses MAC addresses to forward
data frames at layer 2 of the OSI model. Many enterprise switches also combine layer 3
functionality; these layer 3 switches allow for routing traffic between VLANs.
Firewalls
Firewalls are there to block unwanted traffic entering your networks; they can also block
outbound traffic. They depend upon rules to block IP addresses, protocols, and ports.
More sophisticated firewalls will have more granular rules and may slow down traffic.
Firewall types
Firewalls can be implemented in many different ways; enterprise deployments will
typically use highly capable hardware appliances from vendors such as Cisco or
Check Point. Software or host-based firewalls offer additional security as part of defense
in depth (DiD). Data centers and microsegmentation will accelerate the adoption of
virtual firewall deployments. The different types of firewalls are listed here:
Firewall capability
Firewalls have evolved over time, with additional capabilities and functionality.
First-generation firewalls use static packet filtering. They inspect packet headers and
implement static rules based upon IP addresses and port addresses. Their big advantage is
high performance. A router will typically perform as a static packet filter.
Second-generation firewalls also use stateful inspection, in addition to packet filtering.
This can monitor Transmission Control Protocol (TCP) streams (whole stream, not just
handshake) and dynamically open ports and track sessions for bi-directional protocols
(such as File Transfer Protocol (FTP)).
Next-generation firewalls (NGFWs) have evolved from second-generation firewalls to
meet the requirements of a multi-functional security appliance. An NGFW offers all the
functionality of the earlier generation, but will typically offer additional functionality in
the form of support for VPNs and anti-virus protection. NGFWs have deep packet
inspection (DPI) capability, meaning they can offer additional security in the form of
data loss prevention (DLP) and IPS protection. This
should not be confused with UTM, although they are similar. NGFWs are designed with
performance in mind.
Routers
Routers operate at layer 3 of the OSI model and are interconnection devices (they connect
networks together). Routing capability may also be provided by a switch that supports
VLANs (it will be called a layer 3 switch).
Routing tables
Routers are only able to forward packets if they have a route for the traffic or a default
gateway. Routing tables will comprise a NETWORK DESTINATION, NETMASK,
GATEWAY, INTERFACE, and METRIC value.
Here is a simple routing table:
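As the original figure is not reproduced here, the following illustrative output (all addresses and metrics are hypothetical) shows the same fields in the format produced by the Windows route print command:

    Network Destination        Netmask          Gateway       Interface      Metric
    0.0.0.0                    0.0.0.0          192.168.1.1   192.168.1.10   25
    127.0.0.0                  255.0.0.0        On-link       127.0.0.1      331
    192.168.1.0                255.255.255.0    On-link       192.168.1.10   281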
Dynamic routing
In larger, more complex networks, it is normal to use dynamic routing rather than
configuring manual static routes. Within an autonomous network (the network managed
by your organization), you will be using interior routing protocols. It would be time-
consuming to configure routing tables statically and we would miss the resilience offered
by dynamic routing protocols.
The purpose of dynamic routing protocols includes the following:
Routing Information Protocol (RIP) is the simplest and easiest routing protocol to
configure. It is used for routing over smaller networks (allowing a maximum of 15 hops).
It is not considered a secure routing protocol.
Enhanced Interior Gateway Routing Protocol (EIGRP) is used on Cisco networks and
was developed to work around the drawbacks of using RIP. EIGRP benefits from fast
convergence times whenever the network topology is changed.
Cisco devices share their capabilities with immediate neighbors using Cisco Discovery
Protocol (CDP). CDP can be disabled where it is not required.
You can prevent your router from receiving unwanted/poisoned route updates by
configuring neighbor router authentication; this uses Message Digest 5 (MD5)
authentication.
Open Shortest Path First (OSPF) is a good choice for larger networks because it has no
restriction on hop counts. OSPF allows routers to communicate securely, and routing
information is exchanged through link-state advertisements (LSA). RFC 2328 allows for
the use of a keyed MD5 identifier to protect OSPF neighbor updates.
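As a hedged illustration of this practice (the interface, process ID, and key string are placeholders), MD5 authentication for OSPF neighbors could be enabled on a Cisco IOS router as follows:

    Router(config)# interface GigabitEthernet0/1
    Router(config-if)# ip ospf message-digest-key 1 md5 S3cr3tK3y
    Router(config-if)# exit
    Router(config)# router ospf 1
    Router(config-router)# area 0 authentication message-digest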
Exterior routing
To keep internetwork routing tables up to date, edge routers will forward route
changes via exterior routing protocols.
Border Gateway Protocol (BGP) is the routing protocol used between internet service
providers (ISPs). BGP can also be used to send routing updates between an enterprise
and its ISP. BGP can be secured so that only approved routers can exchange data with each
other (this uses MD5 authentication).
Proxy
A proxy server acts as a gateway between users and the internet services they access online.
A proxy protects your users from directly connecting with unsafe sites. It can offer
Uniform Resource Locator (URL) filtering and content filtering in addition to
performance enhancements. A proxy can be a good choice when protecting our users
from threats based upon outbound requests. Firewalls are not designed to deliver this
more granular protection. A firewall could block an outbound connection to a port and IP
address, but would not offer the same fine-tuning as a proxy server.
Figure 1.12 – Microsoft Routing and Remote Access Service (RRAS) with connected clients
Network Address Translation (NAT) is an important service used in both enterprise and small business deployments.
Load balancer
A load balancer will be useful to enterprises that host server farms and would be a key
requirement for high availability (HA) e-commerce sites. When hosting a Citrix server
farm supporting remote applications, it is important that the loading on each member is
constantly evaluated to ensure new requests are forwarded to a server with the least load.
A MicroSD hardware security module (HSM) is built into a MicroSD form factor. It is useful when you need to extend
the functionality of a mobile device and could be used on a cellular phone for secure
communications. The HSM would have its own crypto-processing capability, meaning
no changes are required on the mobile device. The following screenshot shows a small
form-factor HSM:
DLP
We must ensure the enterprise does not breach legal or regulatory compliance by
exfiltration of sensitive data, either knowingly or unknowingly. It is important that
intellectual property and customer data are protected, even when compliance is not
a factor. Physical restrictions and/or enforceable policy may be used to block data
exfiltration to a removable storage medium. Data loss prevention (DLP) can also be implemented on the edge
of the network, or as part of a cloud solution. Microsoft is one of many providers offering
DLP as part of the Cloud Access Security Broker (CASB) security suite. In the following
screenshot, we are selecting built-in rules to block the exfiltration of financial data:
WAF
A web application firewall (WAF) is a security solution that operates at the web application level. It allows for HyperText
Transfer Protocol/HTTP Secure (HTTP/HTTPS) traffic to be inspected for anomalies
without slowing down the rest of the network traffic. A WAF can be implemented as an
appliance, plugin, or filter that applies a set of rules to an HTTP connection.
A WAF helps prevent attacks, such as the following:
• CSRF attacks
• Information leakage
• Broken authentication
• Insecure communications
A WAF can also provide URL encryption and site usage enforcement, as illustrated in the
following diagram:
Advantages
A WAF has the following advantages:
Disadvantages
A WAF has the following disadvantages:
• Network sniffing
• Reading of database logs
• Memory analysis
DAM tools can correlate data and provide the administrator with the tools to detect
anomalous database activity and capture a log of events, should this be required
for forensics.
As a database is often a critical line-of-business (LOB) solution, often hosting enterprise
resource planning (ERP), customer relationship management (CRM), sales order
processing, and so on, investing in this additional technology will be worth the cost.
Spam filter
A spam filter typically scans incoming emails to protect your employees from email-borne
threats. It can also scan emails leaving the organization (although this is more likely taken
care of by a DLP solution). It can be deployed on the demilitarized zone (DMZ) network,
filtering incoming SMTP traffic, and will typically perform additional tasks such as
querying block list providers, such as the Spamhaus Block List (SBL), to drop connections
from verified blocked domain names or IP addresses.
Many organizations will deploy this service in a cloud deployment, especially if the ISP
hosts the email servers.
The following section covers some of the additional considerations to allow for secure
remote working and administration.
Remote access
This is the term used when accessing systems remotely. We may need to access a desktop
to configure settings for a remote worker or configure a network appliance ruleset. In
some cases, it may be necessary to assist a remote worker by sharing their desktop. We will
compare the main types of remote access in this section.
VPN
A VPN service provides you with a secure, encrypted tunnel when you need to connect
across untrusted networks. External threat actors cannot access the tunnel and gain access
to your enterprise data.
A VPN can be used for securing remote workers and can also be used to connect sites
across untrusted networks.
Enterprise solutions include Microsoft Direct Access, Cisco AnyConnect, and OpenVPN
(there are many more). Figure 1.16 shows a popular VPN client, OpenVPN Connect:
Many enterprises will ensure their employees' mobile devices are enabled with an
always-on VPN client. This ensures that when employees are working outside the
corporate network, they will automatically connect over a secure connection, whenever the
device is powered on. It is important that all traffic is routed through the VPN connection
using a full-tunnel configuration. Figure 1.17 shows a full-tunnel configuration:
IPsec
IP Security (IPsec) is a suite of protocols deployed in most vendor implementations of
IPv4 and is a requirement for IP version 6 (IPv6). When configured, it will protect against
replay attacks and ensure the integrity and confidentiality of the data.
Authentication headers (AHs) provide authentication, integrity, and protection against
replay attacks.
Encapsulating Security Payload (ESP) provides authentication, integrity, and
confidentiality for your data.
When using Transport mode, only the IP payload is encrypted; the original IP header
remains intact, so the layers above the network layer are protected. Transport mode is
typically used only on internal, trusted networks.
Tunnel mode can be used to create site-to-site VPNs between trusted networks and to
connect a host device across an untrusted network. In the following screenshot, we can see
that Tunnel mode creates a new IP header:
SSH
SSH is a standard internet security protocol documented in RFCs 4251, 4253, and 4254.
The SSH protocol is a protocol for secure remote login and other secure network services
over an insecure network. It is recommended to use SSH in place of Telnet. (Telnet was
the main protocol for remote configuration, but it is not encrypted.)
The SSH protocol is typically used across enterprise networks for the following:
Tip
Make sure you are using SSH 2.0 as earlier implementations use a poor
cryptographic suite.
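As a minimal sketch of that advice, assuming an OpenSSH server, the following sshd_config directives enforce protocol 2 (only relevant on older releases that still offer SSH 1) and disable weaker authentication options:

    # /etc/ssh/sshd_config - illustrative hardening directives
    # Enforce SSH protocol 2 (older OpenSSH releases only; modern builds drop SSH 1 entirely)
    Protocol 2
    # Disallow direct root logins
    PermitRootLogin no
    # Require public-key authentication instead of passwords
    PasswordAuthentication no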
Tip
Remember that this connects to a desktop, so it will not be a choice when
administering networking hardware appliances.
Reverse proxy
A reverse proxy is commonly used when accessing large websites from a public network.
Reverse proxies can cache static content (much like a forward proxy), which reduces the
load on your web application servers. Reverse proxies can also be used as an extra security
layer, allowing for additional analysis of the incoming traffic. The following screenshot
shows a client accessing a web application through a reverse proxy:
802.1x
802.1X is an IEEE standard for port-based network access control, protecting networks via
authentication. It was originally intended for use with Ethernet 802.3 switched networks
but has become a useful addition to many different network types, including Wi-Fi and
VPN. A connecting host or device is authenticated via 802.1X for network access—if
authentication is successful, the port is opened; otherwise, it remains closed.
There are three basic pieces to 802.1X authentication, as outlined here: the supplicant (the client device requesting access), the authenticator (the switch or wireless access point controlling the port), and the authentication server (typically a RADIUS server that validates the credentials).
There are many options when it comes to authenticating the supplicant (client device).
In the first instance, we have rudimentary (for that, read insecure) methods of
authentication.
Password Authentication Protocol (PAP) does not secure the authentication request.
Challenge-Handshake Authentication Protocol (CHAP) is an improvement over PAP as
it supports mutual authentication and uses MD5 hashing to protect the challenge response.
As CHAP is dependent on MD5, your networks are at risk from pass-the-hash exploits.
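In practice, the supplicant is normally configured for a stronger EAP method rather than bare PAP or CHAP. A minimal sketch for a Linux client using wpa_supplicant (the SSID, identity, and password are placeholders) might look like this:

    # Example 802.1X (WPA-Enterprise) network block for wpa_supplicant.conf
    network={
        ssid="CorpWiFi"
        key_mgmt=WPA-EAP
        eap=PEAP
        identity="jsmith@example.com"
        password="ChangeMe123"
        phase2="auth=MSCHAPV2"
    }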
System on a chip
A system on a chip (SoC) consolidates multiple computer components onto a single,
integrated circuit (IC). Components will typically include a graphics processing unit
(GPU), a CPU, and system random-access memory (RAM).
As an SoC integrates hardware and software, it is designed to draw less power than
traditional multi-chip solutions. The Snapdragon-based processor used in Microsoft Surface
Pro X tablets has eight cores plus a GPU.
Examples of this SoC technology can also be found in many IoT devices, building
automation systems, and Wi-Fi routers. Raspberry Pi is a good example of this technology,
costing as little as $5 per device. We can see a typical SoC in the following figure:
Sensors
Sensors are sophisticated devices that are frequently used to automate the collection of
information in automated or industrial environments. A sensor converts the physical
parameter (for example, temperature, blood pressure, humidity, and speed) into a signal
that can be measured electrically. Examples would include magnetic field sensors,
ultrasonic sensors, temperature sensors, flow sensors, and photoelectric sensors, to
name but a few. It is essential that the calibration of this equipment and messages sent or
received is accurate and controlled. HVAC, engineering production lines, and medical
equipment providers are just some of the environments that depend on this technology.
The following figure shows a typical monitoring sensor:
Audiovisual systems
Audiovisual (A/V) technology systems can comprise an assortment of hardware
that includes conference telephones, video cameras, interactive whiteboards, digital
signage, computers, smartphones, tablets, wireless connectivity, and more. Examples could
be video screens distributed throughout a building to broadcast information to employees.
Critical infrastructure
Critical infrastructure is a term to describe assets that are essential for the functioning of
a society and economy.
In the US, a new government agency was founded in 2018, offering guidance and helping
to build secure and resilient infrastructure: the Cybersecurity and Infrastructure
Security Agency (CISA).
CISA lists 16 sectors that are considered of such importance to the US that their
incapacitation or destruction would have a major negative effect on security, national
economic security, national public health, or safety. You can view which sectors these are
in the following list:
• Chemical sector
• Commercial facilities sector
• Communications sector
• Critical manufacturing sector
• Dams sector
• Defense industrial base sector
• Emergency services sector
• Energy sector
• Financial services sector
• Food and agriculture sector
• Government facilities sector
• Healthcare and public health sector
• IT sector
• Nuclear reactors, materials, and waste sector
• Transportation systems sector
• Water and wastewater systems sector
The European Commission (EC) has launched its own program to reduce the
vulnerabilities of critical infrastructures: the European Program for Critical
Infrastructure Protection (EPCIP).
SCADA systems are crucial for any organization with an industrial capacity. SCADA
allows organizations to maintain efficiency, process data for smarter decisions, and
communicate system issues to help mitigate downtime. SCADA has been used in
industrial, scientific, and medical environments since the adoption of computers in the
1950s. Some of the equipment was not always designed with security in mind.
NetFlow
NetFlow was originally developed on Cisco networking equipment to log traffic. It allows
network engineers to gain an understanding of bandwidth usage and types of traffic flow.
It is now widely supported on many other vendors' networking equipment,
including Juniper, Nokia, Huawei, and Nortel (there are many more). It is not intended
to replace protocols such as Simple Network Management Protocol (SNMP). It is useful
to establish a baseline and see anomalies on a network. Cisco supports this protocol on
most network equipment.
NetFlow consists of three main elements, as outlined here: the flow exporter (the network device that generates flow records), the flow collector (the server that receives and stores the records), and the analysis application that reports on the collected traffic statistics.
Devices that support NetFlow can collect IP traffic statistics on all interfaces where
NetFlow is enabled, and later export those statistics as NetFlow records toward at least one
NetFlow collector—this is normally a server that does the actual traffic analysis. We can
see an overview of the NetFlow process in the following screenshot:
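As a hedged illustration (the interface, collector address, and port are placeholders), traditional NetFlow export could be enabled on a Cisco IOS router as follows:

    Router(config)# interface GigabitEthernet0/0
    Router(config-if)# ip flow ingress
    Router(config-if)# exit
    Router(config)# ip flow-export destination 192.168.1.50 2055
    Router(config)# ip flow-export version 9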
sFlow
The sFlow protocol (short for sampled flow) is an alternative industry standard for
exporting network traffic data. Unlike NetFlow, it is not a proprietary protocol. Its
participants include Hewlett-Packard (HP), Brocade, Alcatel-Lucent, Extreme Networks,
Hitachi, and more. This only logs a percentage of the traffic, which is referred to as
sampling. sFlow is used on high-speed networks (gigabit-per-second speeds, and higher).
Software-defined networking
Software-defined networking (SDN) technology is a well-established approach to network
management and has been in existence for over 10 years (established around 2011).
It has come about largely due to the movement toward large, centralized data centers and the
virtualization of computer systems. The move to cloud computing has also been a big
driver toward the adoption of SDN.
There are many components involved in moving to a true SDN model, and the components
shown in Figure 1.28 are important parts.
SDN has been designed to address the fact that traditional networks are often
decentralized and overly complex; think of all those vendors (Cisco, Juniper, HP, Foundry,
and so on) with their own hardware and software solutions. SDN allows for a more
dynamic, configurable approach. The hardware switch (or virtual switch) becomes the
data plane and is separated from the management or control plane, and application
programming interfaces (APIs) allow dynamic updates to be controlled by business
applications and services.
The following screenshot shows a depiction of SDN:
Open SDN
SDN is based upon a set of open standards, allowing for simplified network design and
operation because instructions are provided by SDN controllers instead of multiple,
vendor-specific devices and protocols.
OpenFlow was the first standard interface for separating network control and data planes.
Open Network Operating System (ONOS) is a popular open source SDN controller.
Hybrid SDN
Many enterprise networks still have a significant investment in traditional network
infrastructure. While they move toward the goal of SDN, they need to transition and
support both technologies. They will need to support a hybrid model.
SDN overlay
An SDN overlay creates virtual network tunnels that move traffic across the existing
physical networking infrastructure. It is a similar concept to Multiprotocol Label
Switching (MPLS) links switching customers' VLAN-tagged traffic across a wide-area
network (WAN).
The following diagram shows an SDN overlay model:
Transport security
It is important when remotely configuring services and hardware over the network that
all connections are encrypted and authenticated. Many organizations use the Zero Trust
model, ensuring all network connections and actions must be validated.
SSH is recommended for accessing network appliances and services across the network.
Tip
When using SNMP for monitoring and management, it is important to ensure
support for version 3 (v3), with full support for encryption and authentication.
Port security
Port security means restricting access to network ports using a combination of disabling
unused network ports and deploying ACLs on network appliances.
On a layer 2 device, such as a Wi-Fi access point (AP) or a switch, we can restrict access based on MAC
addresses, and we can enable port security on a per-port basis.
There are two different approaches to restricting access to ports, as outlined here:
• Dynamic locking: You can specify the maximum number of MAC addresses that
can be associated with a port. After the limit is reached, additional MAC addresses
are not added to the CAM table; only the frames with allowable-source MAC
addresses are forwarded.
Cisco refers to these dynamic addresses as sticky secure MAC addresses.
• Static locking: You can manually specify a list of MAC addresses for a port.
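As an illustrative sketch (the interface and address limit are placeholders), dynamic locking with sticky addresses could be configured on a Cisco IOS switch port as follows:

    Switch(config)# interface GigabitEthernet0/1
    Switch(config-if)# switchport mode access
    Switch(config-if)# switchport port-security
    Switch(config-if)# switchport port-security maximum 2
    Switch(config-if)# switchport port-security mac-address sticky
    Switch(config-if)# switchport port-security violation shutdown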
Route protection
It is important to ensure network traffic flow is protected. Routers will send neighbors route
updates using common dynamic routing protocols. If these routes are poisoned or tampered
with, this could allow an attacker to route all traffic through a man-in-the-middle (MITM) exploit, sniffing all
network traffic. Data could be sent through an endless series of loops, causing a DoS exploit.
To prevent these types of attacks, we should ensure we adopt the following practices:
Security zones
It is important to separate out network assets and services to provide the required levels
of security. There will be regulatory requirements for critical infrastructure and SCADA
networks. Business units (BUs) may need to be on separate networks, while internet-facing servers must
be placed in perimeter-based networks. Segmentation of networks makes it difficult for
an attacker to gain a foothold on one compromised system and use lateral movement
through the network.
• Keep critical systems separate from general systems and each other if non-related.
• Limit and monitor access to assets.
• Keep an up-to-date list of personnel authorized to access critical assets.
• Train staff to err on the side of caution when dealing with access to critical assets.
• Consider air gaps when dealing with equipment supporting critical infrastructure
(nuclear plant/petrochemical plant).
DMZ
A DMZ is like a border area between two nations where we do not trust our neighbor
100%. You are stopped at a checkpoint and if you are deemed to be a security risk, you
are turned away. In the world of networking, we must use a combination of security
techniques to implement this untrusted zone. Typically, an enterprise will create a zone
using back-to-back firewalls. The assets in the zone will be accessed by users who cannot
all be fully vetted or trusted.
The assets in the DMZ must be best prepared for hostile activity, and we may need to place
SMTP gateways, DNS servers, and web and FTP servers into this network. It is imperative
that these systems are hardened and do not run any unnecessary services.
Figure 1.33 provides an overview of a DMZ:
Summary
We have gained an understanding of security requirements, to ensure an appropriate,
secure network architecture for a new or existing network. We have looked at solutions to
provide the appropriate authentication and authorization controls.
We've studied how we can build security layers to allow access to information systems
from trusted devices, outside of the enterprise network. You have seen a wide range of
devices, including smartphones, laptops, tablets, and IoT devices, that must be secured on
a network.
We have gained knowledge and an understanding of regulatory or industry compliance
needs for strict network segmentation between processes and BUs.
In this chapter, you have gained the following skills:
• Identification of the purpose of physical and virtual network and security devices
• Implementation of application- and protocol-aware technologies
• Planning for advanced network design
• Deploying the most appropriate network management and monitoring tools
• Advanced configuration of network devices
• Planning and implementing appropriate network security zones
These skills will be useful in the following chapters as we look to manage hybrid networks
using cloud and virtual data centers.
Questions
Here are a few questions to test your understanding of the chapter:
1. Which is the security module that would store an e-commerce server's private key?
A. DLP
B. HSM
C. DPI
D. 802.1x
A. Through DLP
B. Through HSM
C. Through DPI
D. Through 802.1x
3. What type of IDS would I be using if I needed to update my definition files?
A. Anomaly
B. Behavior
C. Heuristics
D. Signature
A. Routing
B. Firewall
C. Switching
D. Encryption
A. Telnet
B. RDP
C. SSH
D. FTP
7. What is an Ethernet standard for port access protocol, used for protecting networks
via authentication?
A. 802.11
B. 802.3
C. 802.1x
D. 802.5
8. What type of connectivity will allow key personnel to maintain communication with
one another and key network resources when the main network is under attack?
A. Email
B. OOB
C. Teams
D. VNC
A. Reliance on networks
B. Better use of hardware resources
C. Enhanced security
D. Standard operating environment (SOE)
10. What is used when my contractors use a tablet or thin client to access a Windows 10
desktop in my data center?
A. OOB
B. MDM
C. VDI
D. SSH
A. OSPF
B. RTBH
C. RIP
D. EIGRP
12. What is it called when security operations center (SOC) staff fail to respond to alerts
due to excessive alert volumes?
A. False positive
B. Alert fatigue
C. False negative
D. True positive
13. What will I need to support on my network device in order to forward truncated
network traffic to a network monitoring tool?
A. NetFlow
B. sFlow
C. SIEM
D. System Logging Protocol (Syslog)
14. What type of security would I use on my layer 2 switch to isolate the finance
network from the development network?
A. VPN
B. IPsec
C. VLAN
D. RTBH
15. What type of servers would the security team place on the DMZ network?
16. What type of security label would CISA assign to the chemical sector and
communications sector?
A. Regulated industry
B. Protected infrastructure
C. SCADA
D. Critical infrastructure
17. What will protect my Wi-Fi network against common threats, including evil-twin/
rogue APs and DDoS?
A. 802.1x
B. Host-based IPS (HIPS)
C. Firewall
D. WIPS
18. What should I configure on mobile users' laptop computers to ensure they will not
be vulnerable to sniffing/eavesdropping when accessing the hotel's Wi-Fi network?
A. Anti-malware
B. Shielding
C. Cable locks
D. VPN
19. Which edge security appliance should be recommended for an organization that has
no dedicated security team and needs multiple security protection functions?
A. Router
B. WAF
C. UTM
D. DLP
20. What should be used to connect a remote government agency across public
networks (note that it needs to support the NSA suite of encryption protocols)?
A. VPN
B. HAIPE
C. VLAN
D. Protected distribution
Case study
You are employed as chief information security officer (CISO) for MORD Motor Cars
U.K. You are meeting with the network team to discuss the proposed plan for the new data
center. A new customer-facing e-commerce site will be run from a brand-new office and
data center in Coventry, United Kingdom (UK).
The data center will also allow collaboration with a Chinese manufacturing company,
through the addition of business-to-business (B2B) portals.
Place each device in the position that will offer the best security for the network. For
bonus points, which ports need to be opened on the firewall?
Answers
1. B
2. A
3. D
4. B
5. C
6. B and C
7. C
8. B
9. A
10. C
11. B
12. B
13. B
14. C
15. A and B
16. D
17. D
18. D
19. C
20. B
• Understanding the goals for the system, as well as understanding user expectations
and requirements (this involves capturing the user story – what exactly does the
customer expect the system to deliver?).
• Identifying project resources, such as available personnel and funding.
• Discovering whether alternative solutions are already available. Is there a more
cost-effective solution? (Note that government departments may be required to look
toward third parties such as cloud service providers.)
• Performing system and feasibility studies.
The analysis/initiation phase is critical. Proper planning saves time, money, and resources,
and it ensures that the rest of the SDLC will be performed correctly.
Development/acquisition phase
Once the development team understands the customer requirements, development can
begin. This phase includes design and modeling and will include the following:
• Must perform risk assessments. SAST, DAST, and penetration testing must
be performed.
• Plan the system security testing. This is done with the Security Requirements
Traceability Matrix (SRTM).
Implementation phase
During this phase, the system is created from the designs in the previous stages:
• Refreshing hardware
• Performance benchmarking
• Patching/updating certain components to ensure they meet the required standards
• Improving systems when necessary
Disposal phase
There must be a plan for the eventual decommissioning of the system:
Development approaches
It is important to focus on a development methodology that fits with the project's needs.
There are approaches that focus more on customer engagement throughout the project life
cycle. Some approaches allow the customer to have a clear vision of the finished system
at the beginning of the process and allow the development to be completed within strict
budgets, while other approaches lend themselves to prototyping. Whatever your approach,
it must align with the customers' requirements.
Waterfall
The waterfall model has been a mainstay of systems development for many years. It is a very
rigid approach with little customer involvement after the requirements phase. This model
depends upon comprehensive documentation in the early stages.
At the beginning of the project, the customer is involved and defines their requirements.
The development team captures these requirements, and the customer is then not
involved again until the release is ready for customer acceptance testing.
The design will be done by the software engineers based upon the documentation
captured from the customer.
The next stage is implementation, or coding; there is no opportunity for customer
feedback at this stage.
During verification, we will install, test, and debug, and then perform customer
acceptance testing. If the customer is not satisfied at this point, we must go right back to
the very start.
• We need all requirements and documentation before or at the start of the project.
• No flexibility, change, or modification is possible until the end of the cycle.
Agile
The Agile methodology is based on plenty of customer engagement. The customer is
involved not just in the requirements phase. It is estimated that around 80% of all current
development programs use this methodology. When using the Agile method, the entire
project is divided into small incremental builds. In Figure 2.4 we can see the Agile
development cycle:
In the event the customer is not satisfied, we will record the required changes and
incorporate this into a fresh development cycle.
Spiral
The spiral model can be used when there is a prototype. When neither the Agile nor
waterfall methods are appropriate, we can combine both approaches.
For this model to be useful, it needs to start with a prototype. We are basically refining
a prototype as we go through several iterations. At each stage, we perform a risk analysis
after the prototype has been fine-tuned; the last iteration is then used to build through to
the final release.
This allows the customer to be involved at regular intervals as each prototype is refined.
In Figure 2.5 we can see the spiral model:
Versioning
Version control is of paramount importance. When considering Continuous Integration/
Continuous Delivery (CI/CD), it is important to document build revisions and
incorporate this into the change management plan when considering backout plans. For
example, Microsoft brings out a major feature release of the Windows 10 operating system
twice a year. The original release was in July 2015 with build number 10240 and version
ID 1507. The May 2021 update carries version 21H1 and build 19043. You can check the
current build by running winver from a command prompt. Figure 2.6 shows version control:
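In addition to winver, the build can also be read from the command line; the output below is illustrative, and the final sub-build number will vary with the patch level:

    C:\> ver
    Microsoft Windows [Version 10.0.19043.1288]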
Software assurance
It is important that the systems and services that are developed and used by millions of
enterprises, businesses, and users are robust and trustworthy. This process of software
assurance is achieved using standard design concepts/methodologies, standard coding
techniques and tools, and accepted methods of validation. We will take a look at
approaches to ensure reliable, bug-free code is deployed.
Sandboxing/development environment
It is important to have clearly defined segmentation when developing new systems.
Code will be initially written within an Integrated Development Environment (IDE);
testing can be done in an isolated area separate from production systems, often using
a development system or network.
SecDevOps
The term DevOps originates from software development and IT operations. When
implemented, it means continuous integration, automated testing, continuous delivery,
and continuous deployment. It is more of a cultural methodology, meaning that
development and operations will work as a team.
Over the past few years, it has become commonplace to see Agile development
methodology and related DevOps practices being implemented. Adopting these ideas
means that the developers improve software incrementally and continuously, rather than
offering major updates on annual or bi-annual cycles.
DevOps itself does not deliver cybersecurity. What is needed is SecDevOps. The term
stresses that an organization treats security with as much importance as development
and operations. Figure 2.7 depicts the close alignment of Development, Operations,
Application Delivery, and Security:
Continuous integration
CI is the process of combining the code from individual development teams into a shared
build/repository. Depending on the size and complexity of the project, this may be done
multiple times a day.
Continuous delivery
CD is often used with CI. It will enable the development team to release incremental
updates into the operational environment.
Continuous testing
Continuous testing is the process of testing during all the stages of development, the goal
being to identify errors before they can be introduced into a production environment.
Testing will identify functional and non-functional errors in the code.
When developing complex systems with large development teams, it is important that integration testing is run on a daily basis, sometimes several times a day. All the units of code must be tested together to ensure they remain aligned and integrate correctly.
When a small code change must be evaluated within the existing environment, we should use regression testing.
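As a simple illustration of automated regression testing, the following hypothetical pytest-style check could run on every build, so that a small change to an existing function (here, a discount calculation) is evaluated against the behavior the rest of the environment depends on:

import pytest

# Existing production code under test (hypothetical example)
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Regression tests capturing behavior that must not change
def test_existing_behaviour_unchanged():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99

def test_invalid_percentage_still_rejected():
    with pytest.raises(ValueError):
        apply_discount(50.0, 150)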
Continuous operations
The concept of continuous operations ensures the availability of systems and services in
an enterprise. The result is that users will be unaware of new code releases and patches,
but the systems will be maintained. The goal is to minimize any disruption during the
introduction of new code.
Continuous operations will require a high degree of automation across a complex
heterogeneous mix of servers, operating systems, applications, virtualization, containers,
and so on. This will be best served by an orchestration architecture. In Figure 2.8 we can
see the DevOps cycle.
Code signing
When code has been tested and validated, it should be digitally signed to ensure we have
trusted builds of code modules.
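The following is a minimal sketch of the underlying idea, using the Python cryptography library to sign a build artifact with a private key and verify it with the matching public key. In practice, code signing uses certificates issued by a trusted certificate authority and keys held in protected hardware; the filename and raw key pair below are for illustration only:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Demonstration key pair only; real signing keys are protected and certified
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

artifact = open("release_build.zip", "rb").read()  # hypothetical build artifact

# Sign the artifact
signature = private_key.sign(artifact, padding.PKCS1v15(), hashes.SHA256())

# Anyone holding the public key can confirm the build has not been tampered with
try:
    public_key.verify(signature, artifact, padding.PKCS1v15(), hashes.SHA256())
    print("Trusted build: signature is valid")
except InvalidSignature:
    print("Do not deploy: signature check failed")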
It is estimated that over 80% of software breaches are due to vulnerabilities present at the
application layer. It is important to eliminate these bugs in the code before the software is
released. We will now take a look at three different approaches.
• Accurate
• Fast testing
• Easy to deploy
• On-demand feedback
With demand for the rapid development of applications and new functionality, it is important to deploy the correct tools to reduce risk. The OWASP Top 10 lists the most critical web application security risks:
• Injection
• Broken authentication
• Sensitive data exposure
• XML External Entities (XXE)
• Broken access control
• Security misconfiguration
• Cross-Site Scripting (XSS)
• Insecure deserialization
• Using components with known vulnerabilities
• Insufficient logging and monitoring
A PDF document with more detail can be downloaded from the following link:
https://tinyurl.com/owasptoptenpdf.
Important Note
Specific attack types, including example logs and code, along with mitigation
techniques, are covered in Chapter 7, Risk Mitigation Controls.
We will now look at securing our hosting platforms, by adopting a strong security posture
on our application web servers.
Baseline and templates 75
X-Frame-Options
The X-Frame-Options (XFO) security header helps to protect your customers against
clickjacking exploits. The following is the correct configuration for the header:
X-Frame-Options "SAMEORIGIN"
Strict-Transport-Security
The Strict-Transport-Security (HSTS) header instructs client browsers to always connect
via HTTPS. Without this setting, there may be the opportunity for the transmission of
unencrypted traffic, allowing for sniffing or MITM exploits. The following is the suggested
configuration for the header:
Strict-Transport-Security: max-age=31536000
In the preceding example, the time is in seconds, making the time 1 year. (The client browser will always connect using TLS for a period of not less than 1 year.)
When the preceding settings are incorporated into your site's .htaccess file or your
server configuration file, it adds an additional layer of security for web applications.
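As an example of how these headers might be applied in application code rather than in .htaccess, the following minimal sketch uses the Flask framework; the header values mirror those shown above:

from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    # Protect against clickjacking
    response.headers["X-Frame-Options"] = "SAMEORIGIN"
    # Instruct browsers to connect over HTTPS only, for at least one year
    response.headers["Strict-Transport-Security"] = "max-age=31536000"
    return response

@app.route("/")
def index():
    return "Hello, secure world"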
If you are configuring Microsoft Internet Information Server (IIS), you can also add
these security options through the IIS administration console. In Figure 2.9 we can see
HTTP headers being secured on a Microsoft IIS web application server.
The CERT division of the Software Engineering Institute (SEI) at Carnegie Mellon University publishes a set of top 10 secure coding practices:

• Validate input.
• Heed compiler warnings.
• Architect and design for security policies.
• Keep it simple.
• Default deny.
• Adhere to the principle of least privilege.
• Sanitize data sent to other systems.
• Practice defense in depth.
• Use effective quality assurance techniques.
• Adopt a secure coding standard.
More information about their work and contribution toward secure coding standards can
be found at the following link:
https://wiki.sei.cmu.edu/confluence/display/seccode/SEI+CERT+Coding+Standards
Microsoft SDL is a methodology that introduces security and privacy considerations
throughout all phases of the development process. It is even more important to consider
new scenarios, such as the cloud, Internet of Things (IoT), and Artificial Intelligence
(AI). There are many free tools provided for developers including plugins for Microsoft
Visual Studio:
https://www.microsoft.com/en-us/securityengineering/sdl/resources
Container APIs
Many applications will be virtualized and deployed in containers (for Docker containers,
see Chapter 3, Enterprise Data Security, Including Secure Cloud and Virtualization
Solutions). Containers are an efficient way to scale up the delivery of applications, but
when deployed across many hardware compute platforms it can become overly complex.
Workloads need to be provisioned and de-provisioned on a large scale. There is a new
industry approach to address this need, based on an open source API named Kubernetes.
Kubernetes was developed to allow the orchestration of multiple (virtual) servers.
Containers are nested into Pods, and Pods can be scaled based upon demand.
An example might be a customer who has purchased an Enterprise Resource Planning
(ERP) system comprising multiple application modules of code. The SaaS provider has
many customers who require these services, and each customer must be isolated from the
others. Whilst this could be done manually, automation is useful for deployment, monitoring, and scaling; when there is an outage, the system is self-healing (it detects the problem and can restart or replicate failed containers). Figure 2.11 shows an overview of container
API management:
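As a minimal sketch of this kind of automation, the official Kubernetes Python client can be used to scale a deployment and inspect its Pods; the deployment name, namespace, and replica count below are hypothetical:

from kubernetes import client, config

# Load credentials from the local kubeconfig (for example, ~/.kube/config)
config.load_kube_config()

# Scale the hypothetical 'erp-frontend' deployment to five replicas on demand
apps_v1 = client.AppsV1Api()
apps_v1.patch_namespaced_deployment_scale(
    name="erp-frontend",
    namespace="customer-a",
    body={"spec": {"replicas": 5}},
)

# List the Pods running in that customer's isolated namespace
core_v1 = client.CoreV1Api()
for pod in core_v1.list_namespaced_pod(namespace="customer-a").items:
    print(pod.metadata.name, pod.status.phase)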
Without having access to this enterprise asset, overall control of end-to-end production
processes will be difficult.
The requirements are for users to be able to access content and, where necessary, create, edit, and
delete content. Version control is important, meaning documents can be checked out for
editing but will still be accessible for read access by authorized users of the system.
Microsoft 365 uses SharePoint to allow participants to interact with a web-based portal
in an easy-to-use and intuitive way (normally presented to the end user as OneDrive or
Microsoft Teams). Figure 2.15 shows this CMS system:
Integration enablers
It is important to consider the less glamorous components that provide unseen services, much
like the key workers who drive buses, provide healthcare services, deliver freight, and
so on. Without these services, staff would not be able to travel to work, remain in good
health, or have any inventory to manufacture products. The following sub-sections list the
common integration enablers.
Directory services
The Lightweight Directory Access Protocol (LDAP) is an internet standard for
accessing and managing directory services. LDAPv3 is an internet standard documented
in RFC 4511.
There are many vendors who provide directory services, including Microsoft Active
Directory, IBM, and Oracle. Directory services are used to share information about
applications, users, groups, services, and networks across networks.
Directory services provide organized sets of records, often with a hierarchical structure
(based upon X.500 standards), such as a corporate email directory. You can think of
directory services like a telephone directory, which is a list of subscribers with their
addresses and phone numbers. LDAP uses TCP port 389; LDAP over TLS (LDAPS) uses TCP port 636.
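As an illustration, the following minimal sketch uses the ldap3 Python library to query a directory over LDAPS on TCP port 636; the server name, bind account, and search base are hypothetical:

from ldap3 import ALL, Connection, Server

# Connect over LDAPS (TCP 636) rather than plain LDAP (TCP 389)
server = Server("ldaps://dc01.example.com", port=636, use_ssl=True, get_info=ALL)
conn = Connection(
    server,
    user="CN=svc-ldap,OU=Service Accounts,DC=example,DC=com",
    password="ChangeMe!",  # in practice, retrieve this from a vault
    auto_bind=True,
)

# Search for user objects and return two attributes
conn.search(
    search_base="DC=example,DC=com",
    search_filter="(objectClass=user)",
    attributes=["cn", "mail"],
)
for entry in conn.entries:
    print(entry.cn, entry.mail)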
Domain Name System (DNS)
Authoritative name servers (those that host the DNS for the domain) communicate updates using zone transfers (the zone file is the actual database of records). Incremental Transfer (IXFR) describes the incremental transfer of nameserver records, while All Transfer (AXFR) describes the transfer of the complete zone file. It is important that this process is restricted to authorized DNS servers using Access Control Lists (ACLs). It is also important that any updates to DNS zone files are trustworthy. Transaction Signatures (TSIG), defined in RFC 2845 (and updated by RFC 8945), are used to authenticate zone transfers and updates. When TSIG is used, two systems will share a secret key (in an Active Directory domain, Kerberos will take care of this). This key is then used to generate a Hash-based Message Authentication Code (HMAC), which is then applied to all DNS transactions.
Figure 2.16 shows the security configuration for a zone file:
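As an illustration of a TSIG-authenticated zone transfer, the following sketch uses the dnspython library; the server address, zone name, key name, and key material are hypothetical:

import dns.query
import dns.tsig
import dns.tsigkeyring
import dns.zone

# Shared secret agreed between the two name servers (hypothetical values)
keyring = dns.tsigkeyring.from_text({"transfer-key": "bXlzZWNyZXRrZXltYXRlcmlhbA=="})

# Request a full zone transfer (AXFR), signed with the TSIG key
xfr = dns.query.xfr(
    "192.0.2.53",
    "example.com",
    keyring=keyring,
    keyname="transfer-key",
    keyalgorithm=dns.tsig.HMAC_SHA256,
)
zone = dns.zone.from_xfr(xfr)

for name, node in zone.nodes.items():
    print(name, node.to_text(name))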
Service-oriented architecture
Service-Oriented Architecture (SOA) is an architecture developed to support service
orientation, as opposed to legacy or monolithic approaches.
Originally, systems were developed with non-reusable units. For example, a customer
can buy a complete ERP system, but a smaller customer would like to have only some of
the full functionality (perhaps just the human resource and financial elements). In this
situation, with legacy or monolithic approaches, the developers could not easily provide
these two modules independently.
Using a more modular approach, developers can package functionality into discrete modules or services and provide them using open standards. SOA allows these services to communicate over a network using a common protocol. The service, which is a single functional
unit, can be accessed remotely and offers functionality, such as a customer being able to
access order tracking of a purchased item, without needing to access an entire sales order
processing application.
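As a simple sketch of a single functional unit exposed as a service, the following hypothetical order-tracking endpoint (written with Flask) could be consumed by a customer portal without exposing the rest of the sales order processing application:

from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical backing data; a real service would query the order system
TRACKING = {"1001": {"status": "shipped", "carrier": "DHL", "eta": "2022-03-14"}}

@app.route("/orders/<order_id>/tracking")
def order_tracking(order_id):
    # Only the tracking function is exposed, not the whole sales application
    record = TRACKING.get(order_id)
    if record is None:
        return jsonify({"error": "order not found"}), 404
    return jsonify(record)

if __name__ == "__main__":
    app.run()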
Examples of SOA technologies include the Enterprise Service Bus (ESB). The term middleware is sometimes used to describe these types of connectivity services. We can see the components of an ESB in Figure 2.19:
Summary
In this chapter, we have taken a look at frameworks used for developing or commissioning
new services or software (the SDLC and SDL). We have covered how systems and services
can be built securely. As a security professional, it is important to understand how we can
provide assurance that products meet the appropriate levels of trust. We have learned how
to deploy services that can be considered trustworthy and meet recognized standards.
We have looked at the process of automation by deploying DevOps pipelines. We have
looked at the cultural aspects of combining development and operations teams (DevOps)
with a focus on security (SecDevOps).
In this chapter, you have:

• Learned the key concepts of the SDLC, including its methodologies and security frameworks
• Gained an understanding of DevOps and SecDevOps
• Learned about different development approaches, including Agile, waterfall, and spiral
• Gained an understanding of software QA, including sandboxing, DevOps pipelines, continuous operations, and static and dynamic testing
• Gained an understanding of the importance of baselines and templates, including NCSC-recommended approaches, OWASP industry standards, and Microsoft SDL
• Gained an understanding of the importance of integration enablers, including DNS, directory services, SOA, and ESB
These skills will be useful in the next chapter, when we take a journey through the available cloud and virtualization platforms.
Questions
Here are a few questions to test your understanding of the chapter:
1. Which of the following is a container API?
A. VMware
B. Kubernetes
C. Hyper-V
D. Docker
2. Why would a company adopt secure coding standards? Choose all that apply.
C. Software integrity
D. Software agility
A. Network enumerator
B. Sniffer
C. Fuzzer
D. Wi-Fi analyzer
A. Compiled code
B. Dynamic code
C. Source code
D. Binary code
A. CRM
B. ERP
C. CMDB
D. DNS
10. What would be a useful tool to integrate all business functions within an enterprise?
A. CRM
B. ERP
C. CMDB
D. DNS
11. What would be a useful tool to track all configurable assets within an enterprise?
A. CRM
B. ERP
C. CMDB
D. DNS
12. How can I ensure content is made accessible to the appropriate users through my
web-based portal?
A. CRM
B. CMS
C. CMDB
D. CCMP
A. DMARC
B. DNSSEC
C. Strict Transport Security
D. IPSEC
14. What is it called when software developers break up code into modules, each one
being an independently functional unit?
A. SOA
B. ESB
C. Monolithic architecture
D. Legacy architecture
15. What is the most important consideration when planning for system end of life?
16. What type of software testing is used when there has been a change within the
existing environment?
A. Regression testing
B. Pen testing
C. Requirements validation
D. Release testing
17. What is it called when the development and operations teams work together to
ensure that code released to the production environment is secure?
A. DevOps
B. Team-building exercises
C. Tabletop exercises
D. SecDevOps
18. What software development approach would involve regular meetings with the
customer and developers throughout the development process?
A. Agile
B. Waterfall
C. Spiral
D. Build and Fix
19. What software development approach would involve meetings with the customer
and developers at the end of a development cycle, allowing for changes to be made
for the next iteration?
A. Agile
B. Waterfall
C. Spiral
D. Build and Fix
20. What software development approach would involve meetings with the customer and
developers at the definition stage and then at the end of the development process?
A. Agile
B. Waterfall
C. Spiral
D. Build and Fix
21. Where will we ensure the proper HTTP headers are configured?
A. Domain Controller
B. DNS server
C. Web server
D. Mail server
Answers
1. B
2. B, C, D and E
3. B
4. A and D
5. C
6. B
7. C
8. C
9. A
10. B
11. C
12. B
13. B
14. A
15. B
16. A
17. D
18. A
19. C
20. B
21. C
3
Enterprise Data
Security, Including
Secure Cloud and
Virtualization
Solutions
An organization must ensure that proper due diligence and due care are exercised when
considering the storage and handling of data. Data will be stored and accessed across
complex, hybrid networks. Data types may include sensitive data, intellectual property,
and trade secrets. Regulatory compliance and legal requirements will need to be carefully
considered when planning for the storage and handling of data. Data needs to be labeled
and classified according to the business value, controls put in place to prevent data loss,
and an alert needs to be raised if these controls have any gaps. We need to plan how to
handle data throughout the life cycle, from creation/acquisition to end of life. We must
understand the implications of storing our data with third parties, such as B2B partners
and cloud providers. We must ensure that appropriate protection is applied to data at rest,
in transit, and in use.
Print blocking
It is important to recognize other means of exfiltrating data from systems. Screengrabs
and the printing of sensitive information should also be restricted. By way of an
experiment, open your mobile banking application and try to use the print screen
function. You will not be able to perform this action. To restrict unauthorized printing,
Digital Rights Management (DRM) can be utilized for sensitive documents, as shown
in Figure 3.2:
In Figure 3.4, we can see some of the restrictions available through Group Policy to control access to resources during a remote session:
Watermarking
If an organization wants to detect the theft or exfiltration of sensitive data, then documents can be checked out from an information system with an automatic watermark applied, based on the identity of the user who checked out the document, as shown in Figure 3.5. If the document is shared or printed, it will clearly show that user's identity.
Data classification
The appropriate data owner needs to be consulted within the enterprise to establish the
classification of data to ensure that appropriate controls are implemented.
Due to the amount of data that is typically held by large enterprises, automation is
a common approach. For example, keyword or string searches could be utilized to
discover documents containing a driver's license number, social security number, debit
card numbers, and so on. Data classification can then be used to block actions where necessary and prevent data leakage. In Figure 3.6, we can see categories that could be used to label data:
Metadata/attributes
Metadata is the data that describes data. Metadata can be very useful when searching
across stores with large files. We can tag data using common attributes or store the data
within the file itself. Figure 3.7 shows the metadata of an image:
Attributes are used with tags; they consist of the identifier followed by a value, as shown
in Figure 3.8:
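As an illustration of reading such attributes programmatically, the following sketch uses the Pillow library to list the EXIF tags embedded in an image file (the filename is hypothetical):

from PIL import Image
from PIL.ExifTags import TAGS

image = Image.open("holiday_photo.jpg")  # hypothetical image file
exif = image.getexif()

# Each EXIF entry is an identifier (tag ID) followed by a value
for tag_id, value in exif.items():
    tag_name = TAGS.get(tag_id, tag_id)
    print(f"{tag_name}: {value}")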
Obfuscation
Obfuscation is defined in the Oxford dictionary as the act of making something less clear
and more difficult to understand, usually deliberately. We can use this approach to
protect data.
It is important to protect data in use and data at rest, and there are many ways to
achieve this, including strong Access Control Lists (ACLs) and data encryption. When
considering the use of records or processing transactions, certain strings or keys may be
hidden from certain parts of the system. Here are some common approaches:
• Data tokenization: This is often associated with contactless payments. Your debit
card is implemented in the Google Pay app as a token. The bank allocates a unique
token to your mobile app but your actual payment details (including the security
code on the reverse of the card) are not stored with the token.
• Data scrubbing: This can be used to detect or correct any information in a database
that has some sort of error. Errors in databases can be the result of human error
in entering the data, the merging of two databases, a lack of company-wide or
industry-wide data coding standards, or due to old systems that contain inaccurate
or outdated data. This term can also be used when data has been removed from log
files; the user is likely hiding the evidence.
• Data masking: This is a way to create a bogus, but realistic, version of your
organizational data. The goal is to protect sensitive data while providing a practical
alternative when real data is not needed. This would be used for user training, sales
demos, or software testing. The format would remain the same, but, of course, the
original data records would not be visible.
The data masking process will change the values of the data while using the same format.
The goal is to create a version that cannot be deciphered or reverse engineered.
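The following is a minimal sketch of data masking in Python: card numbers keep their original format and length, but every digit except the last four is replaced, so the masked records look realistic yet cannot be reversed:

def mask_card_number(card_number: str) -> str:
    # Keep the format (spaces/dashes) and the last four digits; mask the rest
    total_digits = sum(c.isdigit() for c in card_number)
    output, masked_so_far = [], 0
    for c in card_number:
        if c.isdigit() and masked_so_far < total_digits - 4:
            output.append("X")
            masked_so_far += 1
        else:
            output.append(c)
    return "".join(output)

print(mask_card_number("4929 1234 5678 9012"))  # XXXX XXXX XXXX 9012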
Anonymization
The use of big data and business intelligence presents significant regulatory
challenges. Take, for example, a situation where governments need to track the
effectiveness of strategies during a pandemic. A goal may be to publish the fact that 25,000
citizens within the age range of 65-75 years old have been vaccinated within the city of
Perth (Scotland). It should not be possible to extract individual Personally Identifiable
Information (PII) records for any of these people. Fraser McCloud, residing at 25 Argyle
Avenue, telephone 01738 678654, does not expect his personal details to be part of this
published information.
We must manage data throughout its life cycle, through the following phases:

• Create: Phase one of the life cycle is the creation/capture of data. Examples include
documents, images, mapping data, and GPS coordinates.
• Store: We must store the data within appropriate systems, including file shares,
websites, databases, and graph stores.
• Use: Once the data is held within our information system, we must ensure that data
governance is applied. This means classifying data, protecting data, and retaining it.
We must ensure we take care of legal and regulatory compliance.
• Archive: Data should be preserved to meet regulatory and legal requirements; this
should map across to data retention policies.
• Destroy: Data should be purged based upon regulatory and legal requirements.
There is no business advantage in retaining data that is not required. When we store
data for too long, we may be more exposed as a business in the event of a lawsuit.
A legal hold would require a business to make available all records pertaining to
the lawsuit.
Full backup
A full backup will back up all the files in the backup set every time it is run. Imagine this
is a Network Attached Storage (NAS) array containing 100 terabytes of data. It may take
a significant amount of time to back up all the data every day, while also considering the
storage overheads.
The disadvantages are as follows:

• It is time-consuming.
• Additional storage space is required.
Differential backup
A differential backup is usually run in conjunction with a full backup. The full backup
would be run when there is a generous time window, on a Sunday, for example. Each
day, the differential backup would back up any changes since Sunday's full backup. So,
Monday's backup would be relatively quick, but by the time Friday's differential backup
is run, it will have grown to perhaps five times the size of Monday's backup.
The disadvantages are as follows:

• Additional storage space is required (over and above the incremental backup).
Incremental backup
An incremental backup is usually run in conjunction with a full backup. The full backup
would be run when there is a generous time window – Sunday, for example. Each day, the
incremental backup would back up any changes since the previous backup. So, each daily
incremental backup would take approximately the same amount of time, while the volume
of data stored would be similar.
The advantages are as follows:

• It is the quickest daily backup to run.
• It requires the least amount of storage space.

RAID (Redundant Array of Independent Disks) configurations can also be used to protect data by providing redundancy and/or improved performance. Common RAID levels include the following:
• RAID 0: This is used to aggregate multiple disks across a single volume. It will allow
for fast disk I/O operations, and there is no redundancy (hence the 0). It achieves
this performance by spreading the write operations across multiple physical disks.
It also speeds up read operations by a similar margin (25-30%). This requires two or
more disks, as shown in Figure 3.11:
• RAID 1: This is disk mirroring and uses two disks. The data is written to both disks
synchronously, creating a mirror of the data. There is no real performance gain
when deploying this RAID level (compared to a single disk). If one of the mirrored
disks fails, we can continue to access the storage. Figure 3.12 shows an example of
RAID 1 disk mirroring:
• RAID 5: This uses a minimum of three disks. The data is written to all disks
synchronously, creating a single logical disk. There is a performance gain for read
operations. If one of the RAID 5 disks fails, we can continue to access the storage.
This technology uses a single parity stripe to store the redundant data, as shown in
Figure 3.13:
• RAID 6: This uses a minimum of four disks. The data is written to all disks
synchronously, creating a single logical disk. There is a performance gain for read
operations. If two of the RAID 6 disks fail, we can continue to access the storage.
This technology uses a dual parity stripe to store the redundant data, as shown in
Figure 3.14:
• RAID 10: This uses a minimum of four disks. It combines RAID 0 and RAID 1 in
a single system. It provides security by mirroring all data on secondary drives while
using striping across each set of drives to speed up data transfers, as shown in
Figure 3.15:
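As a worked example of the trade-offs above, the following sketch calculates the usable capacity for each RAID level from the number of disks and the size of each disk:

def usable_capacity(raid_level: int, disks: int, disk_tb: float) -> float:
    # Return usable capacity in TB for common RAID levels
    if raid_level == 0:     # striping, no redundancy
        return disks * disk_tb
    if raid_level == 1:     # mirroring (two disks)
        return disk_tb
    if raid_level == 5:     # single parity stripe
        return (disks - 1) * disk_tb
    if raid_level == 6:     # dual parity stripe
        return (disks - 2) * disk_tb
    if raid_level == 10:    # mirrored stripes
        return (disks // 2) * disk_tb
    raise ValueError("unsupported RAID level")

for level, count in [(0, 2), (1, 2), (5, 3), (6, 4), (10, 4)]:
    print(f"RAID {level} with {count} x 4 TB disks: "
          f"{usable_capacity(level, count, 4)} TB usable")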
Pretty much any compute node can be run virtually on top of a software layer. This
software layer is the hypervisor. User desktops, email servers, directory servers, switches,
firewalls, and routers are just a few examples of virtual machines (VMs).
Virtualization allows for more flexibility in the data center, rapid provisioning, and
scalability as workloads increase. Additional benefits include reducing an organization's
footprint in the data center (less hardware, reduced power, and so on). Figure 3.16 shows
resources being allocated to a virtual guest operating system using Microsoft Hyper-V:
Virtualization strategies
When considering virtualization, an important choice will be the type of hypervisor. If
you are planning for the computing requirements of the data center, then you will need to
choose a bare-metal hypervisor, also known as Type 1. This is going to require minimal
overhead, allowing for maximum efficiency of the underlying hardware. Testing and
developing may require desktop virtualization tools, where compatibility will be important.
If we need to move the development virtual workloads into production, it makes sense
to choose compatible models, for example, VMware Workstation for the desktop and VMware ESXi for the production data center. Application virtualization may also be a
useful strategy when we have a mixture of desktop users, where compatibility may be an
issue. Containers should also be considered for their efficient use of computing resources.
Type 1 hypervisors
In effect, a type 1 hypervisor takes the place of the host operating system. Type 1 hypervisors tend to be more reliable, as they do not depend on an underlying host operating system and have fewer dependencies. Type 1 hypervisors are the default
for data centers. While this approach is highly efficient, it will need management tools
to be installed on a separate computer on the network. For security, this management
computer will be segmented from regular compute nodes. Figure 3.17 shows the diagram
of Type 1 hypervisors:
Type 2 hypervisors
A type 2 hypervisor requires an installed host operating system. It is installed as an
additional software component. It is a useful tool for any job role that requires access to
more than one operating system. This type of hypervisor would allow an Apple Mac user
to run native Microsoft Windows applications using VMware Fusion.
The reason type 2 hypervisors are not suitable for the data center is that they are not
as efficient as type 1 hypervisors. They must access computing resources from the
main installed host operating system. This will cause latency issues when dealing with
enterprise workloads. Type 2 is more common when we require virtualization on
a desktop computer for testing/development purposes. Figure 3.18 shows type 2 hypervisors:
• VMware Fusion: Allows Mac users to run a large range of guest operating systems.
• VMware Workstation: Allows Linux and Windows users to run multiple operating
systems on a single PC.
• VMware Player: This is free but only supports a single guest OS.
• Oracle VirtualBox: Can run on Linux, macOS, and Windows operating systems.
It is a free product.
Containers
Containers are a more efficient way of deploying workloads in the data center. A container
is a package containing the software application and all the additional binary files and
library dependencies. Because the workload is isolated from the host operating system,
any potential bugs or errors thrown by the container-based application will not affect any
other applications.
Containers allow for isolation between different applications. It is important to consider
the security aspects of containers. If we are using a cloud platform, we need to be sure that
we have segmentation from other customers' containers.
Containers allow easy deployment on multiple operating system platforms and make
migration an easy process. They have a relatively low overhead as they do not need to run
on a VM.
To support containers, you will need a container management system (often called the
engine); the most popular product at the moment is Docker. There are versions of Docker
that developers can run on their desktops, to then migrate into the data center. Figure 3.19
shows a container:
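As a brief illustration, the Docker SDK for Python can be used to launch and inspect a container from code; the image, container name, and port mapping below are hypothetical:

import docker

client = docker.from_env()  # talks to the local Docker engine

# Run an nginx container in the background, mapping port 80 to host port 8080
container = client.containers.run(
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},
    name="demo-web",
)
print(container.name, container.status)

# Clean up the demonstration container
container.stop()
container.remove()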
Emulation
Emulation allows for the running of a program or service that would not run natively
on a given platform. Examples could include running legacy arcade games on a modern
computer by installing an emulator program. Terminal emulators are used to remote to
another device, replacing the need to connect a serial cable direct from a terminal to
a network appliance. Linux commands can be used on a Windows 10 desktop computer
by emulating the Linux command shell, as shown in Figure 3.20:
Application virtualization
Application virtualization can be useful when there is a need to support a Line of
Business (LOB) application or a legacy application across multiple platforms. We
can publish an application on a Microsoft Remote Desktop Services (RDS) server and host multiple sessions to that single deployed application across an RDP connection. To scale out to an enterprise, we could deploy a server farm. We could, for example, access Windows applications from a Linux host using this deployment model. Alternatively, the application can be streamed across the network and run locally. Microsoft calls this
technology App-V. Citrix has a similar technology, named XenApp. Figure 3.21 shows
application virtualization:
VDI
VDI allows the provision of compute resources to be controlled from within the data
center or cloud. A typical model is an employee using a basic thin client to access a fully
functional desktop with all required applications. The user can access their desktop using
any device capable of hosting the remote software. Microsoft has an RDP client that can
run on most operating systems. The advantages of this approach are resilience, speed of
deployment, and security. Figure 3.22 shows a VDI environment:
Cybersecurity
The most critical consideration is cybersecurity. When we are storing data, such as
intellectual property, PII, PHI, and many other types of data, we must consider all the
risks before deploying cloud models.
Business directives
Does the CSP align with our regulatory requirements? Government agencies will only
be able to work with providers who meet the FedRAMP criteria. Federal Risk and
Authorization Management Program (FedRAMP) is a US government-wide program
that provides a standardized approach to security assessment, authorization, and
continuous monitoring for cloud products and services. Amazon Web Services (AWS)
and Microsoft Azure cloud offerings have multiple accreditations, including PCI-DSS,
FedRAMP, ISO27001, and ISO27018 to name but a few.
Cost
The cost of operating services in the cloud is one of the most important drivers when
choosing a cloud solution. Unfortunately for the Chief Financial Officer (CFO), the
least expensive may not be a workable solution. Certain industries are tied into legal or
regulatory compliance, meaning they will need to consider a private cloud or, in some
cases, community cloud models.
Scalability
What scalability constraints will there be? It was interesting in early 2020 when
organizations were forced to adopt a much more flexible remote working model, which
fully tested the scalability of the cloud. Web conferencing products such as Zoom and
Microsoft Teams reported a 50% increase in demand from March to April 2020.
Resources
What resources does the CSP have? Many CSPs operate data centers that are completely
self-sufficient regarding power, typically powered by renewable energy (wind farms,
hydroelectric, and the like). How many countries do they operate in? Do they have data
centers close to your business? Do they have a solid financial basis?
Location
The location of the CSP data centers could be critically important. Think about legal issues
relating to data sovereignty and jurisdiction. Also, does the provider offer geo-redundancy? If the New York data center has an environmental disaster, can we
host services in the Dallas data center?
Data protection
We must ensure that we can protect customer data with the same level of security as
on-premises data stores. We would need to ensure data is protected at rest and in transit.
In Figure 3.24, we can see the main options for cloud deployment models:
Private cloud
A private cloud allows an enterprise to host their chosen services in a completely isolated
data center. Legal or regulatory requirements may be the deciding factor when looking
at this model. The loss of economies of scale will often result in a higher cost when implementing
this model. Government agencies, such as the Department of Defense (DoD), are prime
candidates for the security benefits of this approach. The United Kingdom's Ministry of
Defence (MoD) has been a user of the private cloud since 2015, awarding a multi-million-
pound contract for the use of Microsoft 365 services. In the USA, the DoD has followed
suit, also signing up for multi-billion dollar contracts with Microsoft and Amazon.
Government requirements are very strict. Currently, there are two routes for a CSP to be authorized to provide cloud services to the United States government: through the Joint Authorization Board (JAB) or directly through a sponsoring government agency. In both cases, the requirements are strict and continuous monitoring is required. The program is managed as part of the Federal Risk and Authorization Management Program (FedRAMP).
To allow a CSP to prepare for this process, there are baseline security audit requirements
and additional documentation available through the following link: https://www.fedramp.gov/documents-templates/.
Public cloud
Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP)
are examples of public cloud providers. They offer a service in which you can enroll and
configure your workloads.
Public cloud providers operate from large geolocated data centers to offer services to
a wide range of customers.
Range International Information Group hosts the world's largest data center. It is
located in Langfang, China. It covers an area equivalent to 110 football pitches (around
6.3 million square feet). In comparison, Microsoft's main Dublin data center occupies an
area of around 10 football pitches.
Millions of global customers are using this model. There is more flexibility and less
commitment when using the public cloud.
Most cloud providers' customers will benefit from this shared model, achieving cost
savings compared with the private cloud.
Hybrid cloud
A hybrid cloud allows an organization to use more than one cloud model. For example,
a utility provider delivering critical infrastructure may be required by regulatory
compliance to host critical services in a private cloud. The sales and marketing division
of the same enterprise may want to use business productivity tools such as salesforce.com
using a cost-effective public cloud model. A hybrid cloud allows an enterprise to meet
operational requirements using a blended model.
Community cloud
A community cloud allows organizations operating within the same vertical industry to
share costs. They may have the same strict regulatory requirements not suited to a multi-
tenant public cloud, but can still look to benefit from a shared cost model. Obviously, they
will not achieve the same cost savings as public cloud customers.
Hosting models
When you choose a cloud deployment model, you have, in effect, signed up for either
a totally isolated data center or a shared experience. Think of where you live. If you are
wealthy, then you can afford to live in a secure compound with your own private security
team. If you don't have that sort of money, then maybe you could live in a secure gated
community (think community cloud). If you don't want to spend too much of your salary
on housing, then maybe you could rent an apartment within a building, sharing common
walkways and elevators (think public cloud). However, other options are available.
If you pay the least amount of money possible to host services in a public cloud, then you
will most likely be using a multi-tenant model. Your hosted web server will be running as
a VM alongside other customers' VMs on the same hypervisor. Maybe your database will
be hosted on the same server as the other customers using the same schema.
Many public cloud providers will offer single-tenant services, but at a cost. So, the
provider has a public cloud, but allows customers to host services on a separate hardware
stack. Obviously, they will want to charge more money for this.
Service models
Once you have chosen the cloud deployment model, you can choose the services that
your organization will need. It is important to understand the level of responsibility and involvement that your employees will have. When buying in services for end users'
business productivity, you may want to pay a fixed annual cost without any further
involvement, or perhaps you need to host critical infrastructure that enables engineers
to have total responsibility for a hosted Supervisory Control and Data Acquisition
(SCADA) network. Figure 3.25 shows examples of cloud service models:
Software as a service
Software as a service (SaaS) is where you have the least responsibility. You pay for
a license to use a software product or service. Microsoft has a product catalog available for
customers to choose from, totaling around 3,000 items.
In a small school, there is no dedicated staff to manage servers and storage or to develop
software applications. Instead, the teachers may use Moodle to set work and monitor
student progress. They may have Microsoft 365 educational licenses to assign to the
students, allowing access to an entire software suite. This is where you would use SaaS.
You can see some examples of SaaS applications that can be selected from the Microsoft
Azure portal in Figure 3.26:
Platform as a service
When choosing Platform as a service (PaaS) as a service model, you will want to maintain control of existing enterprise applications while moving the workloads into the cloud, or you are looking to develop new applications using a CSP to host the workloads.
The CSP will deploy an environment for your development team, which may consist of a
Linux Enterprise Server, Apache Web Server, and MySQL database.
The servers, storage, and networking will be managed by the CSP, while developers will
still have control and management of their applications.
Infrastructure as a service
Infrastructure as a service (IaaS) offers the best solution to enterprises who need to
reduce capital expenditure but access a fully scalable data center. The cloud provider will
need to provide power, Heating, Ventilation, and Air Conditioning (HVAC), and the
physical hardware. But we will manage the day-to-day operations using management tools
and application programming interfaces (APIs).
When using IaaS, you will have the most responsibility. You may be managing servers,
VDI, access to storage, and controlling network flows. The hardware, however, is managed
by the cloud provider.
You would not physically work in the cloud data center, so you cannot install a server in a
rack or swap out a disk drive.
It is important to recognize the responsibilities of the CSP and the customer, using the
three popular service models. Figure 3.27 shows the responsibilities each party will have:
Micro-segmentation
Micro-segmentation is used to separate workloads, securely, in your data center
or a hosted cloud data center. In practice, this means creating policies that restrict
communication between workloads where there is no reason for east-west (server to
server) traffic. Network zoning is an important concept and can dynamically restrict
communication between the zones when a threat is detected.
Traditional security is based on north-south traffic (data moving through the network
perimeter), but now we see thousands of workloads all being hosted within the same data
center (inside the perimeter). Virtualization can allow a single hardware compute node
to host thousands of VMs. By isolating these workloads using micro-segmentation, we
can reduce the attack surface, isolate breaches, and implement more granular policies for
given workloads. Figure 3.29 shows an example of isolated workloads in the data center:
The benefit of this approach is that each individual workload can be secured.
Benefits of micro-segmentation
Organizations that adopt micro-segmentation will gain the following benefits:
Jump box
Remote management is a key requirement when managing on-premises and cloud-based
data centers. Network assets will include servers and workloads with valuable data or
may host Industrial Control Systems (ICS). To manage these environments, there is
a requirement for highly specialized management tools. It would be difficult for an
engineer to manage the SCADA systems and monitor sensitive systems without specialist
tools hosted on the SCADA network. Likewise, network engineers may need to remote
into the data center to offer 24/7 support on critical systems. One approach is to securely
connect to a designated server that has secure access to the data center or ICSes. These
servers will have the appropriate management tools already installed and the connection
will be made from validated trusted external hosts. Figure 3.30 shows an example of where
a jump box would be deployed:
A jump box helps to ensure that unauthorized code or software cannot be introduced to vulnerable network systems.
File-based storage
These are regular files that are used in a traditional client-server model. Examples would
be user-mapped drives accessing shared folders on a NAS device or a file server. Other file
types could be VMs hosted by a hypervisor platform.
Database storage
Databases in this category are typically relational, consisting of two-dimensional tables with rows and
columns. A common vendor approach is to use Structured Query Language (SQL) for
retrieving and managing data. This ensures that a record is added or updated only when it
can be validated (ensuring the integrity of the data).
Block storage
Block storage stores data in fixed-sized chunks called blocks. A block will only store
a fragment of the data. This is typically used on a SAN iSCSI or Fiber Channel (FC).
Requests are generated to find the correct address of the blocks; the blocks are then
assembled to create the complete file. Block storage does not store any metadata with the
blocks. Its primary strength is fast performance, but the application and storage need to
be local (ideally on a SAN). Performance will degrade if the application and blocks are
farther apart. This storage is typically used by database applications.
Blob storage
Object storage is optimized for storing and retrieving large binary objects (images, files,
video and audio streams, large application data objects and documents, and virtual
machine disk images). To optimize searching through these stores, metadata tags can
be linked to a file. These may be customizable by the user (it would be difficult to search
through the raw data of a video image). These stores are ideal for large files and large data
stores as they will be very scalable.
Key/value pairs
A key/value pair is useful for storing identifiers and values. It is a useful storage type for
configuration files or hash lookups, such as a rainbow table or any other kind of lookup table.
This may be useful when performing a compliance scan. The database could contain
a series of identifiers and the actual value it is expecting to be set.
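The following is a minimal sketch of this idea in Python: a dictionary of key/value pairs holds the expected value for each configuration identifier, and a compliance scan compares the values gathered from a system against it (the setting names are illustrative only):

# Expected baseline: identifier -> value it should be set to
baseline = {
    "PasswordComplexity": "Enabled",
    "MinimumPasswordLength": "14",
    "SMBv1": "Disabled",
}

# Values gathered from the target system during the scan (hypothetical)
scanned = {
    "PasswordComplexity": "Enabled",
    "MinimumPasswordLength": "8",
    "SMBv1": "Disabled",
}

for key, expected in baseline.items():
    actual = scanned.get(key, "<missing>")
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {key} expected={expected} actual={actual}")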
Summary
In this chapter, you have gained an understanding of the security considerations when
hosting data on-premises and off-premises. You learned how an enterprise will implement
secure resource provisioning and deprovisioning, and the differences between type 1
and type 2 hypervisors. We then looked at containerization and learned how to choose
an appropriate cloud deployment model. Then we learned the differences between the cloud service models and gained an understanding of micro-segmentation and VPC peering. Finally, we looked at how to select the correct storage model based on the storage technologies offered by cloud providers.
In this chapter, you have acquired the following skills:
In the next chapter, we will learn about managing identities using authentication and
authorization, including Multi-Factor Authentication (MFA), Single Sign-On (SSO),
and Identity Federation.
Questions
Here are a few questions to test your understanding of the chapter:
1. What security setting is in place when Group Policy prevents my flash drive from being recognized by my Windows computer?
A. Watermarking
B. Blocking the use of external media
C. Print blocking
D. Data classification blocking
2. What stops me from capturing bank account details using my mobile banking app?
A. Watermarking
B. Blocking the use of external media
C. Print blocking
D. Data classification blocking
A. Watermarking
B. Blocking the use of external media
C. Restricted VDI
D. Data classification blocking
A. Remote Desktop
B. Protocol (RDP) blocking
C. Clipboard privacy controls
D. Web Application Firewall
5. How can you reduce the risk of administrators installing unauthorized applications
during RDP admin sessions?
A. Remote Desktop
B. Protocol (RDP) blocking
6. How can I ensure that my sales team can send quotations and business contracts out
to customers, but not send confidential company data?
7. The CISO needs to know who has been sharing signed-out company confidential
documents on a public web server. How can this be done?
8. Jenny wants to share a useful business-related video file with her colleague, but
when Charles attempts to play it using the same player and codecs it cannot be
viewed. What is the most likely cause?
A. DRM
B. Deep packet inspection
C. Network traffic analysis
D. Watermarking
9. What allows a forensics investigator to discover the time and location that a digital
image was taken?
A. Metadata
B. Obfuscation
C. Tokenization
D. Scrubbing
10. What may have allowed a rogue administrator to remove evidence from the
access logs?
A. Scrubbing
B. Metadata
C. Obfuscation
D. Tokenization
11. What stops the bank support desk personnel from accessing Ben's 16-digit VISA
card number and CVC code?
A. Metadata
B. Obfuscation
C. Key pairs
D. Masking
12. What ensures that medical researchers cannot unwittingly share PHI data from
medical records?
A. Anonymization
B. Encryption
C. Metadata
D. Obfuscation
13. What allows an organization to manage business data from the moment it is stored
to final destruction?
14. What is another name for a bare-metal hypervisor deployed in a data center?
A. Type 1
B. Emulation
C. Type 2
D. Containers
15. What allows the isolation of workloads, allowing easy migration between
vendor platforms?
A. Type 1
B. Emulation
C. Type 2
D. Containers
16. What allows Amy to play 16-bit Nintendo console games on her Windows desktop
computer?
A. Emulation
B. Middleware
C. PaaS
D. Database storage
17. What allows a legacy Microsoft office application to run on Ben's desktop alongside
Microsoft Office 365 applications?
A. Application virtualization
B. Database storage
C. Middleware
D. PaaS
18. How can we make sure that when a user leaves the organization, we can re-assign
their software licenses to the new user?
A. Deprovisioning
B. IaaS
C. Emulation
D. Off-site backups
A. Metadata
B. Indexes
C. Emulation
D. Off-site backups
20. What is the primary reason that a small family coffee shop business would choose a
public cloud model?
A. Cost
B. Scalability
C. Resources
D. Location
22. What is used to describe the situation when multiple customers are hosted on a
common hardware platform?
A. Multi-tenant
B. Platform sharing
C. Single tenant
D. Service model
23. What type of cloud service model would be used when buying 50 licenses to access
a customer relationship management (CRM) application?
A. SaaS
B. PaaS
C. IaaS
D. Security as a service (SecaaS)
24. What type of cloud service model would be used when I need to host my in-house
enterprise resource planning (ERP) suite with a CSP?
A. SaaS
B. PaaS
C. IaaS
D. SecaaS
25. What type of cloud service model would be used when the Acme corporation needs
to deploy and manage 500 VDI instances across four geographical regions?
A. SaaS
B. PaaS
C. IaaS
D. SecaaS
26. What will my CSP configure so that I have direct communication between multiple
instances of VPC?
A. IPSEC tunnel
B. VPN
C. Inter-domain routing
D. VPC peering
27. What kind of storage model would be best for images, files, video, and audio streams?
A. File-based storage
B. Database storage
C. Block storage
D. Blob storage
E. Key/value pairs
28. What kind of storage model would be provided on a storage area network (SAN)?
A. File-based storage
B. Database storage
C. Block storage
D. Blob storage
E. Key/value pairs
29. What kind of storage model would be useful when performing a compliance scan
and the database could contain a series of identifiers and the actual value it is
expecting to be set?
A. File-based storage
B. Database storage
C. Block storage
D. Blob storage
E. Key/value pairs
30. What is used when a customer is considering their responsibilities when buying
in cloud services?
Answers
1. B
2. C
3. C
4. B
5. C
6. A
7. C
8. A
9. A
10. A
11. D
12. A
13. A
14. A
15. D
16. A
17. A
18. A
19. A
20. A
21. A
22. A
23. A
24. B
25. C
26. D
27. D
28. C
29. E
30. A
4
Deploying Enterprise
Authentication
and Authorization
Controls
Large enterprises often have very complex environments to manage. There are internal users, internal services, and external service providers to manage. There are also customers and guest users to consider within Business-to-Business (B2B) relationships.
Federation services can be utilized to ensure robust, centralized authentication and access
control are addressed in these hybrid environments. To manage all these interactions with
information systems, the correct protocols must be chosen to make sure we have secure
authentication and authorization. Many modern environments require the use of an
additional factor as a single factor, such as just a user password, is known to be weak. In
this chapter, you will learn how to effectively select the appropriate solution.
In this chapter, we will cover the following topics:

• Credential management
• Identity federation
• Access control
• Authentication and authorization protocols
• Using Multi-Factor Authentication (MFA)
Credential management
Credential management is critical for an enterprise. We must consider the day-to-day
management of user credentials, including effective management of passwords. We
must ensure passwords are created, stored, and destroyed securely. They must always be
processed securely as well.
Tip
When you sign in to Google to access Gmail, Google Docs, and content on
YouTube, you will only need to sign in once with your Google account.
Password policies
Password policies need to be enabled to make sure that users will change their passwords
based on the organization's requirements. Passwords can be chosen to have a certain level
of complexity and/or meet a minimum character length. Password complexity is used to
enforce the use of different character classes (uppercase, lowercase, numbers, and special
characters: @~#!"£€%&). It is also important to have a password history so that users
cannot reuse old passwords. Guidance has changed over recent years. Instead of enforcing
super-complex passwords, it is accepted that Two-Factor Authentication (2FA) or MFA
is the preferred option to protect identities. A long passphrase is generally much more resistant to attack than a short complex password. To enforce password requirements, we could use
a policy such as the one shown in Figure 4.3:
Password complexity
Password requirements will often have a requirement for complexity. This will require at least
three from four character classes: uppercase, lowercase, numbers, and special characters.
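The following is a minimal sketch of such a check in Python, counting how many of the four character classes a candidate password uses (the length and class thresholds are illustrative only):

import re

def meets_complexity(password: str, min_length: int = 8, min_classes: int = 3) -> bool:
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    used = sum(1 for pattern in classes if re.search(pattern, password))
    return len(password) >= min_length and used >= min_classes

print(meets_complexity("Bertie26!"))  # True: four classes, nine characters
print(meets_complexity("bertie26"))   # False: only two character classes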
Password length
Eight characters is generally recommended as the minimum length of a password; if the minimum is set much higher than this, users tend to adopt bad practices, such as repeating a shorter password to meet the requirement, for example, Bertie26Bertie26.
Password history
Password history is important. Without password history, compromised accounts may be
reset to the original password, allowing the attackers to gain access to the account again.
Password auditing
Password auditing is important as it allows the organization to check for weak
passwords. Password checking can be done when users reset their passwords. We
could use a dictionary word list to generate hashes and check that the users are not
using those weak passwords. The National Cyber Security Center (NCSC) has listed
100,000 passwords taken from https://haveibeenpwned.com. The list can be found
at the following link: https://www.ncsc.gov.uk/static-assets/documents/PwnedPasswordsTop100k.txt. The first 20 words from the list are shown in
Figure 4.4:
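As a minimal sketch of this kind of audit, the downloaded list can be hashed and compared against the hash of a candidate password, so the check never handles stored plaintext passwords directly (the filename is an assumption based on the download above):

import hashlib

def load_banned_hashes(path: str = "PwnedPasswordsTop100k.txt") -> set:
    # Hash every banned password from the downloaded word list
    with open(path, encoding="utf-8") as wordlist:
        return {
            hashlib.sha1(line.strip().encode()).hexdigest()
            for line in wordlist
            if line.strip()
        }

def is_banned(candidate: str, banned_hashes: set) -> bool:
    return hashlib.sha1(candidate.encode()).hexdigest() in banned_hashes

banned = load_banned_hashes()
print(is_banned("123456", banned))  # True - near the top of the breached list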
Reversible encryption
Passwords should be stored in a secured format within a password database. They are
normally hashed prior to being written into the database. Reversible encryption allows passwords to be recovered in a plaintext format; it should be avoided and exists only to support some legacy applications (perhaps a remote access management solution). An example would be Microsoft Remote Access Service (RAS), which allows users to be authenticated using the Challenge Handshake Authentication Protocol (CHAP).
As well as being able to authenticate user accounts within our own sites, it is also
important to work with external entities.
Identity federation
Identity federation allows you to use your identity with a third party. Many organizations
will use services in the cloud such as software as a service. This allows the user to use SSO
in their own enterprise and when accessing these third-party applications. Users only need
to remember one identity. Typically, a token will be generated by an identity federation
service and passed securely to the third party. Microsoft provides a service called Active
Directory Federation Services (ADFS), which allows an authenticated enterprise user to
use their credentials on a third-party site. ADFS supports many of the standard protocols in use, including Security Assertion Markup Language (SAML).
Transitive trust
Transitive trust can be very useful within complex enterprise environments. When using
directory services, we can create security boundaries referred to as domains or Kerberos
realms. It is common to create these boundaries to separate account management
geographically. For example, a bicycle manufacturer has a Taiwanese engineering and
production plant in Taipei. They have sales, marketing, and distribution in Santa Cruz,
US, which is the head office, and the same business function in Morzine, France. Each
location has a separate domain, with the US being the root and Taiwan and France being
child domains. In Figure 4.5, we can see the trust relationships between the domains:
There are direct trusts between the root domain and the child domains. There is, however,
transitive trust between the two child domains. The relationship is similar to the concept
of an authenticated Internet Protocol Security (IPsec) tunnel. An administrator would
still need to provision access to resources, but there would be inherent trust between all
parts of the organization. The sales team could then be given read privileges to production
data from the manufacturing plant.
OpenID
Cloud-based Identity Providers (IdPs) such as Google, Facebook, and Twitter support
a standard called OpenID. This is often used to access third-party services (referred to as
the Relying Party (RP)) that support the standard. The current version is OpenID Connect,
administered by the OpenID Foundation, a non-profit entity.
Once an identity has been validated by the OpenID provider, a secure token is generated
and forwarded to the requesting service provider in the form of a JSON Web Token (JWT).
Figure 4.6 shows the OpenID SSO process:
Shibboleth
Shibboleth is a federated identity solution built on SAML. It is used mainly by educational
establishments such as colleges and universities. It allows for SSO by passing a token to the
service provider and requires an IdP.
Once we have authenticated accounts, we may give access to resources.
Access control
Access control governs authorization once a user has been authenticated. Requirements vary
widely: developers may need to share their code with one another, sensitive documents may
demand very strict access control, administrative role holders may need specific privileges
and rights, and there may be a need for fine-grained access based upon the location or
country of origin of the account holder. In the following section, we will investigate
these choices.
It is important to ensure strict adherence to a MAC framework: no write down and no read
up (the latter stops a user with a lower clearance level from seeing data they are not cleared for).
Diameter
Diameter is the successor to RADIUS. It builds on RADIUS but uses TCP (or SCTP) to
make the connection and the authentication more reliable. It has wide support from many
vendors and has been assigned TCP port 3868.
Kerberos
Kerberos is a secure SSO protocol that was developed by the Massachusetts Institute of
Technology (MIT) and was historically subject to US export restrictions on cryptography. It is
now supported by the Internet Engineering Task Force (IETF) and has been assigned
several RFCs defining the implementation (RFC 3961, 3962, 4120, and 4121). Kerberos
is in widespread use and is supported by many operating system vendors, including Red
Hat, Oracle, and IBM. It has been the standard for Microsoft Active Directory services
for over 20 years. Kerberos supports encrypted communication and anti-replay. Kerberos
is very time-dependent. It is important that the clocks are in synchronization; otherwise,
Kerberos authentication will fail. Figure 4.15 shows the Kerberos SSO mechanism. The
server hosting the Key Distribution Center (KDC) and Ticket Granting Service (TGS)
could be a Windows Domain Controller (DC) running Active Directory services:
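Because Kerberos tolerates only a small clock skew (Active Directory allows 5 minutes by default), a quick sanity check of local time against an NTP source can be useful. The following Python sketch assumes the third-party ntplib package is installed and that pool.ntp.org is reachable; it is an illustration, not a Kerberos tool:

import ntplib  # third-party package: pip install ntplib

def clock_skew_seconds(server="pool.ntp.org"):
    # response.offset is the estimated difference between local and server time
    response = ntplib.NTPClient().request(server, version=3)
    return response.offset

if __name__ == "__main__":
    skew = clock_skew_seconds()
    print(f"Clock skew: {skew:.2f}s (Kerberos default tolerance is 300s)")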
OAuth
OAuth allows for SSO. Users can sign in to their OpenID provider such as Facebook,
Google, or Twitter and they can access third-party services. Users do not need to
remember multiple credentials. It is used by many service providers, including Microsoft,
Tripadvisor, Hotels.com, and many merchant sites requiring payment authorization. There
are currently two versions of Open Authorization (OAuth): V1 and V2, with OAuth V2 being
the current standard. PayPal is a good example of a system we use every day. If you
make a payment on a merchant site and you choose to use PayPal, you will end up being
prompted to authorize that payment. A token will be securely generated by PayPal and
sent to the merchant site. This will authorize the transaction.
Figure 4.16 shows typical sites supporting SSO using OAuth:
Important Note
OAuth defines the flow of requests and responses to authorize a transaction,
while OpenID defines the role of the IdP.
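As a simplified sketch of one OAuth 2.0 flow, the client credentials grant (RFC 6749), the snippet below obtains an access token and presents it as a bearer token. The endpoint URLs, client ID, and secret are placeholders, and the third-party requests package is assumed to be installed:

import requests  # third-party package: pip install requests

TOKEN_URL = "https://auth.example.com/oauth2/token"   # placeholder authorization server

def get_access_token(client_id: str, client_secret: str) -> str:
    # The client authenticates and receives a bearer token; no user password
    # is ever shared with the resource server
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]

def call_api(token: str):
    return requests.get("https://api.example.com/v1/orders",   # placeholder API
                        headers={"Authorization": f"Bearer {token}"}, timeout=10)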
802.1X
802.1X is an IEEE standard that defines Port-Based Network Access Control (PNAC). It allows
devices or users to be authenticated to a network connection and is supported on switches,
remote-access VPNs, and WAPs. It can be easily configured with the use of Public Key
Infrastructure (PKI) certificates. Certificates can be assigned to devices such as computers,
mobile devices, IoT devices, and network printers. A wide variety of devices can be
connected securely to networks, removing the need for actual credentials to be typed in.
Tip
For the certification exam, it is important to recognize the appropriate
authentication protocols to provide both security and compatibility within
complex enterprise networks.
A single factor for authentication is always a risk. Compromise of a password may be all
that is needed for hackers to take control of an account. Having a second factor is the best
protection against this threat.
Tip
Two factors could be a password and smart card, a PIN and ATM card, or
biometrics and a physical token such as an RFID card. Factors from two different
types (Type I and Type III, for example) are required.
Two-step verification
Two-step verification is supported by many online IdPs. Think about when you log in to
Google from a new workstation you have never signed in from before. You would receive
a push notification in your Google app on your smartphone and need to respond:
In-band authentication
In-band authentication would mean doing all the necessary authentication checks using a
single channel, such as your internet browser connection to your bank. This would be seen
as inferior to other mechanisms. Reliance on in-band authentication makes the possibility
of Cross-Site Request Forgery (CSRF/XSRF) very real. Imagine you are connected to
your bank and have logged on with your user account and password. A malicious script
could authorize a payment to a criminal.
Figure 4.18 – RSA SecurID token ("File:RSA SecurID Token Old.jpg" by Alexander Klink is licensed
with CC BY 3.0: https://creativecommons.org/licenses/by/3.0)
Time synchronization with the authentication server is important. This type of MFA is
very popular: with more and more online digital identities to manage, you need to
ensure access to these online portals is secured. When you complete your tax return or
connect to your bank for electronic banking, you can secure access with a Time-based One-Time Password (TOTP).
Figure 4.19 shows typical TOTP prompts:
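As a rough illustration of how a TOTP value is derived (RFC 6238, implemented here with only the Python standard library; the shared secret is a made-up example and both parties must hold it and agree on the time):

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                  # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret; the server computes the same value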
JWT
JWT is based upon an open standard (RFC 7519). It is used for securely sending
information between parties where a trusted payload is important. An attestation
service validates a piece of information, which is very useful within a cloud
environment where the parties do not already trust one another directly. Microsoft, for
example, could be the attestation provider; it signs the requestor's information package,
and if the third party trusts Microsoft's attestation service, it can trust the JWT. This is a
similar concept to PKI: if you trust the root CA, then you automatically trust certificates
generated within that hierarchy. JWTs can be used for both attestation and identity proofing.
A common use of JWTs is with federation services such as OpenID and OAuth V2.
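A JWT is simply three Base64URL-encoded segments (header, payload, signature) separated by dots. The following standard-library sketch decodes the payload for inspection only; it deliberately does not verify the signature, which a real relying party must always do:

import base64, json

def decode_jwt_payload(token: str) -> dict:
    header_b64, payload_b64, signature_b64 = token.split(".")
    # Base64URL data must be padded to a multiple of 4 before decoding
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Example usage (the token variable is a placeholder for a real JWT string):
# claims = decode_jwt_payload(token)
# print(claims.get("iss"), claims.get("sub"), claims.get("exp"))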
Summary
In this chapter, we have studied the challenges large enterprises face when they must
support complex environments, managing internal users and their authentication
to external service providers. We have looked at the role of federation services in ensuring
robust authentication and access control in hybrid environments. We have looked at the
use of MFA, as a single factor is known to be weak. We have also studied the options for
requiring authentication to gain access to a network.
In this chapter, you have gained the following skills:
These core skills will be useful as we move into the next domain: security operations.
Questions
Here are a few questions to test your understanding of the chapter:
1. What is the container on a Windows operating system that allows the secure storage
of user credentials and passwords?
2. What security would be provided for the storage of passwords in a cloud repository?
Choose three.
A. Password policies
B. Privileged access management
C. Password complexity
D. Password auditing
5. What password policy will ensure a password cannot be reused? Choose two.
A. Password length
B. Password reuse
C. Password complexity
D. Password history
6. What password policy would most likely force Bill to change his password from
flowerpot to F10w€rPot?
A. Password length
B. Password reuse
C. Password complexity
D. Password history
7. What password policy will ensure Mary cannot spend her lunch break resetting her
password 24 times to make it the original password?
8. How can you detect the use of a poor password that may match dictionary words?
A. Password spraying
B. Password auditing
C. Password guessing
D. Password reset
A. Strong encryption
B. Reversible encryption
C. Forward encryption
D. Complexity
10. What is the term used when credentials can be used with a third party
utilizing SSO?
A. Identity proofing
B. Identity federation
C. Identity cloud
D. Identity trust
11. What XML federation service will most likely be used to access third-party cloud-
based corporate portals?
A. Shibboleth
B. SAML
C. OAuth
D. OpenID
12. Which federation service will most likely be used to access third-party cloud-based
digital services?
A. OAuth
B. SAML
C. Kerberos
D. LDAP
13. What access control will offer the most security for a government agency?
A. MAC
B. DAC
C. Role-based access control
D. Rule-based access control
14. What access control will offer the most flexibility for de-centralized administration?
A. MAC
B. DAC
C. Role-based access control
D. Rule-based access control
15. What access control will allow for access based upon country and department?
A. MAC
B. DAC
C. Role-based access control
D. Attribute-based access control
16. Which AAA service offers the widest support across vendor networking equipment?
A. RADIUS
B. TACACS+
C. Circumference
D. HP proprietary
18. What can I use to authenticate securely to directory services, preventing replay
and MITM attacks?
A. IPsec
B. Kerberos
C. CHAP
D. PAP
A. 802.11
B. 802.1X
C. 802.3
D. 802.1s
20. What is the framework that allows many different authentication protocols?
A. PAP
B. EAP
C. CHAP
D. PEAP
21. What will I need to support if users need to present an RFID card, iris scan,
and PIN?
A. MFA
B. 2FA
C. Two-step verification
D. In-band authentication
22. What is being used when my bank sends me a confirmation code via SMS?
A. In-band authentication
B. OOBA
C. Bandwidth
D. Out-of-bounds
A. Forgotten password
B. OTP
C. PIN
D. KBA question
24. What will I need to support if users need to present a password, memorable secret,
and PIN?
A. MFA
B. 2FA
C. Two-step verification
D. Single-factor authentication
A. HOTP
B. TOTP
C. Hardware root of trust
D. JWT
26. What is it called when I sign on to directory services and can use my internal email
without being prompted to sign in a second time?
A. SSO
B. JWT
C. Attestation and identity proofing
D. TPM
Answers
1. B
2. A, B and C
3. A
4. B
5. A and D
6. C
7. A
8. B
9. B
10. B
11. B
12. A
13. A
14. B
15. D
16. A
17. A
18. B
19. B
20. B
21. A
22. B
23. B
24. D
25. B
26. A
Section 2:
Security Operations
In this section, you will learn about the many threats that exist for an enterprise, threats
that are often highly sophisticated and may be sponsored by state actors. You will learn
about the tools and techniques that will enable an enterprise to detect and respond to
these threats.
This part of the book comprises the following chapters:
• Intelligence types
• Actor types
• Threat actor properties
Intelligence types
Gathering threat intelligence is important as this will allow security professionals to be
proactive and equipped to meet the challenges of cyberattacks. We will look at tactical,
strategic, and operational intelligence. From a practical point of view, we also need to
understand what tools are available to gather this knowledge.
Tactical intelligence
Tactical threat intelligence gathering would be performed primarily by security experts
and analysts. It is primarily focused on short-term objectives. This would be the job of
the Security Operations Center (SOC) staff who would analyze feeds from multiple
security tools, including Security Information and Event Management (SIEM) systems.
To gather tactical intelligence, we can use threat feeds, both open source and closed source/
proprietary, depending upon the business. Tactical intelligence will use real-time
events and technical analysis to understand adversaries and their tools and tactics. We
need the latest tools to identify new and zero-day exploits (threats that have no known
patches or mitigation). The shorter-term goals will be to identify current threats and
emerging threats using technology and automation.
Strategic intelligence
Strategic threat intelligence should be conveyed to senior managers and leaders within
an enterprise. It is based on long-term goals – for example, what are the threats to our
industry? Or who are the primary threat actors targeting our industry? An organization
would collect historical data and use this to look at trends. You could say strategic
intelligence is more about Who and Why. A military defense contractor should know that
China may target them to steal their intellectual property (IP). Based on this assumption,
we must address this risk with the appropriate countermeasures.
Operational intelligence
Operational threat intelligence gathering is often performed by forensics or incident first
responders. We need to understand an adversary's techniques and methodologies. We
can search through previous data logs looking for signatures of activity from known threat
actors. We would use this technique to discover previous attacks that were not detected.
If you are searching for evidence of nation-state threat actors, you might look at one of the
known documented groups, such as APT30 (indicators strongly suggest they are sponsored at
a government level), whose mode of operation is well documented. Previous attacks by this
group have used spear phishing as the first step.
See the following link for more intelligence on APT30 activities:
https://attack.mitre.org/versions/v9/groups/G0013/
Commodity malware
One common form of attack is to use commodity malware. This is malware that is
cross-platform and has multiple purposes. A good example of commodity malware would
be a remote access trojan. This could be used against many different operating systems
and the end goal can be very different. Effective defenses include patching systems.
Targeted attacks
Targeted attacks may use very specific tools. Examples of targeted attacks include
the following:
Actor types
Organizations must consider threats from many different sources, from the unsophisticated
script kiddie to government-sponsored attacks involving nation-state actors. We must also
consider attacks from cybercriminals – at present, organized crime is the fastest-growing
adversary. In over half of reported cases, major information system breaches can be
attributed to insiders. We will highlight these threat actors in the following section.
Insider threat
Insider threats are one of the biggest reasons for security breaches. It is estimated that
more than 50% of breaches are attributable to insiders. It could be accidental or it could be
a user who has a grudge against the organization (such as a disgruntled employee).
Competitor
A competitor is an entity that works in the same field as your organization. You might
be a manufacturer of a very specific and valuable item. Your IP and design plans are
therefore very important. There is evidence that foreign powers have sponsored attacks
that have actually led to data breaches that allowed those nations to enhance their
military manufacturing at the expense of United States defense corporations. There are also
many instances where commercial enterprises have resorted to underhand tactics to
gain access to intellectual property (IP).
Some best practices to mitigate these threats would include the following:
Hacktivist
The hacktivist has a political goal. They may have a grudge against banks, governments,
and other high-profile targets. Environmentalists may target oil and gas companies.
A good example is a group called Anonymous. They have launched many successful
attacks over the years against big businesses, including Sony, PayPal, VISA, and
Mastercard. Recently, the group targeted tech billionaire Elon Musk in retaliation for
his activity concerning cryptocurrency. See the following link for more information:
https://tinyurl.com/anonymoustarget
Script kiddie
Script kiddies are often untrained and lack sophistication. However, they can still cause a
lot of damage. They download tools and scripts to attack your organization. A goal may be
just to vandalize a site or to prove they can gain access. They often do it for the rush and
the high of being a hacker.
Organized crime
Organized crime cybercriminals are a big threat. Their goal is to gain access to finances.
They are often sophisticated – they can attack individuals and they can attack big
businesses using many different techniques, including phishing, spear-phishing, and
pharming sites. The goal is generally to steal money or your valuable information and
they will sell the information on the dark web to the highest bidder. Cybercriminals and
organized crime actors also launch attacks against your organization using ransomware
(this is currently a major source of income for organized crime). Security professionals are
now referring to these groups as Big Game Hunters (BGH).
Individuals may also be targeted through ransomware, where the victim believes they have
committed a crime and are willing to pay to avoid further action.
To further understand threats, we need to understand why they are targeting our
organization and how.
Resources
The resources available to a threat actor can make a big difference to the effectiveness of
the attack. Government-sponsored threat actors working in large teams will have lots
of available resources, including sophisticated hardware and software tools. They have
access to money, time, skilled people, intelligence, and so on. They can deploy personnel
physically to perform reconnaissance missions. They can also access intelligence gathered
by other government agencies.
Time
Nation-states or organized crime threat actors are full-time professional hackers (it's their
job); they are not doing this as a hobby. Another consideration is how long they have
access to your systems. APTs may be in place for months or years without
an organization's knowledge. This means the amount of information that may have been
stolen may also be difficult to assess.
Money
How well funded are the attackers? In many cases, very well funded. The potential gains
for cybercriminals are enormous. Ransomware alone is believed to have accounted for
$20 billion in damages globally in 2020. The FBI recorded reported cybercrime losses in
excess of $4.2 billion for 2020, and the global cost of cybercrime is predicted to grow to over
$10 trillion annually by 2025, according to reports in this Cybercrime Magazine article: https://tinyurl.com/
cybercrimegrowth
In 2020, SolarWinds was attacked by hackers who were able to plant malware into
updates destined for SolarWinds customers. There are over 33,000 installed instances
of SolarWinds Orion, and around 18,000 customers installed the compromised update. The
customers included Fortune 500 companies, critical infrastructure-related businesses, and
government agencies. Hackers were able to access these systems through a backdoor for
over nine months before the breach was discovered. By putting our trust in third parties,
we run the risk of creating vulnerabilities. For more details, see the following link:
https://www.bbc.co.uk/news/technology-55321643
Identifying techniques
Identifying attacks has become very challenging and we must use many tools and
techniques, such as advanced analytics, machine learning (ML), and artificial
intelligence (AI). These are just some of the tools that can help us to detect these types
of attacks.
It is vitally important for security professionals to understand the tools and techniques
that enable an organization to be well prepared for future attacks.
Intelligence feeds
There are many sources of intelligence that can be displayed in a dashboard format. We
might want to see threat maps from a live feed. A good example would be showing the
current activity globally for things like botnets and command and control (C2) servers.
There is a good example hosted by spamhaus.com, which can be found at the following
link: https://www.spamhaus.com/threat-map/. There are many good open
source options and many commercial (paid for) options from security organizations such
as Fortinet, FireEye, Symantec, and many others. In the United Kingdom, the National
Cyber Security Centre (NCSC) offers a free threat feed, but only to affiliated government
departments.
A good example of an all-round cyber threat feed can be found at the following link:
https://otx.alienvault.com/preview
See Figure 5.1 for an example threat feed:
While general cybersecurity threat feeds are very useful in understanding the current
threat landscape, more specific feeds are available for different industries, including
critical infrastructure. Figure 5.2 shows options available for different business sectors:
Deep web
The deep web, often referred to interchangeably with the dark web, is an alternative to the
regular internet (also sometimes referred to as the surface web). There is no content indexing
on the deep web. It is used in countries where personal freedoms are restricted and censorship
is in place, but a significant proportion of its content is generated by criminal activity.
Criminals will sell stolen credentials and zero-day exploits to the highest bidders. It is
important for law enforcement and cybersecurity professionals to remain aware of the
activities and data that criminals are sharing on the deep web. To access sites on the dark
web, you will need to be given the URL, as it will not be indexed or searchable, and you will
need to install a browser such as the Tor Browser. Figure 5.3 shows the Tor Browser, which
looks similar to many other regular web browsers:
Proprietary intelligence
Proprietary threat intelligence will only be made available to subscribers. Good examples
of this would be certain government threat intelligence that will only be available to
certain government agencies. Law enforcement may share information only with certain
agencies or related law enforcement departments. Military and classified government
intelligence will not be shared and can be considered proprietary.
Human intelligence
Human intelligence (HUMINT) is intelligence gathered by real people (not machines).
This means putting boots on the ground, in a military sense. This could involve physical
reconnaissance of a site of interest (perhaps a factory or military site). This skill set will
require a certain amount of tradecraft by the intelligence officer. They will be able to read
signs and body language, and they will use profiling to understand the motivations and
goals of attackers.
To help security professionals understand how to use threat intelligence, we can use one of
the industry frameworks.
Frameworks
An attack framework is very useful to allow us to understand the tools, the tactics, and
the techniques attackers will use to launch successful attacks. Security professionals must
be able to understand how they plan their attacks right from the initial reconnaissance
through to completion of the attack. We will cover some of the industry-accepted
approaches in this section.
Figure 5.4 – MITRE Att&ck framework (© 2021 The MITRE Corporation. This work is reproduced and
distributed with the permission of The MITRE Corporation)
In the Enterprise model, there are 14 tactic headings that give clear guidance on attack
vectors and their impact. The matrix can be found at the following link:
https://attack.mitre.org/versions/v9/matrices/enterprise/
Threat hunting
Threat hunting can be used within the context of operational intelligence. We need to
adopt one of the threat frameworks and search for Indicators of Compromise (IOC)
with a much better understanding of our adversaries' tools and methods. Using this
methodology, we are better able to detect APTs.
Threat emulation
Threat emulation allows security professionals to fine-tune their approach to threats
using a variety of Tools, Techniques, and Procedures (TTPs). Team security exercises
could be one approach, with the red team using the adversary TTP and the blue team
defending the network.
To respond to a threat, we need to recognize what event or series of events is
actually threatening.
Indicators of compromise
Many events take place on a busy network, and they can be recorded in logs. An unusual
event, or an event coupled with another event that itself appears unusual, could be an
indicator of compromise (IOC). An example of an IOC could be several unsuccessful attempts
to connect using SSH to a core network appliance, followed by a successful authentication
attempt from an unusual or gray-listed IP address. It is important for the security operations
center to be able to identify attacks or threats. To identify IOCs, we need inputs in the form
of logs and captured network traffic.
Packet capture
Packet capture (PCAP) files use a standard format for recording network traffic. They allow
us to capture real-time data for later analysis using Wireshark, tcpdump, or TShark. See the
Wireshark capture in Figure 5.7:
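Captured traffic can also be summarized programmatically. The following sketch assumes the third-party scapy package is installed and that a capture file named capture.pcap exists locally:

from collections import Counter
from scapy.all import rdpcap, IP   # third-party package: pip install scapy

packets = rdpcap("capture.pcap")                 # assumed local capture file
talkers = Counter(pkt[IP].src for pkt in packets if pkt.haslayer(IP))

for src, count in talkers.most_common(5):        # top source IPs in the capture
    print(src, count)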
Logs
Many different appliances and services will log activities, from applications and services
running on a desktop to server-based services. All of this log data can be useful in seeing
the big picture. A standard for Linux/Unix-based operating systems is syslog, which allows
facility codes to be assigned to events. For example, email events are tagged with the mail
facility code of 2. See Figure 5.8 for a full list. This is fully
documented in RFC 5424. Full documentation can be found at the following URL:
https://datatracker.ietf.org/doc/html/rfc5424
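For example, the Python standard library can emit a syslog message with an explicit facility; the local syslog daemon listening on /dev/log is an assumption about the host (the address differs on other platforms):

import logging
from logging.handlers import SysLogHandler

# facility=LOG_MAIL corresponds to facility code 2 in RFC 5424
handler = SysLogHandler(address="/dev/log", facility=SysLogHandler.LOG_MAIL)
logger = logging.getLogger("mail-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.warning("Suspicious relay attempt blocked")   # logged under the mail facility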
Network logs
Network logs allow us to capture network traffic activity on switches, routers, firewalls,
intrusion detection systems, and many more network appliances. Figure 5.9 shows an
example of a firewall log:
Vulnerability logs
Vulnerability logs will be created by vulnerability scanning tools such as Nessus,
OpenVAS, and Nmap. It can be useful to consolidate all of this collected data and present it
in a single pane of glass. Vulnerability logs can be forwarded to a SIEM for
automation and alerts. Figure 5.10 shows a log from a vulnerability scan:
Access logs
Access logs provide an audit trail when accountability must be established. A security
audit may be conducted to establish who accessed an information system through remote
access services or who changed permissions on a user's mailbox. We may want to record
physical access to a location, such as who gained physical access to the data center at the
time when a security breach occurred. Access logs will be important for accountability
and may be required to meet regulatory compliance.
NetFlow logs
NetFlow logs are forwarded by network switches, routers, and other network appliances.
Logs are gathered by a collector and can be used to baseline the network and visualize the
flows and data types. Figure 5.12 shows a Cisco NetFlow log:
Notifications
Notifications can be sent to an operator console. This could be in the form of an alert or
an email message indicating that a threshold value has been hit or that a particular event
needs an administrator's attention.
SIEM alerts
SIEM alerts consolidate events from many sources of log data. We can then apply smart
analytics, machine learning, and behavioral analysis to the logged data. A SIEM allows for
automated alerts on unusual activity or suspicious events.
Antivirus alerts
Antivirus systems are in place to prevent the proliferation of malicious code and programs.
Even if the antivirus is successful, we should still log and monitor events to understand the
level of this activity. Ideally, this is done from centralized reporting dashboards.
Responses
When we have many security tools to guard our networks, it is important that we use
automation to identify events and, where possible, to provide an automated response.
This frees up security operations center (SOC) staff to take on other useful security
tasks. Security Orchestration, Automation, and Response (SOAR) is
a modern approach to respond to real-time threats and alerts. Playbooks can be used
to create or provide an automated response. A playbook could be a simple ruleset or a
complex set of actions.
Firewall rules
Firewall rules are important to have in place, as they protect your perimeter network
services placed in your Demilitarized Zone (DMZ). Firewall rules will also protect your
internal resources when deployed as host-based firewalls. Firewall rules may be modified
based upon a changing threat landscape and new adversaries. Bad actor IP address ranges
may need to be updated on your firewalls.
Signature rules
Signature rules are normally configured based on what is known, such as a definition file
that can be updated as new threats are detected and evolve. It is important that signatures
are kept up to date, as an antivirus engine is heavily dependent on its signatures
and updates.
A signature could be a hash match for a known malicious file. For example, the European
Institute for Computer Antivirus Research (EICAR) test file was developed to allow
security personnel to check whether antivirus software is functional. It uses a well-known
string of characters; the SHA-1 hash of the test file is shown here:
SHA1 3395856ce81f2b7382dee72602f798b642f14140
Important Note
If you try to create the EICAR string and save it as a file, your anti-malware will
likely quarantine the file.
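As a minimal sketch of how a hash-based signature check works (the file path is a placeholder, and real antivirus engines match far richer signatures than a single file hash):

import hashlib

KNOWN_BAD_SHA1 = {
    "3395856ce81f2b7382dee72602f798b642f14140",   # EICAR test file
}

def sha1_of_file(path: str) -> str:
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):   # hash the file in chunks
            h.update(chunk)
    return h.hexdigest()

def is_known_bad(path: str) -> bool:
    return sha1_of_file(path) in KNOWN_BAD_SHA1

# Example: is_known_bad("downloads/invoice.pdf")   # placeholder path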
Behavior rules
Behavior rules are critically important given the amount of valuable data stored within
a modern enterprise. We need systems that detect anomalies based on behavioral
analysis. Such a system uses AI and ML to build rulesets for normal activity. In
the event of an anomaly, for example, a user beginning to transfer a large amount of
data to a cloud-based data repository, or a user beginning to delete an unusual number
of files within a certain amount of time, the system generates alerts and notifies
an administrator. This is typically done by modeling normal user behavior and then
comparing abnormal behavior against that baseline.
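As a deliberately simple sketch of the idea (real products derive baselines with ML rather than a fixed threshold, and the event format used here is invented for illustration):

from collections import defaultdict

DELETE_THRESHOLD = 100          # deletions per user per hour treated as anomalous

def detect_mass_deletion(events):
    # events: iterable of (user, action, hour_bucket) tuples -- an invented format
    counts = defaultdict(int)
    for user, action, hour in events:
        if action == "delete":
            counts[(user, hour)] += 1
    # Each returned (user, hour) pair would raise an alert for an administrator
    return [key for key, count in counts.items() if count > DELETE_THRESHOLD]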
Scripts/regular expressions
Automation could include scripting. Scripts can be created to search for keywords
using regular expressions, providing protection by scanning captured log data
or searching incoming requests to your web server. Data Loss Prevention (DLP) is a good
example of searching outgoing network traffic. We can formulate rules based on these search patterns.
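A short sketch of regular-expression scanning over log lines (the patterns are crude examples and the log file name is an assumption):

import re

PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # crude payment-card pattern
    "failed_ssh":  re.compile(r"Failed password for \S+ from (\S+)"),
}

def scan_log(path="auth.log"):
    hits = []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    hits.append((lineno, name, line.strip()))
    return hits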
It is vitally important that an organization has identified potential IOCs and has put
mitigation in place to respond effectively to these attacks.
Summary
In this chapter, we have looked at the varied tools and techniques that would be used
within an enterprise SOC. A security professional will need to identify different types of
threats and be able to select the correct approach and framework. We have covered the
main industry approaches. We have examined how an organization can identify IOCs and
how to respond to a variety of threats.
In this chapter, you have gained the following skills:
The knowledge gained will be very useful as we look into vulnerability management and
penetration testing in the next chapter.
Questions
Here are a few questions to test your understanding of the chapter:
1. Which of the following intelligence types focuses on the threat actor and the reason
for the attack?
A. Tactical
B. Strategic
C. Targeted
D. Operational
A. Tactical
B. Strategic
C. Commodity malware
D. Targeted attacks
3. What type of attack would use spear-phishing against engineers in the Ukraine
electricity supply industry with the goal of gaining user credentials?
A. Deep web
B. Proprietary
C. Commodity malware
D. Targeted attacks
4. Which of the following intelligence types focuses on the technical and automated
discovery of everyday threats, threat actors, and the reason for the attack?
A. Tactical
B. Strategic
C. Commodity malware
D. Targeted attacks
5. Which of the following intelligence types uses forensics and historical logs to
identify threats?
A. Tactical
B. Strategic
C. Commodity malware
D. Operational threat intelligence
A. Threat emulation
B. Threat hunting
C. Diamond model
D. STIX
7. What is the most likely threat actor if your router firmware has been tampered with
over a period of two years, without being detected?
8. What is the most likely threat actor if your electrical power delivery capabilities
are attacked?
A. Nation-state
B. Insider threat
C. Hacktivist
D. Script kiddie
9. What threat actor will most likely steal your intellectual property?
10. What is the threat when vulnerabilities are present on your network due to
misconfiguration by poorly trained technicians?
11. What is the threat when vulnerabilities are present due to the use of third-party
libraries in our code base?
12. What is the likely threat actor when thousands of systems are targeted with crypto
malware followed up with a demand for $5,000 in bitcoin?
13. What is the public network that hosts unindexed and unsearchable content that may
be used for unlawful activities?
14. What type of intelligence gathering would involve DNS record harvesting?
A. Intelligence feeds
B. Deep web
C. Open source intelligence (OSINT)
D. Human intelligence (HUMINT)
A. Intelligence feeds
B. Deep Web
C. Open source intelligence (OSINT)
D. Human intelligence (HUMINT)
16. What framework would be the best choice to build up a picture of threat actors and
their tactics and techniques for a water treatment plant?
A. MITRE ATT&CK
B. ATT&CK for industrial control system (ICS)
C. Diamond model of intrusion analysis
D. Cyber kill chain
17. What framework would be used to understand the capabilities of APT29 and how
they will target your enterprise information systems?
A. MITRE ATT&CK
B. ATT&CK for industrial control system (ICS)
C. Scripts/regular expressions
D. Security Requirements Traceability Matrix (SRTM)
18. What framework uses seven stages, starting with reconnaissance and ending in
actions on objectives?
A. MITRE ATT&CK
B. ATT&CK for industrial control system (ICS)
C. Diamond model of intrusion analysis
D. Cyber kill chain
19. What file type will allow for the analysis of network traffic captured by Wireshark
or tcpdump?
20. What can be used to centrally correlate events from multiple sources and
raise alerts?
A. FIM alerts
B. SIEM alerts
C. DLP alerts
D. IDS/IPS alerts
A. Vulnerability logs
B. Operating system logs
C. Access logs
D. NetFlow logs
22. What type of logging can identify the source of most noise on a network?
A. Vulnerability logs
B. Operating system logs
C. Access logs
D. NetFlow logs
23. How will I know if my critical files have been tampered with?
A. FIM alerts
B. SIEM alerts
C. DLP alerts
D. IDS/IPS alerts
24. George has tried to email his company credit card details to his Gmail account. The
security team has contacted him and reminded him this is not acceptable use. How
were they informed?
A. FIM alerts
B. SIEM alerts
C. DLP alerts
D. IDS/IPS alerts
25. An attacker has had their session reset after they successfully logged on to the
Private Branch Exchange (PBX) after three unsuccessful attempts using SSH. What
is the reason for this?
A. FIM alerts
B. Firewall alerts
C. DLP rules
D. IPS rules
26. The Acme corporation needs to block the exfiltration of United States medical-related
data due to a new regulatory requirement. What is most likely going to get updated?
A. ACL rules
B. Signature rules
C. Behavior rules
D. DLP rules
27. Bill the network technician has been tasked with updating security based upon a
threat exchange update. Five known bad actor IP addresses must be blocked. What
should be updated?
A. Firewall rules
B. Signature rules
C. Behavior rules
D. DLP rules
A. Signature rules
B. Behavior rules
C. Firewall rules
D. Regular expressions
29. What type of rule will alert administrators that Colin is deleting significant amounts
of sensitive company data?
A. Signature rules
B. Behavior rules
C. Firewall rules
D. Regular expressions
30. What will alert the SOC team to IOCs detected in logs of multiple network
appliances?
A. SIEM alerts
B. Behavior alerts
C. DLP alerts
D. Syslogs
31. What type of rule will alert administrators about a known malware variant that has
the following checksum:
sha1 checksum 29386154B7F99B05A23DC9D04421AC8B0534CBE1?
A. ACL rules
B. Signature rules
C. Behavior rules
D. DLP rules
32. Charles notices several endpoints have been infected by a recently discovered
malware variant. What has allowed Charles to receive this information?
A. SIEM alerts
B. Antivirus alerts
C. DLP alerts
D. Syslogs
Answers
1. A
2. C
3. D
4. A
5. D
6. C
7. A
8. A
9. B
10. B
11. B
12. D
13. C
14. C
15. D
16. B
17. A
18. D
19. A
20. B
21. C
22. D
23. A
24. C
25. D
26. D
27. A
28. D
29. B
30. A
31. B
32. B
6
Vulnerability Assessment and Penetration Testing Methods and Tools
Security professionals must constantly assess the security posture of operating systems,
networks, industrial control systems, end user devices, and user behaviors (to name but
a few). A key way to do this is by utilizing vulnerability scanning. We should use
industry-standard tools and protocols, to ensure compatibility
across the enterprise. Security professionals should be aware of information sources where
current threats and vulnerabilities are published. We may need independent verification
of our security posture; this will involve enlisting third parties to assess our systems.
Independent audits may be required for regulatory, legal, or industry compliance.
In this chapter, we will cover the following topics:
• Vulnerability scans
• Security Content Automation Protocol (SCAP)
• Information sources
• Testing methods
• Penetration testing
• Security tools
Vulnerability scans
Vulnerability scans are important as they allow security professionals to understand when
systems are lacking important security configuration or are missing important patches.
Vulnerability scanning will be done by security professionals in the enterprise to discover
systems that require remediation. Vulnerability scanning may also be performed by
malicious actors who wish to discover systems that are missing critical security patches
and configuration. They will then attempt to exploit these weaknesses.
Agent-based/server-based
There are some vulnerability assessment solutions that require an agent to be installed
on the end devices. With this approach, the scan is performed locally on the end device and
the results are pushed back to the server-based management console. There are also
solutions where no agents are deployed
(agentless scanning). This means the server must push out requests to gather information
from the end devices. In most cases, agent-based assessments will take some of the
workload off the server and potentially take some traffic off the network.
Criticality ranking
When we run a vulnerability assessment and look at the output report, it is important
that the report is presented in meaningful language that allows us to assess the most critical
vulnerabilities and respond accordingly. Figure 6.1 shows two vulnerability
scans. In the first instance, there are highly critical vulnerabilities related to a Wireshark
installation. The report indicates they can be remediated with a software update. The
second scan shows these vulnerabilities have been remediated:
Important Note
XCCDF does not contain commands to perform a scan but relies on the Open
Vulnerability and Assessment Language (OVAL) to process the requirements.
Important Note
CVE-2012-1516 documents a VM escape vulnerability, which could severely
impact a modern data center. This would be a top priority to remediate.
Patch management
Patch management is critical in an enterprise. End devices, servers, and appliances all
require constant updating. We have operating systems running services and applications
and embedded firmware. Patches offer functionality improvements on the system and,
more importantly, security mitigation. Vulnerabilities in a system can ultimately cause
availability issues to operations. Patches are first tested by the vendor; they can then
be made available to the customer. Occasionally, patches can break working systems,
and then the benefit of the patch is largely outweighed by the need to have the system
functioning properly. Therefore, an enterprise should thoroughly test patches in a
non-production environment before deploying them into a production environment.
There are many other sources for security-related information.
Information sources
While it is important to use automation where possible to produce reports and generate
reporting dashboards, it is also important to consider alternative resources to maintain
a positive security posture. These resources will provide additional vendor-specific
information and industry-related information.
Advisories
It is important for an organization to monitor security advisories; this is information
published by vendors or third parties. The information is security-related and will be
posted as an update when a new vulnerability is discovered, such as a zero-day exploit.
This would be an important mechanism, allowing security professionals to be informed
about new threats. We can use advisories to access advice and mitigation for new and
existing threats. Some example site URLs are listed as follows:
• Cisco: https://tools.cisco.com/security/center/publicationListing.x
• VMware: https://www.vmware.com/security/advisories.html
• Microsoft: https://www.microsoft.com/en-us/msrc/technical-security-notifications?rtc=1
• Hewlett Packard: https://www.hpe.com/us/en/services/security-vulnerability.html
Tip
This would be a good choice for recent vulnerabilities that may not have a CVE
assigned to them, such as a zero-day exploit.
Bulletins
Bulletins are another way for vendors to publish security-related issues and vulnerabilities
that may affect their customers. Many vendors support subscriptions through a Really
Simple Syndication (RSS) feed. Examples of vendor security bulletins are listed as follows:
Vendor websites
Vendor websites can be useful resources for security-related information. In addition
to the previously mentioned (advisories and bulletins), additional security-related
information, tools, whitepapers, and configuration guides may be available. Microsoft is
one of many vendors who offer their customers access to extensive security guidance and
best practices: https://docs.microsoft.com/en-us/security/.
News reports
News reports can be a useful resource to heighten people's awareness of current
threat levels and threat actors. In May/June 2021, there were prominent news reports
about recent outbreaks of crypto-malware and ransomware. The following cases made
headline news:
• 14 May 2021: The US fuel pipeline Colonial paid criminals $5 million to regain
access to systems and resume fuel deliveries. See details at the following link:
https://tinyurl.com/colonialransomware.
• 10 June 2021: JBS, the world's largest meat processing company, paid out $11
million in bitcoin to regain control of their IT services. Follow the story at this link:
https://tinyurl.com/jcbransomware.
It is important to be aware that ransomware is still one of the biggest threats facing an
organization's information systems.
Testing methods
There are various methods to search for vulnerabilities within an enterprise, depending
on the scope of the assignment. Vulnerability assessments are performed by both security
professionals, searching for vulnerabilities, and attackers threatening our networks
(searching for the same vulnerabilities).
Static analysis
Static analysis is generally used against source code or uncompiled program code. Because it
requires access to the source code, it is more difficult for an attacker to perform.
During a penetration test, the tester would be given the source code to carry out this
type of analysis. Static Application Security Testing (SAST) is an important process to
mitigate the risks of vulnerable code.
Dynamic analysis
Dynamic analysis can be done against systems that are operating. For software, this means
the code is already compiled and running, and we assess it using dynamic testing tools.
Side-channel analysis
Side-channel analysis is targeted against measurable outputs. It is not an attack against the
code itself or the actual encryption technology. It requires the monitoring of signals, such
as CPU cycles, power spikes, and noise going across a network. The electromagnetic signals
can be analyzed using powerful computing technologies and artificial intelligence. Using
these techniques, it may be possible to generate the encryption key that is being used in
the transmission. An early example of attacks using side-channel analysis was in the 1980s,
when IBM equipment was targeted by the Soviet Union. Listening devices were placed
inside electronic typewriters. The goal seems to have been to monitor the electrical noise as
the carriage struck the paper, with each character having a particular distinctive signature.
See the following URL for more information: https://tinyurl.com/ibmgolfball.
Reverse engineering
Reverse engineering may be necessary when we are attempting to understand what a
system or piece of code is doing. If we do not have the original source code, it is possible
to run an executable in a sandbox environment and monitor its actions. Some code can be
decompiled; an example would be Java. There are many available online tools to reverse-
engineer Java code. One example is https://devtoolzone.com/decompiler/
java. Other coding languages do not readily support decompiling.
If the code is embedded on a chip, then we may need to monitor its activities in a sandbox
environment, monitoring inputs and outputs on the network and compiling logs of activity.
Fuzz testing
Fuzz testing will be used to ensure input validation and error handling are tested on the
compiled software. Fuzzing will send pseudo-random inputs to the application, in an
attempt to create errors within the running code. Using a fuzzer requires a certain amount
of expertise, both to program it and to interpret the errors that may be generated. It is common now
to incorporate automated fuzz testing into the DevOps pipeline. Google uses a tool called
ClusterFuzz to ensure automated testing of its infrastructure is maintained and errors
are logged.
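A toy fuzzer sketch against a local parsing function is shown below. The parse_record function is a hypothetical stand-in for whatever code is under test; real fuzzers such as ClusterFuzz are coverage-guided and far more sophisticated:

import random, string

def random_input(max_len=64):
    alphabet = string.printable
    return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

def fuzz(target, iterations=10_000):
    crashes = []
    for _ in range(iterations):
        data = random_input()
        try:
            target(data)                     # the function under test
        except Exception as exc:             # any unhandled error is recorded for triage
            crashes.append((data, repr(exc)))
    return crashes

# Example: crashes = fuzz(parse_record)   # parse_record is a hypothetical target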
While it is vitally important to perform regular vulnerability assessments against
enterprise networks and systems, it will be important to obtain independent verification of
the security posture of networks, systems, software, and users.
Penetration testing
Penetration testing can be performed by in-house teams, but regulatory compliance may
dictate that independent verification is obtained. It is important you choose a pen testing
team that is qualified and trustworthy. The National Cyber Security Centre (NCSC)
recommends that United Kingdom government agencies choose a penetration tester that
holds one of the following accreditations:
• CREST: https://www.crest-approved.org/
• Tigerscheme: https://www.tigerscheme.org/
• Cyber Scheme: https://www.thecyberscheme.org/
1. Pre-engagement interactions
2. Intelligence gathering
3. Threat modeling
4. Vulnerability analysis
5. Exploitation
6. Post-exploitation
7. Reporting
The first step is pre-engagement, where discussions need to be held between the
penetration testing representative and the customer representative.
Requirements
When an organization is preparing for security testing, for the report to have value, it
is important to understand the exact reasons for the engagement. Are we looking to
prepare for regulatory compliance? Perhaps we are testing the security awareness of
our employees?
Box testing
When considering the requirements of the assessment, it is worth considering how
realistic the assessment will be in terms of simulating an external attack from a highly
skilled and motivated adversary.
White box
White box testing is when the attackers have access to all information that is relevant
to the engagement. So, if the penetration testers are testing a web application and the
customer wants quick results, they would provide the source code and the system design
documentation. This is also known as a full-knowledge test.
Gray box
Gray box testing sits between white and black and may be a good choice when we don't
have an abundance of time. We could eliminate reconnaissance and footprinting by
making physical and logical network diagrams available. This is also known as a partial-
knowledge test.
Black box
In black box testing, the testers would be given no information, apart from what is publicly
available. This is also known as a zero-knowledge test. The customer's goal may be to
gain a real insight into how secure the company is with a test that simulates a real-world
external attacker.
Scope of work
When preparing for penetration testing, it is important to create a scope of work. This
should involve stakeholders in the organization, IT professionals managing the target
systems, and representation from the penetration testers. This is all about what will be
tested. Some of the issues to be discussed would include the following:
• Data handling: Will the team need to actually access sensitive documents (PHI, PII,
or IP), or simply prove that it is possible?
• Physical testing: Are we testing physical access?
• What level of access will be required? What permissions are needed?
Rules of engagement
It is important to set clear rules about how the testing will proceed. While the scope is
all about what is included in the testing, this is more about the process to deliver the test
results. Here are some of the typical issues covered within the rules of engagement:
• Timeline: A clear understanding of when and how long the entire process will take
is important.
• Locations: The customer may have multiple physical locations; do we need to gain
physical access to all locations, or can we perform remote testing? Is the testing
impacted by laws within a specific country?
• Evidence handling: Ensure all customer data is secured.
• Status meetings: Regular meetings to keep the customer informed of progress are
important.
• Time of day of testing: It is important to clarify when testing will take place. The
customer may want to minimize disruption during regular business hours.
• Invasive versus non-invasive testing: How much disruption is the customer
prepared to accept?
• Permission to test: Do not proceed until the documentation is signed off. This may
also be necessary when dealing with cloud providers.
• Policy: Will the testing violate any corporate or third-party policies?
• Legal: Is the type of testing proposed legal within the country?
Once the rules of engagement are agreed upon and documented, the test can begin. If the
rules of engagement are not properly signed off, there could be legal ramifications as you
could be considered to be hacking a system.
Post-exploitation
Post-exploitation will involve gathering as much information as possible from
compromised systems. This phase will also involve persistence. It is important to adhere
to the rules of engagement already defined with the customer. Common activities
could include privilege escalation, data exfiltration, and Denial of Service (DoS). For a
comprehensive list of activities, follow this URL: http://www.pentest-standard.
org/index.php/Post_Exploitation#Purpose.
Persistence
Persistence is the goal of long-term attacks; for instance, Advanced Persistent
Threats (APTs) are long-term compromises of an organization's network. MITRE
lists 19 techniques used for persistence in the ATT&CK Matrix for Enterprise. The full
list can be found at the following link: https://attack.mitre.org/tactics/
TA0003/. Here are a few of the techniques that are listed:
• Account Manipulation
• Background Intelligent Transfer Service (BITS) Jobs
• Boot or Logon AutoStart Execution
• Boot or Logon Initialization Scripts
• Browser Extensions
• Compromise Client Software Binary
• Create Accounts
Pivoting
Pivoting is when an attacker gains an initial foothold on a compromised system and
then moves laterally through the network to bigger and more important areas. This will
typically be achieved by using an existing backdoor, such as a known built-in account. The
MITRE ATT&CK Matrix for Enterprise lists nine techniques for lateral movement. The
following is a list of typical tactics used for pivoting:
For more detail on these nine methods, follow this URL: https://attack.mitre.
org/versions/v9/matrices/enterprise/.
Pivoting allows the tester to move laterally through networks and is an important process
for gaining insights into systems on other network segments.
Security tools
There are many tools available to identify and collect security vulnerabilities or to provide
a deeper analysis of interactions between systems and services. When penetration testing
is being conducted, the scope of the test may mean the team is given zero knowledge
of the network. This would be referred to as black box testing. The team would need to
deploy tools to enumerate networks and services and use reverse engineering techniques
against applications. We will take a look at these tools.
SCAP scanner
A SCAP scanner will be used to report on deviations from a baseline, using input files
such as STIGs or other XML baseline configuration files. The SCAP scan will also search
for vulnerabilities present, by detecting the operating system and software installed using
the CPE standard. Once the information is gathered about installed products, the SCAP
scan can now search for known vulnerabilities related to CVEs.
Figure 6.5 shows the results of a SCAP scan with critical vulnerabilities listed first
(OpenVAS is an OVAL-compliant SCAP scanner):
Vulnerability scanner
A vulnerability scanner can be used to detect open listening ports, misconfigured
applications, missing security patches, poor security on a web application, and many other
types of vulnerability. Figure 6.8 shows a vulnerability scan against a website:
Protocol analyzer
A protocol analyzer is useful when we need to perform Deep Packet Inspection (DPI) on network traffic. tcpdump, TShark, and Wireshark are common implementations of this technology. It is possible to capture network traffic using a common file format, so it can be analyzed within the tool at a later time. We could use this tool to confirm the use of insecure protocols or data exfiltration. Figure 6.9 shows Wireshark in use, with a capture of unsecured FTP traffic:
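For example, tcpdump can capture traffic to a standard pcap file for later analysis in Wireshark; the interface name and capture filter here are illustrative:
# capture FTP control traffic on eth0 and write it to a pcap file
tcpdump -i eth0 -w ftp_capture.pcap port 21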
Port scanner
A port scanner will typically perform a ping sweep in the first instance, to identify all the live hosts on a network. Each host will then be probed to establish the TCP and UDP listening ports. Port scans can be performed to identify unnecessary applications present on a network. They can also be used by attackers to enumerate services on a network segment. A port scan performed by Nmap is shown in Figure 6.10:
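A typical sequence, using illustrative target addresses, is a ping sweep followed by a port scan of a discovered host:
nmap -sn 10.10.0.0/24                     # ping sweep to identify live hosts
nmap -sS -sU --top-ports 100 10.10.0.15   # probe the most common TCP and UDP ports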
Once an attacker or penetration tester has identified available hosts and services, they can
attempt to gather more information, using additional tools.
HTTP interceptor
HTTP interceptors are used by security analysts and pen testers. This tool is useful when we need to assess the security posture of a web application without being given access to the source code. Figure 6.11 shows Burp Suite, a popular HTTP interceptor, which is provided as a default application on Kali Linux:
The HTTP interceptor is installed as a proxy directly on the host workstation. All browser
requests and web server responses can be viewed in raw HTML, allowing the tester to gain
valuable information about the application. See Figure 6.12 for the interceptor traffic flow:
Exploit framework
Penetration testers can use an exploitation framework to transmit payloads to hosts on a network. Typically, this is done through vulnerabilities that have been identified. Popular frameworks include Sn1per, Core Impact, Canvas, and Metasploit. Exploit frameworks must be kept up to date so that recently disclosed vulnerabilities can be discovered and tested. Once a scan has detected hosts on the network and identified operating systems, the attack can begin. Figure 6.13 shows a Linux host about to be tested by Metasploit:
Password crackers
Password crackers are useful to determine whether weak passwords are being used in the organization. Common password crackers are Ophcrack, John the Ripper, and Brutus (there are many more examples). There are many online resources as well, including dictionaries and rainbow tables.
Figure 6.14 shows the use of John the Ripper against a Linux password database:
In the output for Figure 6.14, the first column is the user account and the second value is the cracked password. So, user tech02 has set their password to strong (which is just a plain dictionary word).
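A typical John the Ripper session against a local Linux password database might look like the following; the wordlist path is illustrative:
unshadow /etc/passwd /etc/shadow > hashes.txt   # combine account and hash files
john --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt
john --show hashes.txt                          # display any cracked passwords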
Important Note
While toor is not a common dictionary word, it is a poor choice of password for
the root account.
Summary
In this chapter, we have looked at a variety of tools and frameworks to protect information
systems.
We have identified the appropriate tools used to assess the security posture of operating
systems, networks, and end user devices. We have also learned how to secure our systems
using vulnerability scanning. We have identified industry-standard tools and protocols,
to ensure compatibility across the enterprise (such as SCAP, CVE, CPE, and OVAL), and
examined information sources where current threats and vulnerabilities are published. We
have also looked at the requirements for managing third-party engagements, to assess our
systems, and learned about the tools used for internal and external penetration testing.
These skills will be important as we learn about incident response and forensic analysis in
the next chapter.
Questions
Here are a few questions to test your understanding of the chapter:
1. When performing a SCAP scan on a system, which of the following types of scans
will be most useful?
A. Credentialed
B. Non-credentialed
C. Agent-based
D. Intrusive
2. What would be most important when monitoring security on ICS networks, where
latency must be minimized?
A. Group Policy
B. Active scanning
C. Passive scanning
D. Continuous integration
3. What is the protocol that allows for the automation of security compliance scans?
A. SCAP
B. CVSS
C. CVE
D. ARF
A. XCCDF
B. CVE
C. CPE
D. NMAP
5. What standard allows a vulnerability scanner to detect the host operating system
and installed applications?
A. XCCDF
B. CVE
C. CPE
D. SCAP
A. XCCDF
B. CVE
C. OVAL
D. STIG
7. What information type can be found at MITRE and NIST NVD that describes a
known vulnerability and gives information regarding remediation?
A. CVE
B. CPE
C. CVSS
D. OVAL
A. CVE
B. CPE
C. CVSS
D. OVAL
A. Self-assessment
B. Third-party assessment
C. PCI compliance
D. Internal assessment
10. When we download patches from Microsoft, where should they be tested first?
A. Staging network
B. Production network
C. DMZ network
D. IT administration network
A. Advisories
B. Bulletins
C. Vendor websites
D. MITRE
A. ISACs
B. NIST
C. SCAP
D. CISA
A. Static analysis
B. Dynamic analysis
C. Fuzzing
D. Reverse engineering
14. What type of analysis would allow researchers to measure power usage to predict
the encryption keys generated by a crypto-processor?
A. Side-channel analysis
B. Frequency analysis
C. Network analysis
D. Hacking
15. What type of analysis would most likely be used when researchers need to study
third-party compiled code?
A. Static analysis
B. Side-channel analysis
C. Input validation
D. Reverse engineering
16. What automated tool would developers use to report on any outdated software
libraries and licensing requirements?
A. Fuzz testing
B. Input validation
C. Reverse engineering
D. Pivoting
18. What is the term for lateral movement from a compromised host system?
A. Pivoting
B. Reverse engineering
C. Persistence
D. Requirements
A. Post-exploitation
B. OSINT
C. Reconnaissance
D. Foot printing
20. What is the correct term for a penetration tester manipulating the registry in order
to launch a binary file during the boot sequence?
A. Pivoting
B. Reverse engineering
C. Persistence
D. Requirements
21. What tool would allow network analysts to report on network utilization levels?
22. What would be the best tool to test the security configuration settings for a web
application server?
23. With what tool would penetration testers discover live hosts and application
services on a network segment?
25. What could be used to reverse engineer a web server API when conducting a zero-
knowledge (black box) test?
A. Exploitation framework
B. Port scanner
C. HTTP interceptor
D. Password cracker
26. What tool could be used by hackers to discover unpatched systems using automated
scripts?
A. Exploitation framework
B. Port scanner
C. HTTP interceptor
D. Password cracker
27. What would allow system administrators to discover weak passwords stored on the
server?
A. Exploitation framework
B. Port scanner
C. HTTP interceptor
D. Password cracker
28. What documentation would mitigate the risk of pen testers testing the security
posture of all regional data centers when the requirement was only for the
e-commerce operation center?
A. Requirements
B. Scope of work
C. Rules of engagement
D. Asset inventory
29. What documentation would mitigate the risk of pen testers unintentionally causing
an outage on the network during business hours?
A. Requirements
B. Scope of work
C. Rules of engagement
D. Asset inventory
30. What type of security assessment is taking place if the tester needs to perform badge
skimming first?
Answers
1. A
2. C
3. A
4. A
5. C
6. C
7. A
8. C
9. B
10. A
11. A, B and C
12. A
13. A
14. A
15. D
16. A
17. A
18. A
19. A
20. C
21. A
22. B
23. D
24. D
25. C
26. A
27. D
28. B
29. C
30. D
7
Risk Mitigation
Controls
A large enterprise providing information services or critical infrastructure presents
a large attack surface. We must consider all aspects of security, including application
vulnerabilities and the likelihood that we will be attacked (always think worst-case
scenario). We must be aware of the kind of attacks to expect and, most importantly, how
to mitigate these threats. We must be proactive in our approach, using the latest tools and
techniques to best protect our assets. We must also consider physical security. But most
importantly, we should deploy defense in depth.
In this chapter, we will go over the following topics:
Race conditions
A race condition, also known as time of check to time of use (TOCTOU), is usually associated with a stored value and the later use of that stored value. It is a time-related vulnerability that can cause unexpected or unwanted results. For example, an application stores a shopping basket with a total value of $500. Another thread (within the code module) commits the total as a sales transaction and charges the customer account, but it takes 800 ms before it reads in the total value. Meanwhile, the customer has been able to change the stored value from $500 to $1. These flaws are difficult to test for.
Buffer overflows
A memory buffer is used to store a value. An attacker can target the memory location by inputting too many characters, causing unwanted results. If the buffer has a 16-byte limit and the data sent is 17 bytes, the outcome will be an overflow, where the extra byte may overwrite an adjacent memory location and trigger unwanted processing. The result of a buffer overflow can be a denial of service (DoS) condition, or the goal may be to run an arbitrary command. It is important for developers to perform input validation within the code.
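As a simple sketch of how a tester might probe for this weakness, an oversized input string can be generated and passed to a program under test (the binary name here is hypothetical):
# pass 64 'A' characters as the first argument of a locally tested program
./vulnerable_app "$(python3 -c 'print("A" * 64)')"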
Integer overflows
Integer overflows target a memory location storing a numerical value. If the appropriate
input validation checks are not present in the code, then a value may be accepted that
is outside of the range declared for the memory location. This can cause errors and
unwanted results.
Broken authentication
Broken authentication is one of the OWASP Top 10 risks. Attackers will target applications using credential stuffing (lists of valid accounts and passwords), brute-force password attacks, unexpired session cookies, and many other techniques. Protection against these types of attacks includes the elimination of weak passwords and the use of multi-factor authentication (MFA). For more details, see https://tinyurl.com/OWASPA2.
Insecure references
This vulnerability now comes under the OWASP heading of broken access control and is
number 5 in the web at-risk categories. It means a user can gain access to a part of the
system that they have no authorization for. It is made possible by privilege escalation,
bypassing access control validation, or reusing another user's session ID. For more
information, see https://tinyurl.com/OWASPA5.
Security misconfiguration
Security misconfiguration could affect many parts of a system providing customer-facing web applications. The network environment could be poorly configured, or a misconfigured firewall may allow access over an insecure port. The operating system, libraries, applications, storage, and improper HTTP headers should all be considered when assessing organizational security.
Information disclosure
Information disclosure could allow an attacker to view data that is covered by regulatory compliance, laws, intellectual property protections, and more. This vulnerability can be present due to a number of factors, including weak encryption algorithms, no encryption, insecure protocols, and unsecured data storage, to list a few. For more information, see https://tinyurl.com/OWASPA3.
Certificate errors
It is important to provide secure transport for internet application protocols. Certificates must meet certain criteria in order to be trusted by the client-side application. Figure 7.1 shows typical fields on a standard X.509 certificate:
If the certificate will be used for multiple sites, it will need to support wildcards or Subject Alternative Names (SANs).
To assess the security posture of a hosted application accessed over SSL/TLS, a
vulnerability assessment tool will also check for all the common vulnerabilities associated
with certificates.
The current industry standard is TLS 1.2, and some sites may also support TLS 1.3. SSL is no longer used as it is considered insecure. As we can see in Figure 7.3, older, insecure standards are not supported. It is a cat-and-mouse game: stronger encryption standards are released, and the push is then on for people to upgrade their systems to the more secure standards.
Weak ciphers
A cipher is the combination of a unique key and a method to encrypt the data. The Data Encryption Standard (DES) was developed in the 1970s and would definitely meet the criteria for a weak cipher. It has a nominal key size of 64 bits, but in reality only 56 bits are used for the key (the remaining bits are parity).
Figure 7.4 – Output from the Qualys SSL Labs vulnerability testing tool
In the scan results, the weakest detected cipher suites are considered to be acceptable, as
the site received an A+ security rating.
Third-party libraries
When we use third-party libraries, we need to ensure vulnerabilities are not present and
that we are not infringing on any licensing restrictions.
Dependencies
It is important to ensure dependencies are in place so software can be deployed and
functions reliably without error. For example, a program written in Java needs a Java
virtual machine (VM) to be available, and without this dependency, it cannot run.
Another example would be an application that will use location services provided by the
Google Maps API. Many applications will now use online services, and if that service is
withdrawn, your code will not function.
Regression issues
When a software component that previously worked well now proves to be slow or unresponsive, this is known as a software regression bug. It is important that we test all software modules to ensure they still function whenever other parts of the system are changed. This testing is known as regression testing.
{"firstName": "John","lastName":"Jones"}
Browser extensions
A browser extension adds additional functionality to a web browser. A useful addition
to allow playing media-rich content is Adobe Flash – once installed, it is important
to keep this up to date with patches, as there are many instances of vulnerabilities on
older versions of Flash. ActiveX is a supported browser extension within the Microsoft
Internet Explorer browser. ActiveX allows functionality such as spell checkers, language
translators, and location services to be available within the browser. It is important
to validate the source for any downloaded browser extensions and to ensure they are
updated/patched. The Microsoft Edge browser will only allow the addition of browser
extensions from the Windows Store. See Figure 7.6 for an example of Microsoft Edge
extensions:
Directory traversal
Directory traversal is when an attacker can supply input syntax that allows them to move through the filesystem of the target web server. The goal is to access directories and files that should be restricted. Figure 7.8 shows a typical string that attempts to move up to the root directory, switch into the etc folder, and display the password file:
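An illustrative request (the hostname and parameter name are hypothetical) would look similar to the following:
http://www.example.com/view.php?file=../../../../etc/passwd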
Cross-site scripting
Cross-site scripting (XSS) is an attack where the hacker embeds a script into a web page; once added, the script is loaded whenever the page is rendered in a user's web browser. The target could be a travel site where you can leave reviews. In Figure 7.9, we can see a script that has been added to a website that will run a request crafted by the attacker:
<script>GET http://hsbc.com/fundstransfer.do?acct=GeorgeMainwaring&amount=€500 HTTP/1.1</script>
This kind of attack is, however, highly unlikely to succeed on a banking website.
Injection attacks
Injection attacks will use the customer-facing web application server as the go-between.
The commands will target the input fields of the application and the web application will
forward the requests to the backend server.
XML
XML is a common language for exchanging data and facilitating communication between
web applications. The goal is to access content or manipulate transactions.
LDAP
LDAP command verbs allow searching, creating, or modifying accounts stored in
directory services. Some examples using command-line syntax for Microsoft Active
Directory services are shown as follows. In the first example, we are searching for all users
in the itadmin container:
Input:
dsquery user "ou=it admin,dc=classroom,dc=local"
Result:
"CN=Mark Birch,OU=IT Admin,DC=classroom,DC=local"
In the second example, we are creating a new user account in the users container:
Input:
dsadd user "cn=sqlsystem,cn=users,dc=classroom,dc=local" -pwd
Pa$$w0rd1
Result:
dsadd succeeded:cn=sqlsystem,cn=users,dc=classroom,dc=local
SQL
SQL injection targets the database behind a web application. Using similar command logic, an attacker could submit a crafted string into a login form. In this example, the AND condition is evaluated before the OR operator, making the WHERE clause true. If the application does not perform the appropriate string checks, it will select the first record in the users table, as the injected logic (1=1 is always true) makes the statement true. This would result in an authentication bypass exploit. Figure 7.11 shows how this could be input into a login screen:
This type of exploit allows the attacker to access the first account in the table of users.
Figure 7.12 shows a successful login attempt:
Sandbox escape
There are many examples of code running in a secure sandbox, where no insecure interactions with the local operating system or filesystem are allowed; the Chrome and Edge browsers are examples of this technology. A vulnerability reported in November 2020 allowed code running in Chrome to break out of the sandbox, and attackers were able to target this flaw (CVE-2020-6573). Google released an update for Chrome to mitigate the threat. The security researchers who discovered the flaw were paid $20,000.
VM hopping
When VMs are hosted on the same hypervisor or are accessible over the same virtual network, there is the possibility for an attacker to gain access to another virtual host. This attack can succeed if security is not addressed properly. Attacks can be launched through the virtual switch; for example, a DoS condition could cause the switch to forward packets to all ports.
VM escape
When access to the underlying hypervisor is possible, then the attacker has broken out of
their isolated VM. The attacker may access the filesystem, re-route traffic, or re-configure
networking devices. It is the same as an attacker having access to your physical data center
with all the cabinets and racks unlocked. There are over 50 documented vulnerabilities
associated with this exploit. CVE-2020-3962 lists a vulnerability within VMware products
that allows for VM escape.
Interception attacks
These types of attacks allow a third party to gain access to our intellectual property,
customer records, or anything that we would consider confidential. The means of attack
will typically use a man-in-the-middle (MITM) to access the data. Figure 7.13 shows the
user's browser accessing an e-commerce application:
All the data will be accessed by the attacker before it is sent to the router and also before it
is returned to the client browser.
Social engineering
This is still a highly successful attack variant targeting humans. As many of these exploits appear to be genuine and believable, they stand a good chance of success. There are many variants, including the following:
• Dumpster diving: This is where an attacker can search through discarded company
documents thrown into the trash. If we do not sanitize these documents, then
useful company information may be stolen. Documents could include calendars,
organizational charts, and sensitive data. This can be used to gain intelligence on an
organization's employees and may play a part in active reconnaissance.
• Shoulder surfing: Gaining access to credentials by being in close proximity to a
user who is logging in to a system. They could be looking over someone's shoulder
to see their password.
• Card or credential skimming: Used to gain access to token-based credentials by
cloning radio-frequency identification (RFID) cards.
VLAN hopping
This exploit allows an attacker to gain access to traffic on a protected network segment
(VLAN) that should be securely segmented. There are two well-known attack vectors that
are commonly used.
Double tagging
The attacker crafts a VLAN frame with two tags: an outer tag for a valid VLAN and an inner tag for the protected segment. When the frame is passed between switches across a trunk port, the outer tag is removed by the first switch, revealing a VLAN ID that matches one of the supported VLANs on the receiving switch. This is possible when we allow ports to be associated with the default (native) VLAN, as the sending switch does not perform a check on the inner VLAN ID.
Switch spoofing
The attacker will connect a device to a switch port that is set up to auto-negotiate as a trunk port. This targets the Dynamic Trunking Protocol (DTP) feature; if the switch is not secured, the attacker can forward all VLAN traffic to their device. It is important to disable this configuration as a default option. To remediate this vulnerability, we would set all user-facing ports as access-only ports, and for the ports that are used for trunking, we would use the following command:
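On Cisco switches, for example, statically defined trunk ports would have DTP negotiation disabled; the exact syntax depends on the platform:
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport nonegotiate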
This would stop an attacker connecting to a port and setting up a trunk to their device.
Hunts
A hunt team will be tasked with discovering IOCs and APTs. The goal will be to discover
previous attacks and attacks in progress and prevent future attacks by gathering threat
intelligence. Forensics techniques and access to historical logged data can be used.
Developing countermeasures
Once security professionals have identified tactics, techniques, and procedures (TTPs), we can use this information to build better defenses. Known bad actor IP address ranges can be blocked, rules can be updated on Network Intrusion Prevention Systems (NIPS), and Remote Triggered Black Hole (RTBH) rules can be created, along with many other countermeasures.
Deceptive technologies
There are tools and technologies that can be used to delay or divert attackers while at the
same time gathering useful threat intelligence. This includes the following techniques.
Honeynet
A honeynet is a collection of systems and services set up to simulate a vulnerable network. The goal is to divert attackers from the real network and to identify the attackers and their tools and techniques. With honeynets mainly deployed as virtual instances running on hypervisors, we can also use the power of dynamic network configurations to provide changing configurations and responses to activity.
Honeypot
A honeypot has the same goals as a honeynet but is a single system, such as a vulnerable
web application server.
Decoy files
A decoy file can be used to discover the whereabouts of attackers, covering both internal and external threats. A file would be created that would be of interest to the attacker, for example, passwords.doc. The file would contain a hidden image, and when the file is opened, the image is accessed from the web server. All access is logged, including the IP address of the system opening the file. This generates a beacon.
Processing pipelines
Processing pipelines are used to read or capture raw data, process the data, and store the
output in data lakes or warehouses for further analysis. When we apply this technique to
building automation into our security, we can use artificial intelligence (AI) and ML to
better protect information systems. The data is unstructured, raw data at the start of the
process. Figure 7.15 shows the steps of the processing pipeline:
1. Capture data: Capture raw data from existing data records or live data streams.
2. Process data: Formatting from many different data types into a usable format.
3. Store data: Typical storage is a data lake. It is still raw data.
4. Analyze data: Now we try to make sense of all the captured data.
5. Use data: Build rules for our security appliances.
Other techniques to make the best use of large pools of data are discussed in the
following subsections.
Antivirus
Antivirus tools are an important preventative solution. They can be deployed on end
devices and on network gateways, such as next-generation firewalls (NGFWs) and
unified threat management (UTM). The latest antivirus tools will use smart detection
techniques, including heuristic analysis, and centralized control and monitoring.
Immutable systems
Immutable systems allow systems to be deployed easily from a validated image. This relies
on strict version control. When a change is required, updates and patches can be tested
in a non-production environment and signed off. The build image is then created and
assigned a version identifier, and production images can then be replaced. When we adopt
this process, there is less likelihood that we will have systems that do not align with a strict
security posture.
Hardening
To ensure systems and services are protected when in use, there should be a checklist
or baseline, to ensure only the required applications and services are available. By
adopting this approach, we can minimize our security footprint. If we disable or uninstall
unnecessary services and applications, then we avoid the risk of service or listening ports
being compromised. Vendor hardening guides and baseline configuration compliance
tools can be very useful. Figure 7.17 shows a configuration compliance scan for Red Hat
Linux 7. This will allow the configuration to be enforced through remediation:
Sandbox detonation
If an unknown file or attachment cannot be validated as genuine, then the safest way to
understand its purpose or logic is to observe behaviors in a secure environment where
there will be no adverse impacts to other systems. The analysis will allow files, scripts,
macros, and URL behaviors to be determined. The sandbox will emulate the operating
system but isolate access to the physical hardware. Microsoft Windows 10 supports a
secure sandbox (see Figure 7.18):
Application control
It is important to protect networks and information systems by ensuring we restrict the
applications that are installed to a set that are considered safe and vulnerability-free. It is
important that any applications used by the enterprise are covered by software licenses.
License technologies
Licensing can be complex in a large organization with multiple sites, business units, and
devices. It is important to have global oversight of the licenses that are available and in
use. Unexpected licensing costs or legal actions are best avoided. There are many tools
available to provide the appropriate information and reporting to ensure we are compliant.
Atomic execution
An atomic execution of a transaction means all or nothing. Examples could include a write
or move operation within the filesystem. When we move a file onto a new disk partition,
the original will be deleted. Therefore, a successful write operation must be accomplished
before the original can be deleted. This would ensure that a sudden loss of power would
not result in missing data. Also, the thread of execution, moving the file, would be isolated
from any other running process.
With other transactions, such as committing a payment operation to a database, this process
would ensure the process could not be manipulated by a race condition (or TOCTOU).
Security automation
Within large, complex enterprises, the challenge is to maintain a strong security posture
while supporting a diverse, heterogeneous environment. It is not uncommon to see a
mixture of Windows, Linux, Unix, and specialist operating systems supported within
the enterprise. To maintain a secure, stable environment, we must look at automating
security tasks.
Cron
Cron is the scheduling service on Linux and Unix systems; scheduled jobs are defined as crontab entries. The following entry runs the backup.sh script at midnight every Sunday (minute 0, hour 0, any day of month, any month, day of week 0):
0 0 * * 0 /bin/sh backup.sh
There is an interesting graphical user interface (GUI) utility, allowing for the creation of
crontab lines, at the following link:
https://crontab-generator.org/
Bash
Bash is a common shell that is now included with many operating systems, including Unix, Linux, and macOS, and can be added as a feature on Microsoft Windows. It allows commands to be executed from within the shell and also supports automation through shell scripts. It is important to install the latest version and to apply any updates, as there are vulnerabilities in older versions. There are many commands available, and help will display the range of built-in commands. To view help on specific commands, we use man <command name> or <command name> --help.
PowerShell
PowerShell is the current Microsoft command shell. It is open source and can also
be installed on Linux and macOS operating systems. It is very powerful and includes
extra functionality that is not available through the GUI. Commands are executed by
combining a verb and a noun. Examples of verbs are get, set, start, and stop,
which are the actions, and the noun will refer to the object, such as vm or service. An
example command to obtain a list of all available VMs and display the output is shown in
Figure 7.21:
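As a hedged example, assuming the Hyper-V module is available, a command of the following form lists the VMs on the host:
Get-VM | Select-Object Name, State, Uptime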
Python
Python is an open source interpreted programming language. It is supported on many
operating systems and can be used to automate administrative tasks. One of the benefits
of using Python is that it is relatively easy for new users to understand. For additional
information, including downloads and help, see the following link:
https://www.python.org/
Physical security
To ensure our organization is fully protected from all threats, including physical threat
actors, we must ensure we have defense in depth. Information systems may be hosted
within our own managed data centers, with third parties and also cloud providers.
To ensure we meet expected industry and regulatory standards for security, there are
audits that can be performed. When planning for the security of a data center, there are
recognized international standards such as the American National Standards Institute/
Telecommunications Industry Association (ANSI/TIA-942). This standard focuses on
physical security controls. Regulatory compliance standards, such as the Payment Card
Industry Data Security Standard (PCI DSS) or the Sarbanes-Oxley Act, have certain
requirements for physical security controls. The Statement on Standards for Attestation
Engagements No. 16 (SSAE 16) is a compliance audit that focuses on controls
implemented by service organizations.
Review of lighting
ANSI/TIA-942 has requirements for occupied space, entry points, and unoccupied space
within a data center. For entry points and unoccupied spaces, it is a requirement that motion
sensors are deployed and will automatically activate lighting. The lighting should be of a
standard that safe passage is possible, and identification is possible using video cameras. In
all occupied zones, lighting must be provided at a minimum intensity of 200 lux.
Camera reviews
It is important to review available tools and technologies when choosing systems, such as
IP video. Cameras can be used for facial recognition and vehicle license plate recognition,
as well as detecting intrusions within a facility, so systems supporting high definition
or Infrared (IR) may also need to be considered. To ensure high-quality images are
recorded, it may be necessary to have high-capacity network infrastructure and storage.
"confined space means any place, including any chamber, tank, vat, silo, pit,
trench, pipe, sewer, flue, well or other similar space in which, by virtue of its
enclosed nature, there arises a reasonably foreseeable specified risk."
An organization must pay close attention to health and safety. Regulations and laws
concerning health and safety can be very strict. Policies, procedures, and adequate
training must be made mandatory.
Where an employer has been shown to have not performed due care, penalties can result
in fines and prison sentences.
Summary
In this chapter, we have assessed enterprise risk using many applicable methods. We have
studied options to mitigate risks. Enterprises will host information services or critical
infrastructure, and this presents a large attack surface. We have considered all aspects of
security, including application vulnerabilities and the likelihood that we will be attacked.
We have learned about many common application vulnerabilities and understood the risks posed by inherently vulnerable systems and applications. We have investigated common attacks against applications and learned about the benefits of proactive and detective risk controls, as well as effective preventative risk reduction. This knowledge will be useful when planning for incident response and the use of forensic analysis.
In the next chapter, we will take a look at planning an effective incident response policy. We
will understand the importance of forensics to identify and provide evidence in the event of
a breach. We will also learn which tools are appropriate during the forensics process.
Questions
Here are a few questions to test your understanding of the chapter:
1. Attackers find a vulnerability on a website that allows them to select items from a
shopping basket. When the authorize payment button is selected, there is a 500 ms
delay. The attackers run a script that takes 200 ms and allows the final payment to be
altered. What is the vulnerability that has been targeted?
A. Buffer overflow
B. Integer overflow
C. Broken authentication
D. Race condition
2. Attackers find a vulnerability on a website that allows them to select items from
a shopping basket. There is a running total value for the basket. When items are
added beyond a total of $9,999, the total displays a value starting from $0.00. What
is the vulnerability that has been targeted?
A. Buffer overflow
B. Integer overflow
C. Broken authentication
D. Weak ciphers
3. What allows attackers to sniff traffic on a network and capture cookies sent
over HTTP?
A. Improper headers
B. Poor exception handling
C. Certificate errors
D. Race condition
4. What allows developers to maintain an inventory of all code libraries and licenses
used in their applications?
6. What is it called when developers no longer release security patches for their
software applications?
A. End-of-support/end-of-life
B. Regression issues
C. Dependencies
D. Bankruptcy
8. What is it called when developers rely on a cloud provider API for full functionality
of their software applications?
9. When a software component has previously worked well but now proves to be slow
or unresponsive, what is it known as?
A. Unsafe functions
B. Unsafe third-party libraries
C. Software dependencies
D. Software regression bug
10. When considering input validation for your web application, where should the
validation take place?
A. Client-side
B. Flash
C. Server-side
D. ActiveX
11. What is runtime or interpreted code that can provide media-rich web content
within a web browser?
12. What is runtime or interpreted code that can provide partial page updates
(therefore saving bandwidth) when repositioning a map on screen?
13. Security professionals have found IOCs while reviewing Security Incident and
Event Management (SIEM) logs. The following commands were found from the
application server logs:
../../../../../etc/password
14. Security professionals have found IOCs while reviewing SIEM logs. The following
commands were found in the application server logs:
GET http://acmebank.com/transferfunds.do?acct=bobjones&amount=$400 HTTP/1.1
15. While reviewing web application firewall logs, security professionals have found
IOCs. The following commands were found in the logs:
SELECT * FROM users WHERE username = '' OR 1=1 --' AND password = 'mypass1'
16. While executing malware in an isolated environment, malware has been found on
previously unaffected systems. What is the likely cause?
A. Sandbox escape
B. Virtual machine (VM) hopping
C. VM escape
D. Sandbox detonation
17. Internet traffic has been rerouted causing outages for many large internet providers.
Attackers have used default accounts to configure ISP routers. What technology or
vector of attack has been used?
A. BGP
B. VLAN hopping
C. LDAP
D. DDoS
18. What type of attack will most likely be effective when untrained users are targeted?
A. Social engineering
B. VLAN hopping
C. Hunts
D. DDoS
19. Security researchers need to understand APT bad actors by observing their tools,
tactics, and procedures. What would be the best tool for this?
A. Honeynet
B. Honeypot
C. Decoy files
D. Antivirus
A. Honeynet
B. Honeypot
C. Decoy files
D. Logic bomb
21. Microsoft security researchers need to understand APT bad actors by observing
their tools, tactics, and procedures. They gather massive amounts of raw security
data every day from customer endpoints. What would be the best approach to
identify IOCs?
A. Processing pipelines
B. Indexing and search
22. What allows an organization to deploy server operating systems that must be
replaced when there is an updated version?
A. Immutable systems
B. Hardening
C. Sandbox detonation
D. License technologies
A. Application whitelisting
B. Application hardening
C. Application blacklisting
D. Atomic execution
A. Application whitelisting
B. Application hardening
C. TOCTOU
D. Atomic execution
25. Linux systems need to run a scheduled backup at midnight every day. What would
allow administrators to automate the process?
A. Cron
B. Bash
C. PowerShell
D. Python
26. Linux system administrators need to execute common shell commands. What
should they use?
A. Cron
B. Bash
C. PowerShell
D. Python
27. Microsoft administrators need to run powerful command-line utilities and create
scripts to automate everyday system tasks. Scripts will also be created using .PS1
extensions. What will they use?
A. Cron
B. Bash
C. PowerShell
D. Python
28. Acme Corporation needs to support a common programming language that will
function across different vendor operating systems. What should they choose?
A. Cron
B. Bash
C. PowerShell
D. Python
Answers
1. D
2. B
3. A
4. D
5. A
6. A
7. B
8. C
9. D
10. C
11. D
12. A
13. A
14. C
15. D
16. A
17. A
18. A
19. A
20. C
21. A
22. A
23. A
24. D
25. A
26. B
27. C
28. D
8
Implementing
Incident Response
and Forensics
Procedures
When considering all the threats that can impact an organization, it is important to ensure
there are policies and procedures in place to deal with unplanned security-related events.
To ensure timely responses to security incidents, we should implement detailed planning
to provide controls and mitigation. It is important, given the nature of sophisticated,
well-funded adversaries, that we use a holistic approach when deploying appropriate
threat detection capabilities. Some approaches may involve automation, which can lead
to occasional mistakes (false positives and false negatives), so it is important that we also
ensure we include humans in the loop. The ever-increasing complexity of attacks and a
large security footprint add to these challenges. There is also evidence that Advanced
Persistent Threat (APT) actors are likely to target vulnerable organizations. Countering
APTs may require that we use advanced forensics to detect Indicators of Compromise
(IOCs) and, where necessary, collect evidence to formulate a response.
Event classifications
There are many ways to identify anomalous or malicious events. We can take advantage
of automated tools such as Intrusion Detection Systems (IDS) and Security Incident
and Event Monitoring (SIEM). We can also rely on manual detection and effective
security awareness training for our staff, which can help in detecting threats early. Service
desk technicians and first responders can also be effective in detecting malicious activity.
Common Attack Pattern Enumeration and Classification (CAPEC) was established by the
US Department of Homeland Security to provide the public with a documented database of
attack patterns. This database contains hundreds of references to different vectors of attack.
More information can be found at http://capec.mitre.org/data/index.html.
Triage event
It is important to assess the type and severity of the incident that has occurred. You can
think of triage as the work that's been done to identify what has happened. This term
is borrowed from emergency room procedures when a patient in the ER is triaged to
determine what needs to be fixed. This enables the appropriate response to be made
concerning the urgency, as well as the team members that are required to respond. It is
important to understand systems that are critical to enterprise operations as these need to
be prioritized. This information is generally available if Business Impact Analysis (BIA)
has been undertaken. BIA will identify mission-essential services and their importance
to the enterprise. BIA will be covered in detail in Chapter 15, Business Continuity and
Disaster Recovery Concepts. As there will be a finite number of resources to deal with the
incident, it is important not to focus on the first-come, first-served approach.
Preparation
While preparing for an IRP, it is a good practice to harden systems and mitigate security
vulnerabilities to ensure a strong security posture is in place. In the preparation phase, it
is normal to increase the enterprise's resilience by focusing on all the likely attack vectors.
Some of the tasks that should be addressed to prepare your organization for attacks
include the following:
Detection
To respond to a security incident, we must detect anomalous activity. The vector used for an attack can vary: it can include unauthorized software, external media, email attachments, DoS, theft of equipment, and impersonation, to name a few. We must have the means to identify new and emerging threats. Common methods include intrusion detection and prevention (IDP) systems, SIEM, antivirus, anti-spam, File Integrity Monitoring (FIM) software, data loss prevention technology, and third-party monitoring services. Logs from key services and network flows (NetFlow and sFlow) can also help detect unusual activity. First responders, such as service desk technicians, may also be able to confirm IOCs when responding to user calls.
Once we have detected such security-related events, we must analyze the activity to
prepare a response.
Analysis
Not every reported security event or automatic alert is necessarily going to be malicious; unexpected user behavior or errors may result in false reporting. It is important to discard normal/non-malicious events at this stage. When a security-related event is incorrectly reported, it can result in the following:
• False positives: This means that a non-malicious event has been incorrectly identified as malicious, wasting analyst time and resources.
• False negatives: This means that a malicious event has been incorrectly identified as non-threatening, allowing the activity to go undetected.
When the correct tools and investigative practices are followed, then the correct diagnosis
should be made, resulting in the following:
• True positives: This means that the event has been identified as malicious and the
appropriate action can be taken.
• True negatives: This means that an event has been correctly identified as
non-threatening and no remedial action needs to be taken.
It is important to document the incident response process using issue tracking systems,
which should be readily available for IRT members. The following diagram shows
information that should be recorded in this application database:
Containment
It is important to devise containment strategies for different types of incidents. If the
incident is a crypto-malware attack or fast-spreading worm, then the response will
normally involve quickly isolating the affected systems or network segment. If the attack
is a DoS or DDoS that's been launched against internet-facing application servers, then the
approach may involve implementing a Remote Triggered Black Hole (RTBH) or working
with an ISP offering DDoS mitigation services.
The following diagram shows the criteria that should be considered when planning
containment strategies:
Lessons learned
An important part of the incident response process is the after-action report, which
allows improvements to be made to the process. What went well or not well should be
addressed at this point. The team should perform this part of the exercise within days of
the incident. The following diagram highlights some of the issues that may arise from a
lessons learned exercise:
Playbooks
Playbooks can be used to respond to an incident by giving security professionals a set
of checks and actions to work through common security scenarios. This can be very
beneficial in reducing response times and containing the incident. The following diagram
shows a playbook that's been created to help handle a malware incident:
Playbooks can be very useful as they provide a step-by-step set of actions to handle typical
incident scenarios.
Runbooks
A runbook will allow first responders and other Incident Response Team (IRT)
members to recognize common scenarios and document steps to contain and recover
from the incident. It could be a set of discrete instructions to add a rule to a firewall or to
restart a web server during the recovery phase within incident response. Runbooks do not
include multiple decision points (this is what playbooks are used for), so they are ideal for automated responses and will work well when integrated within Security Orchestration, Automation, and Response (SOAR).
There are many vendor solutions currently being offered to orchestrate security responses.
Communication plan
It is important to have an effective communication plan and to manage stakeholders.
The IRT may need to communicate with many external entities, and it is important to
document these entities within the IRP. Some examples of these parties are shown in the
following diagram:
Forensic process
It is important to follow the correct forensic process. This can be broken down into four
steps, as shown in the following diagram:
1. Data collection involves identifying sources of data and the feasibility of accessing
data. For data that's held outside the organization, a court order may be required to
gain access to it. It is important to use validated forensic tools and procedures if the
outcome is going to be a legal process.
2. Once we have collected the data, we can examine the raw data. Such data could
include extensive log files or thousands of emails from a messaging server; we can
filter out the logs and messages that are not relevant.
3. Data analysis can be performed on the data to correlate events and patterns.
Automated tools can be used to search through the logged data and look for IOCs.
4. During the reporting phase, decisions will be made as to what further action is
applicable. Law enforcement will need detailed evidence to construct a legal case.
Senior management, however, will require reports in a more business-orientated
format to formulate a business response.
Chain of custody
A chain of custody form must begin when the evidence is collected. It is important to
document relevant information about where and how the evidence was obtained. When
the evidence changes hands, it is important that documentary evidence is recorded,
detailing the transaction.
The following diagram shows a typical chain of custody form:
Order of volatility
When you're undertaking computer forensics, it is important to follow standards and accepted practices. Failure to do so may result in a lack of evidence or evidence that is inadmissible. RFC 3227 is the Internet Engineering Task Force (IETF) guideline for capturing data that will be used to investigate data breaches or provide evidence of IOCs. The most volatile data is held in CPU registers and caches, which are constantly being overwritten, while archived media is long term and is stored on paper, a backup tape, or WORM storage. The following screenshot shows some of the storage locations that should be addressed:
Memory snapshots
Memory snapshots allow forensic investigators to search for artifacts that are loaded into volatile memory. It is important to consider volatile memory because anomalous activity may be found there that never gets written to storage or logs.
Images
For investigators to analyze information systems while looking for forensic artifacts, the
original image/disk mustn't be used (it will be stored securely as the control copy). The
image must be an identical bit-by-bit copy, including allocated space, unallocated space,
and free and slack space.
Evidence preservation
Once the evidence has been obtained, it is important to store the evidence in a secure
location and maintain the chain of custody. We must be able to demonstrate that the
evidence has not been tampered with. Hashing the files or images is performed to create a
verifiable checksum. Logs can be stored on Write Once Read Many (WORM) to preserve
the evidence.
Cryptanalysis
Cryptography may have been used to hide evidence or to render filesystems unreadable during a ransomware attack. In these cases, cryptanalysis may be deployed to determine the techniques that were used, as well as the likely attacker.
Steganalysis
Steganography is a technology that's used to hide information inside another file, often referred to as the carrier file. Steganalysis is the technology that's used to discover hidden payloads in carrier files. Digital image files, such as JPEGs, are often used to hide text-based data. When text is hidden in a compressed digital image file, it can distort the pixels in the image. These distortions are then identified by the steganalysis tool.
foremost
foremost allows you to retrieve various file types that have been deleted from
the filesystem. It will search for file fragments in many different formats, including
documents, image formats, and binary files. It can retrieve deleted files from the live file
system or a forensics image.
The following screenshot shows help for the foremost command:
In this example, we are searching the fixed drive for the .pdf and .png file types.
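A command of this form would perform that search; the source device and output folder are illustrative:
foremost -t pdf,png -i /dev/sda -o /cases/recovered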
strings
This is a useful forensics tool for searching through an image file or a memory
dump for ASCII or Unicode strings. It is a built-in tool that's included with most
Linux/Unix distributions.
The following screenshot shows the options for the strings command:
We are using the strings command to search for all the strings within the python3 binary and piping the output to the grep command, to only display strings containing the word copyright.
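The command used is similar to the following (the binary path may vary by distribution):
strings /usr/bin/python3 | grep -i copyright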
Hex dump
A hex dump allows us to capture and analyze the data that's stored in flash memory. It is
commonly used when we need to extract data from a smartphone or other mobile device.
It will require a connection between the forensics workstation and the mobile device.
A hex dump may also reveal deleted data, such as SMS messages, contacts, and stored
photos that have not been overwritten.
Binwalk
Binwalk is a useful tool in computer forensics as it will search binary images for embedded
files and executable code. When running the tool, the results can be placed in a folder for
further analysis. It can also be used to compare files for common elements. We may be
able to detect signatures from a known malicious file in a newly discovered suspicious file.
The output will display the contents and the offsets in decimal and hex of the payloads.
The following screenshot shows the output of the binwalk command:
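For example, a suspicious binary can be scanned and any embedded files extracted for further analysis; the filename is illustrative:
binwalk -e suspect_firmware.bin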
Ghidra
Ghidra is a tool developed by the National Security Agency (NSA) for reverse engineering. It includes software analysis tools that are capable of reverse engineering compiled code. It is a powerful tool that can be automated and supports many common operating system platforms. It can be used to search out malicious functionality embedded within binary files.
OllyDbg
Another popular reverse engineering tool is OllyDbg. It can be useful for developers to
troubleshoot their compiled code and can also be used for malware analysis. OllyDbg
may also be used by adversaries to steal Intellectual Property (IP) by cracking the
software code.
Readelf
Readelf allows you to display the content of ELF files. ELF files are system files that are
used in Linux and Unix operating systems. ELF files can contain executable programs and
libraries. In the following example, we are reading all the fields contained within the ssh
executable program file:
readelf -a ssh
Objdump
This is a similar tool to Readelf in that it can display the contents of operating system files
on Unix-like operating systems.
Strace
Strace is a tool for tracing system calls made by a command or binary executable file.
The following screenshot shows the Strace output for hostname. The actual output for
hostname would be the local system's hostname (dell7580):
ldd
To display dependencies for binary files, we can use the ldd command. This tool is
included in most distributions of Linux operating systems and will show any dependent
third-party libraries.
The following screenshot shows ldd searching for dependencies in the ssh binary file:
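The command is simply ldd followed by the path to the binary; the path shown is typical but may vary:
ldd /usr/bin/ssh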
file
The file command is used to determine the file type of a given file. Often, the extension
of a file, such as csv, pdf, doc, or exe, will indicate the type. However, many files in
Linux do not have extensions. In an attempt to evade detection, attackers may change the
extension to make an executable appear to be a harmless document. file will also work
on compressed or zipped archives. The following screenshot demonstrates the use of the
file command:
Analysis tools
Forensic toolkits comprise tools that can analyze filesystems, metadata, and running operating systems. We need advanced tools to detect the APTs and IOCs that are hidden within our information systems. Let's look at some examples of analysis tools.
ExifTool
To view or edit a file's metadata, we will need a specific analysis tool. ExifTool supports many different formats, including JPEG, MPEG, MP4, and many more popular image and media formats.
The following command is used to extract metadata from an image that's been
downloaded from a website:
exiftool nasa.jpg
Nmap
Nmap can be used during analysis to fingerprint operating systems and services. This will
aid security professionals in determining the operating system's build version and the
exact versions of the hosted services, such as DNS, SMTP, SQL, and so on.
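For example, the following scan (the target address is illustrative) attempts both operating system detection and service version detection:
nmap -O -sV 10.10.0.20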
Aircrack-ng
When we need to assess the security of wireless networks, we can use Aircrack-ng. This
tool allows you to monitor wireless traffic, as well as attack (via packet injection) and crack
WEP and WPA Pre-Shared Keys (PSKs).
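As a hedged example, a captured WPA handshake can be attacked with a wordlist; the BSSID and filenames are illustrative:
aircrack-ng -w /usr/share/wordlists/rockyou.txt -b 00:11:22:33:44:55 capture-01.cap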
Volatility
The Volatility tool is used during forensic analysis to identify memory-resident artifacts. It
supports memory dumps from most major 32-bit and 64-bit operating systems, including
Windows, Linux, macOS, and Android. It is extremely useful if the data is in the form of a
Windows crash dump, Hibernation file, or VM snapshot.
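Assuming a Volatility 2 installation, a memory image can be examined for running processes as follows; the image filename and profile are illustrative:
volatility -f memdump.raw --profile=Win7SP1x64 pslist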
Sleuth Kit
Sleuth Kit is a collection of tools that are run from the command line to analyze forensic images. It is usually incorporated into graphical forensic toolkits such as Autopsy. The forensic capture will be performed using imaging tools such as dd. Sleuth Kit also supports dynamically linked storage, meaning it can be used with live operating systems as well as with static images. When it is dynamically linked to operating system drives, it can be useful as a real-time tool, such as when responding to incidents.
Imaging tools
When considering the use of imaging tools to be used in forensic investigation, one of the
primary goals is to choose a tool that has acceptance when presenting evidence to a court.
Not all tools guarantee that the imaging process will leave the original completely intact,
so additional tools such as a hardware write blocker are also important. The following
tools are commonly used when the evidence will need to be presented to a court of law.
dd command
The dd command is available on many Linux distributions as a built-in tool. It is known
commonly as data duplicator or disk dump. Although on older builds of Linux the tool
also allowed for the acquisition of memory dumps, it is not possible to take a complete
dump of memory on a modern distribution.
The format of the command is dd if=/dev/sda of=/dev/sdb <options>, where
if specifies the input file and of specifies the output file. In this case, we are copying from
the first physical disk to a second, forensically attached disk.
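A fuller example, writing the image to an evidence mount point, might look like the following (the paths and block size are only illustrative, and status=progress requires a reasonably recent GNU dd):
dd if=/dev/sda of=/mnt/evidence/disk.img bs=4M conv=noerror,sync status=progress
The conv=noerror,sync options allow the copy to continue past read errors while padding the affected blocks, which can be important when imaging damaged media.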
Hashing utilities
When we are working with files and images, it is important to use hashing tools so that
we can identify files or images that have been altered from the original. A hash is a
checksum for a piece of data. We can use a hash value to record the current state of an
image before a cloning process, after which we can hash the cloned image to prove it is the
same. During forensic analysis, we can use a database of known hashes for our operating
system and application software to spot anomalies. Many hashing utilities are included
with operating systems.
sha<keylength>sum utilities are included with most Linux distributions, allowing the
checksum of a file to be calculated using sha1sum, sha224sum, sha256sum, sha384sum,
and sha512sum. In the following example, a SHA-256 checksum is being calculated for the
Linux grep binary:
sha256sum grep
605aaf67445e899a9a59c66446fa0bb15fb11f2901ea33386b3325596b3c8423 grep
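To demonstrate that a cloned image matches the original, the same hash can be calculated for both files and compared (the filenames are placeholders); matching digests indicate the clone is bit-for-bit identical:
sha256sum original.img clone.img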
netstat
netstat allows a live report to be created on listening ports, connected ports, and the status
of connections for local and remote network interfaces. If you need to capture this output
for further analysis, it can be redirected to a file using the netstat <options> >
filename syntax. The following screenshot shows the output of the netstat command,
saved as a text file:
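For example (the output filename is arbitrary), the following command lists TCP and UDP listening ports with numeric addresses and the owning processes, and redirects the report to a file:
netstat -tulpn > netstat_report.txt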
ps
ps can be used to view running processes on Linux and Unix operating systems. Each
process is allocated a process ID, which can be used if the process is to be terminated with
the kill command.
The following screenshot shows the output of ps -A (shows all processes) when run on
Kali Linux:
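For instance, once the process ID has been identified with ps -A, the process can be stopped with kill (the PID shown is a placeholder):
ps -A
kill 1234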
vmstat
vmstat can be used to show the number of available computing resources and currently
used resources. The resources are proc (running processes), memory (used and available),
swap (virtual memory on disk), I/O (blocks sent to and received from disk), system
(interrupts per second and hardware calls), and CPU (processor activity). The vmstat
command with no options will show the average values since the system was booted. The
following screenshot shows an example where there is a 5-second update for the display:
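For example, the following command refreshes the display every 5 seconds and stops after 10 samples:
vmstat 5 10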
netcat
To execute commands on a remote computer, we can install netcat (nc). netcat can be
launched as a listener on a compromised computer to give remote access to attackers. It
may also be used in forensics to gather data with a minimum footprint on the system
under investigation.
To set up a listening port on the system under investigation, we can type the
following command:
nc -l -p 12345
To connect from the forensics workstation, we can use the following command:
nc 10.10.0.3 12345
This will allow us to run commands on the system under investigation to reflect all the
outputs on the forensics workstation.
To transfer a file for investigation, we can use the following commands. On the system
under investigation, we will wait for 180 seconds on port 12345 to transfer
vreport.htm:
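A command of the following form could be used here (the exact options vary between netcat variants, so treat this as an illustrative example):
nc -w 180 -p 12345 -l < vreport.htm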
To transfer the report file to the forensics workstation, we can use the following command:
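One form this could take, connecting to the system under investigation (10.10.0.3, as in the earlier example) and redirecting the received data to a local file, is as follows:
nc 10.10.0.3 12345 > vreport.htm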
We now have a copy of the report on the forensics workstation (the report is a
vulnerability scan that was performed with a different tool).
tcpdump
tcpdump can be used to capture real-time network traffic and also to open captured traffic
using common capture formats, such as pcap. tcpdump is included by default on most
Linux distributions and can be thought of as a command-line counterpart to Wireshark.
To capture all the traffic on the eth0 network interface, we can use the following
command:
tcpdump -i eth0
The following screenshot shows the traffic that was captured on eth0:
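To preserve a capture for later analysis, the traffic can be written to a pcap file and then re-read (the filename is a placeholder):
tcpdump -i eth0 -w capture.pcap
tcpdump -r capture.pcap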
conntrack
The conntrack -L command lists the entries in the Linux kernel's connection tracking
table, which can be useful for reviewing the connections currently tracked by a host
under investigation.
Wireshark
One of the most well-known protocol analyzers is Wireshark. It is available for many
operating system platforms and presents the security professional with a Graphical User
Interface (GUI). Using Wireshark, we can capture traffic in real time to understand
normal traffic patterns and protocols. We can also load up packet capture files for detailed
analysis. It also has a command-line version called TShark. The following screenshot
shows a packet capture using Wireshark:
In the example, we have filtered the displayed packets to show only DNS activity.
Wireshark is a very powerful tool. For further examples and comprehensive
documentation, go to https://www.wireshark.org/docs/.
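As a brief example of the command-line version, TShark can apply the same display filter to a saved capture file (the filename is a placeholder):
tshark -r capture.pcap -Y "dns"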
Summary
In this chapter, we have considered many threats that can impact an enterprise and
identified policies and procedures to deal with unplanned security-related events. We
learned about the importance of timely responses to security incidents. Knowledge has
been gained on deploying the appropriate threat detection capabilities. We have studied
automation, including orchestration and SOAR, also taking care to include a human in the
loop. Ever-increasing evidence of APTs means that we need to rely on forensics to detect
IOCs and, where necessary, collect the evidence to formulate a response. You should now
be familiar with incident response planning and have a good understanding of forensic
concepts. After completing the previous section, you should now be familiar with using
forensic analysis tools.
Cybersecurity professionals must be able to recognize and use common security tools
as these will be important for many day-to-day security activities. Nmap, dd, hashing
utilities, netstat, vmstat, Wireshark, and tcpdump are tools that will feature in the CASP+
CAS-004 certification exam's questions. Specialist binary analysis tools are not as commonly
used outside of specialist job roles.
These skills will be useful when we move on to the next chapter, where we will cover
securing enterprise mobility and endpoint security.
Questions
Answer the following questions to test your knowledge of this chapter:
1. During a security incident, a team member was able to refer to known documentation
and databases of attack vectors to aid the response. What is this an example of?
A. Event classification
B. A false positive
C. A false negative
D. A true positive
3. During a security incident, a senior team leader coordinated with members already
dealing with a breach. They were told to concentrate their efforts on a new threat.
What process led to the team leader's actions?
A. Preparation
B. Analysis
C. Triage event
D. Pre-escalation tasks
A. Preparation
B. Detection
C. Analysis
D. Containment
A. Preparation
B. Detection
C. Analysis
D. Containment
6. After a security incident, workstations that were previously infected with crypto-
malware were placed in quarantine, wiped, and successfully scanned with an
updated antivirus. What part of the incident response process should be followed next?
A. Analysis
B. Containment
C. Recovery
D. Lessons learned
A. Communication plan
B. Runbooks
C. Configuration guides
D. Vendor documentation
8. Critical infrastructure has been targeted by attackers who demand large payments
in bitcoin to reveal the technology and keys needed to access the encrypted data.
To avoid paying the ransom, analysts have been tasked to crack the cipher. What
technique will they use?
A. Ransomware
B. Data exfiltration
C. Cryptanalysis
D. Steganalysis
A. Containment
B. SOAR
C. Communication plan
D. Configuration guides
10. A technician who is part of the IRT is called to take a forensic copy of a hard drive
on the CEO's laptop. He takes notes of the step-by-step process and stores the
evidence in a locked cabinet in the CISO's office. What will make this evidence
inadmissible?
A. Evidence collection
B. Lack of chain of custody
C. Missing order of volatility
D. Missing memory snapshots
11. A forensic investigator is called to capture all the possible evidence from a
compromised laptop. To save battery life, the system is put into sleep mode. What
important forensic process has been overlooked?
A. Cloning
B. Evidence preservation
C. Secure storage
D. Backups
12. A forensic investigator is called to capture all possible evidence from a compromised
computer that has been switched off. They gain access to the hard drive and connect
a write blocker, before recording the current hash value of the hard drive image.
What important forensic process has been followed?
A. Integrity preservation
B. Hashing
C. Cryptanalysis
D. Steganalysis
13. Law enforcement needs to retrieve graphics image files that have been deleted or
hidden in unallocated space on a hacker's hard drive. What tools should they use
when analyzing the captured forensic image?
14. FBI forensics experts are investigating a new variant of APT that has replaced Linux
operating system files on government computers. What tools should they use to
understand the behavior and logic of these files?
A. Runbooks
B. Binary analysis tools
C. Imaging tools
D. vmstat
15. A forensic investigator suspects stolen data is hidden within JPEG images on a
suspect's computer. After capturing a forensic image, what techniques should they
use when analyzing the JPEG image files?
A. Integrity preservation
B. Hashing
C. Cryptanalysis
D. Steganalysis
A. Nmap
B. Aircrack-ng
C. Volatility
D. The Sleuth Kit
17. A forensic investigator is called to capture all possible evidence from a compromised
computer that has been switched off. They gain access to the hard drive and connect
a write blocker. What tool should be used to create a bit-by-bit forensic copy?
A. dd
B. Hashing utilities
C. sha256sum
D. ssdeep
18. To stop a running process on a Red Hat Linux server, an investigator needs to see all
the currently running processes and their current process IDs. What command-line
tool will allow the investigator to view this information?
A. netstat -a
B. ps -A
C. tcpdump -i
D. sha1sum <filename>
19. While analyzing a running Red Hat Linux server, an investigator needs to show
the number of available computing resources and currently used resources. The
requirements are for the running processor, memory, and swap space on the disk.
What tool should be used?
A. vmstat
B. ldd
C. lsof
D. tcpdump
20. During a live investigation on a Fedora Linux server, a forensic analyst needs to
view a listing of all opened files, the process that was used to open them, and the
user account associated with the open files. What would be the best command-line
tool to use?
A. vmstat
B. ldd
C. lsof
D. tcpdump
21. While analyzing a running Red Hat Linux server, an investigator needs to run
commands on the system under investigation to reflect all the outputs on the
forensics workstation. The analyst also needs to transfer a file for investigation using
minimum interactions. What command-line tool should be used?
A. netcat
B. tcpdump
C. conntrack
D. Wireshark
22. Security professionals need to assess the security of wireless networks. A tool needs
to be identified that allows wireless traffic to be monitored, and the WEP and WPA
security to be attacked (via packet injection) and cracked. What would be the best
command-line tool to use here?
A. netcat
B. tcpdump
C. Aircrack-ng
D. Wireshark
23. A forensic investigator needs to search through a network capture saved as a pcap
file. They are looking for evidence of data exfiltration from a suspect host computer.
To minimize disruption, they need to identify a command-line tool that will
provide this functionality. What should they choose?
A. netcat
B. tcpdump
C. Aircrack-ng
D. Wireshark
24. A forensic investigator is performing analysis on syslog files. They are looking for
evidence of unusual activity based upon reports from User Behavior Analytics
(UBA). Several packets show signs of unusual activity. Which of the following
requires further investigation?
A. nc -w 180 -p 12345 -l < shadow.txt
B. tcpdump -i eth0
C. conntrack -L
D. exiftool nasa.jpg
25. Recent activity has led to an investigation being launched against a recent hire in
the research team. Intellectual property has been identified as part of code now
being sold by a competitor. UBA has identified a significant amount of JPEG
image uploads to a social networking site. The payloads are now being analyzed by
forensics. What techniques will allow them to search for evidence in the JPEG files?
Answers
The following are the answers to this chapter's questions:
1. A
2. C
3. C
4. A
5. D
6. C
7. B
8. C
9. B
10. B
11. B
12. A
13. A
14. B
15. D
16. A
17. A
18. B
19. A
20. C
21. A
22. C
23. B
24. A
25. A
Section 3:
Security Engineering
and Cryptography
In this section, you will learn how to deploy various controls to protect enterprise data
and systems. You will learn how to protect end devices using a variety of different
technologies and learn the importance of cryptography and PKI when protecting
enterprise data.
This part of the book comprises the following chapters:
9
Enterprise Mobility and Endpoint Security Controls
In this chapter, we will learn about the tools and techniques needed to secure our
endpoint devices. We will study the following topics:
Managed configurations
There are many configuration settings available on the typical operating system, no matter
who the vendor is. Without the proper management of these settings, devices that access
corporate data will present a security risk. In the following subsections, we will take a look
at some of the available configuration options.
Application control
It is important that we can control applications that are installed in the enterprise
workspace – that is, every application should be justified by a business need. When users
have access to the Google Play Store, Apple App Store, or Microsoft Store, they have the
option to install hundreds of applications. Applications often have multiple configuration
options, such as access to contacts information or access to location data. These settings
can be controlled by using mobile application management (MAM) tools, deployment
scripts, or Microsoft Group Policy Objects (GPOs). To allow for some flexibility in our
application control, we can enable containerization for enterprise applications and data.
Many vendors offer solutions that allow an organization to host a customized or restricted
list of applications on their own company-branded portal. Figure 9.1 shows a user view of
the Microsoft Store with Trainingpartners company branding:
Passwords
At a minimum, we should enable screen locks and a minimum password length (based on
company standards). This is to ensure that the data on the device will not be accessible if
the device is lost or stolen.
MFA adds another line of defense in cases where a password could be compromised.
Patch repositories
In order to offer the best levels of protection, it is important to access the latest patches
for supported end devices and ensure that they are deployed in a timely fashion. Failure
to implement patches will result in vulnerabilities that could have been mitigated.
Additionally, not installing the latest patches may render a device non-compliant, and
the user may not be able to access company resources until the device is updated and
made compliant.
Wi-Fi
Wi-Fi is the primary connection type for most mobile devices, although it is also
commonplace to access 4G+ and 5G cellular networks at high speed with limitless data
plans. In both instances, we should look to secure these links.
Device certificates
It is common for security certificates to be distributed to users and devices for a variety
of purposes. We can use certificates to gain trusted access to the network using 802.1X
security. There are many security applications that also require the use of certificates, such
as IPsec-based virtual private networks (VPNs), the S/MIME standard, and Pretty Good
Privacy (PGP).
To automate the deployment of certificates, we can use the Simple Certificate Enrolment
Protocol (SCEP). This is supported in most MDM solutions. Figure 9.4 shows the SCEP
enrolment process:
Device profiles
To automate common settings on mobile devices, a profile with a set of configuration
objects can be deployed to the devices. Common profiles can include settings for Wi-Fi,
VPNs, and many more. Figure 9.5 displays some available settings when deploying device
configuration profiles:
Bluetooth
Bluetooth is a relatively short-range radio frequency technology. Class 2 Bluetooth (which is most
common for consumer devices) transmits at 2.5 mW and has a range of around 10 meters.
It allows data to be sent wirelessly with no encryption. This means wireless headsets could
allow users to attend a secure web conference but a wireless listening device could be used
to eavesdrop on the conference. It is common to restrict the use of Bluetooth in these
situations.
Near-field communication
Near-field communication (NFC) operates at low speeds over short distances (up to
4 cm). It can be used to transfer data – such as configuration data or print jobs – to an
NFC-enabled printer. It is commonly used in the wireless payment systems for Apple Pay
and Google Pay. Once a user has unlocked a mobile device, then NFC-enabled applications
may transfer data without any user verification. This function could lead to fraudulent
payment transactions (currently the United Kingdom allows up to £100 to be transferred
using contactless payment). It could also allow confidential data to be transferred.
Peripherals
Mobile devices can support extensibility through SD and MicroSD cards or USB
On-The-Go (OTG). This allows for data transfer and connectivity using USB devices.
Geofencing
One effective way to control the functionality of mobile devices is to use GPS tracking
to locate devices and restrict functions based upon the proximity of the device to a site
of interest. So, we could map out the coordinates of a secure site and create a geofence.
When devices are inside the geofence, we can disable the devices' cameras and any
recording capability, and when devices are outside the geofence, normal service can be
restored. In case a company device is stolen, we can create conditions where the device is
wiped of all its enterprise data when it is taken outside the geofence.
VPN settings
VPN settings can be configured by deploying a VPN profile to a mobile device. It is
important to ensure confidentiality of business data when company employees are outside
the workplace. An always-on profile ensures that the device will always connect through
the secure VPN. Implementing a full tunnel configuration will ensure all applications and
browser activity is routed through the company network. This configuration will make
sure all security protocols can be applied. Figure 9.6 shows an example of a VPN client
using a full tunnel VPN:
Geotagging
An organization should consider the security of applications that use geotagging. For
example, users could install a fitness application on a mobile phone that automatically
uploads their activity (complete with maps) to social media. This is quite common when
you look at applications such as Strava or MapMyRun. In Figure 9.7, we can see the
mapping feature on the Strava application:
If you allow other users to view your activities, then you may reveal sensitive details about
your movements or reveal information regarding sensitive locations.
Tethering
Because mobile data plans can be very affordable on cellular networks, it is common for
users to use their mobile phones as a hotspot for their laptops. This presents a security
risk when a laptop is plugged into the enterprise local area network (LAN), as the laptop
can now bridge two networks.
Airplane mode
Mobile devices commonly have a setting that turns off all radio frequency channels (for
example, airplane mode). This could be a useful configuration option to invoke when
location-based security is a priority. Disabling radio frequency channels protects the
device from network-based threats. It also means the device cannot download security
updates or be managed using Mobile Device Management (MDM).
Location services
Mobile operating systems often have privacy options associated with location services.
If location services are enabled, then content providers can deliver more tailored
information for users. Microsoft now includes a feature on Windows operating systems
called News and Interests that delivers traffic reports, weather forecasts, and local news
to devices based on their location. You may not want this location information to be
available for all applications. We can restrict these features in the Microsoft privacy
settings seen in Figure 9.8:
Deployment scenarios
There are many different scenarios that an organization may consider when deploying
mobility management. For example, costs can be a driving factor and support issues
may be important. But security and management will always be the priority when
implementing MDM.
Corporate-owned devices
When an organization is responsible for purchasing the mobile devices that will be used
by employees, there can be much more control in selecting the operating systems and
devices that best suit the organization's business needs.
Physical reconnaissance
Mobile devices can be used to gather intelligence. If we allow the use of camera, video,
audio, and GPS functionality on mobile devices, a malicious user could capture accurate
data about a site that could later be used in an attack. This device may belong to an insider
threat actor or the data could be stolen from an unsecured mobile device.
Health privacy
Many devices will support fitness applications that can store the medical records of
users. Due to the COVID-19 pandemic, public health service tracking applications are
also commonplace. In the United Kingdom, there are National Health Service (NHS)
applications that can store a citizen's COVID-19 status and records of their immunization
history. This data would be referred to as PHI in the United States. It is important that this
type of data is held securely.
Containerization
When we allow the personal use of mobile devices that will also access or store business
data, we need to implement segmentation. Segmenting applications and data on mobile
devices is referred to as containerization. On Samsung devices, there is a vendor-created
container called Secure Folder. This can be managed using the Samsung Knox security
tools, or it can be managed using third-party MDM tools. Currently, Knox is supported
by Microsoft Endpoint Manager (previously Microsoft Intune), VMware AirWatch,
Ivanti MobileIron, and many more. Figure 9.10 shows the Secure Folder application on a
Samsung 9.0 device:
Hardening techniques
It is important to recognize that there are many ways to strengthen the security posture of
devices. In this section, we will investigate some of the hardening techniques available.
Disabling unneeded accounts
When an account does not need to be used in an information system, it should be disabled.
Shell restrictions
To limit the ability of a user to access command shell tools and utilities, it is common
to block access to the shell itself or to limit the commands that can be accessed within
the shell. On Windows operating systems, it is common practice to block access to the
Command Prompt (CMD) and PowerShell interfaces for standard users. This is not
really an option for Linux and Unix systems because most of the functionality of these
operating systems is accessed outside of the graphical user interface (GUI). Common
shells used on Linux include Bash and KornShell. In Figure 9.13, we have created a new
user account with a restricted shell by using the -s switch:
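A minimal example of this approach on a Linux system (assuming the restricted Bash shell, rbash, is available at /bin/rbash; the username is a placeholder) is shown here:
sudo useradd -m -s /bin/rbash restricteduser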
Patching
It is important to identify all of our assets and ensure we have the automated patching
of these devices enabled. If we cannot deploy vendor patches, we risk the possibility of
vulnerabilities leading to data loss or we might encounter availability issues. Patching
should be planned to avoid the potential of these disruptions and availability issues.
Ideally, we would fully test the effects of operating system or software application
patches in a test/staging area before deploying them into production. Figure 9.15 shows a
screenshot from ManageEngine Patch Manager Plus:
Firmware
It is important to update embedded operating system code, as this could be vulnerable to
attacks if not kept up to date. For example, Cisco IOS has several vulnerabilities posted
for potential attacks against its Session Initiation Protocol (SIP) that can cause a Denial
of Service (DoS) condition. CVE-2008-3799 documents a cybersecurity vulnerability,
where an attack can cause a memory leak, eventually resulting in resource exhaustion.
SIP is used with Voice over Internet Protocol (VoIP) to place calls between subscribers. As
many business telephony solutions have adopted this technology, it is important to secure
this functionality.
SELinux
SELinux was originally developed as a series of Linux security modules that were
added as patches to the Linux operating systems. It was originally developed by the
U.S. National Security Agency (NSA) but is now open source. It is incorporated into
most Linux distributions. With most operating systems, file ownership is controlled
through discretionary access control (DAC), which means the owner can set or change
permissions on files and folders that they own. The root account normally also has
the right to change the permissions on other users' files and folders, however, SELinux
enforces MAC, which means changing other users' permissions will not work (as the
system will override these changes).
SELinux has three main settings:
• Enforcing—the SELinux policy is enforced; violations are blocked and logged.
• Permissive—violations are logged but not blocked, which is useful when troubleshooting policies.
• Disabled—no SELinux policy is loaded or enforced.
There are hundreds of possible enforcement settings. These are delivered via Boolean
settings (that is, on/off options). To view all of the currently enforced settings, you can
run getsebool -a from the Linux command shell.
Figure 9.17 shows the partial output of the preceding command for some SELinux
enforceable settings:
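For example, the current mode and individual Booleans can be inspected and changed with commands such as the following (the Boolean name is only an example and will vary between distributions and policies):
getenforce
getsebool -a | grep ftpd
sudo setsebool -P ftpd_anon_write off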
Secure Boot
To ensure the integrity of operating systems, it has become a standard procedure to check
the integrity of the bootloader needed to launch the installed operating system. Windows
operating systems from Windows 8.0 onward have supported this feature. This requires
a supported firmware interface (that is, one following the Unified Extensible Firmware
Interface (UEFI) specification) that comes with a pre-loaded security key. The boot files
are signed using the pre-loaded key and then validated during the boot sequence. Systems
sold with Microsoft Windows 10 pre-installed have Secure Boot enabled by default.
Figure 9.19 shows the components required for Secure Boot:
UEFI
UEFI has been the standard firmware to support PC systems since Windows 8 was
released in 2012. It offers many advantages over the previous option (the Basic Input
Output System (BIOS)), including a mouse-driven graphical environment. UEFI allows
for system checks and security processing to be carried out before launching the operating
system. As operating systems launch quickly when installed on solid-state drives (SSDs),
it is hard to press the correct function key quickly enough to access the menu. It is
common to access the UEFI settings through the Windows Advanced Recovery options
menu, within the operating system. Figure 9.20 shows the Windows menu used to access
the UEFI settings:
BIOS protection
The BIOS component has been standard within computing systems since the 1980s.
It allows for a series of power-on self-tests (POSTs) and for configuration settings to
be saved to the complementary metal-oxide semiconductor (CMOS), which needs a
power supply or battery to retain settings. It offers little in the way of security, apart from
allowing for passwords to protect the BIOS settings.
Attestation services
Attestation services allow for secure values to be forwarded from a hardware device to
an attestation service. Microsoft supports a service called the Host Attestation Service
within Azure Cloud Services. Once a device has been registered, the host TPM can
be used to store values that cannot be tampered with. An example use of this service is
ensuring that host operating systems are not running debugging tools as part of the host
operating system. This is because debugging tools could allow attackers to gain access to
local system memory and therefore access to confidential data.
Measured Boot
With Measured Boot, PCR values can be validated from the TPM. The values
themselves are stored as cryptographic hashes. The hashes form a blockchain, which
means a number of values can be combined into a single hash value. Figure 9.21 shows the
hash chaining function:
Self-encrypting drives
When considering using FDE, performance may also be a goal. Self-encrypting drives
(SEDs) use built-in hardware to provide encryption, thereby offering a performance
advantage over software encryption. This is useful when incorporated into mobile
computing devices.
Compensating controls
Once an organization has deployed secured/hardened computing systems, we cannot
guarantee that the systems will be 100% protected. Depending on the new external threats,
insider threats, and attack techniques that may develop for an organization, additional
compensating controls may have to be considered. We will discuss some examples of these
in the following subsections.
Antivirus tools
Antivirus tools should be deployed as a centrally managed solution for Windows
operating systems. Windows is the main target for malware – there are very few examples
of malware infections for Unix and Linux systems. However, Linux file servers and mail
servers could pass on infected files if they are serving Windows clients. Traditional
antivirus software works by identifying malware by using signature definition files.
However, a virus may be programmed to change its signature, therefore defeating the
signature definition approach. As a result, newer antivirus tools can look for common
strings within a suspicious file. Windows comes with built-in protection with Microsoft
Windows Defender, but additional malware detection application suites are available to
further complement this.
Host-based firewalls
A local firewall will complement the network firewalls by blocking unwanted connections
on internal network segments (also known as east-west traffic). To do this, host-based
firewalls are frequently deployed on the most commonly used operating systems. Microsoft
Windows has Windows Defender Firewall, which can be managed using graphical tools or
with PowerShell commands. Figure 9.22 shows a screenshot of Windows Firewall:
To display help on the available iptables options, we can use the iptables --help
command.
To discard all firewall rules that are currently set, we can use the iptables --flush
command.
Figure 9.23 shows some iptables firewall rules that are set to block traffic being received
from any private network address:
To display rulesets (called chains), we can use the iptables --list command from the
Linux bash shell.
It is normal to add a rule to a security device to block source traffic originating from a
private network address (like the example in Figure 9.23). This is done where traffic is
routed in from external networks, as this type of traffic normally indicates that the
addresses are being spoofed.
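A sketch of such rules, dropping inbound traffic that claims to come from the RFC 1918 private ranges on an Internet-facing interface (eth0 here is a placeholder), might look like this:
sudo iptables -A INPUT -i eth0 -s 10.0.0.0/8 -j DROP
sudo iptables -A INPUT -i eth0 -s 172.16.0.0/12 -j DROP
sudo iptables -A INPUT -i eth0 -s 192.168.0.0/16 -j DROP
sudo iptables -L -v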
Redundant hardware
When hosting important systems that may result in a disproportionate impact if they fail,
we must identify ways to make them more resilient. On data center servers, it is common to
deploy redundant power supplies, network interface cards, and storage. In these situations, it
is important to identify where single points of failure can negatively impact a workload.
Self-healing hardware
Wherever it is possible to increase resilience, we should implement systems that can identify
problematic events and where necessary, make an adjustment. One example of this is using
hardware clusters, where a failure to send a heartbeat signal would result in a workload
being hosted on another hardware node. Modern hard disk drives automatically mark
out bad blocks and replace them with reserve blocks held in a reserve pool. On Microsoft
operating systems from Windows Server 2012 onward, the Resilient File System (ReFS)
has been an available feature that allows this. The ReFS deploys an integrity scanner that
constantly scans the hosted drives and initiates a repair process if needed.
Summary
In this chapter, we have learned how to provide security for endpoint devices running a
variety of operating systems. We have understood the need to harden end devices such as
traditional desktop computers, laptops, tablets, and handheld and wearable technology. We
have discussed how to assess the security of these devices, how to choose the appropriate
technologies that the enterprise should adopt, and how to ensure we can provide the
attestation that these devices are compliant with security policies. We have also understood
the need for deployed images to be built from a validated secure template. We have
learned that services should only be enabled if there is a business need to justify them. We
investigated compensating controls, including host firewalls, EDR software, and antivirus
tools. We also learned about the tools and techniques needed to secure our endpoints. We
have studied technologies to support host attestation and Secure Boot options.
This information should give the reader a good baseline understanding of endpoint
security, before we move on to the next chapter, where we will study how to secure
critical infrastructure.
Questions
1. Some executives from an organization attend an industry conference. Using mobile
devices and wireless headsets, they are able to stay in touch with colleagues back at
the workplace. What may present a security concern in this situation?
A. Tethering
B. WPA3
C. Device certificates
D. Bluetooth
A. NFC
B. A split-tunnel VPN
C. Geofencing
D. Always-on VPN settings
3. What function should be disabled to ensure scientists cannot use their mobile
devices to bridge the corporation's network with a cellular operator's network?
A. Tethering
B. WPA3
C. Device certificates
D. Bluetooth
A. Containerization
B. Token-based access
C. A patch repository
D. Whitelisting
5. A user calls the service desk because her Samsung smartphone is prompting her to
install updates that the vendor says will offer more functionality and security. What
is this an example of?
A. MFA requirements
B. Token-based access
C. A patch repository
D. Firmware over-the-air
6. An employee's company mobile device is reported as stolen 24 hours after the event.
Sensitive data has been posted online by hackers. What would have mitigated this
risk if the report had been made earlier?
A. MFA requirements
B. A remote wipe
C. A patch repository
D. Firmware over-the-air
7. What type of setting will ensure mobile devices will only be able to access Wi-Fi
when they connect securely to the company WLAN?
A. WPA3 SAE
B. Device certificates
C. Device profiles
D. Bluetooth
8. An employee has noticed several suspicious payments made from a company debit
card via Google Pay on their company smartphone. They recently attended a busy
trade conference. What technology was likely used to make the payments?
A. NFC
B. Peripherals
C. Geofencing
D. VPN settings
9. How can we prevent certain mobile applications from being accessible when
employees take COPE devices out of the warehouse?
A. NFC
B. MFA
C. Geofencing
D. VPN settings
10. The service desk receives a call from a senior manager. She is concerned that
spyware may be installed on her smartphone. Recent news, traffic, and weather
updates have been targeted specifically for her location. What is the most likely
reason for this activity?
A. Airplane mode
B. Location services
C. NFC
D. Geofencing
11. A user is concerned that DNS lookups may be logged by government agencies. The
user would like to protect their privacy. What would be the best method to protect
privacy during name resolution?
A. Geofencing
B. VPN settings
C. DNS over HTTPS (DoH)
D. Containerization
12. A nation-state sends a security team to scope out a military site in California in
the United States. They use mobile devices to gather images, map the locations
of communications equipment, and record detailed information about troop
movements. What are they performing?
A. Geotagging
B. Geofencing
C. Physical reconnaissance
D. Personal data theft
13. A personal device has many applications installed that are not available through the
Apple App Store. The device subsequently fails compliance checks. What has likely
made the device fail to be compliant with the security policies?
A. Jailbreaking
B. Sideloading
C. Containerization
D. An unauthorized application store
14. A senior employee has followed a QR code link and installed a mobile application used
to order food and beverages at a local restaurant. The application is not available on
the Google Play Store. Acceptable Use Policy (AUP) states that applications can
only be downloaded from the official vendor store. What best describes what has
allowed this application to be installed?
15. Developers need to test mobile applications on a variety of hardware before making
them available on official application stores. How can they install the applications
locally on mobile devices?
16. A sales director would like to allow sales employees to use their personal devices
for accessing company applications and data as part of an effort to reduce business
costs. What would be the best control to mitigate the risk of employees co-mingling
personal and company data?
A. Geotagging
B. Geofencing
C. Containerization
D. Remote wipes
17. When on a business trip, a CEO was detained for several hours at border control.
When he was eventually reunited with his mobile phone, it had physical evidence of
tampering. He powered on the device and input the correct pin, but found that all of
the company applications and data were inaccessible. What has led to this situation?
A. Geofencing
B. Containerization
C. Remote wipes
D. An eFuse
18. A user has been able to run an unmanaged Linux operating system alongside
a managed Windows 10 build on a company laptop. What actions would allow
security professionals to prevent this issue from re-occurring?
19. Security administrators have deployed SELinux in enforcing mode. All unnecessary
services have been removed. In a further attempt to enforce security, a number of
commands – including vmstat and grep – have been blocked from some user
accounts. What best describes this action?
A. Whitelisting
B. Shell restrictions
C. ASLR
D. Memory encryption
20. The CISO is meeting with software engineers to better understand some of
the challenges that they face. He is asking if there are any settings that can be
incorporated into build images that will help to prevent attacks against the system
memory. What two features should be chosen?
A. ASLR
B. Patching
C. Firmware
D. NX/XN
21. What is deployed to mitigate the risk of privilege elevation and the misuse of
applications on Android mobile devices?
A. SELinux
B. TPM technology
C. SEAndroid
D. Attestation services
22. What built-in module stores PCR values and enforces integrity on a
hardware platform?
23. What would be the best choice of technical control to block a fast-spreading worm
that targets a well-known NetBIOS port?
A. UEBA
B. A host-based firewall
C. A HIDS
D. Redundant hardware
24. A reporting tool has alerted the administrator that Joe Smith, who is leaving
the company in 4 weeks, has uploaded a large number of PDF documents to his
personal cloud storage. What has likely triggered this event?
A. UEBA
B. A host-based firewall
C. EDR software
D. Self-healing hardware
25. A system administrator needs to ensure the root account cannot be used to gain
access to user data on a Linux Network File System (NFS) server. What actions
would allow security professionals to prevent this issue from occurring?
Answers
1. D
2. D
3. A
4. D
5. D
6. B
7. C
8. A
9. C
10. B
11. C
12. C
13. A
14. D
15. B
16. C
17. D
18. C
19. B
20. A and D
21. C
22. A
23. B
24. A
25. B
10
Security Considerations
Impacting Specific
Sectors and Operational
Technologies
An enterprise may operate in a diverse environment. When considering the operation
of the plant and equipment within operational technologies, it is important to fully
investigate legal and regulatory responsibilities. Many countries have different laws and
regulatory requirements; in some cases, we may even see regulations differing between
different states. Fines can be significant when it is proven that a company has broken laws
or safety protocols. The technology used to deliver automation and supervisory control
can be complicated and, in some cases, lacks the strict security that may be available
on the business network. It is important that senior leadership within the organization
understands the importance of cyber security and maintaining a strong security posture.
They should sponsor and drive the enterprise to ensure it performs the appropriate risk
assessments. This is especially important when diversifying into new markets or business
sectors. In this chapter, we will study the following topics:
Energy sector
Energy suppliers must adhere to regulatory authorities that may be international
standards and may include country-specific requirements. In the US, electric utility
companies must comply with laws and regulations relating to air pollution, greenhouse
gas reporting, and industrial waste production. Agencies oversee these providers, and a
lack of compliance may result in a substantial fine. The US Environmental Protection
Agency (EPA) has successfully prosecuted companies through civil enforcement,
resulting in significant financial loss for non-compliance. The nuclear power regulations
are country-specific—for example, United Kingdom (UK) regulations are overseen
by the Office for Nuclear Regulation (ONR), which is responsible for the regulation
of 36 nuclear facilities, including security and safety requirements. Failure to comply
with the ONR can result in a site losing the authority to operate or facing substantial
monetary outlay in fines and retrospective provisions. In April 2019, Sellafield Ltd. was
fined nearly £500,000 for an H&S violation that resulted in an employee receiving mild
plutonium poisoning. For details, see the following link: https://tinyurl.com/
ONRnuclearfines.
Manufacturing
Manufacturers must prove to their customers that they can be trusted to deliver quality
products and have sound business practices. Accreditation can be obtained to indicate
a level of competency, such as the International Organization for Standardization
(ISO) 9001. These standards cover quality control (QC) for manufacturing companies.
Many other standards may need to be considered when a company is looking to produce
particular goods. Some standards may be enforced nationally and internationally to ensure
product compliance. In the UK, certain standards must be observed when operating
plants and equipment in a manufacturing environment. Some of these requirements are
quite complicated and require appropriate policies and procedures to be created to ensure
compliance. Within the UK, standards are covered by the following bodies:
• ISO
• International Electrotechnical Commission (IEC) and International
Telecommunication Union (ITU)
• British Standards Institution (BSI)
• European Committee for Standardization (CEN)
• European Committee for Electrotechnical Standardization (CENELEC)
• European Telecommunications Standards Institute (ETSI)
To sell products within a particular country, a manufacturer must follow these standards.
Plant and equipment must be operated within strict safety guidelines. Of greater
importance is the risk of intellectual property (IP) being stolen by an adversary. Patents
can be applied for in countries where there is a threat from a competitor who may steal
your inventions. In Europe, the European Patent Convention (EPC) covers 30 European
countries. To protect your IP in other parts of the world, individual patents must be
registered in each country.
Healthcare
There are many regulatory requirements for providers of healthcare, and this will differ
between countries. In the US, the Health Insurance Portability and Accountability Act
(HIPAA) requires strict control of protected health information (PHI) records stored
and transmitted electronically. HIPAA compliance is regulated by the Department
of Health and Human Services (HHS). If a healthcare provider fails to implement
the appropriate due diligence and due care, they may be prosecuted and fined for
transgressions. Protected information includes names, addresses, medical records, social
security numbers, and much more. There are strict reporting requirements for data
breaches; affected individuals must be notified within 60 days and HHS must also be
notified within 60 days for data breaches involving more than 500 records. Enforcement
is the responsibility of the Office for Civil Rights (OCR). There are different violation
levels—level one carries a maximum single fine of $59,522, rising to $1,785,651 for a level
four violation. An employee can also be sentenced to jail for serious violations—in the
most severe cases, this can be up to 10 years.
Healthcare providers such as hospitals and clinics must make significant investments
in medical equipment and systems to deliver efficient healthcare. Due to the high cost of
this equipment, it must remain in service for extended periods of time and may present a
vulnerability as the technology becomes outdated.
Public utilities
In the US, public utilities are regulated under the Public Utility Regulatory Policies
Act (PURPA). This act was passed after the energy crisis in 1973, due to an oil embargo
following the Yom Kippur war, where the price of oil increased by over 300%. It
is designed to promote the production of domestic energy and renewable energy.
Enforcement and regulatory requirements are managed within individual states within
the US. Rates charged by public utility companies must be fair and non-discriminatory.
In the UK, gas and electricity providers are regulated by the Office of Gas and Electricity
Markets (Ofgem). There are controls in place to ensure price rises are capped to the
current inflation rate. By far the most important concern for operators of public utilities is
cyberattacks against their services and data breaches concerning customer records.
Public services
Public service providers (PSPs) include education, emergency services, healthcare,
housing, waste collection, transportation, and social care (there are more). They are
services that are provided by the government and are not intended to generate profits for
the provider. They are provided to benefit the community. Public services are targeted by
cybercriminals; healthcare organizations have been hit particularly hard by ransomware
attacks. In May 2021, the Irish Department of Health was targeted by a group known as
Conti; the attack forced the service to cancel many patient services and to fall back on pen
and paper in lieu of its usual information systems. It is estimated that around 35% of
ransomware victims pay criminals to regain access to their systems and data.
Facility services
Facility services can include cleaning, maintenance, security, waste management, catering,
and much more. Contracting with third parties to provide facility services can result
in flexibility and cost savings. It is important to consider downstream liability when
contracting out any part of your daily operations.
Many regulated sectors have a requirement for OT. This environment is sometimes
referred to as IT outside of carpeted areas. A metropolitan transportation service
provider (TSP) running a subway transit system would meet this criterion. They need
to monitor safety systems, subway trains, critical signaling, and passenger movements
and send urgent messages across these non-business networks. We will take a look at
components used within these types of environments.
Internet of things
Internet of things (IoT) covers many technologies, including home automation,
building control systems, and many other areas where automation of hardware is required.
Many IoT devices operate wirelessly, making them potential targets if we do not harden
our wireless networks. Consumer products include lightbulbs, smart speakers, televisions,
refrigerators, and much more. IoT does not define a standard—it relies on standards
provided by other protocols. We will compare these protocols and standards later in
the chapter.
System on a chip
A system on a chip (SoC) is a single piece of silicon that contains a central processing
unit (CPU), memory, storage, and input/output (I/O) ports. Tablets and smartphones
would be good examples of where SoC technology is used. By integrating all the
components required on a compute node, power consumption is much reduced. Examples
of SoCs would include Qualcomm Snapdragon, Advanced RISC Machines (ARM),
Apple's M1, and Intel Core Consumer Ultra-Low Voltage (CULV).
In many situations when vulnerabilities are discovered, the hardware may need to
be replaced.
Embedded systems are integrated into environments where OT is deployed—typically,
this means anywhere outside of the enterprise business network where there may be a
plant and equipment that requires supervision and control. We will now look at example
OT environments.
Understanding ICS/SCADA
There are many examples of OTs that are deployed to automate the delivery of industrial
processes and critical infrastructure. ICS are deployed within manufacturing and process
control environments. They allow production lines to run and chemicals to be processed
and delivered at the correct flow rates. It is important that information in the form of
telemetry is displayed on management systems. There are many components required
to manage complex processing environments. Figure 10.2 shows the components of a
SCADA system:
PLCs
OT relies on specialist ICS, designed to operate in hostile or challenging environments.
PLCs are designed to operate in factories, processing plants, and many other industrial
settings. PLCs are used to automate a physical process such as a luggage conveyor belt
at an airport, or a traffic light system in a mine. Unlike information systems hosted in
a business environment, PLCs do not use commercial operating systems (OSes). They
will run specialist embedded OSes designed to deliver the instructions needed to control
industrial equipment. The first example of a PLC was used by General Motors at their
car assembly plant in 1968; this PLC was named the Modicon 084. PLCs need to process
instructions in real time in order to accurately control or adjust critical processes. The
following screenshot shows an array of available PLCs used in industrial environments:
Historian
Industrial control environments use a database logging system known as an operational
historian. The data collected will be gathered from process controls such as sensors,
instrumentation, and other types of controls. This allows for the capture of data that can
be interpreted to show trends and allow engineers to analyze the data and adjust processes
where necessary. A Historian system is composed of three main components:
• Data collectors for interfacing with the data sources, such as PLCs and networked devices
• Server software that processes and stores the data from the data collectors
• Client applications that allow for analysis, reporting, and visualizations
Ladder logic
This is a simple programming language based upon relay-based logic, used originally in
electromechanical relays. The program will process multiple inputs or signals and can
perform a function if all the expected inputs are received while processing the logic.
Engineers originally used ladder logic to design circuit boards to activate mechanical
relays; now, this same approach is used in a visual programming language. Ladder logic
processes instructions using rungs (like a ladder) from top to bottom and from left to
right. The following screenshot shows an example of ladder logic:
Understanding OT protocols
Many supported protocols can be found in ICS and OT environments. Some protocols
have been in existence for over 50 years and lack controls to encrypt data or provide
integrity. Newer protocols have robust security built in. We will take a look at examples of
these protocols.
CAN bus
The controller area network (CAN) specification does not support any native security
or encryption; it is intended as a specification for reliable transmission of data frames
across a shared bus. CAN is documented within the ISO 11898-1 standard and defines
the data link and physical layers of the Open Systems Interconnection (OSI) model.
Vendor implementations can include provisions for security, but this is not part of
the CAN specification. Figure 10.5 depicts a CAN bus model used in a vehicle.
Modbus
Modbus is a messaging protocol used in ICS to provide serial communication over
different cable types, including Ethernet. It is popular as it is royalty-free and has become
a de facto standard within OT environments. Modbus is typically deployed on SCADA
networks where it can relay telemetry back to monitoring computers from remote
terminal units (RTUs) in an industrial environment. Modbus was designed in the
1970s to service industrial communication requirements. Modbus offers no security
against tampering with message integrity and is therefore vulnerable to MITM attacks
if an attacker gains access to the network. There are many examples of vulnerabilities
posted for the Modbus protocol; see the following URL for currently known Modbus
vulnerabilities: https://tinyurl.com/modbus-cves.
Zigbee
Zigbee is a wireless protocol intended primarily for home automation. It allows for
communication with low-power devices over distances varying between 10 and 100
meters, making it an ideal candidate for personal area networks (PANs). It is covered by
the IEEE 802.15.4 standard and operates over 2.4 gigahertz (GHz) radiofrequency. Zigbee
has wide support from many vendors, including Samsung, Amazon, and Ikea. It is not
intended as a data transport but more for control messages, as the data rate only allows
for 250 kilobits/second. The network architecture supports star and tree networks, based
upon a central hub acting as the coordinator. Amazon supports many products that are
capable of acting as a central hub for home automation. With a central hub device, users
can control lighting, central heating thermostats, home security, washing machines, and
many more household appliances. Figure 10.6 shows a typical home automation network,
with a central controller:
Zigbee supports encryption of traffic using 128-bit symmetric keys based upon Advanced
Encryption Standard (AES). Zigbee also supports anti-replay using a frame counter
mechanism and frequency hopping to prevent jamming attacks.
The Common Industrial Protocol (CIP) is implemented through several network
adaptations:
• EtherNet/IP—This is based on standard Ethernet IEEE 802.3 and uses the TCP/IP
protocol suite.
• CompoNet—This implementation is designed for optimum delivery of small
messages between controllers and industrial endpoints (switches, sensors, valves,
and so on).
• ControlNet—This is used when time criticality is important, such as safety control
systems (SCS). It uses a deterministic model, which means a message will be
delivered within a predictable amount of time.
• DeviceNet—This implementation uses CAN for the datalink layer. It supports
direct current (DC) power delivery over the connection, supporting devices up to
24 volts (drawing up to 8 amps).
Figure 10.7 shows the layers and relationships between CIP and the OSI model.
The Data Distribution Service (DDS) is middleware, meaning it is the software layer that
sits between the operating system and applications. It allows the various components
within a system to communicate and share data more easily. It is designed to simplify the
development of distributed systems by allowing software developers to focus on the
functionality of their applications rather than the complexities of passing information
between applications and systems. Figure 10.8 shows the layers that are important for DDS.
Summary
In this chapter, we have taken a look at the challenges when supporting operational
technologies. We have looked at examples of ICS where plant and equipment must be
controlled using SCADA networks. We have studied the importance of adhering to legal
and regulatory responsibilities. We have discussed that countries have different laws and
regulatory requirements. We have seen examples of some significant fines that can be
levied when it is proven that a company has broken laws or safety protocols. We have
looked at the technology used to deliver automation and supervisory control and have
taken a look at popular protocols used on these networks. We have looked at examples of
embedded systems and the challenges that they may bring when networking these devices.
The skills learned in this chapter will be useful as we move on to the next chapter, where
we will take a look in more depth at securing data using cryptography and PKI.
Questions
1. Which regulated business sector is intended to benefit citizens and generate no
commercial profit?
A. Energy
B. Manufacturing
C. Healthcare
D. Public services
2. Which regulated business sector would typically involve the processing and storage
of PHI?
A. Energy
B. Manufacturing
C. Healthcare
D. Public utilities
3. Which regulated business sector may be targeted by competitors who want to steal a
company's IP?
A. Energy
B. Manufacturing
C. Healthcare
D. Public utilities
A. SCADA
B. Zigbee
C. IoT
D. Local area network (LAN)
5. What risk mitigation would be used when supporting SCADA and business
networks for an energy provider?
6. What is a type of processor chip that performs a dedicated task and may be used for
bitcoin mining?
A. IoT
B. SoC
C. ASIC
D. FPGA
7. What is a specialist hardened computer that will control actuators, valves, and
pumps in an industrial environment?
A. Desktop computer
B. PLC
C. Mainframe computer
D. Sensor
A. IoT
B. SoC
C. ASIC
D. FPGA
9. This term covers many technologies including home automation, building control
systems, and many other areas where automation of hardware is required.
A. IoT
B. SoC
C. ASIC
D. FPGA
10. What is the database logging system known as that will collect data from process
controls such as sensors, instrumentation, and other types of controls?
A. Historian
B. Ladder logic
C. SIS
D. HVAC
11. This is a simple programming language based upon relay-based logic, used
originally in electromechanical relays.
A. Historian
B. Ladder logic
C. Zigbee
D. Modbus
12. What is the de facto standard message transport protocol used in industrial
environments that offers no security against tampering with message integrity and
is therefore vulnerable to MITM attacks?
A. CAN
B. Modbus
C. DNP3
D. Zigbee
13. What is the networking middleware known as a pub-sub model that is aimed at
publishing messages to subscribers?
A. CAN
B. CIP
C. DNP3
D. Zigbee
14. Which wireless protocol intended primarily for home automation allows
communication with low-power devices over distances varying between 10 and
100 meters?
A. CAN
B. CIP
C. DNP3
D. Zigbee
15. What is a protocol used for the transmission of messages on industrial networks?
There are four types of networks offering different transport and network models,
including 802.3 Ethernet.
A. CAN
B. CIP
C. DNP3
D. Zigbee
Answers
1. D
2. C
3. B
4. A
5. A
6. C
7. B
8. D
9. A
10. A
11. B
12. B
13. B
14. D
15. B
11
Implementing
Cryptographic
Protocols and
Algorithms
Securing enterprise networks relies on a strategy called defense in depth. One very
important part of defense in depth is protecting data in many different states, primarily
at rest, in transit, and in use. When confidentiality is required, we can apply encryption to
protect sensitive data. In some cases, we must also be able to verify
the integrity of the data using hashing and signing.
Cryptography can be a daunting subject area for IT professionals, with algorithms
consisting of highly complex mathematical ciphers. The job of IT professionals and
management is to ask the right questions and ensure the correct standards and protocols
have been enabled. Regulatory authorities may have very strict requirements when using
cryptographic ciphers to protect data that an enterprise will store, process, and transmit.
It is the job of security professionals to ensure the correct configuration and deployment is
provided for the appropriate technology.
• SHA-224
• SHA-256
• SHA-384
• SHA-512
It is worth noting that SHA-2 is still the preferred option for most operating systems;
Windows operating systems' default tools and utilities are based around SHA-2.
SHA-3 is a family of cryptographic hash functions defined in FIPS PUB 202. The output
sizes in bits are the same as SHA-2 (224, 256, 384, and 512), but the algorithms are named
with a SHA3- prefix (for example, SHA3-256). Most Linux/Unix distributions come with
hashing utilities.
In the following command, we have used a Linux Bash shell command to generate a hash
to verify whether the bootstrap log has been tampered with:
sha256sum bootstrap.log
1709de6f628968c14d3eed2f306bef4f39e4ab036e51386a59a487ec0e4213fe bootstrap.log
Windows PowerShell can also be used to generate hash values. The format is
Get-FileHash <Filename> -Algorithm <hashtype>:
Algorithm Hash
--------- ----
SHA256    4FFA3D9C8A12BF45C4DE8540F42D460BC2F55320E63CAAC4FDFABBB384720B40
To address both integrity and authentication for a packet, we can use hash functions and
a shared secret.
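As a quick illustration (the key and filename shown are placeholder values), OpenSSL can compute a keyed hash (an HMAC) over a file using a shared secret:
# Compute an HMAC-SHA256 over the file, using the shared secret as the key
openssl dgst -sha256 -hmac 'Sh@redSecret' bootstrap.log
Only a party holding the same shared secret can reproduce the value, which provides both integrity and origin authentication for the message.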
Block ciphers
These ciphers are used to encrypt data in blocks, typically 64 or 128 bits. They offer the
most robust security but lack the outright speed that's offered by stream ciphers. The
following are some examples of block ciphers:
• Triple Data Encryption Standard (3DES): 3DES replaced the original Data
Encryption Standard (DES), which was designed and adopted in the 1970s.
DES offered a key size of only 64 bits (56 bits for the key itself). In 1999, 3DES
became the new standard, while an alternative was being developed. 3DES can be
implemented in several different ways, but the most secure is by using three separate
keys. 3DES has an effective key size of 168 bits. NIST guidance stipulates that 3DES
will be retired by 2023.
Block ciphers are the mainstay of symmetric encryption algorithms, but we must also look
at fine-tuning when using these algorithms. Symmetric ciphers can be deployed using
specific modes of operation.
Imagine that you are encrypting a 1,000-page document and that each line on every page
is encrypted using the same key. If the ciphertext is subjected to scrutiny using statistical
analysis, then patterns can be seen.
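The following minimal sketch (the key is a throwaway example value) demonstrates the weakness: encrypting two identical plaintext blocks with AES-128 in ECB mode produces two identical ciphertext blocks, so repetition in the plaintext remains visible in the ciphertext:
# Two identical 16-byte plaintext blocks produce two identical ciphertext blocks in ECB mode
printf 'SAMEBLOCK1234567SAMEBLOCK1234567' | \
  openssl enc -aes-128-ecb -K 000102030405060708090a0b0c0d0e0f -nopad | xxd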
In the following figure, we can see an example of an image that has been encrypted
using ECB mode. Discernable color patterns are evident due to the repeated use of the
same cipher:
Note that OFB is unlikely to be selected, as there are newer block cipher modes that offer
better performance.
Counter (CTR)
CTR mode also offers a performance advantage over legacy block-chaining modes such as
CBC. It allows a block cipher to act as a stream cipher: an incrementing counter value is
encrypted to produce a keystream, which can be prepared before the plaintext is processed.
As no block of keystream depends on the previous block of plaintext, powerful
multi-processor systems can take advantage of the parallel processing of blocks.
The following diagram shows an example of CTR mode:
The following diagram shows an example of MAC-then-Encrypt (MtE) being used for
SSL/TLS packets:
Poly1305
Poly1305 is another popular cryptographic MAC that's commonly used along
with AES (Poly1305-AES), though it is also paired with ChaCha (as in
ChaCha20-Poly1305). Poly1305 is popular with e-commerce providers due to its high
speed on standard CPU architectures. These combinations also support authenticated
encryption with associated data (AEAD).
In addition to block ciphers, there are use cases where stream ciphers are considered
a better option, particularly when considering data in transit.
Stream ciphers
Stream ciphers are optimized for real-time traffic. They are fast and as they process the
data as a stream of bits, fewer errors are generated. The previous standard for stream
ciphers was RC4, though this is no longer considered a secure cipher and was withdrawn
from widespread use in 2015. The following are two popular modern stream ciphers:
• ChaCha: ChaCha was designed for high performance. As a stream cipher, it has
many advantages over block ciphers when deployed with real-time applications
such as streaming media or VoIP. It was designed as an alternative to AES when
deploying SSL/TLS security. It is a refined version of Salsa20. The ChaCha cipher
is based on 20 rounds of encryption using a 256-bit key size. One of its major
strengths is its speed and resistance to side-channel analysis. ChaCha has been adopted
by Cloudflare and Google as a preferred option for TLS 1.3 connections (a command-line
sketch follows this list).
• Salsa20: Salsa20 was developed by the same person (Daniel J. Bernstein) that
developed the ChaCha cipher. It is closely related to the ChaCha cipher and
offers choices of 128-bit and 256-bit key sizes. It was superseded by the newer
ChaCha cipher.
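As a hedged example (it assumes OpenSSL 1.1.1 or later; the filename and passphrase are placeholders), ChaCha20 can be selected as the cipher when encrypting data with the openssl command-line tool:
# Encrypt a media file with the ChaCha20 stream cipher, deriving the key with PBKDF2
openssl enc -chacha20 -pbkdf2 -iter 100000 -in stream.ts -out stream.ts.enc -pass pass:'Str0ngPassphrase'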
Symmetric ciphers are fast and efficient but do not provide a mechanism for secure
symmetric key exchanges, nor do they support concepts that are used within digital
signatures. For this, we need asymmetric encryption.
Diffie-Hellman (DH)
DH is a key agreement protocol that's typically used to establish a secret/session key
for IPSec or SSL/TLS. The key itself is never transmitted; instead, the two parties each
combine their own private value with the other party's public value and independently
derive the same secret/session key. This means an eavesdropper who captures the exchange
cannot recover the session key (though the exchange must still be authenticated, for
example with certificates, to defeat an active MITM attacker).
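The following sketch (filenames are placeholders, and this is a simplified offline illustration rather than a protocol implementation) shows two parties deriving the same shared secret with OpenSSL:
# Generate shared DH parameters
openssl genpkey -genparam -algorithm DH -pkeyopt dh_paramgen_prime_len:2048 -out dhparam.pem
# Each party generates its own private key from the shared parameters
openssl genpkey -paramfile dhparam.pem -out alice_priv.pem
openssl genpkey -paramfile dhparam.pem -out bob_priv.pem
# Each party exports a public key for the other party
openssl pkey -in alice_priv.pem -pubout -out alice_pub.pem
openssl pkey -in bob_priv.pem -pubout -out bob_pub.pem
# Both derivations produce the same secret, without it ever being transmitted
openssl pkeyutl -derive -inkey alice_priv.pem -peerkey bob_pub.pem -out secret_alice.bin
openssl pkeyutl -derive -inkey bob_priv.pem -peerkey alice_pub.pem -out secret_bob.bin
cmp secret_alice.bin secret_bob.bin && echo "Shared secrets match"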
Key stretching
Key stretching is used to protect weak keys. Examples of such keys include the passwords
stored in a database. If an attacker gains access to the weak passwords that have been
hashed in a database, then they can use offline techniques to crack these passwords. As
they will not use the live login option, they will not lock the accounts out. Offline tools/
techniques could include dictionary attacks, rainbow tables, and brute-force attacks.
When we harden a password using key stretching, the original password is passed through
many rounds of hashing (PBKDF2, for example, applies thousands of HMAC iterations).
An attacker will now need to guess a password string and subject it to the same number of
rounds (just like the key stretching algorithm) in an attempt to crack the password. Key
stretching will slow down the attacker's attempts to
a considerable degree while allowing the legitimate users to access their accounts with
little overhead.
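As a minimal sketch (the filenames and passphrase are placeholders, and it assumes OpenSSL 1.1.1 or later), the openssl tool can apply PBKDF2 key stretching when deriving an encryption key from a password:
# Derive the encryption key from the passphrase using PBKDF2 with 200,000 iterations
openssl enc -aes-256-cbc -salt -pbkdf2 -iter 200000 -in customers.db -out customers.db.enc -pass pass:'C0rrectHorse!'
Each additional iteration adds negligible delay for the legitimate user but multiplies the work factor for an attacker performing an offline guessing attack.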
Password salting
This is the most basic way to protect a password against offline attacks such as rainbow
tables and brute-force techniques. A pseudo-random string of characters is generated. This
salt is then mixed with the password string and hashed. The following diagram shows the
password salting process:
This salted password is now stored in the password database. This offers a useful level
of protection for passwords that are stored, though can be improved upon by using key
stretching techniques.
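A minimal command-line sketch (the salt length and password are illustrative, and it assumes OpenSSL 1.1.1 or later) of generating a random salt and producing a salted hash is shown here:
# Generate a random 16-character salt, then create a salted SHA-512 crypt hash
SALT=$(openssl rand -hex 8)
openssl passwd -6 -salt "$SALT" 'MyP@ssw0rd'
Because every user receives a different salt, identical passwords produce different stored hashes, which defeats precomputed rainbow tables.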
Quantum computing
Quantum computing is designed to solve complex problems more efficiently than
existing supercomputers. Supercomputers harness the traditional processing capabilities
of compute nodes hosting thousands of CPU and GPU cores. Solving a problem using
quantum processing may be thousands of times faster than using the existing computing
models. A regular computer (or supercomputer) processes 0s and 1s and generates
answers to problems using 0s and 1s – this is a solid approach and works well to solve
many different computational problems. Let's compare this to a simple problem where
a freight company needs to deliver goods to 10 different customers. There are 10 different
truck types and 10 different routes to the customer sites, and each customer has
a specific payload to be delivered. There may be over 3 million solutions when it comes
to optimizing the use of trucks, fuel, and routes. One example quantum algorithm is called
Grover's search, which can process the problem on a quantum computer and generate the
results in a much more efficient manner (compared to traditional computers). Instead of
searching through all 3 million possible solutions, Grover's search finds the answer in
roughly the square root of that number of evaluations – in this case, around 1,732
calculations. For complex
computational models, this may mean that an answer is available within seconds rather
than weeks.
The adoption of quantum computing and efficient algorithms to harness this technology
presents a major cybersecurity threat. At the time of writing, Google has a quantum
computer rated at 50 qubits (qubits describe quantum bits). A qubit can represent multiple
states. It is estimated that around 20 million qubits would be required to crack existing
cryptographic keys. For examples of quantum computing and the threats to cryptography,
go to https://tinyurl.com/quantumthreats.
Blockchain
A blockchain is a secure digital ledger that's supported by a public peer-to-peer network.
A blockchain is a robust tamper-proof mechanism that's used to protect the integrity and
authenticity of data.
Records or transactions are added as a new block and the hash is calculated and added to
the chain. The following diagram shows a blockchain:
For the new transactions to be accepted, they must be approved by the majority of the
nodes on a Peer-to-Peer (P2P) network. The new transaction (or block) can now be
added to the public ledger. We can see the process of approving a new transaction in the
following diagram:
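The chaining can be illustrated with a simple hashing sketch (the transaction strings are hypothetical): each block's hash covers its own data plus the previous block's hash, so altering an earlier block invalidates every block that follows:
# Block 2's hash input includes block 1's hash, chaining the records together
BLOCK1_HASH=$(printf 'block1: Alice pays Bob 5' | sha256sum | cut -d' ' -f1)
BLOCK2_HASH=$(printf 'block2: Bob pays Carol 2 %s' "$BLOCK1_HASH" | sha256sum | cut -d' ' -f1)
echo "Block 2 hash: $BLOCK2_HASH"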
Homomorphic encryption
This type of encryption is used to protect data in use. When sensitive data must
be accessed by an application or service, then homomorphic encryption should
be considered.
Biometric impersonation
When considering security within large, complex enterprises, it is important to protect
identities with robust authentication schemes. Biometrics has been very useful in
providing Multi-factor Authentication (MFA), including facial recognition, gait
analysis, and voice patterns. The adoption of very powerful compute nodes has also made
biometric impersonation using deep fakes possible.
In 2020, a United Arab Emirates-based bank manager took a phone call from a company
director, who instructed him to make a large transfer totaling $35 million. It later
transpired that criminals had used deep learning computing techniques to create realistic
renditions of the company director's speech patterns. To read more about this story, go to
https://tinyurl.com/deepfakeheist.
Deep learning uses Artificial Intelligence (AI) and neural networks to extract distinct
traits or features from raw information. This technology can be used to generate
deep fakes.
There have been many instances of deep fakes being used to steal the identities of victims.
Computer-generated deep fake audio and videos are very difficult to distinguish from the
real thing. Deep fakes can also be referred to as synthetic media.
To mitigate the threats posed by deep fakes, there needs to be a robust way to verify that
the digital media, images, or voice are original. Blockchains may be a useful solution to
verify whether the media is original. The following URL provides an interesting news
article on deep fakes: https://tinyurl.com/deepfakereport.
3D printing
This technology is widely adopted and is used by many diverse industries, from motor
manufacturers to aircraft production. 3D printing uses a Computer-aided Design (CAD)
process to create a blueprint for the printer to use. The printer can lay multiple layers of
materials to create prototypes, as well as production components.
The wide adoption of this technology means it is relatively easy for a competitor to
recreate designs based on the enterprise's intellectual property. Other risks may involve
a hacker gaining access to a network-connected 3D printer and introducing defects into
the printing process. However, it may be a useful security mitigation when we would
otherwise rely on third-party manufacturers to produce prototypes and early design
models. It is of paramount importance to secure the blueprints and manufacturing
data that are used by this process. Some 3D printing manufacturers are incorporating
technology that allows for digital fingerprinting, allowing a printed component to be
traced back to the source.
Summary
In this chapter, we learned about the protocols and technologies that are used to protect
data in many different states, primarily at rest, in transit, and in use. We gained an
understanding of hashing algorithms, primarily to support integrity. These hashing
algorithms include SHA, SHA-2, SHA-3, the MD family, and RIPEMD. We also looked at message
integrity using HMAC and AEAD.
We then studied the options for ensuring confidentiality using symmetric encryption,
including block ciphers such as AES and 3DES. We also identified cipher block modes,
including GCM, ECB, CBC, CTR, and OFB. We then looked at common stream ciphers
such as ChaCha and Salsa20, where real-time applications must be considered.
After that, we looked at asymmetric encryption, which is used for S/MIME, digital
signatures, and key exchange. These asymmetric algorithms include ECC, ECDHE, RSA,
and DSA.
We now understand how to deploy secure protocols, including SSL, TLS, S/MIME, IPSec,
and SSH.
We also gained an understanding of key stretching using PBKDF2 and bcrypt.
We also looked at new and emerging technologies that can help protect sensitive data
and intellectual property. Emerging technologies include blockchains, homomorphic
encryption, biometric impersonation, and deep fakes. We then looked at some of the risks
to be considered when deploying 3D printing.
These skills will be useful when we study additional security concepts using Public Key
Infrastructure (PKI) as this is primarily deployed to support the authenticity of
our asymmetric key pairs.
In the next chapter, we will look at how public keys can be trusted by generating digital
certificates, as well as how certificates can be revoked and managed within an enterprise.
Questions
Answer the following questions to test your knowledge of this chapter:
1. Recent log analysis has revealed that archived documents have been tampered with,
even though the hash-matching database shows that the values have not changed.
What could have caused this?
2. Recent log analysis has revealed that archived documents have been tampered with.
To mitigate this vulnerability, which of the following should not be used?
A. RACE-320
B. MD5
C. SHA-384
D. SHA3-256
A. RACE-256
B. MD5
C. SHA-512
D. ECC
4. Google engineers are configuring security for a new regional data center. They are
looking to implement SSL/TLS for customer-facing application servers. What would
be a good choice, considering the need for speed and security?
5. What is used to authenticate packets that are sent over a secure SSL/TLS
connection?
A. SHA
B. HMAC
C. MD
D. Key exchange
6. Hackers can gain access to encrypted data transmissions. Log analysis shows that
some application servers have different block cipher mode configurations. Which log
entries would cause the most concern?
A. GCM
B. ECB
C. CBC
D. CTR
A. 3DES
B. AES
C. ChaCha
D. RC4
A. AES
B. ECDHE p521
C. ChaCha-256
D. SHA-512
9. What type of key agreement would most likely be used on IPSec tunnels?
A. Diffie-Hellman
B. DSA
C. RSA
D. Salsa
10. What is a good choice regarding a signing algorithm that will work well on
low-powered mobile devices?
A. DSA
B. RSA
C. ECDSA
D. HMAC
11. What is the first step in the handshake for a secure web session that's using
SSL/TLS?
A. Server hello
B. Session key created
C. Client hello
D. Pre-master secret
12. A government agency needs to ensure that email messages are secure from mailbox
to mailbox. It cannot be guaranteed that all SMTP connections are secure. What is
the best choice?
A. SSL/TLS
B. S/MIME
C. IPSec
D. SSH
14. While setting up a commercial customer-facing web application server, what would
be a good choice regarding a key exchange that will support forward secrecy?
A. DH
B. RSA
C. ChaCha
D. ECDHE
15. What term is used to describe the message integrity that's provided by protocols
such as Poly1305 and GCM?
A. Non-repudiation
B. Authenticated encryption with associated data
C. Perfect forward secrecy
D. Collision resistance
16. What would be used to provide non-repudiation when you're sending a business
associate an email message?
A. TLS/SSL
B. AES-256
C. S/MIME
D. IPSec
17. A developer is protecting the password field when they're storing customer profiles
in a database. What would be a good choice for protecting this data from offline
attacks? Choose two.
A. PBKDF2
B. AES
C. bcrypt
D. ChaCha
18. What do Alice and Bob need to exchange before they send signed email messages to
each other?
A. Private keys
B. Cipher suite
C. Public keys
D. Pre-shared keys
19. What will be used when Alice needs to sign an important business document to her
colleague, Bob?
20. What encryption protocol will be used to encrypt emails while in transit, across
untrusted networks, when the client has no encryption keys?
A. SSL/TLS
B. IPSec
C. SSH
D. S/MIME
Answers
1. B
2. B
3. C
4. A
5. B
6. B
7. C
8. B
9. A
10. C
11. C
12. B
13. B
14. D
15. B
16. C
17. A and C
18. C
19. B
20. A
12
Implementing
Appropriate PKI
Solutions, Cryptographic
Protocols, and
Algorithms for
Business Needs
Public key infrastructure (PKI) is of vital importance to any size organization and
becomes a necessity for a large enterprise. PKI gives an organization the tools to verify
and provide authentication for keys that will be used to secure the data. Without PKI, we
cannot trust the encryption keys we use, as their authenticity cannot be verified. When a key pair
is generated, we need to assign a trustworthy signed certificate to the unique public key, in
a similar fashion to a passport that is generated for a trustworthy citizen. Without PKI, we
cannot use online banking, e-commerce, smart cards, or virtual private networks (VPNs)
with any assurance.
It is important to understand how the entire process works and potential problem areas
that may need to be managed using troubleshooting skills. In this chapter, we will take a
look at the following topics:
Certificate authority
A certificate authority (CA) consists of an application server running a service called
Certificate Services (or Linux/Unix equivalent daemon). There may be multiple levels of
CAs; there will always be a root CA. In addition, there will normally be at least one more
layer. This is known as the subordinate or intermediate CA. The root CA will typically be
kept in a secure location, in many cases isolated (or air-gapped). The root CA only needs
to be powered up and available to sign intermediate CA signing requests. For redundancy,
an enterprise may have several issuing CAs. The issuing CA must be powered up and
available to sign client CSRs, so it will need to be highly available. Certificates follow a
standard; the current version is X.509 v3, which dictates the formatting and the
information that must be present on a digital certificate.
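As a hedged sketch (the organization name and filenames are placeholders), a self-signed root CA certificate can be created with OpenSSL as follows; in production, the resulting private key would be generated and held on the offline root CA, ideally within a hardware security module:
# Create a 10-year, self-signed root CA certificate and its private key
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout rootCA.key -out rootCA.crt -subj "/O=ACME/CN=ACME Root CA"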
Registration authority
When a request is received from a client entity, there will be policies that dictate whether a
request will be approved. To issue a certificate that validates a domain name or some other
identity, verification checks must be done. You can compare the registration authority
(RA) role to that of an auditor. If you wanted to apply for a passport, then identity
verification would need to be done; otherwise, somebody could steal your identity. In
the PKI hierarchy, the passport would be the digital certificate and the RA would be the
government agency/department responsible for issuing passports.
Wildcard certificate
A wildcard certificate allows an organization to host multiple websites, using a single key
pair, with a single certificate. There may be restrictions or additional costs based upon the
number of sites hosted. The wildcard certificate requires all sites covered by the certificate
to use the same Domain Name System (DNS) domain name. Only the hostname of each
site can be different. Figure 12.3 shows a wildcard certificate:
Extended validation
An extended validation certificate is mainly used by banks and other financial institutions;
it is a type of certificate that carries a high level of assurance. Customers will see a visual
indicator while using their web browser, to indicate the site has an extended validation
certificate. Figure 12.4 shows Microsoft Internet Explorer connected to a site with an
extended validation certificate:
Sites that are enrolled for extended validation certificates benefit from additional security
services and support from the CA, such as regular vulnerability scans (from the CA).
Multi-domain
If an organization needs to support multiple domains using a single certificate, then a
multi-domain certificate would be appropriate. This type of certificate is different from a
wildcard certificate as it allows for different subdomains, domains, and hostnames for a site.
Figure 12.5 shows the Subject Alternative Name (SAN) extension on a banking website:
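A minimal sketch of requesting such a certificate (the domain names are placeholders, and the -addext option assumes OpenSSL 1.1.1 or later) lists each additional name in the SAN extension of the CSR:
# Request a multi-domain certificate by listing every name in the SAN extension
openssl req -new -newkey rsa:2048 -nodes -keyout bank.key -out bank.csr \
  -subj "/CN=www.acmebank.com" \
  -addext "subjectAltName=DNS:www.acmebank.com,DNS:online.acmebank.com,DNS:www.acmebank.co.uk"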
General-purpose
A general-purpose certificate could be issued to a user for multiple uses. A user may need
to sign and encrypt email, authenticate to their Active Directory services account, and
use secure VPN access. We could minimize administrative actions by providing a single
certificate for the user.
Certificate usages/templates
To provide support for all the applications that are used or are being planned for, it
is important to ensure the CA has the appropriate templates enabled to support user
requests. Typical templates will include the following types:
• Client authentication
• Server authentication
• Digital signatures
• Code signing
• General-purpose
In Figure 12.7, we can see Microsoft CA templates available for client enrolment:
Trust models
While there are many commercial CAs who all act independently, there are also occasions
where two or more CAs need to work together. Examples can be where government
agencies and contracting companies need to enable trust but do not use commercial CAs.
A common approach is to sign a CSR from a trusted partner.
Cross-certification certificate
When two or more CAs need to work together, they can generate a type of certificate
called a cross-certification certificate. One CA will use the private key to sign a certificate
containing the public key of a trusted partner. In Figure 12.8, we can see ACME CA
signing the public key of WINGTIPS CA:
Certificate pinning
HyperText Transfer Protocol (HTTP) Public Key Pinning (HPKP) headers are used to
protect an organization's users or customers from man-in-the-middle (MITM) exploits.
On the web application server, we can embed the server public key into the code so that
a client application or browser will block the site/server if the certificate has changed.
There are instances where the site can also pin the CA public key certificate into the code
to ensure customers will only trust the site if the server certificate has been issued by a
particular CA, such as GoDaddy or DigiCert.
The technology works by storing a trusted certificate value when first connecting to an
application; there is a maximum age field that can be set by the application provider.
The maximum age value could be anything from a few hours to several months. This is a
very effective way to ensure attackers cannot create lookalike pharming sites using another
certificate with the same CN field.
When using this approach, you may also need to plan for an event where your server
certificate may be revoked and replaced with a new one. If the HPKP is based upon the
issuing CA public key certificate, then there will not be a problem. But if the public key
from the web server is embedded in the code and subsequently revoked, your clients will
not be able to access the web application server (until the maximum age field is reached).
In many cases, the public key is embedded into the mobile application when it is
downloaded from the app store; this means when keys need to be changed, the app will
need to be updated.
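The pin value itself is a hash of the server's public key. As a hedged sketch (the certificate filename is a placeholder), it can be calculated with OpenSSL as follows:
# Compute the Base64-encoded SHA-256 hash of the certificate's public key (the pin value)
openssl x509 -in server.crt -pubkey -noout | \
  openssl pkey -pubin -outform DER | \
  openssl dgst -sha256 -binary | \
  openssl enc -base64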
Certificate stapling
Stapling allows a trusted web application server to begin a secure session with a client
by including both the server certificate and revocation status of the certificate. This is
accomplished by sending requests to the CA OCSP service for the current status of the
application server's digital certificate. This will speed up the process for the end user.
Figure 12.9 shows the certificate stapling process:
Certificate stapling allows the client to trust the server certificate, without the need to
independently contact the CA for the up-to-date CRL.
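Whether a server staples an OCSP response can be checked from the command line; the following sketch (the hostname is a placeholder) requests the status during the TLS handshake:
# Ask the server to staple an OCSP response and display the stapled status
openssl s_client -connect www.acmebank.com:443 -status < /dev/null 2>/dev/null | grep -A 5 'OCSP response'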
CSRs
A CSR will include the details for the creation of a certificate. Certain pieces of
information will be required, along with the requester's public key. In Figure 12.10, we can
see a typical template that must be completed to generate the CSR:
When the request is generated, it creates a file; this is stored in Privacy Enhanced Mail
(PEM) format and encoded using Base64 characters. Figure 12.11 shows a CSR file:
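As a hedged sketch (the subject values and filenames are placeholders), a key pair and CSR can be generated and then inspected with OpenSSL:
# Generate a new RSA key pair and a CSR, then display the request in readable form
openssl req -new -newkey rsa:2048 -nodes -keyout www.key -out www.csr \
  -subj "/C=GB/O=ACME/CN=www.acme.com"
openssl req -in www.csr -noout -text
The .csr file produced is the Base64-encoded PEM structure described above and is what is submitted to the CA or RA.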
Certificates underpin a wide range of enterprise use cases, including the following:
• Web services
• Email (Transport Layer Security (TLS) and Secure/Multipurpose Internet Mail
Extensions (S/MIME))
• Code signing
• Federation—trust models
• VPN
• Client authentication (smartcards)
• Enterprise and security automation/orchestration
In some cases, during the certificate enrolment process, there may be a requirement to
capture a copy of the private key. This is not a default setting but may be required for
certain types of certificate requests. Government agencies may have a policy of retaining a
user's private key so that data can be recovered in the event of the loss or corruption of the
original private key. It may also be a legal requirement in some countries.
Key escrow
Key escrow enables a CA to store copies of private keys for entities that have generated
a CSR. Escrow is an agreement where something of value is stored securely and can
only be released under strict conditions, already specified. A real-world example is
when conveyancers handle the sale of a property. The conveyancers will have an escrow
bank account that is separate from their own regular bank account. When contracts are
signed and the buyer releases the payment, the money goes into the escrow account and
is used to pay the seller. If your private key is destroyed along with a Common Access
Card (CAC) or smartcard, then key escrow allows an organization to release a copy of
the private key to the account holder. Key escrow requires strict security management to
ensure private keys are only released in specific agreed-upon circumstances.
Key rotation
It is important to recognize the benefits of key rotation to ensure data confidentiality is
maintained. Keys can be rotated automatically or manually, based upon the organization's
policies, or may be dictated by regulatory compliance. If a key is compromised, then it
should be revoked immediately. The Payment Card Industry Data Security Standard
(PCI DSS) requires that keys are rotated on a regular basis, based upon the number of
records or transactions that have been encrypted. There are other considerations that an
organization should have, including staff turnover, the strength of current encryption
keys, and the value of the data. The National Institute of Standards and Technology (NIST)
offers guidance on suitable key rotation timelines. This guidance is documented in NIST
Special Publication (SP) 800-57 Part 1 Revision 5, found at the following link: https://
tinyurl.com/800-57REV5.
Mismatched keys
When considering key rotation, it is important to document all dependencies (software
modules and hardware). For example, if the public key is hardcoded into the software
of hardware devices and a key is rotated on a critical dependent module, then a key
mismatch may occur. Documented policies and procedures should be implemented to
avoid this situation.
Embedded keys
It is possible for vendors to use embedded keys when there is a need to ship computer
systems or other hardware devices with default secure configurations. An example could
be Windows operating systems (OSes) configured for a secure boot. The Microsoft public
key is embedded in Unified Extensible Firmware Interface (UEFI) firmware. This allows
for secure boot right from first use. In some instances, the embedded key may need to
be updated or changed due to the intended use of the computer. If we need to install
Linux on the system and still benefit from secure boot, then we would need to enroll the
Linux distribution's public key as a trusted embedded key.
Crypto shredding
Where there is a need to render confidential or sensitive data unreadable, then crypto
shredding may be a good choice. This would entail encrypting the data at rest with
specific encryption keys. The key used to encrypt the data could then be deleted, therefore
shredding the sensitive data. This could be useful when there is no physical access to the
storage device, such as when data is stored by a third-party cloud provider. Once the key
has been shredded, then access to the data is not going to be possible. In the same way as
physically shredding a hard drive, this is used to render the data unrecoverable. Crypto
shredding is a technique that is used on Apple iPhones; when the PIN is entered
incorrectly 10 times, the Advanced Encryption Standard (AES) encryption key is erased
(rendering the device storage inaccessible).
Cryptographic obfuscation
Modern encryption ciphers deploy complex mathematical procedures in order to protect
the keys used within a cipher. Multiple levels of confusion and diffusion are designed to
make the task of a crypto-analyst very difficult. Regulatory bodies are constantly requiring
additional security protocols and existing ciphers to use longer encryption keys and more
complex cipher block modes. With these requirements, it is not an easy task to reverse a
cipher in the event that the plaintext needs to be accessed without the encryption key.
Compromised keys
Once a key is compromised, then it should be taken out of circulation and published to
the CRL. It is important this is done in a timely fashion. Depending on the value of the
data and potential exposure, this may need to be performed immediately, or certainly
within a few hours. Examples of compromised keys may include stolen private keys.
Summary
In this chapter, we have learned about the importance of PKI and have taken a look at
a typical PKI hierarchy. We have been able to understand the roles played by CAs and
registration authorities (RAs).
We have taken a look at certificate types, including wildcard certificates, extended validation,
multi-domain, and general-purpose certificates. We have gained an understanding of the
common usages for certificates, including client authentication, server authentication
(application servers), digital signatures, and code signing. We have taken a look at important
extensions used when publishing certificates, including CN and SAN.
We have taken a look at the requirements needed to become a trusted CA, how providers
are audited, and what is required to maintain trusted status.
We have looked at common trust models used when CAs need to work together and have
understood the importance of the cross-certification trust model.
We have understood why it is important to address certificate life cycle management,
including the rekeying of credentials.
In order to mitigate common methods of attack (including MITM), we have seen how
certificate pinning can be used to safeguard an organization hosting web application servers.
We have gained an understanding of the requirements to generate a CSR, and the formats
and templates that are used during the process.
During this chapter, we have taken a look at the main differences between the OCSP
and CRL. We have also examined how certificate stapling can speed up the secure
handshaking process for a TLS connection.
We have looked at issues where there are compatibility, configuration, and operational
problems that cause communication to be disrupted.
This information will be useful in the next chapter when we take a look at governance,
risk, and compliance.
Questions
1. ACME needs to request a new website certificate. Where will they send the request
(in the first instance)?
A. Root CA
B. Subordinate/intermediate CA
C. RA
D. CRL
A. Client authentication
B. Server authentication
C. Digital signatures
D. Code signing
3. Web developers have created a new customer portal for online banking. They need
to ensure their corporate customers are satisfied with the security provisions when
connecting to the portal. Which certificate type should they request for the portal?
A. Wildcard certificate
B. Extended validation
C. Multi-domain
D. General-purpose
A. Wildcard certificate
B. Extended validation
C. General-purpose
D. SAN
A. Wildcard certificate
B. Extended validation
C. General-purpose
D. SAN
A. Cross-certification
B. Chaining
C. Wildcard certificate
D. Extended validation
A. Public
B. Private
C. Digital signature
D. Symmetric
A. Wildcard certificate
B. Extended validation
C. Certificate pinning
D. Certificate stapling
9. A large online retailer would like the customer web browsing experience to be low
latency, with a speedy secure handshake and verification of the website certificate.
What would best meet this requirement?
A. Extended validation
B. Certificate pinning
C. Certificate stapling
D. CSR
10. A user discovers that a colleague has accessed their secure password key and may
have made a copy of the private key (stored on the device). What action should
security professionals take to mitigate the threat of a key compromise?
11. Which HTTP extension will ensure that all connections to the bank's e-commerce
site will always also be encrypted using the assigned X.509 certificate?
12. When a public key is bundled within the UEFI firmware on a new Windows laptop,
what is this termed as?
13. A cybercriminal has stolen the smartphone of the chief executive officer (CEO)
from ACME bank. They have attempted to guess the personal identification
number (PIN) code several times, eventually locking the device. After mounting
the storage in a lab environment, it is not possible to access the stored data. What
has likely prevented a data breach?
A. Embedded keys
B. Exposed private keys
C. Crypto shredding
D. Improper key handling
14. Several employees are required to bring their laptops into the office in order to
obtain new encryption keys, due to a suspected breach within the department.
What is taking place?
A. Rekeying
B. Crypto shredding
C. Certificate pinning
D. Cryptographic obfuscation
15. When an engineer connects to a switch using a Secure Shell (SSH) connection,
there is a request to download and trust a new public key certificate. There was no
such request when connecting from the same computer the previous day. What is
the likely cause of this request?
A. Compromised keys
B. Exposed private keys
C. Extended validation
D. Key rotation
Answers
1. C
2. D
3. B
4. D
5. A
6. A
7. A
8. C
9. C
10. A
11. B
12. D
13. C
14. A
15. D
Section 4:
Governance, Risk,
and Compliance
In this section, you will learn the different approaches for assessing enterprise risk,
including quantitative and qualitative techniques. You will study different risk response
strategies and learn why regulatory and legal considerations are important during this
response. Finally, you will learn about creating effective business continuity and disaster
recovery plans.
This part of the book comprises the following chapters:
Supply chains add complexity to an enterprise's overall security footprint; a lack of
visibility into who is handling or processing enterprise data adds to the risk. Vendor
management and assessments must be addressed by risk management teams.
In this chapter, we will learn about the following topics:
This approach is used to ensure we identify the greatest risks for the overall system and
use this as a baseline to apply the controls. When calculating the aggregate values, we
track the high-water mark in each column (this is a simplistic description where we look
for the highest single value in each column). In the preceding screenshot, the impact
value for admin information is LOW. We must, however, ensure that the potential impact
for supervisory control and data acquisition (SCADA) alert data is mitigated with the
appropriate control. As this is a shared information system, we may choose to implement
a high availability (HA) failover solution.
One of the benefits of using a qualitative method is the fact that it can be easier to
perform. However, it does not produce outputs that convey the level of risk from a
financial perspective. To present risks to decision-makers, it may be necessary to use a
quantitative approach in order to present monetary values. We will now take a look at this
type of assessment.
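As a brief, hedged illustration of how monetary values are presented (the figures below are hypothetical), quantitative assessments commonly calculate an annualized loss expectancy (ALE) from the single loss expectancy (SLE) and the annualized rate of occurrence (ARO):
SLE = asset value x exposure factor = $200,000 x 0.25 = $50,000
ARO = 0.2 (one such incident expected every five years)
ALE = SLE x ARO = $50,000 x 0.2 = $10,000 per year
A control that costs significantly less than the ALE it removes is straightforward to justify to decision-makers.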
Gap analysis
A gap analysis is undertaken in order to understand where there may be missing controls.
In order to perform this analysis, we must be able to assess the current state of controls
and compare this to where we want the organization to be (the desired state). Figure 13.5
shows the steps required to perform a gap analysis:
Transfer
When an organization wants to control or mitigate risk but does not have the resources
(personnel or finances) to implement a response, then transference may be an appropriate
strategy. If the organization cannot afford to rebuild the warehouse in case of fire, then
they should consider fire insurance. If the organization does not have the personnel or
infrastructure to support secure card payments, they may consider engaging a third party
to process these payments.
Accept
When a risk is accepted, it is normally because the risk is considered to be acceptable to
the organization based upon the risk tolerance/appetite. An example of risk acceptance
would be a low likelihood event, such as earthquakes or floods in an arid location with no
prior history of earthquakes or floods.
Avoid
To avoid an identified risk, the organization should cease the activity that is identified to
be outside of the acceptable risk tolerance. If the risk is deemed to be extreme, such as loss
of life, then the risk avoidance will be justified.
Mitigate
In many cases, the response to risk will be to mitigate risks with controls. These controls
may be managerial, technical, or physical. Mitigation will not avoid risks completely but
should bring the risks down to an acceptable level. Mitigation could involve an acceptable
use policy (AUP) for use of mobile equipment; in addition, mobile device management
(MDM) would allow the organization to encrypt company data.
As well as formulating effective risk responses, it is important to recognize risks that will
have to be absorbed in some way.
Risk types
When performing risk assessments, it is important to understand that risk cannot be
completely eliminated, unless we choose risk avoidance. Certain risk types will always be
present.
Inherent risk
This is the risk that exists prior to any controls being applied. An example could be a
financial institution that employs staff with no background checks or formal interviews.
At the same time, there are no locks or barriers safeguarding the bank vault. Some
organizations will have to accept a much higher level of inherent risk when choosing a
particular area of operation or business sector. There is an inherent risk when considering
operating in a hostile geographic region, both from an environmental and political nature.
Residual risk
This is the risk that remains after controls have been applied. There will always be some
remaining risk, but it will be reduced to a level that falls within the corporation's risk
tolerance. For banks, a solution might be background checks for employees, biometric
locks, and closed-circuit television (CCTV) for the vault.
Risk exceptions
An exception is where a corporation cannot comply with regulatory requirements or
corporate policy. There may be circumstances beyond the control of the organization, or
perhaps a technical constraint that requires a risk exemption. A common requirement for
a risk exemption may be legacy equipment or applications. Exemptions should be formally
documented and signed off by the appropriate business owners or stakeholders.
Managing risk is a continuous process, so frameworks, controls, and effective
management are important for an enterprise.
• Control—During this phase, we identify controls that will meet the enterprise
requirements, as far as the levels that they are willing to accept.
• Review—In this phase, we must understand that there will be changes within the
industry, regulatory changes, and changes to the threat landscape. When there
are changes, then the enterprise must assess these new risks. Are the controls still
adequate within this changing threat landscape?
There are many frameworks adopted by organizations, and depending on the business
sector or operational requirements, a particular model usually gains widespread
acceptance. We'll now take a look at some risk management frameworks.
Identify
In this stage, the organization must identify all assets that may be subject to cybersecurity
risk. Assets may well include people, systems, data, and business operational capabilities—
in other words, any element of risk. Without this identification stage, we cannot effectively
focus efforts on the critical parts of the business.
Protect
Once we have identified all critical assets, we can move on to the next stage, which
is to protect. We will mitigate and minimize the potential for damage arising from a
cybersecurity incident.
Detect
It is important that the organization is able to detect when a cybersecurity incident is
taking place. To be successful within this phase, we will need a combination of security
professionals, well-trained personnel, and automated detection.
Respond
It is important to have appropriate plans in place to respond to cybersecurity incidents.
We must be able to contain cybersecurity incidents while maintaining business
operations. It is important to focus on response planning, communication, and tools and
techniques to perform analysis, mitigation, and continuous improvements.
Recover
In the recovery phase, an enterprise should maintain plans that enable the organization to
recover from significant events (business continuity (BC); incident response plans). It is
important to focus on resiliency at this stage.
It is important for all these elements to be considered to ensure they will work well together.
People
The controls that we choose must be effective and accepted by our staff. If they are overly
complex or poorly implemented, then they may try to work around the obstruction.
Imposing strict password policies on users may result in workarounds such as password
reuse or writing complex passwords down.
Processes
It is important to formalize processes that will be used within the enterprise. A business
process should ensure that goals are achieved in a safe and secure way. The standards and
policies that the organization adopts will help to ensure that processes are performed in
an efficient but secure manner. Single points of failure (SPOFs) should be identified and
eliminated, and processes that have the potential for fraudulent activities should also
be changed.
Technology
Technology can be useful when considering risk controls. Innovation can be an important
business driver in many ways. Automation can help solve or speed up complex processing
problems. Artificial intelligence (AI) and machine learning (ML) are helping to protect
our information systems from ever-increasing sophisticated threats. We must, however,
harness this technology to ensure it makes the right decisions; many systems still require
humans in the loop.
In order to understand if the controls are effective, we must monitor risk activities. We
will now look at techniques used to track risks.
Risk appetite
Risk appetite defines the amount of risk that an organization is prepared to accept when
pursuing business objectives. This will differ between different business sectors and
industries and will also be very dependent on company culture, competitors, objectives,
and the financial wellbeing of the company. It is difficult to define an exact value when
describing risk appetite. Risk appetite is usually expressed as low, medium, or high. There
is usually a sweet spot that is acceptable in many industries. An energy provider with a
large customer base may be content to operate gas-fired power stations. It is a well-proven
technology, and if the technology is mature and relatively safe, they do not want to change
the model or target new customers. Their appetite for risk is low.
Risk tolerance
This is the deviation that the enterprise will accept in relation to the risk appetite. This
metric can be expressed as a number or percentage value. The energy supplier negotiates for
future deliveries of gas from global suppliers. If gas reserves held for unexpected cold weather
drop below 5%, then the company must act, as it has set this threshold as its risk tolerance.
Trade-off analysis
When considering enterprise risk for a large organization, there may be many risks and
response strategies that should be evaluated. There may be differing views based on
stakeholder priorities, and the strategic goals of the enterprise may need to be considered.
If an organization decides that customer satisfaction is the main priority, they may decide
to commit more resources into customer-facing support teams, taking personnel away
from the sales team. This may mean fewer sales in the short term, but the long-term
strategy may mean more customers.
Job rotation
When employees are in the same job role for a significant amount of time, there is the
likelihood that they become complacent and burnt out. A change in job role means the
enterprise has redundancy in skill sets and motivated employees. Another benefit is the fact
that fraudulent activity will be less likely, as employees do not have the option to establish
long-term hidden practices, and the new job holder may uncover fraudulent activities.
Mandatory vacation
In certain industries, it is common practice for the staff to be away from their job for a
period of time on an annual basis. The duration may vary; in finance, the recommended
mandatory vacation is 2 weeks. This period of vacation ensures that their position can
be audited while they are on vacation. Another benefit is that mandatory vacation serves
as a deterrent. An unauthorized activity may be difficult to hide from the company if the
employee is not around to fix potential problems.
Least privilege
This is a good practice that involves identifying business functions and having robust
account management policies to ensure the user has the privileges they need for their
job. It may seem obvious, but it can be easy to overlook this basic control. It is common
practice to add an employee to a role group that has privileges far in excess of the
actual privileges they need. Take the example of the administrator group on a Windows
server; this role allows the user to perform any administrative function on the server. A
technician may need certain privileges to install software and hardware drivers; if they are
added to the administrator group, they would gain privileges beyond their requirements.
Once all the controls are in place, it is important that compliance is constantly monitored.
There are vendors offering solutions to monitor, report, and alert when systems deviate
from the baseline.
As an enterprise will rely on third-party vendors to perform essential business functions
on its behalf, it is important that there are controls in place to assess and mitigate risks. We
will now explain these techniques.
There are many examples of attacks that have been launched, exploiting supply chains,
often relying on a lack of visibility on the part of the enterprise. It is important to assess
all risks that may be present when we work with third parties. When performing vendor
assessments, we need to ensure they meet the expected levels of compliance required by
the enterprise. To ensure a vendor meets the expectations of the business, we may audit
the vendor or use third-party assessments. The following topics should be considered
during an assessment.
Vendor lock-in
It is important to assess all available options when bringing in external providers or
technology solutions. Cloud service providers (CSPs) would be a good example of
potential lock-in, where the provider can impose strict financial penalties if the customer
decides to switch CSP before the end of a specific contracted period. If the service is not
adequate and cannot be improved due to a lack of defined service-level agreements
(SLAs), then the customer may have to accept an inferior service. Technology may also
be a contributor to lock-in, when bleeding-edge technology may be adopted. Later, the
customer realizes the solution locks them into an incompatible database format. This
could result in a solution that would be very difficult to migrate to another platform.
Vendor lock-out could also be considered; this is when the vendor makes it difficult to
work with third parties as their service may be incompatible.
Vendor viability
It is important to choose SPs that have a provable track record in delivering services that
the enterprise would like to adopt. A sound financial analysis should also be conducted, as
a major provider of services would present a large risk to BC if they went out of business.
Support availability
It is important to agree upon levels of service; these agreements must be realistic and will be
negotiated between both parties. Clear reporting metrics and agreed response times must
be set. There should be a legally binding document generated; this is known as an SLA.
Geographical considerations
Data sovereignty is an important consideration for many organizations, including
government agencies, critical infrastructure, and regulated industries. There is a legal
requirement in some areas of operations that may require strict adherence to the
storage and transmission of certain data types. It is important that the vendor service is
compatible with these requirements.
ISO/IEC 27000:2018
This standard, published jointly by ISO and the International Electrotechnical
Commission (IEC), provides the overview and vocabulary for the wider ISO/IEC 27000
series. The series is broad in scope, covering the main security controls for information
security management systems (ISMSs), and provides a framework intended to secure IP,
customer records, third-party information, employee details, and financial information.
ISO/IEC 27001:2013
This standard focuses on information security; it allows an organization to adopt a
framework and prove to its customers that it has a strong security posture. Third-party
assessments will ensure that an organization has adopted all the necessary controls. The
standard focuses on the confidentiality, integrity, and availability of information systems.
ISO/IEC 27001 is a certifiable standard, meaning the organization and staff can be certified.
ISO/IEC 27002:2013
This standard covers all the key controls that should be included to ensure secure
operations when hosting information systems. This standard is for guidance only and does
not allow an organization to be accredited or certified. It allows an organization to follow
guidelines that support the adoption of, and compliance with, ISO/IEC 27001.
More information can be found at the following link: https://tinyurl.
com/27000iso.
Technical testing
It is important that the vendor provides the necessary technology to fulfill the customers'
requirements. The vendor may be a manufacturing subcontractor where quality
assurance (QA) is of the utmost importance, requiring regular on-site inspections.
Network segmentation
It is a common requirement that networks be segregated in order for the business to
be compliant with regulatory bodies. Therefore, the vendor should also follow these
requirements. Operational technology (OT), industrial control systems (ICS), and
SCADA networks should not be connected to the same network as business users.
Segmentation in some sectors may require air gaps, while in other situations virtual
local area networks (VLANs) should be adequate.
Transmission control
When data must be sent between the customer and the vendor, all efforts must be made to
ensure confidentiality is maintained. Confidentiality is also important where the vendor
will act on behalf of the customer and exchange data with outside entities. The vendor may
be providing monitoring and support for the customer's SCADA network. In this case, they
should use Internet Protocol security (IPsec) tunnels and consider the deployment of
jump servers (for jump servers, see Chapter 1, Designing a Secure Network Architecture).
Shared credentials
The use of shared credentials should be avoided, as it is difficult to have an effective audit
trail if we do not have individual accounts. Default credentials, such as the administrator
account on Windows or the root account in Linux, allow a user to obtain all privileges for
a system. It is standard practice to assign a user privilege by adding them to a role group;
then, we can have a proper audit trail.
Summary
In this chapter, we have been able to understand that enterprise risk is a major
consideration for an organization and will have a significant impact on the organization.
We have gained an understanding that an enterprise should employ security professionals
who have expertise in conducting appropriate risk assessments or engage qualified
assessors to assist the enterprise. We have taken a look at strategies for responding to risks.
We were able to understand why we should deploy effective controls, the need for
monitoring and reporting, and why an enterprise must set targets for risk tolerance.
Supply chains add complexity to an enterprise. We have addressed the need for
visibility of who is handling or processing enterprise data.
An understanding of vendor management and assessments is a key takeaway in this
chapter, as well as the importance of risk management teams.
We have gained an understanding of appropriate risk assessment methods, and we have
been able to implement risk-handling techniques.
We have gained an understanding of the risk management life cycle and now understand
risk tracking.
During the chapter, we have gained knowledge of how to manage risk with policy and
security practices and how to manage and mitigate vendor risk.
These skills will be useful in the next chapter, where we look at regulatory compliance and
legal compliance for enterprise activities. We will also take a look at managing enterprise
risks through BC planning.
Questions
Answer the following questions to test your knowledge of this chapter:
1. What type of risk assessment would use likelihood and impact to produce a
numerical risk rating?
A. Qualitative assessment
B. Gap assessment
C. Quantitative risk assessment
D. Impact assessment
2. What type of risk assessment would use metrics including asset value, monetary
loss during an event, and a value that could be expected to be lost during the course
of a year?
A. Qualitative assessment
B. Gap assessment
C. Quantitative risk assessment
D. Impact assessment
3. What is the metric that is used to calculate the loss during a single event?
4. If my database is worth $100,000 and a competitor steals 10% of the records during
a breach of the network and this happens twice in a year, what is the SLE?
A. $100,000
B. $1,000
C. $20,000
D. $10,000
5. If my database is worth $100,000 and a competitor steals 10% of the records during
a breach of the network and this happens twice in a year, what is the ALE?
A. $200,000
B. $1,000
C. $20,000
D. $10,000
6. A company currently loses $20,000 each year due to IP breaches. A managed security
service provider (MSSP) guarantees to provide 100% protection for the database
over a 5-year contract at an annual cost of $15,000 per annum. What is the ROI in $?
A. $75,000
B. $25,000
C. $125,000
D. $2,500
7. If my risk management team need to understand where the business may be lacking
security controls, what should they perform?
A. Qualitative assessment
B. Gap assessment
C. Quantitative risk assessment
D. Impact assessment
8. What type of risk response would purchasing cyber liability insurance be classed as?
A. Transfer
B. Accept
C. Avoid
D. Mitigate
9. What would be considered both a deterrent and useful security practice to ensure
employees' job performance can be audited when they are not present?
A. Job rotation
B. Mandatory vacation
C. Least privilege
D. Auditing
10. What is the term for risk that is present within an industry, prior to any controls?
A. Remaining
B. Residual
C. Inherent
D. Acceptance
11. What is the term for risk that remains within an industry, after the deployment of
security controls?
A. Remaining
B. Residual
C. Inherent
D. Acceptance
12. What is the metric that an organization can use to measure the amount of time that
was taken to restore services?
A. MTTR
B. MTBF
C. ALE
D. ARO
13. What is the metric that an organization can use to measure the reliability of a
service?
A. MTTR
B. MTBF
C. ALE
D. ARO
14. What type of risk response may be considered by a financial start-up company with
a high-risk appetite if the potential rewards are significant and the risk is minimal?
A. Transfer
B. Accept
C. Avoid
D. Reject
15. What is a good practice when assigning users privileges to reduce the risk of
overprivileged accounts?
A. SoD
B. Job rotation
C. Mandatory vacation
D. Least privilege
16. What is an organizational policy that would make it less likely that a user will insert
a Universal Serial Bus (USB) storage device that they received at an exposition?
17. What will an enterprise use to track activities that may lead to enterprise risk?
18. What is the term that is used to describe the situation where a vendor has
proprietary technology that makes it difficult for a customer to switch vendor?
A. Vendor risk
B. Vendor lock-in
C. Third-party liability
D. Vendor management plan
19. If a customer is concerned that a third-party development team may go bust during
an engagement, what can they use to ensure they will have access to the source
code?
A. Change management
B. Staff turnover
C. Peer code review
D. Source code escrow
20. What is the metric that an organization should use to calculate the total loss during
a year?
A. MTTR
B. MTBF
C. ALE
D. ARO
Answers
1. A
2. C
3. C
4. D
5. C
6. B
7. B
8. A
9. B
10. C
11. B
12. A
13. B
14. B
15. D
16. A
17. A
18. B
19. D
20. D
14
Compliance
Frameworks, Legal
Considerations, and
Their Organizational
Impact
When an enterprise engages in business operations, it is important to consider the many
factors that contribute to success. Operating within diverse industries may require
compliance and the adoption of standards to satisfy legal or regulatory compliance.
Regulations may be strict and, to be granted the authority to operate and to show
compliance, controls and policy must be put into place. Legal compliance is often
a complex area and will differ from country to country and, in some cases, may differ
between states or regions within the same country. There are many relationships that an
enterprise maintains in order to function.
These relationships require formal agreements, mostly legal agreements, to ensure that the
enterprise is protected. In this chapter, we will look at compliance frameworks, legal
considerations, and their organizational impact.
In this section, we will look at some of the challenges posed by operating within
diverse industries.
Data considerations
When considering the protection of data, there are many responsibilities that an
enterprise must be aware of.
Data sovereignty
Data sovereignty is important when an enterprise is considering hosting data that may
be subject to laws and regulations relating to the storage of certain data types. Processed
digital data will need to meet the strict requirements of the country where that data has
been collected. There are often strict regulations to consider if a company stores and
processes data from citizens of another country. Global cloud-based providers must take
care that they do not break the laws of the countries in which they collect or process
certain data types.
Data ownership
Ownership implies that any data created or acquired and subsequently stored by the
enterprise will now need to be handled safely and securely. Data ownership means the
company is accountable for the protection of the data. It is important that the company
understands the value of the data and implements the controls that are required to meet
legal and regulatory requirements.
Data classifications
To ensure correct data handling, data should be classified both to reflect the value to the
business and to ensure that appropriate legal and regulatory controls are put in place.
Figure 14.2 shows an example of data classification for a commercial organization:
Data retention
It is important to ensure that laws and regulatory requirements are met when considering
data retention policies. Data should be labeled and stored according to the appropriate
regulations, national laws, and, in some cases, local or state laws. Different data types may
have different retention requirements to meet regulatory compliance. The Sarbanes-Oxley
Act of 2002 (SOX) requires accounts payable and receivable ledgers to be retained for 7
years, while customer purchase orders and invoices only have to be retained for 5 years.
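As a simple illustration of how such a retention schedule might be enforced, the following Python sketch uses the SOX retention periods quoted above together with hypothetical record types and dates; it flags records whose retention period has expired, while honoring any legal hold (legal holds are discussed later in this chapter):

from datetime import date

# Retention periods in years, taken from the SOX examples above (illustrative only;
# always confirm the periods that apply in your own jurisdiction and industry).
RETENTION_YEARS = {
    "accounts_payable_ledger": 7,
    "accounts_receivable_ledger": 7,
    "customer_invoice": 5,
    "purchase_order": 5,
}

def retention_expired(record_type, created, today, legal_hold=False):
    """Return True if the record is past its retention period and not under a legal hold."""
    if legal_hold:                                        # a legal hold always takes precedence
        return False
    years = RETENTION_YEARS[record_type]
    expiry = created.replace(year=created.year + years)   # naive year arithmetic for illustration
    return today >= expiry

# A customer invoice created in March 2017 is past its 5-year retention period by 2023...
print(retention_expired("customer_invoice", date(2017, 3, 1), date(2023, 1, 1)))                   # True
# ...unless it is subject to a legal hold, in which case it must be preserved.
print(retention_expired("customer_invoice", date(2017, 3, 1), date(2023, 1, 1), legal_hold=True))  # False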
Data types
An enterprise operating across diverse industries will likely store and handle data with
different regulatory and legal requirements. In order to apply the correct controls, it is
important to identify the different data types.
Health
Protected Health Information (PHI) will be subject to strict privacy rules and
regulations. Hospitals, clinics, doctors' surgeries, and healthcare insurance providers will
store patient records. There are strict rules for the handling of this type of data.
Financial
Financial records may include accounts receivable, accounts payable, purchase ledgers,
and corporation tax records. This type of data is sensitive and should be labeled and
handled with appropriate care. Data retention must be strictly adhered to as there will be
strict requirements for legal and regulatory compliance.
Location of data
The location of data is important, as there may be requirements relating to legal
jurisdiction and, for certain types of data, sovereignty laws or regulations may apply.
Data controllers who are in breach of the compliance regulations can be fined €20
million or 4% of the organization's annual global turnover (whichever is the greater).
Recent examples include Google, which was fined €57 million by the French data
protection authority (CNIL). According to the authority's report, the fine resulted
from the following:
For CSPs that want to demonstrate their adherence to security standards, the Cloud
Security Alliance (CSA) Security, Trust, Assurance, and Risk (STAR) program offers two
levels of compliance. These are STAR Level 1 and Level 2.
STAR Level 1
Level 1 is a self-assessment exercise completed by the CSP. It enables the provider to
document the services that they offer and lets their customers know the security standards
that they have implemented. The security questionnaire comprises over 263 questions. Refer
to Figure 14.10 to see examples of the controls that providers must have in place:
STAR Level 2
Level 2 is assessed by a third party and allows for a more detailed audit, but, more
importantly, offers the customer extra assurance due to the third-party attestation. This
audit will ensure the provider meets the ISO/IEC 27001 standard, assessed against the
Cloud Controls Matrix (CCM).
Regulatory compliance, legal compliance, and the adoption of industry-accepted
standards are very important to an enterprise. Conformance ensures the business is able
to win contracts and operate according to accepted standards. In addition, there may be
very strict legal requirements that an organization must meet in order to operate.
Due diligence
An organization must understand all the regulatory and legal requirements when
operating within strict regulatory frameworks. Part of the responsibility is to ensure that
the business takes steps to assess what mitigations and controls should be implemented to
protect information systems. A simple way to remember due diligence is to think, What
are my responsibilities? If you are storing sensitive data, you should assess all the risks
associated with that activity.
Due care
Once the organization has assessed all the requirements to protect the information
systems that it controls, mitigation and controls need to be implemented and maintained.
When an organization has assessed all the risks, then due care would comprise the actions
needed to protect the sensitive data.
Export controls
Many countries have strict laws regarding the exporting of sensitive technology, including
hardware and software. It can be a criminal offense in many countries to export sensitive
technologies without applying for the appropriate export licenses or ensuring that the
technology is not on the government's restricted list. Obvious restricted items include
military equipment or arms sales and may also include technology embedded in a mobile
smartphone. For up-to-date lists of export controls concerning organizations within the
US, visit the following URL: https://www.trade.gov/us-export-controls.
For export controls concerning the United Kingdom, visit the following link: https://
www.gov.uk/business-and-industry/export-controls.
Legal holds
A company should be prepared to respond to an enforceable legal hold. This may be
required by law enforcement or a government agency or may be obtained by a court
order. Data should be preserved, failing which the company will be in contempt of the
order. A legal hold will involve electronic records, paper-based records, and mobile data
sources (laptops, smartphones, and external storage). It is important to suspend all normal
activities, such as data retention policies (do not delete the data, even if policy dictates that
it only needs to be retained for 12 months), as the legal hold will have precedence and may
last for years.
E-discovery
Once the data has been frozen, we can begin the process of identifying relevant evidence.
There needs to be a clear set of instructions on the date ranges of the required electronic
documentation and the scope of the data. There needs to be agreement on the format
of the data that will be provided, including the possibility that metadata may also
be required.
A business must be able to respond in a timely fashion when a legal notice is served. Legal
counsel will often be required for specific guidance as the law can be complex to navigate.
Summary
In this chapter, we have looked at the challenges of operating within diverse industries. We
examined the requirements for compliance and the importance of meeting standards for
legal and regulatory compliance. We have seen where strict compliance is
necessary for a business to attain the authority to operate.
We have understood why controls and policy must be put in place – to show compliance.
In the chapter, we also looked at the complexities of legal compliance and understood
how it differs from one country to another. We have looked at formalizing agreements to
ensure the enterprise is protected.
We will find the knowledge gained to be useful in the next chapter, where we will take
a look at Business Continuity Planning (BCP), Disaster Recovery Planning (DRP),
high availability, incident response planning, and the use of the cloud for
business continuity.
Questions
Here are a few questions to test your understanding of the chapter:
1. What must a government agency consider when planning to store sensitive data
with a global CSP?
A. Data sovereignty
B. Data ownership
C. Data classification
D. Data retention
2. Who is accountable for the storage and protection of customer data? They must
ensure that they implement controls to meet legal and regulatory requirements.
A. Data controller
B. Data protection officer
C. Data processor
D. Supervisory authority
A. GDPR
B. Financial records
C. Intellectual property
D. PII
E. COPPA
4. A multinational company wants the assurance that data will not be accessible when
their contract with a CSP expires. What technology may be applicable?
A. Crypto Erase
B. Pulping
C. Shredding
D. Degaussing
5. A global automobile manufacturer must ensure that its products are compatible
with its worldwide customer base. What regulations or standards will be
most important?
10. A smartcard manufacturer needs to sell products to a global market. They need to
show compliance using internationally agreed-upon protocols. What would be a
useful accreditation or assurance that their products have been evaluated and will
meet the security requirements of their customers?
11. What regulatory body is intended to protect the personal data of EU citizens?
12. A US smartcard manufacturer needs to sell its products in a global market. They
need to ensure that the technology is not sold to countries or governments hostile to
the US. What guidance or regulations should they consult?
A. Due care
B. Export controls
C. Legal holds
D. E-discovery
13. A government department has data privacy requirements, and they need to have
employees and service providers sign this agreement. They should be made aware of
the strict terms of this agreement and the penalties that may be forthcoming. What
type of agreement will be important?
15. Wingtips Corporation would like to build resiliency into its network connections.
They are working with an Internet Service Provider (ISP) that proposes a highly
available MPLS solution. To ensure the vendor is able to deliver the service with
99.999% uptime, what documentation will be important?
16. What agreement should be used when business partners need to share data? This
agreement may stipulate a timeline for the information exchange to be supported,
security requirements, data types that will be exchanged, and the actual sites that
will be part of the data interchange.
17. What agreement ensures that the customer data will be protected by the service
provider and that agreed-upon steps are in place if data breaches or any adverse
action were to occur?
A. Due care
B. Export controls
C. Legal holds
D. E-discovery
A. Due care
B. Export controls
C. Legal holds
D. E-discovery
20. What document may be used when business partners need to document
responsibilities? This document will not be written by lawyers and is intended to
formalize a verbal agreement or a handshake.
Answers
1. A
2. A
3. A, D
4. A
5. C
6. C
7. D
8. B
9. A
10. D
11. A
12. B
13. C
14. B
15. A
16. D
17. E
18. C
19. D
20. C
15
Business Continuity
and Disaster
Recovery Concepts
In order for an organization to conduct business operations in challenging and diverse
environments, high-level planning and mitigation controls should be undertaken to
ensure the enterprise is resilient. When the business delivers high-value services, plans
must be developed for business continuity. In the event that a significant disruption
impacts delivered services, plans should be created to allow the business to remain
functional. Plans should also be developed to ensure business operations can be resumed
quickly and efficiently in the event of a disaster. High availability, redundancy, and fault
tolerance are important considerations to ensure that services are available. Alternative
sites may need to be identified to allow for business continuity. Automation should be
used where there is an opportunity to become more resilient, while bootstrapping can be
used for the rapid deployment of workloads, using scripting for custom configuration.
Autoscaling allows an enterprise to deploy workloads on demand to satisfy customer
demands. When the enterprise has to deliver services to a global customer base, it may
be appropriate to investigate the use of content delivery networks for low latency and the
increased availability of services.
The initial purpose document can be used to build the contingency plan, while it can also
be used as the basis for a DRP. Once key systems are identified, this information can also
be used to document an effective cyber incident response plan.
The purpose of a BIA is to identify critical services that the business delivers and
understand the potential impacts that may be caused by a lack of service. The goal of the
exercise is enterprise resilience, with the creation of our contingency plans.
Business leaders must identify the critical resources needed for the business and agree
upon acceptable downtime, using the following metrics.
Mission-essential functions
Mission-essential functions need to be identified by the planning team. Senior business
leaders and key stakeholders must contribute to this plan. A mission-essential function
would typically represent a single point of failure to the business. Documentation would
be created to highlight these functions and the agreed-upon objectives to mitigate these
single points of failure. Figure 15.3 shows an example of mission-essential planning:
Mission-essential functions are potential single points of failure and appropriate planning
objectives ensure that goals can be achieved.
Cold site
When planning for alternative sites to run the business, this is the least costly. However,
it is the least effective if the business must be operational within a short time frame.
An example of a cold site could be leased office space and suitable facilities for computing
equipment. The time to relocate personnel and systems and become operational could
result in a significant delay for the organization.
Warm site
If the enterprise needs to switch operations over to an alternative site within hours rather
than days, then a good choice may be a warm site. A warm site has equipment and
facilities ready for the business to use. Personnel and data will need to be moved to the site
to become operational.
Hot site
A hot site normally consists of all the equipment and data needed for the business to
continue operations. The site should allow the organization to switch operations within
a short time frame. If the business runs critical infrastructure or e-commerce, then the site
would be fully replicated with all required data and information systems. Many solutions
are designed to failover to the hot site within a matter of seconds. If your business is
guaranteeing an SLA of 99.9999% to customers, then your site cannot be offline for more
than 31.56 seconds per year.
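The figure quoted above can be verified with a short calculation. The following Python snippet converts an availability percentage into the maximum permitted downtime per year, assuming a 365.25-day year:

# Convert an availability SLA into the maximum permitted downtime per year.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60      # 31,557,600 seconds

def max_downtime_seconds(availability_percent):
    return (1 - availability_percent / 100) * SECONDS_PER_YEAR

for sla in (99.9, 99.99, 99.999, 99.9999):
    print(f"{sla}% availability allows {max_downtime_seconds(sla):,.2f} seconds of downtime per year")

# 99.9999% ("six nines") works out at roughly 31.56 seconds per year, which is why
# only a hot site with near-instant failover can realistically meet such an SLA.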
Figure 15.7 shows the differences between alternate sites:
Mobile site
A mobile recovery site is typically delivered on the back of a trailer. It can be in the form
of a cold, warm, or hot site. It can be shipped close to the location of the staff, removing
one major logistical challenge. If the business needs a solution that can be operational
within seconds, then a mobile site will not be a good choice. Examples of mobile sites are
military solutions, where organizational resilience is important. The US military is able to
deploy mobile data centers containing sufficient facilities to house 90 units of computing
equipment. The unit incorporates pop-out tents, with space for 20 people. Figure 15.8
shows a mobile data center:
Scalability
Scalability is defined as the system's ability to increase and decrease performance levels in
response to the demands placed upon the system. An example could be a database server.
As database queries are processed, there will be a point where it cannot simultaneously
handle any more requests.
Vertical scaling
In order to allow critical services to handle additional user requests, we can add more
compute resources to the platform. To vertically scale an information system, we can add
additional Central Processing Units (CPUs), Random Access Memory (RAM), faster
disk input/output (I/O), and additional network connections. This approach poses
a higher risk of downtime and outages compared to other approaches.
Horizontal scaling
Horizontal scaling is achieved by adding more workloads in the form of additional
computers or platforms. This is often achieved by deploying additional compute nodes in
the form of Virtual Machines (VMs). This can work well if the resources can be deployed
on-demand using autoscaling.
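As a simplified illustration of the decision logic behind horizontal scaling, the following Python sketch adds or removes compute nodes based on average CPU utilization; the thresholds and node limits are illustrative assumptions, and real autoscaling services (covered later in this chapter) apply far more sophisticated policies:

# Toy horizontal-scaling decision: scale out when average CPU load is high,
# scale in when it is low, within agreed minimum and maximum node counts.
MIN_NODES, MAX_NODES = 2, 10
SCALE_OUT_AT, SCALE_IN_AT = 75, 25      # average CPU utilization (%)

def desired_node_count(current_nodes, avg_cpu_percent):
    if avg_cpu_percent > SCALE_OUT_AT and current_nodes < MAX_NODES:
        return current_nodes + 1        # add a VM to share the load
    if avg_cpu_percent < SCALE_IN_AT and current_nodes > MIN_NODES:
        return current_nodes - 1        # remove an idle VM to save cost
    return current_nodes

print(desired_node_count(4, 82))        # 5 - demand spike, scale out
print(desired_node_count(5, 18))        # 4 - demand has dropped, scale in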
Resiliency
Computing solutions must be resilient. This will mean that downtime is reduced and that
business services can be delivered with more reliability.
High availability
High availability can be achieved by clustering important business applications or using
network load balancing to efficiently distribute the workload.
Diversity/heterogeneity
Too much reliance on a single vendor solution could adversely impact an organization.
Supply problems may impact systems that require spares and maintenance, while
a critical unpatched vulnerability could impact all of your network infrastructure
appliances. Reliance on a single vendor's operating system could result in downtime
due to zero-day exploits.
Distributed allocation
When we are supporting high availability, we may need to distribute workloads across
multiple nodes. An appropriate algorithm must distribute the workloads effectively, thereby
optimizing the use of the computing platforms. This approach is typically used for large server farms
handling high numbers of client requests. To ensure that adequate compute nodes are
always available, geo-redundancy and scalability are important factors.
Redundancy
Redundancy can be achieved by duplicating systems or processes. Data redundancy can
eliminate outages due to disk failure by creating a mirror of a data drive.
Replication
The availability of data is often critical. Information systems can grind to a halt without
reliable access to data. Critical transactions may need access to data that is time-critical.
Therefore, a decision must be made about how to replicate the data. Asymmetric
replication allows for a time lag: as data is written, a background process copies it
to the replica storage system. Symmetric replication allows for no time
lag; the data is written or committed to both systems at the same time (this is the more
costly solution).
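The difference between the two approaches can be modeled in a few lines of Python. In this toy example, a symmetric write does not return until both copies are committed, while an asymmetric write returns immediately and a background thread catches the replica up; this is a conceptual illustration only, not a storage implementation:

import queue, threading, time

primary, replica = [], []
pending = queue.Queue()                 # writes waiting to be copied to the replica

def replicator():
    while True:
        record = pending.get()
        time.sleep(0.1)                 # simulated replication delay (the "time lag")
        replica.append(record)
        pending.task_done()

threading.Thread(target=replicator, daemon=True).start()

def write(record, symmetric=True):
    primary.append(record)
    if symmetric:
        replica.append(record)          # committed to both systems before returning
    else:
        pending.put(record)             # returns immediately; the replica catches up later

write("order-1001", symmetric=True)     # no time lag, higher cost
write("order-1002", symmetric=False)    # faster, but the replica briefly lags
pending.join()                          # wait for the background copy to finish
print(primary, replica)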
Clustering
Clustering groups multiple servers so that if one node fails, another node takes over the
workload; combined with network load balancing, this efficiently distributes the workload
and provides high availability. Figure 15.9 shows
a failover cluster:
Automation
It is important for capacity planning to be performed in order to identify the computing
needs of an enterprise. There must be provision for workloads to be deployed on-demand
to service enterprise and customer requirements. In a modern cloud environment, a high
degree of automation will be required to support the needs of the business.
Autoscaling
When an enterprise supports critical services, it is important that workloads can be
deployed on demand. Autoscaling allows the customer to work with the CSP to define
expected day-to-day compute needs. The customer can then choose a plan where spikes
in demand result in the automatic provisioning of additional compute resources. Figure
15.10 shows an example of autoscaling:
Bootstrapping
When automating the creation and deployment of virtual machines to support workloads,
bootstrapping allows for configuration files to be created to simplify the deployment of
these virtual computers or appliances. Bootstrapping allows for the rapid deployment
of virtualized infrastructure and supports autoscaling. It is common to apply the
configuration to a compute node or cluster as it boots up from a standard image
(such as Linux, Unix, or Windows).
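The following Python sketch gives a conceptual view of a bootstrap routine, assuming a hypothetical JSON configuration attached to each instance at deployment time; real deployments would typically rely on tooling such as cloud-init or vendor-supplied user data rather than a hand-rolled script:

import json, socket

# Hypothetical per-node bootstrap configuration supplied at deployment time.
BOOTSTRAP_CONFIG = """
{
  "hostname": "web-node-01",
  "packages": ["nginx", "fail2ban"],
  "hardening": {"disable_root_ssh": true}
}
"""

def bootstrap(raw_config):
    config = json.loads(raw_config)
    # In a real bootstrap script these steps would call out to the operating system;
    # here they are simulated with print statements.
    print(f"Current hostname {socket.gethostname()} -> setting to {config['hostname']}")
    for package in config["packages"]:
        print(f"Installing package: {package}")
    if config["hardening"]["disable_root_ssh"]:
        print("Applying hardening: disabling root SSH login")

if __name__ == "__main__":
    bootstrap(BOOTSTRAP_CONFIG)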
Testing plans
To ensure that enterprise BCP and DRP will be effective, thorough testing will need to be
performed. Without testing, there is no guarantee that the plan will be effective.
Checklist
A checklist test is a detailed examination of BCP/DRP documentation, performed by
stakeholders and team members on their own. This will entail sharing the plan with the
appropriate people and organizing a communication channel for feedback. This type of
test intends to pinpoint any inaccuracies or errors in the documents.
Tabletop exercises
It is important to gather representation from within the company to effectively understand
if there are any potential problems with a proposed plan. Stakeholder involvement is
important when assessing the effectiveness of the plan. Scenarios can be discussed and
actions that need to be performed can be evaluated. A tabletop exercise ensures that the
Disaster Recovery Team (DRT) or Cyber Security Incident Response Team (CSIRT)
do not need to perform exhaustive testing until the plans are fine-tuned.
Walk-through
When particular elements of a plan are being scrutinized, this is called paper-based
testing, or a walk-through. It allows the architects of the plan to discuss a particular
process within the overall plan. A walk-through may also include input from stakeholders
who would be impacted by this part of the plan.
Collaboration tools
Cloud-based collaboration tools can be very useful in enabling teams to work together.
Common cloud-based collaboration tools allow an enterprise to work effectively with
partners and remote workers and to engage with their customers. Common tools include
web-based conferencing, chat, Voice over IP (VoIP), and many more. These tools have
proven very useful where businesses have had to adapt to a remote working model. When
considering online collaboration tools, resilience and security need to be addressed.
Collaboration tools can also be useful as a failover when a primary service is unavailable.
If a cellular network is unavailable due to disruption or other outages, then we can use
tools such as Microsoft Teams or Zoom conferencing to communicate with partners,
team members, or customers.
Storage configurations
When an organization makes use of a CSP to host critical services, or to provide redundancy
in the event that the enterprise needs to fail over to the CSP for business continuity and
disaster recovery (BCDR), the security of the data is of paramount importance. There are
methods available that can both secure the data and offer redundancy.
Bit splitting
Bit splitting is intended to protect data stored with cloud providers. Data blocks are first
encrypted using the AES 256-bit symmetric algorithm, and the encrypted block is then
split and distributed across multiple data stores. The stored split blocks are hashed when
they are written to the filesystem in order to accurately retrieve the data using data maps.
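A conceptual sketch of the process is shown in the following Python example, which uses the third-party cryptography package for AES-256-GCM; the shard sizes, store names, and data map structure are illustrative assumptions and do not represent any particular vendor's implementation:

import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def bit_split(plaintext, shares=3):
    key = AESGCM.generate_key(bit_length=256)        # AES 256-bit symmetric key
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

    # Split the encrypted block and hash each shard so it can be verified on retrieval.
    size = -(-len(ciphertext) // shares)             # ceiling division
    shards = [ciphertext[i * size:(i + 1) * size] for i in range(shares)]
    data_map = [{"store": f"cloud-store-{i}", "sha256": hashlib.sha256(s).hexdigest()}
                for i, s in enumerate(shards)]
    return key, nonce, shards, data_map

def reassemble(key, nonce, shards, data_map):
    for shard, entry in zip(shards, data_map):
        assert hashlib.sha256(shard).hexdigest() == entry["sha256"]   # integrity check
    return AESGCM(key).decrypt(nonce, b"".join(shards), None)

key, nonce, shards, data_map = bit_split(b"customer design documents")
print(reassemble(key, nonce, shards, data_map))      # b'customer design documents'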
Data dispersion
Data dispersion is intended to operate much like RAID solutions that utilize parity.
The data blocks are dispersed across multiple storage systems or cloud providers. If one
provider is unavailable, then we can still access the data, as the algorithm used is similar to
RAID5 or RAID6.
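The parity idea can be illustrated with a few lines of Python: three data shards are dispersed to separate providers along with an XOR parity shard, and any single missing shard can be rebuilt from the survivors. This is a toy single-parity model, closer to RAID 5 than RAID 6, and is purely for illustration:

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Three equal-length data shards, each stored with a different provider.
shards = [b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"]

# Parity shard = XOR of all data shards, stored with a fourth provider.
parity = shards[0]
for shard in shards[1:]:
    parity = xor_bytes(parity, shard)

# Provider 1 becomes unavailable: rebuild its shard from the survivors plus parity.
lost_index = 1
rebuilt = parity
for i, shard in enumerate(shards):
    if i != lost_index:
        rebuilt = xor_bytes(rebuilt, shard)

print(rebuilt == shards[lost_index])    # True - the data is still accessible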
Summary
In this chapter, we have understood why an organization needs to conduct high-level
planning to ensure appropriate mitigation controls are deployed. We have learned about
why plans are developed for business continuity. We have understood what is needed in
a DRP. We have looked at high availability, redundancy, and fault tolerance, when critical
services need to be available. We have looked at the business need for alternate sites,
as well as physical locations and the services delivered by CSPs. We have learned why
automation should be used in an effort to become more resilient. Knowledge has been
acquired about bootstrapping, which can be used for the rapid deployment of workloads
using scripting for custom configuration. We have looked at cloud-based autoscaling,
allowing an enterprise to deploy workloads on demand to satisfy customer demands. We
have looked at CDNs for low latency and the increased availability of services. We have
looked at the need for the thorough testing of plans, including checklists, walk-throughs,
tabletops, and parallel and full interruption tests.
We have studied cloud-based security and collaboration tools that can protect enterprise
data and enable a business to be resilient when using CSPs.
Questions
Here are a few questions to test your understanding of the chapter:
1. What metric is used by planners for a critical resource that would cause severe
adverse effects for the enterprise? This metric forms the line in the sand that
cannot be crossed.
2. What metric is a goal set by the enterprise to ensure a critical service will be
operational within a specified timeframe?
3. What is a planning objective used when the restoration of a critical service will also
require data to be restored?
4. What planning objectives ensure that critical services are recovered first, while other
functional elements, such as printing customer tickets, are given a lower priority?
6. An organization has leased office space and suitable facilities for computing
equipment. The intention is to relocate personnel and systems and become
operational in the event that the main office location is unavailable. What have
they used?
A. A cold site
B. A warm site
C. A hot site
D. A mobile site
7. A planning team has identified a requirement for a site, housing equipment and
facilities ready for the business to use. Personnel and data will need to be moved
to the site to become operational. What have they identified?
A. A cold site
B. A warm site
C. A hot site
D. A mobile site
8. An e-commerce site needs a failover location that has all the equipment and
data needed for the business to continue operations. The site should allow the
organization to switch operations within a short time frame. What should they use?
A. A cold site
B. A warm site
C. A hot site
D. A mobile site
A. Vertical scalability
B. Replication
C. Performance scalability
D. Horizontal scalability
10. What type of scalability is achieved by adding more workloads in the form of
additional computers or compute nodes? This can work well if the resources can be
deployed on demand using autoscaling.
A. Vertical scalability
B. Replication
C. Performance scalability
D. Horizontal scalability
11. What type of data replication will be needed where there can be no time lag, where
the data is written or committed to both systems at the same time (this is the more
costly solution)?
A. Symmetric
B. Cloud-based
C. Asymmetric
D. CDN
12. What is used when the customer works with the CSP to define expected day-to-day
compute needs? The customer can then choose a plan where spikes in demand result
in the automatic provisioning of additional compute resources.
A. Autoscaling
B. Caching
C. Bootstrapping
D. Clustering
13. Select the cloud service, where the service provider is often referred to as a
gatekeeper. This service will protect the enterprise data from inbound threats
(into the cloud) and outbound threats such as data exfiltration.
A. IaaS
B. CASB
C. PaaS
D. Secure Web Gateway (SWG)
14. What is a cloud platform where the customer only pays for computing time and
does not need to worry about maintaining servers or reserving network bandwidth?
This type of computing makes better use of cloud resources as the customer only
pays for what they use.
A. Infrastructure computing
B. Cloud access security broker
C. Serverless computing
D. Virtual computing
15. What cloud storage solution is intended to protect data stored with cloud providers?
Data blocks are first encrypted using the AES 256-bit symmetric algorithm and the
encrypted block is then split and distributed across multiple data stores.
A. Bit splitting
B. Data dispersion
C. Availability
D. Collection
16. What is a cloud storage solution where data blocks are dispersed across multiple
storage systems or cloud providers? If one provider is unavailable, then we can still
access the data as the algorithm used is similar to RAID5 or RAID6.
A. Bit splitting
B. Data dispersion
C. Availability
D. Collection
17. What is the testing that uses stakeholder involvement to assess the effectiveness of
the plan? Scenarios can be discussed and actions that need to be performed can be
evaluated. This exercise ensures that the Disaster Recovery Team (DRT) or Cyber
Security Incident Response Team (CSIRT) do not need to perform exhaustive
testing until the plans are fine-tuned.
A. Checklist
B. Walk-through
C. Tabletop exercises
D. Full interruption test
18. What is the final stage in the testing of BCP/DRP plans? This can disrupt business
operations, so should only be used when all other types of tests have been
successfully executed.
A. Checklist
B. Tabletop exercises
C. Full interruption test
D. Parallel test/simulation test
19. What technology is offered by CSPs to allow for the rapid deployment of virtualized
infrastructure? It is common to apply the configuration to a compute node
or cluster as it boots up from a standard image (such as Linux, Unix, or Windows).
A. Autoscaling
B. Distributed allocation
C. Bootstrapping
D. Replication
20. What type of network allows for geographically dispersed servers delivering
content in the form of web pages, media, and images to worldwide consumers?
This network uses caching to ensure that data is available on the edge of the
network, where users or customers will benefit from lower latency.
A. Autoscaling
B. Distributed network
C. Content delivery network
D. Replicated network
Answers
1. D
2. B
3. A
4. C
5. B
6. A
7. B
8. C
9. A
10. D
11. A
12. A
13. B
14. C
15. A
16. B
17. C
18. C
19. C
20. C
16
Mock Exam 1
Welcome to the study guide assessment test! These test questions are designed to resemble
real-world CASP+ CAS-004 exam questions. To make this test as realistic as possible, you should
attempt to answer these questions closed book and allocate the correct amount of time.
You can use some paper or a scratchpad to jot down notes, although this will be different
in a real test environment. If you wish to look at the Pearson VUE candidate testing rules,
check out the following URL: https://home.pearsonvue.com/comptia/onvue.
Questions
1. Developers are building sensitive references and account details into the application
code. Security engineers need to ensure that the organization can secure the
continuous integration/continuous delivery (CI/CD) pipeline. What would be the
best choice?
3. The ACME corporation has recently run an annual risk assessment as part of its
regulatory compliance. The risk management team has identified a high-level risk
that could lead to fraudulent activities. The team has recommended that certain
privileged tasks must be performed by more than one person for the task to be
validated. What is this an example of?
A. Job rotation
B. Least privilege
C. Separation of duties
D. Multi-factor authentication
4. Security professionals are analyzing logs that have been collected from MDM
software. The following log entries are available:
C. Impossible time travel; disable the device's account and carry out further
investigation.
D. Anomalous status reporting; initiate a remote wipe of the device.
5. An e-commerce site has recently upgraded its web application servers to use TLS
1.3, though some customers are calling the service desk as they can no longer access
the services. After analyzing the logs that had been generated on the client's devices,
the following was observed:
ERROR_SSL_VERSION_OR_CIPHER_MISMATCH
What is the most likely cause of the reported error?
A. Clients are configured to use ECDHE.
B. Clients are configured to use RC4.
C. Clients are configured to use PFS.
D. Clients are configured to use AES-256 GCM.
6. The security professionals are reviewing all the servers in the company and discover
that a server is missing crucial patches that would mitigate a recent exploit that
could gain root access. Which of the following describes the team's discovery?
A. A vulnerability
B. A threat
C. A breach
D. A risk
A. Black-box testing
B. Gray-box testing
C. Red-team exercises
D. White-box testing
E. Blue-team exercises
8. Recently, the ACME corporation has merged with a similar-sized organization. The
SOC staff now have an increased workload and are failing to respond to all alerts.
What is the likely cause of this behavior?
A. False positive
B. Alert fatigue
C. False negative
D. True positive
9. A small regional bank, with no dedicated security team, must deploy security at
the edge of the network. They will need a solution that will offer protection from
multiple threats that may target the bank's network. What would be the best
solution for the bank?
A. Router
B. WAF
C. UTM
D. DLP
10. During baseline security training for new developers, attention must be focused on
the use of third-party libraries. What is the most important aspect for a commercial
development team that's considering the use of third-party libraries? Choose two.
11. A CISO wants to change the culture of the organization to strengthen the company's
security posture. The initiative will bring the development and operations teams
together when code is released to the production environment. What is the best
description of this initiative?
A. DevOps
B. A team-building exercise
C. A tabletop exercise
D. SecDevOps
A. Agile
B. Waterfall
C. Spiral
D. Build and Fix
13. A CISO for a large multinational bank would like to address security concerns
regarding the use and auditing of local administrator credentials on end devices.
Currently, users are given local administrator privileges when access is required. This
current practice has resulted in undocumented changes, a lack of accountability, and
account lockouts. What could be implemented to address these issues?
14. The ACME corporation has been suffering from increasing numbers of service
outages on the endpoints due to ever-increasing instances of new malware. The
Chief Financial Officer's laptop was impacted while working remotely from a hotel.
The objective is to prevent further instances of endpoint disruption. Currently, the
company has deployed a web proxy at the edge of the network. What should the
company deploy to mitigate these threats?
15. A company has been running a parallel test of its Disaster Recovery Plan (DRP),
and team members have been assessing the challenges encountered during testing.
Computing resources ran out at 65% of the restoration process for critical
services. What documentation should be modified to address this issue?
A. Spawn a shell using sudo and use a text editor to update the sudoer's file.
B. Perform ASIC password cracking on the host.
C. Access the /etc/passwd file to extract the usernames.
D. Use the UNION operator to extract the database schema.
17. A security analyst is concerned that a malicious piece of code was downloaded on a
Linux system. After running various diagnostics tools, the analyst determines that
the suspect code is performing a lot of input/output (I/O) on the disk drive. The
following screenshot shows the output from one of the diagnostics tools:
18. A CISO needs to ensure there is an effective incident response plan. As part of
the plan, a CSIRT team needs to be identified, including leadership with a clear
reporting and escalation process. At what part of the incident response process
should this be done?
A. Preparation
B. Detection
C. Analysis
D. Containment
19. The ACME corporation's CSIRT team responded to an incident where several
routers failed at the same time. The cause of the failure is unknown, and the routers
have been reconfigured and restored to operational condition. The integrity of the
router's configuration has also been verified. Which of the following should the
team perform to understand the failure and prevent it in the future?
20. Jeff, a developer with the ACME corporation, is concerned about the impact of new
malware on an ARM CPU. He knows that the malware can insert itself into another
process's memory. Which of the following technologies can the developer
enable on the ARM architecture to prevent this type of malware?
A. Execute-never (XN)
B. EDR software
C. Total memory encryption
D. Virtual memory encryption
21. Security professionals have detected anomalous activity on the edge network. To
investigate the activity further, they intend to examine the contents of the pcap file.
They are looking for evidence of data exfiltration from a suspect host computer. To
minimize disruption, they need to identify a command-line tool that will provide
this functionality. What should they use?
A. netcat
B. tcpdump
C. Aircrack-ng
D. Wireshark
22. Ann, a security analyst, is investigating anomalous activity within syslog files.
She is looking for evidence of unusual activity based on reports from User Entity
Behavior Analytics (UEBA). Several events may be indicators of compromise.
Which of the following requires further investigation?
A. Netstat -bn
B. vmstat -a 5
C. nc -w 180 -p 12345 -l < shadow.txt
D. Exiftool companylogo.jpg
23. UEBA has generated alerts relating to significant amounts of PNG image uploads to
a social networking site. The account that generated the alerts belongs to a recent hire
in the Research and Development division. A rival manufacturer is selling products
that appear to be based on the company's sensitive designs.
The payloads are now being analyzed by forensics investigators. What tool will allow
them to search for evidence in the PNG files?
A. Steganalysis tool
B. Cryptanalysis tool
C. Binary analysis tool
D. Memory analysis tool
24. Marketing executives are attending an international trade exhibition and must
connect to their company's email using their mobile devices during the event. The
CISO is concerned that this may present a risk. What would best mitigate this risk?
C. Geofencing
D. Always-on VPN settings
25. A company employee has followed a QR code link and installed a mobile application
that's used to book and schedule activities at a vacation resort. The application is not
available on the Google Play Store. Company policy states that applications can only be
downloaded from the official vendor store or company portal. What best describes
what has allowed this app to be installed?
26. A company has deployed a hardened Linux image to mobile devices. The
restrictions are as follows:
27. A regional Internet Service Provider (ISP) is experiencing outages and poor
service levels over some of its copper-based infrastructure. These faults are due to
the reliance on legacy hardware and software. Several times during the month, a
contracted company must follow a checklist of 12 different commands that must
be run in serial to restore performance to an acceptable level. The ISP would like to
make this an automated process. Which of the following techniques would be best
suited for this requirement?
28. A security analyst is investigating a possible buffer overflow attack. The attack seems
to be attempting to load a program file. Analysis of the live memory reveals that the
following string is being run:
code.linux_access.prg
29. A CISO at a regional power supply company is performing a risk assessment. The
CISO must consider what the most important security objective is when applying
cryptography to control messages. The control messages are critical and enable the
operational technology to ensure the generators are outputting the correct electrical
power levels. What is the most important consideration here?
30. Alan, a CISO for an online retailer, is performing a quantitative risk assessment.
The assessment is based on the public-facing web application server. Current figures
show that the application server experiences 80 attempted breaches per day. In
the past 4 years, the company's data has been breached two times. Which of the
following represents the ARO for successful breaches?
A. 50
B. 0.8
C. 0.5
D. 29,200
31. Security engineers are assessing the capabilities and vulnerabilities of a widely used
mobile operating system. The company intends to deploy a secure image to mobile
phones and tablets. The mobile devices mustn't be vulnerable to the risk of privilege
elevation and the misuse of applications. What would be the most beneficial to the
company for addressing these concerns?
32. Gerry, a CISO for a national healthcare provider, is assessing proposals for
network storage solutions. The proposal is for NAS to be deployed to all regional
hospitals and clinics. As the data that will be stored will be sensitive and subject
to strict regulatory compliance, security is the most important consideration. The
proposal is for appliances running a Linux kernel and providing secure access to
authenticated users through NFS. One major concern is ensuring that the root
account cannot be used to gain access to user data on the Linux NFS appliances.
What would best prevent this issue from occurring?
33. ACME chemicals is conducting a risk assessment for its legacy operational
technology. One of their major concerns is the widespread use of a standard
message transport protocol that's used in industrial environments. After performing
a vulnerability assessment, several CVEs are discovered with high CVSS values. The
findings describe the following vulnerabilities:
34. Mechanical engineers are using a simple programming language based on relay-
based logic, as shown in the following diagram:
35. A small water treatment plant is being controlled by a SCADA system. There are
four main treatment tanks, each being serviced by an input pump and an output
pump. The design of the plant offers redundancy as the plant can operate without
all the tanks being available. The plant is comprised of a standard SCADA mix of
operational technology, including PLCs and a supervisory computer.
What system failure will cause the biggest outage?
A. Loss of a treatment tank
B. Loss of supervisory computer
C. Failure of an input pump
D. Failure of a PLC
A. SDLC
B. OVAL
C. IEEE
D. OWASP
37. The customers of a large online retailer are reporting high levels of latency when
they are searching for products on the e-commerce site. The site consists of an array
of load-balanced APIs that do not require authentication. The application servers
that host the APIs are showing heavy CPU utilization. WAFs that have been placed
in front of the APIs are not generating any alerts.
Which of the following should a security engineer recommend to best remedy these
performance issues promptly?
A. Implement rate limiting on the API.
B. Implement geo-blocking on the WAF.
C. Implement OAuth 2.0 on the API.
D. Implement input validation on the API.
38. ACME bank engineers are configuring security for a new data center. They are
looking to implement SSL/TLS for customer-facing application servers. Customers
will connect to the bank API through a deployed mobile application. They must
now choose a symmetric algorithm that offers the greatest speed and security.
Which should they choose?
A. ChaCha256 + poly1305
B. 3DES + CBC
C. AES256 + CBC
D. Salsa256 + CBC
39. Hackers can gain access to encrypted data transmissions. After performing
vulnerability assessments on the application servers, several cipher suites are
available for backward compatibility. Which of the following would represent the
greatest risk?
A. TLS_RSA_WITH_AES_128_CBC_SHA
B. TLS_RSA_WITH_RC4_40_MD5
C. TLS_DHE_RSA_WITH_AES_256_CBC_SHA
D. TLS_RSA_WITH_3DES_EDE_CBC_SHA
40. A company is deploying an online streaming service for customers. The content
needs to be protected; only the paid subscribers should be able to view the streams.
The company wants to choose the best solution for low latency and security. What
would be the best choice?
A. 3DES
B. AES
C. ChaCha
D. RC4
41. A government agency is configuring a VPN connection between Fort Meade and
a field office in New York. Of primary importance is having a highly secure key
exchange protocol due to the threats posed by nation state threat actors. Which
encryption protocol would be a good choice?
C. ChaCha-256
D. SHA-512
42. Software developers are deploying a new customer-facing CRM tool. The
deployment will require the customers to download an application on their system.
Customers must be able to verify that the application is trustworthy. What type of
certificate will the software developers request to fulfill this requirement?
A. Client authentication
B. Server authentication
C. Digital signatures
D. Code signing
43. A large insurance provider has grown in size and now supports customers in many
different countries. Due to this increased footprint, they are looking to minimize
administration by allocating a single certificate to multiple sites. The sites will be
country-specific, with different domain names. What would be the best choice for
delivering this requirement?
A. Wildcard certificate
B. Extended validation
C. General-purpose
D. Subject Alternate Name (SAN)
44. The CISO is delivering a security briefing to senior members of staff. One of the
topics of conversation concerns the current e-commerce site. During a Q&A session,
the CISO is asked questions about PKI and certificates. A rudimentary question is
asked – what key is stored on a certificate? What should the CISO answer?
A. Public key
B. Private key
C. Public and private keys
D. Signing key
45. A large online bank would like to ensure that customers can quickly validate that the
bank's certificates are not part of a CRL. What would best meet this requirement?
A. Extended validation
B. Certificate pinning
C. OCSP
D. CRL
47. Nation state-sponsored actors have stolen the smartphone of a government official.
They have attempted to guess the PIN code several times, eventually locking the
device. They are attempting to gain access to the data using forensic tools and
techniques but the data cannot be accessed. What has likely prevented a data breach
from occurring?
48. A small startup energy company has built up a database of clients. It is estimated
that this database is worth $100,000. During a data breach, a cyber-criminal
(working for a competitor) steals 10% of the records. The company fails to put
adequate controls in place and a second breach occurs within 12 months.
What is the Annual Loss Expectancy (ALE)?
A. $200,000
B. $1,000
C. $20,000
D. $10,000
49. A defense contractor currently loses an estimated $2,000,000 each year due to intellectual property theft. The company has a solid reputation for R&D and manufacturing but has no dedicated security staff. A Managed Security Service Provider (MSSP) guarantees that they will provide 90% protection for the data over a 5-year contract at a cost of $250,000 per annum. What is the ROI in dollars?
A. $10,000,000
B. $9,000,000
C. $750,000
D. $7,750,000
50. An automobile manufacturer suffers a power outage at one of its foundries. The
facility supplies critical components for the company. The COOP designated the
foundry as a mission-essential service, and it was agreed that the foundry must
be operational within 24 hours. The energy supplier has struggled to repair severe
storm-damaged cables. As a result, the facility is without power for 72 hours. What
is the metric that describes this 72-hour outage?
12. B. The waterfall methodology means that we must have defined all the requirements
at the start of the process and that no changes will be made during the development
cycle. See Chapter 2, Integrating Software Applications into the Enterprise.
13. C. Use Privileged Access Management (PAM) to remove user accounts from the local admin group and prompt the user for explicit approval when elevation is required. This solution allows accounts to elevate their privileges and ensures that these actions are audited. See Chapter 4, Deploying Enterprise Authentication and Authorization Controls.
14. A. Replace the current antivirus with an EDR solution. The end devices must be
protected when they are not on the company network. The other solutions will not
adequately fulfill the requirements. See Chapter 9, Enterprise Mobility and Endpoint
Security Controls.
15. D. Recovery Service Level. See Chapter 15, Business Continuity and Disaster
Recovery Concepts.
16. C. Access the /etc/passwd file to extract the usernames. As the account is a standard user, they will not have the right to edit configuration files such as sudoers, so the best option is to read the passwd file, which is world-readable and requires no elevated privileges. See Chapter 7, Risk Mitigation Controls.
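As a minimal sketch (assuming a standard Linux host, where /etc/passwd is world-readable), the usernames can be pulled out of the file without any elevated privileges:
# Print only the username field (the first colon-delimited field) of /etc/passwd
cut -d: -f1 /etc/passwd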
17. C. ID22 shows a high amount of disk I/O using the vmstat command. See Chapter
8, Implementing Incident Response and Forensics Procedures.
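For reference, a quick way to observe disk I/O with vmstat (a sketch, assuming a Linux host) is to sample at a fixed interval and read the bi/bo columns, which report blocks read from and written to disk:
# Report memory, CPU, and disk statistics every 2 seconds, 5 times
vmstat 2 5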
18. A. Preparation. For details on creating an incident response plan, see Chapter 8,
Implementing Incident Response and Forensics Procedures.
19. A. Root cause analysis. This would be performed as a result of lessons learned/AAR.
See Chapter 8, Implementing Incident Response and Forensics Procedures.
20. A. Execute-never (XN). CPU chips support memory protection within the
hardware. See Chapter 9, Enterprise Mobility and Endpoint Security Controls.
21. B. tcpdump. This is a command-line protocol analyzer that's capable of capturing
traffic and can be used to analyze previous captures. pcap is a standard packet capture
file format. See Chapter 8, Implementing Incident Response and Forensics Procedures.
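As a brief illustration (the interface name eth0 is an assumption), tcpdump can capture live traffic to a pcap file and later re-read that capture for analysis:
# Capture traffic on eth0 and write it to a pcap file (requires root)
tcpdump -i eth0 -w capture.pcap
# Read and analyze a previously saved capture
tcpdump -r capture.pcap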
22. C. nc -w 180 -p 12345 -l < shadow.txt. Netcat can open a listener on a target system and redirect a file into the connection, allowing files such as shadow.txt to be transferred to whoever connects. See Chapter 8, Implementing Incident Response and Forensics Procedures.
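To show both halves of the transfer (a sketch; the listener command is the one from the answer, while the target-host name on the receiving side is hypothetical):
# On the compromised host: listen on TCP 12345 for up to 180 seconds and serve shadow.txt
nc -w 180 -p 12345 -l < shadow.txt
# On the remote machine: connect to the listener and save the transferred file
nc target-host 12345 > shadow.txt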
23. A. Steganalysis tool. This tool would search for data hidden within the graphics file.
See Chapter 8, Implementing Incident Response and Forensics Procedures.
24. D. Always-on VPN settings. They will always have an encrypted connection that's
routed through the company network. See Chapter 9, Enterprise Mobility and
Endpoint Security Controls.
25. D. Unauthorized application stores. See Chapter 9, Enterprise Mobility and Endpoint
Security Controls.
26. B. Shell restrictions. The current settings mitigate the main threats but do not
prevent built-in commands from being run. See Chapter 9, Enterprise Mobility and
Endpoint Security Controls.
27. A. Deploy SOAR utilities and runbooks. This will automate this repetitive process
and take some of the workload off the technicians. See Chapter 8, Implementing
Incident Response and Forensics Procedures.
28. B. ASLR. This mitigation is built into the operating system and is considered a
better option (NX+DEP is hardware-based and less effective).
29. D. Ensuring the integrity of messages. Control messages will not normally be
confidential but must be tamper-proof. This is the best solution. See Chapter 10,
Security Considerations Impacting Specific Sectors and Operational Technologies.
30. C. 0.5. The ARO over 4 years is 0.5 as there were only two successful breaches. See
Chapter 13, Applying Appropriate Risk Strategies.
31. C. Security-Enhanced Android (SEAndroid). This is SELinux for mobile devices.
See Chapter 9, Enterprise Mobility and Endpoint Security Controls.
32. B. Run SELinux in enforced mode. This will enforce Mandatory Access Control
(MAC). See Chapter 9, Enterprise Mobility and Endpoint Security Controls.
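As a minimal sketch (assuming a Linux distribution that ships SELinux, such as RHEL or CentOS), the mode can be checked and switched as follows:
# Show the current SELinux mode (Enforcing, Permissive, or Disabled)
getenforce
# Switch to enforcing mode until the next reboot (requires root)
setenforce 1
# Make the change persistent across reboots
sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config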
33. B. Modbus. This is a widely used control protocol within industrial control environments. It is vulnerable to many different threats. See Chapter 10, Security Considerations Impacting Specific Sectors and Operational Technologies.
34. B. Ladder logic. See Chapter 10, Security Considerations Impacting Specific Sectors
and Operational Technologies.
35. B. Loss of a supervisory computer. See Chapter 10, Security Considerations Impacting Specific Sectors and Operational Technologies.
43. D. Subject Alternate Name (SAN). This will allow a single certificate to be issued
for multiple sites. A wildcard would not be suitable as the domain names will be
different. See Chapter 12, Implementing Appropriate PKI Solutions, Cryptographic
Protocols, and Algorithms for Business Needs.
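As an illustrative sketch (the domain names are hypothetical, and the -addext flag assumes OpenSSL 1.1.1 or later), a single certificate request can list several country-specific names in its subjectAltName extension:
# Create a key and a CSR whose SAN covers multiple country-specific domains
openssl req -new -newkey rsa:2048 -nodes -keyout insurer.key -out insurer.csr \
  -subj "/CN=insurer.com" \
  -addext "subjectAltName=DNS:insurer.com,DNS:insurer.co.uk,DNS:insurer.de"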
44. A. Public key. A digital certificate validates the public key. Private keys are not
shared but can be stored in escrow if a copy needs to be made. See Chapter 12,
Implementing Appropriate PKI Solutions, Cryptographic Protocols, and Algorithms for
Business Needs.
45. C. OCSP. This allows a quick response to be provided when a CRL check is required.
See Chapter 12, Implementing Appropriate PKI Solutions, Cryptographic Protocols,
and Algorithms for Business Needs.
46. B. HTTP Strict Transport Security (HSTS). This will ensure that all the
connections are forced to use HTTPS/TLS. See Chapter 2, Integrating Software
Applications into the Enterprise.
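One way to confirm that HSTS is being sent (a sketch; the hostname is hypothetical) is to inspect the response headers:
# Check whether the site returns a Strict-Transport-Security header
curl -sI https://shop.example.com | grep -i strict-transport-security
# A typical value looks like: Strict-Transport-Security: max-age=31536000; includeSubDomains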
47. C. Crypto shredding. The symmetric key that was used to encrypt the data is
destroyed, making data recovery ineffective. See Chapter 12, Implementing Appropriate
PKI Solutions, Cryptographic Protocols, and Algorithms for Business Needs.
48. C. $20,000.
Asset Value (AV) = 100,000
Exposure Factor (EF) = 10%
Single Loss Expectancy (SLE) = 10,000
Annual Rate of Occurrence (ARO) = 2
Annual Loss Expectancy (ALE) = 20,000 (SLE x ARO)
See Chapter 13, Applying Appropriate Risk Strategies.
49. D. $7,750,000. The contract is for 5 years, so the potential loss over the contract term is 5 x $2,000,000 = $10,000,000. The MSSP mitigates 90% of that loss, a reduction in risk of $9,000,000, while the cost of the control is 5 x $250,000 = $1,250,000. ROI = $9,000,000 – $1,250,000 = $7,750,000. See Chapter 13, Applying Appropriate Risk Strategies.
50. A. Mean time to recovery (MTTR). See Chapter 13, Applying Appropriate
Risk Strategies.
Hopefully, you have enjoyed testing yourself via a typical mix of CASP questions. For
more exam resources, please visit https://www.casp.training.
17
Mock Exam 2
Welcome to the study guide assessment test! These test questions are designed to resemble real-world CASP+ CAS-004 exam questions. To make the test as realistic as possible, you should attempt these questions closed book and allocate the appropriate amount of time. You can use some notepaper or a scratchpad to jot down notes, although the rules for scratch materials will be different in a real test environment. To find the Pearson VUE candidate testing rules, check out https://home.pearsonvue.com/comptia/onvue.
End of Study Assessment Test
Number of Questions: 50
Passing Score: 83% (Estimated)
Questions
1. A company works with a cloud service provider (CSP) that provides bleeding-
edge technology to perform data analytics and deep learning techniques on the
company's data. As the technology becomes more widespread, it appears that a rival
CSP can offer the same solutions for a 50% cost saving. However, it seems that the
database format and rule sets that have been created can't be transferred to the rival
CSP. What term would best describe this situation?
A. Vendor risk
B. Vendor lock-in
C. Third-party liability
D. Vendor management plan
2. A major retailer works with a small, highly regarded, third-party development team.
They intend to invest significant resources into a new customer-facing set of APIs.
The retailer is concerned about the financial stability of the development company
and worries that they may need to start the development project from scratch if the
developers go bust. What could be used to allay the fears of the retailer?
A. Change management
B. Staff turnover
C. Peer code review
D. Source code escrow
3. Andy is the CSO within a department of the United Kingdom's HM Revenue and
Customs (HMRC). All new systems that will require government funding must
be assessed concerning cost savings by working with a CSP. Andy is overseeing a
proposed new system that will reduce the workload of the Inland Revenue HMRC
employees. What must a government agency consider when planning to store
sensitive data with a global CSP?
A. Data sovereignty
B. Data ownership
C. Data classifications
D. Data retention
Patient's address
Patient's bank account details
Patient's medical history
Patient's X-ray records
Employee bank account details
What type of information will need to be protected and which regulations are the
most important? (Choose two)
A. COPPA
B. Personally identifiable information (PII)
C. Financial records
D. Intellectual property
E. GDPR
5. A regional bank intends to work with a CSP to harness some of the benefits associated
with cloud computing. The bank wants the assurance that data will not be accessible
when their contract with a CSP expires. What technology would be most applicable?
A. Crypto erase
B. Pulping
C. Shredding
D. Degaussing
9. Eva is the CISO for a global stocks and shares trading site. She is performing a risk
assessment that focuses on customer data being stored and transmitted. Customers
are mainly based in North America with a small percentage based globally,
including Europe. When it comes to considering regulatory and legal requirements,
which of the following will be the most important?
10. A US smartcard manufacturer needs to sell its products in a global market. They
need to ensure that the technology is not sold to countries or governments that are
hostile to the US. What guidance or regulations should they consult?
A. Due care
B. Export controls
C. Legal holds
D. E-discovery
11. A government department has data privacy requirements and needs employees and service providers to sign an agreement. Signatories should be made aware of the strict terms of this agreement and the penalties that may be forthcoming if these requirements/standards are not met. What type of agreement will be important?
13. A global pharmaceutical company would like to build resiliency into its network
connections. They are working with an ISP, who proposes a highly available MPLS
solution. To ensure the vendor can deliver the service at 99.999% uptime, what
documentation will be important?
14. A software development company and a mobile phone manufacturer have entered
a business partnership. The business partners need to share data during a series
of upcoming projects. This agreement will stipulate a timeline for the information
exchange to be supported, security requirements, data types that will be exchanged,
and the actual sites that will be part of the data interchange. What documentation
best details these requirements?
A. SLA
B. MSA
C. MOU
D. ISA
15. A regional healthcare provider needs to address ever-escalating costs. They propose
to host some of the information systems with a CSP. The healthcare provider needs
assurances that any sensitive data will be protected by the service provider, and that
agreed-upon steps are in place if data breaches or any adverse action were to occur.
What document would address these requirements?
A. Due care
B. Export controls
C. Legal holds
D. E-discovery
17. A company has several internal business units. The business units are semi-
autonomous but need to support each other for the business to be efficient.
To ensure the business units can work together, it is important to document
responsibilities for each business unit. This document will not be written by lawyers
and is intended to formalize previous verbal agreements. What documentation
would best suit this requirement?
19. A public transportation provider has recently completed a BIA and has determined
that the Continuity of Operations Plan (COOP) will require an alternative site
to be available in the event of a major incident at the main operational site. The
planning team has identified a requirement for a site, housing equipment, and
facilities ready for the business to use. Personnel and data will need to be moved to
the site to become operational. What have they identified?
A. Cold site
B. Warm site
C. Hot site
D. Mobile site
20. A CISO for a cellular telephony provider is working with a Cloud Service Provider
(CSP) to define expected day-to-day computing needs. The company wants to
be able to choose a plan where spikes in demand result in additional compute
resources being automatically provisioned. What technology would best meet
this requirement?
A. Autoscaling
B. Caching
C. Bootstrapping
D. Clustering
22. What form of testing uses stakeholder involvement to assess the effectiveness of
the plan? Scenarios can be discussed and actions that need to be performed can
be evaluated. This exercise ensures the Disaster Recovery Team (DRT) or Cyber
Security Incident Response Team (CSIRT) do not need to perform exhaustive
testing until the plans are fine-tuned.
A. Checklist
B. Walk-through
C. Tabletop exercises
D. Full interruption test
A. Autoscaling
B. Distributed allocation
C. Bootstrapping
D. Replication
24. A news delivery platform provider needs to deliver content in the form of web
pages, media, and images to worldwide consumers. The requirement is for
geographically dispersed servers using caching to ensure that data is available on the
edge of the network, where users or customers will benefit from lower latency. What
technology would best suit this requirement?
A. Autoscaling
B. Distributed network
C. Content delivery network
D. Replicated network
26. A utility company is following industry guidelines to harden its server systems.
One of the first steps that the guidelines suggest is to identify all the available and
unneeded services. What tool would best suit this requirement?
27. A well-known developer's content sharing portal has been targeted by a DDoS
attack. Although it's the web application servers that are being targeted, the effect
of all the traffic flooding the network has made all the services unavailable. Security
experts are looking to implement protection methods and implement blackhole
routing for the web application servers. What has this mitigation achieved?
28. Security analysts are responding to SIEM alerts that are showing a high number of
IOC events. The analysts have a reason to suspect that there may be APT activity
in the network. Which of the following threat management frameworks should the
team implement to better understand the TTPs of the potential threat actor?
A. NIST SP 800-53
B. MITRE ATT&CK
C. The Cyber Kill Chain
D. The Diamond Model of Intrusion Analysis
A. NIDS
B. NIPS
C. WAF
D. Reverse proxy
30. A small law firm is looking to reduce its operating costs. Currently, vendors are
proposing solutions where the CSP will host and manage the company's website and
services. Due to legal and regulatory requirements, the company requires that all the
available resources in the proposal must be dedicated. Due to cost constraints, the
company does not want to fund a private cloud. Given the company requirements,
which of the following is the best solution for this company?
31. A company that uses Active Directory Services (ADS) is migrating services from
LDAP to secure LDAP (LDAPS). During the pilot phase, the server team has been
troubleshooting connectivity issues from several different client systems. Initially,
the clients would not connect as the LDAP server had been assigned a wildcard
certificate, *.classroom.local. To fix these problems, the team replaced the
wildcard certificate with a specific named certificate, win2016-dc.classroom.
local. Further problems are causing the connections to fail. The following
screenshot shows the output from a troubleshooting session:
32. A sales team relies on a CRM application to generate leads and maintain customer
engagement. The tool is considered a mission-essential function to the company.
During a business impact assessment, the risk management team indicated that data,
when restored, cannot be older than 2 hours before a system failure. What planning
objective should be used when the restoration will also require data to be restored?
33. A large defense contractor has recently received a security advisory documenting
the activities of highly skilled nation-state threat actors. The company's hunt team
believes they have identified activity consistent with the advisory. Which of the
following techniques would be best for the hunt team to use to entice the adversary
to generate malicious activity?
34. A new online retailer must ensure that all the new web servers are secured in
advance of a PCI DSS security audit. PCI DSS requirements are strict and define
acceptable cipher suites. Deprecated cipher suites should not be used as they offer
weak encryption and are vulnerable to on-path attacks. In preparation for the audit,
a security professional should disable which of the following cipher suites?
A. TLS_RSA_WITH_AES_128_CCM_8_SHA256
B. TLS_RSA_WITH_RC4_128_SHA
C. TLS_RSA_WITH_AES_128_CBC_SHA256
D. TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
A. Disabled
B. Permissive
C. Enforcing
D. Preventing
37. A company has recently undertaken a project to move several services into the
cloud. A cloud service provider now hosts the following services:
D. Containers
E. Vulnerability scanner
38. A CISO is reviewing the current security of an electricity supply company. The
company has many operational sites and must connect the sites securely to the
company headquarters, which is where the company's data center is located. The
technology that's supported within these sites includes industrial control systems
and PLCs. The technology is legacy and uses the Modbus protocol across the
networks. A VPN solution is being proposed to securely connect all the sites to the
company's data center.
The CISO is concerned that a recent security advisory, concerning certain
asymmetric algorithms, may impact the company's operations. Which of the
following will be most likely impacted by weak asymmetric encryption?
A. Modbus
B. VPN links
C. Industrial control systems
D. Datacenter equipment
To secure the network segment, additional rules must be enabled on the network
firewall. Which rules should be added to meet this security requirement?
Choose two.
A. SRC Any DST 10.10.0.1 PORT 53 PROT TCP ACTION Deny
B. SRC Any DST 10.10.0.4 PORT 23 PROT TCP ACTION Deny
C. SRC Any DST 10.10.0.4 PORT 22 PROT TCP ACTION Deny
D. SRC Any DST 10.10.0.50 PORT 80 PROT TCP ACTION Deny
E. SRC Any DST 10.10.0.1 PORT 88 PROT TCP ACTION Deny
F. SRC Any DST 10.10.0.50 PORT 443 PROT TCP ACTION Deny
40. A systems administrator has deployed all updated patches for Windows-based
machines. However, the users on the network are experiencing exploits from
various threat actors, which the patches should have corrected. Which of the
following is the most likely scenario here?
41. A penetration tester is trying to gain access to a remote system. The tester can see
the secure login page and knows one user account and email address but has not
discovered a password yet. Which of the following would be the easiest method of
obtaining a password for the known account?
A. Man-in-the-middle
B. Reverse engineering
C. Social engineering
D. Hash cracking
43. A manufacturing company is deploying IoT locks, sensors, and cameras, which
operate wirelessly. The devices will be used to allow physical access by locking and
unlocking doors and other access points. Recent CVEs have been listed against the
devices, for which the vendor has yet to provide firmware updates. Which of the
following would best mitigate this risk?
A. memdump
B. foremost
C. dd
D. nc
A. grep
B. ExifTool
C. Tcpdump
D. Wireshark
46. A critical service on a production system keeps crashing at random times. The
systems administrator suspects that the code has not been adequately tested and
may contain a bug. When the service crashes, a memory dump is created in the
/var/log directory. Which of the following tools can the systems administrator
use to reproduce these symptoms?
A. DAST
B. Vulnerability scanner
C. Core dump analyzer
D. Hex dump
47. Ontario Outdoors Inc is expecting major disruptions due to a winter weather
warning. The CISO has been reviewing company policies to ensure adequate
provisions are in place to deal with these environmental impacts and finds that
some are missing or incomplete. The CISO must ensure that a document is
immediately drafted to move various personnel and equipment to other locations to
avoid downtime in operations. What is this an example of?
48. Acme corporation operates a nuclear power station and relies on a legacy ICS to
perform equipment monitoring functions. Regulatory compliance requires that this
monitoring is mandatory. Penalties for non-compliance could be costly. The ICS has
known vulnerabilities but cannot be updated or replaced. The company has been
refused cyber-liability insurance. Which of the following would be the best option to
manage this risk in the company's production environment?
49. Following a security incident, forensics has handed over a database server to the
server admin team to begin the recovery phase. The team is looking to deploy an
automated build by running a script. When accessing the Bash shell, they observe
the following command as the most recent entry in the server's shell history:
dd if=/dev/sda of=/dev/sdb
Which of the following most likely occurred?
A. Forensics have used binary analysis tools to search the metadata.
B. The drive was cloned for forensic analysis.
C. The hard drive was formatted after the incident.
D. There is evidence that the forensics team may have missed.
50. A software engineer is looking to implement secure code while the code is still in
the development environment. The goal is to deploy code that meets stability and
security assurance goals. Which of the following code analyzers will produce the
desired results?
A. SAST
B. DAST
C. Fuzzer
D. Peer code review
Answers
1. B. Vendor lock-in. This makes it difficult to switch providers as the technology is
often proprietary. See Chapter 13, Applying Appropriate Risk Strategies.
2. D. Source code escrow. External developers represent third-party risk. This can be
mitigated by storing the code with an escrow service. This protects the IP of the
developers but also protects the customer. See Chapter 13, Applying Appropriate
Risk Strategies.
3. A. Data sovereignty. The type of data that's stored by a government department
would typically have strict regulatory controls. A global CSP may store the data
offshore. See Chapter 13, Applying Appropriate Risk Strategies.
4. B and E. This type of data would be labeled as PII and GDPR regulatory controls
would be important as the patients and employees may be EU citizens. See Chapter
14, Compliance Frameworks, Legal Considerations, and Their Organizational Impact.
5. A. Crypto erase. The customer will not have physical access to the data, so they will
not be able to ensure other methods of destruction can be implemented. Crypto
Erase will render the data unrecoverable. See Chapter 13, Applying Appropriate
Risk Strategies.
6. C. International Organization for Standardization (ISO). This will ensure that
the products will be suitable across international boundaries. See Chapter 14,
Compliance Frameworks, Legal Considerations, and Their Organizational Impact.
7. B. Capability Maturity Model Integration (CMMI). This accreditation is required
to tender for US government software contracts. See Chapter 14, Compliance
Frameworks, Legal Considerations, and Their Organizational Impact.
8. A. Payment Card Industry Data Security Standard (PCI DSS). Storage and
processing of customer card details will be subject to PCI DSS compliance. See
Chapter 14, Compliance Frameworks, Legal Considerations, and Their Organizational
Impact.
9. A. General Data Protection Regulation (GDPR). As this is not government
or payment card data, then the focus will be on customers based in the EU.
See Chapter 14, Compliance Frameworks, Legal Considerations, and Their
Organizational Impact.
10. B. Export controls. This is important when you're exporting technology. See
Chapter 14, Compliance Frameworks, Legal Considerations, and Their Organizational
Impact.
11. C. Non-disclosure agreement (NDA). This is legally enforceable and protects
intellectual property. See Chapter 14, Compliance Frameworks, Legal Considerations,
and Their Organizational Impact.
12. B. Master service agreement (MSA). This is useful when it is necessary to set
baseline terms for future services. See Chapter 14, Compliance Frameworks, Legal
Considerations, and Their Organizational Impact.
13. A. Service-level agreement (SLA). This will allow the customer and the
service provider to agree upon delivered services and the metrics that will be
used to measure performance. See Chapter 14, Compliance Frameworks, Legal
Considerations, and Their Organizational Impact.
14. D. Interconnection security agreement (ISA). This is important for documenting
the details when a connection is made between two or more parties. See Chapter 14,
Compliance Frameworks, Legal Considerations, and Their Organizational Impact.
15. E. Privacy-level agreement (PLA). This is very important when you're looking to
assure customers who must adhere to strict regulatory compliance. See Chapter 14,
Compliance Frameworks, Legal Considerations, and Their Organizational Impact.
16. C. Legal holds. This ensures that the data will be retained for any legal process.
See Chapter 14, Compliance Frameworks, Legal Considerations, and Their
Organizational Impact.
17. C. Memorandum of understanding (MOU). This is not a legal document but
it can be very useful when there needs to be co-operation between two or more
parties. See Chapter 14, Compliance Frameworks, Legal Considerations, and Their
Organizational Impact.
18. B. Recovery time objective. A recovery time objective is a planning objective that is
set by stakeholders within the business. It may be cost-driven and requires careful
consideration. See Chapter 15, Business Continuity and Disaster Recovery Concepts.
19. B. Warm site. A warm site will not be as costly as a hot site but will not be
operational until data is restored and staff are available to operate the site. See
Chapter 15, Business Continuity and Disaster Recovery Concepts.
20. C. Autoscaling. This allows the company to access additional computing power using
automation. See Chapter 15, Business Continuity and Disaster Recovery Concepts.
21. B. CASB. The company data must be protected in the cloud. Not all users will
originate from a company network, so NGFW and SWG will not work. DLP does
not address all the requirements. See Chapter 15, Business Continuity and Disaster
Recovery Concepts.
22. C. Tabletop exercises. Stakeholders will discuss how they will act when dealing
with a presented scenario. See Chapter 15, Business Continuity and Disaster
Recovery Concepts.
31. D and G. The clients may not trust the issuing CA, classroom.classroom.local, by default, and LDAPS does not support wildcard certificates.
The first issue that the server team solved was that LDAPS does not support wildcard certificates. The second problem is most likely that the certificate authority (CA) is not trusted. If this is an internal CA, then the root CA certificate will need to be installed in the trusted enterprise store of all client computers. See Chapter 12, Implementing Appropriate PKI Solutions, Cryptographic Protocols, and Algorithms for Business Needs.
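A quick way to see which certificate the LDAPS endpoint presents, and therefore which CA must be trusted (a sketch using the server name from the question; 636 is the standard LDAPS port):
# Display the certificate chain offered by the LDAPS service
openssl s_client -connect win2016-dc.classroom.local:636 -showcerts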
32. A. Recovery point objective. When data must be available to service a mission-
critical service, then the recovery point objective metric must be used. See Chapter
15, Business Continuity and Disaster Recovery Concepts.
33. D. Deploy decoy files on host systems on the same network segment. If an APT has access to the network, then a decoy file will be a good test to observe any malicious activity. See Chapter 7, Risk Mitigation Controls.
34. B. TLS_RSA_WITH_RC4_128_SHA. RC4 is a weak cipher and should not be used in regulated industries. See Chapter 11, Implementing Cryptographic Protocols and Algorithms.
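Before the audit, the weak suite can be checked from the command line (a sketch; the hostname is hypothetical and the ssl-enum-ciphers script ships with standard nmap installs):
# Enumerate the TLS cipher suites offered by the web server
nmap --script ssl-enum-ciphers -p 443 shop.example.com
# Attempt a handshake restricted to RC4; a handshake failure indicates the suite is disabled
openssl s_client -connect shop.example.com:443 -cipher RC4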
35. C. Enforcing. To run an SELinux policy and make Mandatory Access Control (MAC) effective, the system must be running in enforcing mode. See Chapter 9, Enterprise Mobility and Endpoint Security Controls.
36. C. Watermarking. If an organization wants to detect theft or exfiltration of sensitive data, documents can still be checked out from an information system, but an automatic watermark is applied to each document using the identity of the user who checked it out. See Chapter 3, Enterprise Data Security, Including Secure Cloud and Virtualization Solutions.
37. B. CASB. A CASB is often referred to as a gatekeeper that protects enterprise data from inbound threats into the cloud and outbound threats such as data exfiltration. Another benefit of a CASB is ensuring regulatory compliance by labeling and monitoring the use of the data. See Chapter 15, Business Continuity and Disaster Recovery Concepts.
38. B. VPN links. A VPN allows traffic to be secured when it's passing through
untrusted networks. If the external traffic uses weak encryption, then it could be
accessed by an adversary. See Chapter 1, Designing a Secure Network Architecture.
39. B and D.
SRC Any DST 10.10.0.4 PORT 23 PROT TCP ACTION Deny
SRC Any DST 10.10.0.50 PORT 80 PROT TCP ACTION Deny
Port 23 supports the Telnet protocol; this allows unsecured traffic to be sent when you're configuring equipment across a network. Port 80 allows for unsecured web traffic. Port 53 is for DNS traffic; this does not transmit passwords or sensitive data. Port 88 is Kerberos, which encrypts the transmission of user login traffic. Port 22 is SSH, which encrypts the traffic that's used to access a console session on another host system. See Chapter 1, Designing a Secure Network Architecture, for more information on firewall rules.
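Expressed as Linux iptables rules (purely illustrative; the firewall in the question may use a different syntax), the two required denies could look like this:
# Block Telnet (TCP 23) to 10.10.0.4
iptables -A FORWARD -p tcp -d 10.10.0.4 --dport 23 -j DROP
# Block unencrypted HTTP (TCP 80) to 10.10.0.50
iptables -A FORWARD -p tcp -d 10.10.0.50 --dport 80 -j DROP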
40. B. The users did not reboot the computer after the patches were deployed. Certain
patches may not be effective until the operating system is rebooted. See Chapter 9,
Enterprise Mobility and Endpoint Security Controls.
41. C. Social engineering. When an attacker already knows a valid user account and email address, the easiest way to obtain that account's password is social engineering. See Chapter 6, Vulnerability Assessment and Penetration Testing Methods and Tools.
42. C. Configure SELinux and set it to enforcing mode. SELinux enforces mandatory
access control, allowing for strict enforceable policies to be deployed. This would
further restrict a compromised account from accessing other resources on the
system. See Chapter 9, Enterprise Mobility and Endpoint Security Controls.
43. C. Add all the IoT devices to an isolated wireless network and use WPA2 and
EAP-TLS. As all the devices connect wirelessly, they must be connected to a
wireless segment. It is important to separate the network as there are vulnerable
systems. See Chapter 10, Security Considerations Impacting Specific Sectors and
Operational Technologies.
44. B. Foremost. This is a forensics tool that can search for complete or partial files that
have been deleted or hidden in some way. See Chapter 8, Implementing Incident
Response and Forensics Procedures.
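As a brief sketch (the image filename is hypothetical), foremost carves recoverable files out of a disk image:
# Carve deleted or hidden files from a raw disk image into the recovered/ directory
foremost -i evidence.dd -o recovered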
45. B. ExifTool. This tool extracts and displays the metadata that's embedded within image files, such as EXIF data in photographs.
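For example (the filename is hypothetical), all embedded metadata tags can be dumped with:
# Display the metadata embedded in an image file
exiftool photo.jpg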
Hopefully, you enjoyed testing yourself with a typical mix of CASP questions. For more
exam resources and extra content please visit https://www.casp.training.
Index
Symbols
3D printing 424
802.1X
  about 30, 166
  authentication 30
  security 340
A
acceptable use policy (AUP) 469
access control
  about 157
  attribute-based access control 161
  Discretionary Access Control (DAC) 158
  Mandatory Access Control (MAC) 157, 158
  role-based access control 158, 159
  rule-based access control 160
access control lists (ACLs) 10, 85, 107, 204, 205
access control vestibules 34
access logs 200
accreditation 497
Active Directory Federation Services (ADFS) 154
Active Directory services 164
active scans
  versus passive scans 220
ActiveX 260
actors, types
  about 183
  advanced persistent threat (APT) 184
  competitor 184
  hacktivist 185
  insider threat 184
  organized crime cybercriminals 185
  script kiddies 185
Address Resolution Protocol (ARP) 14
address space layout randomization (ASLR) 359
ad hoc networks 9
Adobe Flash 260
Adups 354
advanced access control 149
Advanced Encryption Standard (AES) 340, 393, 406, 452
advanced network design
  about 24
  hardware and applications placement 32
  IP Security (IPsec) 26
  network authentication methods 30
X
X.509 certificates
about 394
X509.v3 436
Xcode 352
x-Frame-Options (XFO) 75
Xilinx 7-series FPGA chips 385
XML 265
XML External Entities (XXE) 73
XSS-Protection 75
XTS-AES cipher mode
reference link 406
Packt.com
Subscribe to our online digital library for full access to over 7,000 books and videos, as
well as industry leading tools to help you plan your personal development and advance
your career. For more information, please visit our website.
Why subscribe?
• Spend less time learning and more time coding with practical eBooks and Videos
from over 4,000 industry professionals
• Improve your learning with Skill Plans built especially for you
• Get a free eBook or video every month
• Fully searchable for easy access to vital information
• Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and
ePub files available? You can upgrade to the eBook version at packt.com and as a print
book customer, you are entitled to a discount on the eBook copy. Get in touch with us at
customercare@packtpub.com for more details.
At www.packt.com, you can also read a collection of free technical articles, sign up for
a range of free newsletters, and receive exclusive discounts and offers on Packt books and
eBooks.